1. Introduction
- Deduction, Abduction & Induction
- Inductive Learning Hypothesis
- Probability Formulae
- Types of Machine Learning
- Model Selection vs Parameter Optimization
2. Supervised Learning - Regression
- Regression
- Squared Error Function
- Empirical Error (Parameter Optimization)
- Regularization
- Perceptron in Mathematical Notation
3. Probabilistic Modelling
- Aleatoric vs Epistemic Uncertainty
- Using Likelihood to Model Probability
- Bayesian Inference
- MAP Estimation for a Gaussian-Distributed Data Model Using Regularization
- MAP = Regularized Least Squares
- Predictive vs Bayesian Predictive Distribution
4. Incremental Bayesian Learning
5. Error Minimization
- Generalized Learning Rule
- Finding Hyperparameters
- Cross Validation
- Bias-Variance decomposition
- Bias vs Variance
- Double Descent
6. Models
- General Model Classes
- Linear Learning Models
- Finding Optimum Parameters for Linear Learning Model
- Gradient Descent
- Radial Basis Function
- Weighted Linear Regression
- Unified Model
- Relating Models to the Unified Model
7. Classification
- Linearly Separable Datasets
- Hyperplanes
- One-of-K Encoding Scheme
- Bayesian Approach to 2 Class Classification
- Decomposition of the Bayesian Approach to the Generalized Linear Model
- Outcomes of Gaussian Modeling
- Direct Maximum Likelihood Approach
- Direct Posterior Modeling
- Three Main Approaches to Classification
- Approach 1: Discriminant Functions (Direct Mapping)
- Example: Fisher’s Linear Discriminant (FLD)
- Approach 2: Bayesian Approach
- Approach 3: Direct Posterior Modeling
- Probabilistic Discriminative Models
- Example: Logistic Regression (a Generalized Linear Model)