1. Introduction
- Deduction, Abduction & Induction
- Inductive Learning Hypothesis
- Probability Formulae
- Types of Machine Learning
- Model Selection vs Parameter Optimization
2. Supervised Learning - Regression
- Regression
- Squared Error Function
- Empirical Error (Parameter Optimization)
- Regularization
- Perceptron in Mathematical Notation
3. Probabilistic Modelling
- Aleatoric vs Epistemic Uncertainty
- Using Likelihood to Model Probability
- Bayesian Inference
- MAP Estimation for a Gaussian-Distributed Data Model Using Regularization
- MAP = Regularized Least Squares
- Predictive vs Bayesian Predictive Distribution
4. Incremental Bayesian Learning
5. Error Minimization
- Generalized Learning Rule
- Finding Hyperparameters
- Cross-Validation
- Bias-Variance Decomposition
- Bias vs Variance
- Double Descent
6. Models
- General Model Classes
- Linear Learning Models
- Finding Optimum Parameters for Linear Learning Model
- Gradient Descent
- Radial Basis Function
- Weighted Linear Regression
- Unified Model
- Relating Models to the Unified Model
7. Classification
- Linearly Separable Datasets
- Hyperplanes
- One-of-K Encoding Scheme
- Bayesian Approach to Two-Class Classification
- Decomposition of the Bayesian Approach to the Generalized Linear Model
- Outcomes of Gaussian Modelling
- Direct Maximum Likelihood Approach
- Logistic Regression (Probabilistic Discriminative Model)
- Discriminant Functions (Fisher's LDF as an Example)
8. Concept Learning
- Validity of the Fundamental Equivalence between Induction and Deduction
- ID3 Algorithm
- Naive Bayes Classifier
- Information Gain in ID3
- Goal of Concept Learning
9. Unsupervised Learning
10. Prototypes, K-means, GMM and EM
- K-means Algorithm
- Relationship between the K-means and Expectation-Maximization Algorithms
- Gaussian Mixture Model (GMM)
- Gaussian Mixture Regression (GMR)