Fundamentals & History

Everything before deep neural networks. The mathematical preliminaries, the classical algorithms (OLS, SVM, decision trees, k-means, PCA), and the historical arc that explains why deep learning eventually won.

The order of articles in the sidebar is a topological sort of prerequisites — read top to bottom and you should never hit a concept that hasn't already been defined.

Reading path

  1. Mathematical Preliminaries — linear algebra → probability → calculus → convex optimization → information theory.
  2. Learning Theory & Methodology — what does it mean to "learn" from data? ERM, bias–variance, generalization, regularization.
  3. Classical Regression — OLS, ridge/lasso, logistic regression, GLMs.
  4. Classical Classification — kNN, naive Bayes, LDA/QDA, the perceptron, SVMs and the kernel trick.
  5. Trees & Ensembles — decision trees, random forests, AdaBoost, gradient boosting.
  6. Unsupervised Learning — k-means, GMMs/EM, PCA/SVD, manifold learning.
  7. Probabilistic & Sequence Models — Bayesian networks, Markov chains, HMMs, CRFs.
  8. A Brief History of ML — symbolic AI, the perceptron controversy, the kernel era, the deep learning renaissance.

Source courses

Most of the information theory sub-section is drawn from NoteNextra · CSE5313 — Coding & Information Theory for Data Science. The remaining articles in this section are original stubs that will be filled in over time.

TIP

Many articles here begin as stubs. Run `npm run import` to pull the latest matching content from NoteNextra; each stub page lists the precise upstream files it expects.
