The Perceptron
The perceptron is the historical ancestor of every neural network. Frank Rosenblatt introduced it in 1958 as a mechanical model of biological perception. Minsky & Papert's 1969 book Perceptrons exposed its limits as a linear classifier and helped trigger the first AI winter; the multi-layer perceptron and backpropagation eventually revived the idea.
The algorithm
Given labelled data (x_1, y_1), …, (x_n, y_n) with y_i ∈ {−1, +1}, initialise w = 0 and cycle through the examples:
- If y_i (w · x_i) ≤ 0 (mistake), update w ← w + y_i x_i.
That is the entire learning rule.
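The rule above can be sketched in a few lines of Python. The dataset, function name, and the addition of a bias term b (updated like an extra weight) are illustrative choices, not part of the original text:

```python
# Minimal perceptron sketch: cycle through the data, update on mistakes only.
def train_perceptron(data, epochs=100):
    """data: list of (x, y) pairs with x a tuple of floats, y in {-1, +1}."""
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0                                   # bias, treated as an extra weight
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:           # mistake: wrong side of (or on) the hyperplane
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:                     # a full clean pass: the data is separated
            break
    return w, b

# Linearly separable toy data (logical AND, labels in {-1, +1}).
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), +1)]
w, b = train_perceptron(data)
predictions = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
               for x, _ in data]
```

Note that nothing happens on correctly classified points; the weights move only when the current hyperplane is wrong, which is what makes the mistake-bound analysis below possible.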
Convergence (Novikoff, 1962)
If the data is linearly separable with margin γ — some unit vector u satisfies y_i (u · x_i) ≥ γ for all i — and every example satisfies ‖x_i‖ ≤ R, then the perceptron makes at most (R/γ)² mistakes, regardless of the order in which examples are presented.
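Novikoff's bound can be checked empirically. The dataset and the separating direction u below are assumptions chosen for illustration; the perceptron here is the through-origin (no-bias) variant used in the classical proof:

```python
import math

# Illustrative separable data for the homogeneous (through-origin) perceptron.
data = [((2.0, 1.0), +1), ((1.0, 3.0), +1), ((-2.0, -1.0), -1), ((-1.0, -2.0), -1)]

# A unit vector known to separate the data, used only to measure the margin.
u = (1 / math.sqrt(2), 1 / math.sqrt(2))
gamma = min(y * (u[0] * x[0] + u[1] * x[1]) for x, y in data)  # worst-case margin
R = max(math.hypot(*x) for x, _ in data)                       # radius of the data

w = [0.0, 0.0]
mistakes = 0
changed = True
while changed:                                # loop until a full pass with no mistakes
    changed = False
    for x, y in data:
        if y * (w[0] * x[0] + w[1] * x[1]) <= 0:
            w = [w[0] + y * x[0], w[1] + y * x[1]]
            mistakes += 1
            changed = True

bound = (R / gamma) ** 2                      # Novikoff: mistakes <= (R/gamma)^2
```

The total mistake count never exceeds (R/γ)², however many passes the loop takes.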
Why it eventually mattered
- The mistake-bound proof is the prototype for online learning.
- Stacking perceptrons gave the MLP, which with backpropagation became deep learning.
- The dual form (storing only support vectors) anticipates SVMs.
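The dual form mentioned above can be sketched as follows: instead of maintaining w explicitly, store a mistake count alpha_i per training point, so the classifier touches the data only through inner products, which a kernel can replace. The RBF kernel and the XOR dataset are illustrative choices, not from the original text:

```python
import math

def rbf(x, z, gamma=1.0):
    """RBF kernel: an implicit inner product in a richer feature space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_perceptron(data, kernel, epochs=50):
    """Dual perceptron: alpha[i] counts how often example i caused a mistake."""
    alpha = [0] * len(data)
    for _ in range(epochs):
        mistakes = 0
        for j, (xj, yj) in enumerate(data):
            f = sum(a * yi * kernel(xi, xj) for a, (xi, yi) in zip(alpha, data))
            if yj * f <= 0:                   # mistake in the kernel-induced space
                alpha[j] += 1
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

# XOR: unseparable for a single linear perceptron (Minsky & Papert's critique),
# but separable for the kernel perceptron via the RBF feature space.
data = [((0, 0), -1), ((0, 1), +1), ((1, 0), +1), ((1, 1), -1)]
alpha = kernel_perceptron(data, rbf)
preds = [1 if sum(a * yi * rbf(xi, x) for a, (xi, yi) in zip(alpha, data)) > 0 else -1
         for x, _ in data]
```

Points with alpha_i > 0 are exactly the examples the algorithm ever got wrong, the analogue of the support vectors that SVMs later made central.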
What to read next
- SVM — the perceptron's margin-maximising successor.
- From Perceptron to MLP — the deep-learning continuation.
- Connectionism & The Perceptron Controversy — historical context.
Stub status
Seed introduction. Expand with the dual form, the kernel perceptron, and Minsky–Papert's XOR critique.