"Intelligence is computation"
The brain's physical structure is irrelevant - what matters is the logical operations it performs.
"Intelligence emerges from structure"
Mimic the brain's architecture: interconnected neurons that learn through experience and adjustment.
Published "A Logical Calculus of Ideas Immanent in Nervous Activity" - the first mathematical model of artificial neurons showing how networks of simple units could compute logical functions.
In 1936, Turing develops the concept of the Universal Machine and computability - focusing on abstract computation rather than brain structure.
In 1949, Donald Hebb publishes "The Organization of Behavior," proposing that connections between neurons strengthen when those neurons are repeatedly active together - the principle now summarized as "neurons that fire together wire together," and the foundation of neural network learning.
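A toy sketch of the Hebbian idea: a connection strengthens in proportion to how often its two endpoints are active together. The network size, learning rate, and data below are invented for illustration, and this is the simplest textbook form of the rule, not Hebb's own formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                 # learning rate (arbitrary choice)
w = np.zeros(3)           # weights from 3 input neurons to 1 output neuron

for _ in range(100):
    x = rng.integers(0, 2, size=3)   # binary pre-synaptic activity
    y = x[0] & x[1]                  # output fires when inputs 0 and 1 co-fire
    w += eta * x * y                 # Hebb: co-activity strengthens the weight

# Weights 0 and 1 grow about twice as fast as weight 2,
# which only co-fires with the output by chance.
print(w)
```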
In 1950, Turing publishes "Computing Machinery and Intelligence," introducing the imitation game - but explicitly dismisses the importance of mimicking the nervous system's electrical structure, arguing it is "not of theoretical importance."
In 1951, Marvin Minsky (with Dean Edmonds) builds SNARC (Stochastic Neural Analog Reinforcement Calculator) - the first artificial neural network machine, with 40 neurons that could learn through reinforcement.
In 1957, Frank Rosenblatt creates the Perceptron - the first machine that could learn to recognize patterns through a brain-inspired architecture - and demonstrates it publicly on image recognition in 1958.
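The perceptron's learning rule itself is compact. Below is a sketch of it on a toy linearly separable problem (logical OR); the dataset and learning rate are illustrative choices, and the original Mark I hardware learned from camera images, not Python lists.

```python
def predict(w, b, x):
    """Threshold unit: fire if the weighted sum plus bias is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(10):                        # a few passes suffice here
    for x, target in data:
        error = target - predict(w, b, x)  # -1, 0, or +1
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

assert all(predict(w, b, x) == t for x, t in data)
```

Nothing in the loop encodes OR explicitly; the rule is pure error correction, and the behavior emerges from the adjusted weights - exactly the property that made the Perceptron feel so different from programmed logic.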
Turing dies in 1954, never seeing the explosion of neural network research that would follow in the late 1950s and prove the viability of brain-inspired architectures.
Bernard Widrow develops ADALINE (1960) and Paul Werbos conceives backpropagation (1974) - neural networks begin to show promise in pattern recognition and adaptive learning tasks.
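For contrast with the perceptron, here is a sketch of the Widrow-Hoff (LMS) update behind ADALINE: the error is measured on the raw linear output before thresholding, which makes each step gradient descent on squared error. The data and learning rate are again invented for illustration.

```python
# Targets are +/-1; here the target is simply the first input component.
data = [([1, 1], 1), ([1, -1], 1), ([-1, 1], -1), ([-1, -1], -1)]
w, b, lr = [0.0, 0.0], 0.0, 0.05

for _ in range(50):
    for x, target in data:
        y = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear output
        error = target - y                            # before thresholding
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

# Apply a sign threshold only at prediction time.
assert all((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1) == t
           for x, t in data)
```

Measuring error on the continuous output, rather than the thresholded decision, is the conceptual step that points toward backpropagation, which extends the same gradient idea through multiple layers.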
| Dimension | Turing's Symbolic AI | Neural Network Approach |
|---|---|---|
| How Intelligence Works | Intelligence is computation - manipulating symbols according to rules | Intelligence emerges from patterns of activation in interconnected units |
| The Brain's Role | The brain's structure is irrelevant - only its logical functions matter | The brain's architecture is the key - copy its structure to copy its intelligence |
| How Machines Learn | Through programming - humans write explicit instructions and rules | Through experience - adjust connection weights based on examples (see the sketch after this table) |
| Knowledge Representation | Explicit symbols - clear, interpretable rules and logic | Distributed patterns - knowledge spread across weighted connections |
| Best Suited For | Logic, reasoning, chess, theorem proving, formal systems | Pattern recognition, vision, speech, motor control, statistical learning |
| Development Method | Top-down - analyze the problem, write clever algorithms | Bottom-up - build simple units, let intelligence emerge from interactions |
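The "How Machines Learn" and "Knowledge Representation" rows can be seen side by side in a few lines of code: the same toy classification task ("is this rectangle taller than it is wide?") solved once with an explicit hand-written rule and once with weights fit from examples. The task, data, and learning rate are invented for illustration.

```python
# Symbolic approach: a human encodes the rule directly.
def is_tall_symbolic(width, height):
    return height > width                      # explicit, interpretable

# Neural-style approach: the rule ends up implicit in learned weights.
examples = [((2, 5), 1), ((4, 1), 0), ((1, 3), 1), ((5, 2), 0)]
w, lr = [0.0, 0.0], 0.1
for _ in range(20):                            # perceptron-style updates
    for (width, height), target in examples:
        pred = 1 if w[0] * width + w[1] * height > 0 else 0
        err = target - pred
        w[0] += lr * err * width
        w[1] += lr * err * height

def is_tall_learned(width, height):
    return w[0] * width + w[1] * height > 0    # two opaque numbers

# Both agree on the data, but only one is a readable rule.
assert all(is_tall_learned(x, y) == is_tall_symbolic(x, y)
           for (x, y), _ in examples)
```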
While Turing was arguing that the brain's electrical nature was "not of theoretical importance," researchers like Rosenblatt were building machines that learned to see by mimicking that very structure.
Turing's Prediction (1950): "In about fifty years' time" (by 2000), computers would play the imitation game well enough that an average interrogator would have no better than a 70 per cent chance of identifying the machine after five minutes of questioning - achieved, he assumed, through clever programming.
What Actually Happened: By 2000, symbolic AI had largely stalled. The breakthrough came from neural networks - precisely the approach Turing dismissed. The modern systems that come closest to passing his test, and that dominate image recognition, language modeling, and game playing, all use neural architectures.
Turing was so focused on abstract computation that he overlooked the possibility that how we compute might be as important as what we compute.
Today's most powerful AI systems combine both approaches: neural networks for perception and pattern recognition, symbolic systems for reasoning and planning. Turing was right that universal computation matters - but his contemporaries were also right that brain-inspired architectures unlock capabilities that pure programming struggles to achieve.
The path Turing didn't take became the highway to modern AI.