DEV Community

insane

Posted on • Originally published at ishistory.pages.dev

The Philosophers Who Asked "Can Machines Think?"

Paris, 1642. A nineteen-year-old boy watches his father, a tax commissioner for the French Crown, spend his evenings hunched over endless columns of numbers. The work is the 17th-century equivalent of manual data entry: adding, checking, re-adding, and checking again. It is relentless, mind-numbing, and prone to the tiny human errors that cascade into financial disasters.

The boy is Blaise Pascal. He happens to be a mathematical prodigy who independently rediscovered Euclid's geometric propositions as a child. Looking at his father’s exhaustion, he realizes that the human mind is being wasted on a task that is essentially algorithmic. He thinks: there must be a better way.

Over the next three years, Pascal builds fifty prototypes. He wrestles with gear ratios, mechanical linkages, and the "carrying" mechanism—the logic that transfers a unit to the next column when a digit rolls from nine to zero. The result is the Pascaline, one of the first functional mechanical calculators and the first to be produced in quantity. You set the dials with a stylus and read the answer in the display windows.
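The carrying logic Pascal mechanized can be sketched in a few lines. This is a hypothetical illustration of the idea (digits as dials, carries rippling up the columns), not a model of the actual sautoir mechanism:

```python
def pascaline_add(dials, amount, base=10):
    """Add `amount` to a list of digit dials (least significant first),
    propagating carries column by column, as the Pascaline did."""
    carry = amount
    for i in range(len(dials)):
        total = dials[i] + carry
        dials[i] = total % base   # the digit this dial now shows
        carry = total // base     # the unit handed to the next column
        if carry == 0:
            break
    return dials

# 199 + 1: both low dials roll from 9 to 0 and carry into the next column
print(pascaline_add([9, 9, 1], 1))  # [0, 0, 2], i.e. 200
```

The point of the machine is exactly this loop: once the carry rule is encoded in gears, "adding" no longer requires a mind.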

It worked. But almost immediately, a haunting question emerged: if a machine can calculate, what else can it do? If a gear can perform the "mental" task of addition, is the mind itself just a complex set of gears?

Why Philosophy Came First

We often talk about the history of AI as a timeline of hardware: vacuum tubes, transistors, GPUs, and TPUs. We track it through breakthroughs like the Perceptron, Deep Blue, or GPT-4. But the story of AI didn't begin with silicon; it began with a "spec" written by philosophers.

Before we could build a thinking machine, we had to define what "thinking" actually was. Before we could program intelligence, we had to decide if intelligence required a soul, a biological brain, or simply the right set of logical rules.

The groundwork for everything we do in VS Code today was laid between the 17th and 20th centuries by people who had no idea they were "developers." They were mathematicians, theologians, and natural philosophers. They were trying to debug the human experience. Their debates aren't just historical curiosities; they are the exact same arguments we have on Twitter and Hacker News today regarding LLMs and "stochastic parrots."

René Descartes: The Original Hardware-Software Split

René Descartes is often the villain in modern AI stories because he gave us Dualism, but for a developer, his perspective is actually quite intuitive.

In his Meditations on First Philosophy (1641), Descartes famously stripped away everything he thought he knew until he reached a single, unhackable truth: Cogito, ergo sum (I think, therefore I am). He couldn't doubt his existence because the act of doubting was, itself, a form of thinking.

But this led him to a radical conclusion. He saw two fundamentally different types of "substances" in the universe:

  • Res extensa (Extended thing): This is matter. It has dimensions. It is the hardware. Descartes saw the human body as a sophisticated biological machine—a system of pumps, valves, and levers.
  • Res cogitans (Thinking thing): This is the mind. It is non-physical and does not occupy space. It is the "software," but in Descartes’ view, it was software that couldn't run on physical hardware.

Descartes’ legacy for AI is a "No-Go" theorem. He argued that while you could build an automaton that mimicked human speech or movement, it would never truly understand. It would be a simulation without a subject.

When people argue today that "LLMs don't actually know anything, they just predict the next token," they are being good Cartesians. They are saying that the "thinking" (res cogitans) is missing, no matter how well the "machine" (res extensa) performs. Descartes even tried to find the "API bridge" between the two, suggesting the pineal gland in the brain was where the soul talked to the body. It was a failed pull request, but it defined the "Hard Problem of Consciousness" that still plagues us.

Leibniz: The Dream of the Universal Language

If Descartes was the skeptic who drew a line in the sand, Gottfried Wilhelm Leibniz was the optimist who tried to build a bridge across it.

Leibniz, who co-invented calculus, was perhaps the first person to envision a computer in the modern sense. He built the Stepped Reckoner, a machine that went beyond Pascal's by also handling multiplication and division. But his true contribution was a concept called the characteristica universalis (a universal symbolic language).

Leibniz’s big idea was this: what if all human thought could be reduced to a set of symbols? And what if we had a calculus ratiocinator—a set of logical rules to manipulate those symbols?

He imagined a world where, if two philosophers disagreed, they wouldn't need to argue. They would simply say, "Let us calculate!" They would sit down at a machine, input their premises in the universal language, crank the handle, and out would come the logically perfect truth.

For a developer, Leibniz is the patron saint of Symbolic AI. He believed that:

  1. Knowledge can be represented as data.
  2. Reasoning can be represented as an algorithm.
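Those two commitments can be sketched as a toy "calculus ratiocinator": facts as data, inference as an algorithm. Everything here is illustrative (the fact names and the single modus-ponens rule format are invented for the example), but it is the basic loop that symbolic AI systems later built on:

```python
def calculate(facts, rules):
    """Apply if->then rules (modus ponens) until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

premises = {"socrates_is_a_man"}
rules = [("socrates_is_a_man", "socrates_is_mortal")]
print(calculate(premises, rules))  # derives socrates_is_mortal mechanically
```

Two philosophers who agree on the premises and the rules get the same output. That is Leibniz's "Let us calculate!" in its smallest possible form.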
