
insane

Posted on • Originally published at ishistory.pages.dev

The Dartmouth Conference, 1956: The Summer AI Was Born

Before the Name

Let’s rewind to a time before AI was even called “AI.” Imagine you’re a researcher in the early 1950s: computers are new, expensive, and massive. The intellectual tools are there—Turing’s famous “Computing Machinery and Intelligence” had already posed the question “Can machines think?” Cybernetics is stirring up conversations about control and communication in animals and machines. Information theory is giving everyone a new way to talk about data.

But here’s the catch: all this activity is happening in isolated pockets.

  • Mathematicians are fiddling with game-playing algorithms.
  • Psychologists are speculating about learning.
  • Engineers are building machines with blinking lights and vacuum tubes.

If you’re interested in “machine intelligence,” you might read Turing, but not Wiener. You might know about neural networks, but not about symbolic logic. There’s no central hub for these conversations—no mailing list, no conference, no department, and crucially, no name for the field.

Without a name, you can’t put it on a grant application. You can’t recruit students. You can’t even explain to your colleagues what you do, because each time you have to tell the story from scratch.

What happened at Dartmouth wasn’t just about ideas—it was about giving those ideas a name and a home. That act of naming would unify scattered threads and make artificial intelligence a real, collective pursuit.

John McCarthy: The Man with the Idea

If you’ve ever used LISP, you’ve already felt John McCarthy’s influence. But before he invented LISP—or even gave AI its name—he was a young mathematician on a mission to build thinking machines.

McCarthy’s background was a patchwork of math, activism, and intellectual curiosity. He zipped through Caltech, earned a PhD at Princeton, and landed at Dartmouth in his late twenties. He wasn’t content with theory alone; he wanted to build things, to see if computers could actually do the stuff humans do—learn, reason, and solve problems.

Here’s the problem: there was no unified field for this work. People weren’t talking to each other. If you wanted to build a machine that could play chess or understand language, you had to hunt down the right papers and hope you could piece together a community from scratch.

McCarthy’s solution was classic developer hustle: bring everyone together, make it official, and kickstart something new. He reached out to three collaborators:

  • Marvin Minsky: Mathematician, neuroscientist, and soon-to-be MIT legend.
  • Nathaniel Rochester: Chief architect of IBM’s first scientific computer.
  • Claude Shannon: The “father of information theory,” already a big deal at Bell Labs.

With these names, McCarthy had both credibility and vision. He wrote up a proposal for a summer-long research project, hoping to gather the right mix of minds and spark something bigger than any one person’s work.

The Proposal and Its Famous Sentence

Let’s pause on the proposal itself. This is the document that put “artificial intelligence” on the map. It wasn’t just an outline for a conference—it was a manifesto.

The proposal laid out a clear plan: gather ten researchers for two months at Dartmouth, and let them wrestle with big questions:

  • How can computers use language?
  • What’s the theory behind computation?
  • Can machines learn and self-improve?
  • How do neural networks work?
  • Where do randomness and creativity fit in?

But the real bombshell was a single sentence:

“We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

That’s a bold claim—even today. It says, basically: if we can describe how intelligence works, a machine can do it too. Not just a little bit of intelligence, not just game-playing or math, but every aspect.

This was radical. Some researchers believed intelligence was more than algorithms—maybe tied to consciousness, emotions, or embodied experience. But McCarthy was betting on formalization: make it precise, make it programmable, and let’s see if we can build it.

Choosing “artificial intelligence” as the name was just as deliberate. It wasn’t automata studies, it wasn’t cybernetics—it was AI. Clear, direct, and ambitious. Once there was a name, everything else could start to fall into place: departments, funding, research agendas.

The Rockefeller Foundation backed the project with $7,500—a modest sum, but enough to bring the thinkers together and plant the seed for a new field.

The Attendees: A Room That Made History

The original invitation list reads like a who’s-who of mid-century science. These weren’t just theorists—they were builders, experimenters, and future pioneers.

Let’s spotlight a few:

  • John McCarthy: Already developing the ideas on symbolic computation that would crystallize into LISP in 1958, a programming language that would shape AI for decades. LISP’s focus on symbolic processing made it the go-to language for AI research, and if you’re a dev today, you can trace plenty of modern languages right back to its influence.

  • Marvin Minsky: Fresh off building SNARC, one of the first neural network machines. Minsky had a knack for combining hardware and theory, and would soon found the MIT AI Lab. His work—and his sometimes controversial critiques—helped set the tone for what AI could achieve.

  • Nathaniel Rochester: Pushing the envelope on neural network simulations with IBM’s computing muscle. Rochester’s efforts to model learning in machines were early, ambitious, and a little rocky—but they paved the way for later breakthroughs.

  • Claude Shannon: Information theory royalty. Shannon’s presence helped bridge the gap between computation, logic, and the messy realities of human communication.
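LISP’s core idea—treating programs and data as nested symbolic lists (s-expressions)—is easy to sketch in any modern language. Here is a minimal, illustrative evaluator for arithmetic s-expressions in Python; this is a toy for intuition, not McCarthy’s original design, and the `evaluate` function and `OPS` table are inventions for this sketch:

```python
# Toy s-expression evaluator: LISP-style symbolic processing in Python.
# An expression is either an atom (a number) or a nested tuple of the
# form (operator, arg1, arg2, ...), e.g. ("+", 1, ("*", 2, 3)).
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(expr):
    """Recursively evaluate a nested (op, arg, arg, ...) expression."""
    if isinstance(expr, (int, float)):
        return expr  # atoms evaluate to themselves
    op, *args = expr
    values = [evaluate(a) for a in args]  # evaluate sub-expressions first
    result = values[0]
    for v in values[1:]:
        result = OPS[op](result, v)  # fold the operator over the arguments
    return result

print(evaluate(("+", 1, ("*", 2, 3))))  # → 7
```

The key property—code as a data structure you can walk, transform, and evaluate—is exactly what made LISP so well suited to the symbolic-reasoning style of AI the Dartmouth group envisioned.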

Others came and went over the summer. Some were psychologists, exploring learning and perception. Some were mathematicians, focused on logic and proofs. The mix was eclectic, and the conversations ranged from the practical (how do we code this?) to the philosophical (what is intelligence, anyway?).

They didn’t solve the big questions that summer. There were debates, disagreements, and plenty of loose ends. But the conference was less about breakthroughs and more about setting a direction. The people in that room would go on to define the field’s agenda for years to come.

What Actually Happened That Summer

If you’re picturing a two-month hackathon with working demos and finished products, reset your expectations. The Dartmouth Conference was more like a workshop: lots of discussion, some brainstorming, and a few attempts at collaboration.

  • Some attendees stayed for the whole summer, others dropped in for a week or two.
  • Projects were proposed, some code was written, but no grand unified theory emerged.
  • There was friction between the symbolic AI camp (reasoning and logic) and the neural network camp (learning and perception). This tension would echo through the decades, leading to cycles of boom and bust (the infamous “AI winters”).

The real outcome? The field had a name. People started calling themselves AI researchers. Universities began to recognize AI as a legitimate discipline. Funding agencies knew what to look for.

And as a dev, you can appreciate this: by giving the field a name and a community, McCarthy and his colleagues made it possible for future generations to build on their work. They unlocked a chain reaction of innovation—from early expert systems to deep learning frameworks, from LISP to Python and TensorFlow.

Why It Matters to Developers Today

So, what does this history lesson mean for you—especially if you’re a developer working with modern AI tools?

  • Names matter: The right name can unite scattered efforts, attract funding, and build communities. Think about how "DevOps," "machine learning," or "blockchain" have shaped conversations and careers.
  • Interdisciplinary teams drive innovation: The Dartmouth attendees came from math, psychology, engineering, and more. Today’s AI breakthroughs are still fueled by cross-pollination—combining algorithms, neuroscience, linguistics, and ethics.
  • Bold claims push the field forward: McCarthy’s “conjecture” was audacious and controversial, but it gave the community something to aim for. Sometimes, staking out a vision is as important as solving the immediate problem.

If you’re working on AI today—whether it’s deploying an image classifier, contributing to open source, or just experimenting with APIs—you’re part of a legacy that started with a handful of researchers in a quiet New Hampshire valley.

Conclusion

The Dartmouth Conference didn’t produce a grand unified theory or a world-changing demo. What it did was more foundational: it gave the emerging science of machine intelligence a name, a focus, and a community.

The field is still wrestling with big questions—What counts as intelligence? Can machines ever truly learn? How do we ensure AI serves humanity?—but thanks to the vision (and the audacity) of McCarthy and his collaborators, those questions have a home.

Next time you spin up an AI model or refactor some LISP-inspired syntax, take a moment to appreciate the summer of 1956. It was the season when a scattered set of ideas became a field—and when the journey toward artificial intelligence truly began.
