Lei Hua

The Man Who Summoned Ghosts | Chapter 5: Summoning Ghosts


Ghosts, animals, agents, and the vocabulary Karpathy gave to AI behavior.

Originally published on Lei Hua's Substack.

Anchors:
2025-10-01 · Animals vs Ghosts (blog post) · https://karpathy.bearblog.dev/animals-vs-ghosts/
2025-10-17 · Dwarkesh Podcast · AGI is still a decade away · https://www.dwarkesh.com/p/andrej-karpathy
2025-11-29 · The space of minds (blog post) · https://karpathy.bearblog.dev/the-space-of-minds/
2025-12-19 · 2025 LLM Year in Review · https://karpathy.bearblog.dev/year-in-review-2025/


Epigraph

"Today's frontier LLM research is not about building animals. It is about summoning ghosts. ... It's possible that ghosts:animals :: planes:birds."
— Andrej Karpathy, Animals vs Ghosts · 2025-10


I. The Eve

October 1, 2025. Sixteen days before he would appear on Dwarkesh's podcast. On this day he posted an essay on his bearblog titled Animals vs Ghosts.

The essay was a response to another Dwarkesh episode — the interview with Richard Sutton (yes, the "Bitter Lesson" Sutton). On that episode, Sutton had pointed out that the current LLM paradigm isn't truly "bitter-lesson-pilled": it builds on a finite, biased corpus of human-generated data. Karpathy's blog post agrees and disagrees at once. He concedes that Sutton's point has weight, then says: "We are not building animals. We are summoning ghosts."

Ghosts: not intelligence grown out of biological evolution; not intelligence shaped by survival drive, curiosity, play. Ghosts are intelligence statistically distilled out of human texts. They are not close cousins of animals. They are, perhaps, a different species. He even offers an analogy that would echo for the rest of the year — *ghosts are to animals as planes are to birds.*

This was a blog post, not an interview. It was his own language, his own rhythm, his own judgment. Sixteen days later he would carry the same language onto a podcast with 2 million subscribers. But the calm, almost metaphysical quality of the blog post would be amplified, on the podcast, into a sharper engineer's register.


II. The Two-Hour-Twenty-Five-Minute Conversation

October 17, 2025. Dwarkesh Patel released the interview. The title: AGI is still a decade away.

The conversation runs nine sections: AGI timelines, LLM cognitive deficits, why RL is terrible, how humans learn, how AGI will blend into 2% GDP growth, ASI, the evolution of intelligence and culture, why self-driving took so long, and the future of education.

But what shook the industry was not any one section, but a handful of sentences.

The first was about code: "I feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it's not. It's slop."

The second was about timelines, already the episode's title: "I have 15 years of prediction experience and intuition and I average things out and it feels like a decade to me."

The third was about reinforcement learning: "RL is terrible. It's just that everything else we tried is worse."

The fourth was about the current state of agents, using a concept he had carried over from his Tesla years — "march of nines." Self-driving took ten years and is still climbing the "nines" of reliability; agents will take ten years too.
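To make the metaphor concrete: each added "nine" of reliability (90%, 99%, 99.9%, ...) cuts the permissible failure rate by a factor of ten, and in Karpathy's telling each nine costs roughly another round of the same engineering grind. A minimal sketch of that arithmetic, my illustration rather than anything from the episode:

```python
# The "march of nines" as failure-rate arithmetic: every additional
# nine of reliability is a 10x reduction in how often the system is
# allowed to fail -- which is why the climb takes years, not months.

for nines in range(1, 7):
    reliability = 1 - 10 ** (-nines)           # 0.9, 0.99, 0.999, ...
    failures_per_million = 10 ** (6 - nines)   # tasks that still fail
    print(f"{nines} nine(s): {reliability:.6f} reliable, "
          f"{failures_per_million:>6} failures per million tasks")
```

Six nines means one failure per million tasks; self-driving, a decade in, is still somewhere mid-climb. That is the point of the analogy.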

Together, these sentences became a media narrative quoted everywhere — "OpenAI co-founder pops the AI bubble." Fortune wrote it up. John Coogan quipped on X that "the AI bubble has popped, time to invest in food, water, shelter, and guns." The narrative caught the sentences but missed the tone they were spoken in.


III. His Own Clarification

Four days later, on October 21, 2025, Karpathy posted a long thread on X correcting the media reading. Its most important line:

"Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics."

That post matters. It tells us where he wants to stand — the cool middle. He is neither in the heat of the SF house party nor in the anti-intellectualism of the AI deniers. What he wants to be is a sober internal critic with technical credentials.

But he could not fully control how his words were read. "Slop." "AGI is still a decade away." Those sentences traveled vastly farther than his X clarification. In the last two months of 2025, he was effectively positioned, in the public mind, as "the insider who pricked the AI bubble" — a role he himself did not fully endorse.

This is the cost of being a public thinker. When you speak sharply enough and honestly enough, the world will use your sentences for what it needs them for — not necessarily for what you intended.


IV. Faithful to His Own Facts

But beware the too-dramatic reading: "Karpathy changed. From an optimist to a pessimist." That narrative is simple, convenient, and wrong.

If you re-read everything he has said from 2022 to 2025, his core beliefs have hardly changed at all:

  • minimalism, readability, demystifying the training stack (nanoGPT → nanochat → microGPT).
  • an allergy to hype (in 2023 he was already warning "low-stakes + human in the loop"; in 2025 he is just saying the same sentence in a sharper voice).
  • the dignity of education (he started Zero to Hero in 2022; in 2025 he is still saying "pre-AGI education is useful, post-AGI education is fun").
  • a preference for open ecosystems (the "coral reef" line at Sequoia 2024; the demystification projects of 2025).

It is not he who changed. It is the facts that changed. In 2024 he had already, gently, suggested that "knowledge is not intelligence" via the cognitive-core conjecture. By the fall of 2025, he had verified that conjecture firsthand while writing nanochat: frontier models "remembered wrong" on unfamiliar code, kept swapping his hand-rolled DDP implementation for the stock library version, and refused to be corrected. It is an engineer's reasoning, after hands-on verification like that, that forces out the word "slop."

He did not become a pessimist. He became someone more loyal to the truth than to his own earlier judgments. This is the greatest courage of a public thinker — and the greatest cost.


V. A Metaphor That Crosses Life and Death

There's a passage in the interview, far less famous than "slop," that may be the deepest moment of the whole episode. Dwarkesh asked how humans learn; Karpathy gave an unexpected answer:

"I think there's possibly no fundamental solution to this. I also think humans collapse over time. ... This is why children, they haven't overfit yet. ... We end up revisiting the same thoughts. We end up saying more and more of the same stuff, and the learning rates go down, and the collapse continues to get worse, and then everything deteriorates."

He took a machine-learning concept (mode collapse) and applied it in reverse, to human aging. This kind of two-way analogy is a hallmark of his thinking: he doesn't only use the brain to understand neural networks; he uses neural networks to understand the brain. In that moment, he wasn't talking about LLMs. He was talking about himself, a man closing in on forty, voicing his fear of his own mind aging.

The emotional apex of the episode isn't "slop." It is this passage.
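An aside for readers who want the borrowed term unpacked: "collapse" here means a distribution narrowing until it keeps producing the same few outputs. A toy sketch of one such mechanism, my illustration rather than anything from the episode: refit a distribution on its own samples, with no fresh data coming in, and watch the entropy, the diversity of "thoughts," drain away.

```python
import math
import random

# Toy illustration of distributional collapse: a categorical
# distribution repeatedly refit on its own samples. With no outside
# data, random drift concentrates probability mass on fewer and fewer
# outcomes and entropy falls -- "revisiting the same thoughts,"
# in miniature.

random.seed(0)
K, N = 20, 30          # vocabulary size, samples drawn per generation
probs = [1.0 / K] * K  # start maximally diverse

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

for gen in range(51):
    if gen % 10 == 0:
        print(f"generation {gen:2d}: entropy = {entropy(probs):.2f} bits")
    draws = random.choices(range(K), weights=probs, k=N)
    probs = [draws.count(i) / N for i in range(K)]  # refit on own output
```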


VI. One Line for This Chapter

In chapter five, he confessed, for the first time in front of everyone, to the distance between the facts and his earlier judgments. He neither apologized nor dramatized. He simply used the most restrained language an engineer can use — "slop," "march of nines," "summoning ghosts" — to tell the world: we are on the road, but not at the end; don't lie to ourselves.

And this act — a person publicly recalibrating themselves in front of the world — deserves to be remembered far more than any "AGI is still a decade away" prediction.

