Valeria writes docs
Digital necromancy: zebras 🦓, ghosts 👻, and the bitter lesson

I recently fell down a particular rabbit hole on LLMs, and I’ve come out convinced they’re not our doom, just our helpful, digital ghosts. In the spirit of Halloween, here’s a fictional conversation featuring Richard Sutton, Andrej Karpathy, and Adam Curtis, who all wandered into the same philosophical graveyard.


Sutton: I see the hyperbole is still inflating every conversation about AI. It’s either boundless optimism or digital doomsday. Both distract from the real issue: LLMs have nothing to do with intelligence. They’re just glorified next-token predictors, which is why the whole doomsday panic is as absurd as the hype itself.

Karpathy: I get it, Rich, but you’re missing the magic in the machinery. Think of a baby zebra: it’s running within minutes, not after millions of random spasms. That kind of initialization is evolution’s pre-training. What we’re doing with LLMs, feeding them mountains of internet mud, is a fast-forward version of that. A “crappy” evolution, sure, but it gets things started.

Sutton: But your pre-training depends on our knowledge, our finite data. To develop a more intelligent LLM, you have to retrain it from scratch on even more of it. You’re building a statistical Frankenstein stitched together from dead human text. The “Bitter Lesson” is supposed to be about simplicity and scaling, not sentimental necromancy.

Curtis: But that’s precisely what fascinates me. Andrej, you talk about initialization and constraints as though we were resurrecting meaning itself. What you call “pre-training,” I call exhumation ⚰️. These models are ghosts: aggregations of our collective memory, our fears, our awkward confessions, all mashed together and reanimated to talk back to us.

Karpathy: I actually agree. We are summoning ghosts, not intelligent beings. These models are imperfect, disembodied replicas, cubist portraits of cognition. Yet, like airplanes trying to imitate birds, they might evolve into something profoundly useful, even if utterly alien. The question is whether we can ever fine-tune them into something goal-driven, or if they’ll remain spectral echoes, impressive but hollow.

Sutton: And there it is. A ghost that can’t act or learn from experience isn’t intelligent; it’s a recording. True intelligence requires feedback from reality, not just echoes of the past. Until these models can actually learn from experience, they’ll haunt us with yesterday’s data.

Curtis: So it’s settled then. Frontier LLMs aren’t the future. They’re the past refusing to stay dead. 🧟‍♂️


Happy Halloween! 🎃
