Software 3.0, English as code, and the new grammar of programming with LLMs.
Originally published on Lei Hua's Substack.
Anchors:
2025-02-05 · Deep Dive into LLMs like ChatGPT · https://www.youtube.com/watch?v=7xTGNNLPyMI
2025-02-28 · How I use LLMs · https://www.youtube.com/watch?v=EWvNQjAaOHw
2025-06-17 · Software Is Changing (Again) · YC AI Startup School · https://www.youtube.com/watch?v=LCEmiRjPEtQ
Epigraph
"Software 1.0: humans write explicit code.
Software 2.0: humans create datasets, objectives, and neural networks; the program is learned into weights.
Software 3.0: humans program LLMs through prompts, context, tools, examples, memory, and instructions."
— Andrej Karpathy, Sequoia Ascent 2026 summary (recapitulating the YC 2025 talk)
I. Teaching LLMs the Second Time
On February 5, 2025, Karpathy posted a 3-hour-31-minute video to his own YouTube channel titled Deep Dive into LLMs like ChatGPT. It was an upgraded version of his November 2023 Intro to Large Language Models. Same author, same "general audience" framing, same arc from pretraining to RLHF, but fourteen months had passed, the world had changed, and so had he.
What deserves the most attention is not the new material (reasoning models, o1/o3, synthetic data) but the new tone he brings to the old material when he revisits it. The 2023 version says "99% of compute is in pretraining." The 2025 version returns, almost every half hour, to a single judgment: "the model is not a knowledge source; it is lossy compression."
It was a subtle but real shift in his teaching voice: from "help the public understand what an LLM is" to "help the public build a skeptical mental model of an LLM." He was no longer just explaining how the thing worked. He was installing in his audience an anti-hype immune system.
Three weeks later, on February 28, he posted a lighter video, How I use LLMs — 2 hours 11 minutes, sitting at his desk, comparing ChatGPT, Claude, Gemini, and Grok across different tasks, sharing when he uses a thinking model (o3) versus 4o, how he uses memory, how he uses code interpreter. The video matters, but not for its content. It matters because for the first time he spoke as a user rather than as a researcher or founder. The tone was relaxed, domestic, almost casual.
And in that casual register he gently said, for the first time, a sentence that would change his life ten months later, words to the effect of: "I, as a human, am increasingly the bottleneck in this AI workflow." He didn't make a big deal of it at the time; he probably didn't realize he had said something heavy. But in retrospect, this is the earliest signal of the December 2025 personal inflection point, when his own coding flipped from "I write 80%" to "agents write 80%."
II. Naming Software 3.0
Four months later, on June 17, 2025, Y Combinator's AI Startup School opened in San Francisco. Karpathy gave a 39-minute keynote that would be quoted across 2025's technology discourse — Software Is Changing (Again).
He formally introduced the Software 3.0 framework. Three layers of evolution: 1.0 is human-written code; 2.0 is neural-network weights learned from datasets and objectives; 3.0 is programming the LLM via natural-language prompts. "We are programming in English." The line would end up on countless slides.
But the framework didn't appear overnight. It was the matured form of an eight-year inquiry into what computation actually is:
- His 2017 Software 2.0 blog post first said "the program is learned into the weights."
- His 2023 Intro to LLMs first proposed the LLM-OS metaphor.
- His 2025 Software 3.0 closed the loop.
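The contrast between the paradigms is easiest to see on a single toy task. Below is a minimal sketch, not from the talk itself: the same sentiment classifier written the 1.0 way (an explicit rule) and the 3.0 way (an English prompt handed to a model). The `llm` parameter and `fake_llm` stub are hypothetical stand-ins for a real completion API, so the sketch runs offline.

```python
def sentiment_v1(text: str) -> str:
    """Software 1.0: a human writes the explicit rule."""
    negative = {"bad", "awful", "terrible"}
    return "neg" if any(w in negative for w in text.lower().split()) else "pos"

# Software 2.0 would replace the rule above with weights learned
# from a labeled dataset (a trained classifier); omitted here.

# Software 3.0: the "program" is this English prompt.
SENTIMENT_PROMPT = (
    "Classify the sentiment of the following text as 'pos' or 'neg'. "
    "Reply with exactly one word.\n\nText: {text}"
)

def sentiment_v3(text: str, llm) -> str:
    """llm is any prompt-to-completion function; in practice a real
    model API call, here an injected stub."""
    return llm(SENTIMENT_PROMPT.format(text=text)).strip()

def fake_llm(prompt: str) -> str:
    """Offline stand-in for a model call, so the sketch is runnable."""
    return "neg" if "awful" in prompt else "pos"

print(sentiment_v1("this movie was awful"))            # neg
print(sentiment_v3("this movie was awful", fake_llm))  # neg
```

The point of the sketch is where the logic lives: in 1.0 it is in the function body; in 3.0 it is in the prompt string, and the code around it is just plumbing.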
Each step is the same engineer's instinct for system layering. Each step translates a new phenomenon into a metaphor the previous generation of programmers can understand. This is his most stable intellectual contribution as a public thinker — not inventing any new algorithm, but giving a new phenomenon a name that lets the previous generation keep working.
III. The Voice That Holds Both Confidence and Caution
The most easily overlooked thing about the YC talk is its emotional register. It is the most confident he has ever sounded on a public stage — almost no hesitation between sentences, the slide transitions feel choreographed. LLM as fab, as utility, as early OS — three dense analogies delivered in a single breath.
But at the apex of his confidence, he also delivered the most important anti-hype warning of the year. Near the end, he deliberately paused to say: treat 2025 as the decade of agents, not the year of agents.
It is a line easy to miss. But it is the seed of the entire third act of this biography. When he showed up on Dwarkesh's podcast four months later and said "AGI is still a decade away," he was not changing his position. He was repeating the June line that nobody wanted to hear seriously — only this time, in a sentence loud enough that everyone would.
IV. The Unease Already Lodged Inside the Confidence
If you read only the Karpathy of the first half of 2025, he looks almost like a perfect, confident, synthesizing public thinker. The Software 3.0 framework is the most-cited intellectual contribution of his career. Eureka Labs is making slow but steady progress on its LLM-101-N course. His own channel keeps releasing well-received videos like Deep Dive and How I use LLMs.
But three undertows had already begun running through the first half of 2025:
- The repeated "the model is lossy compression" line in Deep Dive into LLMs — a preemptive vaccine against hype about frontier model capability.
- The "I am the bottleneck in this workflow" remark in How I use LLMs — his earliest self-awareness that his own working style was about to change.
- The "decade of agents, not the year of agents" line at the end of the YC keynote — his reminder, to an industry he was clearly speaking to, of its own hype cycle.
In the spring and summer of 2025, all three lines look unimportant — they read like footnotes from a careful engineer. But by October's Dwarkesh interview, all three undertows would surface at the same time, joined into a single conversation that shook the industry.
V. One Line for This Chapter
Chapter four's Karpathy is a thinker confidently closing his own loop — Software 3.0 is the closing ceremony of his intellectual narrative, but he knows, in his own heart, that he has already left several exits inside the seal. He is not contradicting himself. He is leaving doors open for his next recalibration.
Sources
- Deep Dive into LLMs like ChatGPT (2025-02-05) — https://www.youtube.com/watch?v=7xTGNNLPyMI ; Karpathy's announcement at https://x.com/karpathy/status/1887211193099825254
- How I use LLMs (2025-02-28) — https://www.youtube.com/watch?v=EWvNQjAaOHw
- Software Is Changing (Again), YC AI Startup School (2025-06-17) — https://www.youtube.com/watch?v=LCEmiRjPEtQ ; YC summary at https://www.ycombinator.com/library/MW-andrej-karpathy-software-is-changing-again
- Software 3.0 framing recapitulated by Karpathy himself in Sequoia Ascent 2026 summary — https://karpathy.bearblog.dev/sequoia-ascent-2026/
- 2017 Software 2.0 original blog post (referenced) — https://karpathy.medium.com/software-2-0-a64152b37c35