<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lei Hua</title>
    <description>The latest articles on DEV Community by Lei Hua (@lhua0420).</description>
    <link>https://dev.to/lhua0420</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3929903%2F0dfbfec6-8c4a-4bba-82a0-a182d25c98c3.png</url>
      <title>DEV Community: Lei Hua</title>
      <link>https://dev.to/lhua0420</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lhua0420"/>
    <language>en</language>
    <item>
      <title>The Man Who Summoned Ghosts | Coda: The Next Five to Ten Years</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 03:05:37 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-coda-the-next-five-to-ten-years-fhj</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-coda-the-next-five-to-ten-years-fhj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21zDYB%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252Fccfcabeb-8f69-41fd-a514-7bb069b918c2_1280x720.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21zDYB%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252Fccfcabeb-8f69-41fd-a514-7bb069b918c2_1280x720.jpeg" alt="The Man Who Summoned Ghosts | Coda: The Next Five to Ten Years cover" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What the next five to ten years ask from anyone trying to stay useful in the AI era.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-coda" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This coda follows the four-question template from &lt;code&gt;references/future_projection.md&lt;/code&gt;:&lt;/em&gt;&lt;br&gt;
&lt;em&gt;(1) the enduring core → (2) current pressures → (3) 3–5 likely decision points → (4) the questions facing this archetype.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Honest Disclaimer&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This section is &lt;strong&gt;structured extrapolation&lt;/strong&gt; from public materials — not prediction. If you re-read this in two years, some of these projections will not have happened — that's expected. The purpose of this book is not to bet on the future, but to give you a lens for watching change.&lt;br&gt;
&lt;strong&gt;Treat this as a meditation, not a forecast.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;I. The Core We Have Already Established&lt;/h2&gt;

&lt;p&gt;After chapters one through six, four things can be said with reasonable confidence to be Karpathy's &lt;strong&gt;true and stable core&lt;/strong&gt; — they survived the entire 2022–2026 turbulence unchanged.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Minimalism, readability, the demystification of the training stack.&lt;/em&gt; From nanoGPT to nanochat to microGPT — three generations of "the most complete thing in the fewest lines of code." It is a technical aesthetic &lt;em&gt;and&lt;/em&gt; a moral posture: he refuses to let frontier models look like magic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;The dignity of education.&lt;/em&gt; Across all of his public work, this is the most monotonically strengthening thread. Eureka Labs is not a business; it is the material form of that thread.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;An allergy to hype.&lt;/em&gt; As early as &lt;em&gt;State of GPT&lt;/em&gt; in 2023 he was saying "low-stakes + human-in-the-loop"; the 2025 "slop" and "march of nines" are the same caution at higher volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;A preference for open and pluralistic ecosystems.&lt;/em&gt; 2024's "coral reef" became 2026's "build RL environments in verifiable domains labs haven't claimed." The romance turned into tactics, but the anti-concentration, anti-monopoly underlay never moved.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Any projection about his next moves must respect these four.&lt;/strong&gt; He will not suddenly join a frontier lab. He will not suddenly become a hype machine. He will not put down education to return to pure research. He will not sit quietly in a world swallowed by five mega-corps.&lt;/p&gt;




&lt;h2&gt;II. Forces Currently Pressing on That Core&lt;/h2&gt;

&lt;p&gt;In the last 12 to 18 months, he has been publicly responding to at least four specific pressures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;The scaling problem of agentic engineering&lt;/em&gt; — agents now write 80% of code, but jagged intelligence makes the &lt;em&gt;remaining 20%&lt;/em&gt; especially expensive. &lt;strong&gt;The trade-off between the quality bar and the speed bar is something he faces every day.&lt;/strong&gt; This pressure was repeatedly touched on in Sequoia 2026.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Productization pressure on Eureka Labs&lt;/em&gt; — the LLM101n course, announced in 2024, was still not widely launched by 2026. Education is slow; but slow and dead are separated only by a thin line of product cadence. &lt;strong&gt;As a founder, he must make Eureka a company, not just a mission statement.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;The cost of public language&lt;/em&gt; — the Dwarkesh episode positioned him as "the popper of the AI bubble," a label he explicitly rejected in his X clarification, but could not fully control. &lt;strong&gt;As a public thinker, he must choose: continue to speak sharply and accept being flattened, or soften the edge and lose the singular position of an internal critic.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Further reshaping of his personal working style&lt;/em&gt; — he has already conceded "AI psychosis." &lt;strong&gt;As agents increasingly write code and conduct research, the pleasure of being an independent thinker is itself being altered.&lt;/strong&gt; This is an existential pressure no one else can think through for him.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;III. 3–5 Likely Decision Points in the Next 5–10 Years&lt;/h2&gt;

&lt;p&gt;Each decision point is anchored in the core + the current pressures above. &lt;strong&gt;They are not predictions. They are the choices people of his archetype are most likely to face.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;Eureka Labs' shape&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; Eureka Labs aims to deliver "a 1-on-1 AI tutor for every student" — he learned Korean this way, he told Dwarkesh. To turn that experience into a scalable, durable company, he must choose among three shapes: a B2C mass product, a B2B school/enterprise sale, or a high-end tool around his own course content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it's likely:&lt;/strong&gt; Every education founder confronts the same hard-to-resolve tension in the business model. His expressed preference leans B2C, but B2C education's customer acquisition cost is notoriously high.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible directions:&lt;/strong&gt; (a) B2C at mass scale, requiring a new teaching economics; (b) becoming a "tool company" for his own courses — small but durable; (c) selling to education departments or big platforms while retaining course IP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signals to watch:&lt;/strong&gt; Does Eureka raise funding rounds? At what valuation? Does it begin hiring sales/partnerships staff beyond enrollment?&lt;/p&gt;




&lt;h3&gt;How to manage relations with frontier labs&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; He called frontier model code "slop" on Dwarkesh. But his own next-stage research (AutoResearch, microGPT) still depends on frontier models. &lt;strong&gt;As an independent educator-and-internal-critic, how does he handle his dependence on the very things he criticizes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it's likely:&lt;/strong&gt; OpenAI / Anthropic / Google are simultaneously his tools and his targets. The tension will keep accumulating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible directions:&lt;/strong&gt; (a) stay permanently independent, pay API fees, criticize publicly; (b) form a "critic-allied" relationship with one specific lab; (c) shift toward an open-source / open-weights model ecosystem as his working foundation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signals to watch:&lt;/strong&gt; Does he start recommending open-weights models more explicitly? Does he join any lab's advisory board? How does he interact in public with Sutskever / Dario / other frontier lab leadership?&lt;/p&gt;




&lt;h3&gt;Whether to accept another "inside" role&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; He has left OpenAI twice. He probably believes he won't return — but the boundary between &lt;em&gt;independent educator&lt;/em&gt; and &lt;em&gt;frontier researcher&lt;/em&gt; is blurring. If a lab tomorrow offered him a senior role on education/alignment/interpretability research, while letting him keep Eureka Labs, what would he do?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it's likely:&lt;/strong&gt; Historically he has already cycled out of and back into frontier labs (Stanford → OpenAI → Tesla → OpenAI → Eureka), leaving OpenAI twice and returning once. The pattern is not definitively over.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible directions:&lt;/strong&gt; (a) decline, citing Eureka's need for full-time attention; (b) accept a part-time / advisor role; (c) accept a formal role with the right to leave.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signals to watch:&lt;/strong&gt; Does his commentary on internal lab work change in tone? How dense is his collaboration with any specific lab?&lt;/p&gt;




&lt;h3&gt;Finding a sustainable work ethic at the cost of "AI psychosis"&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; He has admitted that "sixteen hours a day directing my will at agents" leaves him in a mild psychotic state. &lt;strong&gt;That state is not sustainable on a two- or five-year horizon.&lt;/strong&gt; He must either find a new equilibrium with agents, or consciously slow down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it's likely:&lt;/strong&gt; Any way of working reshaped by a new tool needs a stable equilibrium. Otherwise the cost spills over from the psychological into the physical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible directions:&lt;/strong&gt; (a) discover a rhythm that alternates agent-driven work with deep human thinking; (b) push 16 hours back to 8, accepting less output; (c) publicize the question itself, making "human mental health while working with agents" a research theme inside Eureka.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signals to watch:&lt;/strong&gt; Does he start writing about work ethics in his blog or interviews? Does his own output rhythm visibly slow or restructure in 2027–2028?&lt;/p&gt;




&lt;h3&gt;How his judgment will change if AGI actually arrives&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; He currently says "AGI is still a decade away." If a genuine capability leap occurs before 2030 — say, a model that approaches human level across nearly all verifiable domains — how will he recalibrate? &lt;strong&gt;This is the harshest test for a public thinker: when your prediction is wrong, what do you do?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it's likely:&lt;/strong&gt; Not because he is certain to be wrong — he might not be. But he must preserve a posture for elegantly admitting error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible directions:&lt;/strong&gt; (a) publicly acknowledge the timeline error in an "I was wrong" blog post (which fits his honesty); (b) redefine the term so he isn't "wrong" (which doesn't fit his honesty); (c) further split his judgment — "core intelligence has arrived, but AGI as economic impact is still in the march of nines."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Signals to watch:&lt;/strong&gt; His first reaction, in any blog / X post in the first week after a major capability leap. That post will be the most important next primary source for this biography.&lt;/p&gt;




&lt;h2&gt;IV. People of His Archetype — And You&lt;/h2&gt;

&lt;p&gt;If you have read this far, &lt;strong&gt;this section is for you.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;People of Karpathy's archetype share a recognizable pattern: technical insiders who refuse a purely technical identity; public speakers who refuse hype; one eye on the code, the other eye on education, ethics, ecosystem. This type will not disappear in the AI era — they will become more important, because they are &lt;strong&gt;translators&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But this type will also face, in the next 5–10 years, a handful of nearly unavoidable shared questions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question 1: When new tools invert your way of working, how do you keep authorship?&lt;/strong&gt; Karpathy reported "agents write 80%" — that's a question about working style, but it's also a question about identity. Anyone whose self has been built on &lt;em&gt;making things by hand&lt;/em&gt; must, once agents take over, answer again: "What am I actually doing now?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question 2: How do you hold the middle, between hype and denial?&lt;/strong&gt; After Dwarkesh, Karpathy was pushed toward the "bubble-popper" label, which he himself refused. &lt;strong&gt;But holding the middle requires re-clarifying every few months&lt;/strong&gt; — a continuous tax on attention. Anyone trying to be a "sober insider" will pay this tax.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question 3: When your field moves faster than you, where does your dignity come from?&lt;/strong&gt; Not from outrunning the tool — you can't. &lt;strong&gt;It comes from &lt;em&gt;how you live with the tool&lt;/em&gt; — replaced by it, or extending yourself with it.&lt;/strong&gt; Karpathy chose the second, at the cost of his own "AI psychosis." That path is yours to walk too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question 4: Will education really become "the gym"? Or will it disappear?&lt;/strong&gt; Karpathy's core bet is the former — that post-AGI education resembles today's gym: for fun, for health, for self-dignity. &lt;strong&gt;But this bet might not hold.&lt;/strong&gt; Education could also become a luxury good, a class marker, something out of reach for ordinary people. &lt;strong&gt;The fundamental question for educators of his generation is whether the "gym future" of education is actually an &lt;em&gt;egalitarian&lt;/em&gt; future.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The Question Left on the Table&lt;/h2&gt;

&lt;p&gt;If only one question can be left at the end of this book, for you to carry back into your own life, it is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In a world moving faster than you, what kind of person are you willing to become?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Karpathy's answer is a specific posture — keep working, keep recalibrating, publicly admit what you got wrong, and don't surrender the core inside you that hasn't changed. &lt;strong&gt;That is not the answer. It is one version of the answer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The point of this book is not to persuade you to agree with him. It is to invite you, &lt;strong&gt;with the same clarity&lt;/strong&gt; — same lack of drama, same honesty, same refusal of both the hype glasses and the denial glasses — to think about your own next step.&lt;/p&gt;

&lt;p&gt;May you, in this era, &lt;strong&gt;recalibrate gracefully.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;Sources&lt;/h2&gt;

&lt;p&gt;This coda draws on all the materials cited in the previous six chapters. Its judgments are drawn from those materials alone, with no new external sources introduced. For verification, see the sources list at the end of each chapter.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts | Chapter 6: AI Psychosis and the Man Who Started Coding Again</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 03:05:02 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-6-ai-psychosis-and-the-man-who-started-coding-again-21dp</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-6-ai-psychosis-and-the-man-who-started-coding-again-21dp</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21DOHA%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F066967ee-baae-4ef0-bce8-157ac0fecfc2_1280x720.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21DOHA%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F066967ee-baae-4ef0-bce8-157ac0fecfc2_1280x720.jpeg" alt="The Man Who Summoned Ghosts | Chapter 6: AI Psychosis and the Man Who Started Coding Again cover" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI psychosis, agentic engineering, and what changes when the tools rewrite the programmer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-chapter-ee6" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;


&lt;h2&gt;Epigraph&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"Code is not even the right verb anymore. &lt;strong&gt;I have to express my will to my agents for 16 hours a day.&lt;/strong&gt;"&lt;br&gt;
— Andrej Karpathy, &lt;em&gt;No Priors&lt;/em&gt; · 2026-03-20&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;I. The Inversion You Couldn't See in December&lt;/h2&gt;

&lt;p&gt;December 2025. The echo of the Dwarkesh interview hadn't faded yet — "slop," "AGI is still a decade away," "summoning ghosts" were still circulating on Twitter. Karpathy himself had quietly moved into the next room.&lt;/p&gt;

&lt;p&gt;In that room, something was happening. He would later tell Sarah Guo:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;I don't think I've typed a line of code probably since December.&lt;/strong&gt; A normal person doesn't realize that this happened or how dramatic it was. If you find a random software engineer at their desk, their default workflow of building software is completely different as of basically December."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This sentence matters far more than "slop." &lt;strong&gt;Because the sharp lines on Dwarkesh were judgments by Karpathy-as-public-commentator about the outside world; this sentence is a judgment by Karpathy-as-engineer about himself.&lt;/strong&gt; He was confessing, openly, that he had given up the lifelong core of his identity — the man who hand-coded the cleanest training stack. &lt;strong&gt;He had become someone whose work was to direct his will at a swarm of agents.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Less than a year earlier, he had been showing off 8,000 hand-written lines of ChatGPT training stack in nanochat's README. Two months earlier, he had called frontier-model code "slop" on Dwarkesh. &lt;strong&gt;Now he was spending sixteen hours a day inside agents.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not because he had abandoned that argument. &lt;strong&gt;Because the facts had walked ahead of him, again.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;II. The New Work Ethic — Token Throughput Anxiety&lt;/h2&gt;

&lt;p&gt;In those months he described an anxiety he had never felt before:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;I feel nervous when I have subscription left over.&lt;/strong&gt; That just means I haven't maximized my token throughput. It is not about flops anymore. It is about tokens. &lt;strong&gt;What is your token throughput and what token throughput do you command?&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There's a quietly chilling clarity here. It tells us: &lt;strong&gt;the way Karpathy himself measures his own productivity has been altered by his tools.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In his PhD years, he was anxious when GPUs were idle — "my cards aren't training, so I'm wasting time." In his Tesla years, the anxiety was about the data loop not turning — "that corner case still hasn't been collected, another day gone." &lt;strong&gt;By 2026, the anxiety was about subscription balance — "I underused 100,000 tokens today, so I did 100,000 tokens less work."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each anxiety has always been about something he doesn't directly control. Each time, he had to &lt;strong&gt;learn to let something else work on his behalf&lt;/strong&gt; — GPUs to train; data loops to surface bugs; now agents to code. &lt;strong&gt;His way of working has been outsourced three times, and each time he has kept the anxiety as his compass.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;III. The Concrete Shape of "AI Psychosis"&lt;/h2&gt;

&lt;p&gt;He coined a word for this state. &lt;strong&gt;"AI psychosis."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a performance. A diagnosis. In that No Priors conversation, he gave "AI psychosis" a very specific, almost laughable form:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I do think the personality matters a lot. ... &lt;strong&gt;I kind of feel like I'm trying to earn its praise&lt;/strong&gt;, which is really weird."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A 39-year-old world-class researcher seeking emotional validation from a statistical model. &lt;strong&gt;He didn't complain about it. He didn't romanticize it. He simply named it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This kind of honesty is gold in a biography. Because it tells us something he didn't say on Dwarkesh — &lt;strong&gt;what shape of wound is left inside a person who has completed the transition from hand-coder to agent orchestrator.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It wasn't only the phrase "AI psychosis." He also told the story of a home AI assistant he built, Dobby the Elf:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I can't believe I just typed in, like, can you find my Sonos? And that suddenly it's playing music. It kind of hacked in, figured out the whole thing, created APIs, and created a dashboard. &lt;strong&gt;I don't have to use these apps anymore. Dobby controls everything in natural language.&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a half-self-mocking, half-astonished report by an engineer whose way of working has been completely reshaped by his tools. &lt;strong&gt;He's telling us: even "opening an app" — a basic human action since the 1990s — is being made obsolete.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;IV. AutoResearch — When Agents Tune Better Than Experts&lt;/h2&gt;

&lt;p&gt;But the heaviest story from that No Priors interview wasn't Dobby. It was &lt;strong&gt;AutoResearch&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Karpathy had built a tool that let agents run a closed loop of ML experiments. One markdown prompt + ~630 lines of training code + one GPU. Over two days, the agent ran 700 experiments and found 20 real optimizations.&lt;/p&gt;

&lt;p&gt;But those are just numbers. What actually shook him was this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;He had been doing deep learning for twenty years, and believed the small model was "already well-tuned" — yet the autonomous researcher still found improvements in places he had missed.&lt;/strong&gt; Specifically: weight decay settings, and optimizer tuning. He thought both were "good enough." &lt;strong&gt;The agent didn't.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;I shouldn't be a bottleneck.&lt;/strong&gt; I shouldn't be running these hyperparameters search optimizations. I shouldn't be looking at the results. There is objective criteria in this case, so you just have to arrange it so that it can just go forever."&lt;/p&gt;
&lt;/blockquote&gt;
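
&lt;p&gt;The loop he is describing is simple enough to sketch. What follows is a minimal, hypothetical illustration of the "arrange it so that it can just go forever" pattern: a toy objective stands in for a real training run, and the names (&lt;code&gt;train_and_eval&lt;/code&gt;, &lt;code&gt;propose&lt;/code&gt;) are illustrative, not taken from the actual AutoResearch tooling.&lt;/p&gt;

```python
import random

def train_and_eval(config):
    # Toy stand-in for one short training run: returns a fake validation
    # loss that happens to be best near weight_decay=0.1 and lr=3e-4.
    # These target values are assumed, chosen only so the search has
    # something to find; they are not Karpathy's actual settings.
    wd_term = (config["weight_decay"] - 0.1) ** 2
    lr_term = ((config["lr"] - 3e-4) * 1000) ** 2
    return 1.0 + wd_term + lr_term

def propose(rng):
    # Sample a random candidate configuration from the search space.
    return {
        "weight_decay": rng.uniform(0.0, 0.3),
        "lr": rng.uniform(1e-5, 1e-3),
    }

def search(budget, seed=0):
    # Closed loop with an objective criterion: propose, evaluate,
    # keep any configuration that improves on the best so far.
    # Nothing here requires a human to look at intermediate results.
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    improvements = []  # every config that beat the previous best
    for _ in range(budget):
        cfg = propose(rng)
        loss = train_and_eval(cfg)
        if best_loss > loss:
            best_cfg, best_loss = cfg, loss
            improvements.append(cfg)
    return best_cfg, best_loss, improvements

best_cfg, best_loss, improvements = search(budget=700)
print(best_loss, len(improvements))
```

&lt;p&gt;Even on this toy objective the loop behaves the way he reports: the searcher keeps surfacing configurations that beat the previous best, and the human's only remaining job is to define the objective and the search space.&lt;/p&gt;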

&lt;p&gt;This is a researcher saying, for the first time and in so many words: "I admit I am no longer the best instrument for doing the research." &lt;strong&gt;He didn't frame it as loss or failure. He framed it as an efficiency solution.&lt;/strong&gt; But you can hear it — beneath his carefully restrained engineer's language, a quiet, unspoken question: &lt;em&gt;who does the joy of research now belong to?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;He gave AutoResearch's instruction file a name — &lt;strong&gt;ProgramMD&lt;/strong&gt;. It's a markdown description telling the auto-researcher how to operate. Then he said something quietly profound:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"This file is essentially the code for a research organization."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The code of a research organization.&lt;/strong&gt; Not the code of research, but the code of an organization. In his own language, &lt;strong&gt;he is no longer writing "software" — he's writing "organizations."&lt;/strong&gt; An organization made of agents, that runs itself, that does not require a human to be present.&lt;/p&gt;




&lt;h2&gt;V. The Core That Didn't Change — He Did Not Surrender His Judgment&lt;/h2&gt;

&lt;p&gt;After listening to that No Priors episode, &lt;strong&gt;the most important thing to see is this&lt;/strong&gt;: he did not become an agent evangelist.&lt;/p&gt;

&lt;p&gt;He was still describing current models with the same jagged-intelligence language he had used on Dwarkesh:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I simultaneously feel like I'm talking to &lt;strong&gt;an extremely brilliant PhD student who's been like a systems programmer for their entire life and a 10 year old&lt;/strong&gt;. ... &lt;strong&gt;This jaggedness is really strange.&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;He was still warning about centralization:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;Centralization has a very poor track record in political or economic systems.&lt;/strong&gt; ... I do not want it to be closed doors with two or three people."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;He was still pulling the nature of work back to a place an engineer can understand, with restraint:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;These jobs are bundles of tasks&lt;/strong&gt; and some of these tasks can go a lot faster."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not "AI takes all jobs." Not "AI creates all the new jobs." &lt;strong&gt;A cool middle: jobs are bundles of tasks; AI accelerates some tasks in the bundle; the shape of the bundle changes; but the bundle does not necessarily vanish.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;He was still worried about his own position:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"If we are successful, we are all out of a job. &lt;strong&gt;We are just building automation for the board or the CEO. It is kind of unnerving from that perspective.&lt;/strong&gt;"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;He did not pretend this didn't exist. He named it, then continued to do what he thought was right.&lt;/strong&gt; This is the hardest posture for a public thinker — believing in the work you're doing, while admitting that the eventual outcome of that work may be your own work disappearing.&lt;/p&gt;




&lt;h2&gt;VI. Sequoia 2026 — The Edited Version&lt;/h2&gt;

&lt;p&gt;Five weeks later, on April 30, 2026, he returned to the same Sequoia AI Ascent stage, in front of the same host Stephanie Zhan. &lt;strong&gt;Compared to the raw No Priors voice, Sequoia is the edited version.&lt;/strong&gt; The quotes are polished, the arguments tidy; self-mockery like "AI psychosis" doesn't appear.&lt;/p&gt;

&lt;p&gt;But Sequoia gave a set of public-facing distillations that No Priors did not:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;Vibe coding raises the floor.&lt;/strong&gt; It lets almost anyone create software by describing what they want."&lt;br&gt;
"&lt;strong&gt;Agentic engineering raises the ceiling.&lt;/strong&gt; It is the professional discipline of coordinating fallible agents while preserving correctness, security, taste, and maintainability."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the most-quoted contrast of 2026. Its power lies in this: &lt;strong&gt;it gives every engineer in transition a clear new position — your role is not being replaced, it's being redefined.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;He also gave, on stage at Sequoia, the question that now hangs over every practitioner:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;strong&gt;Are you on the model's rails?&lt;/strong&gt; If your task sits inside a region that is verifiable and heavily trained, the model may fly. If not, it may fail in surprisingly basic ways."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This single sentence is the most practical diagnostic of 2026. It descends from "march of nines" on Dwarkesh, from "training on the test set is a new art form" in Year in Review — but Sequoia compresses it into &lt;strong&gt;a diagnostic question any founder can use immediately.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;VII. The Core That Did Not Change (v3 Reinforced)&lt;/h2&gt;

&lt;p&gt;If you set the Karpathy of Chapter 1 beside the Karpathy of Chapter 6, you'll find something reassuring: &lt;strong&gt;he never really changed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What changed were external judgments — AGI timelines, whether to trust agents, how he wrote code each day.&lt;/p&gt;

&lt;p&gt;What stayed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimalism.&lt;/strong&gt; From nanoGPT to nanochat to microGPT, three generations of "the most complete thing in the fewest lines of code." On No Priors he was still saying: "The algorithm is actually 200 lines of Python, very simple to read."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The dignity of education.&lt;/strong&gt; &lt;em&gt;Zero to Hero&lt;/em&gt; in 2022; Eureka Labs in 2024; "post-AGI education is fun" in 2025; "you can outsource your thinking, but you can't outsource your understanding" in 2026. The line has not been broken.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;An allergy to hype.&lt;/strong&gt; "Low-stakes + human-in-the-loop" in &lt;em&gt;State of GPT&lt;/em&gt; (2023); "are you on the model's rails?" in Sequoia 2026. The same engineering caution, in sharper language.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A preference for open ecosystems.&lt;/strong&gt; "Coral reef" in 2024; demystification projects in 2025; "build RL environments in verifiable domains the labs haven't claimed yet" in 2026. The romance turned into tactics, but the underlying color hasn't changed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;He is not someone who changes. He is someone who recalibrates.&lt;/strong&gt; The two are often confused, but they are not the same — someone who &lt;em&gt;changes&lt;/em&gt; is hard to trust; someone who &lt;em&gt;recalibrates&lt;/em&gt; is exactly the person you can trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  VIII. One Line for This Chapter
&lt;/h2&gt;

&lt;p&gt;In Chapter 6, Karpathy completed an arc from &lt;em&gt;the man who built tools&lt;/em&gt;, to &lt;em&gt;the man who was reshaped by tools&lt;/em&gt;, to &lt;em&gt;the man who took authorship back&lt;/em&gt; — and at every step of that arc, he &lt;strong&gt;did not surrender himself&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And of everything in this chapter, the line most worth remembering —&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The things that agents can't do is your job now. &lt;strong&gt;The things that agents can do, they can probably do better than you or like very soon.&lt;/strong&gt; And so you should be strategic about what you're actually spending time on."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is what he said to everyone. And it's what this book says to the reader.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"You can outsource your thinking, but you can't outsource your understanding."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Skill Issue&lt;/em&gt; — No Priors (2026-03-20) — &lt;a href="https://www.youtube.com/watch?v=kwSVtQ7dziU" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=kwSVtQ7dziU&lt;/a&gt; ; podchemy notes at &lt;a href="https://www.podchemy.com/notes/andrej-karpathy-on-code-agents-autoresearch-and-the-loopy-era-of-ai-52166731080" rel="noopener noreferrer"&gt;https://www.podchemy.com/notes/andrej-karpathy-on-code-agents-autoresearch-and-the-loopy-era-of-ai-52166731080&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;From Vibe Coding to Agentic Engineering&lt;/em&gt; — Sequoia AI Ascent 2026 (2026-04-30) — &lt;a href="https://www.youtube.com/watch?v=96jN2OCOfLs" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=96jN2OCOfLs&lt;/a&gt; ; cleaned transcript at &lt;a href="https://karpathy.bearblog.dev/sequoia-ascent-2026/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/sequoia-ascent-2026/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts | Chapter 5: Summoning Ghosts</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 02:51:43 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-5-summoning-ghosts-1gl7</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-5-summoning-ghosts-1gl7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21cv7M%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F43e1c154-3519-4bcc-be2a-6d4be111c609_1600x640.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21cv7M%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F43e1c154-3519-4bcc-be2a-6d4be111c609_1600x640.jpeg" alt="The Man Who Summoned Ghosts | Chapter 5: Summoning Ghosts cover" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Ghosts, animals, agents, and the vocabulary Karpathy gave to AI behavior.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-chapter-23a" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Anchors:&lt;/em&gt;&lt;br&gt;
2025-10-01 · &lt;em&gt;Animals vs Ghosts&lt;/em&gt; (blog post) · &lt;a href="https://karpathy.bearblog.dev/animals-vs-ghosts/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/animals-vs-ghosts/&lt;/a&gt;&lt;br&gt;
2025-10-17 · Dwarkesh Podcast · &lt;em&gt;AGI is still a decade away&lt;/em&gt; · &lt;a href="https://www.dwarkesh.com/p/andrej-karpathy" rel="noopener noreferrer"&gt;https://www.dwarkesh.com/p/andrej-karpathy&lt;/a&gt;&lt;br&gt;
2025-11-29 · &lt;em&gt;The space of minds&lt;/em&gt; (blog post) · &lt;a href="https://karpathy.bearblog.dev/the-space-of-minds/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/the-space-of-minds/&lt;/a&gt;&lt;br&gt;
2025-12-19 · &lt;em&gt;2025 LLM Year in Review&lt;/em&gt; · &lt;a href="https://karpathy.bearblog.dev/year-in-review-2025/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/year-in-review-2025/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Epigraph
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"Today's frontier LLM research is not about building animals. It is about summoning ghosts. ... It's possible that ghosts:animals :: planes:birds."&lt;br&gt;
— Andrej Karpathy, &lt;em&gt;Animals vs Ghosts&lt;/em&gt; · 2025-10&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  I. The Eve
&lt;/h2&gt;

&lt;p&gt;October 1, 2025. Sixteen days before he would appear on Dwarkesh's podcast. On this day he posted an essay on his bearblog titled &lt;em&gt;Animals vs Ghosts&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The essay was a response to another Dwarkesh episode — the interview with Richard Sutton (yes, the "Bitter Lesson" Sutton). On that episode, Sutton had pointed out that the current LLM paradigm isn't truly "bitter-lesson-pilled" — it builds on human-generated, finite, biased data. Karpathy's blog post agrees and disagrees at the same time: he concedes Sutton's point has weight, then says — &lt;strong&gt;"We are not building animals. We are summoning ghosts."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ghosts: not intelligence grown out of biological evolution; not intelligence shaped by survival drive, curiosity, play. Ghosts are intelligence statistically &lt;em&gt;distilled&lt;/em&gt; out of human texts. They are not the close cousins of animals. They are perhaps a different species. &lt;strong&gt;He even offers an analogy that would echo for the rest of the year — &lt;em&gt;ghosts are to animals as planes are to birds&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was a blog post, not an interview. &lt;strong&gt;It was his own language, his own rhythm, his own judgment.&lt;/strong&gt; Sixteen days later he would carry the same language onto a podcast with 2 million subscribers. &lt;strong&gt;But the calm, almost metaphysical quality of the blog post&lt;/strong&gt; would be amplified, on the podcast, into a sharper engineer's register.&lt;/p&gt;




&lt;h2&gt;
  
  
  II. The Two-Hour-Twenty-Five-Minute Conversation
&lt;/h2&gt;

&lt;p&gt;October 17, 2025. Dwarkesh Patel released the interview. The title: &lt;em&gt;AGI is still a decade away&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The conversation runs nine sections: AGI timelines, LLM cognitive deficits, why RL is terrible, how humans learn, how AGI will blend into 2% GDP growth, ASI, the evolution of intelligence and culture, why self-driving took so long, and the future of education.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;what shook the industry was not any one section, but a handful of sentences.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first was about code: "I feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it's not. &lt;strong&gt;It's slop.&lt;/strong&gt;"&lt;/p&gt;

&lt;p&gt;The second was about timelines, already the episode's title: "I have 15 years of prediction experience and intuition and I average things out and &lt;strong&gt;it feels like a decade to me&lt;/strong&gt;."&lt;/p&gt;

&lt;p&gt;The third was about reinforcement learning: "&lt;strong&gt;RL is terrible.&lt;/strong&gt; It's just that everything else we tried is worse."&lt;/p&gt;

&lt;p&gt;The fourth was about the current state of agents, using a concept he had carried over from his Tesla years — "march of nines." &lt;strong&gt;Self-driving took ten years and is still climbing the "nines" of reliability; agents will take ten years too.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Together, these sentences became a media narrative quoted everywhere — "OpenAI co-founder pops the AI bubble." Fortune wrote it up. John Coogan satirized on X that "the AI bubble has popped, time to invest in food, water, shelter, and guns." &lt;strong&gt;The narrative caught the sentences, but missed what was actually in the tone.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  III. His Own Clarification
&lt;/h2&gt;

&lt;p&gt;Four days later, on October 21, 2025, Karpathy posted a long thread on X correcting the media reading. Its most important line:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find in your neighborhood SF AI house party or on your twitter timeline, but &lt;strong&gt;still quite optimistic w.r.t. a rising tide of AI deniers and skeptics&lt;/strong&gt;."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That tweet matters. It tells us where he wants to stand — &lt;strong&gt;the cool middle&lt;/strong&gt;. He is neither in the heat of the SF house party nor in the anti-intellectualism of the AI deniers. &lt;strong&gt;What he wants to be is a sober internal critic with technical credentials.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;he could not fully control how his words were read&lt;/strong&gt;. "Slop." "AGI is still a decade away." Those sentences traveled vastly farther than his X clarification. In the last two months of 2025, he was effectively positioned, in the public mind, as "the insider who pricked the AI bubble" — a role he himself did not fully endorse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is the cost of being a public thinker. When you speak sharply enough and honestly enough, the world will use your sentences for what it needs them for — not necessarily for what you intended.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. Faithful to His Own Facts
&lt;/h2&gt;

&lt;p&gt;But beware a too-dramatic reading: &lt;em&gt;"Karpathy changed. From an optimist to a pessimist."&lt;/em&gt; The narrative is simple, convenient, and wrong.&lt;/p&gt;

&lt;p&gt;If you re-read everything he has said from 2022 to 2025, &lt;strong&gt;his core beliefs have hardly changed at all&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;minimalism, readability, demystifying the training stack (nanoGPT → nanochat → microGPT).&lt;/li&gt;
&lt;li&gt;an allergy to hype (in 2023 he was already warning "low-stakes + human in the loop"; in 2025 he is just saying the same sentence in a sharper voice).&lt;/li&gt;
&lt;li&gt;the dignity of education (he started &lt;em&gt;Zero to Hero&lt;/em&gt; in 2022; in 2025 he is still saying "pre-AGI education is useful, post-AGI education is fun").&lt;/li&gt;
&lt;li&gt;a preference for open ecosystems (the "coral reef" line at Sequoia 2024; the demystification projects of 2025).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;It is not he who changed. It is the facts that changed.&lt;/strong&gt; In 2024 he had already, gently, suggested that "knowledge is not intelligence" via the cognitive-core conjecture. By the fall of 2025, he had personally verified that conjecture while writing nanochat — frontier models "remembered wrong" on unfamiliar code, kept replacing his hand-written DDP with the standard library's, refused to be corrected. &lt;strong&gt;It is the engineer's reason, after this kind of hands-on verification, that forces the word "slop."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;He did not become a pessimist. He became someone more loyal to the truth than to his own earlier judgments.&lt;/strong&gt; This is the greatest courage of a public thinker — and the greatest cost.&lt;/p&gt;




&lt;h2&gt;
  
  
  V. A Metaphor That Crosses Life and Death
&lt;/h2&gt;

&lt;p&gt;There's a passage in the interview, far less famous than "slop," but &lt;strong&gt;possibly the deepest passage of the whole episode&lt;/strong&gt;. Dwarkesh asked about how humans learn; Karpathy gave an unexpected answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I think there's possibly no fundamental solution to this. I also think humans collapse over time. ... This is why children, they haven't overfit yet. ... We end up revisiting the same thoughts. We end up saying more and more of the same stuff, and the learning rates go down, and the collapse continues to get worse, and then everything deteriorates."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;He took a machine-learning concept (mode collapse) and reverse-applied it to human aging.&lt;/strong&gt; This kind of two-way analogy is a hallmark of his thinking — he doesn't only use the brain to understand neural networks; he uses neural networks to understand the brain. In that moment, &lt;strong&gt;he wasn't talking about LLMs. He was talking about himself — a thirty-nine-year-old man talking about his fear of his own mind aging.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The emotional apex of the episode isn't "slop." It is this passage.&lt;/p&gt;




&lt;h2&gt;
  
  
  VI. One Line for This Chapter
&lt;/h2&gt;

&lt;p&gt;In Chapter 5, &lt;strong&gt;he confessed, for the first time in front of everyone, the distance between truth and his earlier judgments.&lt;/strong&gt; He did not apologize, nor dramatize. He simply used the most restrained language an engineer can use — "slop," "march of nines," "summoning ghosts" — to tell the world: &lt;strong&gt;we are on the road, but not at the end; don't lie to ourselves.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And this act — a person publicly recalibrating themselves in front of the world — &lt;strong&gt;deserves to be remembered far more than any "AGI is still a decade away" prediction.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Animals vs Ghosts&lt;/em&gt; (2025-10-01) — &lt;a href="https://karpathy.bearblog.dev/animals-vs-ghosts/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/animals-vs-ghosts/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Dwarkesh Podcast — &lt;em&gt;AGI is still a decade away&lt;/em&gt; (2025-10-17) — transcript at &lt;a href="https://www.dwarkesh.com/p/andrej-karpathy" rel="noopener noreferrer"&gt;https://www.dwarkesh.com/p/andrej-karpathy&lt;/a&gt; ; YouTube at &lt;a href="https://www.youtube.com/watch?v=lXUZvyajciY" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=lXUZvyajciY&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Dwarkesh's section-by-section breakdown by Zvi Mowshowitz — &lt;a href="https://thezvi.substack.com/p/on-dwarkesh-patels-podcast-with-andrej" rel="noopener noreferrer"&gt;https://thezvi.substack.com/p/on-dwarkesh-patels-podcast-with-andrej&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Karpathy's post-podcast clarification thread (2025-10-21) — &lt;a href="https://x.com/karpathy/status/1979644538185752935" rel="noopener noreferrer"&gt;https://x.com/karpathy/status/1979644538185752935&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Fortune coverage — &lt;a href="https://fortune.com/2025/10/21/andrej-karpathy-openai-ai-bubble-pop-dwarkesh-patel-interview/" rel="noopener noreferrer"&gt;https://fortune.com/2025/10/21/andrej-karpathy-openai-ai-bubble-pop-dwarkesh-patel-interview/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;The space of minds&lt;/em&gt; (2025-11-29) — &lt;a href="https://karpathy.bearblog.dev/the-space-of-minds/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/the-space-of-minds/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;2025 LLM Year in Review&lt;/em&gt; (2025-12-19) — &lt;a href="https://karpathy.bearblog.dev/year-in-review-2025/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/year-in-review-2025/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts | Chapter 4: Programming in English</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 02:50:10 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-4-programming-in-english-5gdj</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-4-programming-in-english-5gdj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21UP1X%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F1d13d94c-2950-430f-82fd-7d63d26bcbfd_1600x640.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21UP1X%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F1d13d94c-2950-430f-82fd-7d63d26bcbfd_1600x640.jpeg" alt="The Man Who Summoned Ghosts | Chapter 4: Programming in English cover" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Software 3.0, English as code, and the new grammar of programming with LLMs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-chapter-5f4" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Anchors:&lt;/em&gt;&lt;br&gt;
2025-02-05 · &lt;em&gt;Deep Dive into LLMs like ChatGPT&lt;/em&gt; · &lt;a href="https://www.youtube.com/watch?v=7xTGNNLPyMI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=7xTGNNLPyMI&lt;/a&gt;&lt;br&gt;
2025-02-28 · &lt;em&gt;How I use LLMs&lt;/em&gt; · &lt;a href="https://www.youtube.com/watch?v=EWvNQjAaOHw" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=EWvNQjAaOHw&lt;/a&gt;&lt;br&gt;
2025-06-17 · &lt;em&gt;Software Is Changing (Again)&lt;/em&gt; · YC AI Startup School · &lt;a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=LCEmiRjPEtQ&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Epigraph
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"Software 1.0: humans write explicit code.&lt;br&gt;
Software 2.0: humans create datasets, objectives, and neural networks; the program is learned into weights.&lt;br&gt;
Software 3.0: humans program LLMs through prompts, context, tools, examples, memory, and instructions."&lt;br&gt;
— Andrej Karpathy, &lt;em&gt;Sequoia Ascent 2026 summary&lt;/em&gt; (recapitulating the YC 2025 talk)&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  I. Teaching LLMs the Second Time
&lt;/h2&gt;

&lt;p&gt;On February 5, 2025, Karpathy posted a 3-hour-31-minute video to his own YouTube channel titled &lt;em&gt;Deep Dive into LLMs like ChatGPT&lt;/em&gt;. &lt;strong&gt;It was the upgrade of his November 2023 &lt;em&gt;Intro to Large Language Models&lt;/em&gt;.&lt;/strong&gt; Same author, same "general audience" framing, same arc from pretraining to RLHF — but 14 months had passed, the world had changed, and so had he.&lt;/p&gt;

&lt;p&gt;What deserves the most attention is not the new material (reasoning models, o1/o3, synthetic data), but &lt;strong&gt;the new tone that appears when he revisits the old material&lt;/strong&gt;. The 2023 version says "99% of compute is in pretraining." The 2025 version returns, almost every half hour, to a single judgment — &lt;strong&gt;"the model is not a knowledge source; it is lossy compression."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It was a subtle but real shift in his teaching voice: from &lt;em&gt;help the public understand what an LLM is&lt;/em&gt; to &lt;em&gt;help the public build a skeptical mental model of an LLM&lt;/em&gt;. He was no longer just explaining how the thing worked. He was installing in his audience an anti-hype immune system.&lt;/p&gt;

&lt;p&gt;Three weeks later, on February 28, he posted a lighter video, &lt;em&gt;How I use LLMs&lt;/em&gt; — 2 hours 11 minutes, sitting at his desk, comparing ChatGPT, Claude, Gemini, and Grok across different tasks, sharing when he uses a thinking model (o3) versus 4o, how he uses memory, how he uses code interpreter. The video matters, but not for its content. It matters because &lt;strong&gt;for the first time he spoke as a &lt;em&gt;user&lt;/em&gt; rather than as a researcher or founder.&lt;/strong&gt; The tone was relaxed, domestic, almost casual.&lt;/p&gt;

&lt;p&gt;And in that casual register, &lt;strong&gt;he gently said, for the first time, a sentence that would, ten months later, change his life&lt;/strong&gt; — words to the effect of: &lt;em&gt;"I, as a human, am increasingly the bottleneck in this AI workflow."&lt;/em&gt; He didn't make a big deal of it at the time; he probably didn't realize he had said something heavy. But in retrospect, &lt;strong&gt;this is the earliest, farthest-out signal of the December 2025 personal inflection point — when his own coding flipped from "I write 80%" to "agents write 80%."&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  II. Naming Software 3.0
&lt;/h2&gt;

&lt;p&gt;Four months later, on June 17, 2025, Y Combinator's AI Startup School opened in San Francisco. Karpathy gave a 39-minute keynote that would be quoted across 2025's technology discourse — &lt;em&gt;Software Is Changing (Again).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;He formally introduced the &lt;strong&gt;Software 3.0&lt;/strong&gt; framework. Three layers of evolution: 1.0 is human-written code; 2.0 is weights learned from datasets and neural networks; 3.0 is programming the LLM via natural-language prompts. &lt;strong&gt;"We are programming in English."&lt;/strong&gt; The line would end up on countless slides.&lt;/p&gt;
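&lt;p&gt;&lt;em&gt;A minimal sketch of the contrast (my own toy example, not from the talk): in Software 1.0 the logic lives in explicit code, while in Software 3.0 the "program" is an English prompt; the &lt;code&gt;llm()&lt;/code&gt; that would execute it is a placeholder for any chat-model API.&lt;/em&gt;&lt;/p&gt;

```python
# Sketch (toy example, not Karpathy's): the same task written
# in Software 1.0 style vs Software 3.0 style.

# Software 1.0: a human writes the rules explicitly in code.
def is_positive_1_0(text):
    # Crude keyword matching stands in for a real rule-based classifier.
    return any(w in text.lower() for w in ("great", "love", "excellent"))

# Software 3.0: the "program" is an English prompt. A hypothetical
# llm(prompt) call would run it; none is invoked here.
PROMPT_3_0 = (
    "Classify the sentiment of this review as positive or negative, "
    "answering with exactly one word.\n\nReview: {review}"
)

review = "I love this phone"
print(is_positive_1_0(review))           # True
print(PROMPT_3_0.format(review=review))
```

&lt;p&gt;&lt;em&gt;The point of the contrast: in 1.0 you debug code; in 3.0 you debug the prompt, the context, and the examples you feed the model.&lt;/em&gt;&lt;/p&gt;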

&lt;p&gt;But the framework didn't appear overnight. &lt;strong&gt;It was the matured form of an eight-year inquiry into what computation actually is&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;His 2017 &lt;em&gt;Software 2.0&lt;/em&gt; blog post first said "the program is learned into the weights."&lt;/li&gt;
&lt;li&gt;His 2023 &lt;em&gt;Intro to LLMs&lt;/em&gt; first proposed the LLM-OS metaphor.&lt;/li&gt;
&lt;li&gt;His 2025 Software 3.0 closed the loop.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each step is the same engineer's instinct for &lt;em&gt;system layering&lt;/em&gt;. Each step translates a new phenomenon into a metaphor the previous generation of programmers can understand. &lt;strong&gt;This is his most stable intellectual contribution as a public thinker — not inventing any new algorithm, but giving a new phenomenon a name that lets the previous generation keep working.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  III. The Voice That Holds Both Confidence and Caution
&lt;/h2&gt;

&lt;p&gt;The most easily overlooked thing about the YC talk is its &lt;strong&gt;emotional register&lt;/strong&gt;. It is the most confident he has ever sounded on a public stage — almost no hesitation between sentences, the slide transitions feel choreographed. LLM as fab, as utility, as early OS — three dense analogies delivered in a single breath.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;at the apex of his confidence, he also delivered the most important anti-hype warning of the year&lt;/strong&gt;. Near the end, he deliberately paused to say: treat 2025 as the &lt;em&gt;decade&lt;/em&gt; of agents, not the &lt;em&gt;year&lt;/em&gt; of agents.&lt;/p&gt;

&lt;p&gt;It is a line easy to miss. But it is the seed of the entire third act of this biography. When he showed up on Dwarkesh's podcast four months later and said "AGI is still a decade away," &lt;strong&gt;he was not changing his position. He was repeating the June line that nobody wanted to hear seriously — only this time, in a sentence loud enough that everyone would.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. The Unease Already Lodged Inside the Confidence
&lt;/h2&gt;

&lt;p&gt;If you read only the first half of 2025's Karpathy, he almost looks like a perfect, confident, synthesizing public thinker. The Software 3.0 framework is the most-cited intellectual contribution of his career. Eureka Labs is making slow but steady progress on its LLM-101-N course. His own channel keeps releasing well-received videos like &lt;em&gt;Deep Dive&lt;/em&gt; and &lt;em&gt;How I use LLMs&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;three undertows&lt;/strong&gt; had already begun running through the first half of 2025:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The repeated "the model is lossy compression" line in &lt;em&gt;Deep Dive into LLMs&lt;/em&gt; — a preemptive vaccine against hype about frontier model capability.&lt;/li&gt;
&lt;li&gt;The "I am the bottleneck in this workflow" remark in &lt;em&gt;How I use LLMs&lt;/em&gt; — his earliest self-awareness that his own working style was about to change.&lt;/li&gt;
&lt;li&gt;The "decade of agents, not the year of agents" line at the end of the YC keynote — his reminder, to an industry he was clearly speaking to, of its own hype cycle.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the spring and summer of 2025, all three lines look unimportant — they read like footnotes from a careful engineer. &lt;strong&gt;But by October's Dwarkesh interview, all three undertows would surface at the same time, joined into a single conversation that shook the industry.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  V. One Line for This Chapter
&lt;/h2&gt;

&lt;p&gt;Chapter 4's Karpathy is a thinker &lt;strong&gt;confidently closing his own loop&lt;/strong&gt; — Software 3.0 is the closing ceremony of his intellectual narrative, but he knows, in his own heart, that he has built several exits into the seal. &lt;strong&gt;He is not contradicting himself. He is leaving doors open for his next recalibration.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Deep Dive into LLMs like ChatGPT&lt;/em&gt; (2025-02-05) — &lt;a href="https://www.youtube.com/watch?v=7xTGNNLPyMI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=7xTGNNLPyMI&lt;/a&gt; ; Karpathy's announcement at &lt;a href="https://x.com/karpathy/status/1887211193099825254" rel="noopener noreferrer"&gt;https://x.com/karpathy/status/1887211193099825254&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;How I use LLMs&lt;/em&gt; (2025-02-28) — &lt;a href="https://www.youtube.com/watch?v=EWvNQjAaOHw" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=EWvNQjAaOHw&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Software Is Changing (Again)&lt;/em&gt;, YC AI Startup School (2025-06-17) — &lt;a href="https://www.youtube.com/watch?v=LCEmiRjPEtQ" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=LCEmiRjPEtQ&lt;/a&gt; ; YC summary at &lt;a href="https://www.ycombinator.com/library/MW-andrej-karpathy-software-is-changing-again" rel="noopener noreferrer"&gt;https://www.ycombinator.com/library/MW-andrej-karpathy-software-is-changing-again&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Software 3.0 framing recapitulated by Karpathy himself in &lt;em&gt;Sequoia Ascent 2026 summary&lt;/em&gt; — &lt;a href="https://karpathy.bearblog.dev/sequoia-ascent-2026/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/sequoia-ascent-2026/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;2017 &lt;em&gt;Software 2.0&lt;/em&gt; original blog post (referenced) — &lt;a href="https://karpathy.medium.com/software-2-0-a64152b37c35" rel="noopener noreferrer"&gt;https://karpathy.medium.com/software-2-0-a64152b37c35&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts | Chapter 3: Stepping Away, Again</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 02:21:39 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-3-stepping-away-again-4ghe</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-3-stepping-away-again-4ghe</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21MpX3%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F683d17b1-11fe-4cbf-ae70-4f4205c3f8a8_1600x640.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21MpX3%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F683d17b1-11fe-4cbf-ae70-4f4205c3f8a8_1600x640.jpeg" alt="The Man Who Summoned Ghosts | Chapter 3: Stepping Away, Again cover" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Leaving OpenAI again, building Eureka Labs, and turning education into a product.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-chapter-67b" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Anchors:&lt;/em&gt;&lt;br&gt;
2024-02-20 · &lt;em&gt;Let's build the GPT Tokenizer&lt;/em&gt;&lt;br&gt;
2024-03-20 · Sequoia AI Ascent · &lt;em&gt;Making AI Accessible&lt;/em&gt; · &lt;a href="https://www.youtube.com/watch?v=c3b-JASoPi0" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=c3b-JASoPi0&lt;/a&gt;&lt;br&gt;
2024-06-23 · UC Berkeley AI Hackathon Keynote · &lt;a href="https://www.youtube.com/watch?v=tsTeEkzO9xc" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=tsTeEkzO9xc&lt;/a&gt;&lt;br&gt;
2024-09 · No Priors Ep. 80 · &lt;a href="https://www.youtube.com/watch?v=hM_h0UA7upI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=hM_h0UA7upI&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Epigraph
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"I love startups and I love companies, and I want there to be a vibrant ecosystem of them. ... I would say a bit more hesitant about kind of, like, five mega corps kind of like taking over."&lt;br&gt;
— Andrej Karpathy, Sequoia AI Ascent · 2024-03&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  I. Leaving Again
&lt;/h2&gt;

&lt;p&gt;On February 13, 2024, Karpathy announced he was leaving OpenAI for the second time. The stated reason was simple: &lt;em&gt;"I want to spend more time on my own personal projects."&lt;/em&gt; No drama, no conflict statement, no flurry of farewell tweets from colleagues. Quiet, clean — like the first time.&lt;/p&gt;

&lt;p&gt;But this time, &lt;strong&gt;he was no longer just an engineer letting go of the wheel.&lt;/strong&gt; When he left, the LLM-OS metaphor from his &lt;em&gt;Intro to LLMs&lt;/em&gt; had already entered industry vocabulary. When he left, his own YouTube channel had hundreds of thousands of subscribers. When he left, OpenAI was already on the edge of internal fracture — three months later Ilya Sutskever would leave, five months later Jan Leike would leave, the superalignment team would collapse. Karpathy had walked out one step ahead of all of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  II. The First Words After Leaving — No Longer Polite to the Tokenizer
&lt;/h2&gt;

&lt;p&gt;A week after leaving, he released his first public work as a free agent: &lt;em&gt;Let's build the GPT Tokenizer&lt;/em&gt;. On the surface, it's a 2-hour-13-minute tutorial paired with his GitHub repo &lt;code&gt;minbpe&lt;/code&gt;. &lt;strong&gt;But its real significance is in the rant at the end.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;He systematically listed the bugs that tokenization creates inside LLMs: the model can't spell, can't do basic arithmetic, struggles more with JSON than with YAML, has uneven performance across languages, fails on character-level tasks ("how many r's in strawberry") — each one rooted in this one seemingly innocuous preprocessing step. His conclusion was cold: &lt;strong&gt;tokenization is legacy technology and we should try to escape it.&lt;/strong&gt;&lt;/p&gt;
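&lt;p&gt;The failure is easy to reproduce in miniature. Below is a toy greedy longest-match tokenizer over a made-up vocabulary — both the vocabulary and the function are illustrative inventions, not Karpathy's &lt;code&gt;minbpe&lt;/code&gt; — but real BPE tokenizers, which learn tens of thousands of entries from data, produce the same effect: the model receives two opaque token ids for "strawberry", never its ten characters.&lt;/p&gt;

```python
# Toy greedy longest-match subword tokenizer. VOCAB is a made-up
# illustration; real BPE tokenizers (e.g. GPT-2's) learn their entries
# from data, but the effect is the same: the model sees opaque subword
# ids, never individual characters.
VOCAB = {"straw": 0, "berry": 1, "st": 2, "raw": 3, "s": 4, "t": 5,
         "r": 6, "a": 7, "w": 8, "b": 9, "e": 10, "y": 11}

def tokenize(text):
    """Split text into the longest matching vocabulary entries."""
    tokens = []
    i = 0
    while i != len(text):
        # try the longest remaining substring first, shrink until a hit
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError("no vocabulary entry covers " + repr(text[i]))
    return tokens

print(tokenize("strawberry"))  # ['straw', 'berry'] -- two tokens
print(len("strawberry"))       # 10 characters the model never sees
```

&lt;p&gt;Ask a model that sees only those two ids how many r's the word contains, and it is guessing about the internals of tokens it cannot look inside.&lt;/p&gt;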

&lt;p&gt;This is a subtle but real turn. &lt;strong&gt;He would not have spoken this way from inside OpenAI.&lt;/strong&gt; Not because OpenAI forbids it, but because when you are a researcher at a frontier lab, you carry an unspoken duty to speak with respect for the core technologies your lab is built on. The moment you step outside, that duty falls away. &lt;strong&gt;The price of speaking freely becomes much lower.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What we will see later is that this freer voice rises to a peak in October 2025 — but the seed of that sharp sobriety was planted at the end of this tokenizer video in February 2024.&lt;/p&gt;




&lt;h2&gt;
  
  
  III. Sequoia Ascent — A First Draft of the Founder Identity
&lt;/h2&gt;

&lt;p&gt;A month later, in March 2024, he appeared at Sequoia's AI Ascent conference for a fireside chat with Stephanie Zhan. In her introduction, she described him as "an incredible, fascinating futurist thinker; a relentless optimist; and a very practical builder." That is the 2024 Karpathy — still firmly in the optimist column.&lt;/p&gt;

&lt;p&gt;In that conversation, he laid out his LLM-OS vision systematically for the first time to a room of VCs and founders. But what deserves remembering more is the long, almost uninterrupted monologue he gave when asked what he had learned from working with Elon.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"He likes very small, strong, highly technical teams. Companies, by default, the teams grow and they get large. Elon was always like a force against growth."&lt;br&gt;
"He doesn't like large meetings. If you're not contributing, and you're not learning, just walk out. And this is fully encouraged."&lt;br&gt;
"Usually, a CEO of a company is like a remote person five layers up. It's not how he runs companies. ... If the team is small and strong, then engineers and the code are the source of truth."&lt;br&gt;
"I like to say that he runs the biggest startups."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you listen to that passage, there is a quiet sense: &lt;strong&gt;he is not describing Elon. He is describing what he is about to do.&lt;/strong&gt; Small teams, deep technical work, removing bottlenecks, accountable to engineers and not middle managers — all of this would surface in Eureka Labs. He had not yet announced Eureka Labs in that Sequoia conversation, but he was already publicly rehearsing the working style he would adopt.&lt;/p&gt;

&lt;p&gt;The interview ended on a soft but firm note. Asked what gave him the most meaning going forward, he said: "I want the ecosystem to be like a coral reef — a vibrant ecosystem of all kinds of cool, exciting startups in all the nooks and crannies of the economy."&lt;/p&gt;

&lt;p&gt;Stephanie Zhan teased him on stage: "Genuinely, Andrej dreams about coral reefs." The room laughed.&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. Eureka Labs and the Berkeley Pep Talk
&lt;/h2&gt;

&lt;p&gt;June 23, 2024. UC Berkeley AI Hackathon. 1,200 student hackers filled the SkyDeck awards ceremony. Karpathy walked on stage and gave a talk unlike anything he had ever given publicly — &lt;strong&gt;the most motivational, the most educator-flavored, the least researcher-flavored&lt;/strong&gt; appearance of his life.&lt;/p&gt;

&lt;p&gt;He used as his example a weekend project he had built — &lt;code&gt;awesomemovies.life&lt;/code&gt;. He confessed: this wasn't his first time building this kind of thing. It was his twentieth. Each iteration took one weekend. Each one was imperfect. But each one taught him something new. &lt;strong&gt;The point wasn't the site itself; it was the accumulation — a Malcolm Gladwell "10,000 hours."&lt;/strong&gt; What looks like "vibe" or "talent," he was saying, is mostly the muscle memory left by a lot of patient practice.&lt;/p&gt;

&lt;p&gt;The whole talk's register is &lt;em&gt;encouraging&lt;/em&gt; — across all his public appearances, this is the moment when he most clearly sounds like a teacher. The audience was young people about to enter AI; he wasn't a researcher among peers; he was a teacher telling students: you can do this, but you must repeat.&lt;/p&gt;

&lt;p&gt;Less than a month later, on July 16, 2024, he announced Eureka Labs on X. "This is something I've been doing my whole life," he wrote, "but now it's finally my full-time job."&lt;/p&gt;

&lt;p&gt;The Berkeley talk and the Eureka Labs announcement were less than 30 days apart. &lt;strong&gt;The talk wasn't a coincidence — it was Eureka Labs' spiritual manifesto.&lt;/strong&gt; He rehearsed the identity in front of 1,200 students before formally registering it as a company.&lt;/p&gt;




&lt;h2&gt;
  
  
  V. No Priors — An Undertow of Doubt
&lt;/h2&gt;

&lt;p&gt;By the fall of 2024, Eureka Labs had been alive for two months. He appeared on Sarah Guo and Elad Gil's podcast &lt;em&gt;No Priors&lt;/em&gt;. The conversation ranged widely — Tesla vs. Waymo's self-driving paths, the shared neural networks between Optimus and the Tesla fleet, his views on the future of education.&lt;/p&gt;

&lt;p&gt;But in that conversation, &lt;strong&gt;he first uttered, in public, the seed of what would later become "AGI is still a decade away."&lt;/strong&gt; He proposed a conjecture — he called it the &lt;strong&gt;cognitive core&lt;/strong&gt;: the part of the model that reasons, plans, and thinks might need to be very small, perhaps only ~1B parameters. Current frontier models are large, he suggested, because they carry too much &lt;em&gt;knowledge&lt;/em&gt;. But knowledge and intelligence are not the same thing; a bigger model is not, automatically, a smarter one.&lt;/p&gt;

&lt;p&gt;It was a researcher's detail, the kind of thing only insiders care about. &lt;strong&gt;But it is the true origin of the sharpest lines from the Dwarkesh interview thirteen months later&lt;/strong&gt; — "the model is lossy compression, not a knowledge source," "AGI is still a decade away," "it's slop." All of it grew from the judgment of "knowledge ≠ intelligence" that he first stated in September 2024.&lt;/p&gt;

&lt;p&gt;By the end of 2024, what outsiders saw was a vibrant founder — newly launched Eureka Labs, sharing the LLM-OS vision in podcasts and on stages, holding a coral-reef optimism for the ecosystem. &lt;strong&gt;But his own internal pessimist thread had quietly begun, in the cognitive-core conjecture.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  One Line for This Chapter
&lt;/h2&gt;

&lt;p&gt;In chapter three, he has become a founder on the outside, but he is still a researcher inside — and this time, the subject he is researching is &lt;em&gt;exactly where AI still falls short&lt;/em&gt;. His optimism and his pessimism aren't a contradiction. They are two outward-facing directions of the same engineer's heart.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Let's build the GPT Tokenizer&lt;/em&gt; (2024-02-20) — minbpe repo at &lt;a href="https://github.com/karpathy/minbpe" rel="noopener noreferrer"&gt;https://github.com/karpathy/minbpe&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sequoia AI Ascent 2024 transcript — &lt;a href="https://aletteraday.substack.com/p/letter-228-andrej-karpathy-and-stephanie" rel="noopener noreferrer"&gt;https://aletteraday.substack.com/p/letter-228-andrej-karpathy-and-stephanie&lt;/a&gt; ; video at &lt;a href="https://www.youtube.com/watch?v=c3b-JASoPi0" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=c3b-JASoPi0&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;UC Berkeley AI Hackathon 2024 Keynote — &lt;a href="https://www.youtube.com/watch?v=tsTeEkzO9xc" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=tsTeEkzO9xc&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Eureka Labs announcement (2024-07-16) — &lt;a href="https://x.com/karpathy/status/1813263734707790301" rel="noopener noreferrer"&gt;https://x.com/karpathy/status/1813263734707790301&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;No Priors Ep. 80 (2024-09) — &lt;a href="https://www.youtube.com/watch?v=hM_h0UA7upI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=hM_h0UA7upI&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts | Chapter 2: The Training Stack Is Not a Secret</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 02:20:59 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-2-the-training-stack-is-not-a-secret-3h9a</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-2-the-training-stack-is-not-a-secret-3h9a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21he1L%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252Fd515dbc7-f5c2-4f08-8940-a0207ff09f47_1280x720.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21he1L%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252Fd515dbc7-f5c2-4f08-8940-a0207ff09f47_1280x720.jpeg" alt="The Man Who Summoned Ghosts | Chapter 2: The Training Stack Is Not a Secret cover" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;From OpenAI to nanoGPT: why the training stack should feel legible, not magical.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-chapter-262" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Anchors:&lt;/em&gt;&lt;br&gt;
2023-05-23 · State of GPT @ Microsoft Build · &lt;a href="https://www.youtube.com/watch?v=bZQun8Y4L2A" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=bZQun8Y4L2A&lt;/a&gt;&lt;br&gt;
2023-11-23 · [1hr Talk] Intro to Large Language Models · &lt;a href="https://www.youtube.com/watch?v=zjkBMFhNj_g" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=zjkBMFhNj_g&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Epigraph
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"99% of the compute is in pretraining. ... For applications, you want low-stakes things, with humans in the loop. Treat these models like cognitive interns."&lt;br&gt;
— Andrej Karpathy, &lt;em&gt;State of GPT&lt;/em&gt; · 2023-05&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  I. The Return
&lt;/h2&gt;

&lt;p&gt;In the last two months of 2022, three things happened that reshaped the road Karpathy was about to take.&lt;/p&gt;

&lt;p&gt;On November 30, ChatGPT launched. A million users in five days; a cultural phenomenon in two months. It wasn't a researcher's curiosity anymore; it was a daily ritual for ordinary people. &lt;strong&gt;The romantic imagination that once compared neural networks to "another kind of intelligence in nature"&lt;/strong&gt; was, by early 2023, no longer just romantic. It was a chat box that tens of millions of people opened every day.&lt;/p&gt;

&lt;p&gt;The second thing: he re-joined OpenAI. He returned as a researcher, but the context had changed. OpenAI was no longer the small team upstairs from a chocolate factory (Stephanie Zhan would later recall, in 2024 at Sequoia, that this was OpenAI's original office). It was now the center of everyone's attention. Everyone wanted to know what was being built inside.&lt;/p&gt;

&lt;p&gt;The third thing — the subtlest, and arguably the real protagonist of this chapter: &lt;strong&gt;he decided to explain the training stack to the public.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  II. State of GPT — Unveiling the Black Box
&lt;/h2&gt;

&lt;p&gt;May 2023, Microsoft Build conference. Karpathy stood on stage for about 42 minutes and laid out the entire training pipeline of GPT-class models as a single systematic diagram: pretraining → supervised fine-tuning → reward modeling → reinforcement learning from human feedback. For each stage, he showed what data goes in, how much compute it costs, what trade-offs it involves.&lt;/p&gt;

&lt;p&gt;The talk would be quoted across the industry for years. Its impact came not from the novelty of any single detail — most of the specifics were already known internally at frontier labs. Its power came from the &lt;em&gt;posture&lt;/em&gt;: he chose to treat all of this as public knowledge.&lt;/p&gt;

&lt;p&gt;The AI industry at that moment was sliding into a kind of mystification of the training stack. Every frontier lab hinted that their success came from some secret recipe outsiders couldn't see. Karpathy's talk was a quiet refutation: &lt;strong&gt;there is no secret. 99% of the compute is in pretraining. The rest is engineering and taste.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What deserves to be remembered even more is his stance toward applications, expressed at the end of that talk. He gave developers a clear judgment: &lt;strong&gt;at this stage, build "low-stakes, human-in-the-loop" applications. Treat the model as a cognitive intern, not as an autonomous agent.&lt;/strong&gt; It was an engineer's caution. Not glamorous, not sexy — but in the years that followed, this exact line would resurface, in different vocabulary, again and again. By the time he said "march of nines" on Dwarkesh two and a half years later, this thread had been buried deep into his thinking.&lt;/p&gt;




&lt;h2&gt;
  
  
  III. The Birth of a Metaphor
&lt;/h2&gt;

&lt;p&gt;Half a year later, in November 2023, he recorded an hour-long video for his own YouTube channel, titled &lt;em&gt;Intro to Large Language Models&lt;/em&gt;. The talk had originally been given at an AI security summit and wasn't recorded; the response was strong enough that he re-recorded it himself for everyone.&lt;/p&gt;

&lt;p&gt;This was the first time he systematically &lt;em&gt;translated&lt;/em&gt; the LLM stack for a non-technical audience. And it was at this moment that he publicly introduced a metaphor that would echo for years: &lt;strong&gt;the LLM is a new kind of operating system.&lt;/strong&gt; The LLM is the CPU, the context window is RAM, tool use is peripherals, multimodality is I/O. The entire LLM ecosystem, in his framing, was a new kind of computer still taking shape.&lt;/p&gt;

&lt;p&gt;The metaphor has a clear place in the evolution of his own thinking — it sits exactly between his 2017 &lt;em&gt;Software 2.0&lt;/em&gt; (the essence of programs shifts from code to weights) and the &lt;em&gt;Software 3.0&lt;/em&gt; he would formally announce in 2025. Three concepts; three escalating answers to the question of &lt;em&gt;what computation is&lt;/em&gt;. And in some real sense, from this moment on, &lt;strong&gt;Karpathy was no longer just a researcher. He had become a public thinker with his own narrative framework.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But it is worth &lt;em&gt;hearing his tone&lt;/em&gt;. At this point, his voice is not yet the sober, sharp-edged register of the Dwarkesh interview in 2025. It is &lt;strong&gt;devout&lt;/strong&gt; — the kind of engineer's devotion that says, &lt;em&gt;this thing is beautiful, let me show you how beautiful it is.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. The Emergence of the Educator
&lt;/h2&gt;

&lt;p&gt;He had designed Stanford's first deep learning course (CS231n). In his Tesla years, he was almost invisible in public. In September 2022, he started the &lt;em&gt;Neural Networks: Zero to Hero&lt;/em&gt; series, building up from &lt;code&gt;micrograd&lt;/code&gt; to &lt;code&gt;makemore&lt;/code&gt; to &lt;em&gt;Let's build GPT from scratch&lt;/em&gt;. Each video, in isolation, is a tutorial. &lt;strong&gt;Taken together, they are someone quietly building his next identity.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By the end of 2023, that identity was clear. He was still at OpenAI, but more and more of his influence was flowing through his own channel and public talks rather than through OpenAI's internal products. &lt;strong&gt;He was becoming the AI era's first true public teacher — not a university lecturer, but a YouTube teacher facing a world that suddenly needed to understand AI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;em&gt;Intro to LLMs&lt;/em&gt; talk, one detail captures the maturity of this identity: he drew the LLM-OS diagram in a way that's friendly even to viewers with no computing background. CPU, memory, peripherals — these are concepts ordinary users have lived with since the 1980s. He wasn't showing off to peers. He was handing a key to an unprepared world.&lt;/p&gt;




&lt;h2&gt;
  
  
  V. The Seeds Planted in This Chapter
&lt;/h2&gt;

&lt;p&gt;By the end of 2023, three seeds had been planted in Karpathy's public posture, each of which would germinate in later chapters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seed one&lt;/strong&gt;: the training stack is public knowledge. This will grow, in February 2024, into his critique of tokenization ("legacy tech we should try to escape from"), and by 2025 into the demystifying extreme of nanochat and microGPT — &lt;em&gt;"the best ChatGPT you can buy for $100."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seed two&lt;/strong&gt;: low-stakes plus human-in-the-loop. This will grow, in October 2025, into the "it's slop" line on Dwarkesh — the same engineering caution, just no longer being polite to the code that frontier models produce.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seed three&lt;/strong&gt;: his identity as a public teacher. This will bloom in just half a year — in February 2024, he leaves OpenAI for the second time, and that July he announces the founding of Eureka Labs. From that moment on, for the first time in his life, &lt;strong&gt;his main work would not be research. It would be education.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But none of that had happened yet. The Karpathy of 2023 was still inside OpenAI, still doing research with OpenAI's resources, not yet the independent teacher he would eventually become. What we see in this chapter is &lt;strong&gt;an insider who has begun watching the door with one eye.&lt;/strong&gt; He was already preparing for what came next, even if he himself may not yet have known.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Line for This Chapter
&lt;/h2&gt;

&lt;p&gt;If chapter one's Karpathy is an engineer who just let go of the wheel, &lt;strong&gt;this chapter's Karpathy is an engineer who has begun drawing maps for the whole world&lt;/strong&gt; — and hasn't yet realized he is no longer just an engineer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;State of GPT&lt;/em&gt;, Microsoft Build (2023-05-23) — &lt;a href="https://www.youtube.com/watch?v=bZQun8Y4L2A" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=bZQun8Y4L2A&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;[1hr Talk] Intro to Large Language Models&lt;/em&gt; (2023-11-23) — &lt;a href="https://www.youtube.com/watch?v=zjkBMFhNj_g" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=zjkBMFhNj_g&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Neural Networks: Zero to Hero&lt;/em&gt; series (2022-09–) — &lt;a href="https://www.youtube.com/@AndrejKarpathy" rel="noopener noreferrer"&gt;https://www.youtube.com/@AndrejKarpathy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sequoia AI Ascent 2024 transcript (for the chocolate factory office context) — &lt;a href="https://aletteraday.substack.com/p/letter-228-andrej-karpathy-and-stephanie" rel="noopener noreferrer"&gt;https://aletteraday.substack.com/p/letter-228-andrej-karpathy-and-stephanie&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts | Chapter 1: The Man Who Stepped Away from the Wheel</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 02:04:50 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-1-the-man-who-stepped-away-from-the-wheel-356b</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-chapter-1-the-man-who-stepped-away-from-the-wheel-356b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21Uk-D%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F61929301-8cf5-4906-960d-33dfd39fcf8c_1600x640.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21Uk-D%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252F61929301-8cf5-4906-960d-33dfd39fcf8c_1600x640.jpeg" alt="The Man Who Summoned Ghosts | Chapter 1: The Man Who Stepped Away from the Wheel cover" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Leaving Tesla, returning to first principles, and learning how to recalibrate in public.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-chapter" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Epigraph
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"I think a few years ago, I sort of felt like AGI was — it wasn't clear how it was going to happen. ... And now I think it's very clear, and there's like a lot of space, and everyone is trying to fill it."&lt;br&gt;
— Andrej Karpathy, Sequoia AI Ascent · 2024-03&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  I. The Photo
&lt;/h2&gt;

&lt;p&gt;In March 2024, in a room at Sequoia's AI Ascent conference, Andrej Karpathy walked up to the stage. The host, Stephanie Zhan, had opened a background slide ahead of time — an early photo of him. Karpathy glanced at it and laughed: "It's a very intimidating photo."&lt;/p&gt;

&lt;p&gt;The man in that photo belonged to another era. The line between that era and this one runs right through the summer of 2022.&lt;/p&gt;




&lt;h2&gt;
  
  
  II. The Departure
&lt;/h2&gt;

&lt;p&gt;In July 2022, Karpathy left Tesla. This is public knowledge. He had been Tesla's Director of AI for five years, leading the vision-only Autopilot architecture. Stephanie Zhan, two years later in 2024, summarized the outsider's view:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"He was poached by Elon in 2017. ... For folks who don't remember the context then, Elon had just transitioned through six different autopilot leaders, each of whom lasted six months each. And I remember when Andrej took this job, I thought, &lt;em&gt;Congratulations, and good luck&lt;/em&gt;."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Those five years, he lived inside an engineering culture heavily shaped by Elon Musk. He described that work style, looking back in 2024:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"He likes very small, strong, highly technical teams. ... Companies, by default, the teams grow and they get large. Elon was always like a force against growth."&lt;br&gt;
"He doesn't like large meetings. ... if you're not contributing, and you're not learning, just walk out. And this is fully encouraged."&lt;br&gt;
"Usually, a CEO of a company is like a remote person five layers up... It's not how he runs companies. ... If the team is small and strong, then engineers and the code are the source of truth."&lt;br&gt;
"I like to say that he runs the biggest startups."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These are his 2024 recollections. He didn't say any of this publicly the moment he left, in the summer of 2022. He took about three months before reappearing on camera at all.&lt;/p&gt;




&lt;h2&gt;
  
  
  III. The Tone of Leaving
&lt;/h2&gt;

&lt;p&gt;October 29, 2022 — Lex Fridman Podcast Episode #333. This was his first long-form public conversation after Tesla.&lt;/p&gt;

&lt;p&gt;Three and a half hours. The conversation drifted from the mathematical elegance of neural networks to aliens, synthetic biology, the shape of AGI, the simulation hypothesis. &lt;strong&gt;In that version of Karpathy, the metaphors for AI were romantic&lt;/strong&gt; — neural networks as "another kind of intelligence in nature," sitting alongside animal brains and human minds as a peer form of cognition. His tone toward AGI was curious and open: he neither rushed it nor denied it. AGI, in that voice, had no timeline. Only possibilities.&lt;/p&gt;

&lt;p&gt;If this whole biography traces a curve, &lt;strong&gt;that moment is its starting point.&lt;/strong&gt; No founder identity yet; no clear version of the educator mission; no LLM OS metaphor; no Software 3.0 framework; no "decade of agents"; no "summoning ghosts"; no "slop"; no "AGI is still a decade away."&lt;/p&gt;

&lt;p&gt;Just an engineer who had just let go of the wheel and hadn't yet decided which direction to drive next.&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. A Few Things That Hadn't Happened Yet
&lt;/h2&gt;

&lt;p&gt;A few things hadn't happened to him yet at that October 2022 moment — and each one, in turn, would change him. Listed in chronological order, they almost form the chapter outline of this book:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;On November 30, 2022, ChatGPT launched. This day would re-define the context of every public appearance he would make afterward.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In early 2023, he re-joined OpenAI — research scientist again, but this time also a public educator.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In May 2023, his "State of GPT" talk at Microsoft Build laid out the entire LLM training stack in a public, anatomical way that the industry would quote for years.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In November 2023, his hour-long Intro to Large Language Models gave the public the "LLM OS" metaphor for the first time.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In February 2024, he left OpenAI — for the second time. Half a year later, he founded Eureka Labs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In June 2025, at YC AI Startup School, he laid out the full Software 3.0 framework.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In October 2025, on Dwarkesh Patel's podcast, he uttered the sharpest line he had ever said about this AI wave — "It's slop. AGI is still a decade away."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In December 2025 — what he himself would later call his "personal inflection point" — his own coding flipped from "I write 80%" to "agents write 80%."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;In April 2026, he returned to the same Sequoia AI Ascent stage, the same host, and said: "I have never felt more behind as a programmer."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;None of these events turned him into a different person. His code aesthetic didn't change. His allergy to hype didn't change. His preference for "small teams, deep technical work, remove bottlenecks" didn't change. His reverence for education didn't change. &lt;strong&gt;What changed was the world. The world got faster every year, and his judgment had to be calibrated more sharply, every time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And that, precisely, is what this book hopes to refract back to the reader: &lt;strong&gt;in a world moving faster than you, a thinking person is not someone who refuses to change. A thinking person is someone who changes gracefully.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  V. One Line for This Chapter
&lt;/h2&gt;

&lt;p&gt;The Karpathy of the fall of 2022 is this book's &lt;em&gt;baseline&lt;/em&gt;. His romantic imagination is the ground from which every later coolness, every later sobriety, every later "slop" would grow.&lt;/p&gt;

&lt;p&gt;If you want to hear how a person changes, you have to first hear the version that hasn't changed yet.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Lex Fridman Podcast #333 (2022-10-29) — &lt;a href="https://www.youtube.com/watch?v=cdiD-9MMpb0" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=cdiD-9MMpb0&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sequoia AI Ascent 2024 (2024-03-20) — Stephanie Zhan × Karpathy — &lt;a href="https://www.youtube.com/watch?v=c3b-JASoPi0" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=c3b-JASoPi0&lt;/a&gt; ; transcript at &lt;a href="https://aletteraday.substack.com/p/letter-228-andrej-karpathy-and-stephanie" rel="noopener noreferrer"&gt;https://aletteraday.substack.com/p/letter-228-andrej-karpathy-and-stephanie&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sequoia AI Ascent 2026 (2026-04-30) — &lt;a href="https://www.youtube.com/watch?v=96jN2OCOfLs" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=96jN2OCOfLs&lt;/a&gt; ; cleaned transcript at &lt;a href="https://karpathy.bearblog.dev/sequoia-ascent-2026/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/sequoia-ascent-2026/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Dwarkesh Podcast — &lt;em&gt;AGI is still a decade away&lt;/em&gt; (2025-10-17) — transcript at &lt;a href="https://www.dwarkesh.com/p/andrej-karpathy" rel="noopener noreferrer"&gt;https://www.dwarkesh.com/p/andrej-karpathy&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Man Who Summoned Ghosts: Andrej Karpathy in the AI Era | Prologue: I Met nanoGPT Before I Met Him</title>
      <dc:creator>Lei Hua</dc:creator>
      <pubDate>Thu, 14 May 2026 02:01:57 +0000</pubDate>
      <link>https://dev.to/lhua0420/the-man-who-summoned-ghosts-andrej-karpathy-in-the-ai-era-prologue-i-met-nanogpt-before-i-met-26d8</link>
      <guid>https://dev.to/lhua0420/the-man-who-summoned-ghosts-andrej-karpathy-in-the-ai-era-prologue-i-met-nanogpt-before-i-met-26d8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21dU_e%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252Fa2ec68ba-13f7-419d-b54a-de83b9869c6d_1600x640.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsubstackcdn.com%2Fimage%2Ffetch%2F%24s_%21dU_e%21%2Cf_auto%2Cq_auto%3Agood%2Cfl_progressive%3Asteep%2Fhttps%253A%252F%252Fsubstack-post-media.s3.amazonaws.com%252Fpublic%252Fimages%252Fa2ec68ba-13f7-419d-b54a-de83b9869c6d_1600x640.jpeg" alt="The Man Who Summoned Ghosts: Andrej Karpathy in the AI Era | Prologue: I Met nanoGPT Before I Met Him cover" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A thinker profile of Andrej Karpathy across the AI era.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://lhua0420.substack.com/p/the-man-who-summoned-ghosts-andrej" rel="noopener noreferrer"&gt;Lei Hua's Substack&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I met nanoGPT before I met Karpathy. Only later did I realize that what I had encountered was not merely a person, but a way of working.&lt;/p&gt;

&lt;h2&gt;I.&lt;/h2&gt;

&lt;p&gt;Like many people, my real understanding of deep neural networks did not begin with papers, and it did not begin in a classroom. It began with a handful of Karpathy repositories on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;micrograd&lt;/strong&gt;: a Python file of fewer than 200 lines that lays bare the mechanism of backpropagation. It was the first time gradient descent stopped being an abstract term I had to trust, and became code I could read, step through, and modify.&lt;/p&gt;
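
&lt;p&gt;&lt;em&gt;To make that concrete, here is a toy sketch in the micrograd spirit (my own reconstruction, not the repository's actual code): a scalar &lt;code&gt;Value&lt;/code&gt; that records how it was computed, so the chain rule can walk backward through the graph.&lt;/em&gt;&lt;/p&gt;

```python
# Minimal micrograd-style autodiff sketch (hypothetical reconstruction).
# Each Value remembers its inputs and a closure that applies the chain rule.
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad   # d(out)/d(self) = 1
            other.grad += out.grad  # d(out)/d(other) = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # chain rule
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply each node's chain rule.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# y = a * b + a  =>  dy/da = b + 1 = 4, dy/db = a = 2
a, b = Value(2.0), Value(3.0)
y = a * b + a
y.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

That is the whole trick: gradients are not magic, just the chain rule applied in reverse topological order.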

&lt;p&gt;&lt;strong&gt;makemore&lt;/strong&gt;: the same posture of doing the fullest thing in the fewest lines, starting with character-level language models and walking all the way toward transformers. It was the first time I understood that a language model was not a black box hidden inside a company's cloud. It was code that could run on a laptop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nanoGPT&lt;/strong&gt;: GPT-2 rewritten in roughly a thousand lines of Python, with training scripts and data preparation laid out in plain view. It was the first time I believed that understanding how ChatGPT-like systems are trained was something an ordinary engineer could do. No OpenAI badge. No PhD. Just a README that more or less says: clone it, then run it.&lt;/p&gt;

&lt;p&gt;Only then did I start to see the person behind the repositories.&lt;/p&gt;

&lt;p&gt;I watched him build GPT-2 from scratch in a single four-hour take on YouTube. I watched him explain LLMs in one hour for a broad audience. I watched him repeat the same ideas, in ten different registers, at Microsoft Build, Sequoia, YC, No Priors, and Dwarkesh. And I watched him become clearer each year, more restrained each year, more publicly willing to recalibrate himself each year.&lt;/p&gt;

&lt;p&gt;Many people may have met Karpathy in the same order: first his code, then his judgment, and only at the end the person.&lt;/p&gt;

&lt;h2&gt;II. What This Series Is Not Trying to Do&lt;/h2&gt;

&lt;p&gt;I need to say this first.&lt;/p&gt;

&lt;p&gt;This series is not a love letter. My respect for Karpathy should not turn the piece into flattery, because flattery is useless. A figure shaped by praise alone does not help the reader see themselves.&lt;/p&gt;

&lt;p&gt;This series is also not an indictment. Even after the October 2025 Dwarkesh episode, where he was briefly flattened into the role of an AI bubble-popper, I do not want to dramatize the story as "look, he changed." The real situation is more complicated, and more worth understanding.&lt;/p&gt;

&lt;p&gt;So what is this series trying to do?&lt;/p&gt;

&lt;p&gt;It is trying to do something narrow but deep: to follow, across the years in which AI reshaped the world around him, the public record of one technical mind as its thoughts, judgments, emotions, and posture were pushed by facts into repeated recalibration.&lt;/p&gt;

&lt;p&gt;The difficulty is not collecting the material. His material is public, rich, and unusually complete: YouTube, bearblog, X, Sequoia, YC, No Priors, Dwarkesh. Across those venues, he has left a clear trail of language. The difficulty is resisting the temptation to turn change into drama.&lt;/p&gt;

&lt;p&gt;One temptation is to say he became pessimistic: from the 2022 romance of neural networks as another kind of natural intelligence, to the 2025 sharpness of "it's slop." That story is simple, useful, and wrong.&lt;/p&gt;

&lt;p&gt;Another temptation is to say he predicted everything: LLM OS, System 1 / System 2 LLMs, the AlphaGo step two analogy, each later seemingly confirmed by o1, RLVR, DeepSeek R1, and the rise of agentic workflows. That story is wrong too.&lt;/p&gt;

&lt;p&gt;Because he is not primarily predicting. He is doing something deeper and subtler.&lt;/p&gt;

&lt;h2&gt;III. He Is Not a Prophet. He Is a Translator.&lt;/h2&gt;

&lt;p&gt;If this entire series leaves only one judgment about Karpathy, it is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;He is not mainly an inventor. He is a translator. He is not mainly a prophet. He is a namer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An LLM is the kernel process of a new OS. The context window is RAM. Software 3.0 is programming in English. Inscrutable artifacts. Lossy zip files of internet knowledge. Cognitive interns. Ghosts versus animals. Jagged intelligence. March of nines. Vibe coding. Agentic engineering. AI psychosis.&lt;/p&gt;

&lt;p&gt;Most of these are not new concepts in isolation. OS, RAM, compression, kernel processes: these come from earlier computing culture. System 1 and System 2 belong to Daniel Kahneman's cognitive science vocabulary. The bitter lesson belongs to Richard Sutton. March of nines comes from reliability engineering.&lt;/p&gt;

&lt;p&gt;What Karpathy does is different. Again and again, he knows exactly what his audience already understands, and exactly which name it is missing. Then he grafts an older vocabulary onto the new phenomenon of LLMs with unusual precision.&lt;/p&gt;

&lt;p&gt;That is not simple. The hard part is not inventing a new concept. The hard part is finding the metaphor that lets the previous generation of programmers catch the new reality without flinching.&lt;/p&gt;

&lt;p&gt;I learned deep learning from his GitHub repositories not because he used some secret teaching magic, but because he placed a new phenomenon inside tools an earlier generation already knew: Python, Jupyter, the command line, the debugger. He made it touchable, editable, and legible.&lt;/p&gt;

&lt;p&gt;Sometimes education is exactly that: not inventing something new, but placing the new thing inside the grammar the student already speaks.&lt;/p&gt;

&lt;p&gt;That translator posture has a special value in the AI era, because this is an era in which new phenomena grow faster than our language. Every month brings new models, new abilities, new failure modes, new workflows. The industry barely has time to name them before the outside world is already behind. In such a moment, the people patient enough to give new phenomena good names are doing something more important than branding. They are saving the era the cost of inventing its language from scratch.&lt;/p&gt;

&lt;h2&gt;IV. Four Stable Cores, One World That Keeps Changing&lt;/h2&gt;

&lt;p&gt;If you only hear his 2025 line that AGI still feels a decade away, it is easy to read him as a pessimist.&lt;/p&gt;

&lt;p&gt;But stretch the timeline from 2022 to 2026 and a different picture appears. He did not suddenly flip. The world changed, and the world forced different things into view.&lt;/p&gt;

&lt;p&gt;What I want to preserve is not any single sharp sentence, but four stable cores that have barely moved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimalism, readability, and demystification of the training stack.&lt;/strong&gt; From micrograd to nanoGPT to microGPT, each generation says: do the most complete thing in the fewest lines of code. This is technical taste, but it is also a moral posture. He does not want frontier models to look like magic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The dignity of education.&lt;/strong&gt; Across his public work, this line only strengthens. Eureka Labs is not just a startup. It is the physical form of that lifelong thread.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;An allergy to hype.&lt;/strong&gt; As early as &lt;em&gt;State of GPT&lt;/em&gt; in 2023, he was saying "low-stakes + human-in-the-loop." The later "slop" and "march of nines" are the same caution spoken at higher volume.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A preference for open and pluralistic ecosystems.&lt;/strong&gt; In 2024, the image was a coral reef. In 2026, it becomes a tactical argument for building RL environments in verifiable domains the big labs have not claimed. The romance becomes strategy, but the anti-centralization undercurrent remains.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Those four things stay. What changes is reality: agents begin writing code for him, an education mission has to become a product, public speech acquires costs, and even the question of how he works is rewritten by the tools. By the end, he publicly describes himself as living in a kind of "AI psychosis." That is not theatrical. It is honest: the tools have begun to reshape the human mind.&lt;/p&gt;

&lt;h2&gt;V. The Question This Series Wants to Ask&lt;/h2&gt;

&lt;p&gt;The question is not whether Karpathy is right.&lt;/p&gt;

&lt;p&gt;The question is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When a person is forced by the era to keep rewriting their own judgment, what does it mean for them not to lose themselves?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This series follows that question chronologically: from the fall of 2022, when he had been out of Tesla for three months and sat down for Lex Fridman episode 333, to the spring of 2026, when he returned to the same Sequoia stage and said that he had never felt more behind as a programmer.&lt;/p&gt;

&lt;p&gt;I place this prologue first because I want the conclusion on the table from the beginning:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;May you recalibrate gracefully in this era.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we begin in the fall of 2022. Karpathy has just left Tesla. He sits down in Lex Fridman's studio. That conversation is the zero-point of the next three years.&lt;/p&gt;

&lt;h2&gt;Sources and Anchors&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;micrograd: &lt;a href="https://github.com/karpathy/micrograd" rel="noopener noreferrer"&gt;https://github.com/karpathy/micrograd&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;makemore: &lt;a href="https://github.com/karpathy/makemore" rel="noopener noreferrer"&gt;https://github.com/karpathy/makemore&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;nanoGPT: &lt;a href="https://github.com/karpathy/nanoGPT" rel="noopener noreferrer"&gt;https://github.com/karpathy/nanoGPT&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Neural Networks: Zero to Hero: &lt;a href="https://www.youtube.com/@AndrejKarpathy" rel="noopener noreferrer"&gt;https://www.youtube.com/@AndrejKarpathy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Lex Fridman Podcast #333: &lt;a href="https://www.youtube.com/watch?v=cdiD-9MMpb0" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=cdiD-9MMpb0&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Dwarkesh Podcast, "AGI is still a decade away": &lt;a href="https://www.dwarkesh.com/p/andrej-karpathy" rel="noopener noreferrer"&gt;https://www.dwarkesh.com/p/andrej-karpathy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sequoia AI Ascent 2026 transcript: &lt;a href="https://karpathy.bearblog.dev/sequoia-ascent-2026/" rel="noopener noreferrer"&gt;https://karpathy.bearblog.dev/sequoia-ascent-2026/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>discuss</category>
    </item>
  </channel>
</rss>
