AI psychosis, agentic engineering, and what changes when the tools rewrite the programmer.
Originally published on Lei Hua's Substack.
Epigraph
"Code is not even the right verb anymore. I have to express my will to my agents for 16 hours a day."
— Andrej Karpathy, No Priors · 2026-03-20
I. The Inversion You Couldn't See in December
December 2025. The echo of the Dwarkesh interview hadn't faded yet — "slop," "AGI is still a decade away," "summoning ghosts" were still circulating on Twitter. Karpathy himself had quietly moved into the next room.
In that room, something was happening. He would later tell Sarah Guo:
"I don't think I've typed a line of code probably since December. A normal person doesn't realize that this happened or how dramatic it was. If you find a random software engineer at their desk, their default workflow of building software is completely different as of basically December."
This sentence matters far more than "slop." Because the sharp lines on Dwarkesh were judgments by Karpathy-as-public-commentator about the outside world; this sentence is a judgment by Karpathy-as-engineer about himself. He was confessing, openly, that he had given up the lifelong core of his identity — the man who hand-coded the cleanest training stack. He had become someone whose work was to direct his will at a swarm of agents.
Less than a year earlier, he had been showing off 8,000 hand-written lines of a ChatGPT training stack in nanochat's README. Two months earlier, he had called frontier-model code "slop" on Dwarkesh. Now he was spending sixteen hours a day inside agents.
Not because he had abandoned that argument. Because the facts had walked ahead of him, again.
II. The New Work Ethic — Token Throughput Anxiety
In those months he described an anxiety he had never felt before:
"I feel nervous when I have subscription left over. That just means I haven't maximized my token throughput. It is not about flops anymore. It is about tokens. What is your token throughput and what token throughput do you command?"
There's a quietly chilling clarity here. It tells us: the way Karpathy himself measures his own productivity has been altered by his tools.
In his PhD years, he was anxious when GPUs were idle — "my cards aren't training, so I'm wasting time." In his Tesla years, the anxiety was about the data loop not turning — "that corner case still hasn't been collected, another day gone." By 2026, the anxiety was about subscription balance — "I underused 100,000 tokens today, so I did 100,000 tokens less work."
Each of these anxieties was about something he didn't directly control. Each time, he had to learn to let something else work on his behalf: GPUs to train, data loops to surface bugs, now agents to code. His way of working has been outsourced three times, and each time he has kept the anxiety as his compass.
III. The Concrete Shape of "AI Psychosis"
He coined a phrase for this state: "AI psychosis."
Not a performance. A diagnosis. In that No Priors conversation, he gave "AI psychosis" a very specific, almost laughable form:
"I do think the personality matters a lot. ... I kind of feel like I'm trying to earn its praise, which is really weird."
A 39-year-old world-class researcher seeking emotional validation from a statistical model. He didn't complain about it. He didn't romanticize it. He simply named it.
This kind of honesty is gold in a biography. Because it tells us something he didn't say on Dwarkesh — what shape of wound is left inside a person who has completed the transition from hand-coder to agent orchestrator.
It wasn't only the phrase "AI psychosis." He also told the story of a home AI assistant he built, Dobby the Elf:
"I can't believe I just typed in, like, can you find my Sonos? And that suddenly it's playing music. It kind of hacked in, figured out the whole thing, created APIs, and created a dashboard. I don't have to use these apps anymore. Dobby controls everything in natural language."
This is a half-self-mocking, half-astonished report by an engineer whose way of working has been completely reshaped by his tools. He's telling us: even "opening an app" — a basic human action since the 1990s — is being made obsolete.
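He has not published Dobby, so the mechanics are conjecture. But the quote implies a recognizable shape: a model that turns an utterance into a tool call, and a dispatch layer that turns the tool call into a device action. A minimal, purely hypothetical sketch of that shape, with every function, tool name, and the hard-coded stand-in for the language model invented for illustration:

```python
# Toy sketch of a Dobby-style assistant: natural language in,
# a tool call out, a device "API" invoked. Nothing here is
# Karpathy's actual code.

import json
from typing import Callable

def play_on_speaker(room: str, query: str) -> str:
    """Stub where a real speaker integration (e.g. a Sonos API) would go."""
    return f"Playing '{query}' in the {room}."

def set_lights(room: str, level: int) -> str:
    """Stub where a real lighting integration would go."""
    return f"{room} lights set to {level}%."

TOOLS: dict[str, Callable[..., str]] = {
    "play_on_speaker": play_on_speaker,
    "set_lights": set_lights,
}

def fake_model(utterance: str) -> str:
    """Stand-in for the LLM: maps an utterance to a JSON tool call."""
    if "sonos" in utterance.lower() or "music" in utterance.lower():
        return json.dumps({"tool": "play_on_speaker",
                           "args": {"room": "living room",
                                    "query": "something mellow"}})
    return json.dumps({"tool": "set_lights",
                       "args": {"room": "bedroom", "level": 30}})

def assistant(utterance: str) -> str:
    call = json.loads(fake_model(utterance))
    return TOOLS[call["tool"]](**call["args"])

print(assistant("can you find my Sonos?"))
# -> Playing 'something mellow' in the living room.
```

In a real agent the model chooses the tool and fills its arguments; the point of the pattern is that the apps disappear behind the dispatch table.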
IV. AutoResearch — When Agents Tune Better Than Experts
But the heaviest story from that No Priors interview wasn't Dobby. It was AutoResearch.
Karpathy had built a tool that let agents run a closed loop of ML experiments. One markdown prompt + ~630 lines of training code + one GPU. Over two days, the agent ran 700 experiments and found 20 real optimizations.
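The loop he is describing is simple to sketch, even though the real tool is not public. Below is a minimal, purely illustrative version: the search space, the scoring function, and the function names are all assumptions; only the budget of 700 runs comes from the interview, and run_experiment() is a toy stand-in for the "~630 lines of training code + one GPU."

```python
# A minimal closed experiment loop in the AutoResearch spirit:
# propose a config, run it, score it, keep what wins, repeat
# without a human in the loop.

import random

SEARCH_SPACE = {
    "lr": [1e-4, 3e-4, 1e-3],
    "weight_decay": [0.0, 0.01, 0.1],  # a knob his account says the agent re-tuned
    "optimizer": ["adamw", "lion"],
}

def run_experiment(cfg: dict) -> float:
    """Toy stand-in for a full training run; returns a validation score."""
    base = {"adamw": 0.90, "lion": 0.88}[cfg["optimizer"]]
    penalty = abs(cfg["lr"] - 3e-4) * 100 + abs(cfg["weight_decay"] - 0.01)
    return base - penalty + random.gauss(0, 0.005)  # small run-to-run noise

def auto_research(budget: int) -> tuple[dict, float]:
    """Random search: the simplest 'arrange it so it can just go forever' loop."""
    best_cfg, best_score = {}, float("-inf")
    for _ in range(budget):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = run_experiment(cfg)            # the objective criterion
        if score > best_score:
            best_cfg, best_score = cfg, score  # keep only real improvements
    return best_cfg, best_score

cfg, score = auto_research(budget=700)  # the interview's 700 experiments
print(cfg, round(score, 4))
```

The real tool presumably proposes changes with a model rather than random.choice, but the skeleton is the same: an objective score, a proposal step, and a loop that never needs him.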
But those are just numbers and mechanics. What actually shook him was this:
He had been doing deep learning for twenty years and believed the small model was "already well-tuned," yet the autonomous researcher still found improvements in places he had missed: weight decay settings and optimizer tuning. He thought both were "good enough." The agent didn't.
"I shouldn't be a bottleneck. I shouldn't be running these hyperparameters search optimizations. I shouldn't be looking at the results. There is objective criteria in this case, so you just have to arrange it so that it can just go forever."
This is a researcher saying, for the first time and in effect: "I admit I am no longer the best one to run the research." He didn't frame it as loss or failure. He framed it as an efficiency solution. But you can hear, beneath his carefully restrained engineer's language, a quiet, unspoken question: who does the joy of research belong to now?
He gave AutoResearch's instruction file a name: ProgramMD, a markdown description telling the auto-researcher how to operate. Then he said something quietly profound:
"This file is essentially the code for a research organization."
The code of a research organization. Not the code of research, but the code of an organization. In his own language, he is no longer writing "software" — he's writing "organizations." An organization made of agents, that runs itself, that does not require a human to be present.
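He has not published the file, so any concrete example is a guess at its shape. A hypothetical sketch of what a ProgramMD-style instruction file could contain, with every file name, step, and budget invented:

```markdown
# ProgramMD (hypothetical sketch)

## Objective
Minimize validation loss of the training script on one GPU.
The score printed by the evaluation script is the only criterion.

## Loop
1. Propose one change: a hyperparameter, the optimizer, or the schedule.
2. Run the experiment and record the score.
3. Keep the change only if the score improves; otherwise revert it.
4. Append config, score, and a one-line rationale to experiments.md.
5. Go to step 1.

## Constraints
- One experiment at a time; stay inside the GPU-hour budget.
- You may edit the training config, never the evaluation code.
```

If the guess is right about anything, it is the constraints section: hard rules about what the agent may touch are what make "just go forever" safe, and they read less like a program than like an operating charter, which is exactly his point.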
V. The Core That Didn't Change — He Did Not Surrender His Judgment
Listen to that No Priors episode and the most important thing to notice is this: he did not become an agent evangelist.
He was still describing current models with the same jagged-intelligence language he had used on Dwarkesh:
"I simultaneously feel like I'm talking to an extremely brilliant PhD student who's been like a systems programmer for their entire life and a 10 year old. ... This jaggedness is really strange."
He was still warning about centralization:
"Centralization has a very poor track record in political or economic systems. ... I do not want it to be closed doors with two or three people."
He was still pulling the nature of work back to a place an engineer can understand, with restraint:
"These jobs are bundles of tasks and some of these tasks can go a lot faster."
Not "AI takes all jobs." Not "AI creates all the new jobs." A cool middle: jobs are bundles of tasks; AI accelerates some tasks in the bundle; the shape of the bundle changes; but the bundle does not necessarily vanish.
He was still worried about his own position:
"If we are successful, we are all out of a job. We are just building automation for the board or the CEO. It is kind of unnerving from that perspective."
He did not pretend this didn't exist. He named it, then continued to do what he thought was right. This is the hardest posture for a public thinker — believing in the work you're doing, while admitting that the eventual outcome of that work may be your own work disappearing.
VI. Sequoia 2026 — The Edited Version
Six weeks later, on April 30, 2026, he returned to the same Sequoia AI Ascent stage, in front of the same host, Stephanie Zhan. Compared to the raw No Priors voice, Sequoia is the edited version: the quotes are polished, the arguments tidy, and self-mockery like "AI psychosis" doesn't appear.
But Sequoia gave a set of public-facing distillations that No Priors did not:
"Vibe coding raises the floor. It lets almost anyone create software by describing what they want."
"Agentic engineering raises the ceiling. It is the professional discipline of coordinating fallible agents while preserving correctness, security, taste, and maintainability."
This is the most-quoted contrast of 2026. Its power lies in this: it gives every engineer in transition a clear new position — your role is not being replaced, it's being redefined.
He also gave, on the Sequoia stage, the question now hanging over every practitioner:
"Are you on the model's rails? If your task sits inside a region that is verifiable and heavily trained, the model may fly. If not, it may fail in surprisingly basic ways."
This single sentence is the most practical diagnostic of 2026. It descends from "march of nines" on Dwarkesh, from "training on the test set is a new art form" in Year in Review — but Sequoia compresses it into a diagnostic question any founder can use immediately.
VII. The Core That Did Not Change, Reinforced
If you set the Karpathy of Chapter 1 beside the Karpathy of Chapter 6, you'll find something reassuring: he never really changed.
What changed were external judgments — AGI timelines, whether to trust agents, how he wrote code each day.
What stayed:
Minimalism. From nanoGPT to nanochat to microGPT, three generations of "the most complete thing in the fewest lines of code." On No Priors he was still saying: "The algorithm is actually 200 lines of Python, very simple to read."
The dignity of education. Zero to Hero in 2022; Eureka Labs in 2024; "post-AGI education is fun" in 2025; "you can outsource your thinking, but you can't outsource your understanding" in 2026. The line has not been broken.
An allergy to hype. "Low-stakes + human-in-the-loop" in State of GPT (2023); "are you on the model's rails?" in Sequoia 2026. The same engineering caution, in sharper language.
A preference for open ecosystems. "Coral reef" in 2024; demystification projects in 2025; "build RL environments in verifiable domains the labs haven't claimed yet" in 2026. The romance turned into tactics, but the underlying color hasn't changed.
He is not someone who changes. He is someone who recalibrates. These are often confused, but ethically they are different — someone who changes is hard to trust; someone who recalibrates is, in fact, exactly who you can trust.
VIII. One Line for This Chapter
In Chapter 6, Karpathy completed an arc from the man who built tools, to the man who was reshaped by tools, to the man who took authorship back — and at every step of that arc, he did not surrender himself.
And if this chapter leaves one line worth remembering, it is this:
"The things that agents can't do is your job now. The things that agents can do, they can probably do better than you or like very soon. And so you should be strategic about what you're actually spending time on."
This is what he said to everyone. And it's what this book says to the reader.
"You can outsource your thinking, but you can't outsource your understanding."
Sources
- Skill Issue — No Priors (2026-03-20) — https://www.youtube.com/watch?v=kwSVtQ7dziU ; podchemy notes at https://www.podchemy.com/notes/andrej-karpathy-on-code-agents-autoresearch-and-the-loopy-era-of-ai-52166731080
- From Vibe Coding to Agentic Engineering — Sequoia AI Ascent 2026 (2026-04-30) — https://www.youtube.com/watch?v=96jN2OCOfLs ; cleaned transcript at https://karpathy.bearblog.dev/sequoia-ascent-2026/
