The Cognitive Debt No One Talked About Until Coding Agents Arrived
Simon Willison went on Lenny's Podcast and said something that stuck: coding agents create cognitive debt.
Not technical debt. Not maintenance debt. Something worse.
When you write code yourself, you understand it. When an agent writes it for you, you don't. That gap is cognitive debt—and it compounds.
What Is Cognitive Debt
Technical debt is when you ship code you know isn't quite right, planning to fix it later. It's a tradeoff: speed now, cost later.
Cognitive debt is different.
It's not about the code being wrong. It's about you not understanding the code that's right.
When an agent generates a solution:
- The code works
- The tests pass
- The feature ships
- But you can't explain why it works
You approved it. You merged it. You own it. But you didn't write it, and you don't fully grasp it.
That's cognitive debt.
Why This Is Different From Copy-Pasting Stack Overflow
Every developer has pasted code they didn't fully understand. This isn't new.
But coding agents scale the problem dramatically.
Stack Overflow pattern:
- You find a snippet
- You adapt it to your context
- You learn something in the process
- The scope is limited (a function, a pattern)
Coding agent pattern:
- You describe what you want
- The agent builds the entire solution
- You review the output, not the process
- The scope can be entire modules, services, systems
The cognitive load shifts from writing to reviewing. And reviewing code you didn't write is fundamentally different from writing it.
The Review Trap
Code review was designed for code written by humans.
When a human writes code, the review process assumes:
- The author made deliberate choices
- The author can explain their reasoning
- The author learned something while writing
- The author will remember the context
When an agent writes code:
- The author made probabilistic choices
- The author can't explain its reasoning (it isn't designed to)
- No learning occurred during generation
- The author doesn't remember anything (it's stateless)
You're reviewing code from an author who can't answer questions.
The reviewer becomes the only person who can explain the code. But they didn't write it either. They just approved it.
The Compounding Problem
Cognitive debt compounds like interest.
Week 1: Agent writes a feature. You review it. You understand 80% of it. Ship.
Week 2: Agent extends the feature. You review it. You understand 70% (some of Week 1's context is fading). Ship.
Week 3: Agent fixes a bug. You review it. You understand 60%. Ship.
Month 6: Something breaks. You open the code. Whose code is this?
Not literally—you know the agent wrote it. But cognitively? You have no mental model of how this system works. You're debugging code you never truly understood.
The agent that wrote it is gone. The context that generated it is lost. You're left with working code that no one in your organization can explain.
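The compounding above can be put into a toy model. The numbers here are illustrative assumptions, not measurements: suppose each agent-written change you review leaves you with some fixed fraction of your previous grasp of the system. Comprehension then decays geometrically, not linearly.

```python
# Toy model of compounding cognitive debt (illustrative numbers only).
# Assumption: every agent-written change you review, you retain a
# fixed fraction of your previous understanding of the system.

def understanding_after(changes, retention_per_change=0.85):
    """Fraction of the system you can still explain after n agent changes."""
    return retention_per_change ** changes

for n in (1, 2, 3, 26):
    print(f"after {n:2d} changes: {understanding_after(n):.0%}")
```

At 85% retention per change, you're at roughly 61% after three changes and in the low single digits after six months of weekly changes. The exact rate is a guess; the shape of the curve is the point.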
The Onboarding Gap
New engineers joining a team used to learn by:
- Reading the codebase
- Asking the authors questions
- Making small changes
- Building mental models through iteration
When agents write most of the code:
- They read code with no human author
- There's no one to ask about design decisions
- They can't learn the reasoning—it wasn't reasoned, it was generated
- They build mental models on probabilistic foundations
The institutional knowledge isn't in people's heads anymore. It's embedded in weights and tokens.
What Actually Works
This doesn't mean coding agents are bad. It means how you use them matters.
Good patterns:
Pair programming mode — Agent suggests, you type. You build the mental model through muscle memory.
Learning mode — Ask the agent to explain what it's doing. Read the explanation. Don't just accept the code.
Small scopes — Agents for functions, not modules. Keep the cognitive load manageable.
Rewrite to understand — After the agent generates, rewrite key sections yourself. It's slower, but you'll actually know the code.
Document the why — When an agent generates something clever, write a comment explaining why it works. Force yourself to understand.
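As a sketch of the "document the why" pattern (the function and its context here are hypothetical), the comment records the reasoning you reconstructed during review, because the agent won't be around to explain it later:

```python
# Hypothetical example of the "document the why" pattern. The agent
# generated the one-liner below; the WHY comment captures the reasoning
# we worked out during review.

def dedupe_keep_order(items):
    # WHY: dict preserves insertion order (guaranteed since Python 3.7),
    # so keying a dict by the items and reading the keys back removes
    # duplicates while keeping first-seen order, in O(n) time.
    return list(dict.fromkeys(items))

print(dedupe_keep_order([3, 1, 3, 2, 1]))  # → [3, 1, 2]
```

Writing that comment forces you to understand the trick well enough to explain it, which is the whole point.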
Bad patterns:
- Accept → Ship without reading carefully
- Generate entire features without building mental models
- Treat code review as approval rather than understanding
- Assume you'll figure it out later when it breaks
The Strategic Question
Organizations adopting coding agents need to ask: Who understands this code?
If the answer is "the agent that wrote it," you're building on sand.
The agent doesn't work for you anymore. It's a black box that generated something that works. When it breaks, when it needs changes, when a regulator asks you to explain it, the agent can't help you.
You own the output. You'd better understand it.
The Takeaway
Coding agents are force multipliers for output. They're also force multipliers for cognitive debt.
Every line of agent-generated code you ship without understanding is a line you'll have to debug without context.
The developers who thrive with agents won't be the ones who ship fastest. They'll be the ones who maintain understanding while shipping fast.
Speed without comprehension is just faster accumulation of debt.
The agent wrote the code. But you're the one who has to fix it at 3 AM when production is down. Act accordingly.