Andrej Karpathy coined the term "vibe coding" in February 2025. One year later, a wave of developers — indie hackers, SaaS builders, long-time contributors — are publicly burning out. Not from too little output. From too much of it, with no way to stop.
This post is about what actually changed under the hood, backed by data.
The productivity numbers are real. So is the problem.
The 2025 DORA Report surveyed nearly 5,000 developers and found that 90% now use AI tools at work — up 14 percentage points from 2024. Over 80% say AI has improved their productivity.
Stack Overflow's 2025 Developer Survey (49,000+ respondents) confirms the trend: AI adoption keeps climbing, with 84% using or planning to use AI tools, up from 76% in 2024. But in the same survey, positive sentiment toward AI tools dropped from 72% in 2024 to just 60% in 2025.
More usage. Less enthusiasm. That gap is worth examining.
## The three stop signals that disappeared
Before vibe coding, a typical dev day had a natural end:
| Signal | What it used to do |
|---|---|
| Cognitive limit hit | Forced you to stop — you literally couldn't think anymore |
| Effort/result ratio | Satisfied the "I earned this" feeling |
| Physical tiredness | Made closing the laptop easy |
With agentic workflows, you run five tasks in parallel. You never hit a cognitive wall because you're orchestrating, not grinding. The 2025 DORA Report identified this directly: developers now spend a median of two hours per day actively working with AI — and managing more concurrent workstreams than ever.
The stop signals didn't just weaken. They were removed entirely.
## Burnout didn't go away — it just changed shape
Google's DORA research found no meaningful link between AI adoption and burnout — in either direction. The 2025 report shows burnout and friction metrics remain tied to organizational factors that developer tooling alone does not change.
The Thoughtworks analysis of the same data identified a new pattern they call AI engineering waste: prompt-response latency, validation overhead, and rework from almost-correct AI output. That waste erodes efficiency and contributes directly to burnout.
Stack Overflow's survey found the top developer frustration in 2025 is AI solutions that are "almost right, but not quite" — cited by 66% of respondents. Debugging AI-generated code was the second-biggest frustration at 45%.
You're shipping more. You're also cleaning up more. And you're doing both without the natural stopping points that used to tell you when you were done.
## What to actually do about it
The DORA report describes what it calls the AI Capabilities Model: seven organizational practices that determine whether AI helps or amplifies dysfunction. One of the most critical is working in small batches, which reduces friction and supports safer iteration in AI-assisted environments.
At the individual level, the practical equivalent is:
- Define done before you start, not after. Without a pre-committed stopping condition, AI workflows run indefinitely.
- Treat rest as a capability, not a reward. The old model (rest when exhausted) is incompatible with tools that remove exhaustion.
- Measure value delivered, not code shipped. The DORA report is clear: lines generated is a misleading metric in an AI-assisted environment.
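The first of those bullets — defining done before you start — can be made concrete in code. Here is a minimal sketch of an agentic loop whose limits are fixed before the first iteration; the function and predicate names (`run_agent_step`, `is_done`) are hypothetical, not any real agent framework's API:

```python
import time

def run_with_stop_conditions(task, run_agent_step, is_done,
                             max_iterations=10, time_budget_s=3600):
    """Run an agent loop, but only inside limits committed to up front.

    run_agent_step and is_done are placeholders for whatever your
    workflow actually does; the point is that max_iterations and
    time_budget_s are decided before the loop starts, not during it.
    """
    deadline = time.monotonic() + time_budget_s
    result = None
    for _ in range(max_iterations):          # hard cap on iterations
        if time.monotonic() > deadline:      # hard cap on wall-clock time
            break
        result = run_agent_step(task, result)
        if is_done(result):                  # "done" was defined in advance
            return result, "done"
    return result, "stopped"                 # budget exhausted: stop anyway
```

The design choice is the point: both exits — success and budget exhaustion — are explicit, so the loop cannot run indefinitely the way an open-ended "keep iterating until it feels finished" workflow can.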
## The bottom line
Vibe coding didn't create burnout. It removed the circuit breakers that used to prevent it. The research is consistent: AI amplifies existing patterns — good and bad. If your workflow had no stopping conditions built in, it still doesn't. The tool just made that invisible for a while.