Sergei Peleskov

AI Didn't Make Coding Harder. It Moved the Bottleneck from Writing to Reviewing

If you're a senior engineer running multiple AI agents and feeling wrecked at the end of every day — you're not slow, and you're not falling behind. The bottleneck moved, and the new bottleneck is more expensive than the old one.

Writing used to be the expensive part

When you wrote code yourself, you built a mental model of the problem as you went. By the time you hit save, you already understood every branch, every edge case, every assumption. Review was basically free — you were reviewing your own thinking.

Now the order is flipped. The agent writes. You review. And reconstructing someone else's reasoning from a cold start, every time, all day, is more cognitively expensive than writing the code yourself.

Running 3–4 agents makes it worse

Most seniors aren't running one agent. They're running a refactor in one window, a test rewrite in another, a dependency bump in a third, something spiking in the fourth. Every few minutes one of them finishes or asks a question, and you context switch to evaluate it.

That interrupt pattern is the actual killer. Not the writing, not the reviewing. The switch.

Juniors don't feel this as much because they tend to trust the output. A senior who's owned the codebase for three years can't afford to — they know a one-line dependency bump can take production down just as surely as a 100-line feature. Every diff is a decision.

The data

Pragmatic Engineer AI tooling survey, 2026: 95% of professional engineers use AI tools weekly. 55% regularly use agents (not chatbots). Among senior and principal engineers, that climbs to 63.5%. Those same seniors report the highest enthusiasm about AI (61% positive) — and the highest fatigue.

METR, 2025: Experienced open-source developers using AI tools on their own repositories predicted a 24% speedup. After the study, they still felt about 20% faster. Measured reality: 19% slower. A ~40 percentage point gap between perception and result.

Time saved writing code was less than time lost prompting, waiting, reviewing, correcting, and re-prompting. Validation work doesn't feel like work. It feels like thinking. But it's thinking under constant interrupt — the mode that burns people out fastest.

What actually helps

1. One agent at a time. Multi-agent orchestration is the whole pitch, but in practice it's a recipe for interrupt-driven fatigue. Pick the single highest-value task. Run one agent on it. Sit with it until it's done, reviewed, and merged. Then start the next one. Raw output may drop a little. Cognitive load drops much more.

2. Constrain scope before prompting. The wider the task you hand an agent, the bigger the review surface you create for yourself. Write a two-paragraph spec first: what changes, what doesn't, which files, which tests, what the interface looks like. A small, well-specified task takes 2 minutes to verify. A sprawling vibe-prompted one takes 40.

3. Protect one no-AI block per day. Two to three hours, manual, no agents. It keeps the muscle of sitting inside a problem alive, and it doubles as a recovery window — single-threaded work without interrupts is how the prefrontal cortex resets.

The pattern

The developers staying sharp through this shift aren't the ones running the most agents. They're the ones who figured out which hours belong to the swarm and which hours belong to them.

Not fewer tools. Better boundaries.

