DEV Community

Joachim Zeelmaekers

Posted on • Originally published at joachimz.me

When Code Outpaces the Systems Around It

AI has made it easy to turn ideas into code. Teams are seeing genuine speedups in planning, initial implementation, and even early review stages. But as with any engineering advancement, there are trade-offs.

The reality is that while code generation has accelerated, delivery pipelines haven't kept pace. Test suites still take thirty minutes to run, CI pipelines are flaky, and deployments feel like defusing bombs.

Consider this: systems have been built that work when engineers are putting out 1-2 changes a day, but they break down at 5-10 changes a day. CI still takes 30 minutes per branch, everything needs merging so PRs pile up, and there is at most one deploy to staging every half hour.
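The arithmetic behind that breakdown is easy to sketch. Assuming a serialized merge queue where each change occupies a 30-minute CI run (an illustrative model, not a description of any specific team's setup), the queue saturates well before 5-10 changes per engineer per day:

```python
# Back-of-envelope model of a serialized merge queue.
# Assumptions (illustrative only):
#   - each merge must pass a 30-minute CI run before the next can land
#   - an 8-hour working day

CI_MINUTES = 30
WORKDAY_MINUTES = 8 * 60

# Hard ceiling on merges per day imposed by serialized CI.
max_merges_per_day = WORKDAY_MINUTES // CI_MINUTES  # 16

def queue_wait_hours(changes_queued: int) -> float:
    """Hours the last queued change waits for the ones ahead of it."""
    return (changes_queued - 1) * CI_MINUTES / 60

# 5 engineers x 2 changes/day = 10 changes: fits, but the last one
# still waits 4.5 hours behind the queue.
print(queue_wait_hours(10))        # 4.5
# 5 engineers x 10 changes/day = 50 changes: exceeds the pipeline's
# daily capacity entirely, so the backlog grows without bound.
print(50 > max_merges_per_day)     # True
```

The point of the sketch is that the ceiling comes from the pipeline, not the engineers: no amount of faster code generation raises `max_merges_per_day`.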

This mismatch is often overlooked. Much of the process that’s been layered onto software development comes with speed trade-offs - sometimes intentional, often just organic evolution.

Now that generating code is cheap, one would expect software roadmaps to fly through delivery. But in most cases, that’s not what happens.

We expected the bottleneck to shift to trust decisions: Is this actually the right change? Does it work correctly? Can we ship it without starting a fire drill?

Yet that hasn’t been the outcome. Let’s examine why.

The Bottleneck Phase of AI Adoption

The first hard limit teams hit is review.

Code can be generated faster than it can be read. That’s where things get tricky.

Small change requests turn into 27-file diffs because AI tools touch nearby helpers, “simplify” abstractions that didn’t need changing, update unnecessary tests, and reformat code on the way out. None of it is obviously broken. Some of it might even be fine. But it still leaves reviewers staring at diffs they don’t have the time to read line by line.

Often, reviewers already have their own work open in three other tabs.

Reviews often become superficial. People skim changes and trust a green pipeline. They approve the shape of the change because tracing every consequence takes time they don't have. This isn't bad intent; it's the only way to keep up.

These review cycles and safety checks exist for a reason—they protect the production environment. The goal isn’t to scrap them, but to evolve them to match the new generation speed.

That’s one of the real costs here. The goal should not just be to ship faster; it should be to ship faster without lowering quality.

The Three Stages of AI Adoption Maturity

A predictable maturation curve emerges as AI integrates into software delivery:

Stage 1: The Code Generation Boom

Teams discover they can generate code faster than ever before for planning, implementation, and even initial review.

Stage 2: The Bottleneck Phase

AI gets bolted onto systems that were already slower and messier than desired. Thirty-minute test suites that were tolerable before become absolutely ridiculous in an AI loop. Changes stack up in a queue, and engineers wait for them to clear. Flaky CI gets even worse, because a flake no longer blocks one change but ten. Risky deploys create cautious behavior just when tooling is pushing teams to move faster.
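The flakiness compounding is worth making concrete. With an assumed 2% spurious-failure rate per CI run (a hypothetical figure, chosen only for illustration), the odds that a batch of queued changes all get through cleanly drop fast:

```python
# How a per-run flake rate compounds across a batch of queued changes.
# Assumption (illustrative): each CI run fails spuriously 2% of the
# time, independently of the code under test.

FLAKE_RATE = 0.02

def batch_pass_probability(num_changes: int) -> float:
    """Probability that every CI run in the batch avoids a flake."""
    return (1 - FLAKE_RATE) ** num_changes

print(round(batch_pass_probability(1), 3))   # 0.98  - one change
print(round(batch_pass_probability(10), 3))  # 0.817 - ten queued changes
```

A flake rate that was a minor annoyance at one or two changes a day becomes, at ten queued changes, nearly a one-in-five chance that something in the batch fails for no real reason.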

Stage 3: The Automated Trust Phase

The real wins come from investing in unsexy but critical infrastructure: fast feedback loops that don’t make you want to scream, clear boundaries so teams aren’t stepping on each other’s toes, boringly reliable deploys, documentation that doesn’t require an archaeology degree to understand, and enough review capacity to actually keep up with the increased volume.

A flashy AI setup could help move the needle, but lasting improvement comes from evolving systems to match the new pace.

Decision Latency: The Next Bottleneck in the Maturity Curve

Another bottleneck that emerges is decision latency around process and approvals.

Changes get drafted quickly but sit for days due to additional review requests, sign-offs, unclear test plans, and unclear ownership.

This is what we mean by decision latency. The code can be done in an afternoon. The waiting takes the rest of the week.

When it’s unclear who has the final say, when every change needs three sign-offs, or when acceptance criteria only get clear near the end, AI mostly helps teams finish the code and then wait.

In other words, this isn’t just an engineering structure problem; it’s a process problem that needs to evolve alongside AI adoption.

AI rewards decomposition

This is why breaking work down clearly matters.

A sloppy task used to waste one person’s time. Now it can waste one person’s time at much higher speed while producing a lot of plausible-looking output someone has to untangle later.

When AI helps most, the task looks like this:

  • break a problem into smaller steps that can actually be verified
  • draft an implementation plan before touching the code
  • execute a scoped change with clear constraints
  • review a diff for missing edge cases or weak tests

When it helps least, the request sounds like “clean this up” or “improve the architecture” or “just take a pass at this module.”

Those aren’t real tasks. They’re a good way to get a diff nobody asked for.

Ownership matters more, not less

When code becomes cheaper to produce, the people who actually matter aren’t the ones who can type the fastest. They’re the ones who can properly scope a change, verify it thoroughly, and stand behind the decision to ship it.

This isn’t nearly as exciting as bragging about 10x productivity gains, but it’s closer to what engineering teams actually need to succeed, without reducing quality.

It doesn’t matter whether the first draft flowed from an AI model, a Stack Overflow snippet, or late-night coding. What matters is who really understood the trade-offs involved, who bothered to check the edge cases, and who’s willing to put their name on it when it hits production.

That’s why ownership becomes more critical, not less, when AI handles the initial draft. The machine can spit out options all day long, but it can’t be paged (yet) when something breaks in the middle of the night. It can’t explain why it made a particular trade-off during Friday’s incident review. It can’t look a stakeholder in the eye and say, “I’ve got this.”

What actually helps

Look, the answer isn’t to artificially slow things down for the sake of purity. The answer is fixing the system around the code.

Here’s what that looks like in practice:

First, break work down properly. Smaller tasks aren’t just easier to manage - they’re easier to generate with AI, easier to verify, easier to review, and way easier to toss out when the model starts hallucinating or going off the rails.

Second, make those feedback loops cheaper. When pumping out more code changes, tests need to be fast, CI needs to be reliable, and deploys need to be boringly predictable. If any of those are slow or flaky, they become instant bottlenecks that negate any speed gains from AI.

Third, get crystal clear on decisions. If every meaningful change requires a meeting and three layers of approval, you’re still optimized for a world where code was hard to write. AI just highlights how broken that process is to begin with.

Fourth, treat review like the bottleneck it is. If review capacity can’t keep up with the increased volume from AI assistance, you’re just creating a bigger backlog of unreviewed changes. That’s not progress; that’s creating future problems for yourself. Make thorough reviewing a priority.

Fifth, make ownership obvious. Someone on the team should be able to look at a change and say without hesitation: “I understand this, I’ve verified it, and I’m responsible for it shipping successfully.”

And finally, when the AI tool isn’t actually helping? Stop prompting it. Seriously. Close the chat window, go back to the original problem statement, and rewrite it in simpler terms. Sometimes the best way to use AI is to not use it at all until you’ve clarified what you actually want, because if you don’t know what you want, how do you expect Claude to know what you want?

The upside of AI in software development is absolutely real. It’s a double-edged sword that can be extremely beneficial when the correct approach is taken, but it can produce significant amounts of low-quality output when the wrong approach is used.
