Nijat for Code Board

Code Churn Doubled While We Were Celebrating AI Speed Gains

The number that should worry you

AI now generates roughly 41% of all code in professional workflows. Code churn, the share of lines reverted or substantially rewritten within two weeks of being merged, has more than doubled, from 3.3% to 7.1%, according to GitClear's analysis of over 211 million lines of code.

Meanwhile, Google's 2024 DORA report found that delivery stability decreased 7.2% year over year. More code ships. More of it breaks.

These aren't contradictory trends. They're the same trend.

We optimized the wrong bottleneck

Writing code was never the bottleneck in professional software development. Understanding it, reviewing it, and making good decisions about whether it should ship — that's where time actually goes.

AI made the fast part faster. But DORA metrics alone can't tell you whether throughput gains are real or just inflated volume. As multiple 2026 analyses have pointed out, counts like PRs merged and deployment frequency rise with AI output without necessarily indicating more value delivered.

High-performing teams review PRs within 4 hours. When AI-assisted workflows double or triple PR volume, maintaining that review cadence becomes structurally impossible unless something changes about how you triage, prioritize, and process code reviews.
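
You can see how fast the math breaks with a back-of-envelope sketch. Every figure in it (PR volume, minutes per review, reviewer count, daily review budget) is an assumed placeholder, not data from any report; swap in your own team's numbers.

```python
# Back-of-envelope reviewer capacity check. All numbers are assumptions;
# plug in your own team's figures.
PRS_PER_DAY = 20              # assumed baseline PR volume
REVIEW_MINUTES_PER_PR = 30    # assumed average review time
REVIEWERS = 4                 # assumed reviewers on rotation
REVIEW_MINUTES_PER_DAY = 120  # assumed daily review budget per reviewer

capacity_hours = REVIEWERS * REVIEW_MINUTES_PER_DAY / 60
for multiplier in (1, 2, 3):  # baseline, doubled, tripled PR volume
    demand_hours = PRS_PER_DAY * multiplier * REVIEW_MINUTES_PER_PR / 60
    print(f"{multiplier}x volume: {demand_hours:.0f}h of review needed, "
          f"{capacity_hours:.0f}h available")
```

With these assumed numbers the team is already slightly over capacity at baseline; at 3x volume, demand is nearly four times what reviewers can give.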

The review bottleneck is measurable

Research from the MSR 2026 conference on agent-authored PRs found a stark pattern: 28.3% of AI-generated PRs merge almost instantly (low-friction automation), but once a PR enters the iterative review loop, it often demands disproportionate reviewer attention. Simply gating the riskiest 20% of PRs can capture 69% of total review effort.
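
That ratio is easy to check against your own history. Here's a minimal sketch with made-up sample data standing in for real PR records; in practice you'd pull risk scores and a review-effort proxy (review rounds, comments, or wall-clock minutes) from your own tooling.

```python
# Sketch: how much review effort do the riskiest 20% of PRs absorb?
# `prs` is hypothetical sample data, not real measurements.
prs = [
    {"id": 1, "risk": 0.9, "review_minutes": 180},
    {"id": 2, "risk": 0.7, "review_minutes": 95},
    {"id": 3, "risk": 0.4, "review_minutes": 20},
    {"id": 4, "risk": 0.2, "review_minutes": 10},
    {"id": 5, "risk": 0.1, "review_minutes": 5},
]

prs.sort(key=lambda pr: pr["risk"], reverse=True)  # riskiest first
cutoff = max(1, round(len(prs) * 0.2))             # top 20% by risk
top_effort = sum(pr["review_minutes"] for pr in prs[:cutoff])
total_effort = sum(pr["review_minutes"] for pr in prs)
print(f"Riskiest 20% of PRs account for {top_effort / total_effort:.0%} "
      f"of review effort")
```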

That's an actionable insight. But most teams can't act on it because they don't have visibility into PR risk across repositories. They're still switching between tabs, manually checking CI status, and guessing which PRs need attention first.

What actually helps

The answer isn't slowing down AI adoption. It's building better signal around what ships.

  • Track code churn alongside velocity. If both are rising, your net productivity gain is smaller than it looks.
  • Measure PR pickup time. The gap between opening a PR and its first review is often your biggest hidden bottleneck; the first sketch after this list shows one way to measure it.
  • Triage by risk, not by arrival order. Not every PR deserves the same review depth. Automated risk scoring based on diff size, sensitive files, and CI status helps reviewers focus where it matters; the second sketch below is one minimal version.
  • Get cross-repo visibility. If your team works across 10+ repositories, per-repo dashboards fragment your ability to see the full picture.
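
For pickup time, here is a minimal sketch against the GitHub REST API. OWNER, REPO, and the GITHUB_TOKEN environment variable are placeholders you'd supply; pagination and PRs that never received a review are deliberately glossed over.

```python
# Sketch: measure PR pickup time (open -> first review) via the GitHub REST API.
# OWNER, REPO, and the GITHUB_TOKEN env var are placeholders.
import os
from datetime import datetime

import requests

OWNER, REPO = "your-org", "your-repo"  # hypothetical values
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def ts(stamp: str) -> datetime:
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ")

prs = requests.get(f"{BASE}/pulls",
                   params={"state": "closed", "per_page": 30},
                   headers=HEADERS).json()
for pr in prs:
    reviews = requests.get(f"{BASE}/pulls/{pr['number']}/reviews",
                           headers=HEADERS).json()
    if reviews:  # reviews come back oldest first
        pickup = ts(reviews[0]["submitted_at"]) - ts(pr["created_at"])
        print(f"#{pr['number']} picked up after "
              f"{pickup.total_seconds() / 3600:.1f}h")
```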

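And for risk triage, a deliberately simple scoring function. The weights, the 500-line cap, and the SENSITIVE_PATHS list are assumptions chosen to illustrate the shape of the idea, not Code Board's actual model; tune them to your codebase.

```python
# Minimal PR risk-scoring sketch. Weights, caps, and paths are assumptions.
SENSITIVE_PATHS = ("auth/", "billing/", "migrations/", "infra/")

def risk_score(lines_changed: int, files: list[str], ci_passed: bool) -> float:
    """Score a PR from 0.0 (low risk) to 1.0 (high risk)."""
    score = min(lines_changed / 500, 1.0) * 0.4          # diff size, capped
    if any(f.startswith(SENSITIVE_PATHS) for f in files):
        score += 0.4                                     # touches sensitive code
    if not ci_passed:
        score += 0.2                                     # failing or missing CI
    return round(score, 2)

print(risk_score(620, ["auth/token.py", "README.md"], ci_passed=True))  # 0.8
print(risk_score(40, ["docs/intro.md"], ci_passed=True))                # 0.03
```

Even something this crude gives reviewers an ordering, which is the point: deep review for the top of the queue, a lighter pass for the rest.
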
This is the exact problem Code Board was built to address: a unified view of every PR across every repo, with risk scores and CI intelligence that help teams prioritize reviews instead of drowning in volume.

The teams that win the AI era won't be the fastest at generating code. They'll be the ones who can still tell the difference between output and progress.
