The Bottleneck Moved
For twenty years, the slowest step in the software delivery pipeline was writing code. Review, testing, and deployment could keep pace because there simply wasn't that much new code to process.
That era is over.
LinearB's 2026 Software Engineering Benchmarks Report — analyzing 8.1 million pull requests from 4,800 engineering teams across 42 countries — reveals a paradox that every engineering leader should internalize: teams using AI coding tools generate 25-35% more code, but PR review times have increased by 91%.
Developers feel 20% faster. They're actually 19% slower in terms of end-to-end delivery. That's a 39-point perception gap between feeling productive and being productive.
More Code, Same Review Capacity
The math is brutal. If a team goes from 100 PRs per week to 200 but still has the same number of reviewers, every PR waits longer. While it waits, the codebase changes. By the time feedback arrives, the author has moved on to something else. They rebase, re-test, and rework logic that was already correct.
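The dynamic is easy to see in a toy model. This is a minimal sketch with assumed numbers (100 vs. 200 PRs per week, fixed review capacity), not figures from the report: when arrivals exceed review throughput, the backlog doesn't just grow, it grows every single week.

```python
# Toy model: PRs arrive each week, reviewers clear a fixed number.
# Numbers are illustrative assumptions, not data from the report.

def weekly_backlog(arrival_rate, review_rate, weeks):
    """Return how many PRs are still waiting at the end of each week."""
    backlog = 0
    history = []
    for _ in range(weeks):
        backlog += arrival_rate               # new PRs opened this week
        backlog -= min(backlog, review_rate)  # reviewers clear what they can
        history.append(backlog)
    return history

before = weekly_backlog(arrival_rate=100, review_rate=100, weeks=8)
after = weekly_backlog(arrival_rate=200, review_rate=100, weeks=8)

print(before)  # reviewers keep up: backlog stays at zero
print(after)   # backlog grows by 100 unreviewed PRs every week
```

With matched rates the queue stays empty; double the arrivals and the backlog climbs linearly without bound. Every PR in that growing queue is aging against a moving target branch.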
This isn't theoretical. It's showing up in DORA metrics everywhere. Lead time for changes is stalling or increasing even as coding velocity climbs. The "more code, fewer releases" pattern is becoming the engineering leadership blind spot of 2026.
The Real Cost of Stale PRs
Every stale pull request carries compounding costs. Merge conflicts multiply as the target branch keeps moving. Context fades for both the author and the reviewer. Developers juggling two or three open PRs simultaneously are holding too much context, and their review quality degrades as a result.
Engineers already abandon roughly 8% of the PRs they create. As volume increases without a corresponding increase in review throughput, that number will climb.
What Actually Helps
The answer isn't just throwing AI at reviews, though AI-assisted triage and risk scoring can help prioritize what needs human attention. The real fixes are structural:
- Visibility across repos. If PRs are scattered across dozens of repositories with no unified view, stale PRs go unnoticed. Tools like Code Board exist specifically to aggregate PRs from GitHub and GitLab into a single board so nothing falls through the cracks.
- Risk-based prioritization. Not every PR needs the same depth of review. Automatically scoring PRs by diff size, CI status, and sensitive file changes lets reviewers focus energy where it matters.
- Tracking review health as a team metric. PR cycle time, time-to-first-review, and stale PR counts are leading indicators of delivery health. If you're not measuring them, you're flying blind.
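A risk score like the one described above can be very simple and still useful. Here is a hypothetical sketch: the field names, sensitive paths, and weights are all illustrative assumptions, not the API of any specific tool.

```python
# Hypothetical PR triage sketch: score each PR by diff size, CI status,
# and whether it touches sensitive paths, then review riskiest-first.
# All field names, paths, and weights are illustrative assumptions.

SENSITIVE_PATHS = ("auth/", "billing/", "migrations/")

def risk_score(pr):
    score = min(pr["lines_changed"] // 100, 5)  # bigger diffs score higher, capped
    if pr["ci_status"] != "passing":
        score += 3                              # failing CI needs attention first
    if any(f.startswith(SENSITIVE_PATHS) for f in pr["files"]):
        score += 4                              # sensitive areas get deeper review
    return score

prs = [
    {"id": 101, "lines_changed": 40, "ci_status": "passing",
     "files": ["docs/readme.md"]},
    {"id": 102, "lines_changed": 900, "ci_status": "failing",
     "files": ["auth/session.py"]},
]

# Sort the review queue so the riskiest PRs surface first.
for pr in sorted(prs, key=risk_score, reverse=True):
    print(pr["id"], risk_score(pr))
```

Even a crude heuristic like this lets reviewers spend their limited attention on the large, failing, sensitive changes while small docs-only PRs get a lightweight pass.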
The Takeaway
Faster code generation without faster code review is just inventory. It fills your backlog, not your release notes. The teams that ship well in 2026 won't be the ones writing the most code — they'll be the ones that move PRs through review without letting them rot.