The average PR at most companies sits waiting for review for 24-72 hours. Not because reviewers are lazy. Because the system is broken.
Here's what's actually happening and how to fix it.
The 5 Reasons Code Reviews Are Slow
1. PRs Are Too Big
A 50-file PR takes far longer to review than five 10-file PRs, and not just because there's more code: the reviewer has to build a mental model of all the changes at once.
Research backs this up: in SmartBear's study of code review at Cisco, defect-detection rates dropped sharply beyond roughly 400 lines of changes. Past 1,000 lines, reviewers start rubber-stamping.
Fix: Break work into smaller PRs. Ship behind feature flags if needed. A PR should do ONE thing.
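"Behind a feature flag" can be as simple as a guarded code path. A minimal sketch in Python — the flag store (an environment variable), the flag name, and the pricing functions are all hypothetical; in production you'd likely use a flag service like LaunchDarkly or Unleash:

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from the environment.
    An env var is the simplest possible flag store; a real flag
    service lets you toggle per-user or per-percentage instead."""
    value = os.environ.get(f"FLAG_{name.upper()}")
    if value is None:
        return default
    return value.lower() in ("1", "true", "on")

# Hypothetical pricing paths, standing in for real business logic.
def legacy_pricing(cart):
    return sum(cart)

def new_pricing(cart):
    return round(sum(cart) * 0.9, 2)

def checkout(cart):
    # The new engine can land across many small PRs and stay dark
    # until the flag flips — no long-lived feature branch needed.
    if flag_enabled("new_pricing_engine"):
        return new_pricing(cart)
    return legacy_pricing(cart)
```

Each small PR merges to main with the flag off; reviewers see one coherent change at a time instead of a finished feature all at once.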
2. Only 1-2 People Can Review Critical Areas
If your billing service PRs always go to the same person, you've created a bottleneck. That person also has their own work to do. Your PR sits in their queue behind 5 others.
This is a bus factor problem in disguise. If only Marcus can review billing PRs, what happens when Marcus is on vacation?
Fix: Cross-train reviewers. Pair junior engineers with seniors on reviews. Expand the pool of qualified reviewers for each critical area.
3. No Shared Context
The reviewer opens the PR and thinks: "What is this trying to do? Why is it changing the auth flow? What ticket is this for?" They spend 20 minutes just understanding the intent before they can evaluate the implementation.
Fix: Write PR descriptions. Not novels — just:
- What this changes
- Why
- How to test it
- Any risks
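Those four bullets collapse into a template you can make the default. On GitHub, a file at `.github/PULL_REQUEST_TEMPLATE.md` pre-fills every new PR description (other platforms have equivalents); a minimal version:

```markdown
## What
<!-- One or two sentences on what this PR changes -->

## Why
<!-- Link the ticket; explain the intent, not just the diff -->

## How to test
<!-- Commands, URLs, or steps a reviewer can run -->

## Risks
<!-- Migrations, auth changes, anything that needs a careful look -->
```

An empty section is itself a signal: if the author can't fill in "Why", the reviewer shouldn't have to reverse-engineer it.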
If your codebase has a lot of tribal knowledge, even understanding the code being changed requires asking someone. Consider investing in codebase intelligence to make system understanding self-serve.
4. Review Is Not Scheduled
Most engineers treat code review as an interruption — something they do between their "real" work. So reviews happen whenever the reviewer has a gap, which might be never.
Fix: Block time for reviews. Two 30-minute review blocks per day (morning and afternoon) caps the within-day wait at roughly four working hours. Some teams use "review o'clock": a daily 30-minute slot where the whole team does reviews.
5. Unclear Standards
Reviewers spend time debating style (tabs vs spaces, naming conventions) instead of substance (correctness, performance, security). These debates are slow and demoralizing.
Fix: Automate style enforcement. ESLint, Prettier, Black, gofmt — whatever your language has. If a machine can catch it, a human shouldn't be spending review time on it.
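One common way to wire this up is pre-commit, which runs formatters on every commit and in CI so style never reaches a human reviewer. A sketch of `.pre-commit-config.yaml` — the hook repos and ids are real, but pin the revisions to whatever versions you actually use:

```yaml
# .pre-commit-config.yaml — formatting enforced by machines, not reviewers
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2          # illustrative; pin your own version
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: v3.1.0          # illustrative; pin your own version
    hooks:
      - id: prettier
```

Run the same hooks in CI so "it passed on my machine" and "it passed review" mean the same thing.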
The Compound Effect
Slow reviews → larger batch sizes (devs pile up changes while waiting) → even slower reviews → even larger batches.
This directly impacts your DORA metrics:
- Lead time increases because code sits in review
- Deployment frequency drops because changes batch up
- Change failure rate increases because large PRs hide bugs
- MTTR increases because it's harder to identify which change caused an issue
What Good Looks Like
Elite engineering teams:
- Median PR size: <200 lines
- Median time-to-first-review: <4 hours
- Median time-to-merge: <24 hours
- 3+ qualified reviewers for every critical area
Start Here
- This week: Measure your current median time-to-merge
- Next sprint: Implement "review o'clock" — 30 minutes daily
- This quarter: Cross-train at least 2 additional reviewers for your most bottlenecked area
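Measuring median time-to-merge doesn't require a dashboard to start. A sketch in Python: the `createdAt`/`mergedAt` fields match what GitHub's `gh pr list --json` actually returns, but the function itself is illustrative and works on any list of timestamp pairs:

```python
import statistics
from datetime import datetime

def median_time_to_merge_hours(prs):
    """prs: list of dicts with ISO-8601 'createdAt' and 'mergedAt' keys,
    e.g. parsed from:
      gh pr list --state merged --limit 100 --json createdAt,mergedAt
    Returns the median open-to-merge time in hours."""
    hours = []
    for pr in prs:
        # GitHub emits trailing 'Z'; normalize for fromisoformat.
        created = datetime.fromisoformat(pr["createdAt"].replace("Z", "+00:00"))
        merged = datetime.fromisoformat(pr["mergedAt"].replace("Z", "+00:00"))
        hours.append((merged - created).total_seconds() / 3600)
    return statistics.median(hours)
```

Run it weekly and watch the trend; the absolute number matters less than whether "review o'clock" and smaller PRs are moving it down.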
The bottleneck isn't your reviewers. It's the system around them.
Glue helps identify review bottlenecks, knowledge concentration, and bus factor risks — so you can fix the system, not blame the people.