I have been doing code reviews for 12 years. Last month my team made a change that cut our average review cycle from ~3 days to under 2 — not by shipping less code, but by changing what arrives in the queue.
The change: we started using Claude Code as a pre-review filter before anything goes to a human reviewer.
Here is the exact pattern.
The Old Flow
- Dev finishes feature
- Opens PR
- Waits for reviewer (1–2 days to get eyes on it)
- Reviewer finds 6–12 nit-level issues + 2–3 real concerns
- Dev fixes, re-requests review
- Repeat
Round-trip time: 3–5 days for a medium PR.
The New Flow
Before opening the PR, the dev runs one prompt:
I am about to open a PR for [brief description]. Here is the diff:
[paste diff]
Review this as a senior engineer would:
1. Are there any correctness issues or edge cases I missed?
2. What will the human reviewer definitely flag?
3. Is there anything in the naming, structure, or comments that will cause confusion?
4. What questions should I answer in the PR description proactively?
Be direct. Do not praise things that do not need praising.
Claude Code gives back a candid list. The dev fixes the obvious stuff before it ever reaches a human. The PR description gets proactive answers to predictable questions.
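If you want to make the step frictionless, the prompt can be assembled from the working diff with a few lines of scripting. This is a minimal sketch, not part of our official tooling — the function names are illustrative, and it assumes you are in a git checkout with a `main` base branch:

```python
import subprocess

PROMPT_TEMPLATE = """I am about to open a PR for {description}. Here is the diff:

{diff}

Review this as a senior engineer would:
1. Are there any correctness issues or edge cases I missed?
2. What will the human reviewer definitely flag?
3. Is there anything in the naming, structure, or comments that will cause confusion?
4. What questions should I answer in the PR description proactively?

Be direct. Do not praise things that do not need praising."""

def build_pre_review_prompt(description: str, diff: str) -> str:
    """Fill the pre-review template with a one-line PR description and a diff."""
    return PROMPT_TEMPLATE.format(description=description, diff=diff)

def diff_against(base: str = "main") -> str:
    """Collect the branch diff against a base branch (assumes a git checkout)."""
    return subprocess.run(
        ["git", "diff", base], capture_output=True, text=True, check=True
    ).stdout
```

The resulting string can be pasted into an interactive Claude Code session, or passed to it in non-interactive mode if your setup supports that.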
What Changes for Reviewers
When the PR arrives, it is cleaner. The description answers questions before they are asked. The easy nits are already fixed. Reviewers can focus on architecture, logic, and things that actually need human judgment.
Our review comments dropped from ~9 per PR to ~4. Cycle time dropped accordingly.
The Discipline Part
This only works if it is a team norm, not a suggestion. If one person does it and others do not, reviewers never know what baseline of quality to expect. The way we made it stick:
- Added it to our PR checklist (literally one checkbox: "Claude Code pre-review done")
- First two weeks: team lead modeled it on their own PRs and showed the diff
- After that: team members started doing it on their own because it was obviously faster for them too
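If you want the checkbox enforced rather than trusted, a CI step can verify it. This is a hypothetical sketch, assuming the checklist line in the PR description is literally "- [x] Claude Code pre-review done" and that your CI can pipe the PR body into a script:

```python
import re
import sys

# Matches the ticked checklist item; "[ ]" (unticked) will not match.
CHECKBOX = re.compile(r"- \[x\] Claude Code pre-review done", re.IGNORECASE)

def pre_review_confirmed(pr_body: str) -> bool:
    """Return True if the pre-review checklist item is present and ticked."""
    return bool(CHECKBOX.search(pr_body))

if __name__ == "__main__":
    # In CI, feed the PR description in on stdin; exit nonzero to fail the check.
    body = sys.stdin.read()
    sys.exit(0 if pre_review_confirmed(body) else 1)
```

We did not go this far ourselves — modeling the habit was enough — but a hard gate is an option for larger teams.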
What This Is Not
This is not about replacing code reviewers. Human reviewers catch things Claude Code misses — particularly around intent, team-specific context, and whether this feature should exist at all.
It is about clearing the queue of low-value back-and-forth so reviewers can do the judgment work that requires a human.
If you are rolling this out to a team and want a full 30-day adoption plan, we published the first 3 modules of our team playbook at no cost: https://askpatrick.co/playbook-sample.html