How I use Claude Code for code reviews — a complete workflow
Code reviews are where most engineering velocity goes to die. A PR sits for two days, gets three drive-by comments about variable names, and merges with the architectural problems untouched.
I've changed how I do code reviews using Claude Code. Here's the exact workflow.
The problem with traditional code reviews
Most code reviews focus on what's immediately visible: naming, formatting, obvious bugs. The harder questions get skipped:
- Does this change introduce subtle race conditions?
- Does this new abstraction conflict with how we do things in module X?
- Is this the right pattern, or just a pattern that works?
- What are the edge cases the author didn't test?
These questions require holding the entire codebase in your head simultaneously. Humans struggle with this. Claude Code doesn't.
My code review workflow
Step 1: Load the PR diff + context
```shell
# Get the full diff
git diff main...feature-branch > /tmp/pr-diff.txt

# Load into Claude Code
claude "Review this PR diff. First, understand what the author is trying to accomplish. Then identify: (1) correctness issues, (2) architectural concerns, (3) missing edge cases, (4) security implications. File: /tmp/pr-diff.txt"
```
Step 2: Cross-reference with related modules
```shell
claude "The PR touches UserService. Find all other services that interact with UserService and check if this change could break any of them. Look for: shared state, event patterns, database transaction assumptions."
```
This is the step humans skip in code reviews. It's also where the real bugs hide.
Step 3: Generate specific test scenarios
```shell
claude "Based on this PR, generate 10 specific test scenarios the author should have covered. Include: happy path, error cases, edge cases, and concurrent access scenarios. Be specific — actual input values, not just 'test the error case'."
```
Step 4: Write the review comments
```shell
claude "Write GitHub-ready PR review comments for the issues you found. Format each as: FILE:LINE - SEVERITY - COMMENT. Severity levels: blocking (must fix), suggestion (should consider), nit (optional)."
```
Real example: catching a subtle bug
Last month I was reviewing a PR that added caching to our user lookup:
```javascript
// The PR code
async function getUser(userId) {
  const cached = await redis.get(`user:${userId}`);
  if (cached) return JSON.parse(cached);
  const user = await db.users.findOne(userId);
  await redis.set(`user:${userId}`, JSON.stringify(user), 'EX', 3600);
  return user;
}
```
Looked fine. My Claude Code review caught it:
```
LINE 6 - BLOCKING - Caching a miss: if db.users.findOne returns null
(user not found), you're caching null and returning it.
Next request for this userId will get cached null for 1 hour.
Add: if (!user) return null; before the redis.set line.

Also: no error handling on redis.get failure. If Redis is down,
this will throw instead of falling back to the database.
```
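For reference, here is one way to patch both issues. This is a sketch, not the exact fix that shipped: the in-memory `redis` and `db` stubs below are assumptions added so the block runs standalone. Swap in your real clients.

```javascript
// Minimal in-memory stand-ins for the Redis and DB clients (assumptions,
// not the real services) so the corrected logic can be run by itself.
const store = new Map();
const redis = {
  async get(key) { return store.has(key) ? store.get(key) : null; },
  async set(key, value, ..._ttlArgs) { store.set(key, value); },
};
const db = {
  users: {
    async findOne(userId) {
      return userId === "42" ? { id: "42", name: "Ada" } : null;
    },
  },
};

async function getUser(userId) {
  let cached;
  try {
    cached = await redis.get(`user:${userId}`);
  } catch (err) {
    // Redis is down: fall back to the database instead of throwing.
  }
  if (cached) return JSON.parse(cached);

  const user = await db.users.findOne(userId);
  if (!user) return null; // don't cache a miss for an hour
  await redis.set(`user:${userId}`, JSON.stringify(user), "EX", 3600);
  return user;
}
```

The key changes: the null check returns before the cache write, and a Redis read failure degrades to a database lookup instead of failing the request. Whether you also want negative caching (a short TTL on misses) depends on your traffic pattern.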
Two real bugs. A human reviewer would have needed to think carefully to catch both. Claude Code caught them in seconds.
The rate limit problem
Thorough code reviews require holding a lot of context: the PR diff, the related modules, the existing patterns, the test coverage.
For large PRs — anything touching core services, database schemas, or shared utilities — I routinely burn through my Claude Code session limits mid-review. The conversation history fills up. I lose context about what I already checked.
I switched to SimplyLouie ($2/month, no rate limits) for exactly this reason. Code reviews are too important to interrupt. When you're midway through cross-referencing a security-sensitive change across 12 files, you can't afford to lose that context.
What changes when you do reviews this way
Coverage goes up: You catch the bugs that hide in the interactions between modules, not just the bugs visible in the diff.
Review speed goes up: Generating test scenarios takes 30 seconds. Writing review comments takes 2 minutes. The time goes to understanding the code, not formatting the feedback.
Author experience improves: Comments are specific, actionable, and prioritized. No more "this could be cleaner" without suggesting how.
Architecture stays coherent: When you can cross-reference every PR against existing patterns, you catch drift before it compounds.
The workflow in practice
I now do this for every PR that touches anything outside a single module. For tiny bug fixes it's overkill; for anything architectural, it's the only way I trust that the review is actually thorough.
The commands above take about 5 minutes to run. The alternative is a 30-minute manual review that misses half the edge cases.
I use SimplyLouie for uninterrupted Claude sessions during code reviews. $2/month, no rate limits. Try it free for 7 days.