
brian austin


How I use Claude Code to do code reviews asynchronously — a complete workflow


Code reviews are one of the biggest bottlenecks in any engineering team. Your PR sits for hours waiting for a reviewer. You context-switch back and forth. Reviews happen at inconvenient times.

I've been using Claude Code to handle the initial review pass asynchronously — catching common issues before a human reviewer even looks at the code. Here's my exact workflow.

The problem with synchronous code review

Traditional code review has a few pain points:

  • You submit a PR and wait
  • Context gets lost while waiting
  • Reviewers find different things each time
  • Nitpicks slow down meaningful feedback
  • Timezone differences can add a full day of turnaround

Claude Code handles the first pass — catching bugs, style issues, and obvious problems — so human reviewers can focus on architecture and business logic.

My async review workflow

Step 1: Generate a diff summary

Before asking Claude Code to review, give it context:

# Get the diff
git diff main...feature-branch > /tmp/review.diff

# Also get the PR description
gh pr view --json title,body > /tmp/pr-context.json

Then ask Claude Code:

Here's the diff for PR #247 and the PR description.
Review this code for:
1. Bugs and edge cases
2. Security issues (especially input validation, SQL injection, XSS)
3. Performance problems
4. Missing error handling
5. Test coverage gaps

For each issue, give me: severity (critical/major/minor), file + line number, and a suggested fix.
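To avoid retyping that prompt, I keep it in a small script. Here's a sketch: `build_review_prompt` is my own helper name, and the commented `claude -p` line uses Claude Code's non-interactive print mode.

```shell
# Sketch: wrap the Step 1 review prompt in a reusable function.
# build_review_prompt is a hypothetical helper, not part of any tool.
build_review_prompt() {
  diff_file=$1
  ctx_file=$2
  cat <<EOF
Here's the diff ($diff_file) and the PR description ($ctx_file).
Review this code for:
1. Bugs and edge cases
2. Security issues (especially input validation, SQL injection, XSS)
3. Performance problems
4. Missing error handling
5. Test coverage gaps

For each issue, give me: severity (critical/major/minor), file + line number, and a suggested fix.
EOF
}

# Usage (runs Claude Code non-interactively):
# claude -p "$(build_review_prompt /tmp/review.diff /tmp/pr-context.json)"
```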

Step 2: Structure the review output

I ask Claude Code to format the review as a checklist I can paste into GitHub:

Format your review as GitHub markdown:
- Use checkboxes for each issue
- Group by: Critical Issues, Major Issues, Minor Issues, Nitpicks
- Include code snippets showing the problem AND the fix
- Add a summary at the top

This gives me a review I can post directly, or use as notes for my own review.
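Posting the result back is one `gh` call. A sketch, assuming the review was saved to a file first; `GH_BIN` is my own indirection so the command can be dry-run with a stub instead of the real GitHub CLI.

```shell
# Sketch: post a saved review file as a PR comment via the GitHub CLI.
# GH_BIN defaults to the real gh binary; override it to dry-run.
post_review() {
  pr_number=$1
  review_file=$2
  "${GH_BIN:-gh}" pr comment "$pr_number" --body-file "$review_file"
}

# Usage:
# post_review 247 /tmp/review.md
```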

Step 3: Deep dive on specific files

For complex files, I give Claude Code focused context:

Focus on auth/middleware.js in this diff.

This middleware handles JWT validation and rate limiting.
Our current rate limit is 100 req/min per IP.
We had a security incident in February where tokens weren't
being properly invalidated on logout.

Look specifically for:
1. Token validation gaps
2. Rate limit bypass possibilities  
3. Anything that could cause the February issue to recur

Step 4: Check for test coverage

Look at the test files in this diff.
For each new function added, tell me:
1. Is there a test for the happy path?
2. Is there a test for error cases?
3. What edge cases are missing?

List the specific test cases I should add.
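A quick mechanical cross-check helps here too. This hypothetical helper greps the names of functions added in the diff (plain JS `function foo(` declarations only; arrow functions and class methods would need extra patterns), so I can compare the list against what the tests actually cover.

```shell
# Sketch: list function names added in a diff file.
# Only matches classic `function name(` declarations on added (+) lines.
added_functions() {
  grep -E '^\+.*function [A-Za-z_]' "$1" \
    | sed -E 's/.*function +([A-Za-z_][A-Za-z0-9_]*).*/\1/'
}
```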

Step 5: Final summary for the human reviewer

I ask Claude Code to write a summary for the human reviewer:

Write a 3-paragraph summary for the human reviewer:
1. What this PR does (in plain English)
2. The main risks to be aware of
3. What they should focus their review time on

Handling long diffs

Large PRs are where this workflow shines most — and where you'll hit rate limits hardest.

For a 2,000-line diff, Claude Code needs to maintain context across many files simultaneously. Sessions get long. If you're using the Claude Code subscription, you'll hit the hourly rate limit partway through a complex review.

I use SimplyLouie as my ANTHROPIC_BASE_URL endpoint — it's $2/month and routes to the same Claude API, so I can run long review sessions without interruption.

# Set in your shell profile
export ANTHROPIC_BASE_URL=https://simplylouie.com/api
export ANTHROPIC_API_KEY=your-key

# Now Claude Code uses SimplyLouie
claude

Breaking up large PRs for review

For PRs over 500 lines, I split the review into chunks:

# Review by directory
git diff main...feature-branch -- src/api/ > /tmp/api-review.diff
git diff main...feature-branch -- src/frontend/ > /tmp/frontend-review.diff
git diff main...feature-branch -- tests/ > /tmp/tests-review.diff
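The directory list above is typed by hand; when a repo has many top-level directories, the same split can be scripted. A sketch, with the chunk naming being my own convention:

```shell
# Sketch: write one diff chunk per top-level directory changed on the branch.
split_diff_by_dir() {
  branch=$1
  outdir=${2:-/tmp/review-chunks}
  mkdir -p "$outdir"
  # List changed paths, keep each path's first component, dedupe,
  # then emit a per-directory diff file.
  git diff --name-only "main...$branch" \
    | cut -d/ -f1 | sort -u \
    | while IFS= read -r dir; do
        git diff "main...$branch" -- "$dir" > "$outdir/$dir.diff"
      done
}
```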

Then I review each chunk in a separate Claude Code session — fresh context, no accumulated confusion.
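Those per-chunk sessions can themselves be driven non-interactively. A sketch, assuming Claude Code's `-p` (print) flag; `CLAUDE_BIN` is my own indirection so the loop can be dry-run with a stub binary.

```shell
# Sketch: run one fresh, non-interactive Claude Code review per diff chunk.
# CLAUDE_BIN defaults to the real claude CLI; override it to dry-run.
review_chunks() {
  for chunk in "$@"; do
    "${CLAUDE_BIN:-claude}" -p \
      "Review this diff for bugs, security issues, and missing error handling: $(cat "$chunk")" \
      > "${chunk%.diff}.review.md"
  done
}

# Usage:
# review_chunks /tmp/api-review.diff /tmp/frontend-review.diff /tmp/tests-review.diff
```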

What to automate vs. what to keep human

Let Claude Code handle:

  • Style and formatting issues
  • Missing null checks
  • Obvious security antipatterns
  • Unused variables and dead code
  • Missing error handling
  • Test coverage gaps

Keep human review for:

  • Architecture decisions
  • Product logic correctness
  • Whether this PR actually solves the right problem
  • Performance under real load
  • Team conventions that aren't in the code

The time saving

My review workflow before: 45-90 minutes per substantial PR.

With Claude Code doing the first pass: 15-20 minutes for me to review the AI's findings, validate them, and add my own architectural notes.

For PRs where the AI catches everything cleanly: sometimes as little as 5 minutes.

The key insight: Claude Code doesn't replace the review. It eliminates the mechanical parts — the parts that don't require human judgment — so human reviewers can focus on what actually matters.


Running long Claude Code sessions? SimplyLouie is $2/month and works as a drop-in ANTHROPIC_BASE_URL replacement — same API, no rate limit interruptions.
