DEV Community

brian austin

How I use Claude Code for code reviews on large PRs — without hitting rate limits

Large pull requests are a code review nightmare. 50+ files changed, thousands of lines of diff, and you need to understand not just what changed but why it matters.

I've been using Claude Code for code reviews for the past few months, and I've developed a workflow that handles even 10,000-line PRs without losing context — or hitting rate limits.

Here's exactly how I do it.

The problem with reviewing large PRs

Most AI-assisted code review fails the same way: you paste the diff, the model loses context halfway through, you hit a token limit, and you start over. With a $20/month subscription, you're also watching the clock.

For large PRs, you need a systematic approach, not just "paste and hope."

My workflow: reviewable chunks

Step 1: Get a structural overview first

Before looking at any code, I ask Claude Code to map the PR:

git diff main...feature-branch --stat | claude -p "Summarize what this PR changes architecturally. Group by: new features, refactors, bug fixes, test changes. Don't look at code yet — just the file names and line counts."

This gives me a mental model of the PR in ~30 seconds. I know which areas deserve deep review and which are mechanical changes.

Step 2: Review by domain, not by file order

Git shows files alphabetically. That's useless for understanding. I group by domain:

# Review just the API layer changes
git diff main...feature-branch -- 'src/api/**' | claude -p "Review these API changes for: breaking changes, missing input validation, auth gaps, and error handling. Be specific about line numbers."

# Review just the database layer
git diff main...feature-branch -- 'src/models/**' 'migrations/**' | claude -p "Review these database changes for: missing indexes, N+1 query risks, migration safety, and rollback concerns."

Each domain review is a fresh, focused session. No context bleed between unrelated changes.
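If you run several domain passes per PR, the pattern is easy to wrap in a small helper. This is a sketch, not my exact script — the pathspecs and prompts are placeholders to adapt; the helper just prints each pipeline so you can inspect it before running it with `eval`:

```shell
# Sketch: one focused review pass per domain. build_review_cmd prints
# the git-diff | claude pipeline for a given pathspec and prompt, so
# you can eyeball (or eval) each pass. Pathspecs/prompts are examples.
build_review_cmd() {
  # $1 = pathspec(s), $2 = review prompt
  printf "git diff main...feature-branch -- %s | claude -p '%s'\n" "$1" "$2"
}

build_review_cmd "'src/api/**'" "Review these API changes for breaking changes and auth gaps."
build_review_cmd "'src/models/**' 'migrations/**'" "Review these database changes for migration safety."
```

Printing the commands first (rather than executing them directly) keeps each pass deliberate — you decide which domain to review next instead of burning tokens on everything at once.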

Step 3: Cross-cutting concerns pass

After domain reviews, I do one pass for things that span files:

git diff main...feature-branch | grep -A5 -B5 'password\|secret\|token\|auth\|admin' | claude -p "Review these security-sensitive code changes. Flag anything that could be a vulnerability."

The grep filter means Claude only sees the relevant lines, not 10,000 lines of noise.
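That keyword filter is worth keeping in one place so every PR gets the same security pass. A minimal sketch — the keyword list below is a starting set, not a complete taxonomy:

```shell
# Sketch: a reusable filter for security-sensitive lines plus context.
# The keyword list is illustrative; extend it for your codebase.
SENSITIVE_PATTERN='password\|secret\|token\|auth\|admin'

filter_sensitive() {
  grep -A5 -B5 "$SENSITIVE_PATTERN"
}

# Example: with -B5, only the five lines before the match survive;
# earlier unrelated lines are dropped.
printf '%s\n' a b c d e f 'check token scope' | filter_sensitive
```

In a real review you would pipe `git diff main...feature-branch | filter_sensitive | claude -p "..."`, same as the one-liner above.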

Step 4: The "explain the risky bits" pass

For any code I'm still uncertain about:

git show feature-branch:src/payments/processor.js | claude -p "Explain what this file does, what could go wrong at runtime, and what test cases are missing."

Why this beats the naive approach

Naive approach: Paste entire diff → hit token limit → lose context → repeat.

Systematic approach: Each session is small and focused. Claude gives precise answers. I don't repeat work.

The total tokens used are similar — but the quality of analysis is dramatically higher when Claude isn't trying to hold 10,000 lines in context simultaneously.

The rate limit problem

Even with this workflow, a large PR review session can take 30-60 minutes of back-and-forth. If you're on a capped subscription, you'll hit rate limits right when you're deep into a critical section.

I switched to using the Claude API directly via SimplyLouie — it's $2/month and uses the same Claude API endpoint. No session caps, no "slow mode" during peak hours, no losing your place mid-review because the rate limiter kicked in.

For code review specifically, the economics are straightforward: a single PR blocked waiting on review costs more in developer time than a year of API access.

The output format matters

I always end my review sessions with:

Give me a structured review summary:
1. MUST FIX before merge (with line numbers)
2. SHOULD FIX before merge
3. CONSIDER for follow-up
4. Good patterns to keep

This format maps directly to the review comments I leave in GitHub. Copy-paste with minimal editing.
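One way to keep that closing prompt consistent across sessions is a tiny shell function — a sketch, assuming you end each session by piping the prompt into `claude`:

```shell
# Sketch: store the closing summary prompt once and reuse it at the
# end of every review session, e.g. `summary_prompt | claude`.
summary_prompt() {
  cat <<'EOF'
Give me a structured review summary:
1. MUST FIX before merge (with line numbers)
2. SHOULD FIX before merge
3. CONSIDER for follow-up
4. Good patterns to keep
EOF
}

summary_prompt
```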

What I've caught with this workflow

  • A missing database index on a column being queried in a loop (would have caused prod degradation at scale)
  • An auth check that was accidentally removed during a refactor
  • A race condition in async code that only showed up when reviewing the concurrency patterns across multiple files

None of these would have been obvious in a linear file-by-file review.


The key insight: Claude Code isn't a code review replacement — it's a code review amplifier. The systematic, domain-first approach is what turns it from a fancy grep into a real review tool.

What's your code review workflow? Do you review by file order or by domain?
