DEV Community

Sahil Singh

Posted on • Originally published at glue.tools

Why Your Code Review Process Is Catching the Wrong Bugs

Your code review process catches misnamed variables, missing error handling, and style inconsistencies. It misses the fact that the PR changes a function called by 7 services, one of which has a null check that will now fail.

The first category of bugs is annoying. The second causes production incidents at 2 AM.

What Code Reviews Actually Catch

Studies show code reviews are effective at catching:

  • Style and formatting issues (95% catch rate)
  • Simple logical errors (70% catch rate)
  • Missing edge case handling (60% catch rate)

And ineffective at catching:

  • Cross-file dependency issues (15% catch rate)
  • Performance regressions (10% catch rate)
  • Concurrency bugs (5% catch rate)
  • Architecture violations (5% catch rate)

The pattern: reviews catch problems visible in the diff. They miss problems that exist in the relationship between the diff and the rest of the codebase.

The Diff Boundary Problem

A code review shows you what changed. It doesn't show you what the change affects. The reviewer sees:

```diff
- function getSession(token: string): Session | null {
+ function getSession(token: string): Session {
```

This looks fine in isolation — removing the null return seems like cleaning up. But 3 files away, there's a null check:

```typescript
const session = getSession(token);
if (!session) return redirectToLogin();
```

That null check is now unreachable. At best, the compiler or a lint rule flags the redundant condition. At worst, it compiles silently and the branch is dead code, meaning expired tokens no longer redirect to login.

The reviewer would need to check every caller of getSession to catch this. On a complex PR with 15 changed files, that's impractical.
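That caller audit is exactly the kind of work a tool can do mechanically. As a toy illustration (hypothetical file names, and a regex stand-in for what a real tool would do with the TypeScript compiler API), a review bot could scan the repo for call sites of any function whose signature changed:

```typescript
// Sketch: naive call-site scan for a function touched by a PR.
// A real implementation would resolve symbols via the compiler API;
// this regex version just illustrates the idea.

interface CallSite {
  file: string;
  line: number; // 1-based
  text: string;
}

function findCallers(files: Record<string, string>, fnName: string): CallSite[] {
  // Match `fnName(` as a call, skipping the declaration (`function fnName(`).
  const call = new RegExp(`(?<!function\\s)\\b${fnName}\\s*\\(`);
  const hits: CallSite[] = [];
  for (const [file, source] of Object.entries(files)) {
    source.split("\n").forEach((text, i) => {
      if (call.test(text)) hits.push({ file, line: i + 1, text: text.trim() });
    });
  }
  return hits;
}

// Hypothetical two-file repo: the declaration is skipped, the caller is flagged.
const repo = {
  "auth/session.ts": "function getSession(token: string): Session { /* ... */ }",
  "middleware/guard.ts": "const session = getSession(token);",
};
console.log(findCallers(repo, "getSession"));
```

Even this crude version surfaces the caller in `middleware/guard.ts` that never appears in the diff, which is the whole point.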

Augmenting Reviews with Intelligence

What if the review tool automatically showed:

  • Every function affected by signature changes in the PR
  • Every caller of modified functions (not just the ones in the diff)
  • Historical regression patterns: "The last 2 changes to this function caused issues in the webhook handler"
  • Blast radius visualization: which features are affected by this PR
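The blast-radius item, at least, is simple to sketch. Assuming you already have a reverse call graph (each function mapped to its direct callers; building that map is the real work, done with the compiler API or similar), the affected set is a breadth-first walk. All names below are hypothetical:

```typescript
// Sketch: blast radius as a breadth-first walk over a reverse call graph.
// `callers` maps each function to the functions that call it directly.

function blastRadius(
  callers: Map<string, string[]>,
  changed: string[]
): Set<string> {
  const affected = new Set<string>(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const fn = queue.shift()!;
    for (const caller of callers.get(fn) ?? []) {
      if (!affected.has(caller)) {
        affected.add(caller);
        queue.push(caller); // transitive callers are affected too
      }
    }
  }
  return affected;
}

// Hypothetical graph: changing getSession reaches the webhook handler
// two hops away, even though it never appears in the diff.
const graph = new Map<string, string[]>([
  ["getSession", ["authGuard", "webhookAuth"]],
  ["webhookAuth", ["webhookHandler"]],
]);
console.log(blastRadius(graph, ["getSession"]));
```

A tool that attaches this set to the PR turns "check every caller" from a heroic manual effort into a checklist.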

This isn't replacing human review. It's giving reviewers the context they need to catch the bugs that actually matter.

The style issues? Let ESLint handle those. Save human review time for the architectural questions that tools can surface but only humans can judge.


Originally published on glue.tools. Glue is the pre-code intelligence platform — paste a ticket, get a battle plan.
