Early in my career, I thought code review meant checking for semicolons and variable naming. I would leave 30 comments on a PR, most of them cosmetic. My teammates dreaded my reviews. One senior engineer finally pulled me aside and said: "You are reviewing the paint job. Start reviewing the engine."
That conversation changed how I approach code reviews. Five years and thousands of PRs later, here is what I have learned about reviewing code like a senior engineer.
The Purpose of Code Review
Code review is not about catching typos. Linters do that. Code review serves three purposes:
- Catch design problems that automated tools cannot see
- Share knowledge across the team
- Maintain a quality bar that keeps the codebase healthy over time
If your code reviews are not doing at least two of those three things, you are wasting everyone's time.
The Senior Engineer's Review Framework
When I open a PR, I review in four passes. Each pass focuses on a different layer.
Pass 1: Understand the Why (2 minutes)
Before looking at a single line of code, I read:
- The PR description
- The linked ticket or issue
- Any design doc or RFC referenced
I need to understand what problem this PR solves and what approach the author chose. Without this context, I cannot evaluate whether the code is correct — only whether it compiles.
If the PR has no description, my first comment is: "Can you add context about what this changes and why?" This is not nitpicking. It is essential.
Pass 2: Architecture and Design (5 minutes)
Now I look at the code, but I do not read line by line. I look at the file structure:
- Which files were added, modified, or deleted?
- Does the change touch the right layers of the system?
- Are there any unexpected dependencies being introduced?
Questions I ask at this stage:
- Does this belong here? Is the code in the right module/service?
- Is this the simplest approach? Could this be done with less code?
- What happens at scale? If this endpoint gets 1000x traffic, does the approach still work?
- What are the edge cases? Empty inputs, concurrent access, network failures?
This is where senior engineers add the most value. Junior and mid-level reviewers tend to skip this pass entirely and jump straight to line-by-line reading.
Pass 3: Implementation Details (10 minutes)
Now I read the code. I am looking for:
Logic errors:
```python
# Bug: range(len(items) - 1) skips the last element
for i in range(len(items) - 1):
    process(items[i])
```
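When I flag a bug like this, I try to suggest the fix in the same comment. A minimal sketch of one possible fix — `items` and `process` here are hypothetical stand-ins for whatever the PR actually uses:

```python
# Hypothetical stand-ins for the real items and process()
items = ["a", "b", "c"]
processed = []

def process(item):
    processed.append(item)

# Iterating the sequence directly removes the index arithmetic
# that caused the off-by-one in the first place
for item in items:
    process(item)
# processed now contains every element, including the last one
```

Suggesting the direct-iteration form also tends to prevent the whole class of bug, not just this instance.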
Error handling gaps:
```javascript
// What happens if the API returns a 500?
// What happens if the response is not valid JSON?
const data = await fetch(url).then(r => r.json());
```
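The kind of handling I want to see makes each failure mode explicit. Here is a sketch of the same idea in Python using only the standard library — `fetch_json` is a name I made up, not an API from the PR:

```python
import json
import urllib.error
import urllib.request

def fetch_json(url, timeout=5):
    """Fetch a URL and parse it as JSON, surfacing the failure
    modes the one-liner above silently ignores."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8")
    except urllib.error.HTTPError as e:
        # Server answered, but with 4xx/5xx
        raise RuntimeError(f"server returned HTTP {e.code}") from e
    except urllib.error.URLError as e:
        # DNS failure, refused connection, timeout, etc.
        raise RuntimeError(f"request failed: {e.reason}") from e
    try:
        return json.loads(body)
    except json.JSONDecodeError as e:
        raise RuntimeError("response was not valid JSON") from e
```

The point is not this exact error taxonomy; it is that every question a reviewer would ask ("what if it 500s?", "what if it is not JSON?") has a visible answer in the code.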
Performance issues:
```python
# N+1 query hiding inside a loop
for user in users:
    orders = db.query(f"SELECT * FROM orders WHERE user_id = {user.id}")
```
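My usual suggestion is to hoist the query out of the loop and fetch everything in one round trip. A sketch using sqlite3 as a stand-in for the real database layer — `orders_for_users` and `conn` are my names, not the PR's:

```python
import sqlite3

def orders_for_users(conn, user_ids):
    """Fetch orders for many users in one query instead of N."""
    if not user_ids:
        return []  # edge case: IN () is a syntax error in most databases
    placeholders = ",".join("?" for _ in user_ids)
    # The f-string only interpolates '?' placeholders, never user data
    sql = f"SELECT * FROM orders WHERE user_id IN ({placeholders})"
    return conn.execute(sql, list(user_ids)).fetchall()
```

Note the empty-input guard: this is exactly the kind of edge case from the Pass 2 checklist that tends to surface once the query moves out of the loop.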
Security concerns:
```javascript
// SQL injection vulnerability
const query = `SELECT * FROM users WHERE id = ${req.params.id}`;
```
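The standard fix is parameter binding, so user input is passed to the driver as data and never spliced into the SQL string. Here is what that looks like in Python with sqlite3 (`get_user` is a hypothetical helper, not from the PR):

```python
import sqlite3

def get_user(conn, user_id):
    # The driver binds user_id as a value, so input like
    # "1 OR 1=1" cannot change the shape of the query
    return conn.execute(
        "SELECT * FROM users WHERE id = ?", (user_id,)
    ).fetchone()
```

The same pattern exists in every mainstream driver; in the JavaScript snippet above, the reviewer's ask would be to use the library's placeholder syntax instead of template literals.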
Pass 4: Tests and Documentation (3 minutes)
Finally, I check:
- Are there tests? Do they test behavior, not implementation?
- Do the tests cover the happy path AND edge cases?
- Is there documentation for any public API or complex logic?
- Are there any TODO comments that should be tracked as tickets?
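To make "test behavior, not implementation" concrete: good tests assert on inputs and outputs, not on internals like private helpers or call counts. A sketch with a hypothetical slugify function:

```python
import re

def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Behavior-focused tests: only the public contract is asserted,
# so the regex could be swapped for a manual loop without
# breaking a single test.
def test_slugify_happy_path():
    assert slugify("Hello World") == "hello-world"

def test_slugify_edge_cases():
    assert slugify("") == ""                 # empty input
    assert slugify("  !!  ") == ""           # punctuation only
    assert slugify("Already-Slugged") == "already-slugged"
```

If a refactor that preserves behavior breaks the tests, the tests were coupled to the implementation.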
The Art of Writing Good Review Comments
How you write comments matters as much as what you write. Bad review comments create friction and resentment. Good ones create learning and trust.
Use Questions Instead of Commands
Bad: "Move this to a utility function."
Good: "What do you think about extracting this into a utility function? I think it would make the logic in processOrder easier to follow."
Questions invite collaboration. Commands create a power dynamic.
Categorize Your Comments
I use a simple prefix system:
- [nit] — Minor style issue. Optional to address.
- [suggestion] — I think this could be better, but your call.
- [question] — I do not understand this. Help me learn.
- [must-fix] — This will cause a bug, security issue, or data loss. Must be addressed before merge.
This system lets the author quickly triage your feedback and focus on what matters.
Praise Good Code
Code review is not just about finding problems. When you see clean code, elegant solutions, or good test coverage, say so.
"Nice use of the strategy pattern here — it makes adding new payment providers really clean."
Positive feedback reinforces good practices and makes the review process feel collaborative rather than adversarial.
Common Code Review Anti-Patterns
The Bikeshedder
Spends 20 minutes arguing about whether a variable should be called userData or userInfo while ignoring a race condition in the authentication flow. Do not be this person.
The Rubber Stamper
Approves every PR with "LGTM" after 30 seconds. This provides zero value and signals to the team that reviews do not matter.
The Perfectionist
Blocks every PR until the code matches their personal style preferences. This kills velocity and demoralizes the team. Ship good code, not perfect code.
The Ghost
Takes three days to review a 50-line PR. Slow reviews are almost as bad as no reviews. Aim to review within 4 business hours.
Using AI to Augment Your Reviews
Here is something I have started doing that saves significant time: before doing my manual review, I run the code through an AI code reviewer to catch the obvious stuff.
AI Code Reviewer is a free tool where you paste in code and get instant feedback on potential bugs, security issues, performance problems, and style improvements. It handles the "Pass 3" mechanical review so I can focus my human attention on architecture, design, and mentoring.
I still do the full review myself. But having an AI pre-scan means I catch more issues in less time. It is like having a thorough junior reviewer do a first pass before I look at the PR.
The 20-Minute Rule
If a PR takes more than 20 minutes to review, one of two things is wrong:
- The PR is too big. Ask the author to split it into smaller PRs. As a rule of thumb, PRs over 400 lines of code should almost always be split.
- You are going too deep too fast. Start with the architecture pass. If the high-level approach is wrong, there is no point reviewing implementation details.
Building a Code Review Culture
The best engineering teams I have worked on share these traits:
- Review turnaround under 4 hours. Fast reviews keep the team moving.
- Everyone reviews, including seniors. Code review is not junior work.
- Disagreements are resolved with data. "I prefer X" is weak. "X is better because of Y benchmark/precedent" is strong.
- PR size is kept small. The team has a cultural norm around 200-400 line PRs.
- Reviews are learning opportunities. The best comment is one that teaches something new.
Start Improving Today
Pick one thing from this article and apply it to your next code review. Maybe it is the four-pass framework. Maybe it is the comment prefix system. Maybe it is running your code through AI Code Reviewer before your next PR submission to catch issues before your teammates even see them.
Great code reviews are a skill. Like any skill, they improve with deliberate practice. Start reviewing the engine, not the paint job.