A few years ago, I left a code review feeling… weird.
The code worked.
Tests were passing.
But the comments?
“This is wrong”
“Bad approach”
“Rewrite this”
No explanation.
No context.
Just judgment.
Nothing was technically incorrect.
But something was off.
That’s when it clicked:
Code reviews aren’t just about code, they’re about communication.
And done poorly, they don’t improve the codebase, they just slowly damage the team.
TL;DR
- Great code reviews focus on improving the code, not judging the developer.
- The difference is in how feedback is framed, not just what is said.
- Healthy teams optimize for clarity, context, and collaboration, not ego.
Table of Contents
- Why Code Reviews Go Wrong
- What Great Code Reviews Actually Do
- 1. Focus on the Code, Not the Person
- 2. Explain the Why, Not Just the What
- 3. Turn Judgments Into Questions
- 4. Avoid Bike-Shedding
- 5. Balance Speed and Depth
- Real Example: From Harsh to Helpful
- How to Handle Receiving Feedback
- Practical Team Agreements
- Final Thoughts
Why Code Reviews Go Wrong
Most developers don’t want to give bad feedback, but code reviews often become tense anyway.
Why?
Because they sit at the intersection of:
- ownership (“this is my code”)
- identity (“this reflects my skills”)
- time pressure (“we need to ship”)
Add a bit of unclear communication, and things escalate quickly.
What starts as feedback becomes criticism.
What should be collaboration becomes friction.
What Great Code Reviews Actually Do
A good code review doesn’t just catch bugs.
It does three things:
- improves the code
- shares knowledge
- strengthens the team
If one of these is missing, something is off. The last one matters most: a team that avoids or fears reviews will never move fast in the long run.
1. Focus on the Code, Not the Person
This sounds obvious, but small wording differences matter a lot.
Compare:
This is wrong
vs
This logic might lead to edge cases when the input is empty
The first feels like judgment.
The second feels like collaboration.
A simple shift helps:
👉 describe the problem, not the person
Instead of:
- “you did this wrong”
Try:
- “this approach could cause issues because…”
It’s a small change, but it completely changes how feedback is received.
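To make that concrete, here’s a minimal sketch of the kind of code the empty-input comment above might be pointing at (the function and data are invented for illustration):

```python
def average_response_time(durations_ms):
    # As written, an empty list raises ZeroDivisionError,
    # exactly the edge case the review comment describes.
    return sum(durations_ms) / len(durations_ms)

def average_response_time_safe(durations_ms):
    # Guarding the empty case addresses the feedback
    # without changing the happy path.
    if not durations_ms:
        return 0.0
    return sum(durations_ms) / len(durations_ms)
```

Both comments could trigger the same fix; the descriptive framing just gets there without the detour through defensiveness.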
2. Explain the Why, Not Just the What
One of the most frustrating things in code reviews is vague feedback.
Refactor this
Why?
What’s the problem?
What’s the goal?
Without context, the comment becomes noise.
Good feedback always answers one question:
Why does this matter?
For example:
This works, but extracting this into a function could make it easier to test and reuse
Now the author understands:
- the intent
- the benefit
- the trade-off
And can make a better decision.
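As a sketch of what that kind of comment might be pointing at (all names here are hypothetical, not from any real review):

```python
# Before: the validation logic is buried in the handler,
# so testing it means constructing a whole fake request.
def handle_signup(request):
    email = request.get("email", "")
    if "@" not in email or email.startswith("@") or email.endswith("@"):
        return {"status": 400, "error": "invalid email"}
    return {"status": 200}

# After: the check is a small pure function that can be
# unit-tested directly and reused by other handlers.
def is_valid_email(email):
    return "@" in email and not email.startswith("@") and not email.endswith("@")

def handle_signup_v2(request):
    if not is_valid_email(request.get("email", "")):
        return {"status": 400, "error": "invalid email"}
    return {"status": 200}
```

The comment’s “why” (easier to test and reuse) maps directly onto the extraction.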
3. Turn Judgments Into Questions
When in doubt, ask.
Questions are powerful because they:
- invite discussion
- reduce defensiveness
- uncover context you might be missing
Instead of:
This is over-engineered
Try:
Do we need this level of abstraction here, or could a simpler approach work?
Same concern.
Completely different outcome.
Sometimes, you’ll even realize the original author had a valid reason.
4. Avoid Bike-Shedding
Not all feedback has the same weight, but many reviews treat everything equally.
You end up with long threads about:
- variable names
- formatting
- personal preferences
While more important issues get less attention.
This is called bike-shedding and it kills productivity.
A simple rule helps:
👉 focus on what impacts:
- correctness
- readability
- maintainability
Leave the rest to:
- linters
- formatters
- team conventions
Your energy is limited, so use it where it matters.
5. Balance Speed and Depth
There’s a tension in every code review:
- go deep → better quality
- go fast → better velocity
Great teams don’t pick one, they balance both.
Some practical ways:
- keep PRs small
- review early, not just at the end
- avoid “mega reviews” with 1000+ lines
Because the larger the change, the harder it is to review well.
And the easier it is for issues to slip through.
Real Example: From Harsh to Helpful
Let’s take a realistic example.
❌ Harsh
This is messy. Refactor it.
This creates:
- confusion
- frustration
- back-and-forth
✅ Helpful
This works, but it’s a bit hard to follow due to the nested conditions.
What do you think about extracting this part into a separate function? It might make the flow easier to read and test.
Now we have:
- context
- suggestion
- collaboration
Same intent, much better outcome.
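For concreteness, here’s a minimal hypothetical sketch of what acting on that comment could look like (invented names, not actual code under review):

```python
# Before: nested conditions make the publishing rule hard to follow.
def publish(user, post):
    if user is not None and user.is_active:
        if post.status == "draft":
            if user.id == post.author_id or user.is_moderator:
                post.status = "published"
                return True
    return False

# After: the permission rule is extracted into its own function,
# so publish() reads linearly and can_publish() is testable alone.
def can_publish(user, post):
    if user is None or not user.is_active:
        return False
    return post.status == "draft" and (
        user.id == post.author_id or user.is_moderator
    )

def publish_v2(user, post):
    if not can_publish(user, post):
        return False
    post.status = "published"
    return True
```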
How to Handle Receiving Feedback
Giving feedback is only half of the equation.
Receiving it matters just as much.
A few things that help:
- assume positive intent
- ask for clarification if something is unclear
- don’t take comments personally
- push back when needed, but with context
A good code review is not about agreeing on everything.
It’s about reaching a better solution together.
Practical Team Agreements
The best teams don’t leave this to chance.
They define how reviews should work.
A few simple agreements can make a big difference:
- “Explain the why” is mandatory
- Prefer questions over judgments
- Focus on high-impact issues first
- Keep PRs small and focused
- No ego in discussions
These are small rules, but they create a much healthier environment.
Final Thoughts
Code reviews are one of the most powerful tools a team has, but only if they’re done right. Bad reviews don’t just slow down development; they create friction, frustration, and eventually silence.
Great reviews do the opposite:
- They improve the code.
- They spread knowledge.
- They build trust.
And in the long run, that’s what makes a team truly fast.
If this resonated with you:
- Leave a ❤️ reaction
- Drop a 🦄 unicorn
- Share the best (or worst) code review you’ve experienced
And if you enjoy this kind of content, follow me here on DEV for more.
Top comments
The "explain the why" point is the one that changed how our team reviews code. We have a rule: if your comment is just flagging a problem, it's not a complete comment. You need to explain the reasoning or link to context. "This could cause a race condition under X load pattern" is a review. "This is bad" is just noise.
The other thing we learned running a distributed team: async code reviews can feel colder than they are. We added a short convention where "Nit:" means it's a preference, not a blocker. Reduced so much confusion about what actually needed to change before merging. The drama usually comes from ambiguity — people don't know if the reviewer is blocking or just thinking out loud.
This is such a great example of a team turning principles into actual practice.
The “explain the why” rule is spot on: it transforms reviews from gatekeeping into knowledge sharing. A comment without context doesn’t just feel harsher, it’s also far less useful. The moment you add reasoning, it becomes something others can learn from (and even challenge constructively).
I also really like your “Nit:” convention. That’s a perfect illustration of reducing ambiguity, which, as you said, is where most of the drama comes from. When people don’t know whether something is blocking or optional, they tend to assume the worst, especially in async communication.
What you’ve done there is subtle but powerful: you’ve made intent visible. And that’s a huge part of keeping reviews calm and efficient.
Have you found that these conventions needed reinforcement over time, or did they stick naturally once the team saw the benefit?
Honestly, they stuck faster than I expected — but only because we made them the default, not the exception. We added a PR template with a checklist that includes "Did you explain the why for non-obvious changes?" and the Nit: convention was in the template's comment guidelines. When something is in the template, it becomes muscle memory within a few weeks. The one thing that needed reinforcement was getting people to actually read the "why" comments in reviews instead of just scanning for approval. We fixed that by making the first review of every new team member's PR a pairing session — walk through the comments together. After that, they get it.
That’s a great example of making the behavior systemic instead of relying on individual discipline.
Making “explain the why” part of the default PR flow is huge, it removes the ambiguity entirely. And I really like the pairing-on-first-review idea; it’s such a simple way to calibrate expectations early instead of correcting later.
The point about people skipping the “why” is interesting too, it shows that good habits need both writing and reading reinforcement. Templates solve the first, but your pairing approach neatly solves the second.
Feels like the common thread here is: if you want review culture to stick, you have to design for it, not just advocate for it.
As someone working a lot with distributed systems, this resonates hard. The bigger and more decoupled your architecture gets, the more important code reviews become as a shared understanding layer, not just a quality gate.
One thing I’ve been thinking about lately: in cloud-native environments (Kubernetes, microservices, infra-as-code), a lot of the “why” behind decisions isn’t obvious from the code itself. It often lives in system constraints, scaling assumptions, or even past incidents.
I’m curious how others handle this.
I’ve started encouraging engineers to treat PR descriptions almost like mini design docs — especially when touching infra or concurrency-sensitive areas. It slows things down a bit upfront, but seems to reduce back-and-forth a lot.
Also, for async teams: how do you keep reviews human? I’ve seen great technical feedback still feel cold or intimidating without small touches of tone.
Would love to hear how others are navigating this, especially in high-scale or multi-team environments ☁️
This is such an important angle, especially once you move beyond single-repo or tightly coupled systems.
I strongly agree: in cloud-native and distributed environments, the “why” often lives outside the code. If that context isn’t brought into the PR, reviewers are forced to guess, and that’s where misunderstandings (or overly cautious blocking) start to creep in.
I tend to think of it this way: authors own the context.
That doesn’t mean writing a full design doc every time, but it does mean giving just enough background that the reviewer can make informed decisions.
Your idea of “mini design docs” in PR descriptions is exactly the direction I’ve seen work well, especially for infra or concurrency-heavy changes.
On the “keeping reviews human” side: small signals go a long way. A quick “Nice catch” or “Good question” can completely change how feedback is perceived, without adding noise.
I am curious: have you found any patterns that help balance just enough context without overwhelming the reviewer?
Love that framing: “authors own context” is going straight into my team’s guidelines 😄
And yes, the “how much context is enough” question is the ongoing struggle.
What’s been working for us lately is a kind of layered approach: a short TL;DR of the change up front, with optional deeper context for anyone who wants it.
That way, reviewers can choose their depth without being forced into a wall of text.
On the human side, I totally agree. It’s funny how a simple “Good catch” or even an emoji can soften what would otherwise feel like criticism. Especially async, tone doesn’t just help. It is part of the message.
One thing I’m still experimenting with: encouraging more “thinking out loud” in reviews without making it feel like indecision. In complex systems, sometimes uncertainty is honest, but it can also confuse authors if not framed well.
Have you seen teams handle that well?
That layered approach is excellent, it respects both time and complexity, which is exactly the balance most teams struggle with. The TL;DR + optional depth model maps really well to how people actually review code in practice.
On the “thinking out loud” point, I’ve definitely seen that go both ways. When it’s unstructured, it can feel like uncertainty or even lack of confidence. But when it’s framed, it becomes one of the most powerful tools in a review.
What seems to work well is making intent explicit, for example:
“Thinking out loud: could this create issues under X condition?”
“Not blocking — just exploring alternatives here”
“I might be missing context, but…”
That small bit of framing does two things:
- It reduces the pressure on the author (they don’t feel immediately judged)
- It invites collaboration instead of forcing a decision
In a way, it turns the review into a shared problem-solving space rather than a pass/fail checkpoint.
And zooming out a bit: what you’re describing across all these practices (context, tone, clarity of intent) really comes down to one thing: reducing ambiguity. That’s the root cause of most “drama” in reviews.
This has been a fantastic discussion, exactly the kind of nuance I was hoping the article would spark 🙂
Thank you @elenchen
This has been such a valuable exchange, thank you @gavincettolo for taking the time to go this deep.
The “framed thinking out loud” idea is 🔥. That’s exactly the missing piece I’ve been trying to articulate. It keeps the door open for collaboration without creating confusion or extra cognitive load.
Also really appreciate the way you tied everything back to reducing ambiguity, that clicked for me. It’s such a simple lens, but it explains so many of the issues we see in real teams.
Definitely taking a few of these ideas back to my team (especially the intent framing + context ownership).
Thanks again for the thoughtful responses, and for writing the article in the first place 🙌
This really hit home. I’ve been on teams where code reviews felt like a battleground, and others where they felt like mentoring sessions: same process, completely different outcomes just because of tone and intent.
One thing I’ve noticed is that consistency matters as much as kindness. If some reviewers are thoughtful and others are blunt or overly critical, it creates confusion about what “good feedback” actually looks like, especially for newer devs.
Curious how you’ve seen teams handle that: do you think it’s better to formalize review guidelines (like a shared checklist or examples of good comments), or let the culture evolve more organically through modeling behavior?
Loved the article ❤️ definitely sharing this with my team.
That’s a great observation, inconsistency is one of the fastest ways to erode trust in code reviews.
I’ve seen teams struggle with exactly this, especially as they grow. Newer developers, in particular, end up “guessing” which tone is acceptable depending on who reviews their PR, and that uncertainty creates unnecessary stress.
In my experience, the best approach is a mix of both:
- Start with lightweight guidelines. Not rigid rules, but a shared baseline. Things like “prefer questions over commands” or “criticize the code, not the person.” This gives everyone a common language.
- Reinforce through modeling. Guidelines alone don’t stick. Senior engineers and team leads need to consistently demonstrate the behavior in real reviews; that’s what people actually learn from.
- Call it out (gently) when it drifts. If a review comes across as too harsh or unclear, a quick nudge in private can help recalibrate without creating friction.
- Make it part of onboarding. Showing examples of good reviews early helps set expectations before bad habits form.
So yes — some structure helps, but culture ultimately spreads through everyday interactions. The goal isn’t to standardize personality, just to align on respect and clarity so everyone feels safe contributing.
Really glad the article resonated with you 🙂
Really enjoyed this, especially the emphasis on treating reviews as collaboration, not judgment. The point about separating code from coder and using questions instead of blunt statements resonates a lot. It’s surprising how much tone alone can change whether feedback is accepted or resisted.
One thing I’ve been wondering in practice: how do you keep this “no-drama” approach working when time pressure is high? In many teams, deadlines tend to push reviews toward being shorter, more direct (and sometimes harsher), or even skipped altogether, which seems to undo all the cultural effort.
Do you have any concrete habits or team rituals that help maintain constructive feedback even under tight delivery pressure?
Great question, and honestly, this is where most teams slip.
Time pressure doesn’t create bad review habits, it exposes them. If a team only “does good reviews” when things are calm, then the practice isn’t really embedded yet.
A few things I’ve seen work in real teams:
- Lower the bar for what a “good review” is under pressure. It doesn’t have to be perfect or exhaustive, just constructive. Even a quick “Have you considered X?” is better than silence or blunt directives.
- Optimize for smaller changes. When PRs are small, reviews stay fast and thoughtful. Large PRs are where tone and quality degrade first when time is tight.
- Make intent explicit. A short line like “Quick pass due to deadline, focusing on correctness over polish” sets expectations and prevents misinterpretation of brevity as harshness.
- Keep the question mindset, even in shorthand. “Could we simplify this?” takes almost no more time than “Simplify this,” but lands very differently.
- Protect the culture, not just the code. Skipping a suggestion is fine under pressure. Damaging trust isn’t, because you pay that cost for much longer than the deadline.
In short: under pressure, don’t aim for ideal reviews, aim for respectful and clear ones. That consistency is what makes the “no-drama” culture actually stick.