DEV Community

Adam - The Developer
The Developer Who Reviews Everything and Ships Nothing

You've seen this person. Maybe you've worked with them for years.

They leave 40 comments on your PR. Variable names, spacing philosophy, whether your abstraction is "truly necessary," a link to a 2014 blog post about hexagonal architecture. The review sits open for a week. Then two.

Meanwhile, their own tickets age quietly in the backlog. Untouched.


Before We Go There

Most strict reviewers are not villains.

A lot of them have been burned. They've watched a rushed merge take down production at 2am. They've inherited a codebase where "we'll clean it up later" compounded for three years into something unmaintainable. Strictness in review often comes from real scar tissue. That's not dysfunction. That's experience talking.

This article isn't about those people.

It's about a specific, observable pattern: the developer whose review activity is consistently high, whose shipping activity is consistently low, and whose involvement reliably increases cycle time without a corresponding increase in outcome quality. That pattern has a real cost and it rarely gets named out loud.

So let's name it.


The Pattern

Spotting it doesn't require mind-reading. It shows up in the data.

PRs they author are rare. When they do appear, they're small, low-risk, or six weeks overdue. PRs they touch accumulate long threads, mostly non-blocking comments that aren't labeled as such, leaving authors to guess what actually needs to change. Review cycles on their queue run longer than the team average. Features that slip usually have their fingerprints somewhere in the timeline.

That's the shape of it. Measurable. Repeatable. Worth paying attention to.


Where They Put Their Time

Senior engineers have limited hours. Where those hours go is a signal.

A senior spending eight hours a week in code review and two hours writing code has made a choice. Sometimes that's the right call — architecture, incident response, debugging gnarly production issues, mentorship. Real contributions that don't show up in merge counts.

But when the same person's review comments outnumber their merged commits by a factor of ten, quarter after quarter, you're looking at someone who has concentrated their influence in the one place where they can evaluate others without being evaluated themselves.

That asymmetry is the tell.


What It Actually Costs

When review cycles drag on for days, the effects compound quietly.

The author loses context. They've moved on mentally to the next problem. When they return to address 30 comments, they're doing archaeology on their own work.

Junior developers learn that shipping is scary. That there's always something wrong. That the bar is impossibly high, so maybe it's better to ask fewer questions and wait for someone to tell you what to do.

Momentum dies. Not in one dramatic moment, but comment by comment, week by week.

And the developer with the high standards? Present at every standup. Very visible. Very engaged. Zero shipped features to show for the sprint.


Nitpicking Is Not Mentorship

There's a version of this that gets laundered through mentorship language.

"I'm just trying to teach them." "How else will they learn?" "I wouldn't be doing my job if I let this slide."

Genuine mentorship explains tradeoffs. It asks questions instead of demanding changes. It approves code that's good enough while offering perspective on what could be better next time.

What it doesn't do is make someone feel like their work is never good enough, while that same reviewer's own code somehow never faces the same gauntlet.

If your "mentorship" only flows one direction and none of your mentees can ship without your sign-off on 40 line items, that's not mentorship. That's a bottleneck with good branding.


The Accountability Gap

Here's a diagnostic worth running.

Track for one month which developers on your team ship production code. Not who reviews, not who comments — who actually merges working features to production. Then look at who has the most comments open across other people's PRs.

A large gap between those two lists is worth investigating. Not a verdict. Seniors do invisible work that won't show up in a merge count, and that work genuinely matters.

But here's the cut: if someone's review involvement is consistently high, their direct output is consistently low, and the team's cycle time is getting worse, "I do invisible work" becomes a harder argument to sustain. Shipping isn't the only form of contribution. But consistent absence from shipping alongside heavy review influence is a smell, and pretending otherwise doesn't make it go away.
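The one-month tally can be sketched in a few lines. This is a minimal illustration, not a tool: the event format, names, and `top_n` cutoff are all assumptions, and the input would come from whatever your forge's API exports (merged PRs and review comments per person).

```python
# Minimal sketch of the accountability-gap diagnostic described above.
# Input format is an assumption: one (author, kind) tuple per event,
# where kind is "merge" or "review_comment".

from collections import Counter

def accountability_gap(events, top_n=3):
    """Compare who merges code with who comments on it.

    Returns the people who rank among the top commenters but not
    among the top mergers -- candidates for a closer (and charitable)
    look, not a verdict.
    """
    merges = Counter(a for a, k in events if k == "merge")
    comments = Counter(a for a, k in events if k == "review_comment")

    top_shippers = {a for a, _ in merges.most_common(top_n)}
    top_commenters = {a for a, _ in comments.most_common(top_n)}
    return top_commenters - top_shippers

# Hypothetical month of activity:
events = [
    ("ana", "merge"), ("ana", "merge"), ("ben", "merge"),
    ("cal", "review_comment"), ("cal", "review_comment"),
    ("cal", "review_comment"), ("ana", "review_comment"),
    ("ben", "review_comment"),
]
print(accountability_gap(events, top_n=1))  # cal comments heavily but never merges
```

Remember the caveat from the paragraph above: a name showing up here starts a conversation, not a performance review.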


Fixes With Actual Teeth

The standard advice is: add review SLAs, separate blocking from non-blocking comments, track cycle time. All correct. All worth doing. All also pretty easy to quietly ignore.

If the pattern is entrenched, you need something sharper.

Cap non-blocking comments. If a reviewer leaves more than five non-blocking comments, they roll them into a summary or they stay quiet. Unlimited low-stakes commentary costs the reviewer nothing and costs the author a morning. Change the economics.
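The cap is enforceable mechanically, for example as a post-processing step in a review bot. A sketch, assuming a simple comment structure (`blocking` flag plus body) and a cap of five; adapt both to your tooling:

```python
# Sketch of the non-blocking comment cap as a review-bot step.
# The comment dict shape and the cap value are assumptions.

NON_BLOCKING_CAP = 5

def enforce_cap(comments):
    """comments: list of dicts with "blocking" (bool) and "body" (str).

    Blocking comments pass through untouched. If non-blocking comments
    exceed the cap, roll them into one summary comment so the author
    reads a digest instead of a wall of threads.
    """
    blocking = [c for c in comments if c["blocking"]]
    nits = [c for c in comments if not c["blocking"]]
    if len(nits) <= NON_BLOCKING_CAP:
        return blocking + nits
    summary = "Non-blocking notes (none of these block merge):\n" + \
        "\n".join(f"- {c['body']}" for c in nits)
    return blocking + [{"blocking": False, "body": summary}]
```

The point isn't the code; it's that the rollup happens by default, so opting out of the cap takes effort instead of opting in.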

Require a patch for strong objections. If a reviewer argues an approach is wrong, they should be able to show an alternative. Not to embarrass anyone — because it forces the objection to get concrete. A lot of "this architecture is problematic" comments evaporate the moment someone has to write the better version.

Make the ratio visible. Track review comments opened vs. PRs merged per person, alongside cycle time per reviewer. Don't make it a performance metric. Just make it visible. Sunlight is usually enough.

Know who actually has veto power. Not every PR needs every senior. Be explicit about who is a required approver versus who is optional feedback. When everyone with an opinion is a required approver, you've handed veto power to whoever is most willing to hold out longest.
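On GitHub, one concrete way to make this explicit is a CODEOWNERS file combined with the "Require review from Code Owners" branch-protection setting: owners of a matched path become required approvers, and everyone else's review is optional feedback. The paths and handles below are illustrative:

```text
# CODEOWNERS -- required approvers only (hypothetical paths/teams).
# Anyone may still comment; only these approvals gate the merge.
/billing/   @payments-lead
/infra/     @platform-team
# No entry for /docs/: any maintainer's approval counts.
```

Without the branch-protection setting, CODEOWNERS only requests reviews; it's the combination that turns "who has veto power" from a social question into a written one.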


Raising the Bar vs. Holding the Door Shut

You can tell the difference by asking one question: does this person's involvement make the team ship more, or less?

If every PR they touch becomes a negotiation, if every week they're reviewing has longer cycle times than the weeks they're out, if junior developers dread their feedback instead of seeking it — that's your answer.

Raising the bar means the team gets better over time. Patterns become consistent. People grow and ship with more confidence.

Holding the door shut means nothing gets through without a fight, nobody proposes anything ambitious, and your most capable people quietly start updating their resumes.


The Mirror

When did you last ship something? When did you last put your own code up for the same level of scrutiny you apply to others?

If your presence in the review process reliably slows shipping more than it improves outcomes, you are not raising standards. You are taxing the team.

The engineers who actually elevate a codebase over time make things better and faster simultaneously. That's the bar. It's harder than leaving comments.
