DEV Community

Harsh
AI Is Quietly Destroying Code Review — And Nobody Is Stopping It

It Started With a PR That Made Me Question Everything

Six months ago, I merged a pull request that I'm still not proud of.

The code looked clean. The logic seemed sound. My AI assistant had helped write it, another AI tool had reviewed it, and I — a senior developer with 5 years of experience — had approved it with a confident "LGTM 🚀".

Three weeks later, it caused a data inconsistency bug that took us 40 hours to debug.

The worst part? When I went back and actually read the code — really read it — I could see the problem. It was hiding in plain sight, beneath perfectly formatted, well-named, beautifully commented code that looked like it was written by a thoughtful engineer.

It wasn't written by a thoughtful engineer. It was generated by one AI, rubber-stamped by another, and approved by a human who had forgotten how to be skeptical.

That human was me.


The New Code Review Pipeline (And Why It's Broken)

Here's what "code review" looks like at a growing number of teams right now:

Developer → GitHub Copilot writes code
         → CodeRabbit / Cursor reviews it
         → Developer skims the AI summary
         → "Looks good!" ✅
         → Merge

We've automated the process of code review without preserving the purpose of it.

Code review was never just about catching bugs. It was about:

  • Knowledge transfer — juniors learning from seniors by reading real decisions
  • Architectural awareness — everyone understanding how the system fits together
  • Collective ownership — building a team that genuinely cares about the codebase
  • Human judgment — asking "wait, should we even be doing this?"

AI tools are shockingly good at the surface layer. They'll catch a missing null check, flag a potential SQL injection, suggest better variable names.

But they don't ask why.


What AI Can't See (But A Human Reviewer Would)

Let me give you a real example from my team.

A junior dev submitted a PR that added a new caching layer. The code was technically correct. The AI reviewer loved it — "Efficient implementation! Good use of Redis TTL! Well-documented!"

What the AI didn't ask:

  • "Hey, we already have a caching layer in the service above this. Did you know about it?"
  • "This will cache user-specific data globally. Is that a GDPR concern?"
  • "Why are we solving this with a cache? Is the underlying query just slow because of a missing index?"

A senior engineer would have asked all three questions in the first 30 seconds of reading.

The AI approved it. I almost did too.

This is the silent danger. It's not that AI writes bad code. It's that AI-assisted code review is selectively blind: precise on syntax, oblivious to context.


The Psychological Shift Nobody Is Talking About

Here's what's happening inside our heads, and we need to be honest about it.

When I open a PR that was written with AI assistance, I feel a subtle but real shift. The code looks more polished. The variable names are consistent. The comments are thorough. My lizard brain whispers: "This seems fine."

I'm fighting against the halo effect — where surface quality signals deep quality.

Handwritten code with a messy variable name and a // TODO: fix this comment actually makes me more alert. I slow down. I ask questions. I engage.

AI-generated code is too clean to trigger my suspicion.

And then there's the social pressure layer. If a CodeRabbit or Copilot review says "No issues found ✅", and you leave a critical comment, you feel like you're the one being difficult. After all, the AI checked it. Who are you to disagree?

This is how we're slowly outsourcing our professional judgment.


I'm Not Anti-AI. I'm Pro-Honesty.

Let me be very clear: I use AI tools every single day. They make me faster. They catch things I miss. They're genuinely useful.

But there's a difference between:

  • AI as a first pass — catching obvious issues before human review
  • AI as a replacement — skipping human judgment entirely

The problem isn't the tools. The problem is how we're positioning them.

When a company says "our AI does code review," they're making a product claim. When a developer says "the AI already checked it," they're making an excuse.

We need to stop confusing the two.


What Real Code Review Looks Like in the AI Era

Here's what I've changed on my team after that painful incident:

1. AI review is mandatory. Human review is non-negotiable.

AI tools flag the obvious. Humans review for context, architecture, and consequence. Both happen. Neither replaces the other.

2. Ask "Why" out loud, every time.

Before approving any PR, I now force myself to answer: "Why is this change being made?" If I can't answer without looking at the ticket, I don't approve it.

3. Rotate code review ownership.

Juniors review seniors' PRs. Yes, really. The code gets better AND knowledge transfers in both directions.

4. Add AI-generated code markers.

If code is substantially AI-generated, it gets tagged. Not as a punishment — as a signal for extra human scrutiny, not less.

5. Celebrate slow reviews.

A PR that sits in review for a day with 10 comments is a success story. A PR merged in 5 minutes with 0 comments should make you nervous.
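Rule #1 above ("AI review is mandatory, human review is non-negotiable") can be enforced mechanically. As a minimal sketch: GitHub's REST endpoint `GET /repos/{owner}/{repo}/pulls/{number}/reviews` returns review objects with a `user` and a `state`, and a merge gate can refuse any PR whose only approvals come from bot accounts. The fetching step is omitted here; only the decision logic is shown, and the sample account names are made up.

```python
# Sketch of a "humans decide" merge gate. Review dicts are assumed to be
# in the shape returned by GitHub's REST API (pulls/{number}/reviews);
# fetching them over HTTP is left out for brevity.

BOT_SUFFIX = "[bot]"  # GitHub app/bot logins end with this suffix


def has_human_approval(reviews: list[dict]) -> bool:
    """True if at least one APPROVED review came from a human account."""
    for review in reviews:
        user = review.get("user") or {}
        is_bot = (
            user.get("type") == "Bot"
            or user.get("login", "").endswith(BOT_SUFFIX)
        )
        if review.get("state") == "APPROVED" and not is_bot:
            return True
    return False


if __name__ == "__main__":
    # A bot LGTM alone should not be enough to merge.
    reviews = [
        {"user": {"login": "coderabbitai[bot]", "type": "Bot"}, "state": "APPROVED"},
        {"user": {"login": "harsh", "type": "User"}, "state": "APPROVED"},
    ]
    print(has_human_approval(reviews))
```

Wiring this into CI (or simply using GitHub's built-in "require approvals" branch protection, which ignores bots by default) makes rule #1 a property of the repository rather than a matter of discipline.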


The Thing That Keeps Me Up At Night

We are training a generation of developers who have never had to truly read someone else's code.

They open a PR, run it through AI review, skim the summary, and merge. They're not lazy — they're efficient, by the only definition of efficiency they've been taught.

But code review is where developers grow. It's where you learn to think about edge cases. It's where you absorb architectural patterns. It's where you develop the professional instinct that no AI can give you.

If we automate that away, we don't just get worse code reviews.

We get worse engineers.

And in five years, when we need someone to make a judgment call that no AI can make — someone who deeply understands the system, the business, the users — we'll look around and realize we never developed that person.

Because we let an AI do their job for them before they got the chance to learn it.


What Can You Do Right Now?

  1. Audit your team's review process. How many PRs are merged with zero human comments? That number should concern you.

  2. Set a rule: AI review assists, humans decide. Document it. Enforce it.

  3. Have the uncomfortable conversation. Tell your team that "LGTM, AI checked it" is not a valid review.

  4. Review one PR this week the old-fashioned way — no AI summary, just you and the code diff. Notice how different it feels.

  5. Share this article if it resonated. Because honestly? Most teams won't fix this until enough people start talking about it.
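Action #1 on the list, the audit, can be a ten-line script. This is a rough sketch only: the PR dicts use a simplified, assumed shape (`merged`, `human_comments`) that you would build yourself from whatever data source you have (e.g. GitHub's REST API); the field names are illustrative, not an actual API schema.

```python
# Sketch of the review audit: what fraction of merged PRs received
# zero human review comments? The input shape is assumed/simplified,
# not a real API response.

def silent_merge_ratio(prs: list[dict]) -> float:
    """Fraction of merged PRs that got no human review comments."""
    merged = [pr for pr in prs if pr["merged"]]
    if not merged:
        return 0.0
    silent = [pr for pr in merged if pr["human_comments"] == 0]
    return len(silent) / len(merged)


if __name__ == "__main__":
    history = [
        {"merged": True, "human_comments": 0},
        {"merged": True, "human_comments": 4},
        {"merged": True, "human_comments": 0},
        {"merged": False, "human_comments": 1},  # unmerged PRs don't count
    ]
    ratio = silent_merge_ratio(history)
    print(f"{ratio:.0%} of merged PRs had no human comments")
```

If that number comes back high, you have your uncomfortable conversation starter for action #3.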


Final Thought

AI is not destroying code review because it's malicious. It's doing it because we let it. Because "faster" felt like "better." Because we confused automation with improvement.

The best code reviewers I know don't just read code. They read between the lines. They ask uncomfortable questions. They slow things down when slowing down is the right call.

That's a human skill. Guard it like it's valuable.

Because it is.


If this hit close to home, I'd love to hear your experience in the comments. What does AI-assisted code review look like at your company? Are you navigating this well — or quietly worried, like I was?

Let's talk about it before it gets worse.


✍️ Written by me, refined with AI assistance. The opinions, experiences, and judgment calls are entirely my own.

Top comments (5)

EmberNoGlow

Excellent post! I was immediately reminded of a post about how AI-generated PRs started being pushed into the Godot repository, and it was truly disgusting. AI wasn't that powerful back then (this was probably 2023 or 2024), and even the people who wrote the code didn't know how it worked. This is probably the biggest problem with modern open source, where a team's efforts are focused not on writing code but on filtering PRs generated in a couple of minutes.

Harsh

Thank you for sharing this! I hadn't heard about the Godot repository incident, but that's both fascinating and terrifying! 😮 The fact that people were pushing AI-generated PRs without even understanding the code is exactly the kind of thing I was worried about. You're absolutely right — the problem isn't just about reviewing, it's about the signal-to-noise ratio in open source. Maintainers are now spending more time filtering out AI slop than actually reviewing quality contributions. It's like we've shifted from 'how do we write good code' to 'how do we filter out bad AI code.' Do you have a link to that Godot discussion? I'd love to read more about how they handled it. Thanks again for adding this context!

EmberNoGlow

Yes! Here is a post by Rémi Verschelde.

Harsh

Thank you so much for sharing the link, EmberNoGlow! 💖

EmberNoGlow

You're welcome