Andrés Clúa
AI Writes the Code. AI Reviews the Code. So Why Do They Still Need You?

An AI writes your code. A different AI reviews it, flags the bugs, suggests the fix, and posts a clean summary to your PR. Cost: about $25. Time: minutes.

So what exactly are you doing here?

That's not a rhetorical question. It's the question every engineering team is quietly asking right now. And the companies that get it wrong are already paying for it.

The Wall and the Tap

Amazon recently held a mandatory meeting to address a wave of production incidents tied to AI-assisted code. Their internal briefing described outages with "high blast radius" caused by generative AI changes, and admitted that best practices for this workflow simply don't exist yet.

One incident stands out: an AI coding tool was asked to make a routine change to an AWS environment. Instead, it decided to delete and recreate the entire thing. Thirteen hours of recovery. Amazon called it an "extremely limited event." The affected tool served customers in mainland China.

They asked the AI to fix a leaky tap, and it knocked down the wall.

Amazon's fix wasn't more automation. It was more human oversight. Junior and mid-level engineers can no longer ship AI-assisted code without a senior signing off. One of the most technically advanced companies in the world looked at the problem and concluded: we need more experienced humans in the loop, not fewer.

The Numbers Agree

A CodeRabbit analysis of over 470 pull requests found that AI-generated code produces roughly 1.7x more issues per PR than human-written code. The biggest gaps weren't cosmetic. Logic errors, missing null checks, broken control flow, security misconfigurations. The kind of things that pass CI but blow up in production at 3 AM.

AI-generated code tends to look clean. It follows conventions, names things reasonably, formats well. But underneath that surface, it skips the guardrails that experienced engineers build instinctively. The code looks right. That's what makes it dangerous.
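A minimal sketch of what that can look like in practice. This is a hypothetical example, not one from the CodeRabbit analysis: the first function is well-named, tidy, and follows conventions, but divides by zero on an empty list. That's the kind of bug that passes a quick review and CI, then fails on real data.

```python
def average_order_value(orders):
    """Looks right: clean name, clean body -- but crashes on an empty list."""
    return sum(o["total"] for o in orders) / len(orders)


def average_order_value_safe(orders):
    """The guard an experienced reviewer adds instinctively."""
    if not orders:
        return 0.0
    return sum(o.get("total", 0) for o in orders) / len(orders)
```

Nothing about the first version looks wrong at a glance. That's the point: the surface is fine, the edge case is missing.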

We Didn't Ship Garbage Before AI Either

Before AI we had code reviews. QA cycles. Staging environments. Integration tests. Senior engineers who would block your PR because they'd seen that exact failure before in production. We built entire cultures around shipping reliable software because humans were in the loop.

AI doesn't eliminate that need. It amplifies it. When you produce code 10x faster, you produce bugs 10x faster too. The volume goes up. The surface area for failure goes up. Remove the human filter to "move fast" and you get what Amazon got: a trend of high-impact outages with no safeguards.

The smart move was never to take humans out of the loop. It's to take the tedious work out of their day so they can focus on what prevents disasters: judgment, context, and ownership.

Great Demo. Terrible Production App.

Tools like Lovable, Bolt, and others let you go from idea to working app in minutes. The UIs look polished, the features work, the demo is convincing. As a proof of concept, they're unbeatable.

But if you're reading this, there's a decent chance you've already hit the wall. The wall where the app works beautifully in a demo but falls apart the moment real users touch it. No rate limiting. No input validation. No security headers. Endpoints wide open. The kind of stuff that doesn't show up in a screenshot but absolutely shows up in a penetration test.

I wrote about this after stress-testing an AI-built app. XSS payloads accepted with a smile. SQL injection that went through without a flinch. An SSRF vulnerability where the server tried to fetch internal cloud metadata from a URL someone typed into a text field. The app worked. It just wasn't safe.
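For the SSRF case, the fix is conceptually simple: resolve a user-supplied URL and refuse to fetch anything that lands on an internal, loopback, or link-local address before making the request. Here's a hedged sketch using only the Python standard library; the function name and exact policy are illustrative assumptions, not code from the app I tested.

```python
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to internal or cloud-metadata addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # 169.254.169.254 (link-local) is the classic cloud metadata target.
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

A production version would also pin the resolved IP for the actual request (to avoid DNS rebinding) and apply an allowlist where possible, but even this basic check would have blocked the metadata fetch described above.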

I want people who use these tools on my team. Someone who spins up a working prototype in an afternoon to validate an idea before we spend two sprints building it? That person is incredibly valuable.

But here's the thing. Designers, marketers, and even developers who have never dealt with infrastructure, security, or scale should not look at these tools and think "I can ship this in production by tomorrow." Not because they're not smart. Because they haven't lived through the failures that teach you what production actually demands. Experience isn't gatekeeping. It's knowing what breaks when 10,000 real users hit your app at once instead of just you clicking around in a browser.

The AI builds the proof of concept. The engineer turns it into a product.

Your Value Was Never the Syntax

AI doesn't know your system. It doesn't know that your team decided last sprint to deprecate that service. It doesn't know the CTO's opinion on over-abstraction. It doesn't understand that the "elegant" solution will break the deploy pipeline because of a legacy dependency nobody wants to touch.

Code is decisions. Every function, every architectural choice, every trade-off reflects something the model doesn't have access to: the context of your team, your users, your business. The model gives you signal. You give it meaning.

The Responsibility Didn't Change

AI writing code doesn't eliminate engineers. It eliminates the mechanical fraction of the job. What remains is judgment, context, ownership, and taste. The things no model update will automate away because they require understanding why you're building something, not just how.

We're still human. The tools changed. The responsibility didn't.

The question isn't whether AI can write and review your code. It can. The question is whether you can do everything around the code that the AI can't.

If the answer is yes, you've never been more valuable.
