Amazon just told junior and mid-level engineers they need a senior engineer to sign off on any AI-assisted code change. This came after a series of outages, including one on March 2 where their own AI tool Q was flagged as a primary contributor to lost orders and website errors.
An internal briefing described a "trend of incidents" with high blast radius involving gen-AI assisted changes. Amazon's SVP of e-commerce services Dave Treadwell reportedly pushed for the new policy. The company publicly downplayed it, with a spokesperson telling Business Insider it's "not accurate" that all AI changes need sign-off. But internal documents tell a different story.
The real problem isn't bad code
Here's what's interesting. This isn't Amazon saying AI code is bad. They're saying AI code without oversight is bad. There's a difference.
The problem isn't that AI writes broken code. It's that AI writes plausible-looking code that passes a quick glance. A junior dev generates something with Copilot or Q; it looks reasonable, it passes basic tests, and it ships. Then it hits an edge case the AI never considered, because it was pattern-matching, not reasoning.
I've seen this on my own team. AI-generated code passes code review faster because reviewers unconsciously trust code that looks clean and well-structured. But looking clean and being correct aren't the same thing. The most dangerous bugs are the ones that look right.
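A toy sketch of that failure mode (entirely hypothetical, not Amazon's code or any real incident): a generated helper that reads cleanly, passes the obvious spot check, and still breaks on inputs the pattern never covered.

```python
def p99_latency(samples_ms):
    """Return the 99th-percentile latency from a list of samples."""
    ordered = sorted(samples_ms)
    # Looks right at a glance, but int(len * 0.99) is index math the
    # reviewer never checked: an empty window (say, during a deploy)
    # raises IndexError, and tiny sample counts silently return the min.
    return ordered[int(len(ordered) * 0.99)]

# The "quick glance" test passes:
print(p99_latency(range(100)))   # → 99, looks correct

# The edge cases the pattern never covered:
print(p99_latency([5]))          # → 5, fine by luck
try:
    p99_latency([])              # empty window during a deploy
except IndexError:
    print("IndexError on empty input")
```

The function is exactly the kind of code that "looks clean": short, sorted, one return statement. The bug only shows up under conditions nobody typed into the prompt.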
Adding friction back in
Amazon's response is basically adding friction back into the process. More documentation. More approvals. "Controlled friction," they call it.
Which is ironic — because the entire pitch for AI coding tools was removing friction. 🤔
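For a concrete sense of what "controlled friction" might look like in practice, here is a minimal sketch of a merge gate. Everything in it is invented for illustration: the "ai-assisted" label, the approver roster, and the policy itself are assumptions, not Amazon's actual process.

```python
# Hypothetical policy gate: changes labeled as AI-assisted can only
# merge with at least one approval from a designated senior reviewer.
# Label name and roster are placeholders, not a real system's values.
SENIOR_APPROVERS = {"alice", "bob"}

def merge_allowed(labels, approvers):
    """Return True if a change may merge under the policy."""
    if "ai-assisted" not in labels:
        return True                          # human-only change: normal review
    # AI-assisted change: require a senior sign-off
    return bool(SENIOR_APPROVERS & set(approvers))
```

The point of a check like this isn't sophistication; it's that the friction is deliberate and enforced by tooling rather than left to reviewer memory.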
The real question isn't whether this is the right move. It obviously is. The question is whether every company will need to learn this lesson through their own outages first.
Right now most teams are in the "move fast" phase with AI tools. The "break things" part hasn't happened yet for most of them. Amazon operates at a scale where "break things" means millions in lost revenue.
The most honest take from big tech
One internal doc said:
"GenAI usage in control plane operations will accelerate exposure of sharp edges and places where guardrails don't exist."
That's the most honest assessment of AI coding I've read from a big tech company. Not "AI will replace developers." Not "AI is just a tool." But: AI will find every gap in your process and blow through it.
What's your team's policy on reviewing AI-generated code? Same bar as human code, or stricter?