Tailscale's CEO just dropped a post on Hacker News that hit 362 points in hours. The claim is simple and brutal.
Every layer of approval makes a process 10x slower.
Not 2x. Not 3x. Ten times. And he's counting wall-clock time, not effort.
The Math That Hurts 🧮
Here's the math. You code a bug fix in 30 minutes. Getting it peer reviewed takes 5 hours. Getting a design doc approved takes a week. Getting it on another team's calendar takes a quarter.
Each step multiplies the last by 10.
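The multiplication above can be sketched in a few lines. The step names and durations are the post's hypothetical round numbers, not measured data:

```python
# A minimal sketch of the "each approval layer is 10x slower" claim.
# Durations are in minutes and are illustrative, taken from the post.
pipeline = [
    ("write the fix", 30),
    ("peer review", 30 * 10),              # 5 hours
    ("design doc approval", 30 * 100),     # ~1 work week (50 hours)
    ("cross-team scheduling", 30 * 1000),  # ~a quarter (500 hours)
]

total = sum(minutes for _, minutes in pipeline)
for step, minutes in pipeline:
    print(f"{step:>22}: {minutes:>6} min ({minutes / total:.1%} of total)")
```

Run it and the last layer dominates: cross-team scheduling accounts for roughly 90% of the wall-clock time, which is the whole point of the 10x framing.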
AI Made It Worse, Not Better
Now throw AI into the mix. Claude writes your fix in 3 minutes instead of 30. Cool. You saved 27 minutes. But the reviewer still takes 5 hours. And now they're annoyed because you didn't even read the code yourself before sending it over.
The data backs this up. A 2025 Faros AI study of 10,000 devs found that engineers using AI tools completed 21% more tasks and merged 98% more PRs. But review time jumped 91%. LinearB's benchmarks show AI-generated PRs wait 4.6 times longer before a reviewer picks them up.
Teams went from 10-15 PRs per week to 50-100. The pipeline is flooded.
The Factory Floor Taught Us This Already 🏭
Here's the uncomfortable part. Adding more reviewers doesn't help. Avery Pennarun, the Tailscale CEO, compared it to W. Edwards Deming's manufacturing research. In factories, adding a second QA pass after the first didn't double quality. The first QA team relaxed because they knew someone would catch their misses. The production team stopped checking their own work because that's what QA was for.
The same thing happens in code review. The more review layers you add, the less responsibility anyone takes for quality at the source.
Deming's fix in manufacturing was radical. Toyota eliminated the QA phase entirely. Gave every worker a stop-the-line button. Built quality into the process instead of inspecting for it at the end.
American factories copied the buttons. Nobody pushed them. They were afraid of getting fired. The system only works when there's trust.
The Trust Problem 🔒
So what does this mean for us?
The Sonar 2026 survey says 96% of devs don't trust AI-generated code. Only 32.7% of AI PRs pass review without changes, compared to 84.4% for human code. The trust isn't there. And without trust, you can't remove review layers. You can only add more.
Pennarun's solution is counterintuitive. Smaller teams with clear interfaces. Components built by tight groups of a few people and a few coding bots. Quality by evolution, not inspection.
It's Not a Tooling Problem ⚡
The code review bottleneck isn't a tooling problem. It's an org design problem wearing a technical costume.
AI made the first step of the pipeline 10x faster. The steps after it haven't moved. Until we fix those, every new coding speedup just creates a bigger traffic jam downstream.
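This is Amdahl's law applied to a review pipeline: speeding up one stage barely moves end-to-end time when the other stages dominate. A quick check, reusing the post's hypothetical durations in minutes:

```python
# Sketch: what a 10x speedup on coding alone does to end-to-end time.
# Durations (minutes) are the post's illustrative numbers, not real data.
before = {"code": 30, "review": 300, "design doc": 3000, "scheduling": 30000}
after = dict(before, code=3)  # AI makes only the coding step 10x faster

t_before = sum(before.values())
t_after = sum(after.values())
speedup = t_before / t_after
print(f"overall speedup: {speedup:.4f}x")  # barely above 1x
```

The overall pipeline gets less than 0.1% faster, because coding was never the bottleneck.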
What's the review culture like on your team — is it a rubber stamp, a bottleneck, or something that actually works?