Every developer has stared at a codebase and thought: "burn it all down." The spaghetti logic, the bizarre naming conventions from three CTOs ago, the test suite that tests nothing. I get it. I've felt that exact urge.
But here's what I've learned after watching this play out a dozen times: the rewrite almost never works.
## The Seduction of the Clean Slate
There's a psychological pull to starting fresh. Fred Brooks called it the "second-system effect" back in 1975 — the tendency to over-engineer your second attempt because you think you finally understand the problem. You don't. You understand the old problem. The new one is already mutating.
The pitch always sounds reasonable. "We'll use modern frameworks. We'll design it properly this time. Six months, tops." Six months becomes eighteen. The old system keeps shipping features while the rewrite team burns through budget in a parallel universe.
Netscape learned this the hard way. They rewrote Navigator from scratch — it took three years. By the time they shipped, Internet Explorer owned the market. Joel Spolsky wrote about this back in 2000 and called it "the single worst strategic mistake that any software company can make." Two decades later, companies still make it weekly.
## Why Rewrites Implode
You're not rewriting code — you're rewriting institutional knowledge. That weird if statement on line 847? It handles a billing edge case from 2019 that cost $200K when it broke. Nobody documented it. The original dev left. You'll discover it three months post-launch when angry customers call.
The old system doesn't freeze while you build. Sales closes deals. Support files tickets. Product ships features. Every week, your rewrite falls further behind the moving target. You're building a copy of last year's product.
You split your engineering team in half. One half maintains the legacy system (resentfully). The other half builds the new thing (optimistically). Neither has enough people. Both codebases suffer.
## The Strategy That Actually Works
Incremental replacement. The Strangler Fig pattern, named after the tropical figs that germinate in a host tree's canopy, grow down around it, and eventually replace it entirely.
Pick the ugliest, most painful module. Rebuild it behind an interface. Swap it in. Repeat. You get immediate wins. The risk stays contained. If module three goes sideways, modules one and two are already running in production.
Martin Fowler has been advocating this approach for years, and the economics support it: you deliver value continuously instead of gambling everything on a big-bang switchover.
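The swap-one-module-at-a-time loop can be sketched as a facade that routes each call to either the legacy implementation or its replacement behind the same interface. Everything below (`BillingService`, the flag, the flat 8% tax) is a hypothetical, minimal illustration, not anyone's production design:

```python
# Strangler Fig sketch: both implementations satisfy one interface,
# and a facade flips traffic between them with a flag. In practice the
# flag would come from config and cover one module at a time.
from abc import ABC, abstractmethod

class BillingService(ABC):
    @abstractmethod
    def invoice_total(self, order: dict) -> float: ...

class LegacyBilling(BillingService):
    def invoice_total(self, order: dict) -> float:
        # Old logic, quirks and all, still running in production.
        return sum(order["items"]) * 1.08

class NewBilling(BillingService):
    def invoice_total(self, order: dict) -> float:
        # Rebuilt module: must match legacy behavior before cutover.
        return round(sum(order["items"]) * 1.08, 2)

class BillingFacade(BillingService):
    """The strangler facade: callers never know which side served them."""
    def __init__(self, use_new: bool) -> None:
        self._impl: BillingService = NewBilling() if use_new else LegacyBilling()

    def invoice_total(self, order: dict) -> float:
        return self._impl.invoice_total(order)

order = {"items": [10.0, 20.0]}
legacy = BillingFacade(use_new=False).invoice_total(order)
new = BillingFacade(use_new=True).invoice_total(order)
```

The point of the facade is that callers depend only on the interface, so flipping the flag back is a one-line rollback if module three goes sideways.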
## When a Rewrite Actually Makes Sense
I'm not saying never. A few situations where starting over is defensible:
- Platform migration: desktop to web, or switching language ecosystems entirely — incremental migration might genuinely be harder
- Tiny team, tiny codebase: under five people, under 50K lines, the risk is manageable
- The old system literally can't evolve: regulatory requirements that demand a completely different architecture
But if your rewrite plan includes phrases like "18-month timeline" or "feature parity by Q4" — you're walking into the trap.
The best code you'll ever ship isn't the shiny new system. It's the old system, fixed one piece at a time, still running while you sleep.
What's the worst rewrite you've witnessed? Did it survive? I want your war stories.