Here's something it took me years to accept: no matter how good you are, your codebase is rotting right now.
It's not your fault. It's biology.
The number that explains everything
Your brain can hold 4 to 7 chunks of information in working memory at once. That's the same for me, for you, for the most brilliant senior dev you've ever worked with. It's a biological limit, not a skill issue.
Now think about the codebase you maintain: 50 thousand lines? 200 thousand? Half a million? How many implicit interactions does it have?
You're trying to monitor a system with thousands of invisible dependencies using a processor that runs 7 threads. It won't work.
The cycle I've seen dozens of times
I've been working with software for 22 years. The pattern is always the same:
Month 1 — The architect designs everything with clarity. Decisions make sense. Everyone's aligned.
Month 6 — The architect is still there, but no longer remembers why they chose that 30-second timeout instead of 5. "There must have been a reason."
Month 12 — Two of the three original engineers left. The reason for the timeout? It left with them. What remained was a comment // don't change this with no explanation.
Month 18 — The new dev, who is competent, changes the timeout to 5s because "30s is absurd." Production breaks at 3 AM. Nobody understands why.
The problem is never the new dev. The problem is that context doesn't survive time.
The failures that kept me up at night
After two decades, I collect these stories:
The innocent hook. A usePopup() that adds a global listener to the DOM. Clean, tested, code review approved. Except it was used in 100 components. Now there are 100 listeners firing on every click anywhere on the page. No individual code review will catch this — because each review only sees its one usage.
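A minimal sketch of how the pile-up happens (hypothetical names; the original usePopup isn't shown in the post). Instead of React and the DOM, a plain array stands in for the document's global listener list, which is enough to show the multiplication:

```typescript
// Stand-in for the page's global click listeners.
type Handler = () => void;
const globalListeners: Handler[] = [];

let handlersFired = 0;

// Simplified stand-in for the hook: each component that mounts
// registers its own global listener and never removes it.
function usePopup(): void {
  globalListeners.push(() => {
    handlersFired++;
  });
}

// 100 components each use the hook, exactly as in the story.
for (let i = 0; i < 100; i++) {
  usePopup();
}

// A single click anywhere on the page fires every registered listener.
globalListeners.forEach((fn) => fn());
console.log(handlersFired); // 100 handlers ran for one click
```

Each call site looks harmless in isolation, which is exactly why per-PR review misses it: the cost is a property of all 100 usages combined, not of any one of them.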
The phantom cache. A legitimate refactor changes how an object is constructed. Functionally identical. Referentially different. The cache keyed on === never hits again. Tests pass. App doesn't crash. It just gets silently slower, week after week, until someone opens a ticket three months later.
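The failure mode can be reproduced in a few lines. This is an illustrative sketch (the names and the cached computation are invented), using a WeakMap as a reference-keyed cache:

```typescript
// A cache keyed by object identity (===), as WeakMap lookups are.
const cache = new WeakMap<object, number>();
let computations = 0;

function expensiveLength(user: { name: string }): number {
  const hit = cache.get(user);
  if (hit !== undefined) return hit; // cache hit: same reference seen before
  computations++;                    // cache miss: recompute
  const result = user.name.length;
  cache.set(user, result);
  return result;
}

// Before the refactor: the same object instance flows through twice.
const alice = { name: "alice" };
expensiveLength(alice);
expensiveLength(alice);             // hit — computations stays at 1

// After the refactor: a structurally identical but *new* object each time.
expensiveLength({ name: "alice" }); // miss
expensiveLength({ name: "alice" }); // miss again
console.log(computations); // 3 — the cache silently stopped helping
```

Nothing here is incorrect, so no test fails. The only symptom is a counter that climbs where it used to stay flat, which is why the regression surfaces as a vague "it feels slower" ticket months later.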
The elegant N+1. A loop with an ORM that produces beautiful code to read and 50 separate SQL queries to execute. The dev sees elegance. The database sees a bombardment.
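A toy illustration of the query count, with a fake "ORM" object standing in for a real one (no real library is assumed): one lazy lookup per loop iteration versus one batched query for the whole set.

```typescript
// Fake data layer that just counts queries (hypothetical, not a real ORM).
let queries = 0;
const db = {
  // One query per user: the shape a lazy ORM relation produces in a loop.
  findOrders(userId: number): number[] {
    queries++;
    return [userId * 10];
  },
  // One query for the whole set: the batched / eager-loaded shape.
  findOrdersForUsers(ids: number[]): Map<number, number[]> {
    queries++;
    return new Map(ids.map((id) => [id, [id * 10]]));
  },
};

const userIds = Array.from({ length: 50 }, (_, i) => i + 1);

// The "elegant" loop: reads beautifully, issues 50 separate queries.
for (const id of userIds) {
  db.findOrders(id);
}
const loopQueries = queries;
console.log(loopQueries); // 50

// The batched version: a single round trip.
queries = 0;
db.findOrdersForUsers(userIds);
const batchQueries = queries;
console.log(batchQueries); // 1
```

The loop is the more readable code, which is the trap: the cost lives at the SQL level, one abstraction layer below the one the reviewer is looking at.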
The pattern is always the same: the problem isn't visible at the abstraction level you're working at. You'd need to see multiple levels simultaneously — and your 4 to 7 memory chunks won't let you.
Where AI actually changes the game
I know there's a lot of hype. I know half the promises about AI in software are marketing. But there's one thing AI does that is structurally different from any tool that's ever existed:
Giant context window. When you change a line, AI can see the 47 modules that call that function, check if any of them depend on referential equality, and detect if your "simple fix" will silently break something on the other side of the codebase. Not because it's smarter — because it has more active working memory.
Tireless vigilance. Friday at 5 PM, deploy scheduled for Monday, code review gets more superficial. On the 100th PR of the week, attention is lower than on the first. AI applies the same scrutiny to PR #1 and PR #100. No "this time is different," no "it's just a hotfix."
Institutional memory. "Why does this service have a 30s timeout?" — the new dev doesn't know, the 2-year-old Slack thread doesn't show up in search. An AI with access to history remembers: "INC-4521, March 2024, spot instances with cold start up to 25s."
But hold on — humans are still irreplaceable
Before anyone thinks I'm saying AI will replace devs: it won't.
AI is extraordinary at following patterns and detecting violations. But it doesn't invent new paradigms. It doesn't know you're accepting that tech debt because the market window closes in 3 weeks. It doesn't know "don't deploy on Monday because the payments team deploys on Tuesday." It doesn't know what happened during last Black Friday.
Judgment, creativity, business context, tribal knowledge — that's still ours.
The real partnership
The future isn't "devs vs. AI." It's:
- AI handles consistency and scale — detects violations, maintains memory, checks cascade impacts, watches without fatigue
- Humans focus on judgment and creativity — decide what to build, make strategic trade-offs, invent new things, navigate organizational politics
Teams that resist this out of pride ("I can review everything myself") will accumulate entropy faster. Not because they're worse devs — because they're humans with fixed cognitive limits facing growing complexity.
The clock is ticking
If your codebase is over 18 months old, the entropy is already there. The question isn't whether it exists — it's whether you have a system capable of continuously defusing it.
Or whether you're counting on 4 to 7 chunks of human memory to monitor hundreds of thousands of lines.
I wrote an expanded version with diagrams, charts, and detailed technical examples here: stickybit.com.br/codebase-timebomb/
If you maintain a legacy codebase (and who doesn't?), it's worth a read.
Top comments (1)
The working memory constraint (4-7 chunks) is real, and I think AI is genuinely useful here — not as a replacement for understanding, but as an external memory system.
I've started using AI to explain code I haven't touched in months before modifying it. Not to write new code, but to re-load my mental model. "Explain what this module does, what its dependencies are, and why this timeout is set to 30 seconds." It's surprisingly accurate when it has the full file as context.
Doesn't solve the turnover problem you described, but it shortens the ramp-up from weeks to hours.