I've shipped more code in the last six months than in the previous two years. That should feel like a triumph. Mostly it feels like dread.
Vibe coding — the practice of describing what you want and letting an AI write the actual implementation — went from Andrej Karpathy's offhand tweet to industry standard faster than any tooling shift I've seen. Every engineer I know has Cursor, Copilot, or something similar open right now. Entire startups are being built this way. I work on a team of eight developers and we've roughly tripled our output velocity.
And we have absolutely no idea what half our codebase does anymore.
The Speed Is Real. The Understanding Isn't.
Here's what actually happens when you vibe-code a feature: you describe the problem, the AI writes 200 lines, you read enough to confirm it looks right, tests pass, you ship. Fast. Feels good.
Then three weeks later there's a bug in that code. You open it and it's... fine? It's structured, it's commented, it handles edge cases you didn't even think to specify. But it's not yours. You didn't make the hundred small decisions that went into it. You don't have the muscle memory of having written it. You read it like a stranger's code, except the stranger was very thorough, which somehow makes it worse.
This is the part the productivity discourse skips. Code isn't just a deliverable. It's also a knowledge artifact. When you write it, you internalize it. When the AI writes it and you skim it, you don't — not really.
Tech Debt With a New Shape
Traditional tech debt is a known quantity. Rushed code, copy-paste patterns, a module that grew too big. You can see it, point at it, schedule time to fix it.
AI-generated tech debt looks clean. It has proper abstractions. The variable names are good. There are docstrings. It will pass every code review heuristic you've got.
But it's often solving the wrong problem extremely well. The AI doesn't know what you actually meant; it knows what you said. Those aren't always the same thing. And when you're moving fast and the tests are green, you don't catch the drift until you're three features deep and the whole thing needs rewiring.
I've watched this happen on a production codebase. We built a notification system in about four hours using an LLM. Worked perfectly. Two months later we needed to extend it and discovered the AI had made an architectural choice — totally reasonable on its own terms — that made the extension basically impossible without a rewrite. Nobody caught it because the code looked fine.
The rewrite took a week.
The Junior Dev Parallel
There's a comparison worth sitting with here. AI-generated code reads like the work of an extremely talented junior developer: smart, technically competent, but written without institutional context. Without knowing why certain decisions were made two years ago. Without the scar tissue from the last time we tried this approach.
Senior engineers are valuable not because they write syntactically correct code. That's table stakes. They're valuable because they carry context. They know the shape of the problem space. They know what's bitten the team before.
Vibe coding externalizes the syntax production and keeps the context problem entirely in your head. If anything, it increases the premium on engineers who actually understand systems deeply. The floor is rising, but the ceiling matters more.
What I've Changed
I'm not stopping. The productivity gains are real and our competitors are using these tools too, so this isn't a philosophical debate. But a few things have shifted in how I work:
Slow down on the review, not the generation. I let the AI write fast. Then I read the output carefully — not to catch bugs, but to understand decisions. If I can't explain why the AI made each non-trivial choice, I ask it to. If the explanation doesn't satisfy me, I rewrite that section myself.
Write the architecture notes yourself. The AI can write code but it can't write your team's reasoning. After any significant AI-assisted feature, I spend 20 minutes writing a short note: why this approach, what we considered and rejected, what assumptions this code makes about the system. Future-me needs that. The AI won't provide it.
Flag AI-heavy modules explicitly. Sounds paranoid. Isn't. We added a comment convention — # AI-generated, review carefully before extension — not as a scarlet letter but as a signal to slow down. It's paid off twice already.
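A convention like this only pays off if you can actually find the flagged modules later. One minimal sketch of how that might look, assuming the flag text above and Python source files (the function name and flag constant are illustrative, not anything standard):

```python
from pathlib import Path

# The team's flag comment (hypothetical wording, matching the convention above)
FLAG = "AI-generated, review carefully"

def flagged_modules(root: str) -> list[str]:
    """Return paths of Python files that carry the AI-generated flag comment."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        if FLAG in text:
            hits.append(str(path))
    return sorted(hits)
```

Run it before planning an extension and you get a quick map of where the "slow down" signal applies, without relying on anyone remembering which features were vibe-coded.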
Treat it like pair programming, not autocomplete. The mental model shift matters. If you pair with someone, you stay engaged and push back. If you just accept autocomplete, you drift. Same tool, completely different outcomes depending on your posture.
The Honest Take
Vibe coding is a force multiplier. So is a car. Cars also require you to watch where you're going.
The engineers who will struggle are the ones who let the speed trick them into thinking they understand things they don't. The ones who will thrive are the ones who use AI to move faster through the parts that don't require deep thought, while staying genuinely sharp on the parts that do.
That distinction — knowing which parts require your real attention — is more important than ever. And it's not something an AI can tell you.
Not yet, anyway.