I came back to a project I'd built in a weekend three months ago and spent the first hour just trying to figure out what I had written. Not debugging. Not adding a feature. Just reading. Trying to reconstruct the decisions that had gone into a 400-line file that did something with auth that I couldn't quite follow anymore.
The thing is, I remembered building it. It felt great. Claude and I were in a rhythm, I was accepting suggestions, tweaking, moving fast. By Sunday night I had something that worked. I shipped it, moved on, felt good about it.
Three months later it was a stranger's code.
This is the part of the vibe coding conversation that doesn't get enough airtime. Everyone talks about how fast you can go. Nobody talks about what you're left with. The speed is real; I'm not disputing that. But what AI optimizes for is getting you to working code in the shortest path possible. It does not optimize for the person who has to read that code at 11pm six months later, trying to understand why a middleware is set up the way it is.
The specific thing that broke me when I came back was a custom hook doing three things at once, with no comments and variable names that were technically descriptive but said nothing about why those variables existed. I asked Claude to explain it to me. It gave me a confident explanation that was plausible but not quite right, because it didn't have the original context either. The context existed nowhere. It had evaporated the moment I closed the chat.
That's the real problem. AI output without structure is technical debt at the speed of thought. You're not accumulating debt slowly, the way you do when a team cuts corners over six months. You're accumulating it in an afternoon, invisibly, in code that looks clean and runs fine right up until you need to change it.
The counterargument I hear is: write better prompts, add comments, review what you accept. All valid. But this misunderstands how vibe coding actually feels in the moment. You're in flow. Things are working. Stopping to add documentation comments to AI output feels like putting on a seatbelt after you've already reached your destination. You don't do it, because it's already done, and then you ship it.
What actually makes AI-generated code maintainable isn't discipline after the fact. It's structure before the fact. When the model has clear context about what a module is supposed to do, what patterns the codebase uses, what the naming conventions are, the output reflects that. Not perfectly, but enough. The code it writes when it knows the shape of your system is code you can come back to. The code it writes in a vacuum is code that made sense in the moment and belongs to a conversation that no longer exists.
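"Clear context" sounds abstract, but in practice it can be a short file the agent reads before it generates anything, in the spirit of the CLAUDE.md convention many coding agents already follow. A sketch of what I mean, with every path and name here purely illustrative, not pulled from any real project:

```markdown
# Project context (read before generating code)

## What this module does
`src/auth/` handles session-based auth only. OAuth lives in
`src/oauth/`; don't mix the two.

## Patterns
- Hooks do one thing. If a hook fetches *and* transforms, split it.
- All API calls go through `lib/apiClient.ts`, never raw `fetch`.

## Naming
- Hooks: `useVerbNoun` (e.g. `useRefreshSession`), not `useData`.
- Booleans: `isX` / `hasX`, never bare adjectives.

## Comments
Every non-obvious decision gets a one-line `// why:` comment
at the site of the decision.
```

A dozen lines like these change what comes out of the model: the hook gets split, the names follow your convention, and the "why" lives in the repo instead of in a chat that no longer exists.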
This is what giving the AI the right context upfront actually buys you. Not just fewer bugs in the short run. A codebase you can still work in when the original momentum is gone and it's just you, a file, and a decision you made three months ago that you need to understand.
Vibe coding as a creation mode is genuinely useful. As a maintenance strategy it's a disaster. The fix isn't to slow down; it's to make the context travel with the project so the model isn't making structural decisions in the dark every single time.
If you want to see what that looks like in practice, we built npxskills.xyz around exactly this idea: structured skill files that give your agent the context it needs before it starts, not after.