I'll admit it. When vibe coding took off, I was completely seduced.
You describe what you want, the AI builds it, you test it, prompt again, and watch something that would have taken weeks appear in an afternoon. I spent a solid stretch just asking for things and watching them materialize. It felt like finally having a capable assistant who didn't need me to explain every little detail.
Then things started breaking.
Not right away. The first failures were subtle. A feature that looked correct but didn't handle edge cases. A database connection that worked in dev but silently dropped data in production. An authentication flow that technically ran but exposed user records to anyone who poked at the URL. I'm a data analyst, not a software engineer, so I didn't catch these problems until they were already baked in. Which is the whole story, really.
What Vibe Coding Is (And Why the First 70% Feels Like Magic)
Andrej Karpathy coined the term in early 2025. The idea: you fully give in to the AI, describe what you want, accept the code it generates, and iterate through prompts rather than through understanding. You're not writing code. You're directing an actor.
For prototypes, this is genuinely transformative. You can spin up a proof of concept faster than you could draw the wireframe. The velocity is real. A developer who spends an afternoon vibe coding can produce what used to take a week of manual work. That's not hype. There's data behind it.
The problem isn't the first seventy percent. The problem is what happens after.
The 70% Problem Has a Name, and It's Not a Bug
Researchers at Columbia University's DAPLab analyzed the top AI coding agents, including Cline, Claude, Cursor, Replit, and V0. They found nine consistent failure patterns across all of them.
The most dangerous weren't the obvious ones. They weren't crashes or compile errors. They were silent failures in error handling and business logic. Code that runs clean. Code that passes every test you think to write. Code that does something slightly different from what you actually needed, and you won't know until a user finds it for you.
That's the seventy percent problem. The AI takes you most of the way there. The last thirty percent is where the architecture matters, where the edge cases live, where the system needs to actually behave correctly under conditions nobody wrote a prompt for.
You can't vibe your way through that.
The Numbers Are Worse Than You Think
I'm a data person. I don't like arguments built on vibes (ironic, given the topic). So here's what the actual research shows.
A study from METR, an organization that evaluates frontier AI models, ran a randomized controlled trial with experienced open-source developers. These weren't beginners fumbling with AI for the first time. Before starting, developers predicted they'd be twenty-four percent faster with AI tools. After finishing, they still believed they had been twenty percent faster.
They were actually nineteen percent slower.
Read that again. Experienced developers, certain they were gaining speed, were measurably losing it. The overhead of reviewing, debugging, and correcting AI-generated code ate every minute the AI saved them.
Meanwhile, GitClear analyzed two hundred eleven million lines of code from companies including Google, Microsoft, and Meta. With AI tools, code volume went up ten percent. Code quality collapsed. Refactoring dropped from twenty-five percent of changes to ten percent, a sixty percent decline. Code churn jumped. Duplication increased forty-eight percent. Code that needed to be rewritten within two weeks hit a new high.
Forty-five percent of AI-generated code samples contain security vulnerabilities. Roughly ninety percent of AI-built projects never reach production. Forrester estimates that by 2026, seventy-five percent of technology decision-makers will face moderate to severe technical debt from AI-generated code.
One industry analyst puts the accumulated bill at one-point-five trillion dollars by 2027.
This is not a hypothetical future problem. It's already here.
The Real Issue Isn't the AI
I want to be direct about something, because most articles on this topic get it wrong.
AI is genuinely impressive at writing code. That was never the issue. The issue is that vibes are not a system. When you describe what you want and accept what you get, you're building on a foundation that no one designed. You're accumulating code that works in isolation but wasn't architected to work together. You're creating a structure where the first floor was generated by one prompt, the second floor by another, and nobody thought about whether the staircase makes sense.
The dependency problem illustrates this perfectly. An AI might tell you your project needs three packages. The actual transitive dependency load often expands to roughly thirteen-and-a-half times that. You ship three visible packages and thirty-seven invisible ones, none of which you vetted. That's the iceberg. You only see the tip.
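The iceberg effect is easy to see with a minimal sketch. The package names and dependency map below are hypothetical stand-ins for what a real audit would read out of your lockfile; the point is that one install fans out into everything beneath it:

```python
# Sketch: counting the transitive dependency "iceberg" from a dependency map.
# Package names are hypothetical; a real audit would parse your lockfile
# (package-lock.json, poetry.lock, etc.) instead of a hand-written dict.
def transitive_deps(pkg, dep_map, seen=None):
    """Return the full set of packages pulled in by installing `pkg`."""
    if seen is None:
        seen = set()
    for dep in dep_map.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, dep_map, seen)
    return seen

dep_map = {
    "web-framework": ["http-core", "templating"],
    "http-core": ["socket-utils", "tls-lib"],
    "templating": ["parser-lib"],
    "tls-lib": ["crypto-primitives"],
}

# You asked for one package; you got six.
print(sorted(transitive_deps("web-framework", dep_map)))
```

Every package in that returned set is code running in your project that nobody prompted for and nobody reviewed.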
Firebase databases ship open. Supabase tables miss row-level security. API keys get hardcoded in client-side JavaScript. The AI isn't malicious. It optimizes for "it works," and "it's secure" was never part of the prompt. And when you're moving fast and the thing runs, you don't check for the problems you don't know to look for.
Senior engineers who've spent years getting burned by these exact failure modes know where to look. Vibe coding removes them from the equation and replaces them with velocity. That sounds like a good trade until the production incident.
What the 30% Actually Requires
The last thirty percent of any real project is architectural. It's the part that requires you to have thought about what the system is supposed to do before you built it. It includes:
Error handling for when things go wrong, not just when they go right. Authentication you can't bypass. Server-side data validation. Security configurations locked down by default. Dependency management that doesn't quietly pull in unvetted packages. Edge case logic your users will absolutely find.
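Server-side validation is the most concrete item on that list, so here's what it looks like as a minimal sketch. The field names and rules are hypothetical; the principle is that the server re-checks everything, regardless of what the client claims it already checked:

```python
# Sketch of the "validate on the server, not just the client" rule.
# Field names and limits are hypothetical examples, not a spec.
def validate_signup(payload: dict) -> list:
    """Return a list of validation errors; empty list means the payload is acceptable."""
    errors = []

    # Never trust a client-side email check; re-verify the shape here.
    email = payload.get("email", "")
    if "@" not in email or "." not in email.split("@")[-1]:
        errors.append("invalid email")

    # Reject wrong types outright; a string "30" is not an int 30.
    age = payload.get("age")
    if not isinstance(age, int) or not (13 <= age <= 120):
        errors.append("age out of range")

    return errors

print(validate_signup({"email": "user@example.com", "age": 30}))  # []
print(validate_signup({"email": "not-an-email", "age": "30"}))    # two errors
```

None of this is clever. That's the point: the last thirty percent is mostly unglamorous checks that someone has to decide to write.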
You can't prompt your way to these things after the fact. You have to think about them before you start. That means having a system, not just vibes.
This is exactly why the CORE system matters. It works as the architectural layer that makes AI output usable, without becoming a rigid framework that slows you down. When you define constraints, objectives, roles, and execution parameters before you start generating, you're giving the AI a skeleton to fill in rather than asking it to build a skeleton from scratch. The difference in output quality is not marginal. It's substantial.
You still get the speed. You get it without the time bomb.
How to Vibe Code Without Building a Disaster
There's a version of this that works. Build the system first, then let AI execute inside it.
Think of it like hiring a skilled contractor. You don't hand a contractor the keys to your house and say "make it better." You have architectural plans, a permit, a scope of work, and someone who understands what the finished product is supposed to look like. The contractor is still doing skilled work. You're just not outsourcing the thinking that has to happen before the work starts.
For AI, that means a few things practically:
Define the architecture before you generate. Know what tables exist, what the authentication flow does, what the API boundaries are. Write this down somewhere the AI can reference it.
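"Write this down" can be as simple as a short file you point the AI at in every prompt. The tables, routes, and rules below are hypothetical, just to show the shape:

```markdown
# architecture.md (hypothetical example)

## Tables
- users(id, email, password_hash, created_at)
- orders(id, user_id -> users.id, total_cents, status)

## Auth flow
- Email + password, server-side sessions. No auth logic in client JS.

## API boundaries
- /api/orders: an authenticated user reads and writes only their own rows.
- All writes are validated server-side; client validation is cosmetic.
```

A page like this costs twenty minutes and turns every subsequent prompt from "build a skeleton" into "fill in this skeleton."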
Before anything goes somewhere real, review it. Go past "does it run" and ask "does it do what I think it does." Run it against edge cases. Read the security-sensitive sections yourself.
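One way to make that review concrete is to write a handful of edge-case assertions against the generated code before it ships. The `split_bill` helper here is a hypothetical stand-in for AI-generated output, chosen because the happy path hides the failure modes:

```python
# Sketch: going past "does it run" with edge-case checks on a
# hypothetical helper that splits a bill among friends, in cents.
def split_bill(total_cents: int, people: int) -> list:
    """Split a bill so that every cent is accounted for."""
    if people <= 0:
        raise ValueError("need at least one person")
    base, remainder = divmod(total_cents, people)
    # Spread the leftover cents across the first `remainder` people.
    return [base + (1 if i < remainder else 0) for i in range(people)]

# Happy path: the version any generator produces passes this.
assert split_bill(1000, 4) == [250, 250, 250, 250]
# Edge cases: where naive float-division versions silently lose money.
assert sum(split_bill(1001, 3)) == 1001   # no cent goes missing
assert split_bill(0, 2) == [0, 0]          # zero total still valid
```

Five minutes of assertions like these is the difference between "it ran once on my machine" and "it does what I think it does."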
AI output is a first draft, never a final deliverable. The velocity gain is in getting from blank page to reviewable draft. The oversight is still your job.
Don't deploy prototypes to production. This should be obvious, but the data says it isn't. The same tools and workflows that make great proof-of-concept code will produce dangerous production code if you skip the step where someone thinks about what production actually requires.
The People Who Will Win This
The vibe coding conversation is happening in a very binary way right now. Either it's the future of everything and everyone who doesn't adopt it will be left behind, or it's a disaster waiting to happen that responsible people should avoid.
Neither of those is right.
The people who come out ahead are the ones who use AI for speed without abandoning the discipline that keeps systems from falling apart. They're not the fastest vibe coders. They're not the ones who refuse to use AI tools at all. They're the ones who build the right system, then let AI do the heavy lifting inside it.
That's a skill worth developing. The technology is only getting more capable. The question of how to work with it well is the actual competitive advantage.
Start With the System
If you've been trying to use AI tools and getting inconsistent results, this is usually why. The tool works fine. There's just no system underneath it.
I built the CORE system framework specifically for this. It's a structured way to define what you're building before you start generating, so the AI has something real to work with instead of guessing. It covers Constraints, Objectives, Roles, and Execution parameters. Not a rigid checklist. A thinking framework that forces the architectural decisions to happen up front.
You can find it in the TotalValue store along with the other prompt systems I've built for exactly this kind of problem.
And if you want to see whether your current AI outputs have problems you haven't caught yet, start with the free AI Signature Scrub. It's not just for writing. It's for anything where you need to know if what came out actually holds up.
TotalValue Group LLC builds AI prompt systems that replace work you'd normally pay a consultant to do. Browse all tools at totalvalue.com/products.
Start free with AI Signature Scrub.
Learn more at TotalValue.com.
Robert Kirkpatrick is the founder of TotalValue Group LLC and builds AI prompt systems that replace work you'd normally pay a consultant to do. He's a data analyst by trade who got tired of watching people fight AI tools that were designed to help them.