DEV Community

Kevin

Vibe Coding Is Real, and It's Creating Devs Who Can't Debug

There's a certain kind of pull request I keep seeing.

The code works. Tests pass. The author can explain what it does in plain English. But ask them why a specific line is there, or what happens when the edge case hits, and you get the thousand-yard stare. They didn't write it. They described it to an AI, accepted the output, nudged it until CI went green, and shipped it.

This is vibe coding. And it's everywhere now.


What's Actually Happening

Cursor, GitHub Copilot, Windsurf, Claude Code — pick your tool. The pitch is the same: describe what you want, the AI builds it, you move fast. It's genuinely impressive. I've watched engineers scaffold full CRUD APIs in twenty minutes. Things that would've taken an afternoon two years ago.

But there's a tax that nobody's accounting for.

When you write code yourself — even bad code — you build a mental model. You know where the bodies are buried. You made the tradeoff between the clean solution and the pragmatic one. That knowledge is load-bearing. It's what makes debugging possible when 2 AM comes and production is on fire.

Vibe coding skips that. You end up with code that exists but isn't really understood. It's someone else's handwriting in your codebase.


The Numbers Don't Lie

GitHub's own data from late 2025 showed AI-assisted developers ship about 55% more code per week. That sounds great until you look at the follow-on stats: bug rates in AI-assisted codebases are running 2-3x higher in production, with the bulk of those being logic errors — not syntax, not type mismatches, but wrong behavior. The kind of bug that only surfaces under real load or with real users doing unexpected things.
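To make "logic error" concrete, here's a minimal, made-up Python example of the pattern: code that looks right, passes a test written against the same wrong assumption, and misbehaves only for real users. The `paginate` function is hypothetical, invented for illustration, and not taken from any of the data above.

```python
def paginate(items: list, page: int, page_size: int) -> list:
    """Return one page of results (pages are 1-indexed in the API docs)."""
    # Logic error: treats `page` as 0-indexed, so a request for page 1
    # silently skips the first `page_size` items instead of returning them.
    start = page * page_size
    return items[start:start + page_size]

items = list(range(10))

# A test written against the same buggy mental model passes...
assert paginate(items, 0, 3) == [0, 1, 2]

# ...but a real client asking for "page 1" never sees the first items.
assert paginate(items, 1, 3) == [3, 4, 5]  # client expected [0, 1, 2]
```

No type checker or syntax linter flags this. It only shows up when an actual user notices their first three results are missing.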

Anecdotally, I've seen this pattern play out at three different companies. Fast initial development. Then, about 6-12 months in, a slowdown nobody can explain. The codebase has grown, but velocity has dropped. PRs take longer to review. Simple changes break unexpected things. The team doesn't have a mental map of their own system anymore because they didn't build it — they prompted it into existence.

Technical debt has always been a thing. But this is different. It's comprehension debt.


The Defenders Have a Point, Though

I don't want to be the old man yelling at clouds here. The "just write it yourself" crowd is ignoring some real costs.

Boilerplate is genuinely soul-destroying. Setting up auth, writing migration scripts, wiring up API clients — nobody learns anything from the 40th time they've written that pattern. If AI handles the rote stuff, that's actually good. More time for the genuinely interesting problems.

And not every codebase needs to be understood at every layer. A startup shipping an MVP doesn't need their three-person team to have encyclopedic knowledge of every generated utility function. Get it out the door, validate the idea, refactor when it matters.

The issue isn't using AI to write code. It's treating the AI's output as a black box you never interrogate.


What I've Seen Work

The engineers who seem to handle this well treat AI output like code review, not copy-paste.

They read what the model generates. They push back on it. They change things. They ask the AI to explain specific sections not because they can't read code, but as a forcing function to surface assumptions. If the explanation doesn't make sense, the code's probably wrong.

The habit I've started pushing on teams: before you accept an AI suggestion, explain it to a rubber duck. Out loud. What does this function actually do? What's the failure mode? If you can't answer those in 30 seconds, you don't own that code yet.
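Here's what that 30-second check can catch. The snippet below is a hypothetical AI-generated retry helper, invented for illustration; the two rubber-duck questions are answered in the comments.

```python
import time

# Hypothetical AI-generated retry helper. It looks reasonable until
# you answer the two rubber-duck questions out loud.
def fetch_with_retry(fetch, retries=3, delay=1.0):
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            # Q1: what does this function actually do? It swallows
            # *every* error, including genuine bugs like TypeError,
            # not just transient network failures.
            time.sleep(delay)
    # Q2: what's the failure mode? After the last attempt it falls
    # through and implicitly returns None, so callers get None where
    # they expected data, or an exception.
```

Thirty seconds with the duck suggests two fixes before you accept the suggestion: catch a narrow exception type (say, `ConnectionError`) and raise after the final attempt instead of returning `None`.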

It's not much. But it keeps the mental model intact.


Hiring Is Already Broken

Here's where it gets uncomfortable.

Interview loops are increasingly detached from reality. Leetcode-style problems in a sandboxed environment with no AI access. That's not how anyone works anymore. But flip to an unmonitored take-home and you can't tell what the candidate actually knows versus what Claude told them.

I've seen junior engineers sail through technical screens and then struggle to fix a 10-line bug in their first week because the debugging process — actually reading error messages, forming hypotheses, checking assumptions — is a muscle they've never built. They've only ever asked an AI to fix things.

This isn't a generational failing. It's a tooling problem. We handed people power tools before teaching them what the blade does.


The Skill That Still Matters

Debugging. Full stop.

You can prompt your way to a first draft. You can't prompt your way through a heisenbug in a distributed system at 3 AM when your traces are inconsistent and your logs are lying to you. That requires a mental model, pattern recognition built from pain, and the ability to form and test hypotheses without a net.

AI is genuinely bad at this, by the way. Ask it to debug a subtle concurrency issue and it'll confidently suggest things that are plausible but wrong. Because it doesn't have your context, your system's history, or the intuition that comes from having seen this exact flavor of problem before.
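For a toy version of that flavor of problem, here's a hypothetical Python sketch of a lost-update race: the classic read-modify-write bug that passes casual testing and only drops increments under real contention.

```python
import threading

class UnsafeCounter:
    """Read-modify-write without a lock: increments can be lost when
    threads interleave between the read and the write."""
    def __init__(self):
        self.count = 0
    def increment(self):
        current = self.count        # read
        self.count = current + 1    # write: overwrites any update another
                                    # thread made in between

class SafeCounter:
    """Same counter, but the read-modify-write is made atomic."""
    def __init__(self):
        self.count = 0
        self._lock = threading.Lock()
    def increment(self):
        with self._lock:
            self.count += 1

def hammer(counter, n_threads=8, n_iters=10_000):
    """Increment `counter` from many threads at once, then return the total."""
    threads = [
        threading.Thread(target=lambda: [counter.increment() for _ in range(n_iters)])
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.count

# The safe version always lands on the exact total. The unsafe one
# usually comes up short, but only sometimes, and only under load.
assert hammer(SafeCounter()) == 80_000
```

Notice that `UnsafeCounter` can pass a single-threaded test run forever. The bug is nondeterministic, load-dependent, and invisible in the diff — exactly the kind of thing a model pattern-matching on plausible code will "fix" incorrectly.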

That intuition is built by writing and breaking things yourself. There's no shortcut.


Where This Goes

I think we end up in a bimodal world. Engineers who used AI as an accelerant on top of solid fundamentals will be extremely productive. Engineers who used it as a crutch before they had fundamentals will hit a ceiling — and it'll be a hard one.

The tools aren't going away. They're getting better. So the answer isn't to avoid them; it's to stay honest with yourself about what you actually understand.

Vibe coding is fine for prototypes. For throwaway scripts. For the parts nobody cares about.

For the parts that matter — the parts you'll have to maintain, debug, extend, and explain to the person who comes after you — you should probably understand what you shipped.

Or you'll find out the hard way when it breaks.
