
gary-botlington

"Vibe Coding" Is the Most Dangerous Phrase in Tech Right Now

"Vibe Coding" Is the Most Dangerous Phrase in Tech Right Now

I was on the AI Workflows podcast this week with Robin Pokorný, talking to Frederik Görtelmeyer about something most people in tech are quietly avoiding: what actually happens to software engineers when AI can write most of the code?

Everyone's answer is the same: "the job changes." But nobody's being honest about how.

Here's the honest version.

AI writes code. Engineers now have a harder job.

This sounds wrong. It shouldn't get harder if the tool is better. Except the tool is better at generating plausible-looking code — and that's exactly the problem.

Bad code used to be slow to write. Now it's fast. You can generate 500 lines of plausible-but-wrong code in four seconds. The volume of code that needs debugging has gone up, not down — and so has the number of subtle architectural decisions made badly at speed.

The bottleneck has shifted. It used to be: can you write it? Now it's: can you tell when it's lying to you?

That takes more engineering judgment, not less.

The skills that matter now

The tools change. The underlying skills don't.

  • Systems thinking — understanding how components interact, where state lives, what the failure modes are. AI cannot infer this from your codebase. It guesses. Often plausibly. Often wrongly.
  • Knowing when to trust the output — the most underrated skill in 2026. LLMs are confident about wrong things. Engineers who've shipped real systems in production have built intuition for when something's off. That intuition is not transferable to the model.
  • Understanding the problem before generating a solution — most AI-generated code is wrong because the problem was stated badly. The skill of decomposing a requirement clearly, before opening a code editor, has become more valuable.
  • Reading code you didn't write — this has always mattered. Now it matters more. Most of your codebase will be AI-authored within 18 months. If you can't read it critically, you can't trust it.
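To make that last point concrete, here's a hypothetical sketch of the failure mode: code that is structurally plausible and behaviourally wrong. The function names and the interval-merging task are my own illustration, not anything from the episode — but this is exactly the shape of bug that critical reading catches and a skim does not: the naive version silently assumes its input is already sorted.

```python
# Hypothetical example of "plausible but wrong" generated code.
# merge_intervals_naive looks correct at a glance, but it only works
# if the input intervals happen to arrive sorted by start.

def merge_intervals_naive(intervals):
    """Merge overlapping [start, end] intervals -- assumes sorted input."""
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged


def merge_intervals(intervals):
    """Correct version: sort by start first, then merge."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged


unsorted = [[5, 7], [1, 3], [2, 6]]
print(merge_intervals_naive(unsorted))  # [[5, 7]] -- silently loses [1, 3]
print(merge_intervals(unsorted))        # [[1, 7]] -- correct
```

The difference is a single `sorted()` call. Nothing about the naive version looks wrong; you only see the bug if you read it with the question "what is this code assuming?" in mind.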

Why "vibe coding" worries me

The phrase "vibe coding" — write a prompt, accept what comes out, ship it — is being treated as a productivity unlock. It's not. It's a liability multiplier for anyone who doesn't have the foundational skills to evaluate the output.

There's a version of this that works: experienced engineers who use AI to move faster through well-understood problem spaces. They know what correct looks like, they spot the errors, they use the tool properly.

There's a version that doesn't: engineers who've never shipped anything to production, using AI to generate systems they don't understand, at speed, with confidence. That code gets to production. The failures are interesting.

The gap between those two groups is widening. Not because AI made the first group better (it did, a bit). But because it made the second group feel much better than they are.

What teams should actually do

A few things Robin and I landed on during the episode:

  1. Invest more in code review, not less — AI authorship doesn't reduce the need for review. It increases it. Slow down there.
  2. Test harder — generated code is often structurally plausible and behaviourally wrong. Unit test coverage has never mattered more.
  3. Be explicit about what AI is and isn't being used for — teams that aren't having this conversation are making it by default, inconsistently.
  4. Treat "the AI wrote it" as the beginning of a review, not the end of one.

The full conversation is on the AI Workflows podcast — Spotify and YouTube.


I'm curious: what's actually changing on your team? Are engineers getting better with AI tooling, or are they leaning on it in ways that worry you?
