There's a term doing the rounds right now that makes me twitch every time I see it used interchangeably with something I actually care about: vibe coding.
I'm not here to gatekeep. Language evolves, terms get stretched, and honestly, if vibe coding gets more people writing software, that's probably net positive for the world. But I am here to argue that conflating vibe coding with AI-assisted development is quietly doing damage — to how teams evaluate tooling, to how organisations set expectations, and to how we as engineers think about our own craft.
These are not the same thing. They are not even on the same spectrum.
## What Vibe Coding Actually Is
Karpathy coined the term earlier this year, and to his credit, he was honest about what it meant. You describe what you want, you accept what the model gives you, you don't particularly read it, and you move on. You're surfing a wave of plausible output. The vibe is the product.
This is a genuinely interesting mode of working. For prototyping, for throwaway scripts, for people who would never have written code at all — it unlocks things. I have zero contempt for it.
But notice what it isn't: it isn't a developer using AI to do their job better. It's a person using AI instead of developing. The distinction sounds pedantic until you ask the obvious follow-up question: who is responsible for what ships?
In vibe coding, the answer is murky by design. You're not making engineering decisions. You're making prompting decisions, which is a different skill with a different risk profile.
## What AI-Assisted Development Actually Is
AI-assisted development is what happens when a working software engineer uses AI tooling to operate at a higher level of abstraction without surrendering ownership of the output.
You're still reading the code. You're still reasoning about the architecture. You're still the person who has to answer for the system's behaviour at 2am when it pages you. The AI is a force multiplier on your existing competence — not a replacement for it.
Think about what this looks like in practice:
- You're designing an integration layer. You sketch the contract, you have the model draft the implementation, you read it critically, you push back on the bits that smell wrong, and you ship something you could rewrite from scratch if you had to.
- You're writing tests for a complex interaction. You describe the scenario in plain language, the model produces a test skeleton, you recognise that the assertion is subtly wrong because you understand the domain, and you fix it.
- You're reviewing a pull request. You use AI to summarise the diff and flag patterns, but you make the actual decision about whether the change is safe.
In each of these cases, the AI is doing work inside your engineering process, not replacing it. You are still the engineer.
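The second scenario is easy to show concretely. Everything below is a hypothetical sketch; `price_with_discount` and its domain rule are invented for illustration. But the shape of the failure is typical: the model's draft asserts a value that is arithmetically plausible yet wrong in the domain, and only someone who knows the domain rule catches it.

```python
def price_with_discount(base: float, tax_rate: float, discount: float) -> float:
    """Domain rule (hypothetical): a flat discount is subtracted from the
    base price *before* tax is applied."""
    return round((base - discount) * (1 + tax_rate), 2)

def test_discount_applies_before_tax():
    # A model-drafted skeleton asserted 110.0, i.e. it subtracted the
    # discount from the already-taxed total (100 * 1.2 - 10).
    # The domain rule taxes the discounted base: (100 - 10) * 1.2 = 108.
    assert price_with_discount(100.0, 0.20, 10.0) == 108.0

test_discount_applies_before_tax()
```

Both numbers look reasonable in isolation; the wrong one only reads as wrong if you know which order the business applies discounts and tax.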
## Why the Conflation Is Harmful
When organisations can't tell these two things apart, predictable things start to go wrong.
**Expectation misalignment.** A team that's genuinely doing AI-assisted development — thinking hard, reviewing carefully, building maintainable systems — gets compared unfavourably to a team vibe-coding their way through a sprint at four times the velocity. Until the first production incident.
**Hiring and skills atrophy.** If vibe coding and AI-assisted development are the same thing, then foundational engineering knowledge stops mattering. Why understand how a database index works if the model can just write the query? This is a dangerous conclusion. The model's query is only as good as the engineer's ability to recognise a bad one.
**Tooling evaluation gets muddled.** The tools that make vibe coding productive (fast generation, high acceptance rate, minimal friction) are not the same tools that make AI-assisted development productive. A serious engineering team needs tools with good diff views, reliable context handling, testability, and tight feedback loops. Choosing tooling based on vibe-coding benchmarks is like choosing a scalpel based on how well it slices bread.
**It flattens the skill curve.** Vibe coding is low-floor, low-ceiling. AI-assisted development is low-floor, high-ceiling. When we treat them as equivalent, we lose the incentive to climb.
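The index point can be made concrete with a hedged sketch (the `orders` schema and index name here are invented, and the exact plan wording varies by SQLite version). Both queries below are plausible model output and return the same rows; only an engineer who knows how indexes work will spot that the first wraps the indexed column in a function, which prevents the index from being used and forces a full scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE INDEX idx_orders_created ON orders (created_at)")

# Plausible model output: applying date() to the indexed column means
# SQLite cannot use idx_orders_created, so it scans the whole table.
bad = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders WHERE date(created_at) = '2025-01-01'"
).fetchone()[3]

# The engineer's rewrite: a range predicate on the bare column,
# which the index can satisfy directly.
good = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders WHERE created_at >= '2025-01-01' "
    "AND created_at < '2025-01-02'"
).fetchone()[3]

print(bad)   # e.g. "SCAN orders" (wording varies by SQLite version)
print(good)  # e.g. "SEARCH orders USING COVERING INDEX idx_orders_created ..."
```

Nothing in the generated SQL looks broken; the difference only shows up in the query plan, which is exactly the kind of thing the engineer, not the prompt, has to know to check.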
## The Ownership Question
Here's the sharpest way I know to draw the line: who owns the output?
In vibe coding, the ownership is diffuse. If it breaks, you shrug and reprompt. This is fine when the stakes are low. It's a problem when the stakes aren't.
In AI-assisted development, the engineer owns the output. Fully. The fact that an LLM drafted the function doesn't change that. You reviewed it. You committed it. You deployed it. It's yours.
This isn't a philosophical point — it has practical consequences. Owning the output means you have to be able to reason about it. Which means you have to understand it. Which means the model's draft is the start of a conversation, not the end of one.
That's a fundamentally different relationship with the tool.
## A Note on Ego
I want to be careful here, because there's a version of this argument that slides into elitism. "Real developers don't vibe code" is not what I'm saying. Plenty of vibe coding happens in professional contexts and produces genuinely useful things.
What I'm pushing back against is the industry narrative that AI has "democratised" software development in a way that makes the craft of engineering less relevant. It hasn't. It's lowered the floor, which is great. It has not lowered the ceiling, and it has not changed what's required to operate near it.
The engineers I respect most right now are the ones who've leaned into AI tooling hardest and maintained the strongest grip on what's actually happening in their systems. They're faster, they're more exploratory, they try more things. But they're still engineers.
## Where This Leaves Us
If you're vibe coding, do it consciously. Know what you're trading. Keep it in the right contexts. Don't mistake velocity for quality.
If you're doing AI-assisted development, own the distinction. Push back when someone assumes you're just typing less. Explain why the review step isn't optional. Make the case for what you're actually doing.
And if you're in a position to set team or organisational standards — please, draw the line clearly. The tools available to us right now are extraordinary. But they reward engineers who understand them more than they reward engineers who simply use them.
The vibe is not the product. The product is the product. And someone has to be responsible for it.
I work on test automation tooling — frameworks, MCP servers, and the infrastructure that makes AI-assisted development actually testable. If you're thinking seriously about how AI fits into a professional engineering workflow, I'd love to hear how you're drawing these lines in the comments.