I have been building software with AI tools for about two years now. I ship with Claude Code every day. And I still do not vibe code. Here is why that distinction matters more than people think.
The first time I watched someone vibe code, I felt the same thing I feel watching someone drive with their knees. Technically possible. Impressive for about thirty seconds. And then just a matter of time.
I use AI every single day. I am not writing this from some purist position where I compile my own tools and distrust anything generated by a machine. I use Claude to write boilerplate, draft logic, suggest patterns I would have spent an hour looking up. It has made me faster in the ways that were boring to be slow in.
But I do not let it think for me.
There is a difference and it matters more than almost anything else I have learned in the last two years of building with these tools.
What Vibe Coding Actually Is
Vibe coding is not just "using AI to help write code." That is a category error that people make to either defend or attack it. Using AI to help write code is just programming now. That is what the tools are for.
Vibe coding is something specific: it is prompting without understanding, accepting without reading, and shipping without testing. It is the workflow where the developer's job becomes describing what they want and clicking approve.
The output looks like software. It passes the smell test. It runs. And then, three weeks later, something quietly breaks in production and you spend an afternoon staring at code you do not actually understand, written by a model that does not remember writing it.
I have seen this happen to smart people. I have caught myself slipping into it on late nights when I was tired and the model was confident. It is seductive because the short loop feels productive. You say a thing, the code appears, it works. The feedback is immediate and positive.
The cost is invisible until it is not.
The Line I Draw
My workflow goes in one direction: I understand first, then I use AI to execute faster.
That means I write the unit test before I ask the model for the implementation. Not because I am rigorous by nature — I am not — but because writing the test forces me to know what I actually want. It forces me to think about edge cases before I have code that creates attachment to a specific approach. It forces me to have a definition of done that exists outside my head.
When I hand that context to the model, the output is different. Not because the model is smarter — it is the same model — but because I am asking a specific, bounded question instead of a vague, open-ended one. The difference between "write me a function that handles payments" and "write me a function that takes a Stripe webhook payload, validates the signature, extracts the event type, and returns a typed result with this shape" is the difference between code that kind of works and code that actually does the thing.
Then I read what comes back. All of it. Even when it is long. Especially when it is long.
This sounds obvious. It is practiced far less than it sounds.
What AI Is Actually Good At
The honest list of where AI makes me dramatically faster:
Boilerplate that I know the shape of. If I know I need a repository pattern with these five methods, I can describe it precisely and get it in thirty seconds instead of fifteen minutes. I understand what it should look like before I ask. The AI just types faster than I do.
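For instance, a five-method repository contract I could describe precisely up front (the entity and method names here are hypothetical):

```python
from typing import Optional, Protocol

# A hypothetical five-method repository contract. Knowing this shape in
# advance is what makes the request to the model specific and bounded.
class UserRepository(Protocol):
    def get(self, user_id: str) -> Optional[dict]: ...
    def list_all(self, limit: int = 100) -> list[dict]: ...
    def create(self, user_id: str, user: dict) -> None: ...
    def update(self, user_id: str, fields: dict) -> bool: ...
    def delete(self, user_id: str) -> bool: ...

# The kind of boilerplate the model can type out in seconds once the
# contract above is pinned down.
class InMemoryUserRepository:
    def __init__(self) -> None:
        self._users: dict[str, dict] = {}

    def get(self, user_id: str) -> Optional[dict]:
        return self._users.get(user_id)

    def list_all(self, limit: int = 100) -> list[dict]:
        return list(self._users.values())[:limit]

    def create(self, user_id: str, user: dict) -> None:
        self._users[user_id] = user

    def update(self, user_id: str, fields: dict) -> bool:
        if user_id not in self._users:
            return False
        self._users[user_id].update(fields)
        return True

    def delete(self, user_id: str) -> bool:
        return self._users.pop(user_id, None) is not None
```

The review takes seconds because I knew the target before the code existed; I am checking the output against a shape in my head, not reverse-engineering one from the output.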
Surfacing options I had not thought of. I will describe a problem and ask what patterns exist for solving it, not for the model to pick one, but so I have a more complete menu. Then I decide. The model does not know my codebase, my constraints, or my risk tolerance.
Catching things I missed. After I write something, I ask the model to review it — specifically to look for edge cases, error paths I did not handle, security issues I glossed over. It finds real things. Not always, but often enough that I have made it a habit.
Writing tests for logic I just wrote. Once the implementation is done and I understand it, I will have the model write additional test cases. It is good at thinking of inputs I did not try.
What it is not good at: deciding what to build, deciding how to architect something that will need to scale, or writing code I am not equipped to review. When I catch myself in that last situation, I stop and learn the thing first.
The Senior Developer Problem
There is a version of this conversation that gets framed as: AI will replace junior developers but senior developers are safe because they can guide it.
I do not think this is quite right, and I think believing it creates a complacency that is more dangerous than the replacement question.
The thing that makes a senior developer valuable is not primarily the ability to generate correct code. It is the ability to know which code should not exist, which abstractions will turn into debt, which requirements are wrong before you build them. That judgment comes from having been wrong enough times to develop taste.
AI does not have taste. It has pattern completion. It will write you a technically correct solution to the wrong problem with the same confidence it writes a technically correct solution to the right one. It cannot tell the difference.
If you are not developing the judgment — because you are outsourcing the thinking to the model — you are not building toward senior. You are extending the period where you do not yet know what you do not know.
Why This Is Not About Being Anti-AI
I am not arguing for slowing down or using fewer tools. I am arguing for staying in the driver's seat of your own work.
The people I have watched get the most out of these tools are the ones who get more done, not the ones who get more generated. The difference is that they know what done means before they start, and they verify they reached it before they ship.
I write unit tests first. Then integration tests against real systems, not mocks — I learned the hard way that mocked tests can pass while the actual integration is broken. Then end-to-end tests with Playwright for the paths users actually take. And I read everything the model gives me before I commit it.
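The mock problem is easy to demonstrate (the client and field names here are invented): a mocked test encodes my assumption about the response shape, not the real shape, so it keeps passing after the assumption breaks.

```python
from unittest.mock import Mock

# Hypothetical code under test: assumes the user payload has a "name" key.
def get_username(client, user_id: str) -> str:
    return client.fetch_user(user_id)["name"]

# The mocked unit test passes, because the mock returns whatever shape
# I assumed when I wrote it.
def test_get_username_with_mock():
    client = Mock()
    client.fetch_user.return_value = {"name": "ada"}
    assert get_username(client, "u1") == "ada"

# A stand-in for the real system, which actually returns "username".
# The mocked test above never notices; only a test against the real
# integration surfaces the KeyError.
class RealClient:
    def fetch_user(self, user_id: str) -> dict:
        return {"username": "ada"}

test_get_username_with_mock()
```

Both tests are worth having. The mock gives fast feedback on my logic; the integration test is the only one that checks my assumptions against reality.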
That workflow is slower than vibe coding for the first hour. It is faster than vibe coding over any meaningful timescale.
The AI handles the typing. I stay responsible for the thinking.
That is the only arrangement I trust.