With the rise of AI agentic coding, vibe-coding entered our dev life and stirred up quite a storm. The web is full of vibe-coding opinions and rants - some say it's a new dawn, others lash out at it - but this post is neither.
In this post, I would like to share some of my current insights on the process, especially what works for me and how to avoid the many "traps" vibe coding may lead you into.
As I see it, as with many technologies and tools, it is never the tech's fault but rather the wrong usage of it. Unlike other techs, though, vibe coding kinda lures you into using it wrongly. It requires very strict self-discipline.
If we have to, we can separate vibe coders into two groups - those who know and understand the technology the agent uses and have a good architectural mindset, vs. those who just wanna get something working fast without caring what's going on under the hood. We can split them further still - those who know how to articulate their intentions well in plain English and those who don't.
The rise of the "verbal programmer"
Tech has a repeating growth cycle - from something that is unnatural for humans, like typing code, into something that blends smoothly with our natural mental model, like touch screens, voice AI assistants, VR, etc.
The software development world was always dominated by those who could think like a computer - take an idea and translate it into "if" statements and loops. This is not the case anymore.
AI coding requires a different set of skills, ones you might call "soft skills": communicating your intentions verbally. The top requirement is mastering the English language (LLMs are mainly trained on English text - that's a cold fact); if you wish to receive good outcomes, you must master this communication skill.
It is no longer how fast you type or how well you know a syntax, but how accurately you can express your intentions and requirements.
The "verbal programmer" is the one who will get the better results and faster than the "analytic programmer."
I simply love it.
Not just "What" but also "How"
If someone asked me what is the most crucial thing to understand about coding with AI, I would say, "Understand that it is a development assistant, not a code magician."
Many claim that they're frustrated with the results they get. I used to say that too when I first started playing with it, and I think I know why it happens - we tend to assume we only need to tell our agent what to do, but for better results, you also need to tell it how you want it done. And for the "how," you need to know your shit. If you leave it to the agent, you're in for big surprises.
So instead of "get the list of songs from this API and render it as a list," you might wanna say, "get the list of songs using the fetch API and render it in a list by reusing the Song component as the rendering component for a single song, having the song unique ID as the key."
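To make the difference concrete, the second prompt should steer the agent toward something like this - a minimal sketch, where the Song component's props and the /api/songs endpoint are my own assumptions for illustration:

```tsx
// SongList.tsx - roughly the shape the detailed prompt asks for
// (the Song props and the endpoint are hypothetical)
import { useEffect, useState } from "react";
import { Song } from "./Song"; // the existing component we told the agent to reuse

type SongData = { id: string; title: string; artist: string };

export function SongList() {
  const [songs, setSongs] = useState<SongData[]>([]);

  useEffect(() => {
    // native fetch API, as the prompt specified - no extra dependency
    fetch("/api/songs")
      .then((res) => res.json())
      .then((data: SongData[]) => setSongs(data));
  }, []);

  return (
    <ul>
      {songs.map((song) => (
        // the song's unique ID as the key, exactly as requested
        <Song key={song.id} song={song} />
      ))}
    </ul>
  );
}
```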
It is more than ok to ask the agent to give you options you can choose from and create a small pros and cons comparison. In the end, you are the one who needs to decide what direction to go.
If you are not familiar with the solutions offered, it is a great opportunity to learn about them, the first step being to ask the agent why it thinks this is the best solution for the problem at hand.
Which model?
I must admit that I don't have an answer for that, and TBH I don't think there will ever be a good one. The model race is happening as we speak, and almost every day you hear of another model version that is supposed to be better at this or that. The FOMO is overwhelming, and it distracts us from the main objective.
To quote Westley,

> "~~Life~~ Software development is pain, Highness, anyone who says differently is selling something."
Code still remains hard, and creating a complex application is still a challenge.
Like you, I've used a bunch of them, jumping from one to another, and I've managed to get good results from a model considered "poor at coding," and bad results from models that claimed to be good at it.
It's true that models have different training cutoffs and that their weights can be tuned to handle coding better. Still, my 2 cents on the matter - the model you choose is not that critical. Most of us don't deal with cutting-edge tech in our day-to-day, and for the large majority of code generation, the top models are pretty much the same.
I tend to think of it as a guitar player messing around with many overdrive pedals when what he should really be focusing on is playing the riff correctly.
Architectural concerns
Many have said it before - agents don't excel at making good architectural decisions.
They are absolutely right (pun intended).
If we take React, for example, unless you specify otherwise, there is a good chance the agent will attempt to write everything in a single component with no real thought about reuse or separation of concerns (SoC). I don't blame it. How could it know that you planned to use the SongList component in another component? How should it know that the SongList component should not be aware of the logic which fetches the data it renders?
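To sketch the kind of separation I mean (the file and hook names are mine, not a prescription): keep the fetching in a hook and the rendering in a "dumb" component, so SongList can be reused anywhere, regardless of where its data comes from.

```tsx
// useSongs.ts - the data fetching lives in a hook, not in the component
import { useEffect, useState } from "react";

export type SongData = { id: string; title: string; artist: string };

export function useSongs(): SongData[] {
  const [songs, setSongs] = useState<SongData[]>([]);
  useEffect(() => {
    fetch("/api/songs") // hypothetical endpoint
      .then((res) => res.json())
      .then(setSongs);
  }, []);
  return songs;
}

// SongList.tsx - purely presentational; it has no idea where the songs came from
import { Song } from "./Song";
import type { SongData } from "./useSongs";

export function SongList({ songs }: { songs: SongData[] }) {
  return (
    <ul>
      {songs.map((song) => (
        <Song key={song.id} song={song} />
      ))}
    </ul>
  );
}
```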
Another example is creating Express server endpoints. If you don't specify otherwise, there is nothing stopping the agent from creating the routes, the controllers, and the services which handle the request all in the same file.
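Here is a minimal sketch of the layering I would ask for instead - the file names are my own convention, not something the agent will pick by default:

```ts
// songs.routes.ts - routing only: map URLs to controllers
import { Router } from "express";
import { getSongs } from "./songs.controller";

export const songsRouter = Router().get("/songs", getSongs);

// songs.controller.ts - HTTP concerns only: read the request, shape the response
import type { Request, Response } from "express";
import { findAllSongs } from "./songs.service";

export async function getSongs(_req: Request, res: Response) {
  res.json(await findAllSongs());
}

// songs.service.ts - business logic, completely unaware of Express
export async function findAllSongs() {
  return [{ id: "1", title: "Stairway to Heaven" }]; // stand-in for a real data source
}
```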
While determining the next token in creating a component might be "straightforward" for the model, architectural decisions are more complex, and the content the model was trained on probably does not represent your organization's business logic. Chances are it will fail to make the right decisions.
Again, this is not saying the model isn't "smart" (like I heard many say). It simply means that it hasn't got the context it needs to make the right decisions. And the context needed for these kinds of decisions is massive.
Yet, having written these lines, I started wondering... what if the agent writes the routes, controller, and services in the same file? Should we fear spaghetti code in the era of AI?
After all, I can give the spaghetti code to my agent and it will know how to untangle it... right?
I have my answer (spoiler alert - "don't encourage spaghetti code!"), but I'll leave it here for you to ponder as well. Leave your answers in the comments below.
As mentioned before, our agent also doesn't always know how to choose the most suitable solution for the problem. For example, if you tell it to create code for fetching data from a certain endpoint, you will most likely end up with the "axios" lib. Why? Because this is the pattern the model has seen most commonly across many projects - but that does not mean it suits your project. You would likely be just as satisfied with the native fetch API. I, for one, always prefer native APIs over installing third-party libraries. In fact, I have this as a stored "memory" in Cursor:
"The user prefers that the assistant always use native APIs and validate official documentation before implementing custom in-house or third-party solutions."
What works for me
I cannot consider what I'm doing "vibe-coding." For lack of a better term, I will call it "Responsible vibe-coding."
Here's how it goes -
Todos
I first start with a clear and very precise TODO list. Each item can be a simple feature or a bug fix. I believe that everyone has their own list of what should be done, whether it is an MD file with bullets or a Jira board.
This allows me to get a high-level view of what should be done and consider if there are tasks that should come before or after one another. The smaller the task is, the better. Once I have this list sorted out (and it's a dynamic list, so tasks can be added, removed, or reordered in it), I can continue to the next steps.
I found that categorizing the list helps me stay focused on what's important. Here are my categories:
- MVP - The things that the product cannot be completed without. Every task there is mandatory
- Bugs - The list of bugs that I found along the way or that I know of. Completing these tasks is also mandatory
- Refactoring - Anything I would like to reorganize or write in a different way without altering the current functionality. Especially when you're vibe-coding, soon enough you will discover areas where things could have been done differently to improve code reuse, ease future maintenance, etc.
- Nice to have - Well, this goes without saying.
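For illustration, here is what such a file might look like for the songs app used throughout this post (the contents are purely hypothetical):

```markdown
# TODO

## MVP
- [x] Basic playback controls
- [ ] Render the songs list on the home page

## Bugs
- [ ] Duplicate songs appear after a refresh

## Refactoring
- [ ] Extract the data fetching out of SongList into a hook

## Nice to have
- [ ] Dark mode
```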
Some tasks require a plan. A PRD (Product Requirements Document) is in order, and this is the first stage where I ask the AI agent for assistance.
The plan - PRD (or SPEC)
I take the task at hand and ask the agent to create a PRD for it. An example of such a prompt may look like this:
For the task of <task from the todo here>, create a PRD which consists of the following: a brief of the current status, a solution overview, very detailed implementation phases, each containing steps that can be marked once completed. Each phase should be a deliverable phase, meaning the application is still shippable once the phase is completed. There is no need for risk assessments, future planning, or testing recommendations.
Write the PRD to a prd-<task-name>.md under the prd directory.
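The resulting file tends to look something like this - a trimmed, hypothetical example for the same songs app:

```markdown
# PRD: Playlist support

## Current status
Songs can be played individually; there is no way to group them.

## Solution overview
Add a Playlist entity, CRUD endpoints for it, and a playlist view that reuses SongList.

## Implementation phases

### Phase 1 - Data layer (shippable: no UI change yet)
- [ ] 1.1 Add the Playlist model and service
- [ ] 1.2 Expose the CRUD endpoints

### Phase 2 - UI
- [ ] 2.1 Add a playlist view that reuses SongList
```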
Recently, Cursor released a "Plan" mode to their chat, which does a pretty good job, asking leading and clarification questions and producing a good plan for the agent to follow.
When the agent completes this task, you should have a PRD markdown file under the prd folder of your project. I think it makes a lot of sense to have a dedicated directory for those PRDs in your project. They come in very handy when you want to communicate with others and hand over details of the implementation.
The most critical part in this step is to review the PRD.
Given that your task was small and well defined, the PRD should not be a long document. Go over the solution overview and see if it fits your requirements. Check the implementation phases and make sure that the agent did not insert any undesired steps, because it will try.
Ask the agent questions about the decisions it made. Don't let it get away with the "You're absolutely right!" BS. Ask it for the pros and cons of a decision you're not sure about and decide accordingly.
Implementing the PRD
When you're satisfied with the PRD, it is time to move forward to the implementation. I usually go with,
Start implementing phase 1.1 <the title for the phase>. Implement this phase only and don't continue until I approve your implementation.
Agents do have the tendency to get carried away (I literally had an agent telling me, "Sorry, I got carried away," when I asked why it continued implementing the next phase).
Once the agent finishes implementing the phase, go over the modified code, review it, and make sure you understand what it did - and if not, ask for explanations (this is, BTW, a great way to learn stuff as you go).
Don't be quick to click that "Keep all" button. Take the time to review the changes; otherwise, you will lose track very quickly. If you don't understand what your agent is doing, you're in for some serious time loss.
Also, if you think the agent modified too many files - more than was needed given your understanding of the architecture - you might be facing a vibe-code smell (see the section on that below). Stop and see what you can do to better architect your code. The conclusions may lead you to add some tasks under the "Refactoring" todo section.
Debugging
Agents tend to insert a lot of console logs and other debugging tools when they generate the code. I don't mind it that much, and I think that when the feature is being implemented, it greatly helps to figure out if it works as expected.
Loggers also help the agent find bugs in the implementation. I found myself, in several cases, giving the agent the log stack from my browser or Node server for it to figure out what went wrong and at what point, and it was very helpful.
Having said that, it would be wise to clean up the massive logging after the implementation is completed, leaving only the crucial log messages.
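One habit that makes this cleanup cheap (my own convention, not something the agent will do unprompted) is to route the noisy logs through a debug flag from the start, so silencing them is a one-line change:

```ts
// log.ts - crucial messages stay on; agent-generated noise is gated behind a flag
const DEBUG = process.env.DEBUG === "true";

export const log = {
  info: (...args: unknown[]) => console.log(...args),            // keep these
  debug: (...args: unknown[]) => DEBUG && console.log(...args),  // off unless debugging
  error: (...args: unknown[]) => console.error(...args),
};
```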
Clean up
Surprisingly enough, the agent seems to avoid discarding unused files.
Many times, I found the agent leaving "dead code" behind and moving on, and I ended up telling it, "Remove any redundant or dead code from the modified files." You'd be surprised at how much junk is accumulated during vibe-coding.
Vibe code smell
I came to the realization that there is such a thing as Vibe-code-smell, and it's important to identify it as soon as possible and act to un-smell it.
Like a traditional code smell, it is an indication of "... weaknesses in design that may slow down development or increase the risk of bugs or failures in the future."
Identifying the code-smell when vibe-coding is a bit more challenging, because you're less intimate with the code being written. But here are a few indications you can keep an eye on -
Too many modified files
As simple as it sounds, this is when your agent performs modifications in too many files - way more than you thought your prompt required. This usually hints that your code design is broken; perhaps separation of concerns is not implemented correctly. In React, for example, it may indicate prop drilling or bad state management, or perhaps a component that needs to be extracted into smaller components and reused properly.
In any case, this is the first indication that your agent is running wild and that you are losing control over what it's doing.
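As one concrete example of the kind of fix that brings the file count back down: if changing the "current song" touches five components because the value is threaded through props, a context collapses those touch points into one. A sketch, with hypothetical names:

```tsx
// CurrentSongContext.tsx - one shared context instead of drilling props through every layer
import { createContext, useContext, useState, type ReactNode } from "react";

type CurrentSong = {
  songId: string | null;
  setSongId: (id: string | null) => void;
};

const CurrentSongContext = createContext<CurrentSong | null>(null);

export function CurrentSongProvider({ children }: { children: ReactNode }) {
  const [songId, setSongId] = useState<string | null>(null);
  return (
    <CurrentSongContext.Provider value={{ songId, setSongId }}>
      {children}
    </CurrentSongContext.Provider>
  );
}

// any component can read or update the value directly - no prop threading
export function useCurrentSong(): CurrentSong {
  const ctx = useContext(CurrentSongContext);
  if (!ctx) throw new Error("useCurrentSong must be used within CurrentSongProvider");
  return ctx;
}
```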
"You're absolutely right!"
Careful now with how you accept this warm compliment from your agent. The model behind the agent is, in the end, a token-resolver machine; if you suggest an idea to it, that idea becomes part of its context, and it will attempt to justify it in the next resolved tokens. Although you are very smart, you cannot be absolutely right all the time.
Ask the agent to debate you - to show the pros and cons of your suggestion, and maybe to offer other options with comparisons.
Too many iterations
Sometimes you will see the agent going into a loop of trying to solve an issue, saying that it found the solution, implementing it, checking it, realizing it was not solved, and then repeating.
For me, after the third attempt, I stop it and look at the code myself, with the agent as an assistant in debugging. Read what the agent outputs in the chat to make sure it did not get carried away beyond the scope of the issue at hand. This is also a good place to say that I'm not in favor of giving the agent free permission to run any command or tool at will. I always want to approve what it is about to execute when it's not just modifying code. Aside from avoiding the risks involved, this gives me good control to stop it from spiraling into endless iterations.
In conclusion
Responsible Vibe Coding is not about resisting the new way of developing - it's about embracing it with awareness, discipline, and ownership. The power of AI-driven coding agents is undeniable, but that power comes with a responsibility: to stay in control, to think architecturally, and to communicate intentionally. The line between "let the agent handle it" and "guide the agent effectively" is thin, and learning to walk it makes all the difference between chaos and craftsmanship.
Vibe coding can make us faster, but only if we remain thoughtful. It can make development smoother, but only if we don't surrender our understanding of what's happening under the hood. The agent can generate the code, but it's still our job to ensure it makes sense - structurally, logically, and contextually.
In the end, Responsible Vibe Coding is not just a method - it's a mindset. It's about shifting from "prompt and pray" to "plan, guide, and verify." It's about treating the AI as a capable collaborator, not an infallible magician. When done right, it brings out the best in both human reasoning and machine efficiency. When done carelessly, it amplifies the very weaknesses we once hoped it would fix.
So keep your architecture clean, your prompts precise, and your curiosity alive. Code responsibly, vibe intentionally.
Cheers
Photo by Lukas Tennie on Unsplash