CarGDev

Vibe Coding Is Addictive, and It’s Quietly Destroying Software Quality

Recently I have been using the Claude Code CLI to create some internal projects for myself and get into the vibe-coding area. It is an incredible app, I must say, and I will probably keep using it more and more, but I still have complaints. One, of course, is the price, which is quite expensive, though I won't focus on that; I think LLMs are actually even more costly and we are just not seeing the real prices yet. Companies have to gain customers in the area first, and then they will pass on the real costs.

As I said before, I have this thought: when you're going to code, with money or without, shipping will be really fast, but that doesn't mean the result will be good; it just means I don't need to Google it or Stack Overflow it (if that is a verb).


Reflections on AI-generated code and developer responsibility

I have generated a project with 1,396 directories and 17,121 files, which I have to review one by one, and I have already started. It will take time, but it helps me understand what the LLM is trying to do. I'm talking about Opus 4.5 here, which is the best LLM for coding so far. Recently I watched many videos about how software engineers will disappear. I doubt it, but let's assume for a second that it is true and that developers are people who never evolve with new frameworks.

So companies will generate products either prompted by another LLM or by a few people writing prompts insanely, because, you know:

"Fix this, this time has to work, please."

The reality is that, for many years, we software developers and engineers have said that our work is not about code; coding is just one part of our task. LLMs beat us at typing for sure, though not at creativity.

As I mentioned, I have generated 17,121 files, which is insane. As a developer, I'm lazy; that is why I use NVim. I don't need to think about what I'm doing, I just do it (that is another discussion, but use NVim).


Reviewing structure, trade-offs, and maintainability

At this point, let's talk about the structure. It was actually a mess. I wrote rules within .claude/ to keep the structure as accurate as possible, but even with this, the LLM hallucinates sometimes. So let's analyze what I got as a product:

The good part

The good part that I got from this was:

  • I have the architecture I wanted
  • I wrote a bunch of code insanely fast
  • I can kind of manage Claude Code to either approve or deny the changes
  • Most of the time, the functions are written in modules
  • It follows the agents' files and skills

The issues

As soon as I had to read the code, I found:

  • I have to learn how the code was written. I know the rules, but not how this app was created or coded
  • I have files mixing JavaScript with TypeScript
  • If something is hard to get, Claude will mock the code to make it appear to be working
  • I have imports that literally mix imported functions and types, with duplicates, like:
import {
    functionA,
    functionB,
    type functionC,
    functionC,
    type functionB
} from '@/tools/some'

What those imports should look like instead is:

import {
    functionA,
    functionB,
    functionC
} from '@/tools/some';

import type {
    functionB,
    functionC
} from '@/types/some';

Then I have to move the functions and types into separate files, which will make debugging easier in the future.
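The split I end up doing looks roughly like this (file names and shapes are hypothetical, and both "files" are shown inline in one snippet for brevity):

```typescript
// @/types/some.ts (hypothetical) — type declarations only, consumed via `import type`
export type Role = 'user' | 'admin';
export type SomeResult = { ok: boolean; value: string };

// @/tools/some.ts (hypothetical) — runtime code only, no type declarations
export function functionA(input: string): SomeResult {
  // trim the input and report whether anything was left
  const value = input.trim();
  return { ok: value.length > 0, value };
}
```

With this layout, `import type` lines can never leak runtime code into the bundle, and the duplicated entries disappear because each name lives in exactly one module.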

  • I got functions where the types were imported directly inside a random function at line 450, just because
  • I found code defining the types inline, directly on the line:
state.push({
  role: role as 'user' | 'admin',
  content: content,
});
  • I got functional programming in the same file where I exported a class
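The inline-cast pattern above is easy to fix with a named type. A minimal sketch, with hypothetical names (`Role`, `Message`, `push`) since the original file isn't shown:

```typescript
// A named type makes the contract reviewable in one place
type Role = 'user' | 'admin';

interface Message {
  role: Role;
  content: string;
}

const state: Message[] = [];

function push(role: Role, content: string): void {
  // the compiler checks `role` against Role — no `as` cast needed
  state.push({ role, content });
}
```

Now an invalid role is a compile error instead of a silent cast that only fails at runtime.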

The code is really hard to debug at first glance, has comments everywhere, and overwrites logic elsewhere. I found issues like the exact same logic duplicated across multiple functions and files; that part isn't scalable. Again, it looks super good because it is fast, but it is not maintainable.
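The fix for the duplicated logic is the classic one: extract a single helper and reuse it. A sketch with hypothetical names, standing in for the copy-pasted validation I keep finding:

```typescript
// One source of truth instead of the same rule pasted into every handler
function normalizeEmail(raw: string): string | null {
  const email = raw.trim().toLowerCase();
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email) ? email : null;
}

function registerUser(email: string): boolean {
  return normalizeEmail(email) !== null; // reuse, don't re-implement
}

function inviteUser(email: string): boolean {
  return normalizeEmail(email) !== null; // same rule, same place
}
```

When the rule changes, it changes once; with the LLM's version, I would have to hunt down every copy across 17,121 files.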


Where velocity creates risk

There will be people who tell me that companies want to ship faster, be the best on the market, and stay competitive; stay on the podium ahead of their competitors. I agree with all of that, and companies need revenue year after year to keep their business running, but being quicker is not the same as getting quality. Software is entropic and changes over time. Products vibe-coded today will be a pain in the near future; a product does not need to be 10 years on the market for us to say it is not scalable. As soon as customers start coming to your application, a lot of issues will arise, and with them could come vulnerabilities.

Vulnerabilities are another issue with LLMs. We write software to define an architecture and add layers to the application, but LLMs seem to hate layers and find a way to be more "effective": it does not matter if they need to connect the database directly to the UI, as long as it works and was fast. That is why LLM-generated code has a lot of insecure patterns, and why, with something as simple as an emoji, an LLM can fall for prompt injection.
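The layers I mean look roughly like this. A minimal sketch (all names hypothetical): the UI only sees a service, the service only sees a repository interface, and only the repository knows about storage.

```typescript
// Repository layer: the only place that knows how data is stored
interface UserRepository {
  findName(id: number): string | undefined;
}

class InMemoryUserRepository implements UserRepository {
  private rows = new Map<number, string>([[1, 'Ada']]);
  findName(id: number): string | undefined {
    return this.rows.get(id);
  }
}

// Service layer: business rules live here, not in the UI or the repository
class UserService {
  constructor(private repo: UserRepository) {}
  displayName(id: number): string {
    return this.repo.findName(id) ?? 'anonymous';
  }
}

// "UI" layer: only ever talks to the service
const service = new UserService(new InMemoryUserRepository());
```

Wiring the UI straight to the database skips exactly the layer where validation, authorization, and fallbacks are supposed to live.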

That is because LLMs are built to generate content. They are super good at predicting what the next token needs to be, given the previous tokens, to sound coherent, cohesive, and trustworthy. When they mock data to make the app run, they are betting on the probability that the code will work. I have also worked with LLMs that review the code I write, and it is the same story: the LLM loses context over time. I've seen reviews where I implemented the changes it suggested in the PR comment, and then it complains about the very suggestion it made a couple of commits ago.


When to rely on AI, and when not to

This is not easy, because prompting feels good and gives you the feeling you are doing the best thing ever. Here is when I suggest avoiding the AI:

  • When you want to create an application in a language you don't know. It is easy for me to review code written in TypeScript because it is the language I primarily know, but if I have to check code generated by the LLM in Rust, I'll probably say yes to everything.
  • When you are not able to understand what the code and the API mean. Learn how to do it by yourself first, before creating one; the LLM will give you anything, and you will be very proud of the result.
  • Review every single line generated by the LLM. It is cool to get a lot of lines that look colorful on the IDE, but read every line to see if that makes sense.
  • Read about how things work and be curious. Educate yourself; you don't need to build everything you learn, but you have to get an idea of how everything works. If tomorrow you are on a plane and only have the flight time to finish a piece of code, you have to be able to do it by yourself.

Now, here is when I suggest it's cool to use the AI:

  • When you have expertise in the language
  • When you know the product and understand the business logic
  • When you want a quick explanation of how the code is written

Why understanding comes first

The issue is not that LLMs can code; it is how we use them to generate that code. How does the LLM help me be better tomorrow? Everything that is discovered or invented has to make us better. If in 10 years we see that AI was really a bad thing (like social media), then even if it is the best tool ever, it would be useless; it wouldn't matter because, as humans, the tool isn't helping us to be better or stay in a better position.

Use all the tools out there, but first learn how to be a software engineer without any tools. A brief story: when I was in middle school, I started learning algebra, and there were calculators that solved equations; I never used one of those. I wanted to know how the equations work. I didn't just use the quadratic formula, I learned how we got the quadratic formula, and then everything clicked. The same will happen with LLMs for me: if I don't know how to code in some language, I first need to learn it before speeding myself up.
