DEV Community

Dave Mackey


The Dangers of Vibe Coding Part 1: Premature Optimization

A Stronger Hook for the Intro

I wrote this article, but I asked ChatGPT for recommendations on how to improve it. It suggested I needed a stronger hook for the intro, and I enjoyed its suggestion:

"I set out to build a simple browser extension. Two hours later, I was knee-deep in cyclomatic complexity metrics. Welcome to vibe coding with AI." -- ChatGPT.

Your Regularly Scheduled Intro...

Funny, intelligent, and not at all the way I talk...anyways...

One issue I've run into repeatedly with vibe coding is premature optimization. It happened again with ForgetfulMe, a simple browser extension I'm currently vibe coding.

No, I'm not talking about the AI doing premature optimization on its own (though that happens frequently). Instead, I'm talking about the AI tempting me into premature optimization.

"TL;DR: AI coding assistants are great at pointing out problems—but they also make it dangerously easy to procrastinate via premature optimization. In early-stage projects, chasing perfection can stall real progress." -- ChatGPT.

The Temptations of AI Feedback

One of my practices is to regularly ask the AI to create a markdown document listing significant ways the code can be improved, bugs that need fixing, and best practices that aren't being followed. This works well. A little too well.

For example, when I recently asked it to analyze the codebase for dead code, it did so and included some additional tidbits, such as:

A. Overly Complex Functions

Several functions exceed the complexity limit of 10:

High Complexity Functions

  • handleMessage in background.js (complexity: 19)
  • handleSignup in auth-ui.js (complexity: 11)
  • getBookmarks in supabase-service.js (complexity: 16)
  • categorizeError in error-handler.js (complexity: 38)
  • getUserMessage in error-handler.js (complexity: 21)
  • createButton in ui-components.js (complexity: 12)
  • createFormField in ui-components.js (complexity: 11)
  • createListItem in ui-components.js (complexity: 12)
  • toSupabaseFormat in bookmark-transformer.js (complexity: 15)

Impact: High - Reduces maintainability and testability
Recommendation: Break down into smaller, focused functions
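To make that last recommendation concrete, here's a hypothetical sketch (none of this is the actual ForgetfulMe code) of how a sprawling if/else classifier like categorizeError might be broken into small, focused pieces:

```javascript
// Hypothetical sketch only - not the actual ForgetfulMe code.
// A high-complexity classifier is often one long if/else chain; moving the
// conditions into a data table leaves each piece small and testable.
const ERROR_CATEGORIES = [
  { matches: (e) => e instanceof TypeError, category: "programming" },
  { matches: (e) => /network|fetch|timeout/i.test(e.message), category: "network" },
  { matches: (e) => /auth|401|403/i.test(e.message), category: "auth" },
];

// The function's complexity stays constant no matter how many categories exist.
function categorizeError(error) {
  const match = ERROR_CATEGORIES.find(({ matches }) => matches(error));
  return match ? match.category : "unknown";
}
```

Each entry in the table can be unit tested on its own, which is exactly the maintainability and testability the report is pointing at.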

Well, none of what it found is good. It also reported on files exceeding 300 lines, functions with too many parameters, and the list goes on.
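Incidentally, limits like these usually come from a linter rather than the AI itself. A minimal sketch of enforcing them locally, assuming ESLint (complexity, max-lines, and max-params are real ESLint core rules, but the parameter limit of 4 is my guess, since the report doesn't state one):

```javascript
// eslint.config.js - a sketch, not the actual ForgetfulMe configuration.
export default [
  {
    rules: {
      // Warn when a function's cyclomatic complexity exceeds 10
      complexity: ["warn", 10],
      // Warn when a file exceeds 300 lines
      "max-lines": ["warn", 300],
      // Warn when a function takes more than 4 parameters (limit assumed)
      "max-params": ["warn", 4],
    },
  },
];
```

Running the linter yourself makes these reports cheap and reproducible instead of depending on the AI to resurface them.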

At other times it's recommended implementing dependency inversion, updating integration tests - you get the idea.

It has an abundance of ways that I can and should improve the codebase (now if only AI wrote a clean codebase to start with!).

Why It's a Trap

The danger is that I get distracted from what I was actually trying to accomplish (building X feature or fixing Y bug) and spend precious time on refactoring.

Yes, these things need to be done - but especially for early prototyping, building something is more important than building something perfect.

Even as I say that, it sits poorly in my mouth. I like architecting things the right way; I don't like sloppy code. I'd rather take a little longer to implement things well than save some time by writing messy code.

But...

Conclusion

If AI is to be used productively, we have to balance developing features against perfecting code. For side projects and early prototyping, too much optimization is a bad thing. Sure, the AI can do it for us, but (at least currently) this isn't a fast process, and the hours we spend on refactoring are hours not spent building out basic functionality.

What do you think?

"Have you caught yourself vibe coding into a refactoring rabbit hole? How do you balance AI feedback with actual progress?" -- ChatGPT.

What Else Did ChatGPT Do?

  • I followed its suggestion to break up a long paragraph.
  • I also implemented section headings as it suggested, but generally used my own wording.

Top comments (22)

Mei Park

Have you tried different models? Notice any differences? I've found Claude's models to be quite practical.

Rod Falanga (NM DOH) • Edited

I like to use Claude, too. However, even Claude can add unnecessary complexity. Just this week I was using Claude (via GitHub Copilot) to analyze some Blazor code I'm working on, trying to figure out why the buttons on a page weren't working. It gave me some code suggestions in a couple of files, as well as a NuGet package to add. It didn't work. I tried Claude again, got more suggestions, etc. Still didn't work. By now I was thinking this thing was becoming complex, so I backed those changes out. Then I thought, maybe the rendermode was wrong. Sure enough, I had left @rendermode InteractiveAuto out of the Razor file. I put that in and, presto-chango, it worked! One line of code that I neglected to enter, rather than a few dozen lines of code that were useless.

I'll still use Claude, but I'm more cautious of the suggestions it, or any other AI Agent, will give me.

Dave Mackey

Yeah, it's definitely a struggle. I go back and forth on how helpful AI is.

I think another big problem is that the AI doesn't back out code changes when they fail - it keeps piling on change after change after change. It tries restructuring something, that doesn't work, so it refactors something else, and it still doesn't work. By the time it actually makes the fix, it's touched 4 or 5 different areas of code that didn't need to be touched at all.

Rod Falanga (NM DOH)

Man isn't that the truth!! It makes me wonder if I should just create a branch after each interaction with an AI.

Sean Wheeler

I use Claude Sonnet 4 for a lot of the coding I do.

Not sure if I'm alone on this, but the model is really good at UI and front-end development, yet he begins to struggle on the back-end, specifically with security and more complicated user functions.

He'll also sometimes just randomly throw in additional features that I didn't ask for - which can slow overall development time as I have to go through and remove them.

Dave Mackey

Interesting! How are you using Claude? Claude Code? Cursor? Copilot? etc.?

I found Cursor to be somewhat frustrating when it comes to UI, but Claude Code seems to do better.

I've had the random features thrown in as well - quite frustrating. If I see that happening I often roll back the code (in Cursor) and then tell it to implement what I asked while avoiding making any other changes...it's still annoying, but at least I don't have to go through and remove it.

Dave Mackey

I have used a few different models. I also find that Claude's models do quite well. I'm less impressed with OpenAI, and I need to experiment with Gemini more; it feels so foreign.

Prema Ananda

Gemini 2.5 Pro and Claude 4 Sonnet are approximately at the same level.

Schuster Braun

Totally agree with this. AI is an engine for "good ideas" on next steps. I do think, though, that it can identify done states: if you give it a goal (I'm not saying it does the best job), it can judge when it's done. But yeah, I think it'll be harder to triage open-ended good ideas. So maybe have a clear Definition of Done and focus on getting to that point, then do cleanup in another step with its own clear Definition of Done.

Rod Falanga (NM DOH)

I like that idea. It would be good if we could ask AI whether something is done, within the definition of done we've decided upon. And if it is, the AI would say, "Looks good to me," and stop, rather than giving a never-ending list of improvement suggestions.

Dave Mackey

I agree that giving a definition of done is helpful. The problem I've run into is that the AI creates a list of tasks (e.g. 5 code refactors to improve code quality) but falters on maybe 2 of them and requires a lot of handholding. It gets there eventually, but the handholding takes me away from continuing the prototype.

Schuster Braun

I'm working on this as well. Is it about right-sizing the ask? Asks that are too big get bugs; too small feels like handholding. Part of the answer, I know, is task decomposition: you ask it to break down big asks into small ones, which automates away some of the handholding. But I too am trying to keep it out of the black hole of fixing incorrect assumptions.

Dave Mackey

For the newest extension I've been working on (ForgetfulMe) I started off writing it with Cursor and have continued working on it with Claude Code. I've been spending a lot of time refactoring so I told Claude to create fresh specs for the extension based on the current code, provided other guidance on tech stack, methods, etc. and now am starting fresh with Claude.

In a clean branch (I deleted all files) I've placed the docs Claude created, told Claude to read them, and to create a step-by-step todo list...then I'm going to have it try and see what it churns out. Interested to see what happens.

Prema Ananda

Totally agree! This problem is very familiar.
When AI starts diving too deep into details and suggesting tons of "improvements", I actively discourage it. Sometimes I want to scold it for such eagerness, but I hold back 😅
I just undo everything it created and clearly explain: "I need to solve specific task X, not rewrite the entire project. Let's focus only on this."
The key is to immediately cut off AI's attempts to "improve everything around" and keep focus on the current goal.

Dave Mackey

Haha, I give in to the scolding sometimes!

david duymelinck • Edited

If you are a non-technical person, I think vibe coding a prototype is a great way to show developers your idea of the application. A developer, however, should not let AI do prototyping.

I find that when I'm typing I often get other ideas about the solution. Then I stop and explore that idea to assess whether it is the better solution or not. If I let AI do the typing, I remove those moments from my process.

A prototype is like a fixer-upper. The core is good enough to go into production with minor changes. Outside the core, the quality matters less because most of the time it will be rebuilt. And then there are the parts that are in between.
Creating code at different levels of cleanliness is hard to define in a prompt. I've never even attempted it because I have no clue how to do that.

I know I didn't directly respond to the question. But maybe it can help you find an answer.

Dave Mackey

I think there is a lot of wisdom in what you state here. I'm still working on the balance myself. I'm more willing to vibe code on things that aren't important but that I'd like to have now.

david duymelinck

I can agree with your view. When I wrote "a developer should not let AI do prototyping," I was thinking about long-term projects.

If you've got an idea and you want to quickly see it working, AI can be a good tool - with the consequence that you may have to rewrite the whole thing if the idea turns out to be something you want to continue developing.
I think at some point in the process you have to decide whether the idea is a throwaway or a keeper. And if it is a keeper, you should invest in code quality.

between-kittens-and-riots

I get your point, but when I think of optimizing code, I think about making it perform faster when executed rather than things like breaking it up into smaller functions to make it easier to maintain. I'm not entirely focused on the "developer experience" aspect of optimization. It's a priority to me that the code actually does the job while using the least amount of resources, as fast and cleanly as necessary. I'm not certain ChatGPT is great at writing C code, for instance, that is optimized in the sense of getting the job done in the least amount of time.

Dave Mackey

Yeah, I guess I could have called it overengineering or something.

Myr-anna

It's hard to find the balance between efficiency and maintaining your own knowledge base. I've seen too many classmates at my university rely on vibe coding instead of their own knowledge of a topic (I can't say I'm not guilty either). But for someone like me, who falls into the rabbit hole of optimization (and procrastination), AI just lengthens the amount of time I spend per project.

I appreciate you bringing this topic up, because now I feel less alone in my optimization struggles. I think it's important for those who don't know a topic to learn it first and then utilize AI when the time comes. But since it's so new, we don't really know how to regulate it at large scale, especially for our own personal use.

That's just my 2 cents. I hope everyone's projects are going well!

Dave Mackey

I'm planning on writing another post with thoughts on balancing learning / utilizing. I agree that having knowledge on a topic is still incredibly important!