Samuel-Zacharie FAURE

The AI development trap that wastes your time

Has this ever happened to you?

You are asking your AI agent to develop something, correct a bug, or whatnot.

It's completely lost. You're burning a huge amount of tokens and wasting your time. No matter how much you refine your prompts, it just won't do what you're asking.

What's happening, and how can you avoid this annoying cycle?

Take a step back: Why are we using AI in the first place?

Experienced programmers know this: the real gain of AI is not development speed. A good senior can often fix a bug or build a small feature faster than an AI can.

The real gain of AI is in cognitive load reduction.

Writing code is hard, if only because your syntax must be perfect. Even small logic iterations require your brain. And brainpower is a finite resource: you have a fixed amount of thinking in you, which only refreshes when you take a break or sleep.

The real productivity gains of AI come from this cognitive load reduction: because you're using your brain less, you can do much more during your workday.

But this is a delicate balance. How much thinking should you really be doing?

Too much thinking, and you're giving up productivity gains. Not enough thinking, and you're entering the sunk-cost fallacy loop.

This probably happened to you

You're prompting and prompting again, and usually it works great, but today, for some reason, your agent is completely lost.

You realize you would have been done much sooner if you hadn't used AI at all, but you're in too deep by now. Surely the next prompt will finally get it to do what you're asking... Okay, the next one... Okay, the next one for sure...

The thing is, because you didn't invest the initial cognitive load, you are just a little bit too lost. But thinking is hard, and you don't want to spend that cognitive load now, especially since it would mean basically starting all over. After all this time and all those tokens burnt.

So you keep prompting and hoping for the best, and it only gets worse.

What to do in this situation?

Take a step back and a deep breath. Realize the problem is that you haven't engaged your brain enough on this task.

Ask yourself these questions:

Do I understand exactly the specifications I'm trying to implement, or the bug I'm trying to solve?

If not, then take some time to define the specifications or understand the issue better. You can ask for help from the AI, but no coding authorized!

Do I have an exact plan for implementing my changes?

If not, then take some time to think about your implementation plan. Use an atomic git commit workflow if you need to: for example, one commit that reproduces the bug with a failing test, one that fixes it, and one that refactors. Humans should take baby steps when developing, and so should AI.

At what abstraction level should I be prompting right now?

Prompting can be high-level ("Implement this feature") or low-level ("Refactor this method, rename this variable"). High-level prompting is more desirable in terms of productivity gains, but if AI were consistently effective at the high level, we'd all be out of a job by now.

Entering the sunk-cost fallacy cycle generally means you've overestimated the level of prompting you can ask of your model for this specific task. Take it down one or two levels. Right now, you need to take your AI by the hand and gently guide it to the solution.

What other information am I lacking?

Think about any information you need to make your changes that isn't absolutely clear to you at the moment. Use AI to explore the codebase if you need to, or to brainstorm solutions; but again, no coding authorized for now!

Bonus: Work with your agent using TDD

Test-driven development is a great programming method (although I know that's a controversial opinion), but with the popularization of AI agents, it has become (in my opinion) a must-have.

Are your tests successfully failing? Is the bug well reproduced? Are the specifications well defined? If not, start your agent here. Writing tests has never been so quick and easy, thanks to AI.

Just keep in mind that this is the most important part of development, and also the hardest. This is the part where your brain needs to be on full alert: everything should be crystal clear and well defined.

You will also need to be very specific in your prompts about not coding the actual solution, only the tests for now. AI agents have been trained to code solutions, so by default they will try, even when you didn't ask for it.
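To make this concrete, here's a minimal sketch of what "start your agent at the tests" can look like, assuming a Python project with pytest. The module `slugger` and the function `slugify` are hypothetical stand-ins for your own code; the point is that the failing test pins down the bug before anyone writes a fix:

```python
# test_slugify.py -- written (or agent-generated) BEFORE the fix.
# `slugger` and `slugify` are hypothetical placeholders for your own module.
from slugger import slugify

def test_collapses_consecutive_separators():
    # Reproduces the reported bug: consecutive separators should collapse
    # into a single hyphen. This test should FAIL until the fix lands.
    assert slugify("Hello  --  World") == "hello-world"

def test_strips_leading_and_trailing_separators():
    # Pins down the expected behavior at the edges of the string.
    assert slugify("--Hello World--") == "hello-world"
```

Run `pytest` first, confirm the tests fail for the right reason, and only then let the agent touch the implementation.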

If nothing works

Close your AI agent. Go back to the whiteboard. Reset both your context and the agent's.

You will manage just fine! Just stop wasting your time on a sunk-cost fallacy cycle. It's a hard pill to swallow, but nothing you can't handle. Stay in the command chair, and it will all work out.

An old proverb says: "Alcohol is a great servant, but a terrible master." The same is true for AI. Be the master of your AI, not its servant.

Top comments (22)

Cesar Aguirre

Great point about cognitive load.

Do I understand exactly the specifications I'm trying to implement, or the bug I'm trying to solve?
Do I have an exact plan for implementing my changes?
At what abstraction level should I be prompting right now?
What other information am I lacking?

These four questions made me think AI is helping us rediscover coding.

Now AI agents/bots/chats are forcing us to follow a process to get better results from the tool. But these four questions have been true all along: understanding what to write, coming up with an execution plan (even if that's outlining a solution with comments), decomposing a problem into smaller ones, and asking clarifying questions. That's been coding since the beginning of time.

Great piece!

Mikey Dorje

The real gain of AI is in cognitive load reduction.

💯

Smyekh David-West

These are great points!

I’ve come to realise that if the AI seems lost, it’s often a reflection of the user being lost as well. Barring, of course, the occasional day it decides to hallucinate. And don’t we all have our off days?

The advice to "take some time to think about your implementation plan" rings particularly true. Too often, we begin prompting without a concrete strategy, simply hoping the AI will grasp the gist of our problem and deliver a solution.

I agree that prompts must be premeditated. I find it effective to state the primary goal in the initial query to set the tone and establish a shared context. This ensures we are on the same page from the start. Ultimately, the session's context and your own clarity on the problem are the most critical components for success.

Ahmad Firdaus

When using AI, I often start by determining how the project should be done: brainstorming ideas, creating project checkpoints as controls, and using code assist to prevent typos and so on. AI is also helpful for creating a project log and raw documentation for speed reading.

Basically, we're the ones who choose which AI to use and what for. It's good for creating a frame and drawing a sketch, but the finished product depends on our personal touch.

After several tries with high-level prompting, AI has a tendency to repeat the same mistake over and over, or sometimes hallucinates a job that isn't needed. Once, the AI couldn't find a file that was right there, just because the text encoding was unusual (UTF-16). It cycled a few times before finally giving up, and then, voilà, Stack Overflow had the answer.

Loic Devaux

I've experienced those cycles many times. If I can't suggest an alternative approach to the AI, I'll restart it with a new context... but sometimes insisting pays off and tasks get resolved after 7 or 8 debugging cycles, so it's really difficult to make the decision to "reset". I think this situation will evolve once AI can run autonomous tests against different kinds of systems; for web apps, for example, tools like the Chrome DevTools MCP can give the AI the extra insights it needs. I think we're not far from having AI directly in the runtimes. Feels like Tron.

Zeb

This is why I do not vibe code. It's just not healthy. I try to use AI as a thinking partner, not a servant. If the AI starts to spiral, I go back to old-fashioned Googling or using the debugger. Use your brain, people!

JLHfr

I'm still very new to the dev space. But one of the first things I realized, as many of you have already mentioned, is that AI isn't a reliable source if you don't understand the code yourself. It only adds more confusion to the whole process if you think you can rely on it completely.

shemith mohanan

Brilliant insight 👏 — loved the reminder that AI’s real value is in reducing cognitive load, not just speeding up work.
The “sunk-cost prompting loop” part hit hard — and that closing line, “Be the master of your AI, not its servant,” is gold. 🔥

Parag Nandy Roy

Love this take... AI isn't a speed booster, it's a mental load reducer.

Jeremy Strong

I have a few ways of approaching this. Often, being stuck in this cycle indicates that there's something wrong with the architecture. It's not simple enough. It doesn't flow right. When the AI can't write tests, it's an indication of the same thing.

I generally start with that: ask the AI if the overall approach is right. WHY are we stuck? Help me rewrite it. Don't try to save lines of code; spend them on simple reasoning.

Another approach I use is to build the simplest possible template that represents the logic I'm trying to implement. I used this when troubleshooting a nested drag-and-drop feature. I went round and round inside the application code. Every time around, the AI lost sight of everything but the problem and broke something.

So I had it build a simple, isolated template. Got that working, and had it refer to the template to fix my main logic.

Both of these approaches work. It can feel like you're losing progress because you aren't attacking the problem you see: the squiggle, the failing test. But taking a step back lets you reason around the problem.
