It turns out I was wrong and a simple step fixed it
So, just like most devs, I have been fiddling around with multiple AI agents, mainly the bigger ones like ChatGPT, Grok, Gemini and of course Claude. I have not gotten around to Kimi just yet.
Most of my time, however, has been with Claude. I have even purchased extra credits so I could build more and more and let the agent do most of my work.
But recently I found myself doing what most devs do: jumping straight into implementation and letting the agent just figure it out. And yeah, it works, until it doesn't and you're three hours deep trying to figure out why you keep burning tokens on OK results instead of great ones.
The Fix
The fix that actually changed how I work with Claude: use Plan mode first, and write tests.
Before I let Claude touch any code now, I switch to Plan mode and make it think through the approach. No code yet, just the plan. Once I'm happy with the direction, then we implement. And alongside that, tests. Not after, not "I'll add those later", but during.
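To make "tests during, not after" concrete, here's a minimal sketch of what that looks like in practice. The `slugify` function and its spec are made up for illustration, not from my actual project; the point is that the expected behavior gets pinned down in assertions as part of the plan, so any drift shows up immediately.

```python
import re

# The assertions below are agreed on during planning, before the agent
# writes a line of the implementation. The function body is what the
# agent then fills in against that spec.

def slugify(title: str) -> str:
    # Lowercase, collapse non-alphanumeric runs into single hyphens,
    # and trim hyphens from both ends.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Tests written alongside the plan, not bolted on afterwards.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Plan Mode First  ") == "plan-mode-first"
assert slugify("---") == ""
```

If the agent quietly changes the behavior later, one of these assertions fails instead of the bug surfacing three hours downstream.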
And this is where it gets interesting: Plan mode caught things that the immediate agent did not. Simple stuff like "hey, this component is tightly coupled, we should decouple it before adding to it." Obvious in hindsight, easy to miss when you're just vibing through features.
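The kind of coupling Plan mode flagged looks roughly like this (the names here are illustrative, not from my codebase): a function that constructs its own collaborator is hard to extend or test, while one that accepts the collaborator as a parameter stays decoupled.

```python
# Illustrative example of tight coupling vs. a decoupled version.

class EmailSender:
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

# Coupled: the dependency is hard-wired inside the function, so you
# can't swap it out or test this logic in isolation.
def notify_coupled(to: str, body: str) -> str:
    sender = EmailSender()
    return sender.send(to, body)

# Decoupled: the caller supplies the sender, so a test can pass a fake
# and new sender types slot in without touching this function.
def notify(sender, to: str, body: str) -> str:
    return sender.send(to, body)

class FakeSender:
    def send(self, to: str, body: str) -> str:
        return "fake"

assert notify(EmailSender(), "a@b.c", "hi") == "sent to a@b.c: hi"
assert notify(FakeSender(), "a@b.c", "hi") == "fake"
```

Decoupling first, then adding features, is exactly the kind of step that's obvious on paper and skipped in practice.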
Conclusion
It sounds obvious written out like that. But it's one of those things you have to burn yourself on a few times before it sticks.
The difference in output is noticeable. Claude is significantly less likely to go off the rails when it's committed to a plan upfront, and the tests catch the moments where it quietly breaks something it wasn't supposed to touch.
Two steps that take maybe five extra minutes and save you from a lot of frustrated backtracking.
Try it if you haven't.
