The worst way to use AI for debugging: paste the error, ask for a fix, apply the first suggestion.
This works maybe 40% of the time. The other 60% of the time, you get a fix that addresses the symptom, the bug comes back in a different form, and you're stuck in a loop.
Here's the approach that actually finds root causes.
Give the model what it needs
AI can't debug without information. The minimum useful bug report:
Error: [exact error message, including stack trace]
File: [file path and line number]
Trigger: [what action causes this]
Expected: [what should happen]
Actual: [what happens]
Recent changes: [what changed since this last worked]
Skipping "recent changes" is where most debugging sessions go wrong. The error looks unrelated to what you changed. It's usually not.
Force the model to read the code
AI will infer behavior from function names and type signatures if you let it. Inference is guessing.
Before asking for a fix:
Read [file]. Read [related file]. Do not infer — read the actual implementation.
Describe what the code does, then identify the root cause of the error.
Do not propose a fix until you have identified the root cause.
Separating "find the cause" from "propose the fix" produces better results than asking for both at once.
The four-phase approach
Reproduce — Verify you can reliably produce the error. If you can't reproduce it, you can't verify the fix.
Isolate — Which part of the code is responsible? Ask the model to narrow it down. "Is this in the database layer, the service layer, or the route handler?"
Root cause — What specifically is causing the behavior? Not "the query is failing" but "the query uses user_id from the session, but unauthenticated requests have no session so user_id is undefined."
Fix — Now, and only now, propose a solution. The fix should address the root cause, not the symptom.
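The root-cause example from phase three can be sketched in code. This is a minimal illustration, not anyone's real codebase: `get_orders`, `FakeDB`, and every other name here are invented for the sketch. The point is the difference between patching the symptom and fixing the cause.

```python
# Hypothetical illustration of the root cause described above: the query
# uses user_id from the session, but unauthenticated requests have no
# session, so user_id ends up undefined (None).
class FakeDB:
    """Stand-in for a real database layer (invented for this sketch)."""
    def fetch_orders(self, user_id):
        return [f"order-{user_id}-1"]

def get_orders(session, db):
    # Symptom-level fix: wrap db.fetch_orders in try/except and return [].
    # Root-cause fix: unauthenticated requests must never reach the query.
    if not session or session.get("user_id") is None:
        raise PermissionError("authentication required")
    return db.fetch_orders(session["user_id"])
```

An authenticated call returns orders; a missing session now fails loudly at the boundary instead of sending an undefined `user_id` into the query layer.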
Watch for the shotgun approach
Some AI-generated fixes change multiple things at once. "I updated the query, added null checking, and also fixed the session handling." This tells you nothing about which change fixed the bug — and means you're carrying untested changes.
Ask for one change at a time:
Propose the minimal change that addresses the root cause. One change. I will test it before applying anything else.
Verifying the fix
After applying: run the exact test case that was failing. Then run adjacent test cases — bugs often hide behind each other.
"The tests pass" is not the same as "the bug is fixed." Confirm the specific behavior that was broken is now correct.
When AI debugging stalls
If you're three iterations in and still not finding it, stop using AI and use your debugger. Set a breakpoint, inspect the actual values at runtime. The model is working from static code; the debugger shows you what's actually happening.
AI is good at understanding code. It's not a substitute for observing runtime behavior.
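A minimal sketch of that gap, with all names invented: statically, `load_profile` looks like it returns a user id, but at runtime an empty request yields `None`. A breakpoint at the marked line shows you the actual value instead of the inferred one.

```python
def load_profile(request):
    # Statically this "looks like" an int; at runtime it may be None.
    user_id = request.get("session", {}).get("user_id")
    # breakpoint()  # uncomment, rerun the failing request, then: (Pdb) p user_id
    return {"user_id": user_id}

print(load_profile({}))  # an unauthenticated request: user_id is None
```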
More dev workflow patterns: builtbyzac.com/power-moves.html.