Most developers debug AI output by staring at it and hoping the problem jumps out. That's not debugging — that's praying.
Here are the five questions I ask every time an AI assistant gives me something wrong. They take two minutes and consistently save me an hour of back-and-forth.
1. "What Did You Assume That I Didn't State?"
This is the single most powerful debugging question. AI assistants fill gaps with assumptions — sometimes reasonable, sometimes catastrophic.
```
Before implementing, list every assumption you're making
about the input format, error handling, and deployment environment.
```
Last week, my assistant assumed a JSON payload would always have a user field. It didn't. This question would have caught it in 10 seconds.
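The anecdote above can be sketched in code. This is a hypothetical handler (the `user` and `name` field names are assumptions taken from the story, not real code from the incident) showing how surfacing the assumption turns a buried `KeyError` into an immediate, readable failure:

```python
# Hypothetical payload handler illustrating the missing-"user" bug.
def get_username(payload: dict) -> str:
    # The silent assumption: payload always has a "user" key.
    # Making it explicit fails fast with a clear message instead of
    # crashing somewhere deeper in the call stack.
    user = payload.get("user")
    if user is None:
        raise ValueError('payload is missing the "user" field')
    return user["name"]
```

Calling `get_username({"items": []})` now raises a `ValueError` that names the missing field, which is exactly the kind of gap question #1 is meant to surface before the code ships.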
2. "Which Part of My Prompt Are You Ignoring?"
Long prompts have a dirty secret: models drop constraints. Especially ones in the middle.
```
Re-read my original prompt. List each constraint I specified.
For each one, confirm whether your output satisfies it.
```
I've caught dropped requirements this way at least twice a week. It's not the model being dumb — it's context window attention decay.
3. "What Would Break This?"
Instead of reviewing for correctness, review for failure modes.
```
List 3 inputs that would cause this code to fail,
throw an exception, or produce incorrect output.
```
This turns your assistant from a code generator into a code reviewer. The failure cases it generates are usually the exact edge cases you'd miss in manual review.
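Here is a minimal sketch of what that failure-mode review looks like in practice, using a deliberately naive `average` function as a stand-in (the function and inputs are my own illustration, not from the article):

```python
def average(values):
    # Naive implementation with no input validation.
    return sum(values) / len(values)

# Three inputs the "what would break this?" question typically surfaces:
failing_inputs = [
    [],            # empty list -> ZeroDivisionError
    [1, "2", 3],   # mixed types -> TypeError from sum()
    None,          # not iterable -> TypeError
]

for bad in failing_inputs:
    try:
        average(bad)
    except (ZeroDivisionError, TypeError) as exc:
        print(type(exc).__name__)
```

Each of those three inputs is cheap to check once named, but easy to overlook when you're only reading the happy path.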
4. "Show Me the Data Flow, Step by Step"
When output is wrong but you can't see why, make the model trace its own work.
```
Walk through this function with the input: {"items": []}
Show me the value of each variable at each step.
```
This is rubber-duck debugging, but the duck actually talks back. I've found off-by-one errors, null reference bugs, and logic inversions this way.
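A trace like the one the prompt asks for looks something like this. The function is a hypothetical cart-total example of my own, annotated step by step for the empty-items input:

```python
# Hypothetical function, traced the way question #4 asks.
def total_price(payload):
    items = payload["items"]                    # step 1: items = []
    subtotal = sum(i["price"] for i in items)   # step 2: subtotal = 0
    discount = subtotal * 0.1 if subtotal > 100 else 0  # step 3: discount = 0
    return subtotal - discount                  # step 4: returns 0
```

Forcing the model to write out each intermediate value is what exposes the bug: if you expected an empty cart to be rejected rather than priced at zero, the trace makes that visible at step 2 instead of in production.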
5. "What's the Simplest Version That Works?"
When AI output is overcomplicated, this question cuts through the noise.
```
Rewrite this to be the simplest possible implementation
that passes all the requirements. Remove every line that
isn't strictly necessary.
```
Nine times out of ten, the simplified version is not only shorter — it's more correct. Complexity is where bugs hide.
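A toy before-and-after (my own illustration of the pattern, not taken from a real session) shows what "remove every line that isn't strictly necessary" tends to produce:

```python
# Overcomplicated version, typical of verbose AI output:
def is_even_complex(n):
    result = False
    if n % 2 == 0:
        result = True
    else:
        result = False
    return result

# The simplest version that passes the same requirement:
def is_even(n):
    return n % 2 == 0
```

Both behave identically, but the short version has far fewer places for a mutation or branch bug to hide, which is the point of the question.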
The Workflow
I don't ask all five every time. My rule:
- Output looks wrong → Start with #1 (assumptions) and #3 (break it)
- Output is subtly off → Use #4 (data flow trace)
- Output is bloated → Jump to #5 (simplify)
- Output ignores instructions → #2 (constraint check)
These aren't magic prompts. They're the same questions a senior developer asks during code review. The difference is that you're making the AI do the work instead of doing it yourself.
Five questions. Two minutes. One hour saved. Every day.