When AI-generated code doesn't work, most developers do the same thing: paste the error back and say "fix it."
That's the slowest way to debug. Here are five questions I ask instead; each one cuts straight to the root cause.
1. "What assumptions did you make about the input?"
AI models fill gaps silently. If your function receives null and the AI assumed it would always get a string, the fix isn't in the logic; it's in the assumption.
Before fixing the bug, list every assumption you made about:
- Input types and shapes
- Environment variables
- Dependencies being available
- State from previous steps
This single question catches about 40% of my AI bugs.
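Here's what that kind of assumption bug looks like in practice. This is a minimal Python sketch; the function name and the None-maps-to-empty-string policy are my own illustrative choices, not from any particular codebase:

```python
def normalize_name(name):
    """Normalize a user-supplied name for comparison."""
    # Hidden assumption: `name` is always a str.
    # A None from an optional form field crashes here, and no
    # amount of tweaking the logic below fixes that.
    return name.strip().lower()


def normalize_name_fixed(name):
    """Same logic, but the input assumption is stated and handled."""
    if name is None:
        return ""  # the null policy is now an explicit decision
    return str(name).strip().lower()
```

The fix isn't a smarter `strip()` call; it's deciding up front what a missing value should mean.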
2. "Which part of this code did you copy vs. generate fresh?"
AI assistants blend memorized patterns with novel generation. The bugs almost always live at the seam, where a memorized snippet meets your specific codebase.
Ask the AI to annotate which parts are pattern-matched vs. custom. The custom parts need the closest review.
3. "What would make this fail silently?"
Loud failures are easy. Silent failures ship to production. I ask this question before every merge:
Review this code for silent failure modes:
- Swallowed exceptions
- Default values that mask errors
- Race conditions that only fail under load
- Off-by-one errors that pass small test cases
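To make the first two items concrete, here's a small Python sketch of a swallowed exception plus a masking default. The function names and the config-loading scenario are hypothetical, chosen only to illustrate the pattern:

```python
import json


def load_config(path):
    # Silent failure: every error, including a corrupt file, is
    # swallowed and replaced with a default. The caller can't tell
    # "no config" from "broken config".
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return {}


def load_config_loud(path):
    # Louder version: only the documented case (file absent) maps
    # to the default. A corrupt file raises and gets noticed.
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```

The quiet version ships fine and then serves empty config in production the day the file gets corrupted.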
4. "Show me the simplest input that breaks this."
Instead of asking "is this correct?", ask for a minimal failing case. AI models are surprisingly good at adversarial testing when you ask directly:
Generate the smallest possible input that would cause
this function to return an incorrect result or throw
an unexpected error.
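Here's the kind of bug this prompt catches, as a Python sketch (the function and the scenario are invented for illustration). The buggy version passes every "normal" test; the smallest breaking input is `n = 0`:

```python
def last_n(xs, n):
    """Return the last n elements of xs."""
    # Passes the obvious cases: last_n([1, 2, 3], 2) == [2, 3].
    # But xs[-0:] is the same as xs[0:], so n == 0 returns the
    # whole list instead of an empty one.
    return xs[-n:]


def last_n_fixed(xs, n):
    """Same intent, correct for n == 0 and n > len(xs)."""
    return xs[max(len(xs) - n, 0):]
```

Asking for the minimal failing input surfaces `n = 0` directly, instead of hoping a hand-written test happens to cover it.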
5. "If you were mass-reviewing this code, what would you flag?"
This reframes the AI from author to reviewer. Authors defend their code; reviewers find problems. The shift in role consistently surfaces issues the AI didn't mention during generation.
Pretend you're reviewing this as a pull request from
a junior developer. What would you flag? Be specific:
line numbers and concrete concerns, not general advice.
The Pattern
Notice these questions share a structure: they don't ask "fix it." They ask the AI to think about its own output from a different angle. That's the real debugging skill â not better prompts, but better questions.
I run through these five every time AI code doesn't work on the first try. Most days, question #1 or #4 finds the bug in under a minute.
Which debugging questions do you ask your AI assistant? Drop your favorites below; I'm always looking for new angles.