You know the trick — explain your code to a rubber duck and the bug reveals itself. Turns out it works on AI too, except you make the AI be the duck.
## The Problem
Your AI assistant returns code that looks right. It runs. But something about it feels off — maybe it picked a weird data structure, ignored an edge case, or used a pattern you didn't expect. You can't tell if it's wrong or just different.
Most people either accept it or start over. There's a better move.
## The Rubber Duck Prompt
After the AI generates code, hit it with this:
```
Before I review this, walk me through your decisions:
1. Why did you choose this data structure?
2. What alternatives did you consider and reject?
3. What edge cases did you think about?
4. What assumptions are you making about the input?
5. What would break first if requirements change?
```
That's it. Five questions. The AI is forced to justify every choice.
## Why This Works
When an LLM generates code, it's pattern-matching from training data. It doesn't know why it picked Map over Object — but when you ask, it reconstructs the reasoning. And in that reconstruction, gaps surface:
- "I assumed the input is always sorted" → it wasn\'t
- "I chose a flat array for simplicity" → but you need O(1) lookups
- "I didn't consider empty input" → obvious bug
The explanation becomes a free code review.
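To make the first failure mode concrete, here's a minimal sketch (the function and data are my own illustration, not code from any real session): a lookup helper that silently assumes sorted input — exactly the kind of thing question 4 drags into the open.

```python
import bisect

def find_user_index(user_ids: list[int], target: int) -> int:
    """Return the index of target in user_ids, or -1 if absent.

    Hidden assumption: user_ids is sorted. bisect gives O(log n)
    lookups, but on unsorted input it returns wrong answers silently
    instead of raising an error.
    """
    i = bisect.bisect_left(user_ids, target)
    if i < len(user_ids) and user_ids[i] == target:
        return i
    return -1

# Sorted input: works as intended.
assert find_user_index([3, 7, 9, 12], 7) == 1

# Unsorted input: 7 is present at index 3, but the lookup misses it.
assert find_user_index([12, 3, 9, 7], 7) == -1
```

Nothing crashes, no exception is thrown — the bug only exists in the gap between what the code assumes and what your data guarantees. That's why asking for the assumptions beats re-reading the code.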
## Real Example
I asked Claude to write a rate limiter. It returned a token bucket implementation. Looked fine. Then I ran the Rubber Duck Prompt:
"I chose a token bucket because it handles bursts well. I assumed requests are single-threaded..."
Stop. My service is multi-threaded. That assumption would have caused a race condition in production. Caught in 30 seconds, not 3 hours of debugging.
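For context, the fix is small once the assumption is visible. Here's a hedged sketch of a thread-safe token bucket (names and interface are my own, not the code from that session): a single lock guards the refill-and-spend sequence so two threads can't both claim the last token.

```python
import threading
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, safe for multi-threaded use."""

    def __init__(self, capacity: float, refill_rate: float) -> None:
        self.capacity = capacity        # max tokens the bucket holds
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = capacity
        self.last_refill = time.monotonic()
        self._lock = threading.Lock()   # guards refill + spend as one step

    def allow(self) -> bool:
        # Without the lock, two threads could both see tokens >= 1 and
        # both decrement -- the race the Rubber Duck Prompt surfaced.
        with self._lock:
            now = time.monotonic()
            self.tokens = min(
                self.capacity,
                self.tokens + (now - self.last_refill) * self.refill_rate,
            )
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

bucket = TokenBucket(capacity=2, refill_rate=1.0)
assert bucket.allow() and bucket.allow()  # burst of 2 allowed
assert not bucket.allow()                 # third request is throttled
```

The single-threaded version is identical minus the `with self._lock:` line — which is exactly why the race would have survived review until the assumption was spoken aloud.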
## When to Use It
- After any generated code longer than ~30 lines
- When the output "looks right" but you didn't specify the approach
- Before merging AI-generated PRs
- When onboarding to unfamiliar code the AI wrote previously
## When to Skip It
- Trivial code (formatting, simple CRUD)
- When you specified the exact approach in your prompt
- When you're prototyping and correctness doesn't matter yet
## Template
Save this as your post-generation step:
```
## Review Gate: Rubber Duck Check
Explain your implementation decisions:
- Data structures chosen and why
- Alternatives considered
- Edge cases handled (and deliberately skipped)
- Assumptions about input/environment
- Fragility points if requirements change
```
## The Takeaway
Don't trust AI code that you can't explain. But you don't have to explain it yourself — make the AI explain it to you. The bugs are hiding in the assumptions, and assumptions only surface when you ask.