Rubber duck debugging works because explaining a problem forces you to think clearly. AI can do the same thing — but better, because it asks follow-up questions.
Here's a workflow I use when I'm stuck on a design decision or a gnarly bug. Takes about 10 minutes and consistently gets me unstuck.
Step 1: Dump your context, not your question
Most people open ChatGPT and ask "how do I fix X?" That's too narrow. Instead, give full context first:
```
I'm working on [system/feature]. Here's what I'm trying to accomplish: [goal].
Here's what I've tried: [approach 1], [approach 2].
Here's where I'm stuck: [specific blocker].
Don't give me a solution yet. Ask me clarifying questions until you understand the problem fully.
```
That last line is the key. Forcing the model to interrogate you before answering surfaces assumptions you didn't know you were making.
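If you use this template often, it's worth turning it into a tiny helper so that crucial last line never gets dropped. A minimal sketch in Python (the function name and field names are my own illustration, not from any tool):

```python
# Sketch: fill the Step 1 context-dump template from structured fields.
# All names here (context_dump, its parameters) are hypothetical.

CONTEXT_DUMP_TEMPLATE = """\
I'm working on {system}. Here's what I'm trying to accomplish: {goal}.
Here's what I've tried: {tried}.
Here's where I'm stuck: {blocker}.
Don't give me a solution yet. Ask me clarifying questions until you \
understand the problem fully."""

def context_dump(system: str, goal: str, tried: list[str], blocker: str) -> str:
    """Build the Step 1 prompt, always ending with the 'no solution yet' line."""
    return CONTEXT_DUMP_TEMPLATE.format(
        system=system,
        goal=goal,
        tried=", ".join(tried),
        blocker=blocker,
    )

prompt = context_dump(
    system="our billing service",
    goal="retry failed webhooks without double-charging",
    tried=["idempotency keys", "a dedup table"],
    blocker="retries race with the original delivery",
)
print(prompt)
```

Paste the result into whatever chat interface you use; the point is that the instruction to ask clarifying questions is baked in, not remembered.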
Step 2: Answer its questions honestly
When it asks "what constraints are you working under?" or "what happens if you do X?" — actually answer. Don't shortcut to "just give me the answer." The back-and-forth is the point.
Two or three rounds of Q&A are typically enough.
Step 3: Ask for the devil's advocate take
Once you've landed on a direction, run this:
```
Here's the approach I'm leaning toward: [your plan].
Now argue against it. What are the top 3 reasons this is the wrong call?
```
This is where AI earns its keep. It'll surface edge cases, scalability concerns, or maintenance debt you glossed over. You don't have to agree with all of it — but you should be able to rebut each point.
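The same scripting trick works here: generate the adversarial prompt so you always demand a fixed number of objections, and keep a sheet that forces you to write a rebuttal for each one. A sketch with hypothetical helper names:

```python
def devils_advocate(plan: str, n_objections: int = 3) -> str:
    """Build the Step 3 prompt: state the plan, then demand arguments against it.
    Hypothetical helper -- not from any library."""
    return (
        f"Here's the approach I'm leaning toward: {plan}\n"
        f"Now argue against it. What are the top {n_objections} reasons "
        "this is the wrong call?"
    )

def rebuttal_sheet(objections: list[str]) -> str:
    """Pair each objection with an empty rebuttal slot. If you can't fill
    one in, that objection deserves more thought before you commit."""
    return "\n".join(
        f"{i}. Objection: {obj}\n   Rebuttal: "
        for i, obj in enumerate(objections, 1)
    )

prompt = devils_advocate("cache invalidation via TTLs instead of explicit purges")
print(prompt)
```

The rebuttal sheet is the part people skip: it turns "I read the objections" into "I answered the objections."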
Step 4: Synthesize a decision log entry
End the session with:
```
Summarize our conversation as a short architectural decision record (ADR):
- Context
- Decision
- Alternatives considered
- Consequences
```
Paste that into your PR description or Notion doc. Future-you (and your teammates) will thank you.
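If you want the record in a consistent shape before it lands in a PR, a small formatter helps. A sketch of the four-section ADR layout described above (the function and field names are my own, not a standard API):

```python
def format_adr(title: str, context: str, decision: str,
               alternatives: list[str], consequences: list[str]) -> str:
    """Render the four ADR sections from Step 4 as markdown for a PR description."""
    alts = "\n".join(f"- {a}" for a in alternatives)
    cons = "\n".join(f"- {c}" for c in consequences)
    return (
        f"# ADR: {title}\n\n"
        f"## Context\n{context}\n\n"
        f"## Decision\n{decision}\n\n"
        f"## Alternatives considered\n{alts}\n\n"
        f"## Consequences\n{cons}\n"
    )

adr = format_adr(
    title="Retry webhooks with idempotency keys",
    context="Failed webhook deliveries were silently dropped.",
    decision="Retry with exponential backoff, deduplicated by idempotency key.",
    alternatives=["At-most-once delivery", "Manual replay tooling"],
    consequences=["Consumers must store processed keys",
                  "Handlers must be duplicate-safe"],
)
print(adr)
```

A fixed template also makes ADRs greppable later: every decision in the repo answers the same four questions.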
Why this works
The standard "explain this to me" prompt treats AI as a search engine. This workflow treats it as a thinking partner with an agenda: to stress-test your reasoning before you commit to it.
The difference in output quality is significant — especially for decisions that are hard to reverse.
If you want more structured prompts for engineering decisions, code reviews, and career conversations, I put together a playbook of them here: AI Prompt Playbook for Engineers. Practical, copy-paste ready, no filler.