<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tal Vardi</title>
    <description>The latest articles on DEV Community by Tal Vardi (@tal_vardi_d7f3ffe2d1f9cdf).</description>
    <link>https://dev.to/tal_vardi_d7f3ffe2d1f9cdf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3914896%2Fb2a4ead1-5e63-481e-9e32-f64d1ef69d79.png</url>
      <title>DEV Community: Tal Vardi</title>
      <link>https://dev.to/tal_vardi_d7f3ffe2d1f9cdf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tal_vardi_d7f3ffe2d1f9cdf"/>
    <language>en</language>
    <item>
      <title>How to Use AI as a Rubber Duck That Actually Pushes Back</title>
      <dc:creator>Tal Vardi</dc:creator>
      <pubDate>Thu, 07 May 2026 05:04:30 +0000</pubDate>
      <link>https://dev.to/tal_vardi_d7f3ffe2d1f9cdf/how-to-use-ai-as-a-rubber-duck-that-actually-pushes-back-3gan</link>
      <guid>https://dev.to/tal_vardi_d7f3ffe2d1f9cdf/how-to-use-ai-as-a-rubber-duck-that-actually-pushes-back-3gan</guid>
      <description>&lt;p&gt;Rubber duck debugging works because explaining a problem forces you to think clearly. AI can do the same thing — but better, because it asks follow-up questions.&lt;/p&gt;

&lt;p&gt;Here's a workflow I use when I'm stuck on a design decision or a gnarly bug. Takes about 10 minutes and consistently gets me unstuck.&lt;/p&gt;




&lt;h2&gt;Step 1: Dump your context, not your question&lt;/h2&gt;

&lt;p&gt;Most people open ChatGPT and ask "how do I fix X?" That's too narrow. Instead, give full context first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I'm working on [system/feature]. Here's what I'm trying to accomplish: [goal].
Here's what I've tried: [approach 1], [approach 2].
Here's where I'm stuck: [specific blocker].
Don't give me a solution yet. Ask me clarifying questions until you understand the problem fully.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That last line is the key. Forcing the model to interrogate you before answering surfaces assumptions you didn't know you were making.&lt;/p&gt;
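&lt;p&gt;If you find yourself reusing this template, it's trivial to script. A minimal sketch in Python (the function name and example values here are mine, not part of any tool or the workflow itself):&lt;/p&gt;

```python
def context_dump_prompt(system, goal, attempts, blocker):
    """Build the Step 1 prompt: full context up front, solution withheld."""
    tried = "; ".join(attempts)
    return (
        f"I'm working on {system}. Here's what I'm trying to accomplish: {goal}.\n"
        f"Here's what I've tried: {tried}.\n"
        f"Here's where I'm stuck: {blocker}.\n"
        "Don't give me a solution yet. Ask me clarifying questions "
        "until you understand the problem fully."
    )

# Hypothetical example values, purely illustrative.
prompt = context_dump_prompt(
    "a job queue",
    "exactly-once delivery",
    ["visibility timeouts", "idempotency keys"],
    "duplicate sends when a worker crashes mid-job",
)
```

&lt;p&gt;Paste the result into whichever chat you use; the point is that the "ask me questions first" instruction always comes along for the ride.&lt;/p&gt;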




&lt;h2&gt;Step 2: Answer its questions honestly&lt;/h2&gt;

&lt;p&gt;When it asks "what constraints are you working under?" or "what happens if you do X?" — actually answer. Don't shortcut to "just give me the answer." The back-and-forth is the point.&lt;/p&gt;

&lt;p&gt;Two or three rounds of Q&amp;amp;A are typically enough.&lt;/p&gt;




&lt;h2&gt;Step 3: Ask for the devil's advocate take&lt;/h2&gt;

&lt;p&gt;Once you've landed on a direction, run this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Here's the approach I'm leaning toward: [your plan].
Now argue against it. What are the top 3 reasons this is the wrong call?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where AI earns its keep. It'll surface edge cases, scalability concerns, or maintenance debt you glossed over. You don't have to agree with all of it — but you should be able to rebut each point.&lt;/p&gt;
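&lt;p&gt;Same scripting trick applies here. A tiny sketch (again, the helper name is mine):&lt;/p&gt;

```python
def devils_advocate_prompt(plan):
    """Build the Step 3 prompt: ask the model to argue against your plan."""
    return (
        f"Here's the approach I'm leaning toward: {plan}\n"
        "Now argue against it. What are the top 3 reasons this is the wrong call?"
    )

# Hypothetical plan, purely illustrative.
q = devils_advocate_prompt("dedupe with idempotency keys in Redis")
```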




&lt;h2&gt;Step 4: Synthesize a decision log entry&lt;/h2&gt;

&lt;p&gt;End the session with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summarize our conversation as a short architectural decision record (ADR):
- Context
- Decision
- Alternatives considered
- Consequences
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste that into your PR description or Notion doc. Future-you (and your teammates) will thank you.&lt;/p&gt;
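&lt;p&gt;If you'd rather keep the final record in a fixed shape regardless of how the model phrases it, you can render the four fields yourself. A sketch (the function and field names are my own, not a standard ADR tool):&lt;/p&gt;

```python
def adr_entry(title, context, decision, alternatives, consequences):
    """Render the Step 4 summary as a small markdown ADR block."""
    alts = "\n".join("- " + a for a in alternatives)
    return (
        f"# ADR: {title}\n\n"
        f"## Context\n{context}\n\n"
        f"## Decision\n{decision}\n\n"
        f"## Alternatives considered\n{alts}\n\n"
        f"## Consequences\n{consequences}\n"
    )

# Hypothetical entry, purely illustrative.
doc = adr_entry(
    "Queue retry policy",
    "Workers can crash mid-send.",
    "Use idempotency keys.",
    ["visibility timeouts", "two-phase commit"],
    "Extra key storage per message.",
)
```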




&lt;h2&gt;Why this works&lt;/h2&gt;

&lt;p&gt;The standard "explain this to me" prompt treats AI as a search engine. This workflow treats it as a thinking partner with an agenda: to stress-test your reasoning before you commit to it.&lt;/p&gt;

&lt;p&gt;The difference in output quality is significant — especially for decisions that are hard to reverse.&lt;/p&gt;




&lt;p&gt;If you want more structured prompts for engineering decisions, code reviews, and career conversations, I put together a playbook of them here: &lt;a href="https://gumroad.com/l/nhltvo" rel="noopener noreferrer"&gt;AI Prompt Playbook for Engineers&lt;/a&gt;. Practical, copy-paste ready, no filler.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>career</category>
      <category>engineering</category>
    </item>
  </channel>
</rss>
