Juno Threadborne

The Duck That Talks Back

The best rubber duck debugging isn’t done with a duck.

It’s a conversation with another person where you start explaining what you built, wander into unnecessary detail, contradict yourself halfway through a sentence, and then stop—not because they interrupted you, but because you suddenly heard the problem out loud.

Rubber duck debugging works because it forces legibility. You’re translating intention into explanation, and that translation is where the gaps reveal themselves.

Now imagine that silent listener responds.

Not with suggestions. Not with fixes. Just with an explanation of what they think your code is doing.

Why just an explanation? Because the moment it diverges from your intent, something important has happened. And it’s very tempting, in that moment, to blame the duck.


Now perform that same experiment, but with an AI.

Ask it to explain.

Paste in a function. A module. A service boundary you feel good about. Don’t give it any extra context. Then ask a simple question: “What does this code do?”
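
To make that concrete, here’s the sort of self-contained snippet you might paste in. Everything in it is invented for illustration: the names, the types, the 500 threshold. The prompt is nothing more than the question itself.

```typescript
// A hypothetical snippet to hand over as-is: no doc comment, no surrounding
// context, just the code. All names and numbers here are made up.
interface Order {
  total: number;
  createdAt: Date;
  flagged?: boolean;
}

function selectOrders(orders: Order[], cutoff: Date): Order[] {
  return orders.filter(
    (order) =>
      order.total > 500 || (order.flagged === true && order.createdAt < cutoff)
  );
}

// Then the prompt, verbatim: "What does this code do?"
```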

When the explanation is right, something subtle but meaningful is happening. The model isn’t “understanding” your code in a human sense—it’s reconstructing intent from structure alone. Naming, boundaries, data flow, sequencing: these things are doing real work. Your system contains enough signal that an external reasoner can infer the story.

That’s a high bar, and it’s worth noticing when you clear it.

But the real value shows up when the explanation is wrong.


This is the part that’s hard to hear. I say it as someone who’s been wrong more times than I can count—and who still occasionally wants to argue with the duck.

A wrong explanation often isn’t wildly incorrect. It’s plausible. The model tells a story that could be true, that sounds reasonable, that almost matches what you meant—except for one or two details that make your stomach tighten.

That mismatch is where the insight is.

If multiple interpretations fit your code, then your code doesn’t uniquely encode intent. It relies on context that isn’t actually there anymore. Maybe it lives in your head. Maybe it lives in Slack history. Maybe it lives in the fact that “we all know what this means.”

But it doesn’t live in the artifact.
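
Here’s a sketch of what that looks like, with the function and both readings invented for illustration. What the code does is fully specified; what it’s for is not.

```typescript
type User = { name: string; lastSeen: number }; // lastSeen: epoch milliseconds

function pruneUsers(users: User[], limit: number): User[] {
  const now = Date.now();
  return users.filter((user) => now - user.lastSeen <= limit);
}

// One plausible story: "evicts stale entries from a session cache."
// Another: "builds the 'recently active' list for the UI."
// Both explanations fit the artifact equally well; only one matches intent,
// and nothing in the code says which.
```

A reader handed only this function has to guess, and a plausible guess is exactly the kind of wrong that’s worth noticing.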

At this point, the instinct is to say the model is wrong. Sometimes it is. Models hallucinate. They miss nuance. They overgeneralize.

But here’s the uncomfortable part:

If the explanation is wrong, your code may not communicate what you assume it does.

That ambiguity is technical debt.


Humans are generous readers of code. We fill in gaps with memory and goodwill. We remember why something is weird. We know which parts are legacy, which parts are temporary, which parts “just look bad but are actually fine.”

Machines don’t do that.

A variable like filteredAttributesLevel2 may be perfectly clear to the people who worked on the component three layers up the stack where it’s set. To a new team member—or an AI—it means nothing without additional context. And how much context is required says a lot about how much meaning your code actually encodes.
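
A sketch of that gap, where everything except the variable name is invented, including what “level 2” is taken to mean. That’s the point: the reader has to be told, because the name doesn’t say.

```typescript
type Attribute = { name: string; level: number };

const attributes: Attribute[] = [
  { name: "color", level: 1 },
  { name: "material", level: 2 },
];

// Clear only if you already know what the levels mean upstream:
const filteredAttributesLevel2 = attributes.filter((a) => a.level === 2);

// One way to move that knowledge into the artifact itself
// (assuming, purely for this sketch, that level 2 marks advanced-search facets):
const advancedSearchAttributes = attributes.filter((a) => a.level === 2);
```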

That’s why machines make worse teammates—but better mirrors.


There’s an important parallel here that’s worth sitting with.

When a junior developer misunderstands your code, you might chalk it up to inexperience. When a future teammate struggles, you might blame documentation. When a system becomes fragile over time, you might call it entropy.

But when an AI trained on the aggregate habits of the industry can’t tell what your code is doing, you’re forced to confront a harder possibility: the system itself might not be as legible as you think.

Rubber duck debugging always worked because explanation is where truth leaks. The talking duck just externalizes that leak.


You don’t need to change how you work to see this. You just need to try the experiment.

Take a piece of code you’re confident in.

Ask the duck to explain it.

Pay attention to where it hedges—or confidently gets it wrong.

The duck isn’t there to be right.

It’s there to show you what survives explanation.

Top comments (1)

svhl

“What does this do?”

This approach is useful for learning just about anything 😁