Why dialogue beats one-shot prompting
Introduction
In my previous article, I introduced a principle-based approach to AI collaboration: five phases (Request, Consult, Plan, Execute, Verify) and three guiding principles. That article focused on what to do.
This article focuses on how to communicate.
What Bothered Me About Prompt Engineering Guides
I've read many prompt engineering guides. They tell you how to write the perfect prompt: use clear structure, include examples, specify the format, add context in the right sections.
So I tried. I followed the templates. I filled in every section. I added context I thought I was supposed to add.
The results got worse, not better.
I was writing things I didn't actually need to say. Padding my prompts to match an expected structure. The more "correct" my prompts looked, the more confused the responses became.
Then I realized the problem. These guides assume something that isn't true for me:
You should be able to write the perfect prompt on the first try.
I can't. Maybe you can't either.
Part 1: Two Styles
Looking back at my own conversations with AI, I noticed two patterns.
When I Tried to Be Clever
- I spent too long crafting the "right" question
- I accepted answers that sounded good, even when something felt off
- I rushed toward the final result
This rarely worked. The output looked polished but missed what I actually wanted.
When I Gave Up on Being Clever
- I started messy, said "I'm not sure how to ask this"
- I pushed back when answers didn't make sense
- I moved slowly, checking alignment at each step
This worked better. Much better.
The second approach felt inefficient at first. But I started noticing something:
Each time I said "wait, that's not what I meant," the next response got closer to what I needed. Each correction added something I couldn't have written upfront.
Starting messy wasn't slower. It was deeper.
Part 2: What Helped Me
Here are things I started doing that made a difference. Not rules—just patterns that worked for me.
Talk Like Yourself
I used to try to write "proper" prompts. Formal. Structured. AI-ish.
It didn't help. When I started using my own words, my own rhythm, things got clearer. The AI adapted to me. I didn't need to adapt to it.
Make "I Don't Understand" Safe
This was the biggest shift.
I used to get frustrated when the AI gave vague or wrong answers. Then I tried something different: I explicitly said, "If anything is unclear, tell me instead of guessing."
It changed everything. When the AI admitted confusion, I could fix it. When it guessed, I got garbage wrapped in confidence.
Now I see "I don't understand" as a good sign. It means we can actually get somewhere.
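If you talk to a model through an API rather than a chat window, the same idea can be stated up front in the system message. Here's a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; the wording of the instruction is the part that matters, not the specific API.

```python
# A minimal sketch, not the article's own workflow: it assumes the OpenAI
# Python SDK (pip install openai), an API key in OPENAI_API_KEY, and a
# placeholder model name. The point is the explicit instruction up front.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # The instruction that made "I don't understand" safe:
        {
            "role": "system",
            "content": "If anything in my request is unclear or underspecified, "
                       "say what's missing instead of guessing.",
        },
        {"role": "user", "content": "Help me tighten the outline of this article."},
    ],
)
print(response.choices[0].message.content)
```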
Ask "Why?" and "How Sure?"
I started asking for the reasoning behind answers. And I asked how confident the AI was—solid, uncertain, or just guessing.
This helped me catch problems early. When the reasoning was weak, I knew not to build on it.
Stop When Something Feels Off
I used to push forward even when my gut said something was wrong. "Let's just see where this goes."
It always made things worse. Now I stop. I say "something feels off here" and we figure out what it is before moving on.
Ask for Comparisons
"How does A differ from B?" became one of my most-used phrases.
When I got an answer I wasn't sure about, I asked for an alternative. Then I compared them. This made decisions much clearer than evaluating a single option.
Break It Down
I stopped asking for the final answer in one shot.
Instead, I built toward it. Step by step. Each step was a chance to catch misalignment before it compounded.
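The same step-by-step pattern works outside a chat window too. Below is a minimal sketch of what "building toward it" looks like as a loop over one shared conversation. It assumes the OpenAI Python SDK, a placeholder model name, and placeholder step prompts, so treat it as an illustration of the structure rather than a recipe.

```python
# A minimal sketch of the step-by-step pattern, assuming the OpenAI Python SDK,
# an API key in OPENAI_API_KEY, and a placeholder model name. The steps are
# placeholders too; the structure (one shared history, one step at a time) is
# the point.
from openai import OpenAI

client = OpenAI()

# One shared history: every answer and every correction stays in context.
messages = [
    {"role": "system", "content": "If anything is unclear, say so instead of guessing."},
]

steps = [
    "Restate the problem in your own words so I can check we agree.",
    "Outline an approach. Mark what you're confident about and what you're guessing.",
    "Apply the approach to the first part only, then stop.",
]

for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    print(answer, "\n")
    messages.append({"role": "assistant", "content": answer})
    # In practice I read each answer here and add a "Wait, that's not what I
    # meant" turn before moving on. Catching misalignment at this point is
    # what keeps it from compounding.
```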
Use Small Signals
I started using short phrases to keep us on the same page:
- "OK" — I understand, let's continue
- "Wait" — Something's off, let's pause
- "Let's reset" — Forget what we discussed, start fresh from here
These small signals prevented a lot of confusion. Without them, I kept finding that the AI was building on assumptions I never agreed to.
Say When You're Unsure
I used to hide my uncertainty. I thought I needed to sound decisive.
But when I started saying "I'm not sure about this approach," something interesting happened. The AI offered alternatives. My uncertainty opened doors instead of closing them.
Part 3: Where This Helped
Writing
I let the AI write a first draft, then kept talking until the tone felt right.
When it didn't, I asked: "How would this sound in a more casual tone?" Comparing versions helped me find what I actually wanted.
Code Review
Before accepting suggestions, I asked for the reasoning. When it felt shaky, I said so. "Wait—I don't follow that logic. Can you walk through it differently?"
Learning Something New
When the AI said it wasn't sure, I narrowed the scope. "Let's focus on just this part first." Small clarifications prevented big misunderstandings.
Closing
I can't write the perfect prompt on the first try.
For a long time, I thought that was my limitation. Now I think it's just how this works.
The back-and-forth isn't inefficiency. It's the process. Each correction, each "that's not what I meant," builds understanding that I couldn't have written upfront.
I stopped trying to be clever. I started being ordinary.
It works better.
This isn't a technique for better prompts. It's a way of structuring communication so that decisions stay human and execution stays fast.