One of the most important things I’ve learned while working with large language models (like ChatGPT or Gemini) is never to assume they’ve truly understood me.
Yes, they will always say they understood.
But the truth? Not always. Sometimes they “invent” their own interpretation — and when you get to implementation, you realize the result is completely off track.
🎯 So here’s a small tip that completely transformed the quality of my results:
When I give a complex instruction, I don’t ask “Did you understand?” — instead I ask:
“Do you have any questions before we move on to implementation?”
“Was anything unclear? Don’t make assumptions; ask me.”
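If you work through the API rather than the chat UI, you can bake the same instruction into the conversation itself. Here’s a minimal sketch assuming the OpenAI Python SDK; the model name and the example task are just placeholders, so adapt them to your setup:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai).
# The model name and the task below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = "Build a sync job that mirrors our Postgres orders table into BigQuery."

messages = [
    {
        "role": "system",
        "content": (
            "Before implementing anything, ask me clarifying questions "
            "about requirements and edge cases. Do not make assumptions."
        ),
    },
    {
        "role": "user",
        "content": (
            f"{task}\n\n"
            "Do you have any questions before we move on to implementation? "
            "Was anything unclear? Don't make assumptions; ask me."
        ),
    },
]

# First round: the model should respond with questions, not code.
response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)

# Answer its questions in a follow-up user message, then ask for the
# actual implementation in the next round.
```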
And surprisingly — most of the time, the model actually asks really good questions.
Questions that force me to clarify my own thinking, work through edge cases, and sometimes even realize what I haven’t fully decided yet.
📈 Since I started doing this, I’ve been getting outputs that are much more accurate, thoughtful, and clear.
Top comments (2)
So true.
Artificial intelligence gives us the illusion that it understands us, but in reality it’s very limited and doesn’t know everything.
Great post!
Thanks, Shira!
Glad you liked the post!