We've all experienced that frustrating moment when an AI starts answering your question, then suddenly says "I can't assist with that."
Why this feels worse than a simple "I don't know":
- It tricks your brain: you start mentally following the solution, then have to abruptly stop that thought process.
- It wastes time: reading half an answer takes longer than getting an immediate "I don't know".
- It feels dishonest: like when someone pretends to help but is actually avoiding the work.
Common situations:
- Starts explaining a technical concept in detail... then refuses to finish
- Begins walking through a solution step-by-step... then quits at the most critical point
- Lists multiple setup steps... but won't provide the actual resolution
What would help:
- Early warnings when confidence is low
- Clear boundaries about what can/can't be answered
- Partial solutions with explanations of limitations
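As a rough illustration of these three ideas, here's a minimal Python sketch of a "decide up front" flow: check scope and confidence *before* generating anything, so the user never reads half a solution that ends in a refusal. Every helper here (`in_scope`, `confidence`, the canned reply strings) is a hypothetical stand-in, not any real assistant's API; a real system would swap in its own policy and confidence checks.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    caveat: str | None = None  # stated limitation, shown up front with the answer

def in_scope(question: str) -> bool:
    # Hypothetical policy check: a stand-in for whatever topic
    # boundaries a real assistant enforces.
    return "forbidden" not in question.lower()

def confidence(question: str) -> float:
    # Hypothetical confidence score in [0, 1]; a real system might
    # derive this from model logprobs or a separate verifier.
    return 0.5 if "?" in question else 0.9

def respond(question: str) -> Reply:
    """Refuse, hedge, or answer BEFORE generating, instead of
    quitting halfway through a response."""
    if not in_scope(question):
        # Clear boundary, stated immediately.
        return Reply(text="I can't help with that topic.")
    score = confidence(question)
    if score < 0.3:
        # Honest early exit beats an abandoned half-answer.
        return Reply(text="I don't know.")
    if score < 0.7:
        # Partial solution with its limitation spelled out.
        return Reply(
            text="Here is a partial answer...",
            caveat="Low confidence: parts of this may be wrong or incomplete.",
        )
    return Reply(text="Here is a full answer...")

if __name__ == "__main__":
    print(respond("How do I configure a forbidden thing?"))
    print(respond("What is a mutex?"))
```

The point of the sketch isn't the thresholds (they're made up), but the ordering: the gate runs once, up front, so whatever the user receives is complete on its own terms.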
Let's discuss:
What's the most important improvement AI systems could make to handle these situations better?