Most people blame the AI when results are bad.
The real problem is almost always the prompt.
This visual guide breaks down the 6 steps that
separate prompts that work from prompts that waste
your time.
1. Define Your Goal Clearly
The model will always produce something —
the question is whether it matches what you needed.
❌ "Write something about Docker"
✅ "Write a 200-word intro explaining why developers
use Docker, for an audience that knows Python but
has never used containers"
2. Assign a Role to the AI
Roles narrow the probability space the model draws from.
❌ "Explain this code"
✅ "You are a senior backend developer. Review this
code and explain potential performance issues to a
junior dev"
3. Give Context and Audience
Without context, the model guesses. It usually guesses wrong.
❌ "Write a README"
✅ "Write a README for a REST API built with FastAPI,
targeting developers who are familiar with Python
but new to APIs"
4. Specify Format and Length
Format is part of the output. Don't leave it to chance.
❌ "Summarize this"
✅ "Summarize this in 3 bullet points, each under
20 words, for a non-technical audience"
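Steps 1–4 can be assembled mechanically. Here's a minimal sketch in Python — `build_prompt` is a hypothetical helper for illustration, not part of any LLM SDK:

```python
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble a prompt from the four components above.
    Hypothetical helper -- not a library API."""
    return "\n".join([
        f"You are {role}.",     # step 2: role
        f"Context: {context}",  # step 3: context and audience
        f"Task: {task}",        # step 1: clearly defined goal
        f"Format: {fmt}",       # step 4: format and length
    ])

prompt = build_prompt(
    role="a senior backend developer",
    context="the reader knows Python but has never used containers",
    task="write a 200-word intro explaining why developers use Docker",
    fmt="plain prose, no headings, under 200 words",
)
```

Templating the four parts like this makes it hard to forget one — the ❌ examples above all fail by dropping at least two of them.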
5. Use Sequential Prompts
Multi-part prompts produce multi-part mediocrity.
One task at a time.
❌ "Analyze this, summarize it, find risks,
and format it as a report"
✅ Ask for the analysis first → review it →
then ask for risks → then the report
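The sequential pattern can be scripted as a chain where each step feeds the next. A sketch with a stub model so it runs offline — `ask` and `stub` are illustrative names, and in practice `model` would be a real API client:

```python
def ask(model, prompt: str, history: list) -> str:
    """Send one focused prompt and record the exchange.
    `model` is any callable mapping a prompt to a reply (stubbed here)."""
    reply = model(prompt)
    history.append((prompt, reply))
    return reply

# Stub model so the sketch runs without a network call.
stub = lambda p: f"[reply to: {p[:20]}]"

history = []
analysis = ask(stub, "Analyze this report for key trends.", history)
risks = ask(stub, f"Given this analysis:\n{analysis}\nList the top risks.", history)
report = ask(stub, f"Format as a short report:\n{analysis}\n{risks}", history)
```

The point is the shape, not the stub: you review each intermediate reply before it becomes input to the next step, instead of hoping one mega-prompt gets all four parts right.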
6. Re-anchor in Long Conversations
The model drifts in long chats. Remind it of
the key constraints every few exchanges.
✅ "Remember: responses under 150 words,
technical tone, no corporate language"
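Re-anchoring is easy to automate if you're driving the conversation from code. A sketch, assuming a cadence of every four turns (the cadence is a judgment call, not a vendor rule):

```python
ANCHOR = ("Remember: responses under 150 words, "
          "technical tone, no corporate language.")

def maybe_reanchor(turn: int, prompt: str, every: int = 4) -> str:
    """Prepend the key constraints every `every` turns so the
    model doesn't drift as the conversation grows."""
    if turn % every == 0:
        return f"{ANCHOR}\n\n{prompt}"
    return prompt
```

For example, `maybe_reanchor(4, "Now summarize section 2")` re-states the constraints, while turn 3 passes the prompt through unchanged.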
The gap between bad results and great results
is almost never the tool. It's the instructions.
Same model. Better prompt. Completely different output.