Most AI prompts fail for the same five reasons: no identity, no constraints, no examples, no structure, and vague asks.
After running 10,000+ prompts through Claude, ChatGPT, and Gemini, I built a framework that fixes all five.
The ICCSSE Framework
I — Identity: Tell the AI who to be. "You are a senior tax attorney" outperforms a paragraph of behavioral rules.
C — Context: Give it the background it needs. What's the situation? Who's the audience? What's been tried?
C — Constraints: Set limits. "Under 300 words, no preamble, no filler" cut my token costs by 60%.
S — Steps: Break the task into ordered steps. The AI thinks sequentially instead of guessing.
S — Specifics: Be precise. "Help with my resume" vs "rewrite 3 bullets to emphasize revenue impact at a Series B SaaS company" — same cost, 10x the output.
E — Examples: Show what good looks like. One example replaces 1,000 words of description.
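The six components above can be sketched as a simple template builder. This is a hypothetical helper of my own naming, not part of any library; it just shows how the pieces slot together, and how leaving a component empty (per the next section) naturally drops it from the prompt.

```python
def build_prompt(identity="", context="", constraints="",
                 steps=None, specifics="", examples=""):
    """Assemble an ICCSSE-style prompt, skipping any empty components."""
    sections = []
    if identity:
        sections.append(identity)                      # I — who to be
    if context:
        sections.append("Context: " + context)         # C — background
    if constraints:
        sections.append("Constraints: " + constraints) # C — limits
    if steps:
        numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
        sections.append("Steps:\n" + numbered)         # S — ordered steps
    if specifics:
        sections.append("Task: " + specifics)          # S — precise ask
    if examples:
        sections.append("Example of good output:\n" + examples)  # E
    return "\n\n".join(sections)

# A "simple task" prompt using only three components:
prompt = build_prompt(
    identity="You are a senior tax attorney.",
    constraints="Under 300 words, no preamble, no filler.",
    specifics=("Rewrite 3 resume bullets to emphasize revenue "
               "impact at a Series B SaaS company."),
)
```

Because empty components are skipped, the same function covers both the 2-3 component quick prompts and the full six-part version.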
You Don't Need All Six Every Time
Simple tasks (quick questions, short edits): use 2-3 components.
Complex tasks (reports, analysis, code review): use all 6.
The Results
After applying this framework consistently:
- Revision rounds dropped from 3-4 to 1-2
- First-draft usability jumped from ~30% to ~85%
- Token costs dropped ~60%
Writing the prompt takes longer. Finishing the task takes less time overall.
Try It
I built a free prompt grader at hundredtabs.com/tools/prompt-grader that scores your prompts against these criteria. Paste any prompt, get a score out of 100 with specific fixes.
No signup. No API key. Runs in your browser.
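The grader's actual scoring logic isn't published; as a rough illustration only, a rubric-based check like it could be sketched with naive keyword heuristics, one per ICCSSE criterion. Every check below is a made-up approximation, not the tool's real rules.

```python
# Hypothetical rubric: one crude heuristic per ICCSSE criterion.
CRITERIA = {
    "identity":    lambda p: p.lower().startswith("you are"),
    "context":     lambda p: any(w in p.lower()
                                 for w in ("audience", "background", "situation")),
    "constraints": lambda p: any(w in p.lower()
                                 for w in ("under", "no more than", "limit")),
    "steps":       lambda p: "1." in p and "2." in p,
    "specifics":   lambda p: len(p.split()) > 25,  # crude proxy for detail
    "examples":    lambda p: "example" in p.lower(),
}

def grade(prompt: str) -> int:
    """Score a prompt out of 100 by counting which criteria it satisfies."""
    hits = sum(check(prompt) for check in CRITERIA.values())
    return round(100 * hits / len(CRITERIA))
```

A real grader would weight the criteria and inspect structure rather than keywords, but the shape — score each component, sum, normalize to 100 — is the same idea.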