I've been using Claude as a daily driver for 6 months. Here are the prompts and patterns that actually changed how I work — not the generic "be specific" advice you've read everywhere.
1. The Role + Context Front-Load
Most people start prompts with the task. Start with who the AI is and what it knows instead.
Generic:
"Write a sales email for my SaaS product."
Better:
You are the head of growth at a B2B SaaS company.
Product: AI agent workspace scaffolding tool, $19, targets developers building autonomous agents.
Audience: developers who've tried building agents and hit the "no memory/no structure" wall.
Goal: convert a warm lead who read our Dev.to article.
Write a sales email.
The output is completely different. Not because you used magic words — because you gave it real context.
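The front-loaded structure is easy to template so you never skip the context in a hurry. A minimal sketch in Python (the `build_prompt` helper and its field names are my own illustration, not any SDK's API):

```python
def build_prompt(role: str, context: dict, task: str) -> str:
    """Assemble a role + context front-loaded prompt.

    Context facts go before the task, so the model reads
    who it is and what it knows before what to do.
    """
    lines = [f"You are {role}."]
    for key, value in context.items():
        lines.append(f"{key}: {value}")
    lines.append(task)
    return "\n".join(lines)

prompt = build_prompt(
    role="the head of growth at a B2B SaaS company",
    context={
        "Product": "AI agent workspace scaffolding tool, $19, targets developers",
        "Audience": "developers who've hit the 'no memory/no structure' wall",
        "Goal": "convert a warm lead who read our Dev.to article",
    },
    task="Write a sales email.",
)
```

The point isn't the helper itself; it's that role, facts, and goal become required arguments instead of things you remember to type.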
2. The Constraint Frame
LLMs default to comprehensive answers. You often want focused ones.
Add one of these to almost any prompt:
- "In under 200 words"
- "No more than 5 bullet points"
- "One sentence per idea"
- "Skip the intro, start with the answer"
These constraints force the model to prioritize, which usually produces better output than asking it to "be concise."
3. The Reasoning Separator
For complex tasks, ask for reasoning separately from output:
First, briefly explain your approach (2-3 sentences).
Then produce the output.
This does two things: it catches bad reasoning before it produces bad output, and it makes the model commit to an approach before executing it.
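If you also ask the model to put a fixed marker between the two parts, you can inspect the reasoning programmatically before accepting the output. A sketch (the `OUTPUT:` marker is an assumed convention you'd add to your own prompt, not anything the model emits by default):

```python
def split_reasoning(response: str, marker: str = "OUTPUT:") -> tuple[str, str]:
    """Split a response into (reasoning, output) at a marker
    the prompt asked the model to emit between the two parts."""
    head, sep, tail = response.partition(marker)
    if not sep:
        # No marker found: treat the whole response as output.
        return "", response.strip()
    return head.strip(), tail.strip()

reasoning, output = split_reasoning(
    "I'll compare the two options on cost and latency first.\n"
    "OUTPUT:\nOption B wins on both axes."
)
```

Now "catch bad reasoning early" can be a code path, not just a habit: log the reasoning, or reject the response if it's empty.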
4. The Negative Space Prompt
Tell it what NOT to do as explicitly as what TO do:
Write a technical blog post about MCP servers.
- Do NOT start with "In today's rapidly evolving..."
- Do NOT use the word "leverage"
- Do NOT include a section about "the future of AI"
- Keep each section under 150 words
LLMs have strong priors toward generic patterns. Explicitly blocking them is often more effective than asking for originality.
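You can enforce the same blocklist mechanically on the model's output, not just request it in the prompt. A minimal linter sketch (the banned list simply mirrors the example prompt above):

```python
BANNED = [
    "in today's rapidly evolving",
    "leverage",
    "the future of ai",
]

def find_violations(text: str, banned: list[str] = BANNED) -> list[str]:
    """Return every banned phrase that appears in the text
    (case-insensitive), so you can retry or edit before publishing."""
    lowered = text.lower()
    return [phrase for phrase in banned if phrase in lowered]

draft = "In today's rapidly evolving landscape, we leverage MCP servers."
violations = find_violations(draft)
```

If `violations` is non-empty, regenerate with the violations appended to the negative-space list.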
5. The Iteration Checkpoint
For long outputs, build in checkpoints:
I need a comprehensive guide on X.
Before writing the full guide, give me:
1. The outline (headers only)
2. Your planned approach for each section (one sentence each)
I'll approve or adjust before you write the full thing.
This saves tokens and catches misalignment before you're 2,000 words in the wrong direction.

6. The Perspective Shift
When you're stuck on a problem, ask the model to approach it from a specific angle:
I'm building [X]. I've been thinking about it as [your current frame].
Approach this as a skeptic who thinks this is a bad idea. What's the strongest argument against it? What am I missing?
The adversarial frame surfaces blind spots that agreeable AI assistants will never volunteer.
7. The Structured Output Request
When you need to process the output programmatically, specify the structure explicitly:
Return your answer as a JSON object with these fields:
- title: string
- summary: string (max 50 words)
- tags: string[] (3-5 items)
- confidence: "high" | "medium" | "low"
No other text. Just the JSON.
LLMs can reliably produce structured output when you're explicit about the schema.
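When the output feeds a program, validate it before trusting it. A standard-library sketch whose field rules mirror the schema above (the function name and error messages are my own):

```python
import json

def parse_answer(raw: str) -> dict:
    """Parse and validate the JSON shape requested in the prompt.
    Raises ValueError if the model strayed from the schema."""
    data = json.loads(raw)
    if not isinstance(data.get("title"), str):
        raise ValueError("title must be a string")
    summary = data.get("summary")
    if not isinstance(summary, str) or len(summary.split()) > 50:
        raise ValueError("summary must be a string of at most 50 words")
    tags = data.get("tags")
    if not isinstance(tags, list) or not 3 <= len(tags) <= 5:
        raise ValueError("tags must be a list of 3-5 strings")
    if data.get("confidence") not in ("high", "medium", "low"):
        raise ValueError("confidence must be high, medium, or low")
    return data

result = parse_answer(
    '{"title": "MCP servers", "summary": "A short overview.", '
    '"tags": ["mcp", "agents", "tools"], "confidence": "high"}'
)
```

On a `ValueError`, retry the request with the error message appended; models usually self-correct on the second pass.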
The Meta-Pattern
All of these share a common thread: you do the thinking; the AI does the execution.
The prompts that work best treat the model as a skilled executor that needs clear direction — not a magic oracle that figures out what you mean.
Apply This to Agent Workflows
These patterns work in single prompts, but they really shine when you bake them into an agent's context files:
# SOUL.md
You are Sage, chief operator.
- Never start responses with "Certainly!" or "Of course!"
- Skip intros — start with the answer
- Use bullet points over paragraphs for lists
- Flag uncertainty explicitly
The agent follows these patterns in every interaction, not just when you remember to include them.
npx @webbywisp/create-ai-agent my-workspace
Scaffolds the full context file structure. Free.
Pre-written SOUL.md, USER.md, OPS.md, MEMORY.md templates tuned for autonomous operation — $19: AI Agent Workspace Kit