Most prompt engineering advice is either too vague ("be specific!") or too academic to actually use at work. I've spent the last year testing patterns across ChatGPT, Claude, and Gemini — and these are the five that consistently make a difference.
No fluff. Just patterns you can steal today.
1. The Persona Pattern
What it does: Gives the model a specific lens to think through.
Most people write prompts like they're talking to a search engine. But LLMs respond dramatically better when you tell them who they are.
❌ Before:
Review my code for issues.
✅ After:
You are a senior backend engineer doing a code review
for a junior developer. Be constructive but thorough.
Flag bugs, performance issues, and style problems.
Explain *why* each issue matters.
Why it works: The persona constrains the model's output distribution. Instead of pulling from "everything it knows about code," it focuses on what a senior engineer would actually flag. You get opinionated, practical feedback instead of generic suggestions.
Pro tip: Combine personas for interesting results. "You're a security engineer AND a UX designer" forces the model to think about trade-offs.
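If you build prompts in code rather than by hand, the persona can live in a reusable template. Here's a minimal sketch in Python — the function name and structure are my own, not from any library:

```python
def persona_prompt(personas, task, audience=None):
    """Build a persona-framed prompt string.

    Combining multiple personas (as in the pro tip above) just means
    joining them with AND so the model weighs both perspectives.
    """
    who = " AND ".join(personas)
    lines = [f"You are a {who}."]
    if audience:
        lines.append(f"You are addressing {audience}.")
    lines.append(task)
    return "\n".join(lines)

prompt = persona_prompt(
    ["senior backend engineer"],
    "Review my code. Be constructive but thorough. Flag bugs, "
    "performance issues, and style problems, and explain why "
    "each issue matters.",
    audience="a junior developer",
)
```

The same helper handles the combined-persona trick: `persona_prompt(["security engineer", "UX designer"], task)` produces "You are a security engineer AND UX designer." as the opening line.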
2. The Chain-of-Thought Pattern
What it does: Forces the model to show its reasoning before jumping to an answer.
LLMs are notoriously bad at multi-step reasoning when you ask for a direct answer. But if you make them think out loud, accuracy goes way up.
❌ Before:
Should we use PostgreSQL or MongoDB for our app?
✅ After:
We're building a SaaS app with complex user relationships,
reporting needs, and ~50k daily active users.
Before recommending a database, think through:
1. Our data model (relational vs document)
2. Query patterns we'll likely need
3. Scaling considerations at our size
4. Team experience (we know SQL well)
Then make your recommendation with reasoning.
Why it works: This is based on Google's chain-of-thought prompting research (Wei et al., 2022). When you ask the model to reason step by step, the intermediate reasoning becomes part of the context it conditions on — each step grounds the next, so mistakes surface along the way instead of getting baked into the conclusion. Think of it as the difference between asking someone "what's the answer?" vs "walk me through your thinking."
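The structure above — context first, explicit reasoning steps, then the ask — is easy to template. A hedged sketch (names are illustrative, not from any framework):

```python
def cot_prompt(context, steps, ask):
    """Assemble a chain-of-thought prompt: background context, the
    numbered reasoning steps the model must walk through, then the
    final request — so reasoning comes before the conclusion."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"{context}\n\n"
        f"Before answering, think through:\n{numbered}\n\n"
        f"Then {ask}"
    )

prompt = cot_prompt(
    "We're building a SaaS app with complex user relationships, "
    "reporting needs, and ~50k daily active users.",
    [
        "Our data model (relational vs document)",
        "Query patterns we'll likely need",
        "Scaling considerations at our size",
        "Team experience (we know SQL well)",
    ],
    "make your recommendation with reasoning.",
)
```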
3. The Few-Shot Pattern
What it does: Shows the model exactly what you want by giving examples.
This is the single most underused pattern. Instead of describing your desired output, just show it.
❌ Before:
Write commit messages for my changes.
Make them concise and descriptive.
✅ After:
Write a commit message for my changes. Follow this style:
Example 1:
Change: Added input validation to signup form
Commit: feat(auth): validate email and password on signup
Example 2:
Change: Fixed crash when user has no profile picture
Commit: fix(profile): handle null avatar gracefully
Example 3:
Change: Moved API calls to separate service layer
Commit: refactor(api): extract service layer from controllers
Now write one for:
Change: Updated the search to include archived posts
and added a toggle to filter them
Why it works: Few-shot examples do something that instructions alone can't — they encode style, format, and judgment simultaneously. The model picks up on the Conventional Commits format, the brevity, and the scope tags all from your examples. No lengthy explanation needed.
Pro tip: 3 examples is usually the sweet spot. Fewer than that and the model might not lock onto the pattern; more than 5 and you're wasting tokens.
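Few-shot prompts are also the easiest pattern to generate from data — keep your examples in a list of (input, output) pairs and assemble the prompt. A minimal sketch (my own helper, not a library function):

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt from (change, commit) example pairs,
    mirroring the commit-message format shown above. Per the rule of
    thumb, 3-5 examples is usually plenty."""
    parts = [instruction]
    for i, (change, commit) in enumerate(examples, 1):
        parts.append(f"Example {i}:\nChange: {change}\nCommit: {commit}")
    parts.append(f"Now write one for:\nChange: {query}")
    return "\n\n".join(parts)

examples = [
    ("Added input validation to signup form",
     "feat(auth): validate email and password on signup"),
    ("Fixed crash when user has no profile picture",
     "fix(profile): handle null avatar gracefully"),
    ("Moved API calls to separate service layer",
     "refactor(api): extract service layer from controllers"),
]
prompt = few_shot_prompt(
    "Write a commit message for my changes. Follow this style:",
    examples,
    "Updated the search to include archived posts",
)
```

Swapping in a different example list gives you a few-shot prompt for any other style — support replies, test names, changelog entries — without touching the instruction text.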
4. The Constraint Pattern
What it does: Sets clear boundaries that shape the output.
LLMs love to ramble. Constraints are your editing tool — they force the model to prioritize.
❌ Before:
Explain Kubernetes to me.
✅ After:
Explain Kubernetes to a developer who knows Docker
but has never used orchestration.
Constraints:
- Use exactly one analogy
- Max 150 words
- No jargon beyond what Docker users already know
- End with the ONE command they should run first
Why it works: Without constraints, the model optimizes for "completeness" — which usually means a 1000-word essay covering everything from history to advanced networking. Constraints force it to optimize for usefulness instead. It's like telling a designer "you have 400x300 pixels" — limitations breed creativity.
My favorite constraints to use:
- Word/sentence limits
- Audience level ("explain to a 5-year-old" vs "explain to a staff engineer")
- Format requirements ("use a table," "bullet points only")
- Exclusions ("don't mention X," "no code examples")
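Constraints compose well as a reusable block you bolt onto any task. A small sketch of that idea (helper name is my own):

```python
def constrained_prompt(task, constraints):
    """Append an explicit, bulleted constraint block to a task
    description, so boundaries are set before generation starts."""
    bullets = "\n".join(f"- {c}" for c in constraints)
    return f"{task}\n\nConstraints:\n{bullets}"

prompt = constrained_prompt(
    "Explain Kubernetes to a developer who knows Docker "
    "but has never used orchestration.",
    [
        "Use exactly one analogy",
        "Max 150 words",
        "No jargon beyond what Docker users already know",
        "End with the ONE command they should run first",
    ],
)
```

Keeping constraint lists as named constants (say, a hypothetical `BRIEF = ["Max 150 words", "Bullet points only"]`) lets you reuse the same boundaries across many tasks.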
5. The Iterative Refinement Pattern
What it does: Turns one prompt into a conversation that progressively improves output.
Here's the thing most people get wrong: they try to write the perfect prompt on the first attempt. The best results come from treating it as a dialogue.
❌ Before (trying to get it perfect in one shot):
Write a complete, production-ready API endpoint for
user registration with validation, error handling,
rate limiting, logging, tests, and documentation.
✅ After (iterative approach):
// Prompt 1:
Write a basic Express.js POST /register endpoint.
Just the happy path — validate email and password,
create user, return 201.
// Prompt 2 (after reviewing output):
Good. Now add error handling for: duplicate email,
invalid input, and database failures. Keep it clean.
// Prompt 3:
Now add rate limiting. Show me the middleware
approach — I want to reuse it on other routes.
// Prompt 4:
Write 5 tests for this endpoint covering the
main success and failure paths.
Why it works: Each step builds on reviewed, validated output. You catch issues early instead of debugging a 200-line monolith. The model also performs better on focused tasks — asking for one thing at a time produces higher quality than asking for everything at once.
The meta-pattern: After getting output you like, ask the model to critique itself: "What are 3 things wrong with this code?" You'll be surprised how good it is at finding its own mistakes when explicitly asked.
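In code, iterative refinement is just a growing message list: each follow-up prompt is sent along with the prior replies, so the model builds on reviewed output. A sketch against a hypothetical `send` function — wrap whichever chat API you use; any role/content message format works the same way:

```python
def refine(send, steps):
    """Run a sequence of prompts as one conversation.

    `send` is assumed to take the full message history and return the
    assistant's reply (an adapter around your chat API of choice).
    Each reply is appended to the history so later prompts — including
    a final self-critique — build on what came before.
    """
    messages, replies = [], []
    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = send(messages)
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

# Toy stand-in for a real model call, just to show the flow:
def echo_model(messages):
    return f"ok ({len(messages)} messages so far)"

out = refine(echo_model, [
    "Write a basic Express.js POST /register endpoint.",
    "Good. Now add error handling. Keep it clean.",
    "What are 3 things wrong with this code?",  # the meta-pattern
])
```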
The Pattern Behind the Patterns
If you look at all five, there's a common thread: they all reduce ambiguity.
LLMs are probability machines. The more precisely you define what you want, the narrower the probability space, and the better the output. Every pattern above is just a different way of saying "here's exactly what I mean."
A few quick rules of thumb I keep in my head:
- 🎯 Be specific about format — "respond in a markdown table" beats "organize the information"
- 🔄 Iterate, don't restart — build on what works instead of rewriting from scratch
- 📋 Show, don't tell — one example is worth 100 words of instructions
- 🚫 Constrain early — set boundaries before the model starts generating
Go Try One
Don't try to use all five at once. Pick the one that fits your next task and see what happens. Personally, I'd start with the Few-Shot Pattern — it's the one that made the biggest difference in my daily workflow.
If you want to go deeper, I compiled these and 30+ more patterns into a Prompt Engineering Cheatsheet — a quick-reference PDF you can keep open while working.
Happy prompting. ✌️