If you’ve ever had an LLM nail a task once… and then completely face-plant the next time with “the same prompt”, you’ve already learned the hard lesson:
- A good prompt isn’t one sentence.
- A good prompt is a mini-spec.
Over the last year, I’ve ended up with a simple structure that’s boring in the best way: it consistently produces usable output for dev work, writing, and analysis.
I call it the 3-layer prompt template:
- Context (what the model needs to know)
- Task (what you actually want)
- Constraints (how the output must behave)
Below is the template, why each part matters, and a few copy/paste examples you can adapt.
Why prompts fail: missing “spec surface area”
Most prompts fail because they leave huge decisions implicit:
- Who is the audience?
- What format should the output be in?
- What’s “good enough” vs. overkill?
- What should be avoided?
The model will make those decisions for you… but not always the way you’d make them.
The solution isn’t “long prompts”. It’s complete prompts.
The template (copy/paste)
CONTEXT
- You are: <role>
- You are helping with: <project/task background>
- Inputs:
- <paste relevant code / notes / data>
TASK
- Do: <exact thing you want>
CONSTRAINTS
- Output format: <bullets/table/JSON/markdown>
- Quality bar: <what “good” looks like>
- Must include: <required sections>
- Must avoid: <anti-goals>
- If info is missing: <ask N questions / make assumptions explicitly>
This looks almost too simple, but it forces clarity where prompts usually get fuzzy.
Layer 1: Context (reduce guesswork)
Context answers “what world are we in?”
Good context is not your life story. It’s:
- The role you want it to take (“senior reviewer”, “pair programmer”, “tech editor”)
- The artifacts it needs to reason about (code, logs, requirements)
- The goal of the larger system (so it can make tradeoffs)
Example: context for a refactor
CONTEXT
- You are: a senior TypeScript engineer.
- You are helping with: refactoring a Node.js service for reliability.
- Inputs:
- Here is the function and its tests:
<PASTE>
This is enough to shift output from generic advice to concrete engineering choices.
Layer 2: Task (one sentence, not a vibe)
Tasks should be unambiguous. “Help me improve this” is a vibe.
Better:
- “Identify 3 likely failure modes and propose code changes.”
- “Rewrite this function to be pure and add unit tests.”
- “Generate a migration plan with rollout steps and rollback.”
Example: crisp task
TASK
- Do: refactor the function to eliminate shared mutable state and make failures explicit.
One sentence is fine. Just make it a real instruction.
Layer 3: Constraints (where the magic happens)
Constraints are the difference between:
- “Here’s a wall of text”
- “Here’s something I can drop into a PR”
Constraints I use constantly:
- Output format (markdown headers, checklist, JSON)
- Quality bar (“production-safe”, “quick draft”, “minimal diff”)
- Must include (“edge cases”, “tradeoffs”, “tests”)
- Must avoid (“don’t change public API”, “no new deps”)
- Missing info policy (“ask 3 questions first”)
Example: constraints for PR-ready output
CONSTRAINTS
- Output format: markdown with sections: Summary, Changes, Code, Tests.
- Quality bar: production-safe, minimal diff.
- Must include: updated code snippet(s) and 3 tests.
- Must avoid: adding new dependencies.
- If info is missing: ask up to 3 questions before proceeding.
Putting it together: a “do it for real” prompt
Here’s a full prompt you can paste into your LLM of choice.
CONTEXT
- You are: a senior Python engineer with strong testing discipline.
- You are helping with: improving a small CLI tool.
- Inputs:
- This script sometimes crashes in CI with "KeyError: token".
- Code:
<PASTE YOUR SCRIPT>
TASK
- Do: diagnose the likely cause, propose a fix, and add tests to prevent regressions.
CONSTRAINTS
- Output format: markdown with sections: Diagnosis, Fix, Patch, Tests.
- Quality bar: production-safe; avoid broad try/except.
- Must include: a minimal reproduction explanation.
- Must avoid: changing CLI flags or behavior for valid inputs.
- If info is missing: ask 2 questions first.
A tiny helper script: keep prompts consistent
If you do this often, you can even template it.
Here’s a minimal Node.js helper that fills a prompt template from a JSON file:
// prompt.js
// Requires Node 18+ running as ESM ("type": "module" in package.json, or rename to prompt.mjs).
import fs from "node:fs";

const specPath = process.argv[2];
if (!specPath) {
  console.error("Usage: node prompt.js <spec.json>");
  process.exit(1);
}

const spec = JSON.parse(fs.readFileSync(specPath, "utf8"));

const prompt = `CONTEXT
- You are: ${spec.role}
- You are helping with: ${spec.background}
- Inputs:
${spec.inputs}

TASK
- Do: ${spec.task}

CONSTRAINTS
- Output format: ${spec.format}
- Quality bar: ${spec.quality}
- Must include: ${spec.mustInclude}
- Must avoid: ${spec.mustAvoid}
- If info is missing: ${spec.missingInfoPolicy}
`;

console.log(prompt);
Usage:
node prompt.js spec.json | pbcopy # macOS
node prompt.js spec.json | xclip -selection clipboard # Linux
Once you have your own “prompt specs” in files, reuse becomes trivial.
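For example, a spec.json for the CLI-debugging prompt earlier in this post might look like the sketch below. The field names are the ones prompt.js reads; the values are just the earlier example transcribed.

```json
{
  "role": "a senior Python engineer with strong testing discipline",
  "background": "improving a small CLI tool",
  "inputs": "- This script sometimes crashes in CI with \"KeyError: token\".\n- Code:\n<PASTE YOUR SCRIPT>",
  "task": "diagnose the likely cause, propose a fix, and add tests to prevent regressions",
  "format": "markdown with sections: Diagnosis, Fix, Patch, Tests",
  "quality": "production-safe; avoid broad try/except",
  "mustInclude": "a minimal reproduction explanation",
  "mustAvoid": "changing CLI flags or behavior for valid inputs",
  "missingInfoPolicy": "ask 2 questions first"
}
```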
A quick checklist before you hit enter
- Did I provide the artifacts the model needs (code, logs, requirements)?
- Is the task something I could hand to a human and expect the same result?
- Do constraints prevent the output from wasting time (format, scope, limits)?
If you nail those three, you’ll get fewer “assistant-y essays” and more usable work.
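If you keep specs as JSON files for the helper script, the checklist can even be a quick lint before you generate a prompt. Here's a minimal sketch: `checkSpec` and `REQUIRED` are hypothetical names, but the field names match what prompt.js reads.

```javascript
// checkSpec.js — flag empty or missing fields before building a prompt.
// Field names match the spec.json shape read by prompt.js above.
const REQUIRED = [
  "role", "background", "inputs",                                       // context
  "task",                                                               // task
  "format", "quality", "mustInclude", "mustAvoid", "missingInfoPolicy", // constraints
];

function missingFields(spec) {
  // A field counts as missing if absent, empty, or only whitespace.
  return REQUIRED.filter((key) => !spec[key] || String(spec[key]).trim() === "");
}

// Example: a draft spec with context and task but no constraints yet.
const draft = {
  role: "senior reviewer",
  background: "improving a small CLI tool",
  inputs: "<PASTE>",
  task: "diagnose the crash",
};
console.log(missingFields(draft));
// → ["format", "quality", "mustInclude", "mustAvoid", "missingInfoPolicy"]
```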
Want my battle-tested prompt patterns?
I keep a Prompt Engineering Cheatsheet with the exact templates I use for code review, debugging, planning, and writing.
- Grab the free sample here: https://getnovapress.gumroad.com/l/prompt-sample
- Full cheatsheet is $9+: https://getnovapress.gumroad.com