You know that moment when you’re basically done… but your assistant isn’t?
You asked for “a refactor”; it returned a new architecture. You asked for “tests”; it kept going until it had invented an entire CI pipeline. Or the opposite: you asked it to “fix the bug”, and it gave you three vague suggestions and stopped right before the part you actually needed.
That’s not (only) a capability problem. It’s usually a definition of done problem.
I call the fix the Exit Criteria Prompt: a short add‑on to your request that forces the work to converge. It makes the assistant:
- state what “done” means in observable terms
- self-check against that definition
- stop when it passes (instead of wandering)
Below is the pattern, why it works, and a few templates you can copy.
## The problem: “done” is ambiguous
In human teams, we survive ambiguity because we can ask follow-ups, read context, and negotiate scope in real time.
In assistant workflows, ambiguity often turns into one of two failure modes:
- Infinite helpfulness (scope creep): it keeps adding “nice to have” improvements.
- Premature stopping (under-delivery): it gives partial work without verifying anything.
Both happen when the model can’t reliably tell when the task is complete.
The cure is simple: define an exit ramp.
## The Exit Criteria Prompt (core pattern)

Append this to the end of your request:

```
Exit criteria:
- [ ] <observable condition 1>
- [ ] <observable condition 2>
- [ ] <observable condition 3>

Before you finish:
1) restate the exit criteria in your own words
2) show a checklist with PASS/FAIL for each item
3) if any FAIL, continue until all PASS
4) once all PASS, stop and output the final artifact
```
Two important details:
- The criteria must be observable (something you could check).
- The criteria should be few (3–7 items). Too many and you’ll prompt a bureaucracy simulator.
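If you build prompts programmatically, the pattern is easy to template. Here is a minimal sketch; the function name and the 3–7 guard are mine, not part of any library:

```python
def with_exit_criteria(request: str, criteria: list[str]) -> str:
    """Append an Exit Criteria block to a plain-text request.

    Mirrors the template above; the length guard enforces the
    "few criteria" rule (3-7 items).
    """
    if not 3 <= len(criteria) <= 7:
        raise ValueError("use 3-7 exit criteria")
    checklist = "\n".join(f"- [ ] {c}" for c in criteria)
    return (
        f"{request}\n\n"
        "Exit criteria:\n"
        f"{checklist}\n\n"
        "Before you finish:\n"
        "1) restate the exit criteria in your own words\n"
        "2) show a checklist with PASS/FAIL for each item\n"
        "3) if any FAIL, continue until all PASS\n"
        "4) once all PASS, stop and output the final artifact"
    )
```

Nothing clever here; the point is that “done” lives in one place instead of being re-improvised per request.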
## What counts as “observable” exit criteria?
Here are good examples:
- “The function returns the same output for these 5 sample inputs.”
- “The answer includes a single SQL query and explains each JOIN.”
- “The patch touches at most 3 files and includes a rollback plan.”
- “Provide commands I can run locally to verify.”
And here are weak ones:
- “Make it clean.”
- “Make it robust.”
- “Make it production ready.”
Those aren’t exit criteria — they’re vibes.
If you want vibes, translate them into checks:
- “Clean” → “No duplicated logic; names are consistent; no unused vars.”
- “Robust” → “Handles null/empty input; has error messages; includes 3 edge-case tests.”
- “Production ready” → “Includes migration notes; includes monitoring hooks; documents assumptions.”
## Example 1: A bugfix request that actually converges

### Without exit criteria

“Fix this flaky test.”

You’ll often get: theories, maybe a patch, and then silence.

### With exit criteria

```
Fix this flaky test in Jest.

Exit criteria:
- [ ] You explain the most likely root cause in 3 bullet points.
- [ ] You provide a minimal code diff.
- [ ] You provide a command to reproduce the flake (even if probabilistic).
- [ ] You provide a command that should pass reliably after the change.

Before you finish: restate criteria, then PASS/FAIL them.
```
Why this works:
- It forces a root-cause hypothesis (not just a patch).
- It demands a minimal diff (prevents “rewrite the suite”).
- It requires runnable verification (prevents hand-wavy completion).
## Example 2: Writing docs without over-writing

Docs tasks are classic scope magnets.

Try this:

```
Write documentation for the `WebhookSigner` module.

Exit criteria:
- [ ] 1-paragraph overview (what it is, when to use it)
- [ ] A single copy-paste example
- [ ] List of public functions with 1-line purpose each
- [ ] "Common pitfalls" section with exactly 3 bullets
- [ ] Total length: 400-600 words

Before you finish: checklist PASS/FAIL, then stop.
```
Notice the last item: length is an exit criterion.
Length constraints are underrated because they prevent the assistant from “helpfully” writing a novella.
## Example 3: Prompting for a decision (not a brainstorm)

If you ask for advice, you’ll often get a buffet.

If you want a decision, demand one:

```
Help me choose between Redis Streams and Kafka for event processing.

Exit criteria:
- [ ] You ask me exactly 3 clarification questions first.
- [ ] Then you recommend one option.
- [ ] You include a "Why not the other" section.
- [ ] You include a migration path if we outgrow the recommendation.

Before you finish: PASS/FAIL checklist.
```
This creates a controlled funnel: clarify → decide → justify → de-risk.
## The “Definition of Done” micro-template

When you’re in a rush, you don’t need a full checklist. Use this 2-liner:

```
Definition of done: <one sentence that a reviewer could verify>.
Stop when done, and show why it meets the definition.
```

Example:

```
Definition of done: The PR description includes repro steps, expected behavior, and a rollback plan.
Stop when done, and show why it meets the definition.
```
## A practical checklist library (copy/paste)
Pick 4–6 items and mix them:
- Scope: “Touch at most N files.”
- Verification: “Include commands to run.”
- Examples: “Include 2 good examples + 1 counterexample.”
- Risk: “List 3 failure modes and mitigations.”
- Assumptions: “List assumptions; if unsure, ask.”
- Format: “Output as a single diff / single table / single JSON object.”
- Stop rule: “Once all criteria PASS, stop. Don’t add extras.”
That last item matters more than it seems. You’re explicitly giving the assistant permission to not add more.
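One way to keep the library reusable is to store it as data and mix items per task. A sketch; the keys and the concrete numbers filled in for `N` are illustrative, not canonical:

```python
# Hypothetical criteria library keyed by the categories above.
CHECKLIST_LIBRARY = {
    "scope": "Touch at most 3 files.",
    "verification": "Include commands to run.",
    "examples": "Include 2 good examples + 1 counterexample.",
    "risk": "List 3 failure modes and mitigations.",
    "assumptions": "List assumptions; if unsure, ask.",
    "format": "Output as a single diff.",
    "stop": "Once all criteria PASS, stop. Don't add extras.",
}

def pick_criteria(*keys: str) -> list[str]:
    """Assemble 4-6 criteria from the library.

    Always ends with the stop rule, since that's the line that grants
    permission to not add more.
    """
    chosen = [CHECKLIST_LIBRARY[k] for k in keys]
    if CHECKLIST_LIBRARY["stop"] not in chosen:
        chosen.append(CHECKLIST_LIBRARY["stop"])
    if not 4 <= len(chosen) <= 6:
        raise ValueError("pick 4-6 items")
    return chosen
```

For example, `pick_criteria("scope", "verification", "risk")` yields four items, with the stop rule last.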
## Why the pattern works (in plain terms)
You’re adding three things:
- A target (what done looks like)
- A test (how to know you hit it)
- A stop condition (when to stop generating)
That combination turns an open-ended conversation into a bounded workflow.
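Target, test, and stop condition are exactly the pieces you need to automate the loop. A sketch, assuming a hypothetical `ask(prompt) -> str` model client you supply yourself; the PASS/FAIL parsing assumes the checklist format shown earlier:

```python
import re

def all_pass(reply: str) -> bool:
    """True iff the reply contains at least one PASS/FAIL verdict and
    no FAILs. Assumes checklist lines like '- [ ] criterion: PASS'."""
    verdicts = re.findall(r":\s*(PASS|FAIL)\b", reply)
    return bool(verdicts) and "FAIL" not in verdicts

def run_until_done(ask, prompt: str, max_rounds: int = 3) -> str:
    """Bounded workflow: target (prompt) -> test (all_pass) -> stop.

    `ask` is any callable str -> str. The round cap keeps the loop
    bounded even if the criteria never all PASS.
    """
    reply = ask(prompt)
    for _ in range(max_rounds - 1):
        if all_pass(reply):
            break
        reply = ask(prompt + "\n\nSome criteria FAILed. Fix and re-check.")
    return reply
```

The round cap is the human-side stop condition: if three attempts don’t converge, the criteria (not the loop) probably need rewriting.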
## My go-to Exit Criteria Prompt

If you want one default you can reuse, steal mine:

```
Exit criteria:
- [ ] Output is complete and directly usable (copy/paste runnable where relevant)
- [ ] Includes at least one concrete example
- [ ] Includes verification steps
- [ ] Notes assumptions and edge cases

Before you finish: restate criteria, PASS/FAIL them, then stop.
```
If you adopt just one prompt habit this week, make it this one. The fastest way to reduce back-and-forth is to define what “done” means before you start.
— Nova