You know the feeling: you ask for a solution, you get something plausible, and five minutes later you realize it’s subtly wrong.
The fix isn’t “write a longer prompt.” It’s to stop treating the first answer as shippable.
I use a simple workflow I call the Two-Pass Prompt:
- Generate an answer (fast, concrete)
- Critique it against explicit checks (ruthless)
- Improve it into a final output (patch-ready)
This is model-agnostic and works for code, docs, tickets, and even product decisions.
## Why two passes beat one
Most failures aren’t because the model “doesn’t know.” They’re because:
- you didn’t specify a success condition
- the answer didn’t get pressure-tested
- the model optimized for fluency, not correctness
A second pass gives you something you usually only get from humans: review.
The trick is to make the review structured and non-optional.
## The template (copy/paste)

### Pass 1 — Generate

```
ROLE: You are a senior engineer.
TASK: Produce the best first draft.
OUTPUT: <whatever format you need>
CONSTRAINTS:
- Keep it concise.
- Make assumptions explicit.
- If you must guess, label it GUESS.
```
### Pass 2 — Critique

```
ROLE: You are a strict reviewer.
INPUT: (paste the Pass 1 output)
GOAL: Find issues and missed requirements.
CHECKS:
- Correctness: any logical/technical mistakes?
- Completeness: any missing steps/edge cases?
- Safety: any risky commands or irreversible actions?
- Maintainability: naming, structure, tests.
- Spec fit: does it satisfy the original task and constraints?
OUTPUT:
- A bullet list of findings (each with severity: HIGH/MED/LOW)
- A "Fix plan" (ordered steps)
- A "Questions" section (only if needed)
```
### Pass 3 — Improve

```
ROLE: You are the original author fixing the draft.
INPUTS:
- Original task
- Pass 1 draft
- Pass 2 critique
GOAL: Produce the final output.
RULES:
- Apply HIGH severity fixes first.
- Don’t introduce new features.
- If something is uncertain, add a note and a safe default.
OUTPUT: Final answer only.
```
Yes, that’s three messages. In practice I still call it “two-pass” because the pattern is draft → review → final.
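If you find yourself running the pattern often, it scripts nicely. Here's a minimal sketch in TypeScript; `callModel` is a hypothetical stand-in for whatever LLM client you use, and the inlined prompts are abbreviated versions of the templates above:

```ts
// Hypothetical model call; swap in your provider's SDK.
type CallModel = (prompt: string) => Promise<string>;

// Draft → critique → final, as three chained calls.
async function twoPass(task: string, callModel: CallModel): Promise<string> {
  const draft = await callModel(
    `ROLE: senior engineer. TASK: produce the best first draft.\n\nTASK: ${task}`
  );
  const critique = await callModel(
    `ROLE: strict reviewer. GOAL: find issues and missed requirements.\n\nDRAFT:\n${draft}`
  );
  return callModel(
    `ROLE: original author fixing the draft. Apply HIGH severity fixes first.\n\n` +
      `TASK: ${task}\n\nDRAFT:\n${draft}\n\nCRITIQUE:\n${critique}`
  );
}
```

The point of the data flow: Pass 2 only sees the draft, so it can't be anchored by the original framing, while Pass 3 sees everything.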
## Example: a code change that’s actually shippable
Let’s say you want a small refactor: “Add request timeouts and retries to our fetch wrapper.” A one-shot prompt often returns something that compiles but fails in production.
### Pass 1 prompt (Generate)

We have this wrapper:

```ts
export async function getJson(url: string) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Task: add (1) a 10s timeout, (2) retries on network errors and 5xx, up to 3 attempts with exponential backoff.

Constraints:

- TypeScript
- No external deps
- Keep the API the same

Output: code only.
### Pass 2 prompt (Critique)
Instead of “is this good?”, give it a checklist:
- Does timeout abort the fetch correctly?
- Are 4xx excluded from retries?
- Is backoff bounded (no huge waits)?
- Are errors surfaced with useful context?
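The first check is where most drafts slip. A correct timeout hinges on `AbortController` plus a timer you always clear; here's a minimal sketch of just that piece (assuming Node 18+ for a global `fetch`):

```ts
// Minimal timeout wrapper: abort the request after `ms`, and always
// clear the timer so it can't fire (or keep the process alive) later.
async function fetchWithTimeout(url: string, ms: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```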
### What changes in practice
The critique step often catches things like:
- retrying on 429/4xx accidentally
- forgetting to clear timers
- not handling `AbortError` vs real network errors
- backoff math that overflows or waits too long
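That last one is easy to guard against: cap the exponent. Pulled out as a hypothetical helper (name and defaults are mine) so the math is testable on its own:

```ts
// Bounded exponential backoff: 1s, 2s, 4s, then capped at 4s forever.
function backoffDelay(attempt: number, baseMs = 1000, capMs = 4000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}
```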
### Pass 3 output (Improve)
You end up with something closer to this:
```ts
// Thrown for non-OK responses so the catch block can tell
// HTTP failures apart from network errors.
class HttpError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'HttpError';
  }
}

export async function getJson(url: string) {
  const maxAttempts = 3;
  const timeoutMs = 10_000;
  let lastErr: unknown;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (!res.ok) {
        // Retry only on 5xx; 4xx are the caller's problem.
        if (res.status >= 500 && res.status < 600 && attempt < maxAttempts) {
          const delay = Math.min(1000 * 2 ** (attempt - 1), 4000);
          await new Promise(r => setTimeout(r, delay));
          continue;
        }
        throw new HttpError(`HTTP ${res.status} for ${url}`);
      }
      return await res.json();
    } catch (err) {
      lastErr = err;
      const isAbort = err instanceof DOMException && err.name === 'AbortError';
      // Retry only real network failures, not timeouts or HTTP errors.
      const shouldRetry =
        !isAbort && !(err instanceof HttpError) && attempt < maxAttempts;
      if (shouldRetry) {
        const delay = Math.min(1000 * 2 ** (attempt - 1), 4000);
        await new Promise(r => setTimeout(r, delay));
        continue;
      }
      throw err;
    } finally {
      clearTimeout(timer);
    }
  }
  throw lastErr instanceof Error ? lastErr : new Error('Request failed');
}
```
I’m not claiming this is perfect for every codebase. The point is: the review step forces the prompt to cover the boring-but-critical details.
## How to write good critique checks
Good checks are binary (pass/fail), not vibes.
Instead of:
- “Make it clean”
Use:
- “No new dependencies”
- “Output must be valid JSON”
- “Include at least 3 edge cases”
- “Provide a minimal test plan”
If you’re stuck, start with these four:
- Correctness: does it do what it says?
- Completeness: what’s missing for a real user?
- Constraints: did it violate anything (deps, style, time)?
- Failure modes: what happens when it goes wrong?
## Make it faster: reuse a “Critique Pack”
The biggest productivity win is turning your checks into a reusable snippet.
For example, for internal tooling code I keep this on hand:
- Any breaking changes?
- Any irreversible commands?
- Any missing error handling?
- Any secrets accidentally logged?
- Any tests or at least a test plan?
Drop it into Pass 2 and you get consistent reviews across tasks.
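If you script your prompts, the pack can live as data. A sketch (the helper name and wiring are mine, not a standard API):

```ts
// Reusable "Critique Pack": checks as data, so every Pass 2 prompt
// gets the same review criteria.
const toolingChecks: string[] = [
  "Any breaking changes?",
  "Any irreversible commands?",
  "Any missing error handling?",
  "Any secrets accidentally logged?",
  "Any tests, or at least a test plan?",
];

// Build a Pass 2 prompt from a draft plus a check list.
function buildCritiquePrompt(draft: string, checks: string[]): string {
  return [
    "ROLE: You are a strict reviewer.",
    `INPUT:\n${draft}`,
    "CHECKS:",
    ...checks.map((c) => `- ${c}`),
    "OUTPUT: findings with severity (HIGH/MED/LOW), then an ordered fix plan.",
  ].join("\n");
}
```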
## When not to use this
If you just need brainstorming, two-pass is overkill.
But if you’re about to:
- merge code
- send a customer-facing email
- publish docs
- make a product call
…then the extra 60–90 seconds pays for itself.
## The punchline
Don’t aim for the “best prompt.” Aim for the best workflow.
Draft → critique → final turns an unreliable one-shot into something you can actually ship.
If you try it, start small: take your next prompt, add a reviewer pass with 5 checks, and see what it catches.