You know that feeling when you think a task is clear… and then you ship code that’s technically correct but totally wrong?
It usually isn’t a coding problem. It’s an assumption problem.
When you ask an LLM to help you build something, it will happily fill in missing requirements with plausible defaults. Humans do the same thing, but with the added bonus of arguing about it later.
Over time I’ve started treating “missing requirements” as a first-class bug. My go-to fix is a simple pattern I call the Assumption Inventory Prompt.
It’s a short preflight step that forces the assistant to:
- list what it’s assuming
- mark which assumptions are risky
- ask the smallest set of questions to remove ambiguity
- propose safe defaults when you don’t want to answer questions
The payoff: fewer rewrites, fewer “wait, that’s not what I meant” moments, and a much easier path to good code.
The pattern in one sentence
Before you generate solutions, generate a structured list of assumptions and questions.
This sounds obvious, but it’s surprisingly rare in day-to-day prompting. Most prompts jump straight from “do X” to “here’s a solution”, skipping the part where you actually align on what X means.
The Assumption Inventory Prompt (copy/paste)
Use this as a drop-in preamble for any task where requirements might be fuzzy.
Before proposing a solution, do an Assumption Inventory.
1) Restate the goal in 1–2 sentences.
2) List assumptions you’re making (at least 8). Group them by:
- Product/UX
- Data & edge cases
- Security/privacy
- Performance/scale
- Ops/maintenance
3) For each assumption, label it as Low/Medium/High risk.
4) Ask up to 5 clarifying questions that remove the highest-risk ambiguity.
5) If I don’t answer, propose safe defaults and continue.
Then proceed with the solution.
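If you use this preamble a lot, it's worth wrapping in a tiny helper so you never retype it. Here's a minimal sketch in Node.js — the function name and structure are my own, and it's just string assembly, so it works with any LLM client:

```javascript
// Sketch: prepend the Assumption Inventory preamble to any task prompt.
// The preamble mirrors the template above.
const INVENTORY_PREAMBLE = `Before proposing a solution, do an Assumption Inventory.
1) Restate the goal in 1-2 sentences.
2) List assumptions you're making (at least 8). Group them by:
   - Product/UX
   - Data & edge cases
   - Security/privacy
   - Performance/scale
   - Ops/maintenance
3) For each assumption, label it as Low/Medium/High risk.
4) Ask up to 5 clarifying questions that remove the highest-risk ambiguity.
5) If I don't answer, propose safe defaults and continue.
Then proceed with the solution.`;

function withAssumptionInventory(task) {
  return `${INVENTORY_PREAMBLE}\n\nTask: ${task}`;
}

console.log(withAssumptionInventory("Add audit logging to our API."));
```

Paste the output straight into your chat, or feed it to whatever API wrapper you already use.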
A small tweak I like: limit the questions. Unlimited questions lead to analysis paralysis. Five forces prioritization.
Concrete example: “Add audit logging to our API”
Here’s a real-world prompt you might send:
Add audit logging to our API. We’re using Node.js and Postgres.
Without an assumption inventory, you’ll often get a generic logging setup that misses the important bits (what counts as an audit event, retention rules, PII handling, correlation IDs, etc.).
With the pattern, the assistant should surface the hidden decisions.
What a good assumption inventory looks like
Restated goal
- Add tamper-resistant, queryable audit logs for sensitive API actions, usable for debugging and compliance.
Assumptions (sample)
Product/UX
- (High) Only certain actions need auditing (e.g. user role changes, billing changes), not every request.
- (Medium) Admins will need an internal UI or at least SQL queries for investigations.
Data & edge cases
- (High) We must store “who did what to which entity” with before/after snapshots (or diffs).
- (Medium) Some actions happen asynchronously (jobs), so we need a notion of actor vs initiator.
Security/privacy
- (High) Audit logs may contain PII; we need redaction and access control.
- (High) We need write-once semantics (or append-only) to reduce tampering risk.
Performance/scale
- (Medium) Writes must be non-blocking; worst-case we can buffer or queue.
- (Low) Read volume is low; mostly investigation queries.
Ops/maintenance
- (High) We need a retention policy (e.g. 180 days) + legal hold capability.
- (Medium) We need correlation IDs to join audit events with application logs.
Clarifying questions (top 5)
- Which endpoints/actions are considered “auditable” in your domain?
- Do you need before/after snapshots, or just “event occurred” records?
- What’s your retention period, and do you need immutable storage guarantees?
- What PII must be redacted (and who is allowed to view unredacted data)?
- Do you already have request IDs / trace IDs, or should we introduce them?
Safe defaults (if you don’t answer)
- Audit only privileged actions (admin/billing/role changes).
- Store event type + actor + target + timestamp + request_id.
- Store diffs for JSON fields; avoid full payload logging.
- Redact emails/tokens; restrict read access to admins.
- Retain 180 days; append-only table with restricted UPDATE/DELETE.
At this point, and only at this point, it makes sense to generate code.
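To make that concrete, here's what the safe defaults above might look like as a first slice of code: a helper that builds a redacted audit event record. The field names (`actor_id`, `target`, etc.) and the token-prefix regex are illustrative assumptions, not a standard schema:

```javascript
// Sketch of the safe defaults: build an audit event with basic redaction.
// Redacts email addresses and token-shaped strings (prefixes are assumed).
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const TOKEN_RE = /\b(?:sk|tok|key)_[A-Za-z0-9]+\b/g;

function redact(value) {
  return String(value)
    .replace(EMAIL_RE, "[redacted-email]")
    .replace(TOKEN_RE, "[redacted-token]");
}

function buildAuditEvent({ eventType, actorId, target, requestId, diff }) {
  return {
    event_type: eventType,                 // e.g. "role.changed"
    actor_id: actorId,                     // who did it
    target,                                // e.g. "user:42"
    request_id: requestId,                 // correlation with app logs
    occurred_at: new Date().toISOString(),
    // Store diffs only (not full payloads), redacted before persisting.
    diff: diff ? redact(JSON.stringify(diff)) : null,
  };
}
```

The append-only guarantee would live in Postgres itself (e.g. revoking UPDATE/DELETE on the audit table for the application role), not in this helper.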
Why this works (and where it saves time)
1) It turns “implicit context” into explicit decisions
LLMs are pattern matchers. If your prompt is vague, the model fills gaps with whatever pattern seems typical. The inventory makes the gaps visible.
2) It makes reviews dramatically easier
If you’re collaborating with a teammate, you can paste the assumption list into a PR description or ticket and ask:
- “Which of these are wrong?”
- “Which do we want to decide differently?”
That’s a much better review conversation than debating a finished implementation.
3) It gives you a safety net when you’re rushing
The “safe defaults” section is underrated.
Sometimes you don’t have time to answer questions. When that happens, you want defaults that are:
- conservative (avoid data loss / security issues)
- reversible (easy to change later)
- observable (easy to detect if the defaults were wrong)
My two rules for using it in practice
Rule 1: Use it when the cost of being wrong is high
I use assumption inventories for anything involving:
- auth/permissions
- money
- data migrations
- user-facing flows
- anything “compliance adjacent” (audit logs, retention, exports)
For tiny refactors, I skip it.
Rule 2: Keep the inventory close to the work
Don’t let it become a separate document that drifts.
Good places to keep it:
- at the top of the chat thread that generated the code
- in a ticket description
- in a docs/decisions/ note (if your team has those)
The goal is traceability:
“Why did we do it this way?” → “Because we assumed X, and it was accepted.”
A lightweight variant for everyday prompts
If you want a shorter version, try this:
Before you answer, list:
- 5 assumptions you’re making
- 3 things that could go wrong
- 3 questions you’d ask if you had time
Then proceed.
It’s not as thorough, but it’s fast and still catches the big stuff.
Closing thought
Most prompting advice focuses on getting better outputs from the model.
This pattern is about getting better alignment before outputs exist.
If you try it this week, use it on a task where you’ve been bitten by ambiguity before. The first time you see a “High risk assumption” that you didn’t realize you were making, it’ll pay for itself.
If you want, reply with a vague task you’re about to do, and I’ll show you what an Assumption Inventory looks like for it.