DEV Community

Nova

Assumption Logs: the simplest way to get more reliable AI help

When an assistant gives a wrong answer, it’s rarely because it “can’t do the task”. Most of the time it quietly assumed something you didn’t mean.

  • It assumed the language/framework version.
  • It assumed the shape of your data.
  • It assumed the goal ("make it work") when you meant "make it safe and maintainable".
  • It assumed you’re okay with breaking changes.

The fix doesn’t need a bigger model or a longer prompt.

It needs an Assumption Log: a tiny section at the top of your request that forces ambiguity into the open before any solution is produced.

In this post I’ll show you a lightweight template, how to use it in under 60 seconds, and concrete examples you can copy.


What’s an Assumption Log?

An Assumption Log is a short list of statements like:

  • “I assume we’re using Node 20+.”
  • “I assume we can add a dependency.”
  • “I assume you want the minimal change, not a refactor.”

Then you add one rule:

If any assumption is uncertain or risky, ask a question instead of proceeding.

That’s it.

This works because it changes the default behavior from:

“Fill in missing context silently.”

to:

“Surface missing context explicitly.”


The copy-paste template

Copy/paste this into the top of your next prompt:

ASSUMPTION LOG
1) Environment: <runtime, OS, versions>
2) Constraints: <time, dependencies, performance, security>
3) Goal: <what success looks like>
4) Non-goals: <what NOT to do>
5) Inputs available: <files, logs, endpoints>
6) Output format: <diff, steps, code snippet, checklist>

RULE: If an assumption is unclear, state it and ask a question before proposing a solution.

If you want an even smaller version:

ASSUMPTIONS (confirm or correct):
- 
- 
If any are uncertain, ask first.

Why this beats “be careful” prompts

A lot of prompts try to solve reliability with vibes:

  • “Be accurate.”
  • “Don’t hallucinate.”
  • “Think step by step.”

Those can help, but they don’t pin down the missing facts.

An Assumption Log creates a practical mechanism:

  1. Enumerate the unknowns.
  2. Confirm or question them.
  3. Only then act.

It’s like adding unit tests for your context.
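The "unit tests for your context" analogy can be made literal. Here is a minimal sketch of a checker that flags any template field still left as a `<placeholder>` before you hit send; the function name and the angle-bracket convention are my own invention, not part of any tool:

```python
import re

def unfilled_fields(assumption_log: str) -> list[str]:
    """Return any template fields still left as <placeholder> text."""
    return re.findall(r"<[^<>\n]+>", assumption_log)

log = """ASSUMPTION LOG
1) Environment: Node.js 20, Express 4
2) Constraints: <time, dependencies, performance, security>
"""

# The Constraints line was never filled in, so it gets flagged.
print(unfilled_fields(log))
```

If the list is non-empty, you haven't finished writing the log yet.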


Example 1: Fixing a bug without context drift

Your original request

“My endpoint returns 500 sometimes. Can you help?”

With an Assumption Log

ASSUMPTION LOG
1) Environment: Node.js 20, Express 4
2) Constraints: no new dependencies, minimal code change
3) Goal: identify root cause + add logging that makes it reproducible
4) Non-goals: rewriting to a different framework
5) Inputs available: stack trace + current route handler
6) Output format: step-by-step triage plan + a code patch

RULE: If any assumption is unclear, state it and ask a question before proposing a solution.

Context:
- Here is the route handler: <paste>
- Here is a sample error: <paste>


What you get back (ideally)

Instead of guessing, the assistant should come back with something like:

  • “I’m assuming req.body is already parsed as JSON. Is express.json() enabled globally?”
  • “I’m assuming the 500 is unhandled exceptions, not upstream timeouts. Do you have a reverse proxy (Nginx) with its own timeouts?”

Then it can propose the smallest patch:

  • add structured logging around the failure point
  • validate inputs at the boundary
  • return a consistent error shape

The important part: it asked the two questions that decide whether the patch is correct.
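The "validate inputs at the boundary + consistent error shape" idea is language-agnostic. Here is a framework-free Python sketch of it (the field names and response shape are made up for illustration; they are not taken from the Express app in the example):

```python
def handle_order(payload: dict) -> dict:
    """Validate at the boundary; return one consistent shape either way."""
    missing = [f for f in ("order_id", "total") if f not in payload]
    if missing:
        # Reject early with a predictable error shape instead of a bare 500.
        return {"ok": False, "error": f"missing fields: {', '.join(missing)}"}
    return {"ok": True, "data": {"order_id": payload["order_id"]}}

print(handle_order({"total": 9.99}))
# → {'ok': False, 'error': 'missing fields: order_id'}
```

Whatever your stack, the point is the same: the error path and the happy path return the same shape, so callers never have to guess.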


Example 2: Data work (where silent assumptions are deadly)

If you’ve ever asked an assistant to “merge these CSVs”, you know what happens next:

  • It assumes a join key.
  • It assumes date formats.
  • It assumes missing values are safe to drop.

Use the log:

ASSUMPTION LOG
1) Environment: Python 3.11, pandas
2) Constraints: keep row count stable unless explicitly requested
3) Goal: left-join orders to customers, preserve all orders
4) Non-goals: deduplicating customers (handle separately)
5) Inputs available: two CSV samples + column lists
6) Output format: pandas code + explanation of join + validation checks

RULE: If any assumption is unclear, ask first.

Tables:
- orders.csv columns: order_id, customer_id, created_at, total
- customers.csv columns: id, email, created_at

Now the assistant is much more likely to ask the right question:

  • “Is orders.customer_id guaranteed to match customers.id 1:1, or can customers be deleted/merged?”

And it should include sanity checks by default:

merged = orders.merge(
    customers,
    left_on="customer_id", right_on="id",
    how="left", validate="m:1",
)

# Validation
assert len(merged) == len(orders)
missing = merged[merged["email"].isna()]
print("Orders with missing customer:", len(missing))

That validate="m:1" is an assumption made explicit. If it fails, you learn something real about your data.
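To see the check fire, duplicate a customer id: `validate="m:1"` requires the right-hand keys to be unique, so the merge raises instead of silently fanning out rows. A small self-contained demo (toy data, requires pandas):

```python
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2], "customer_id": [10, 10]})
# Duplicate id 10 violates the "one row per customer" assumption.
customers = pd.DataFrame({"id": [10, 10], "email": ["a@x.com", "b@x.com"]})

try:
    orders.merge(customers, left_on="customer_id", right_on="id",
                 how="left", validate="m:1")
except pd.errors.MergeError as e:
    print("Assumption violated:", e)
```

Without `validate`, this merge would quietly produce four rows from two orders.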


How to use Assumption Logs day-to-day

Here’s a simple workflow I use:

  1. Write the task in one sentence.
  2. Add the Assumption Log (takes ~30 seconds).
  3. Paste the minimum context (code snippet, error, schema).
  4. Ask for the output format you want (diff/checklist/tests).
  5. If the assistant asks questions, answer them briefly and rerun.
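The five steps above can be collapsed into one small helper that assembles the prompt for you. This is a hypothetical sketch, assuming you keep the log as a dict keyed by the template's field names; unfilled fields stay visibly marked:

```python
FIELDS = ["Environment", "Constraints", "Goal", "Non-goals",
          "Inputs available", "Output format"]

def build_prompt(task: str, log: dict, context: str = "") -> str:
    """Assemble a one-sentence task, the assumption log, and context."""
    lines = [task, "", "ASSUMPTION LOG"]
    lines += [f"{i}) {field}: {log.get(field, '<unknown>')}"
              for i, field in enumerate(FIELDS, 1)]
    lines += ["", "RULE: If any assumption is unclear, ask first."]
    if context:
        lines += ["", "Context:", context]
    return "\n".join(lines)

print(build_prompt(
    "Fix the intermittent 500s on /orders.",
    {"Environment": "Node.js 20, Express 4",
     "Goal": "root cause + reproducible logging"},
))
```

Fields you skipped come out as `<unknown>`, which is itself a prompt to either fill them in or let the assistant ask.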

This feels slower the first time.

In practice it’s faster because you avoid the two most expensive failure modes:

  • generating a polished solution built on the wrong premise
  • iterating through five rounds of “that didn’t work” because the root mismatch was never surfaced

A few “default assumptions” worth logging

If you don’t know what to write, start with these recurring landmines:

  • Version (Node 18 vs 20, Python 3.9 vs 3.11, React Router v5 vs v6)
  • Dependencies allowed (yes/no)
  • Breaking changes allowed (yes/no)
  • Security posture (production-safe vs quick prototype)
  • Performance constraints (latency, memory, big-O)
  • Ownership (is this a one-off script or maintained code?)

Even 3–5 bullets will change the conversation.


The one rule that makes it work

Assumption Logs only help if you enforce the rule:

If any assumption is unclear or risky, ask a question before proposing a solution.

If you want to make it extra explicit, add:

  • “Do not write code until you list assumptions and ask clarifying questions.”

Use that for high-stakes tasks (production changes, migrations, security).


Try it today

Pick one task you’re about to do anyway (a small bug, a refactor, a data transform). Add the Assumption Log template and see how the quality changes.

If nothing else, you’ll get a better prompt.

Most of the time, you’ll also get a better solution.
