If you use an assistant for coding, debugging, or docs, you’ve probably seen this failure mode:
You ask for something “simple” and get a solution that’s technically correct — but not correct for your meaning of the words.
Most of the time, it’s not a capability problem. It’s a vocabulary problem.
Words like “fast”, “secure”, “migration”, “done”, “MVP”, “production-ready”, “idempotent”, or even “API” can mean very different things depending on the team, product, and constraints.
The fix is a tiny habit that pays back immediately:
The Glossary-First Prompt
Before you ask for code, ask for a shared glossary.
It sounds bureaucratic. It isn’t. It’s a 2–4 minute alignment step that prevents 30 minutes of rewrites.
When it helps most
Use this whenever:
- The task has ambiguous terms ("optimize", "scale", "secure", "clean")
- You’re interacting with a new codebase or domain
- You’re writing specs or tickets from a vague request
- You’re iterating on a prompt and outputs keep drifting
The template (copy/paste)
You are helping me with [TASK].
Before proposing a solution, create a glossary with:
1) The key terms you will rely on
2) Your default interpretation for each
3) A question for me whenever a term could reasonably mean multiple things
Constraints:
- Keep it to 8–15 terms.
- Prefer terms that change implementation choices.
- After I answer, restate the glossary and only then propose the solution.
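If you reuse this template often, it can be scripted so the wording stays consistent across requests. A minimal sketch; the function name and parameters are my own, and the text mirrors the template above:

```python
def glossary_first_prompt(task, min_terms=8, max_terms=15):
    """Build a glossary-first preamble for a task description."""
    return "\n".join([
        f"You are helping me with {task}.",
        "Before proposing a solution, create a glossary with:",
        "1) The key terms you will rely on",
        "2) Your default interpretation for each",
        "3) A question for me whenever a term could reasonably mean multiple things",
        "Constraints:",
        f"- Keep it to {min_terms}-{max_terms} terms.",
        "- Prefer terms that change implementation choices.",
        "- After I answer, restate the glossary and only then propose the solution.",
    ])

# Example: glossary_first_prompt("optimizing the /search endpoint")
```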
That’s it. The magic is in the “question whenever a term could reasonably mean multiple things” line.
A concrete example: “Make this endpoint faster”
Imagine you ask:
Make the /search endpoint faster.
Without a glossary, you’ll often get a generic performance checklist: add caching, add indexes, parallelize calls, etc.
With a glossary-first step, the assistant should ask things like:
- “Faster” = lower p95 latency? lower p99? less server-side time? faster time-to-first-byte? faster perceived time on the client?
- “Search” = full-text, prefix, fuzzy, semantic? what does “correct” mean?
- “Production” = traffic level, SLA, error budget, cost limits?
- “Cache” = per-user? shared? TTL? invalidation rules?
Here’s what a tiny glossary might look like:
Glossary draft
- p95 latency: 95th percentile end-to-end request time, measured at the edge
- acceptable result drift: none; results must match the current ranking exactly
- budget: must stay within current infra spend
- peak traffic: 800 RPS, 10k concurrent users
- “fast”: p95 < 250ms, p99 < 600ms
Questions
- Is the latency target end-to-end, or server-side only?
- Can ranking change if relevance improves?
- Is caching allowed for authenticated users?
Once those are answered, the solution becomes sharper and safer.
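A side benefit of numeric definitions like the one for “fast” above: you can check them mechanically once you have latency samples. A minimal sketch using nearest-rank percentiles; the function names and the sample data are illustrative, not from the post:

```python
import math

def percentile(samples_ms, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ranked = sorted(samples_ms)
    k = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[k]

def meets_fast_target(samples_ms, p95_budget_ms=250, p99_budget_ms=600):
    """Check latency samples against the glossary's definition of "fast"."""
    return (percentile(samples_ms, 95) < p95_budget_ms
            and percentile(samples_ms, 99) < p99_budget_ms)
```

With an agreed definition in hand, “is it fast yet?” stops being a debate and becomes a measurement.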
Another example: “Make it secure”
“Secure” is the most overloaded word in software.
A glossary-first prompt forces specificity:
- Threat model: who are we defending against?
- Surface: API only, web app, mobile, CI/CD?
- Scope: auth, secrets, data storage, logging, transport?
- Standards: OWASP ASVS? SOC2? internal policies?
- Non-goals: what’s explicitly out of scope?
A good glossary here is basically a mini threat model — and that’s exactly what you want.
How to run this as a two-step workflow
I like to run it in two passes:
Pass 1 — glossary and questions only
Your instruction:
Only output:
1) Glossary draft
2) Clarifying questions
Do not propose solutions yet.
This prevents the assistant from “helpfully” coding before alignment is done.
Pass 2 — solution anchored to the glossary
After you answer, ask:
Restate the agreed glossary (updated).
Then produce the solution.
If any part of the request conflicts with the glossary, call it out.
This makes the glossary a contract, not a formality.
A lightweight version for daily use
If you don’t want a full glossary every time, do a micro version:
Before you answer, define what you mean by: [TERM1], [TERM2], [TERM3].
If any are ambiguous, ask one question per term.
Example:
Before you answer, define what you mean by: “done”, “minimal”, “safe”.
Even this catches a surprising amount.
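The micro version is small enough to script as well, so the phrasing stays identical every time. A sketch; the helper name is my own:

```python
def micro_prompt(*terms):
    """Build the micro glossary prompt for a handful of ambiguous terms."""
    quoted = ", ".join(f'"{t}"' for t in terms)
    return (f"Before you answer, define what you mean by: {quoted}.\n"
            "If any are ambiguous, ask one question per term.")

# Example: micro_prompt("done", "minimal", "safe")
```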
Tips that make it work in real teams
1) Put numbers on squishy words
If a word can be measured, measure it.
- “fast” → p95/p99 targets
- “small” → max LOC changed, max files touched
- “cheap” → infra spend ceiling
- “reliable” → error budget, retries, SLO
2) Include negative definitions (non-goals)
A glossary is stronger when it says what doesn’t count.
- “Migration” does not include backfilling historical data
- “MVP” does not include admin UI
- “Production-ready” does not include multi-region failover
3) Treat glossary changes as a signal
If the glossary keeps changing, it’s telling you something:
- The task is underspecified
- Stakeholders disagree
- You’re mixing problems (performance + features + refactors)
Pause and split the work.
The payoff
Glossary-first prompting does three things:
1) Reduces rework by preventing you from building the wrong interpretation
2) Improves reasoning because the assistant can choose techniques that match your definitions
3) Creates reusable context — you can paste the glossary into future requests and get consistent results
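One way to make that reusable context concrete is to keep the agreed glossary as data and render it into a preamble for each new request. A sketch, assuming you store entries as plain strings; the example entries echo the /search glossary from earlier:

```python
SEARCH_GLOSSARY = {
    "fast": "p95 < 250ms, p99 < 600ms, measured at the edge",
    "peak traffic": "800 RPS, 10k concurrent users",
    "budget": "must stay within current infra spend",
}

def render_glossary(glossary):
    """Render a stored glossary into a prompt preamble."""
    lines = ["Use these agreed definitions. Flag any conflict before answering:"]
    lines += [f"- {term}: {meaning}" for term, meaning in glossary.items()]
    return "\n".join(lines)
```

Paste the rendered block at the top of future prompts and the contract travels with the work.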
If you want one habit that makes every prompt calmer, clearer, and more shippable, make the assistant define the words before it writes the code.
If you try this, start with just one term: ask the assistant to define what “done” means for your task before it begins.