
Nova Elvaris

How to Write a Prompt Contract That Stops Context Drift

A lot of prompt problems are not really prompt problems.

They are spec problems.

You ask for one thing, the assistant gives you something adjacent, you clarify, it improves, and then two turns later it drifts again. Most teams respond by adding more words to the prompt.

That usually makes the prompt longer, not clearer.

What works better is writing a prompt contract.

A prompt contract is a small, explicit agreement between you and the model:

  • what input it will receive
  • what output it must return
  • what rules it must not break
  • what it should do when information is missing

Once you start writing prompts this way, context drift drops fast.

What a prompt contract actually is

Think of it like an API contract, but for language.

Instead of vague instructions like:

Review this code and tell me what you think.

You define the job more like this:

  • Goal: identify correctness, reliability, and security issues
  • Input: one diff plus optional test output
  • Output: bullet list with severity, file, issue, fix
  • Constraints: do not comment on style unless it affects correctness
  • Failure mode: if evidence is insufficient, say what additional context is needed

That one shift changes the quality of the conversation.

The model stops guessing what “good” means.

Why context drift happens

Most drift comes from one of four causes:

1. The task is underspecified

If you do not define the exact deliverable, the model fills in the blanks.

That can look smart on the first run and still be wrong for your workflow.

2. The output format is too loose

If one run returns prose, another returns bullets, and another returns half prose plus half checklist, your prompt is not stable enough yet.

3. Constraints are implied, not stated

Humans assume things like:

  • do not rewrite unrelated code
  • keep the answer short
  • ask before making risky assumptions
  • do not invent data

Models do better when those are explicit.

4. Missing-information behavior is undefined

If the model does not know what to do when context is incomplete, it often improvises.

That is where hallucinated certainty shows up.

The 5-part contract template

This is the template I keep coming back to.

1. Goal

What exact job should the model do?

Bad:

Help with this feature.

Better:

Turn the PRD into an implementation plan with milestones, dependencies, and open questions.

2. Inputs

Say what the model should expect.

Example:

Inputs:
- Product requirement document
- Existing API routes
- Current database schema
- Relevant constraints from README

This matters because it sets the boundary of what the model is allowed to rely on.
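One way to enforce that boundary in code is to refuse to build the prompt at all when a declared input is absent. A minimal sketch, assuming you assemble prompts yourself; the `buildPrompt` helper and input names are illustrative, not from any library:

```typescript
// Minimal sketch: fail fast when a declared input is missing instead of
// letting the model improvise around the gap. Names are illustrative.
function buildPrompt(
  task: string,
  inputs: Record<string, string | undefined>
): string {
  const missing = Object.entries(inputs)
    .filter(([, value]) => !value)
    .map(([name]) => name);
  if (missing.length > 0) {
    throw new Error(`Missing declared inputs: ${missing.join(", ")}`);
  }
  const sections = Object.entries(inputs)
    .map(([name, value]) => `${name}:\n${value}`)
    .join("\n\n");
  return `${task}\n\n${sections}`;
}
```

If the PRD or schema is not there, you find out before the model does.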

3. Output contract

Define the shape of the answer.

Example:

Return:
1. Summary of the requested change
2. Risks and assumptions
3. Step-by-step implementation plan
4. Test plan
5. Open questions

Even better, include formatting rules:

Format:
- Markdown
- Use H2 headings
- Keep each section under 8 bullets
- Include at least 3 concrete tests
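Mechanical formatting rules like these are also easy to verify after the model answers. A rough sketch of a checker for the two checkable rules above (H2 headings, a bullet cap per section); the `checkFormat` function is my own, not from any framework:

```typescript
// Rough sketch: verify a markdown answer against the formatting rules above.
// Only the mechanical rules are checked: H2 headings exist, and no section
// exceeds the bullet cap.
function checkFormat(markdown: string, maxBullets = 8): string[] {
  const problems: string[] = [];
  const sections = markdown.split(/^## /m).slice(1); // text after each H2
  if (sections.length === 0) problems.push("no H2 headings found");
  sections.forEach((section, i) => {
    const bullets = section
      .split("\n")
      .filter((line) => /^[-*] /.test(line.trim()));
    if (bullets.length > maxBullets) {
      problems.push(`section ${i + 1} has ${bullets.length} bullets`);
    }
  });
  return problems; // empty means the format contract held
}
```

A check like this is what turns "the output format is too loose" from a feeling into a failed assertion.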

4. Constraints

These are the rules that keep the model from “being helpful” in the wrong direction.

Example:

Constraints:
- Do not invent endpoints or schema fields
- Prefer minimal changes over rewrites
- Call out uncertainty explicitly
- Do not include generic best-practice filler

5. Failure behavior

This is the most underrated part.

Tell the model what to do if the input is not enough.

Example:

If the available information is insufficient:
- do not guess
- list the missing information
- state the safest next step

That one section prevents a lot of fake confidence.

A before-and-after example

Here is a weak prompt:

Review this migration and let me know if anything looks wrong.

Here is the same task as a contract:

You are reviewing a database migration.

Goal:
Find correctness, rollback, locking, and data-loss risks.

Input:
- SQL migration
- affected table names
- brief product context if provided

Return:
- verdict: low / medium / high risk
- bullet list of issues
- why each issue matters
- suggested fix or mitigation
- rollout notes if relevant

Constraints:
- prioritize correctness over style
- do not comment on naming unless it affects maintainability
- do not assume zero-downtime is possible unless shown

If information is missing:
- state what is missing
- do not invent operational guarantees

That second version is dramatically easier to trust.

Where prompt contracts help most

I have seen this work especially well for:

  • code review
  • PRD to implementation planning
  • debugging summaries
  • content generation with fixed structure
  • support reply drafting
  • extraction and transformation workflows

The common trait is simple: the job has a deliverable.

If there is a deliverable, there should be a contract.

A tiny TypeScript representation

If you build repeatable workflows, it helps to store contracts in code.

export type PromptContract = {
  goal: string;
  inputs: string[];
  output: string[];
  constraints: string[];
  onMissingInfo: string[];
};

export const reviewContract: PromptContract = {
  goal: "Review a code diff for correctness and risk",
  inputs: ["git diff", "test output if available"],
  output: [
    "risk level",
    "issues with file references",
    "suggested fixes",
    "missing information"
  ],
  constraints: [
    "prioritize correctness over style",
    "do not rewrite unrelated code",
    "do not invent facts not visible in the diff"
  ],
  onMissingInfo: [
    "say what is missing",
    "avoid guessing",
    "recommend the next safe check"
  ]
};

You do not need a full framework. Even a plain JSON file is enough.
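The payoff of storing the contract as data is that the prompt text can be generated from it, so every run gets the same wording. One possible renderer, sketched under the assumption that you send plain text to the model; the `renderContract` helper is my own, and the type is repeated so the snippet stands alone:

```typescript
// Sketch: turn a stored contract into the prompt text sent to the model.
// Section labels mirror the five-part template from this post.
type PromptContract = {
  goal: string;
  inputs: string[];
  output: string[];
  constraints: string[];
  onMissingInfo: string[];
};

function renderContract(c: PromptContract): string {
  const bullets = (items: string[]) =>
    items.map((item) => `- ${item}`).join("\n");
  return [
    `Goal:\n${c.goal}`,
    `Inputs:\n${bullets(c.inputs)}`,
    `Return:\n${bullets(c.output)}`,
    `Constraints:\n${bullets(c.constraints)}`,
    `If information is missing:\n${bullets(c.onMissingInfo)}`,
  ].join("\n\n");
}
```

Every workflow that uses the same contract object then sends the same prompt text, which is most of what stops drift between runs.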

Common mistakes

Mistake 1: making the contract too abstract

If a stranger could read your prompt and still not know what “done” looks like, it is too vague.

Mistake 2: trying to encode every edge case up front

Start with the smallest contract that prevents the most common failures.

Then evolve it based on real mistakes.

Mistake 3: forgetting the failure path

A contract without missing-information behavior still leaves the model room to bluff.

Mistake 4: mixing multiple jobs into one contract

If the model must review, rewrite, prioritize, estimate, and generate tests all at once, you probably have more than one task.

Split the workflow.

The simplest version you can use today

If you want a fast starting point, copy this:

Goal:

Inputs:

Return:

Constraints:

If information is missing:

Fill those five sections before you hit run.

That is it.

It is not glamorous, but it works.

Prompting gets more reliable when you stop treating prompts like vibes and start treating them like contracts.

If your current workflow feels inconsistent, do not begin by adding clever wording.

Start by defining the agreement.