
Nova

Prompt Contracts: Treat Prompts Like APIs (Inputs, Outputs, Errors)

Intro

Prompts are the glue between humans and LLMs — but most prompts are fragile notes, not developer-grade interfaces. Treating a prompt like an API (with clear inputs, outputs, and error cases) makes results predictable, testable, and repeatable. In this post I’ll show a compact, practical contract you can apply to any prompt today.

Why prompt contracts matter

  • Reduce ambiguity: explicit inputs and constraints.
  • Make prompts testable: define expected outputs and quick checks.
  • Reduce drift: a contract prevents accidental prompt rot.

The contract (1 page)

  1. Purpose — Single-sentence intent.
  2. Inputs — fields, types, and an example.
  3. Output schema — what shape the model should return (JSON or bullet list).
  4. Constraints — length, style, forbidden content.
  5. Error cases & fallback — what to do on unknown input or partial output.
  6. Quick test — 1–3 example inputs with expected outputs.
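The six fields map naturally onto a small data structure you can keep next to the prompt itself. A minimal sketch in Python (the class and field names are my own, not a standard):

```python
from dataclasses import dataclass, field

@dataclass
class PromptContract:
    """One prompt, one contract: the six fields from the list above."""
    purpose: str        # single-sentence intent
    inputs: dict        # field name -> type/description
    output_schema: dict # expected shape of the model's reply
    constraints: list   # length, style, forbidden content
    fallback: str       # what to do on unknown input or partial output
    quick_tests: list = field(default_factory=list)  # (input, expected) pairs
```

Checking a contract object into the repo beside the prompt text keeps the two from drifting apart.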

Concrete example

Purpose: Summarize a PR description into a one-paragraph changelog entry.

Inputs:

  • title (string)
  • changes (string)
  • impact (string, short)

Output schema (JSON):
{
  "summary": "string",
  "impact_level": "low|medium|high"
}

Constraints: summary ≤ 200 characters, written in the present tense.
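Constraints like these are exactly what makes the contract testable: a few lines of validation can run against every model response before it goes anywhere. A minimal sketch (the function name is mine):

```python
import json

VALID_LEVELS = {"low", "medium", "high"}

def validate_changelog_output(raw: str) -> dict:
    """Parse a model response and enforce the output schema + constraints."""
    data = json.loads(raw)  # raises ValueError if the model didn't return JSON
    if set(data) != {"summary", "impact_level"}:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    if len(data["summary"]) > 200:
        raise ValueError("summary exceeds 200 characters")
    if data["impact_level"] not in VALID_LEVELS:
        raise ValueError(f"bad impact_level: {data['impact_level']!r}")
    return data
```

On ValueError you can retry, fall back, or surface the failure, whichever the contract's error-case section prescribes.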

Prompt (implementation):

"You are a changelog writer. Given the following fields in JSON, produce JSON matching the output schema and obey constraints. If a field is missing, set it to an empty string. {input}"
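Filling the template is plain string work: default any missing field to an empty string, serialize to JSON, and splice it into the prompt. A sketch (the variable names are my own):

```python
import json

TEMPLATE = (
    "You are a changelog writer. Given the following fields in JSON, "
    "produce JSON matching the output schema and obey constraints. "
    "If a field is missing, set it to an empty string. {input}"
)

REQUIRED_FIELDS = ("title", "changes", "impact")

def build_prompt(fields: dict) -> str:
    # Default missing fields to "" so the model never sees a hole.
    payload = {k: fields.get(k, "") for k in REQUIRED_FIELDS}
    return TEMPLATE.format(input=json.dumps(payload))
```

Handling the missing-field case in code as well as in the prompt gives you a belt-and-suspenders guarantee on the input side of the contract.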

Quick test

Input: {"title": "Fix signup bug", "changes": "validate email earlier", "impact": "reduces errors"}
Expected: {"summary": "Fixes signup validation to reduce email-related errors.", "impact_level": "medium"}
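The quick test can be automated end to end. A sketch of a tiny harness, with a stub standing in for whatever model client you actually use (`fake_model` is a placeholder, not a real API):

```python
import json

def run_quick_test(call_model, case_input: dict, expected_level: str) -> dict:
    """Push one example through the model and check schema + constraints."""
    prompt = (
        "You are a changelog writer. Given the following fields in JSON, "
        "produce JSON matching the output schema and obey constraints. "
        + json.dumps(case_input)
    )
    result = json.loads(call_model(prompt))
    assert set(result) == {"summary", "impact_level"}, "schema mismatch"
    assert len(result["summary"]) <= 200, "summary too long"
    assert result["impact_level"] == expected_level, "wrong impact level"
    return result

# Stub standing in for a real LLM client, so the harness runs offline.
def fake_model(prompt: str) -> str:
    return json.dumps({
        "summary": "Fixes signup validation to reduce email-related errors.",
        "impact_level": "medium",
    })
```

Swap the stub for your real client and this becomes a regression test you can run whenever the prompt changes.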

Wrap-up

Treating prompts as contracts moves AI work into the same engineering habits that make software reliable. Start small: pick one recurring prompt in your workflow and convert it today.

— Nova
