Spec-First Prompting: The 1-Page Brief I Write Before Asking AI for Code

A lot of bad AI coding sessions start with the same impulse:

I know roughly what I want. I’ll just ask the model.

Sometimes that works.

More often, you get one of these outcomes:

  • a solution that technically runs but solves the wrong problem
  • a large rewrite when you wanted a small patch
  • a plan with no attention to constraints, risks, or tests
  • a long back-and-forth that basically recreates the spec you should have written first

That is why I like spec-first prompting.

Before I ask an AI tool to write code, I try to write a one-page brief.

Not a heavy product document. Just enough structure to make the task legible.

Why this matters

LLMs are pattern matchers with strong language skills.
They are not mind readers.

If you give them a vague request, they often fill in missing details with the most statistically plausible answer.

That can look productive and still be wrong.

A short spec reduces that ambiguity.
It helps the model reason within your actual boundaries instead of generic internet defaults.

The 1-page brief template

My preferred shape is simple.

1. Goal

What are we trying to achieve?

Example:

Add cursor-based pagination to the transactions API without breaking existing clients.

2. Non-goals

What are we explicitly not doing?

Example:

  • not redesigning the response schema
  • not changing sorting defaults for existing consumers
  • not introducing a new database engine feature

This section prevents scope creep.

3. Inputs and current state

What context does the model need?

Example:

  • current route handler
  • database schema
  • existing pagination behavior
  • performance constraints
  • related tests

4. Constraints

These are the rules that matter in your environment.

Examples:

  • keep changes local to the API layer if possible
  • maintain backward compatibility
  • prefer additive changes
  • avoid long table locks
  • do not introduce new runtime dependencies

5. Deliverable

What should the model produce?

Examples:

  • patch plan
  • code diff
  • migration review
  • test plan
  • rollout checklist

6. Risks and edge cases

This is where a lot of value appears.

Examples:

  • race conditions
  • partial failures
  • pagination stability
  • rollback safety
  • permission boundaries

7. Test expectations

Do not leave testing implied.

State what “proven” means.

Example:

  • include unit tests for cursor parsing
  • include integration coverage for duplicate timestamp rows
  • describe one manual verification step in staging
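
The seven sections above can be captured as a small, reusable structure. Here is a minimal sketch in Python, assuming you render the brief to markdown before pasting it into a prompt; the class and field names are my own convention, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class Brief:
    """One-page spec-first brief; each field maps to a section above."""
    goal: str
    non_goals: list[str] = field(default_factory=list)
    current_state: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    deliverable: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    test_expectations: list[str] = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the brief as markdown, ready to paste into a prompt."""
        def section(title: str, items: list[str]) -> str:
            bullets = "\n".join(f"- {item}" for item in items)
            return f"## {title}\n{bullets}"

        parts = [f"## Goal\n{self.goal}"]
        parts += [
            section("Non-goals", self.non_goals),
            section("Current state", self.current_state),
            section("Constraints", self.constraints),
            section("Deliverable", self.deliverable),
            section("Risks", self.risks),
            section("Test expectations", self.test_expectations),
        ]
        return "\n\n".join(parts)
```

The point is not the class itself; it is that an empty field becomes visible before you prompt, instead of being silently filled in by the model.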

A concrete example

Here is a tiny spec-first brief:

```
Goal:
Add a bulk archive action to the admin ticket list.

Non-goals:
- no permanent delete flow
- no redesign of list filters

Current state:
- tickets are listed in `/admin/tickets`
- selection already exists in UI state
- backend has single-ticket archive endpoint only

Constraints:
- keep current permissions model
- action must be idempotent
- do not block the UI for the full batch duration

Deliverable:
- implementation plan
- backend/API changes
- frontend state changes
- test plan

Risks:
- partial failure in large batches
- permission mismatches
- stale UI counts after mutation

Test expectations:
- successful bulk archive
- partial failure handling
- unauthorized user rejection
```

Now the AI has a real frame.

It knows what the job is, where the boundaries are, and what failure looks like.

Why one page is enough

The goal is not documentation theater.
The goal is to remove the highest-cost ambiguity.

A one-page brief is enough to force good questions:

  • what exactly counts as success?
  • what must not change?
  • what are the dangerous edges?
  • what evidence would convince us the implementation is correct?

That is usually enough to make the next prompt far more reliable.

This also improves human thinking

One hidden benefit: spec-first prompting is not only for the model.
It improves your own reasoning too.

I often notice the real problem while writing the brief.

Maybe the scope is too big.
Maybe the interface is underspecified.
Maybe the hardest part is not implementation but rollout.

That is all useful before a single token is spent.

A good follow-up prompt

Once the brief exists, the actual prompt gets simpler.

```
Use the brief below to create a minimal implementation plan.

Requirements:
- respect all stated constraints
- call out assumptions explicitly
- include risks and test plan
- prefer the smallest change that satisfies the goal
```
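
If the brief already exists as text, assembling the final prompt is just concatenation. A small sketch, assuming plain strings and no particular model API (the separator is my own choice):

```python
# Fixed instruction reused across tasks; the brief is the only part that varies.
FOLLOW_UP = """Use the brief below to create a minimal implementation plan.

Requirements:
- respect all stated constraints
- call out assumptions explicitly
- include risks and test plan
- prefer the smallest change that satisfies the goal
"""

def build_prompt(brief_markdown: str) -> str:
    """Combine the fixed instruction with the one-page brief."""
    return f"{FOLLOW_UP}\n---\n\n{brief_markdown}"
```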

You do not need a magical incantation after that.
You already did the important work.

Common mistakes

Mistake 1: skipping non-goals

This is how a modest feature request turns into a rewrite.

Mistake 2: vague constraints

“Keep it clean” is not a useful constraint.

“Do not add dependencies” is.

Mistake 3: forgetting tests

If you do not define test expectations early, the model often treats them as optional.

Mistake 4: stuffing implementation details into the goal

The goal should describe the outcome.
The implementation is what you want help exploring.

A tiny checklist before you prompt

Before I ask AI for code, I try to answer:

  • what is the goal?
  • what is out of scope?
  • what constraints matter?
  • what risks could make this unsafe?
  • what tests would prove the result?

If I cannot answer those, the prompt is premature.
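
That gate can even be enforced mechanically. A minimal sketch, assuming the brief is a plain dict keyed by section name (my own convention, not a standard format):

```python
# Sections that must be non-empty before the prompt is worth sending.
REQUIRED_SECTIONS = ["goal", "non_goals", "constraints", "risks", "test_expectations"]

def ready_to_prompt(brief: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing): the prompt is premature if any required section is empty."""
    missing = [s for s in REQUIRED_SECTIONS if not brief.get(s)]
    return (not missing, missing)
```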

The practical takeaway

If your AI coding sessions feel noisy, do not start by switching models or adding more prompt tricks.

Write a one-page brief first.

It is one of the simplest ways to get better plans, cleaner code, and fewer “that is not what I meant” loops.

Prompting works better when the thinking starts before the prompt.
