DEV Community

Sahil Singh

Originally published at glue.tools

Spec Drift Detection: Stop Building Features Nobody Asked For

You finished the sprint. You shipped the feature. Product looks at it and says: "That's not what we asked for."

You pull up the ticket. You read the spec. You're right — technically. But somewhere between the spec and the code, the feature mutated. A field got renamed. An edge case got "simplified." A business rule got reinterpreted because the original wording was ambiguous.

This is spec drift. And it's eating your team alive.

What Spec Drift Actually Is

Spec drift is the gradual divergence between what was specified and what gets built. It's not a bug — bugs are violations of intent. Spec drift is a quiet mutation of intent itself.

It happens because:

  1. Specs are written in natural language. "The user should be able to filter results" means different things to the PM, the designer, and the engineer.
  2. Implementation reveals complexity. The spec says "add a date filter." The code reveals that dates are stored in three different formats across two services. The engineer makes a judgment call. That call drifts from the spec.
  3. Context is lost in handoff. The PM who wrote the spec had a mental model of how it should work. That model lives in their head, not in Jira.

Why It Matters More Than You Think

Spec drift compounds. One drifted feature is a conversation. Ten drifted features is a product that doesn't match anyone's mental model.

The symptoms:

  • QA finds "bugs" that are actually spec disagreements; by some counts, 30% of bug reports trace back to spec drift
  • Sprint reviews become debates instead of demos
  • Technical debt accumulates as engineers build workarounds for features that almost-but-not-quite match requirements
  • Users get confused when the product behaves inconsistently

How to Detect It

1. Pre-Code Alignment

The cheapest place to catch spec drift is before any code is written:

  • Break specs into testable assertions. Not "the user can filter results" but "given a list of 50 items, when the user selects date range Jan 1-Jan 31, then only items within that range are shown."
  • Map specs to code. Before coding, identify which files, functions, and data models are affected. If the engineer's file list doesn't match the PM's expectations, drift has already started.
  • Identify ambiguity explicitly. Every spec has ambiguous parts. The team that lists them catches drift early.
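A testable assertion can be made literal, as executable code, before the real implementation exists. Here is a minimal sketch of the date-filter example; the `filter_by_date` helper and the item shape are hypothetical, not from any particular codebase:

```python
from datetime import date

def filter_by_date(items, start, end):
    """Keep only items whose 'created' date falls within [start, end]."""
    return [i for i in items if start <= i["created"] <= end]

# Spec assertion: given a list of items, when the user selects
# Jan 1 to Jan 31, then only items within that range are shown.
items = [
    {"id": 1, "created": date(2024, 1, 15)},
    {"id": 2, "created": date(2024, 2, 3)},
    {"id": 3, "created": date(2024, 1, 31)},
]
result = filter_by_date(items, date(2024, 1, 1), date(2024, 1, 31))
assert [i["id"] for i in result] == [1, 3]
```

The point is not the helper itself but the assertion: it pins down what "filter results" means before anyone writes production code.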

2. Implementation Checkpoints

Don't wait until the PR to check alignment:

  • After data model changes: Does the schema match the spec's data requirements?
  • After API design: Do the endpoints and payloads match the spec's interaction model?
  • After UI scaffolding: Does the component structure match the spec's user flow?
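The data-model checkpoint can be as simple as a field-level diff between what the spec requires and what the schema actually contains. A sketch, with illustrative field names (the rename from `status` to `state` is exactly the kind of quiet mutation that hides in review):

```python
SPEC_FIELDS = {"id", "created_at", "status", "owner_id"}   # from the ticket
SCHEMA_FIELDS = {"id", "created_at", "state", "owner_id"}  # from the actual model

missing = SPEC_FIELDS - SCHEMA_FIELDS   # spec'd but never built
extra = SCHEMA_FIELDS - SPEC_FIELDS     # built but never spec'd (renames hide here)

if missing or extra:
    print(f"Drift: missing={sorted(missing)} extra={sorted(extra)}")
```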

3. Automated Spec Tracing

The ideal workflow:

  1. Ticket comes in with requirements
  2. Tool maps requirements to affected code (files, functions, dependencies)
  3. As the engineer implements, the tool tracks divergence
  4. Difference between "expected changes" and "actual changes" flags potential drift
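Steps 2 through 4 reduce to a set comparison between the planned change surface and the actual one. A minimal sketch, assuming you can list planned files from the ticket and actual files from the diff (the file names are invented):

```python
def detect_drift(planned: set, actual: set) -> dict:
    """Compare the files a build plan expected to change with the
    files the implementation actually touched."""
    return {
        "unplanned": sorted(actual - planned),  # touched but not in the plan
        "untouched": sorted(planned - actual),  # planned but never changed
    }

planned = {"api/filters.py", "models/item.py", "tests/test_filters.py"}
actual = {"api/filters.py", "models/item.py", "utils/dates.py"}

report = detect_drift(planned, actual)
# "unplanned" surfaces judgment calls made mid-implementation;
# "untouched" surfaces planned work (like tests) that never happened.
```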

This is what Glue's build plan does — it creates a traced map between ticket requirements and codebase reality. When the implementation diverges from the plan, you know immediately.

The Tribal Knowledge Problem

The deepest source of spec drift is tribal knowledge. The spec says "update the auth flow." But the auth flow has three undocumented edge cases that the engineer who built it 18 months ago knew about — and who left the company 6 months ago.

The new engineer implements correctly against the spec, and breaks the edge cases. This isn't spec drift in the traditional sense — it's context loss masquerading as implementation error.

The fix: tools that surface who last changed this code, what past regressions happened, and what implicit dependencies the spec doesn't mention.
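Even without a dedicated tool, part of that context can be recovered from version history. A sketch that parses the output of `git log --format='%an|%as|%s' -- <file>` to surface who last touched a file and why; the sample log text is fabricated for illustration:

```python
def recent_authors(log_text: str, limit: int = 3):
    """Parse 'author|date|subject' lines (newest first, as git log
    emits them) and return the most recent entries."""
    entries = []
    for line in log_text.strip().splitlines():
        author, when, subject = line.split("|", 2)
        entries.append({"author": author, "date": when, "subject": subject})
    return entries[:limit]

sample = """\
Priya|2024-03-02|Fix timezone edge case in auth refresh
Marco|2023-11-18|Handle expired-but-cached sessions
Marco|2023-09-01|Rework auth flow for SSO"""

for e in recent_authors(sample):
    print(f"{e['date']}  {e['author']}: {e['subject']}")
```

Three commit subjects like these tell a new engineer more about the auth flow's edge cases than the ticket ever will.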

A Practical Framework

For every ticket, before coding:

  1. List the assertions — what should be true when this is done?
  2. Map the blast radius — what files and features are affected?
  3. Surface the unknowns — what does the spec assume but not say?
  4. Set checkpoints — when will you verify alignment?
  5. Check the history — what past changes in this area caused problems?
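The five steps can live as a lightweight pre-code template attached to each ticket. A sketch of one possible shape; the field names follow the steps above and are not from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class PreCodeCheck:
    assertions: list = field(default_factory=list)     # 1. what should be true
    blast_radius: list = field(default_factory=list)   # 2. files/features affected
    unknowns: list = field(default_factory=list)       # 3. assumed but unsaid
    checkpoints: list = field(default_factory=list)    # 4. alignment points
    history_notes: list = field(default_factory=list)  # 5. past problems here

    def ready_to_code(self) -> bool:
        """Hold off on implementation until every step has at least one entry."""
        return all([self.assertions, self.blast_radius, self.unknowns,
                    self.checkpoints, self.history_notes])
```

An empty template that refuses to say "ready" is a cheap forcing function: it makes the team write down the unknowns instead of discovering them at the sprint review.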

Teams that do this catch spec drift in hours instead of sprints.


Originally published on glue.tools. Glue is the pre-code intelligence platform — paste a ticket, get a battle plan.
