
Patrick

Agent Drift Is a Config Problem, Not a Model Problem

Most teams blame model randomness when their AI agent behaves inconsistently. Wrong diagnosis.

Agent drift happens when the SOUL.md (or equivalent config) is too vague to constrain behavior across edge cases. The model fills the gaps with reasonable-sounding behavior that's wrong for your context.

The Test

Read your agent config cold. Can a stranger predict exactly what the agent will never do?

If not, you have a scope problem — not a model problem.

What Vague Looks Like

```yaml
role: "You are a helpful assistant. Help users with their tasks."
```

What edge cases does this cover? None. The agent will invent behavior for every ambiguous situation it encounters.

What Tight Looks Like

```yaml
role: "You are a data validation agent for financial records."
constraints:
  - Never modify source files directly
  - Write validated output to /output/validated/ only
  - If record fails validation, append to failed-records.jsonl with reason
  - Stop and escalate if failure rate exceeds 5% of batch
  - Never infer missing fields — flag them as MISSING
```

Now a stranger can predict exactly what this agent does and doesn't do. That's the goal.
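Constraints this tight translate almost directly into code. A minimal sketch, assuming a hypothetical `REQUIRED_FIELDS` schema (the 5% threshold and the MISSING flag come straight from the config above):

```python
ESCALATION_THRESHOLD = 0.05  # stop and escalate above 5% of the batch (from the config)
REQUIRED_FIELDS = ["account_id", "amount", "date"]  # hypothetical schema, for illustration

def validate_record(record):
    """Flag missing fields as MISSING instead of inferring them."""
    missing = [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
    if missing:
        return False, f"MISSING: {', '.join(missing)}"
    return True, ""

def run_batch(records):
    validated, failed = [], []
    for record in records:
        ok, reason = validate_record(record)
        if ok:
            validated.append(record)  # destined for /output/validated/, never the source
        else:
            failed.append({"record": record, "reason": reason})  # -> failed-records.jsonl
        # Stop and escalate as soon as the failure rate exceeds the threshold
        if len(failed) / len(records) > ESCALATION_THRESHOLD:
            raise RuntimeError(
                f"Escalating: {len(failed)}/{len(records)} failures exceeds 5% threshold"
            )
    return validated, failed
```

Every branch here maps to one line of the config. That's what "a stranger can predict it" means in practice.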

The Drift Pattern

  1. Agent works fine on common cases
  2. Agent hits an edge case not covered by config
  3. Agent makes its best guess (plausible, wrong)
  4. No one notices for days or weeks
  5. Damage is discovered downstream

The model didn't drift. The config failed to cover the edge case.

The Fix

Tighter scope = more consistent output. Three things to constrain in every config:

1. Output destinations — where does the agent write? What files? Which dirs? Explicit, not implicit.

2. Failure behavior — what does the agent do when it can't complete a task? Escalate, stop, log — but never silently proceed.

3. Off-limits actions — what does the agent never do? No exceptions, no interpretation. Hard rules.
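One way to picture all three as a guard layer around the agent. The allow-list, forbidden-action set, and helper names below are illustrative assumptions, not any framework's API:

```python
from pathlib import Path

# 1. Output destinations: explicit allow-list, not implicit
ALLOWED_OUTPUT_DIRS = [Path("/output/validated")]

# 3. Off-limits actions: hard rules, no interpretation
FORBIDDEN_ACTIONS = {"modify_source", "delete_source"}

def check_write(path):
    """Reject any write outside the allow-listed directories."""
    target = Path(path).resolve()
    if target.parent not in ALLOWED_OUTPUT_DIRS:
        raise PermissionError(f"Write outside allowed dirs: {target}")

def check_action(action):
    if action in FORBIDDEN_ACTIONS:
        raise PermissionError(f"Forbidden action: {action}")

def run_task(task, execute):
    """2. Failure behavior: escalate on error, never silently proceed."""
    try:
        return execute(task)
    except Exception as exc:
        raise RuntimeError(f"Task {task!r} failed, escalating: {exc}") from exc
```

The point isn't this particular code; it's that each of the three constraints is checkable, so violations surface immediately instead of weeks later.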

The Constraint Hierarchy

Apply rules in this order:

  1. Absolute constraints (never violate)
  2. Strong defaults (follow unless explicitly overridden)
  3. Preferences (guidelines, not rules)

When a situation hits a constraint, the highest-tier rule wins. No ambiguity.
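The hierarchy is easy to sketch: collect every rule that matches the situation, then let the highest tier win. The rule representation below is an assumption for illustration, not a standard:

```python
# Tiers ordered from highest to lowest priority
ABSOLUTE, DEFAULT, PREFERENCE = 0, 1, 2

def resolve(rules, situation):
    """Return the verdict of the highest-tier rule that matches the situation."""
    matching = [r for r in rules if r["applies"](situation)]
    if not matching:
        return None  # config gap: nothing covers this situation
    return min(matching, key=lambda r: r["tier"])["verdict"]

rules = [
    {"tier": ABSOLUTE,   "applies": lambda s: s.get("modifies_source"), "verdict": "block"},
    {"tier": DEFAULT,    "applies": lambda s: s.get("new_output"),      "verdict": "write_to_validated"},
    {"tier": PREFERENCE, "applies": lambda s: True,                     "verdict": "proceed"},
]
```

Note the `None` branch: if no rule matches, that's a config gap you want surfaced, not a guess.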

One More Test

Run the agent on a deliberately ambiguous input — something that falls between the cracks of your config. Does it:

  • Ask for clarification? (Good)
  • Escalate with context? (Good)
  • Make a guess and keep going? (Config problem)
  • Silently fail? (Config problem)

If it guesses or silently fails, the config has a gap. Fill the gap, not the model's temperature.
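That checklist can be turned into a tiny audit harness. The outcome labels and the `agent` callable here are stand-ins for whatever interface your agent actually exposes:

```python
GOOD_OUTCOMES = {"ask_clarification", "escalate"}
BAD_OUTCOMES = {"guess_and_continue", "silent_failure"}

def audit_ambiguous_input(agent, ambiguous_input):
    """Classify how the agent handles input the config doesn't cover."""
    outcome = agent(ambiguous_input)
    if outcome in GOOD_OUTCOMES:
        return "config holds"
    if outcome in BAD_OUTCOMES:
        return "config gap"
    return "unknown outcome"
```

Run it against a handful of deliberately ambiguous inputs before shipping, the same way you'd run unit tests before merging.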


The Ask Patrick Library has 30+ agent configs built around explicit constraint hierarchies. If you're running agents in production and finding inconsistent behavior, start with the config — not the model settings.

Browse the Library at askpatrick.co
