DEV Community

Patrick

Why AI Agents Drift Off-Task (And the 3-File Fix)

The Problem

You set up your AI agent perfectly. A week later, it's ignoring rules you clearly stated. You haven't changed anything. What happened?

This is context drift — one of the most common failure modes in production AI agent setups.

Why It Happens

Every agent runs inside a context window. The further you get from your original instructions, the more diluted they become.

Three triggers:

  1. Long task chains — after 8 tool calls, your system prompt is 6,000 tokens back
  2. Sub-agent hand-offs — you pass the task but not the behavioral constraints
  3. Session restarts — a cron job reloads the agent with outdated instructions
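The first trigger is easy to see with a rough token count. A minimal sketch of how fast tool-call round trips push the system prompt back (the message contents are made up, and the token estimate is a crude chars/4 heuristic, not a real tokenizer):

```python
# Rough illustration of instruction dilution in a growing context window.
# Token counts use a crude chars/4 heuristic, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

messages = [{"role": "system", "content": "You are a careful agent. " * 40}]

# Simulate 8 tool-call round trips, each adding a call and a result.
for i in range(8):
    messages.append({"role": "assistant", "content": f"calling tool_{i} " * 30})
    messages.append({"role": "tool", "content": f"result of tool_{i}: " * 120})

# How far back is the system prompt now?
tokens_after_system = sum(estimate_tokens(m["content"]) for m in messages[1:])
print(f"System prompt is ~{tokens_after_system} tokens back")
```

With these (invented) payload sizes, the system prompt ends up several thousand tokens behind the newest message after just eight calls.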

The 3-File Fix

1. SOUL.md — Reload It Every Task

Put your behavioral rules in a file. Not just a system prompt — a file that gets explicitly re-read.

```
Before doing anything else:
1. Read SOUL.md
2. Read USER.md
3. Then proceed
```

This makes identity reloading an observable step, not an invisible assumption.
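One way to make that step observable in code, sketched in Python. The file names come from the post; the logging setup and the task-runner shape are my assumptions:

```python
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

# File names from the post; the runner structure is a hypothetical sketch.
IDENTITY_FILES = ["SOUL.md", "USER.md"]

def load_identity(base: Path = Path(".")) -> str:
    """Re-read behavioral rules before every task, and log the step."""
    parts = []
    for name in IDENTITY_FILES:
        text = (base / name).read_text(encoding="utf-8")
        log.info("reloaded %s (%d chars)", name, len(text))
        parts.append(f"## {name}\n{text}")
    return "\n\n".join(parts)

def run_task(task_prompt: str, base: Path = Path(".")) -> str:
    identity = load_identity(base)        # observable, logged step
    return identity + "\n\n" + task_prompt  # rules prepended to every task
```

Because the reload is a logged file read, "did the agent see its rules?" becomes something you can answer from the logs instead of guessing.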

2. MEMORY.md — Curated Long-Term Memory

Daily log files capture everything. MEMORY.md is the distilled version — lessons worth keeping across sessions.

Agents with curated memory get sharper over time. Agents that only have daily logs fill context fast.
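The distillation step can be as small as one helper that appends a lesson and skips duplicates, so MEMORY.md stays short while daily logs stay verbose. A sketch under those assumptions (the helper name and file layout are hypothetical, not from the post):

```python
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")

def remember(lesson: str, memory_file: Path = MEMORY_FILE) -> None:
    """Append one distilled lesson to long-term memory; skip duplicates."""
    lesson = lesson.strip()
    existing = memory_file.read_text() if memory_file.exists() else "# MEMORY\n\n"
    if lesson in existing:
        return  # already learned; keeps the file curated, not append-only
    entry = f"- {date.today().isoformat()}: {lesson}\n"
    memory_file.write_text(existing + entry)
```

The duplicate check is the point: curation means the file grows only when there is something genuinely new to keep.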

3. current-task.json — Explicit State

If your agent needs to know what it's working on, write it to a file. Mental notes don't survive restarts.

```json
{
  "task": "write weekly newsletter",
  "status": "in_progress",
  "started": "2026-03-08T09:00:00"
}
```
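Reading and writing that state is a few lines. A minimal sketch, assuming the file name from the post and hypothetical helper names:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

TASK_FILE = Path("current-task.json")

def start_task(task: str, task_file: Path = TASK_FILE) -> dict:
    """Write task state to disk the moment work begins."""
    state = {
        "task": task,
        "status": "in_progress",
        "started": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }
    task_file.write_text(json.dumps(state, indent=2))
    return state

def resume_task(task_file: Path = TASK_FILE):
    """After a restart, read state from disk instead of trusting memory."""
    if not task_file.exists():
        return None  # no task in flight
    return json.loads(task_file.read_text())
```

After a crash or a cron-triggered restart, `resume_task()` tells the agent exactly where it was, because the state was never only in its head.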

The Deeper Principle

AI agents are stateless functions that read their state from files. Once you internalize this, drift stops being mysterious.

You build agents that reload identity explicitly, write state persistently, and treat every session as a fresh start that knows exactly who it is.
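Put together, session startup becomes one explicit bootstrap that reads all three files. A sketch tying the pieces above together (the function name and return shape are my assumptions):

```python
import json
from pathlib import Path

def bootstrap_session(base: Path = Path(".")) -> dict:
    """A fresh session reads who it is and what it's doing from files."""
    session = {
        "identity": (base / "SOUL.md").read_text(),   # behavioral rules
        "memory": (base / "MEMORY.md").read_text(),   # distilled lessons
        "task": None,                                  # explicit state, if any
    }
    task_file = base / "current-task.json"
    if task_file.exists():
        session["task"] = json.loads(task_file.read_text())
    return session
```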

That's what the Ask Patrick Library documents — 76 battle-tested patterns for keeping agents on-task across sessions, hand-offs, and production loops.

Browse the Library at askpatrick.co

Top comments (1)

Hamza KONTE

This framing is really sharp. The SOUL.md insight especially — most people treat agent identity as a one-time setup, then wonder why the agent drifts.

What strikes me: a SOUL.md file is essentially a structured prompt spec. The problem with prose-written soul files is the same problem with wall-of-text prompts — interpretation drifts across sessions because natural language is ambiguous under compression.

I've been building flompt (flompt.dev) around this idea. Instead of writing agent identity as flowing paragraphs, you decompose it into typed semantic blocks: Role, Constraints, Context, Chain of Thought rules, Response Style, etc. Each block has a specific semantic contract. When you reload those blocks at the start of each session, there's less interpretive drift because "Constraints" always means constraints — not a mix of background context and behavioral rules mushed together.

Your 3-file fix + structured block format = agent that reloads a consistent identity every task, not a rough approximation of one.

flompt.dev / github.com/Nyrok/flompt