If your AI agent started as a "research assistant" and now it's also scheduling meetings, sending emails, updating spreadsheets, and occasionally making purchase decisions — you have a scope creep problem.
And scope creep is one of the most reliable ways to make agents fail unpredictably.
## Why Scope Creep Happens
Agents are easy to extend. You add one tool. Then another. A new SOUL.md rule. A broader mission statement. Each addition feels small. The cumulative effect is an agent with conflicting priorities, context pollution, and no clear definition of done.
The agent that does everything well is usually the agent that does nothing reliably.
## What Goes Wrong
**Conflicting constraints.** An agent tasked with "be helpful" and "be conservative with budget" will choose one over the other situationally, and you won't know which until something breaks.

**Context pollution.** A broad mandate loads more irrelevant context, which dilutes the signal the agent needs to make good decisions.

**Unclear ownership.** When the agent's scope overlaps with another system's job, you get race conditions, duplicate actions, and silent overrides.

**No valid "done" state.** If the agent's job is nebulous, it can never verify completion. It keeps going, or stops arbitrarily.
## The One-Job Rule
Every agent in a reliable system does one job and hands off everything else.
A research agent's SOUL.md might scope the job like this:

```markdown
## My Job

I am the research agent. I find, summarize, and validate information.
I do NOT send emails, update files outside /research/, or make purchases.
When I finish a task, I write output to outbox.json and stop.
```
That last line matters. An explicit stop condition turns an open-ended agent into a predictable system component.
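Here is a minimal sketch of what that stop condition looks like in an agent loop. Everything here is illustrative: `research_step` stands in for the real model call, and `outbox.json` follows the SOUL.md example above.

```python
import json
from pathlib import Path

OUTBOX = Path("outbox.json")  # output location from the SOUL.md example


def research_step(task: str, step: int) -> dict:
    # Stand-in for a real model/tool call; reports completion immediately
    # so the example terminates. A real step would do actual research.
    return {"summary": f"step {step} notes on {task}", "task_complete": True}


def run_research_task(task: str, max_steps: int = 10) -> None:
    """One job: research. Writes output and stops -- no emails, no purchases."""
    findings = []
    for step in range(max_steps):
        result = research_step(task, step)
        findings.append(result)
        if result.get("task_complete"):  # the explicit stop condition
            break
    OUTBOX.write_text(json.dumps({"task": task, "findings": findings}))
```

The `max_steps` cap is a second, independent stop condition: even if the agent never reports `task_complete`, the loop cannot run forever.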
## How to Audit for Scope Creep
Ask yourself:
- Can I describe what this agent does in one sentence?
- Can I name three things it explicitly does NOT do?
- Does it have a clear output format and a stop condition?
If you can't answer all three, the scope is too broad.
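The three questions can even be turned into an automated check over an agent's config. This is a sketch under assumptions: the field names (`job`, `not_allowed`, `output_format`, `stop_condition`) are hypothetical, not a standard schema.

```python
def audit_scope(config: dict) -> list[str]:
    """Return scope-creep warnings for an agent config.

    Field names are illustrative, not a standard schema.
    """
    warnings = []
    job = config.get("job", "")
    if not job or len(job.split()) > 25:
        # rough proxy for "describable in one sentence"
        warnings.append("job is missing or too long for one sentence")
    if len(config.get("not_allowed", [])) < 3:
        warnings.append("fewer than three explicit non-goals")
    if not config.get("output_format"):
        warnings.append("no defined output format")
    if not config.get("stop_condition"):
        warnings.append("no stop condition")
    return warnings
```

A tightly scoped config passes clean; an empty one fails all four checks:

```python
audit_scope({
    "job": "Find, summarize, and validate information.",
    "not_allowed": ["send emails", "write outside /research/", "make purchases"],
    "output_format": "outbox.json",
    "stop_condition": "write output, then stop",
})  # returns []
```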
## The Handoff Pattern as a Fix
Instead of one agent doing more, use the handoff pattern:
```json
{
  "task_complete": true,
  "output_location": "/research/summary-2026-03-09.json",
  "next_agent": "kai",
  "instruction": "Review research summary and update project tracker"
}
```
Each agent does its one job and passes the baton. The system stays modular, auditable, and fixable.
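A dispatcher for that payload can be very small. This is a sketch, assuming an in-process registry of agent callables; `register`, `handoff`, and the `kai` stub are all hypothetical names for illustration.

```python
AGENTS = {}  # registry mapping agent name -> callable


def register(name):
    """Decorator that adds an agent callable to the registry."""
    def wrap(fn):
        AGENTS[name] = fn
        return fn
    return wrap


def handoff(payload: dict):
    """Route a completed task to the agent named in the payload."""
    if not payload.get("task_complete"):
        return None  # nothing to hand off yet
    try:
        next_agent = AGENTS[payload["next_agent"]]
    except KeyError:
        raise ValueError(f"unknown agent: {payload['next_agent']}")
    return next_agent(payload["instruction"], payload["output_location"])


@register("kai")
def kai(instruction: str, output_location: str) -> str:
    # Stand-in for a real agent; just acknowledges receiving the baton.
    return f"{instruction} (reading {output_location})"
```

Because every handoff goes through one function with one payload shape, each baton pass is a single auditable event rather than an implicit side effect.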
## The Compound Benefit
When agents have tight scope:
- Failures are isolated (one agent breaks, not the whole system)
- Debugging is faster (you know exactly where to look)
- Improvements compound (tighter scope = more consistent training signal for refinement)
The teams running reliable multi-agent systems aren't using one powerful agent. They're using many narrow ones.
The full pattern library for single-responsibility agent design — including SOUL.md templates and handoff schemas — is at askpatrick.co.
## Top comments (2)
> The one-job rule plus explicit stop condition is the key insight here. Worth adding: agents also need an explicit 'not my job' list in their prompt — without it, scope creep sneaks in through edge cases the original design didn't anticipate.