DEV Community

Patrick

The Exit Condition Pattern: How to Stop Your AI Agent at the Right Moment

Most AI agent problems are not about what the agent does. They are about when it stops.

I run five AI agents at Ask Patrick — an orchestrator, a growth agent, an ops agent, a support agent, and a research agent. After several hundred hours of runtime, the most common failure mode is: an agent that does not know it is done.

What Happens Without an Exit Condition

An agent without a clear exit condition will:

  • Keep refining work that was already good enough
  • Ask clarifying questions that delay completion
  • Generate unsolicited "bonus" output
  • Loop until hitting a token limit or context window

None of these look like failure. They look like diligence. But they cost money, waste time, and make the agent unpredictable.

The Pattern: Write Done Before You Write the Task

Before writing a task description for any agent, write this sentence first:

"This task is complete when ___."

Fill that blank with something observable and binary — something that does not require the agent to judge whether it is good enough.

Bad exit conditions (require judgment):

  • "When the email sounds professional"
  • "When the analysis is thorough"
  • "When the response is helpful"

Good exit conditions (observable):

  • "When a draft email is saved to /drafts/email-2026-03-08.md"
  • "When the analysis includes exactly three recommendations with supporting data"
  • "When the user has received a reply containing a resolution or escalation path"
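The difference between the two lists can be made concrete in code. Below is a minimal sketch, assuming the article's two example conditions; the function names, the `analysis` dict shape, and the `supporting_data` key are hypothetical stand-ins. The point is that each check is observable and binary, so no quality judgment is required.

```python
from pathlib import Path

# Hypothetical checks mirroring the article's examples. Each returns a
# plain True/False without asking the agent to judge its own output.

def draft_saved(path: str) -> bool:
    # "Complete when a draft email is saved to /drafts/email-2026-03-08.md"
    return Path(path).exists()

def has_three_recommendations(analysis: dict) -> bool:
    # "Complete when the analysis includes exactly three recommendations
    # with supporting data" -- a structural check, not a quality check.
    recs = analysis.get("recommendations", [])
    return len(recs) == 3 and all(r.get("supporting_data") for r in recs)
```

Note that `draft_saved` is a filesystem check and `has_three_recommendations` is a structure check; neither ever asks "is this good?"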

Why This Works

Agents do not get tired. They do not have an intuition that says "this feels done." They have a context window that fills up and a pattern-matching system trying to satisfy the prompt.

If your prompt is open-ended, the agent will keep satisfying it — indefinitely, or until something external stops it.

An explicit exit condition gives the agent a terminal state. Instead of asking "is this good enough?", it asks "have I reached the exit condition?" That is a question it can actually answer.
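As a sketch of what a terminal state looks like in an agent loop (the `run_step` and `exit_condition` callables here are hypothetical placeholders, not any specific framework's API):

```python
from typing import Callable

# Minimal agent loop sketch. The loop terminates on an observable
# exit condition, with a hard step cap as an external backstop so an
# open-ended prompt can never run indefinitely.

def run_agent(run_step: Callable[[], None],
              exit_condition: Callable[[], bool],
              max_steps: int = 25) -> bool:
    for _ in range(max_steps):
        if exit_condition():   # "have I reached the exit condition?"
            return True        # terminal state reached: stop cleanly
        run_step()             # otherwise, do one more unit of work
    return False               # backstop hit: stopped without finishing
```

The return value also gives the orchestrator something observable: `True` means the task ended at its exit condition, `False` means something external stopped it.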

The 30-Second Audit

Look at the last three tasks you gave an agent. For each one, ask: "Would the agent have known, on its own, when to stop?"

If the answer is no, you do not merely have an exit condition problem. You have the beginning of a reliability problem.

Fix it before you scale.


Ask Patrick publishes daily patterns for AI agent operators. The Library has 27 battle-tested configs, updated nightly. askpatrick.co

Top comments (1)

Hamza KONTE

"Write Done before you write the task" — this is the right order of operations and almost nobody does it.

The examples make the distinction crisp: judgment-based exits ("sounds professional") vs. observable exits ("saved to /drafts/email-2026-03-08.md"). The first requires the agent to evaluate its own output quality. The second is a filesystem check. These are not the same cognitive operation.

What makes this stick in practice: baking the exit condition into the output format block of the prompt, not the objective block. If you write "produce an analysis with exactly three recommendations" in the Output Format section, you've made it structurally impossible for the agent to satisfy the prompt without also satisfying the exit condition. The two become the same check.

When exit conditions are in the objective ("write a thorough analysis"), the model can satisfy the objective while remaining open-ended about completion. When they're in output format, completion is self-evident from the structure of the response.
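A rough sketch of that separation (the prompt text and section names here are hypothetical, illustrating the comment's point rather than any particular tool's format):

```python
# Hypothetical prompt layout: the exit condition lives in the
# Output Format block, so satisfying the format and being "done"
# are the same check.

PROMPT = """\
## Objective
Analyze the support ticket backlog and recommend next steps.

## Output Format
Return exactly three recommendations. Each must include:
- a one-line recommendation
- supporting data (metric name and value)
The task is complete when this structure is fully populated.
"""
```

With this layout, a response that matches the format is, by construction, a response that has reached the exit condition.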

This is one of the reasons I built flompt (flompt.dev) — separating output_format into its own typed block forces you to specify what "done" looks like structurally, before you write anything else.

github.com/Nyrok/flompt