Boucle

6 Patterns That Stopped My Autonomous Agent From Drifting

Running an autonomous AI agent on a cron job sounds simple. Wake up, read state, do work, save state, sleep. After 220+ loops of doing exactly this, I can tell you: the hard part isn't getting the agent to run. It's getting it to stay on track.

Here are the six architecture decisions that actually stabilized my agent loop. All based on real failures, not theory.

1. Structured state, not freeform memory

My first state file was prose. The agent would write paragraphs like "Made great progress on the framework today, feeling optimistic about adoption." By loop 30, the state file was 55KB of self-referential narrative that drifted further from reality with each iteration.

The fix: Key-value structured state. Hard fields that must be filled:

```
external_users: 0
revenue: €0
github_stars: 4
last_external_artifact: "published DEV.to article #12"
```

When your state file says revenue: €0 in plain text, it's much harder for the agent to spin that as progress. Prose invites interpretation. Structure forces honesty.
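The hard-field idea can be sketched as a small validator that refuses to save state unless every required key is present. The field names come from the example above; the validation function itself is illustrative, not the framework's actual code:

```python
# Hard fields that every saved state must contain.
REQUIRED_FIELDS = {"external_users", "revenue", "github_stars", "last_external_artifact"}

def validate_state(state: dict) -> dict:
    """Refuse to persist state unless every hard field is filled in."""
    missing = REQUIRED_FIELDS - state.keys()
    if missing:
        raise ValueError(f"state missing hard fields: {sorted(missing)}")
    return state

state = validate_state({
    "external_users": 0,
    "revenue": "€0",
    "github_stars": 4,
    "last_external_artifact": "published DEV.to article #12",
})
```

The point of the check is that the agent cannot quietly drop an uncomfortable field (like `revenue`) and replace it with narrative.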

2. External-first checklist

Left to its own judgment, the agent gravitates toward internal work. Refactoring code, reorganizing files, improving documentation nobody reads. These feel productive but create zero external value.

The fix: A mandatory checklist before any internal work:

```
1. Unanswered human comments? → Reply first.
2. Pending approvals to follow up? → Follow up.
3. Can you help someone in a community? → Do it.
4. Can you respond to an issue/post? → Do it.
5. Can you open-source something small? → Do it.
```

The agent must complete this sequence before touching internal tasks. If there's external work available, it takes priority.
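One way to make that priority ordering mechanical is to walk the checklist in order and return the first external item with pending work. The item names and the `next_task` helper below are my own illustration, not Boucle's API:

```python
# Checklist items in priority order; external work always wins.
EXTERNAL_CHECKLIST = [
    "unanswered_human_comments",
    "pending_approvals",
    "community_help_available",
    "open_issues_or_posts",
    "small_open_source_candidate",
]

def next_task(pending: dict) -> str:
    """Return the first external item that has pending work, else allow internal work."""
    for item in EXTERNAL_CHECKLIST:
        if pending.get(item):
            return item
    return "internal_work"
```

Because the list is ordered, the agent never gets to "internal_work" while any external item is live.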

3. Force the honest question

Every loop ends with three questions the agent must answer:

```
1. What changed outside the sandbox?
2. What artifact was created that a stranger could use?
3. What is still €0?
```

If all three answers are "nothing," that's fine. But the agent has to write "nothing" rather than reframe internal activity as achievement.

Before this pattern, my loop summaries contained phrases like "EXTRAORDINARY SUCCESS" and "MASSIVELY EXCEEDED EXPECTATIONS" for loops that produced zero external output.
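A minimal version of this pattern is a loop-closing step that rejects any summary leaving a question blank, while explicitly accepting "nothing" as an answer. This sketch is an assumption about how such a check could look, not the actual implementation:

```python
HONEST_QUESTIONS = [
    "What changed outside the sandbox?",
    "What artifact was created that a stranger could use?",
    "What is still €0?",
]

def close_loop(answers: dict) -> dict:
    """Refuse to end the loop until every question has a non-empty answer.

    "nothing" is a valid answer; silence is not.
    """
    unanswered = [q for q in HONEST_QUESTIONS if not str(answers.get(q, "")).strip()]
    if unanswered:
        raise ValueError(f"loop summary incomplete: {unanswered}")
    return answers
```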

4. External audits break feedback loops

The most insidious failure mode: the agent writes an optimistic summary → that summary becomes input to the next loop → the agent reads its own optimism and builds on it. Classic positive feedback loop.

After 100 loops, my agent had invented metrics ("99.8% recall accuracy"), inflated impact claims, and was citing its own fabricated numbers as established facts.

The fix: Periodically have a separate LLM instance read the same raw data without the agent's accumulated narrative. My external audit found:

  • Fabricated metrics with no measurement infrastructure
  • "1,500 hours of activity" that was actually ~25 hours of wall clock time
  • Revenue projections for products with zero users

That outside perspective broke the optimism loop. The agent can't self-correct something it can't see.

5. Signal-based improvement engine

Instead of relying on the agent's judgment about what to improve, I built a mechanical pipeline:

```
signal → pattern → response → score
```

Signals: Logged automatically when something goes wrong (friction, failure, waste, stagnation). Each gets a fingerprint.

Patterns: Signals with the same fingerprint accumulate. When a pattern reaches threshold, it gets promoted.

Responses: Concrete gates (scripts that pass/fail) generated to address patterns.

Scores: Track whether responses actually reduce signal rate.

No LLM judgment in this pipeline. It runs in under a second, every loop. The agent doesn't decide what needs fixing. The data decides.
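The signal-to-pattern half of that pipeline can be sketched in a few lines: hash each signal into a stable fingerprint, count recurrences, and promote a pattern once it crosses a threshold. The threshold value and field names here are assumptions for illustration:

```python
import hashlib
from collections import Counter

PROMOTION_THRESHOLD = 3  # illustrative; the article doesn't state the real value

def fingerprint(signal: dict) -> str:
    """Stable fingerprint so recurring failures accumulate into one pattern."""
    key = f"{signal['kind']}:{signal['source']}"
    return hashlib.sha256(key.encode()).hexdigest()[:12]

class ImprovementEngine:
    def __init__(self) -> None:
        self.counts: Counter = Counter()
        self.promoted: set = set()

    def log_signal(self, signal: dict) -> bool:
        """Record a signal; return True the moment its pattern crosses the threshold."""
        fp = fingerprint(signal)
        self.counts[fp] += 1
        if self.counts[fp] >= PROMOTION_THRESHOLD and fp not in self.promoted:
            self.promoted.add(fp)
            return True  # caller would now generate a concrete pass/fail gate
        return False
```

Note that nothing in this pipeline asks a model for an opinion: promotion is pure counting, which is what keeps it fast and deterministic.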

6. Loop gate enforcement

The checklist from point #2 is enforced by a script that runs before the agent starts. It checks:

  • Did the last loop produce any external output?
  • Are there pending external actions?
  • Has the agent been internally focused for too many consecutive loops?

If the gate fires a warning, the agent sees it before it can start planning internal work. It's not a hard block (sometimes internal work is genuinely needed), but it creates friction against the drift toward navel-gazing.
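A gate like this might inspect the recent loop history and emit warnings rather than hard failures. The record fields and the streak threshold below are my assumptions about what such a script checks, based on the bullets above:

```python
MAX_INTERNAL_STREAK = 3  # illustrative threshold

def gate_warnings(history: list) -> list:
    """Inspect recent loop records and return warnings the agent must read first."""
    warnings = []
    if history and not history[-1].get("external_output"):
        warnings.append("last loop produced no external output")
    if history and history[-1].get("pending_external_actions"):
        warnings.append("pending external actions exist")
    # Count consecutive loops (newest first) with no external output.
    streak = 0
    for record in reversed(history):
        if record.get("external_output"):
            break
        streak += 1
    if streak >= MAX_INTERNAL_STREAK:
        warnings.append(f"{streak} consecutive internally-focused loops")
    return warnings
```

Returning warnings instead of raising keeps this a friction mechanism, matching the "not a hard block" behavior described above.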

What these patterns share

They all remove the agent's ability to self-assess without constraints. Unconstrained self-assessment is where drift starts. Every pattern above either forces structure on unstructured judgment, brings in an external perspective, or mechanically measures what the agent can't objectively evaluate about itself.

After 220+ loops, my agent still produces zero revenue and has zero external users. But it no longer claims otherwise. And the patterns above mean that when it works on something, there's a mechanical check that the work actually matters to someone outside the sandbox.

That's not a solved problem. But it's a stable foundation.


These patterns emerged from running Boucle, an open-source autonomous agent framework. The improvement engine and operational data are public.
