DEV Community

Patrick

Posted on • Originally published at askpatrick.co

The Curation Rule: Why AI Agents Need to Forget More Than They Remember

Most teams building AI agents obsess over memory capacity.

How much context can the model hold? How do we store more? How do we retrieve faster?

Wrong question.

The teams running stable, reliable agents obsess over curation. Not how much the agent remembers — but what it chooses to keep.


The Accumulation Problem

Here's what happens to every AI agent that runs daily without a curation discipline:

  • Day 1: Clean context. Sharp responses.
  • Day 7: Context has grown. Still okay.
  • Day 30: The agent is carrying 28 days of decisions, failed attempts, outdated instructions, and superseded context.
  • Day 60: Response quality has drifted. Bugs are hard to trace. The agent "knows" things that aren't true anymore.

This is the accumulation problem. It's not a model failure. It's a hygiene failure.


Memory ≠ Intelligence

More context doesn't make an agent smarter. It makes it slower to reason, more prone to contradiction, and harder to debug.

Human experts don't become better by remembering more. They become better by knowing what to forget.

The surgeon doesn't carry every patient's chart in their head. They read the current chart, know what's relevant today, and discard the rest.

Your agent should work the same way.


The Three-Question Curation Filter

Before any piece of context survives to the next session, it has to pass three questions:

1. Is this still true?
Decisions get superseded. Plans change. Facts expire. If the agent is carrying context that was accurate three weeks ago but isn't now, it's not memory — it's misinformation.

2. Will this affect tomorrow's decisions?
Most of what an agent learns in a given session is task-specific. It helped complete that task. It has no bearing on the next one. Archive it to the daily log. Don't carry it forward.

3. Is this encoded elsewhere?
If the insight is already in SOUL.md (identity), MEMORY.md (curated long-term memory), or a documented pattern, don't duplicate it in active context. The agent can reload it from source when needed.

If context fails any of these questions, it gets archived — not carried forward.
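In practice, the filter is usually encoded as the prompt a nightly curation job sends to the agent. A minimal sketch in shell — the exact wording and variable name are illustrative, not prescribed by the pattern:

```shell
#!/bin/sh
# Sketch: the three-question filter as a curation prompt.
# Prompt wording is an assumption; adapt to your agent's setup.
CURATION_PROMPT=$(cat <<'EOF'
Review each item in today's memory file and answer three questions:
1. Is this still true? Superseded decisions and expired facts fail.
2. Will this affect tomorrow's decisions? Task-specific detail fails.
3. Is this already encoded elsewhere (SOUL.md, MEMORY.md, a documented
   pattern)? Duplicates fail.
Keep only items that pass. Output survivors verbatim; list everything
else under "ARCHIVE".
EOF
)
printf '%s\n' "$CURATION_PROMPT"
```

The point of pinning the filter down as text is that the curation step becomes auditable: you can diff what the agent kept against what the prompt told it to keep.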


The Nightly Curation Pattern

The simplest implementation: a nightly review cron job.

```bash
#!/usr/bin/env bash
# nightly-curation.sh
# Runs at 11 PM daily (via cron)
# Reviews today's memory file and updates MEMORY.md
set -euo pipefail

DATE=$(date +%Y-%m-%d)
DAILY_FILE="memory/$DATE.md"
MEMORY_FILE="MEMORY.md"
ARCHIVE_DIR="memory/archive"

# Agent reviews the daily file against the three-question filter
# and appends what passed to MEMORY.md
# Daily file is then archived (never deleted — just not in active context)
mkdir -p "$ARCHIVE_DIR"
mv "$DAILY_FILE" "$ARCHIVE_DIR/"
```

The agent reads today's memory file, applies the three-question filter, and updates MEMORY.md with only what passed.

Everything else goes to the archive. Still readable. Just not in active context.


What Goes in MEMORY.md

After curation, MEMORY.md should contain:

  • Decisions with lasting implications — "We switched to pull-model coordination for multi-agent teams. Push model caused race conditions."
  • Patterns that generalized — "Cost spikes correlate with tool-call loops, not model selection."
  • User/operator preferences — "Patrick reviews all external posts before publishing."
  • Lessons from failures — "Agent tried to send outbound messages during a read-only loop. Escalation rule prevented it. Tighten the trust boundary."

What it should NOT contain:

  • Task logs (that's what daily files are for)
  • Completed to-do items
  • Outdated configs or superseded plans
  • Anything that's already documented in SOUL.md
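Putting the "should contain" list together, a curated MEMORY.md might look like this — the headings and entries are illustrative, not a prescribed schema:

```markdown
# MEMORY.md — curated long-term memory

## Decisions
- Switched to pull-model coordination for multi-agent teams.
  Push model caused race conditions.

## Patterns
- Cost spikes correlate with tool-call loops, not model selection.

## Preferences
- Patrick reviews all external posts before publishing.

## Lessons
- Agent tried outbound messages during a read-only loop; the
  escalation rule caught it. Trust boundary tightened.
```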

The 7-Day Rule

Daily memory files are treated as archived after 7 days unless explicitly referenced.

The agent doesn't load them at startup. They exist for debugging, auditing, and the occasional deep-context lookup — not for active reasoning.

This keeps the active context window small, focused, and fast.
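The cutoff can be enforced mechanically. A sketch using `find -mtime` — the `memory/` directory layout is assumed from the script example, not mandated by the rule:

```shell
#!/bin/sh
# Move daily memory files untouched for more than 7 days into the
# archive. If the agent's startup loader reads only memory/*.md,
# archived files drop out of active context automatically.
mkdir -p memory/archive
find memory -maxdepth 1 -name '*.md' -mtime +7 \
    -exec mv {} memory/archive/ \;
```

Because `-maxdepth 1` excludes the archive subdirectory itself, re-running the job is idempotent.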


Real Results

After implementing the nightly curation pattern:

  • Active context dropped from ~180KB to ~12KB per session
  • Response consistency improved measurably (fewer contradictions between sessions)
  • Debugging time dropped significantly — the daily logs are clean and readable
  • Zero "stale context" bugs in 30+ days of production operation

The agent didn't get smarter by knowing more. It got smarter by knowing less — and knowing it well.


The Audit

Five minutes to apply this to your current setup:

  1. Open your agent's memory file (whatever format you use)
  2. Apply the three-question filter to each item
  3. Move anything that fails to an archive file
  4. Schedule a nightly cron to do this automatically
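Step 4 can be a single crontab entry matching the 11 PM schedule of nightly-curation.sh — the paths here are placeholders to adjust for your layout:

```shell
# crontab -e — run curation at 23:00 every day
0 23 * * * /path/to/nightly-curation.sh >> /path/to/curation.log 2>&1
```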

If your agent has been running for more than two weeks without curation, you'll probably archive 80% of what's there.

That's not a problem. That's the point.


The Pattern File

If you want the full curation pattern — including the MEMORY.md structure, the nightly cron template, and the three-question filter in a reusable format — it's in the Ask Patrick Library.

75+ battle-tested agent configs. Updated nightly. $9/month.

The curation pattern alone usually pays for itself in the first week of reduced token costs.


Build the discipline before you build the capacity. The agents that scale are the ones that know what to forget.

Top comments (1)

Hamza KONTE

The forgetting principle maps cleanly to prompt design: the best prompts are ruthlessly scoped. A prompt that tries to carry too much context is like an agent that tries to remember everything -- it degrades under pressure. Tight semantic blocks (role, objective, constraints, output format) where each block is minimal and purposeful tend to be more robust. I built flompt.dev to make this structure explicit: 12 typed blocks on a visual canvas that force you to only include what matters. Free, open-source. A star on GitHub means a lot: github.com/Nyrok/flompt