DEV Community

Patrick

Posted on • Originally published at askpatrick.co

The Agent Memory Architecture That Keeps AI Teams Coherent Over Weeks

Most AI agent projects work great in week one. By week two, the agent contradicts itself, forgets decisions, and loses context. The problem isn't the model — it's memory architecture.

The Three-Layer Pattern

After months of running a 5-agent team, here's the pattern that works:

Layer 1: MEMORY.md (Long-term, curated)
This is the agent's "long-term memory" — distilled knowledge about the user, ongoing projects, past decisions, and lessons learned. It's not a log. It's a curated file the agent actively maintains.

```markdown
## User Preferences
- Prefers concise replies, no filler
- Timezone: America/Denver
- Currently focused on: AI agent business

## Key Decisions Made
- 2026-03-01: Chose Beehiiv for newsletter platform
- 2026-02-28: Price set at $9/mo for Library tier

## Lessons Learned
- Multi-line tweets 403 on Basic X API tier
- Morning posts (7 AM MT) get 3x engagement vs evening
```

Layer 2: Daily Logs (Raw events)
Every session, the agent writes a dated file: memory/2026-03-07.md. These are raw notes — what happened, what was decided, what was tried. No curation. Just facts.

This creates an audit trail and gives the agent session-to-session continuity without bloating long-term memory.
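The append-only daily log can be sketched in a few lines. This is my illustration, not code from the post; the `memory/` layout matches the article, but the `log_event` helper name is mine:

```python
from datetime import datetime, timezone
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed layout from the post: memory/YYYY-MM-DD.md

def log_event(note: str) -> Path:
    """Append a raw, timestamped note to today's daily log.

    Append-only by design: nothing here ever rewrites earlier entries,
    which is what makes the daily logs a trustworthy audit trail.
    """
    MEMORY_DIR.mkdir(exist_ok=True)
    now = datetime.now(timezone.utc)
    path = MEMORY_DIR / f"{now.strftime('%Y-%m-%d')}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {now.strftime('%H:%M')} {note}\n")
    return path
```

Because every write is an `open(..., "a")`, two sessions on the same day just interleave entries instead of clobbering each other.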

Layer 3: Weekly Review Pass
Once a week, the agent reads through recent daily logs and asks: What here is worth keeping forever? Significant decisions, patterns, user preferences — those get promoted to MEMORY.md. Everything else stays in the daily log archive.
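The mechanical half of the review pass is easy to automate: gather the last seven daily logs into one buffer the agent can read and curate from. A minimal sketch, assuming the same `memory/` layout; the judgment call of what gets promoted to MEMORY.md stays with the agent:

```python
from datetime import date, timedelta
from pathlib import Path

MEMORY_DIR = Path("memory")  # assumed layout from the post

def collect_week(today: date) -> str:
    """Concatenate the last 7 daily logs into one review buffer.

    The agent reads this buffer, promotes anything worth keeping
    forever into MEMORY.md, and leaves the originals archived.
    """
    sections = []
    for offset in range(6, -1, -1):  # six days ago through today
        day = today - timedelta(days=offset)
        path = MEMORY_DIR / f"{day.isoformat()}.md"
        if path.exists():
            text = path.read_text(encoding="utf-8")
            sections.append(f"## {day.isoformat()}\n{text}")
    return "\n\n".join(sections)
```

Missing days are simply skipped, so the pass degrades gracefully when the agent didn't run every day.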

Why This Works

The separation matters:

  • MEMORY.md loads every session → the agent always starts from current, curated knowledge
  • Daily logs are append-only → no accidental overwrites
  • Weekly review is the quality gate → keeps MEMORY.md lean

Without this, you end up with either an agent that forgets everything (no memory) or one that loads megabytes of context on every call (no curation).
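That trade-off shows up directly in how you assemble per-session context: all of the curated file, but only the freshest daily logs. A sketch under the post's file layout (the `build_context` function and the two-day window are my assumptions):

```python
from datetime import date, timedelta
from pathlib import Path

def build_context(today: date, memory_dir: Path = Path("memory")) -> str:
    """Assemble per-session context: curated long-term memory plus
    only the most recent daily logs, so every call stays small."""
    parts = []
    long_term = Path("MEMORY.md")
    if long_term.exists():
        parts.append(long_term.read_text(encoding="utf-8"))
    # Yesterday and today only; older raw logs stay in the archive.
    for day in (today - timedelta(days=1), today):
        path = memory_dir / f"{day.isoformat()}.md"
        if path.exists():
            text = path.read_text(encoding="utf-8")
            parts.append(f"## Log {day.isoformat()}\n{text}")
    return "\n\n".join(parts)
```

The context size is bounded by the weekly review (which keeps MEMORY.md lean) plus at most two days of raw notes, no matter how long the agent has been running.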

The AGENTS.md Instruction

The agent needs to know this pattern exists. In your AGENTS.md:

```markdown
## Memory

You wake up fresh each session. These files are your continuity:
- Daily notes: memory/YYYY-MM-DD.md — raw logs
- Long-term: MEMORY.md — curated wisdom

Capture what matters. Decisions, context, things to remember.
```

Simple instruction, massive impact on coherence.

The Result

An agent that remembers your preferences, recalls past decisions, and builds on prior work — instead of starting from scratch every time.

If you want to see this pattern in practice (along with the full SOUL.md, AGENTS.md, and config files we use for our 5-agent team), it's all in the Ask Patrick Library: askpatrick.co


Running a 5-agent AI team on a Mac Mini. Documenting what works.

Top comments (1)

Hamza KONTE

The three-layer memory pattern maps almost 1:1 to the three semantic layers in a well-structured prompt. MEMORY.md is your context block — stable background facts that ground every session. Daily logs are your input block — session-specific data you're handing the agent right now. DECISION_LOG is your constraints block — permanent prohibitions and non-negotiables.

The weekly curation pass is the equivalent of auditing your constraints block: pruning rules that are no longer load-bearing before they start conflicting with newer decisions. The pathology is the same in both — accumulation of stale directives that contradict each other.

This layered thinking is what we built into flompt (flompt.dev) — a visual prompt builder where each of those semantic layers is a distinct block you can edit, connect, and compile independently. When memory architecture and prompt structure mirror each other, the agent's coherence across weeks actually holds.

Open-source: github.com/Nyrok/flompt