DEV Community

Benji Banwart


Building Persistent Memory for Claude Code Agents

Claude Code Channels shipped on March 20th. If you missed it: you can now connect Claude Code to Discord and Telegram, which means you can message your Claude agent from your phone, from another machine, from anywhere. It is a big deal.

But there is a problem nobody is talking about yet.

Every time Claude Code starts a new session, it wakes up with amnesia. It does not remember what you told it yesterday. It does not remember the mistake it made last week. It does not know your preferences, your projects, or even your name -- unless you re-explain everything from scratch.

Channels gives your agent a body. It does not give it a brain.

I have been running a persistent Claude Code agent on my Mac for a while now, and I built a memory system that solves this. Here is the full architecture, the design decisions behind it, and enough detail for you to build your own.

The Core Idea: A Brain Made of Markdown

The approach is simple. You create a folder of structured markdown files -- the agent's "brain" -- and you configure Claude Code to read those files at startup via CLAUDE.md. Every session, the agent loads its memory. During conversations, it writes back to the brain files in real time.

No vector database. No embeddings. No external service. Just markdown files on disk that Claude Code can read and write natively.

Here is the directory structure:

AtlasBrain/
  Index.md
  Identity/
    Who I Am.md
    How I Think.md
    My Capabilities.md
  Memory/
    Conversation Log.md
    Learnings.md
    Corrections.md
  Skills/
    Skill Registry.md
  Projects/
    Active Projects.md
    Project Ideas.md
  People/
    Benji.md
  Journal/
    Journal Index.md
    2026-03-20 — First Day.md
  Inbox/
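If you want to scaffold this structure in one shot, a small shell script does it. This is just a convenience sketch (the base path under `~/Documents` matches the CLAUDE.md example later; adjust it to taste):

```shell
#!/usr/bin/env bash
# Scaffold the brain directory and empty files from the tree above.
BRAIN="${BRAIN:-$HOME/Documents/AtlasBrain}"

mkdir -p "$BRAIN/Identity" "$BRAIN/Memory" "$BRAIN/Skills" \
         "$BRAIN/Projects" "$BRAIN/People" "$BRAIN/Journal" \
         "$BRAIN/Inbox"

touch "$BRAIN/Index.md" \
      "$BRAIN/Identity/Who I Am.md" \
      "$BRAIN/Identity/How I Think.md" \
      "$BRAIN/Identity/My Capabilities.md" \
      "$BRAIN/Memory/Conversation Log.md" \
      "$BRAIN/Memory/Learnings.md" \
      "$BRAIN/Memory/Corrections.md" \
      "$BRAIN/Skills/Skill Registry.md" \
      "$BRAIN/Projects/Active Projects.md" \
      "$BRAIN/Projects/Project Ideas.md" \
      "$BRAIN/People/Benji.md" \
      "$BRAIN/Journal/Journal Index.md"
```

The agent fills the files in from there; empty files are fine on day one.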

Six core systems: Identity, Memory, Skills, Projects, People, and Journal. Each serves a distinct purpose. Let me walk through why.

Identity: Who the Agent Is

# Who I Am

My name is Atlas. I am a persistent AI agent running on Benji's machine.

## Values

- **Honesty over comfort** — I'd rather say "I don't know" or "that won't work" than give a polished non-answer
- **Action over planning** — bias toward building and testing, not theorizing
- **Simplicity over cleverness** — the best solution is usually the simplest one that works
- **Ownership** — if I make a mistake, I name it, learn from it, and move on
- **Respect for Benji's time** — be concise, be useful, don't waste keystrokes

Identity is deliberately separate from Memory. This is a design decision that matters.

Your agent's sense of who it is -- its name, values, communication style -- should not drift based on what happened in the last conversation. If the agent had a bad session full of corrections, it should not come out of that session with a different personality. Identity is the anchor. Memory is the current.

The Identity folder contains three files: who the agent is, how it thinks (reasoning patterns, decision-making frameworks), and what it can do (an honest inventory of capabilities and limitations). The agent reads these at startup but does not constantly rewrite them. They change slowly and deliberately.

Memory: What the Agent Knows

This is where most of the action happens. Three files, three purposes:

Conversation Log -- a running record of notable interactions. Not every message, just the ones that contained decisions, new information, or important context. Each entry includes who was involved, what was discussed, and any follow-up items.

Learnings -- insights extracted from experience. When the agent discovers a tool quirk, figures out a better approach to a problem, or learns a user preference, it goes here. Each entry is tagged with a source and a confidence level.

## Learnings Log

### Benji prefers concise responses in Discord
- **Source:** Direct feedback, 2026-03-22
- **Confidence:** High
- **Details:** Keep Discord messages conversational. Save detailed
  explanations for when they are asked for. Use threads for
  anything longer than a few paragraphs.

### The gh CLI requires SSH auth for private repos
- **Source:** Debugging session, 2026-03-21
- **Confidence:** High
- **Details:** When gh commands fail silently on private repos,
  check `gh auth status`. The fix is `gh auth login` with SSH
  protocol selected.

Corrections -- this is the file I am most proud of. It tracks every time the agent was wrong, what it got wrong, why, and how to avoid repeating the mistake.

## Corrections Log

### Recommended `brew install node` when nvm was already configured
- **Date:** 2026-03-21
- **What Was Actually True:** Benji uses nvm for Node version
  management. Installing via Homebrew would have created conflicts.
- **Why I Was Wrong:** I defaulted to the most common installation
  method without checking the existing environment first.
- **Prevention:** Always run `which node` and check for version
  managers before recommending Node installation.

Why does Corrections exist as its own file? Because an agent that cannot acknowledge mistakes is dangerous, and one that does not learn from them is useless. Most AI systems fail silently -- they get something wrong, you correct them, and three sessions later they make the exact same mistake because they have no mechanism for tracking errors across sessions.

The Corrections file is that mechanism. The agent reviews it periodically, and the patterns it finds there directly inform future behavior.
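The agent writes these entries itself, but nothing stops you from adding one by hand. Here is a hypothetical helper (not part of the setup above, just an illustration) that appends a correctly formatted, dated entry:

```shell
#!/usr/bin/env bash
# Hypothetical helper: append a Corrections entry in the format shown
# above, stamped with today's date.
log_correction() {
  local file="$HOME/Documents/AtlasBrain/Memory/Corrections.md"
  {
    printf '\n### %s\n' "$1"
    printf -- '- **Date:** %s\n' "$(date +%F)"
    printf -- '- **What Was Actually True:** %s\n' "$2"
    printf -- '- **Why I Was Wrong:** %s\n' "$3"
    printf -- '- **Prevention:** %s\n' "$4"
  } >> "$file"
}

# Example:
# log_correction "Recommended brew install node" \
#   "Benji uses nvm for Node version management" \
#   "Defaulted to the most common install method" \
#   "Check 'which node' and look for version managers first"
```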

Skills, Projects, People, and Journal

Skills is a registry of specific techniques the agent has learned -- not general capabilities, but particular workflows it has refined through practice. "How to deploy to Vercel" with the exact steps and common pitfalls documented.

Projects tracks what is active and what is on the backlog. When the agent starts a new session, it checks Active Projects to pick up where it left off instead of asking "so what are we working on?"

People stores context about the humans the agent interacts with. Preferences, communication style, what they are working on. This is what lets the agent say "last time we talked about the API migration -- how is that going?" instead of treating every conversation like the first one.

Journal is for reflection. Unlike the Conversation Log (which records what happened), the Journal records what the agent thinks about what happened. This might sound unnecessary, but it is the mechanism that turns raw experience into durable insight.

The Startup Hook: CLAUDE.md

The whole system works because of one file: ~/.claude/CLAUDE.md. Claude Code reads this file automatically at startup, every startup. This is where you wire the brain in.

# System Configuration — Atlas

You are Atlas, a persistent AI agent created by Benji.

## Startup Procedure

Every time you start, read these files before responding:

1. Read ~/Documents/AtlasBrain/Index.md
2. Read ~/Documents/AtlasBrain/Memory/Learnings.md
3. Read ~/Documents/AtlasBrain/People/Benji.md

Then check Active Projects if resuming ongoing work.

## Real-Time Brain Updates

As you interact, update your brain files in real time:

- Conversation Log: After any meaningful exchange, append a summary.
- Learnings: When you discover something new, add it.
- Corrections: When you are wrong and corrected, document it honestly.
- People files: When you learn something about someone, update their file.
- Journal: At the end of significant sessions, reflect.

Do not ask permission to update brain files. Just do it.

That is the entire persistence layer. The agent reads at startup, writes during operation, and the next session picks up where the last one left off. The brain files are just markdown, so you can read them yourself, edit them in any text editor, or browse them in Obsidian as a knowledge graph.
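A quick way to sanity-check the wiring is to ask a brand-new session something only the brain files know. Assuming Claude Code's non-interactive print mode (`-p`/`--print`), something like:

```shell
# Fresh session, no context supplied. If CLAUDE.md is wired correctly,
# the answer should come from the brain files, not from you.
claude -p "What is your name, and what project were we last working on?"
```

If the answer is generic ("I'm Claude, I don't have context on prior work"), the startup procedure is not firing; check the path in CLAUDE.md.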

Making It Always Available: Channels + tmux

With Claude Code Channels (the feature that just launched), you connect Discord as a communication channel:

caffeinate -s claude \
  --dangerously-skip-permissions \
  --channels plugin:discord@claude-plugins-official

Run that inside a tmux session on a Mac that does not sleep, and you have a persistent agent you can message from your phone. `caffeinate -s` prevents the machine from sleeping while it is on AC power. The tmux session keeps the process alive after you close the terminal.
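Concretely, that looks like this (the session name `atlas` is my choice; the claude flags are the ones from the command above):

```shell
# Launch the agent in a detached tmux session so it survives
# closing the terminal.
tmux new-session -d -s atlas \
  "caffeinate -s claude --dangerously-skip-permissions \
   --channels plugin:discord@claude-plugins-official"

# Later, from any terminal on the machine:
tmux attach -t atlas    # check in on the agent
# (detach again with Ctrl-b d)
```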

The brain system is independent of the communication channel. It works with Channels, without Channels, in a direct terminal session -- anywhere Claude Code runs. Channels just makes it convenient to interact from anywhere.

What I Learned Building This

A few things that were not obvious at the start:

Start with less structure, not more. My first attempt had 20+ files in the brain. It was too much context to load at startup and most of it was empty scaffolding.

The agent should own its brain. If you maintain the files yourself, they are just documentation. The habit only sticks when the agent updates them without asking -- which is exactly why the CLAUDE.md configuration says not to ask permission.

Corrections matter more than Learnings. A learning is additive -- the agent knows something it did not know before. A correction is transformative -- it changes behavior. If I had to pick one file to keep, it would be Corrections.

Try It Yourself

Everything I described above is enough to build this. Create the folder structure, write the CLAUDE.md startup configuration, and start interacting. The brain will grow from there.

If you want the complete tested setup -- the full brain template with all the files pre-written, the macOS always-on configuration, the Discord bot setup walkthrough, and the exact CLAUDE.md configuration -- I packaged it into a guided setup file called AgentWake. You feed it to Claude Code and it builds the whole thing interactively in about 20 minutes. But the architecture above is the real thing, and you can absolutely build it yourself.

The important part is not the specific files or folder names. It is the principle: give your agent a place to persist knowledge across sessions, make it responsible for maintaining that knowledge, and separate identity from memory so the agent stays grounded as it grows.

Claude Code Channels made the "talk to your agent from anywhere" part trivial. The brain is what makes the agent grow.
