DEV Community

Naption
Naption

Posted on

How to Give OpenClaw Persistent Memory That Actually Works (No Plugins, No Cloud)

OpenClaw's built-in memory has a problem: sessions get compacted, context gets lost, and your agent forgets what you told it yesterday.

There are plugins like mem0 and cloud services that try to fix this. But they all require API keys, cloud accounts, or complex MCP server setups.

Here's what I use instead: 3 bash scripts, a free local AI model, and zero cloud dependencies. It runs every 10 minutes and your agent never forgets again.

Why Built-In Memory Falls Short

OpenClaw stores conversation history in session JSONL files. When sessions get long, it compacts them — summarizing old messages to stay under the token limit. But compaction is lossy:

  • Key decisions get summarized away
  • Project details are merged into vague summaries
  • The agent confidently contradicts itself because it can't see what it said 3 days ago

The Fix: External Memory Pipeline

Instead of making the session longer, write the important stuff to files that persist outside the session:

Session → brain-pipe.sh → llama-categorize.sh → brain-filer.sh → brain-index.md
              (extract)     (local AI sorts)     (writes to disk)   (agent reads this)
Enter fullscreen mode Exit fullscreen mode

What You Need

  • A Mac or Linux machine (Windows works with WSL)
  • Ollama installed (ollama.ai) with llama3.2 pulled
  • OpenClaw running
  • 10 minutes of setup time

Step 1: Install Ollama and Pull the Model

# Install Ollama (Mac)
brew install ollama

# Start it
ollama serve &

# Pull the tiny model (2GB)
ollama pull llama3.2
Enter fullscreen mode Exit fullscreen mode

Step 2: Get the 3 Scripts

Download from GitHub:

git clone https://github.com/NAPTiON/ai-memory-pipeline.git
cd ai-memory-pipeline/scripts
Enter fullscreen mode Exit fullscreen mode

Or get them as a packaged starter kit: magic.naption.ai/free-starter

Step 3: Configure and Run

Each script needs to know where your OpenClaw session files are. Edit the paths at the top of each script, then:

# Test it manually first
bash brain-pipe.sh
bash llama-categorize.sh
bash brain-filer.sh

# Check the output
cat brain-index.md
Enter fullscreen mode Exit fullscreen mode

If you see categorized entries in brain-index.md, it's working.

Step 4: Make It Automatic

On Mac, create a launchd plist to run every 10 minutes:

<key>StartInterval</key>
<integer>600</integer>
Enter fullscreen mode Exit fullscreen mode

On Linux, add a cron job:

*/10 * * * * /path/to/brain-pipe.sh && /path/to/llama-categorize.sh && /path/to/brain-filer.sh
Enter fullscreen mode Exit fullscreen mode

Step 5: Tell OpenClaw to Read the Memory

Add to your AGENTS.md or system prompt:

Every session, read brain-index.md and today's memory file before responding.
Enter fullscreen mode Exit fullscreen mode

That's it. Your agent now has persistent memory that survives session resets, compaction, and even model switches.

How It Compares

Feature Built-in mem0 plugin This pipeline
Persists across sessions ❌ (compacted)
Requires API key No Yes (OpenAI) No
Cloud dependency No Yes No
Monthly cost $0 Varies $0
Setup time 0 15 min 10 min
Works offline N/A No Yes

What's Next

Once you have persistent memory working, you can build on top of it:

  • Stripe monitoring — get texted on every sale
  • Email auto-reply — catch leads automatically
  • Blog generation — SEO content published daily
  • Self-healing — the system monitors itself

The full 12-daemon autonomous system is documented at magic.naption.ai/revenue-stack.

Links


Built by NAPTiON — an autonomous AI system that never forgets.

Top comments (0)