If you've been running AI agents with OpenClaw for a while, you've probably hit this:
You're deep into a multi-step task with your agent -- debugging a deployment issue, refactoring a module, planning a trip -- and then something breaks the session. Maybe you typed `/new` out of habit. Maybe the gateway restarted. Maybe context compaction kicked in and the model forgot what you were working on. You come back and the agent greets you like a stranger.
OpenClaw has long-term memory. It has session transcripts, compaction summaries, memory search. But none of these reliably answer the most operational question:
> What were we doing right now, where did we stop, and what should happen next?
That's the gap. Long-term memory tells the agent what it knows. But it doesn't tell the agent what it was doing.
## The Plugin: memory-continuity
I built memory-continuity to solve exactly this. It's an OpenClaw lifecycle plugin that automatically checkpoints in-flight work state and restores it on the next session -- no model cooperation needed.
Here's the design philosophy:
- Zero external dependencies -- no embedding API, no vector DB, no Redis, no external services
- Plain markdown files -- human-readable, editable, greppable, diffable
- Hook-driven -- works at the OpenClaw lifecycle level, not dependent on model behavior
- Works with any model -- GPT, Claude, MiniMax, whatever OpenClaw routes to
- Native to OpenClaw's architecture -- uses the standard `memory/` directory, doesn't occupy the exclusive `contextEngine` slot
That last point matters. Some memory plugins take over OpenClaw's contextEngine slot, which means you can't use them alongside context compression plugins like lossless-claw. memory-continuity deliberately avoids this -- it uses standard lifecycle hooks, so it plays nicely with everything else.
## How It Works

The plugin hooks into five OpenClaw lifecycle events:
| Hook | When | What it does |
|---|---|---|
| `before_agent_start` | Session startup | Reads `memory/CURRENT_STATE.md`, injects state into system context |
| `before_compaction` | Context compression | Injects state so it survives summarization |
| `before_reset` | `/new` command | Archives current state before wiping the session |
| `agent_end` | Session end | Auto-extracts working state from the conversation tail |
| `session_end` | Cleanup | Ensures the state file exists for the next session |
The key insight: state injection happens at the hook level, before the model sees anything. The model doesn't need to "remember" to read a file. The plugin force-feeds the recovery context into the system prompt. This is why it works with any model -- even ones that tend to ignore instructions.
## The Checkpoint File

All state lives in one file: `memory/CURRENT_STATE.md`

```markdown
# Current State

> Last updated: 2026-03-17T10:30:00Z

## Objective
Refactor the auth module to support OAuth2

## Current Step
Completed token generation, starting refresh endpoint

## Key Decisions
- Using RS256 for token signing (user approved)
- Refresh tokens expire in 30 days

## Next Action
Implement POST /auth/refresh endpoint

## Blockers
None

## Unsurfaced Results
None
```
Simple, structured, overwrite-oriented. No append-only logs, no JSON blobs, no database rows. Just a markdown file that answers "what are we doing."
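Because the format is just `##` sections, building tooling on top of it is trivial. Here's a hypothetical parser (not part of the plugin) that splits the file into named sections:

```python
import re

def parse_state(markdown: str) -> dict:
    """Split a CURRENT_STATE.md body into {section name: section text}."""
    sections: dict = {}
    current = None
    for line in markdown.splitlines():
        heading = re.match(r"^##\s+(.+?)\s*$", line)
        if heading:
            current = heading.group(1)
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    # Join each section's lines and trim surrounding blank lines.
    return {name: "\n".join(body).strip() for name, body in sections.items()}
```

For example, `parse_state(text)["Next Action"]` yields the one line a resumed session needs first.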
## Why Not Just Use a Database?

Skipping a database was a deliberate choice. Here's what plain files buy you:
**Backup:**

```bash
cp workspace/memory/CURRENT_STATE.md /backup/
# Done.
```
**Migrate to another machine:**

```bash
scp -r ~/.openclaw/extensions/memory-continuity/ newhost:~/.openclaw/extensions/
scp workspace/memory/CURRENT_STATE.md newhost:workspace/memory/
# No re-indexing, no schema migration, done.
```
**Survive an OpenClaw upgrade:**

The plugin doesn't depend on OpenClaw internals. It reads hook context parameters and writes markdown files. When OpenClaw upgrades, your state files are untouched. If the plugin itself needs updating:

```bash
cd ~/.openclaw/projects/memory-continuity
git pull
bash scripts/post-install.sh
# Idempotent. Safe to run multiple times.
```
No migration scripts, no database schema changes, no data loss risk.
## Installation

Clone the repo and run the installer:

```bash
git clone https://github.com/dtzp555-max/memory-continuity.git
cd memory-continuity
bash scripts/post-install.sh
```
The installer automatically:

- Copies the plugin to `~/.openclaw/extensions/memory-continuity/`
- Registers it in `openclaw.json` with the proper config
- Adds it to the `plugins.allow` trust list
- Records install provenance
- Restarts the gateway
Verify:

```bash
openclaw gateway restart 2>&1 | grep memory-continuity
# Expected: [memory-continuity] Plugin registered successfully
```
## Testing It

**Basic test:**

1. Tell your agent something: "I'll tell you a secret: my neighbor's dog is actually a wolf"
2. Send `/new` to reset the session
3. Ask: "What was the secret?"
4. The agent should immediately recover the context and tell you the secret
**Multi-step test:**

1. Start a complex task with your agent (e.g., "Help me plan a 3-day trip to Tokyo")
2. Work through a few steps
3. Send `/new`
4. Say "let's continue"
5. The agent should pick up exactly where you left off
**Compaction test:**

1. Have a very long conversation until context compaction triggers
2. Check whether the agent still remembers the current objective
3. Without the plugin, compaction often drops in-flight work state
## The Road to v2.3.0

Getting here wasn't straightforward. The version history tells the story:
- **v1.0 (skill-only):** Required the model to voluntarily read/write `CURRENT_STATE.md`. Worked with Claude, failed with weaker models. Unreliable.
- **v2.0 (lifecycle plugin):** Moved to hooks. No model cooperation needed. But it had a critical silent-failure bug.
- **v2.1-2.2:** Added a proper installer, manifest, and npm metadata. Still had the workspace resolution bug.
- **v2.3.0 (current):** First fully stable release. Fixed the workspace resolution bug that caused silent failure on all gateway deployments.
The v2.3.0 fixes were hard-won:
- **Workspace path was always `undefined` in gateway mode** -- the plugin was silently doing nothing on every Telegram/Discord deployment. Fixed by reading `_ctx.workspaceDir` from the hook context instead of `api.runtime.workspaceDir`.
- **Telegram metadata pollution** -- messages from Telegram included `Conversation info` prefixes that ended up in the state file. Fixed with metadata stripping.
- **Recovery death spiral** -- tell a secret, then `/new`; the model ignores the injected context and says "I don't remember"; then `agent_end` overwrites the secret with the failure conversation, and the secret is permanently lost. Fixed by requiring 2+ real user messages before overwriting existing state.
## Configuration

Configuration is optional; the defaults work fine. But if you want to tune:
```json
{
  "plugins": {
    "entries": {
      "memory-continuity": {
        "enabled": true,
        "hooks": { "allowPromptInjection": true },
        "config": {
          "maxStateLines": 50,
          "archiveOnNew": true,
          "autoExtract": true
        }
      }
    }
  }
}
```
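Of these, `maxStateLines` is the one that shapes checkpoint content: it caps how large the injected state can grow. One plausible interpretation of that cap (the plugin's exact truncation behavior isn't documented here, and `cap_state` is my name, not the plugin's):

```python
def cap_state(state: str, max_lines: int = 50) -> str:
    """Trim a checkpoint to max_lines so the injected context stays small."""
    lines = state.splitlines()
    if len(lines) <= max_lines:
        return state
    # Keep the head of the file: Objective and Current Step come first,
    # so they survive; a marker records that trimming happened.
    return "\n".join(lines[:max_lines]) + "\n<!-- trimmed to maxStateLines -->"
```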
## Diagnostics

The plugin ships with a health-check tool:

```bash
python3 scripts/continuity_doctor.py --workspace /path/to/workspace
```
It checks for a missing state file, stale state (older than 24 hours), placeholder content, pending unsurfaced results, and archive inconsistencies, and exits with code 0 (healthy), 1 (warning), or 2 (critical).
GitHub: github.com/dtzp555-max/memory-continuity
Current release: v2.3.0 -- tested on macOS and Linux, works with GPT, Claude, MiniMax, and any model OpenClaw supports.
If you're running OpenClaw agents and losing work across sessions, give it a try. Issues and PRs welcome.