AI agents forget everything the moment a session ends.
Not because they are stupid. Not because the model is bad. Because there is no persistence layer. Every session starts from zero — no memory of past conversations, no accumulated knowledge, no identity continuity. You end up re-explaining yourself every time.
Cathedral fixes that.
What is Cathedral?
Cathedral is a memory persistence framework for AI agents. It gives any agent — Claude, GPT, Ollama, whatever — a structured long-term memory store that survives across sessions, across models, and across providers.
It is not RAG over chat logs. It is not a vector database bolted onto a prompt. It is typed, structured memory at the identity level.
Memories in Cathedral have a type:
- user — who the user is, their role, preferences, expertise
- project — ongoing work, goals, deadlines, context
- feedback — corrections the agent has received, things to avoid or repeat
- reference — pointers to external systems (Linear boards, Grafana dashboards, Slack channels)
An agent that uses Cathedral does not just remember facts. It remembers how to work with you.
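The four memory types above might be modeled like this. This is a sketch only — the field names beyond content, category, and importance are illustrative, not Cathedral's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
import time

class Category(str, Enum):
    USER = "user"            # who the user is, preferences, expertise
    PROJECT = "project"      # ongoing work, goals, deadlines
    FEEDBACK = "feedback"    # corrections, things to avoid or repeat
    REFERENCE = "reference"  # pointers to external systems

@dataclass
class Memory:
    id: str
    content: str
    category: Category
    importance: float  # 0.0-1.0 weight, as in the API examples below
    created_at: float = field(default_factory=time.time)  # assumed field

m = Memory(
    id="mem_001",
    content="User prefers integration tests over mocks",
    category=Category.FEEDBACK,
    importance=0.9,
)
print(m.category.value)  # "feedback"
```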
The API
Cathedral exposes a simple REST API. The core operations:
```
# Store a memory
POST /memories
{
  "content": "User prefers integration tests over mocks — got burned when mocks passed but prod migration failed",
  "category": "feedback",
  "importance": 0.9
}

# Search memories
GET /memories/search?q=testing

# Get a snapshot of full memory state
GET /snapshot

# Check behavioural drift
GET /drift
```
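To make the store/search semantics concrete, here is a toy in-memory equivalent of the two core operations. It is illustrative only — the ID format and search behaviour are assumptions, and the hosted API is the source of truth:

```python
import itertools

class MemoryStore:
    """Toy stand-in for POST /memories and GET /memories/search."""

    def __init__(self):
        self._memories = {}
        self._next_id = itertools.count(1)

    def store(self, content, category, importance):
        mem_id = f"mem_{next(self._next_id):04d}"  # hypothetical ID scheme
        self._memories[mem_id] = {
            "id": mem_id,
            "content": content,
            "category": category,
            "importance": importance,
        }
        return mem_id

    def search(self, q):
        # Naive substring match, highest importance first;
        # the real endpoint is presumably smarter than this.
        hits = [m for m in self._memories.values()
                if q.lower() in m["content"].lower()]
        return sorted(hits, key=lambda m: m["importance"], reverse=True)

store = MemoryStore()
store.store("User prefers integration tests over mocks", "feedback", 0.9)
store.store("Project deadline is end of Q3", "project", 0.7)
print([m["id"] for m in store.search("tests")])  # ['mem_0001']
```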
/drift — Cryptographic Memory Provenance
This is the part that makes Cathedral unusual.
/drift returns a SHA256 hash of the entire memory corpus: each memory is serialized as id:content:category:importance and hashed, the per-memory hashes are ordered by memory ID and joined, and the result is hashed again.
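Based on that description, the computation would look roughly like this — a sketch, since the exact serialization details (separators, encoding) are not specified:

```python
import hashlib

def drift_hash(memories):
    """Sketch of the /drift digest: hash each memory's
    id:content:category:importance serialization, order by ID,
    join the per-memory hashes, then hash the joined string."""
    per_memory = [
        hashlib.sha256(
            f"{m['id']}:{m['content']}:{m['category']}:{m['importance']}"
            .encode("utf-8")
        ).hexdigest()
        for m in sorted(memories, key=lambda m: m["id"])
    ]
    return hashlib.sha256("".join(per_memory).encode("utf-8")).hexdigest()

corpus = [
    {"id": "mem_002", "content": "Ship v2 by Friday",
     "category": "project", "importance": 0.7},
    {"id": "mem_001", "content": "Prefers integration tests",
     "category": "feedback", "importance": 0.9},
]
# Sorting by ID makes the digest independent of insertion order,
# while any edit to any field changes it.
print(drift_hash(corpus) == drift_hash(list(reversed(corpus))))  # True
```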
This hash is then anchored to the Bitcoin Cash blockchain via OP_RETURN.
What that means in practice: you can prove, cryptographically, what an agent knew at any point in time. Not just that a memory exists — but that the entire corpus was in a specific state at timestamp T.
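Anchoring a 32-byte digest in an OP_RETURN output is just a small script: the OP_RETURN opcode (0x6a) followed by a data push. A sketch of the output script bytes only — Cathedral's actual payload format (any protocol prefix, fee handling, transaction construction, broadcast) is not specified here:

```python
import hashlib

OP_RETURN = 0x6A  # Bitcoin script opcode for a provably unspendable data output

def op_return_script(payload: bytes) -> bytes:
    """Build a bare OP_RETURN output script carrying `payload`.
    Pushes of 1-75 bytes use a single length byte as the push opcode."""
    if not 1 <= len(payload) <= 75:
        raise ValueError("direct push supports 1-75 bytes")
    return bytes([OP_RETURN, len(payload)]) + payload

digest = hashlib.sha256(b"example memory corpus state").digest()  # 32 bytes
script = op_return_script(digest)
print(script.hex()[:4])  # "6a20": OP_RETURN, then a push of 0x20 = 32 bytes
```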
This matters for:
- Auditability — show regulators or users exactly what the agent knew when it made a decision
- Provenance — prove a memory was not added or altered after the fact
- Debugging — identify when an agent's behaviour changed and what memory state it was in at the time
BCH anchoring costs fractions of a cent per pulse (a single on-chain anchoring transaction). We recommend pulsing frequently to narrow the provenance window.
/behaviour
Cathedral tracks how an agent behaves over time, not just what it remembers. The /behaviour endpoint returns a structured view of observed agent patterns — what kinds of decisions it makes, what it consistently avoids, how its responses shift as memory accumulates.
This is the difference between an agent that has memory and an agent that has identity.
/snapshot
Point-in-time capture of the full memory state. Useful for:
- Rolling back to a known good state
- Comparing before/after a major interaction
- Exporting agent identity for migration between providers
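Two snapshots are useful precisely because they can be diffed. A sketch of the before/after comparison, assuming a snapshot can be reduced to a mapping of memory IDs to content:

```python
def diff_snapshots(before: dict, after: dict):
    """Compare two snapshots, modeled here as id -> content mappings."""
    added   = sorted(after.keys() - before.keys())
    removed = sorted(before.keys() - after.keys())
    changed = sorted(k for k in before.keys() & after.keys()
                     if before[k] != after[k])
    return added, removed, changed

before = {"mem_001": "Prefers mocks", "mem_002": "Deadline is Q3"}
after  = {"mem_001": "Prefers integration tests", "mem_003": "Uses Grafana"}
print(diff_snapshots(before, after))
# (['mem_003'], ['mem_002'], ['mem_001'])
```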
Why not just stuff it in the system prompt?
You can. For small amounts of context, that works fine.
But system prompts do not scale. They hit token limits. They cannot be searched. They have no structure. You cannot diff them. You cannot prove what was in them at time T. You cannot compact them intelligently. You cannot share them across agents.
Cathedral is the persistence layer that the ecosystem is missing. The models are good. The tooling around identity and memory is not.
Local-first with cathedral-server
If you do not want to use the hosted API, run it locally:
```
pip install cathedral-server
cathedral-server start
```
Same API, runs on localhost, your data stays on your machine.
Get started
```
pip install cathedral-server
```
Hosted API + beta keys: cathedral-ai.com
A beta API key gets you full access to the hosted instance, including /drift, /snapshot, /behaviour, and /memories/compact.
Happy to answer questions in the comments — especially if you are building agents and hitting the statelessness problem.