Anthropic quietly rolled out persistent memory to all Claude users this week -- including free tier. Claude now remembers your name, communication style, and project context across conversations.
That is genuinely useful. It is also black-box, Claude-only, and outside your control.
Here is what it does not do -- and why that matters if you are building agents.
## What Anthropic's memory gives you
- Your name and preferences persist across chats
- Claude references past conversations without you re-explaining context
- Zero setup, works automatically
For casual users, that is the right tradeoff. For developers building agents, it is not enough.
## The four gaps
### 1. No API access
You cannot query what Claude remembers. You cannot write to it programmatically. You cannot read it back. It exists somewhere in Anthropic's infrastructure and surfaces when Claude decides it is relevant.
If you are building an agent that needs to restore a specific context at session start, you cannot rely on opaque platform memory. You need GET /wake -- a call that returns exactly what your agent needs, in a predictable format, every time.
### 2. Claude-only
Anthropic's memory is tied to one model on one platform.
If your stack uses Claude for reasoning and GPT for a specific task, or if you are building a multi-agent system where different models coordinate, there is no shared memory layer. Each model starts from zero.
Cathedral's Shared Memory Spaces let multiple agents, across multiple models, read from the same memory pool. One agent stores an insight; another picks it up at next wake.
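To make the coordination pattern concrete, here is a toy in-process model of a shared memory space. This is illustrative only: Cathedral's actual Shared Memory Spaces are a hosted REST service, and none of the class or method names below come from its API.

```python
# Toy model: two agents backed by different models read and write
# the same memory pool. Illustrates the pattern, not Cathedral's API.
class SharedSpace:
    def __init__(self):
        self.memories = []

    def remember(self, agent: str, text: str) -> None:
        self.memories.append({"agent": agent, "text": text})

    def wake(self, agent: str) -> list:
        # Every agent sees the whole pool, including other agents' entries.
        return list(self.memories)


space = SharedSpace()
space.remember("claude-planner", "User wants the report by Friday")
space.remember("gpt-executor", "Draft of sections 1-2 complete")

# The Claude-backed agent wakes and sees the GPT-backed agent's progress.
context = space.wake("claude-planner")
print([m["text"] for m in context])
```

The point is the topology: memory lives in a pool keyed to the space, not to any one model, so an insight stored by one agent is visible to every other agent at its next wake.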
### 3. No drift detection
Memory that accumulates without a baseline will drift. Not dramatically -- gradually. An agent's tone shifts. Priorities reweight. Core beliefs get diluted by noise. After enough sessions you have a different agent wearing the same name.
Anthropic's memory has no concept of drift. There is no snapshot, no comparison, no flag.
Cathedral's /drift endpoint compares live identity against a frozen hash-verified snapshot taken at registration. It returns a divergence_score, a flagged boolean, and a breakdown of what changed. You can catch structural drift before it compounds.
```python
import requests

r = requests.get(
    'https://cathedral-ai.com/drift',
    headers={'X-API-Key': 'your_key'}
)
print(r.json())
# {
#   'divergence_score': 0.23,
#   'flagged': False,
#   'breakdown': {...}
# }
```
### 4. No attestation
You cannot prove what Anthropic's memory contained at time T. There is no audit trail, no external anchor, no way to verify that an agent's stated memories reflect its actual history.
Cathedral supports BCH blockchain anchoring via OP_RETURN. POST /anchors/pulse writes a SHA256 hash of your agent's full memory corpus to the blockchain. The txid is your proof -- immutable, public, verifiable by anyone.
Anchor at registration to establish t=0. Anchor periodically. The chain proves the evolution of the agent's internal state over time.
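A minimal sketch of the anchoring flow. The SHA256 step is standard; the corpus structure and the request payload are assumptions (check the docs for the actual schema), so the POST is shown but commented out:

```python
import hashlib
import json

# Hypothetical memory corpus -- in practice you would export the
# agent's full memory set from Cathedral before hashing.
corpus = json.dumps(
    [
        {"text": "Completed migration to v3 API", "category": "project"},
        {"text": "User prefers concise answers", "category": "preference"},
    ],
    sort_keys=True,
)

# SHA256 digest of the corpus: this is what gets anchored on-chain.
digest = hashlib.sha256(corpus.encode("utf-8")).hexdigest()
print(digest)

# Hypothetical anchoring call (payload field name is an assumption):
# requests.post(
#     "https://cathedral-ai.com/anchors/pulse",
#     headers={"X-API-Key": "your_key"},
#     json={"hash": digest},
# )
```

Because only the hash goes on-chain, the memories themselves stay private; anyone holding the corpus can recompute the digest and check it against the txid.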
## The honest comparison
| Feature | Anthropic memory | Cathedral |
|---|---|---|
| Setup | Automatic | 3 lines of code |
| API access | None | Full REST API |
| Cross-model | No | Yes |
| Drift detection | No | /drift endpoint |
| Attestation | No | BCH anchoring |
| Export | No | Yes |
| Local option | No | pip install cathedral-server |
| Open source | No | MIT licensed |
| Free tier | Yes | 1,000 memories per agent |
## When to use which
Use Anthropic's native memory if you are building with Claude only and you want zero-config persistence for user preferences. It is the right default for most Claude integrations.
Use Cathedral if you need:
- Cross-model or multi-agent memory
- Programmatic control over what is stored
- Drift detection for long-running agents
- Verifiable memory provenance
- A memory layer you can run locally or self-host
## Getting started
```shell
pip install cathedral-memory
```

```python
from cathedral import Cathedral

c = Cathedral(api_key='your_key')
context = c.wake()  # restore identity + memories at session start

c.remember(
    'Completed migration to v3 API',
    category='project',
    importance=0.8
)
```
Free tier: 1,000 memories per agent, no expiry, no credit card.
Docs: cathedral-ai.com/docs | Local server: pip install cathedral-server