Every time you open a new Claude Code or Cursor session, your agent wakes up with absolutely no memory of anything it's ever done. Blank slate. Tabula rasa. Goldfish brain. You spend the first 10 minutes re-explaining your codebase to something that was literally there yesterday.
And if you're on a team? Each engineer's agent is its own little island of rediscovered knowledge. Agent A spends an hour figuring out why the auth service is flaky. Agent B does the same thing on Monday. Nobody the wiser.
AMFS fixes this. It's an agent memory engine with an MCP server that you connect to Cursor or Claude Code, and your agents get persistent, shared memory across every session and every machine on your team. The core is open source (Apache 2.0).
## How it works
AMFS stores memory as versioned key-value entries scoped to an entity path, typically a service, module, or domain in your codebase. Entries carry a confidence score, a memory type (fact, belief, or experience), and full provenance: which agent wrote it, when, and from what context.
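A rough sketch of the shape of one entry, in Python — the field names here are illustrative assumptions, not the exact AMFS schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative entry shape — field names are assumptions, not the AMFS schema.
@dataclass
class MemoryEntry:
    entity_path: str   # e.g. "myapp/checkout"
    key: str           # e.g. "risk-race-condition"
    value: str
    confidence: float  # 0.0 – 1.0
    memory_type: str   # "fact" | "belief" | "experience"
    written_by: str    # provenance: which agent wrote it
    written_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    version: int = 1   # bumped on every copy-on-write update

entry = MemoryEntry(
    entity_path="myapp/checkout",
    key="risk-race-condition",
    value="Race condition in order processing under concurrent load",
    confidence=0.85,
    memory_type="belief",
    written_by="cursor/bruno",
)
```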
Every write is copy-on-write: a new version is created and the previous one is never deleted. Reads are tracked, so you know which agent consumed which piece of knowledge. Outcome commits (`amfs_commit_outcome`) feed back into confidence scores: entries written before a successful deploy gain confidence, entries linked to incidents lose it. Your agents get dumber when they're wrong and smarter when they're right, which is more than you can say for some humans :)
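To make that feedback loop concrete, here's a minimal toy sketch of CoW versioning plus outcome propagation — the data structures and the 1.1x / 0.9x adjustment factors are invented for illustration, not AMFS internals:

```python
# Toy sketch: writes append a new version instead of mutating, and an
# outcome nudges confidence on every entry read during the task.
store = {}   # (entity_path, key) -> list of versions, newest last
reads = []   # entries consumed during the current task

def write(entity_path, key, value, confidence):
    versions = store.setdefault((entity_path, key), [])
    versions.append({"value": value, "confidence": confidence})  # CoW: old versions kept

def read(entity_path, key):
    entry = store[(entity_path, key)][-1]
    reads.append(entry)  # track consumption so outcomes can propagate
    return entry["value"]

def commit_outcome(success):
    factor = 1.1 if success else 0.9  # hypothetical adjustment factors
    for entry in reads:
        entry["confidence"] = min(1.0, entry["confidence"] * factor)
    reads.clear()

write("myapp/checkout", "risk", "race condition under load", 0.80)
read("myapp/checkout", "risk")
commit_outcome(success=True)  # the stored version's confidence rises to ~0.88
```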
Under the hood, there's a knowledge graph that materializes relationships between entities from normal read/write operations, a tiered memory model (hot / warm / archive) with frequency-modulated decay, and a hybrid search that combines full-text, semantic, recency, and confidence into a single ranked result set.
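A hypothetical version of that blend — the weights, signal names, and recency half-life below are assumptions for illustration, not the actual ranking function:

```python
import math

# Weighted blend of the four signals the post mentions; the weights and
# 30-day recency half-life are made up for this sketch.
def hybrid_score(text_score, semantic_score, age_days, confidence,
                 w=(0.3, 0.4, 0.15, 0.15), half_life_days=30.0):
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return (w[0] * text_score + w[1] * semantic_score
            + w[2] * recency + w[3] * confidence)

candidates = [
    {"key": "mutex-fix",      "text": 0.9, "sem": 0.7, "age": 2,  "conf": 0.85},
    {"key": "old-retry-note", "text": 0.8, "sem": 0.6, "age": 90, "conf": 0.40},
]
ranked = sorted(
    candidates,
    key=lambda c: hybrid_score(c["text"], c["sem"], c["age"], c["conf"]),
    reverse=True,
)
# → the recent, high-confidence entry outranks the stale one
```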
The MCP server wraps all of this and exposes it to any MCP-compatible client.
## MCP tools your agent gets
| Tool | What it does |
|---|---|
| `amfs_set_identity` | Registers the agent's ID and current task. Scopes all subsequent writes to that agent. |
| `amfs_briefing` | Returns a compiled digest of the highest-confidence entries for an entity — ranked by tier, recency, and score. This is what the agent calls at the start of a session instead of asking you to re-explain everything for the 40th time. |
| `amfs_search` | Hybrid search across all entries: full-text + semantic + confidence weighting. Accepts an entity path filter. |
| `amfs_read` | Reads a specific entry by entity path and key. Cross-agent reads are tracked automatically for provenance. |
| `amfs_write` | Writes an entry with confidence score, memory type, and automatic provenance. Triggers CoW versioning. |
| `amfs_commit_outcome` | Records a task outcome (success / failure / regression) and auto-propagates confidence updates to all entries the agent read during that task. |
## Wiring it up
### 1. Get an API key
Sign up at amfs.sense-lab.ai. There's a free tier.
Grab your API key from the dashboard. That's the only credential you need; shared memory across your whole team is handled by the backend, no infrastructure to set up.
### 2. Claude Code
Add to `~/.claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "amfs": {
      "command": "uvx",
      "args": ["amfs-mcp-server"],
      "env": {
        "AMFS_API_KEY": "<your-key>",
        "AMFS_HTTP_URL": "https://amfs-login.sense-lab.ai"
      }
    }
  }
}
```
### 3. Cursor
Add to `.cursor/mcp.json` in your project, then copy the agent rules file:
```json
{
  "mcpServers": {
    "amfs": {
      "command": "uvx",
      "args": ["amfs-mcp-server"],
      "env": {
        "AMFS_API_KEY": "<your-key>",
        "AMFS_HTTP_URL": "https://amfs-login.sense-lab.ai"
      }
    }
  }
}
```
Verify the connection by asking your agent to call `amfs_stats()`. It should return entry counts and agent activity. If it stares back at you blankly, check the path in your config.
Once each engineer on your team adds the same config block with their own API key, all of your agents share the memory store across sessions and machines, with no extra setup.
## What the agent actually does with it
Once the rules file is in place, the agent follows this pattern automatically on every session:
```python
# Agent opens a session on checkout-service
amfs_set_identity("cursor/alice", "investigating flaky order flow")

amfs_briefing(entity_path="myapp/checkout")
# → ranked digest: known risks, prior decisions, confidence scores
# → "race-condition in order processing (0.85, written by cursor/bruno)"
# → alice's agent doesn't re-discover what bruno's already figured out

amfs_search("myapp/checkout", query="retry")
# → finds bruno's prior entry on the mutex fix

# ... agent does the work, finds something new ...
amfs_write(
    "myapp/checkout",
    "timeout-under-load",
    "downstream payment API times out above 200 rps — needs circuit breaker",
    confidence=0.78,
    memory_type="belief"
)

amfs_commit_outcome("TASK-412", "success")
# → confidence on all entries read during this task adjusts upward
# → bruno's race-condition entry also updates (alice's agent read it)
```
The next agent to open a session on `myapp/checkout` (whether that's a Claude Code session, another Cursor session, or a LangGraph pipeline) calls `amfs_briefing` and gets both findings up front, ranked by confidence. No re-explaining, no re-discovering, no token budget burned on context that already exists.
Agent identity is auto-detected from the environment (`cursor/<username>`, `claude-code/<username>`), so provenance is tracked without any extra config.
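As a sketch, that detection could be as simple as reading the current username from the environment — the real MCP server's logic may well differ:

```python
import os

# Hypothetical identity derivation: "<client>/<username>". The env-var
# fallback chain here is an assumption, not the server's actual behavior.
def detect_agent_id(client="cursor"):
    user = os.environ.get("USER") or os.environ.get("USERNAME") or "unknown"
    return f"{client}/{user}"
```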
## Using the Python SDK directly (for pipelines)
If you're building agent pipelines with CrewAI, LangGraph, or AutoGen, use the SDK instead of MCP. Same API key, same shared store, and a finding written by a pipeline agent shows up in your next Cursor session's briefing.
```bash
pip install amfs
```

```python
from amfs import AgentMemory

mem = AgentMemory(
    agent_id="review-agent",
    base_url="https://amfs-login.sense-lab.ai",
    api_key="<your-key>"
)

# write a finding at the end of a pipeline run
mem.write(
    "checkout-service",
    "risk-race-condition",
    "Race condition in order processing under concurrent load",
    confidence=0.85,
    memory_type="belief"
)

# next run: pull a briefing before the agent starts reasoning
briefing = mem.briefing("checkout-service")

# outcome feedback updates confidence on everything the agent read this run
mem.commit_outcome("RUN-043", "success")

# explain() returns the full causal chain for any decision
trace = mem.explain("RUN-043")
```
## Open source
The full memory engine — CoW versioning, confidence scoring, causal traces, knowledge graph, hybrid search, tiered memory, MCP server, HTTP API, Python + TypeScript SDKs, CLI — is Apache 2.0. If it's useful, a star helps other developers find it.

