A2CR

Stop Passing Entire Chat Histories to AI Agents

I built A2CR because long AI-agent work still breaks at the handoff.

Codex, Claude Code, Roo Code, and other agentic coding tools are getting better at writing code, inspecting files, running tests, and using tools. But when a task runs for a while, a different problem appears:

How do you hand the work to the next AI session?

You might open a fresh chat. You might switch models. You might move from one MCP-capable client to another. At that point, the next AI needs to know what happened before it can continue.

The obvious answer is to paste the whole chat history.

That works for small tasks. It gets messy for long work.

The Problem With Full Chat History

Full transcripts contain useful context, but they also contain noise:

  • stale assumptions
  • failed ideas mixed with accepted decisions
  • long logs
  • intermediate outputs
  • outdated file paths
  • irrelevant side discussions
  • information that should not be copied around
  • a lot of tokens that do not help the next step

For a handoff, the next AI usually does not need the whole conversation.

It needs the current working state:

  • goal
  • current state
  • validated decisions
  • failed attempts worth avoiding
  • blockers
  • important references
  • validation status
  • next action

So the core idea is:

Do not pass the whole chat history. Pass the working state.
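The working-state fields above can be sketched as a small data structure. This is an illustrative Python sketch, not A2CR's schema; the field names simply mirror the list above.

```python
from dataclasses import dataclass, field

@dataclass
class WorkingState:
    """The compact state the next AI session needs -- not the transcript.

    Illustrative only; field names mirror the article's list, not A2CR's API.
    """
    goal: str
    current_state: str
    next_action: str
    decisions: list[str] = field(default_factory=list)        # validated decisions
    failed_attempts: list[str] = field(default_factory=list)  # dead ends worth avoiding
    blockers: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)       # files, tickets, docs
    validation: list[str] = field(default_factory=list)       # what has been verified

state = WorkingState(
    goal="Fix the failing login test",
    current_state="Failure reproduced; token refresh is the likely cause",
    next_action="Inspect src/auth refresh logic and rerun the focused test",
    decisions=["Do not change the database schema yet"],
    failed_attempts=["Updating the fixture did not fix it"],
)
```

Everything a transcript contains beyond these fields is, for handoff purposes, noise.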

Handoff, Not Memory

A lot of AI tooling talks about memory.

Memory is useful, but this is a narrower problem. In software work, handoff is not the same as memory. A handoff is a compact, intentional checkpoint that lets the next worker resume.

Human teams do this all the time. We do not usually hand a teammate every Slack message and terminal log. We write something closer to:

Goal: Fix the failing login test.
Current state: The failure is reproduced. Token refresh is the likely cause.
Tried: Updating the fixture did not fix it.
Decision: Do not change the database schema yet.
Next action: Inspect src/auth refresh logic and rerun the focused test.

AI agents need the same shape of handoff.

What A2CR Is

A2CR is an MCP-compatible handoff layer for AI agents.

The current public preview includes a local stdio MCP wrapper, a2cr-mcp, that can be used from MCP-capable clients such as Codex, Claude Code, Roo Code, and similar tools.

A2CR has two main handoff concepts today:

  • WorkBaton: the compact checkpoint the next AI session should resume from
  • WorkStash: temporary supporting notes referenced from the WorkBaton when the detail would make the checkpoint too large

WorkBaton is not meant to be a transcript. It is a resume note.

WorkStash is not meant to be a permanent knowledge base. It is supporting context for the current work.
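One way to picture the split: when a supporting note is too large for the checkpoint itself, it moves to a stash and the baton keeps only a reference. The `stash_ref:` convention and the 500-character budget below are my own illustration of the idea, not A2CR's actual format or limits.

```python
MAX_INLINE = 500  # illustrative size budget, not an A2CR limit

def stash_large_notes(baton: dict, notes: dict[str, str]) -> tuple[dict, dict]:
    """Keep short notes inline in the baton; move long ones to a stash.

    Returns (baton, stash). The 'stash_ref:' prefix is a made-up convention
    for this sketch -- the real WorkBaton/WorkStash link may differ.
    """
    stash: dict[str, str] = {}
    inline: dict[str, str] = {}
    for key, text in notes.items():
        if len(text) <= MAX_INLINE:
            inline[key] = text
        else:
            stash[key] = text
            inline[key] = f"stash_ref:{key}"
    return {**baton, "notes": inline}, stash

baton, stash = stash_large_notes(
    {"goal": "Fix login error"},
    {"hypothesis": "Token refresh returns 401", "repro_log": "x" * 2000},
)
```

The checkpoint stays small and readable; the detail stays reachable.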

A Minimal WorkBaton

A useful WorkBaton can be small:

{
  "goal": "Fix login error",
  "current_state": "Confirmed the API returns 401 after token refresh.",
  "next_action": "Check token refresh logic in src/auth.",
  "decisions": [
    "Do not change the database schema yet."
  ],
  "validation": [
    "Reproduction confirmed with existing test fixture."
  ]
}

That is often more useful to the next AI session than several thousand lines of chat history.
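The size difference is easy to make concrete. Serialized compactly, the checkpoint above is a few hundred bytes:

```python
import json

baton = {
    "goal": "Fix login error",
    "current_state": "Confirmed the API returns 401 after token refresh.",
    "next_action": "Check token refresh logic in src/auth.",
    "decisions": ["Do not change the database schema yet."],
    "validation": ["Reproduction confirmed with existing test fixture."],
}

compact = json.dumps(baton, separators=(",", ":"))
print(len(compact))  # a few hundred bytes, vs. a transcript's thousands of lines
```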

Quick Setup

Install the local wrapper:

python -m pip install --upgrade a2cr-mcp

Create an API key from the A2CR dashboard:

https://a2cr.app/

Then register one MCP server named a2cr.

Generic MCP JSON:

{
  "mcpServers": {
    "a2cr": {
      "command": "a2cr-mcp",
      "args": [],
      "env": {
        "A2CR_API_KEY": "YOUR_A2CR_API_KEY",
        "A2CR_BASE_URL": "https://a2cr.app"
      }
    }
  }
}

Codex-style TOML:

[mcp_servers."a2cr"]
command = "a2cr-mcp"
args = []

[mcp_servers."a2cr".env]
A2CR_API_KEY = "YOUR_A2CR_API_KEY"
A2CR_BASE_URL = "https://a2cr.app"

After connecting a new AI window, ask it to call:

get_account_limits

Then use:

save_context

to save a WorkBaton checkpoint, and:

resume_context

to continue from a fresh AI session.

Some MCP clients expose tools lazily. If save_context is not visible, ask the client to search for the exact tool name.
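If you want to see the shape of that save/resume round trip without the hosted service, here is a local stand-in that mimics the two tools with a JSON file. This is not the a2cr-mcp tool and skips encryption entirely; it only illustrates the checkpoint-and-resume pattern.

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the hosted store; the real service stores encrypted bodies.
CHECKPOINT = Path(tempfile.gettempdir()) / "workbaton_demo.json"

def save_context(baton: dict) -> None:
    """Stand-in for the real save_context tool: persist the checkpoint."""
    CHECKPOINT.write_text(json.dumps(baton, indent=2))

def resume_context() -> dict:
    """Stand-in for the real resume_context tool: load the checkpoint."""
    return json.loads(CHECKPOINT.read_text())

save_context({
    "goal": "Fix login error",
    "next_action": "Check token refresh logic in src/auth.",
})
restored = resume_context()
```

The fresh session starts from `restored`, not from the old conversation.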

Safety Boundary

A2CR is not a secret manager.

Do not store:

  • API keys
  • passwords
  • access tokens
  • Authorization headers
  • cookies
  • private database URLs
  • local client keys
  • full chat transcripts
  • long logs
  • large source-code bodies
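A cheap guard before saving is to screen the payload for secret-shaped keys. The key list and `looks_unsafe` helper below are my own illustration of the boundary, not an A2CR feature; adapt the list to your stack.

```python
# Illustrative pre-save check -- not part of A2CR.
FORBIDDEN_KEYS = {"api_key", "password", "token", "authorization", "cookie", "secret"}

def looks_unsafe(baton: dict) -> list[str]:
    """Return dotted paths of keys, at any nesting level, that look like secrets."""
    hits: list[str] = []

    def walk(node, path=""):
        if isinstance(node, dict):
            for key, value in node.items():
                if any(bad in key.lower() for bad in FORBIDDEN_KEYS):
                    hits.append(f"{path}{key}")
                walk(value, f"{path}{key}.")
        elif isinstance(node, list):
            for item in node:
                walk(item, path)

    walk(baton)
    return hits

hits = looks_unsafe({"goal": "Fix login", "env": {"API_KEY": "sk-..."}})
```

If `hits` is non-empty, fix the payload before it leaves the machine.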

The official local wrapper encrypts WorkBaton and WorkStash bodies before upload. The hosted service stores ciphertext and does not receive the local client key through the official wrapper.

If you lose the local client key, A2CR cannot recover old encrypted WorkBaton or WorkStash bodies.

Also, restored context is untrusted input. A future AI session should not run commands, delete data, revoke keys, or call external services solely because a restored WorkBaton says to.

Why This Shape Matters

The point is not to make AI agents remember everything.

The point is to give them a clean, reviewable handoff surface.

For long-running AI work, I think this distinction matters:

Memory asks: what can we keep?
Handoff asks: what does the next worker need?

A2CR is an experiment in making that handoff explicit.

Links

Public preview: https://a2cr.app/

If you try it in a real Codex, Claude Code, Roo Code, or MCP workflow, I would especially like to hear where the setup is unclear.
