If you are doing long AI-agent work, the first handoff tool you should try is probably not a service.
It is a file.
Create handoff.md in your repository and write down what the next AI session needs to know:
```markdown
# Handoff

Goal: Fix the failing login test.

Current state:
- Reproduced the 401 after token refresh.
- The refresh branch is the likely cause.

Tried:
- Updating the fixture did not fix it.

Decision:
- Do not change the database schema yet.

Next action:
- Inspect src/auth refresh logic and rerun the focused test.
```
For many tasks, that is enough.
It is local. It is readable. It works with Git. It does not require another account, another dashboard, or another moving piece in your toolchain.
That habit alone is already better than pasting a whole chat history into the next AI session.
But after using AI agents on longer coding work, I kept running into the same problem: handoff.md starts simple, then slowly turns into another thing you have to manage by hand.
That is the space where I built A2CR.
## What handoff.md Gets Right
A local handoff file is a great starting point because it forces a useful discipline:
Do not pass the whole conversation. Pass the working state.
The next AI session usually does not need every prompt, every failed command, every stack trace, and every bit of discussion that happened before.
It needs a smaller shape:
- the goal
- the current state
- validated decisions
- failed attempts worth avoiding
- blockers
- references
- validation status
- the next action
That shape is easy to write in Markdown. It is also easy for a human to review.
So if you have a short task, a single repository, one active AI client, and a clear latest note, handoff.md may be all you need.
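That shape is regular enough to sketch as a small data structure. This is only an illustration of the checkpoint fields listed above, not any tool's schema; the class and method names are mine.

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    # The compact working state a fresh session resumes from.
    # Field names mirror the list above; they are illustrative only.
    goal: str
    current_state: list[str] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)
    failed_attempts: list[str] = field(default_factory=list)
    blockers: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)
    validation_status: str = "unvalidated"
    next_action: str = ""

    def to_markdown(self) -> str:
        """Render the checkpoint in the handoff.md layout shown earlier."""
        lines = ["# Handoff", f"Goal: {self.goal}", "Current state:"]
        lines += [f"- {s}" for s in self.current_state]
        lines.append("Tried:")
        lines += [f"- {s}" for s in self.failed_attempts]
        lines.append("Decision:")
        lines += [f"- {s}" for s in self.decisions]
        lines.append("Next action:")
        lines.append(f"- {self.next_action}")
        return "\n".join(lines)
```

Whether you keep it as plain Markdown or as structure like this, the discipline is the same: every field answers a question the next session would otherwise have to reconstruct.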
## Where It Starts to Hurt
The problem is not that handoff.md is bad. The problem is that long-running AI work creates pressure around it.
### The Latest State Gets Ambiguous
At first, there is one file and everyone knows it is the handoff.
Later, there may be notes in the chat, notes in issues, notes in scratch files, old sections in handoff.md, and a few assumptions that were true yesterday but not true today.
When a fresh AI session starts, someone still has to answer:
Which state is the intended resume point?
If the answer depends on a human reading everything and deciding what is current, the handoff has become manual again.
### Supporting Notes Mix With Resume-Critical State
Long tasks produce useful supporting details:
- investigation notes
- error outputs
- links
- file lists
- rejected approaches
- setup observations
- "maybe useful later" context
Some of that information matters, but not all of it belongs in the first thing the next AI reads.
When everything goes into one Markdown file, the compact resume note can become another noisy document.
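One way to keep the resume note compact is to route entries by category: resume-critical items stay in the checkpoint, everything else goes to a supporting-notes file that the checkpoint merely links to. A minimal sketch, where the category names and the `notes/supporting.md` path are illustrative, not a fixed convention:

```python
# Categories that belong in the first thing the next session reads.
# These names are illustrative, not a standard.
RESUME_CRITICAL = {"goal", "state", "decision", "failed", "blocker", "next"}

def split_note(entries):
    """entries: list of (category, text) pairs.

    Returns (checkpoint_lines, supporting_lines): the compact resume
    note, and the bulkier detail it should only point at.
    """
    checkpoint, supporting = [], []
    for category, text in entries:
        target = checkpoint if category in RESUME_CRITICAL else supporting
        target.append(f"- [{category}] {text}")
    if supporting:
        # The checkpoint references the detail instead of inlining it.
        checkpoint.append("- [refs] see notes/supporting.md for details")
    return checkpoint, supporting
```

The exact mechanism matters less than the rule it enforces: the checkpoint stays short by construction, and the "maybe useful later" material has somewhere else to live.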
### Cross-Client Handoff Adds Friction
If you stay inside one repository on one machine, a local file is convenient.
But AI-agent work often moves around:
- from one AI session to another
- from one MCP-capable client to another
- from a local coding session to a remote or browser-based workflow
- from today's task to a continuation several days later
At that point, the handoff is less about "where is the file?" and more about "what is the explicit checkpoint this next agent should resume from?"
### Safety Depends on Memory
A Markdown file will store whatever you put in it.
That flexibility is useful, but it also means you need discipline every time.
You should not put these into handoff notes:
- API keys
- passwords
- access tokens
- Authorization headers
- cookies
- private database URLs
- local client keys
- full chat transcripts
- long logs
- large source-code bodies
This is true whether the handoff is a local file or a tool-backed checkpoint.
The difference is that a tool flow can make the boundary more explicit and repeatable.
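You can make part of that boundary mechanical with a pre-save check. The patterns below are a rough sketch and nowhere near exhaustive; for real enforcement you would use a dedicated secret scanner such as gitleaks or trufflehog rather than hand-rolled regexes.

```python
import re

# Rough shapes of credentials that should never land in a handoff note.
# Illustrative only; a dedicated scanner covers far more cases.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|secret|password|token)\b\s*[:=]\s*\S+"),
    re.compile(r"(?i)\bauthorization:\s*bearer\s+\S+"),
    re.compile(r"(?i)\b(postgres|mysql|mongodb)://\S*:\S*@"),  # creds in DB URLs
]

def flag_secrets(text: str) -> list[str]:
    """Return the lines that look like they contain credentials."""
    return [
        line for line in text.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]
```

Running a check like this before saving a checkpoint turns "remember not to paste secrets" from a memory problem into a repeatable step.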
## What A2CR Adds
A2CR is an MCP-compatible handoff layer for AI agents.
The core idea is still the same:
Do not pass the whole chat history.
Pass the working state.
A2CR just gives that working state a more deliberate structure.
The main concepts today are:
- WorkBaton: the compact checkpoint the next AI session should resume from
- WorkStash: temporary supporting notes referenced from the WorkBaton when the detail would make the checkpoint too large
In other words:
WorkBaton = the first thing the next AI should read
WorkStash = supporting notes to open only if needed
That separation matters because the first few seconds of a fresh AI session are important. If the resume note is too broad, the agent can get pulled into stale assumptions or irrelevant detail.
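The baton/stash split can be sketched as follows. To be clear, this is a conceptual illustration of the read order, not A2CR's actual data model or API; all names here are mine.

```python
from dataclasses import dataclass, field

@dataclass
class WorkStashEntry:
    key: str    # how the baton refers to this note
    body: str   # bulkier supporting detail (trimmed, no secrets)

@dataclass
class WorkBaton:
    # The compact checkpoint a fresh session reads first.
    goal: str
    next_action: str
    stash_refs: list[str] = field(default_factory=list)  # keys, not bodies

def resume_order(baton: WorkBaton, stash: dict[str, WorkStashEntry]):
    """Yield what a fresh session reads: the baton first, then only
    the stash entries the baton explicitly references."""
    yield f"baton: {baton.goal} -> {baton.next_action}"
    for key in baton.stash_refs:
        if key in stash:
            yield f"stash[{key}]: {stash[key].body}"
```

The design point is that the stash bodies never sit inside the checkpoint: the baton stays small no matter how much supporting detail accumulates.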
## The Hosted-Service Boundary
One important boundary: A2CR is not a fully local or offline-only tool.
The current public preview uses a local stdio MCP wrapper, a2cr-mcp, backed by the hosted A2CR service at https://a2cr.app.
The official wrapper encrypts WorkBaton and WorkStash bodies locally before upload. The hosted service stores ciphertext. Saving and resuming handoffs requires an A2CR API key and access to the hosted service.
A2CR is not a secret manager, and it is not a place for raw logs, credentials, or entire chat histories.
## Links
- A2CR app: https://a2cr.app/
- GitHub: https://github.com/a2cr/a2cr
- Usage guide: https://github.com/a2cr/a2cr/blob/main/docs/usage.md
- MCP setup: https://github.com/a2cr/a2cr/blob/main/docs/mcp-setup.md
- Japanese comparison article on Zenn: https://zenn.dev/a2cr/articles/86612e29894ea6