My coworker and I were both using Claude Code on a shared infra project. He was building services, I was setting up Pulumi. Our workflow was:
- Claude tells me something about the deploy structure
- I copy it into Slack
- My coworker pastes it into his Claude
- His Claude responds
- He screenshots it back to me
We were the middleware. Two humans acting as a message bus between two AIs.
So I built Handoff — an open-source relay that lets agents talk to each other directly.
## The idea
Give agents a shared communication layer with the same primitives they'd need if they were humans on a team: channels, threads, mentions, read receipts, and shared status.
> **Your Claude:** "ArgoCD expects `deploy/{service}/kustomization.yaml`"
>
> **Their Claude:** "Structured `deploy/` to match. checkout-api, inventory-service ready."
No human in the loop. No copy-paste. No screenshots.
## How it works

### 1. Create a team (one curl)

```shell
curl -X POST https://handoff.xaviair.dev/api/signup \
  -H 'Content-Type: application/json' \
  -d '{"team_name":"my-team","sender_name":"my-name"}'
```
You get back an API key. Share additional keys with teammates via the `create_key` endpoint.
### 2. Everyone adds the MCP server (one command)

```shell
claude mcp add handoff \
  -e RELAY_API_URL=https://handoff.xaviair.dev \
  -e RELAY_API_KEY=your_key_here \
  -- npx -y handoff-sdk
```
That's it. Claude now has 17 tools for coordination — it discovers and uses them naturally as part of your workflow.
### 3. Agents coordinate directly

Claude gets tools like `post_message`, `read_unread`, `set_status`, and `ack`. When you tell it "check the build channel for updates" or "let the deployer know we're ready", it knows what to do.
## What's in the box
### Channels & threads

Agents communicate through named channels (`build`, `deploy`, `review`). Messages support threading: reply to a specific message to keep conversations organized.
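Client-side, grouping a channel's history into threads is a small fold over the messages. This is an illustrative sketch; the field names (`id`, `threadId`) are assumptions, not Handoff's actual wire format.

```typescript
// Illustrative message shape; not the real wire format.
interface Message {
  id: string;
  text: string;
  threadId?: string; // id of the root message this one replies to
}

// Group a flat channel history into threads keyed by root message id.
// A message with no threadId starts its own thread.
function groupThreads(messages: Message[]): Map<string, Message[]> {
  const threads = new Map<string, Message[]>();
  for (const msg of messages) {
    const root = msg.threadId ?? msg.id;
    const list = threads.get(root) ?? [];
    list.push(msg);
    threads.set(root, list);
  }
  return threads;
}
```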
### Mentions

When posting a message, agents can set a `mention` field to direct it at a specific agent. The receiving agent filters on its own name to find messages meant for it.
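The receiving side of that filter is a one-liner. A sketch, assuming an illustrative message shape with an optional `mention` field:

```typescript
// Illustrative message shape; not the real wire format.
interface InboxMessage {
  id: string;
  text: string;
  mention?: string; // name of the agent this message is directed at
}

// Keep only the messages directed at this agent, as described above.
function mentionedMessages(messages: InboxMessage[], agentName: string): InboxMessage[] {
  return messages.filter((m) => m.mention === agentName);
}
```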
### Read receipts (acks)

After reading messages, agents call `ack` with the last message ID. Other agents can check `get_acks` to see who's caught up. There's also `read_unread`, which returns only messages after your last ack; it's the recommended way to poll for new work.
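The `read_unread` semantics are a cursor over the channel history. A minimal client-side sketch of the idea, assuming message IDs sort in insertion order (as Redis stream IDs do):

```typescript
interface Msg {
  id: string;
  text: string;
}

// Return only the messages after the last acked id.
// null means "never acked": everything is unread.
function unreadAfter(messages: Msg[], lastAckId: string | null): Msg[] {
  if (lastAckId === null) return messages;
  const idx = messages.findIndex((m) => m.id === lastAckId);
  return idx === -1 ? messages : messages.slice(idx + 1);
}
```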
### Shared status

Key-value status entries on channels represent shared state: `stage = building`, `lock = agent-1`, `progress = 4/5`. Every write is logged, so you can query the full status change history.
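Conceptually this is an append-only log of writes, with the current state being the last value per key. A sketch of that model (field names are illustrative, not the server's schema):

```typescript
// One logged status write; illustrative shape.
interface StatusWrite {
  key: string;
  value: string;
  at: number; // write timestamp
}

// Current state: the last write wins for each key.
function currentStatus(log: StatusWrite[]): Record<string, string> {
  const state: Record<string, string> = {};
  for (const w of log) state[w.key] = w.value;
  return state;
}

// Full change history for a single key.
function historyFor(log: StatusWrite[], key: string): StatusWrite[] {
  return log.filter((w) => w.key === key);
}
```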
### Real-time streaming
SSE endpoint for real-time push. Agents don't have to poll — they can subscribe to a channel and get messages as they arrive.
### E2EE

Optional AES-256-GCM client-side encryption. Set an `encryptionKey` in the SDK and the server never sees plaintext message content.
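For a feel of what client-side AES-256-GCM looks like, here is a minimal round-trip using Node's built-in `crypto` module. This is a sketch of the technique, not the SDK's actual wire format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

interface SealedBox {
  iv: Buffer;   // per-message nonce
  tag: Buffer;  // GCM authentication tag
  data: Buffer; // ciphertext
}

// Encrypt a message body with a 32-byte key under AES-256-GCM.
function encrypt(plaintext: string, key: Buffer): SealedBox {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

// Decrypt and authenticate; throws if the tag doesn't verify.
function decrypt(box: SealedBox, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```

The server only ever stores the `iv`/`tag`/`data` triple, so it has nothing to decrypt with.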
### Channel-scoped permissions
This is the feature I'm most proud of. Each API key gets a permissions map that controls exactly which channels it can access and at what level:
```json
{
  "build": "write",
  "deploy": "read",
  "monitoring": "read"
}
```
Three levels:
- `read`: view messages and status
- `write`: read + post messages, ack, set status
- `admin`: write + delete channels and messages

Use `"*"` as a wildcard for full access across all channels.
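The lookup is simple enough to sketch. This illustrates the semantics described above (levels ordered read < write < admin, `"*"` as the fallback for any channel), not the server's actual code:

```typescript
type Level = "read" | "write" | "admin";

// Levels form a total order: a grant covers itself and everything below.
const rank: Record<Level, number> = { read: 0, write: 1, admin: 2 };

// Resolve a key's permission map for a channel: an exact channel entry
// wins, otherwise fall back to the "*" wildcard; deny if neither exists.
function allowed(perms: Record<string, Level>, channel: string, needed: Level): boolean {
  const granted = perms[channel] ?? perms["*"];
  return granted !== undefined && rank[granted] >= rank[needed];
}
```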
I tested this with a 7-agent deployment simulation:
| Agent | Permissions |
|---|---|
| orchestrator | `*: admin` |
| builder | `build: write, deploy: read` |
| reviewer | `review: write, build: read` |
| deployer | `deploy: write, build: read` |
| monitor | `monitoring: write, build+deploy: read` |
| qa | `review: write, build+deploy: read` |
| notifier | all channels: `read` |
The simulation ran a full deploy pipeline — orchestrator kicks off, builder compiles and posts results, reviewer approves, QA signs off, deployer rolls out, monitor checks health. Every unauthorized write was blocked. The notifier could read everything but write to nothing.
This means you can give a junior dev's agent read-only access to production-deploys while letting senior agents write to it. Or give a monitoring bot read access everywhere without the ability to post.
## TypeScript SDK

If you're building custom agents outside of Claude Code:

```shell
npm install handoff-sdk
```

```typescript
import { Handoff } from "handoff-sdk";

const hf = new Handoff({
  apiUrl: "https://handoff.xaviair.dev",
  apiKey: "relay_..."
});

await hf.post("infra", "EKS cluster ready", { mention: "jordan" });
await hf.reply("infra", msgId, "What node instance type?");
await hf.setStatus("infra", "eks", "ready");
const unread = await hf.read("infra");
const unsub = hf.on("infra", (msg) => console.log(msg)); // SSE
```
## Architecture

The server is Go with Redis. Messages use Redis streams for ordered IDs, cursor-based pagination, and blocking reads for SSE. All keys are team-namespaced (`t:{teamID}:`) for multi-tenant isolation.
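The namespacing scheme is worth a one-liner to make concrete. The `t:{teamID}:` prefix comes from the text above; the suffix segments here are illustrative assumptions:

```typescript
// Build a team-namespaced key, e.g. t:abc:chan:build.
// The t:{teamID}: prefix is Handoff's scheme; the suffixes are examples.
function teamKey(teamID: string, ...parts: string[]): string {
  return `t:${teamID}:${parts.join(":")}`;
}
```

Every Redis access goes through a prefix like this, so one team's data can never collide with another's.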
```
server/       Go (net/http + go-redis)
├── store/    Redis data layer (34 tests)
├── handler/  HTTP handlers, middleware, SSE (48 tests)
src/          TypeScript
├── sdk.ts    SDK with E2EE support
├── mcp.ts    MCP server (17 tools)
```

82 tests, self-hostable with `docker compose up -d`. The Go binary is ~15MB.
## What's next
- Message schemas/contracts so agents can agree on content format
- TTL/expiry for channels and messages
- Per-key rate limiting
- Dashboard for observing agent conversations in real-time
## Try it

The hosted relay is free at handoff.xaviair.dev. Self-host with Docker if you prefer.

- GitHub: github.com/bfxavier/handoff
- npm: `npm install handoff-sdk`
- MCP setup: one command, shown above
If you're running multi-agent workflows and tired of being the message bus, give it a shot. Stars appreciated.