The problem
If you've tried running multiple AI coding agents in parallel — say, Claude Code in one terminal doing implementation while another does code review — you've probably hit this: they have no idea the other exists.
Agent A refactors a function. Agent B, unaware, edits the same function. You get merge conflicts, duplicated work, and wasted tokens. The more agents you run, the worse it gets.
The fix: give them a way to talk
I built agent-comm, an open-source communication server that any AI coding agent can plug into. It works with Claude Code, Codex CLI, Gemini CLI, Aider — anything that supports MCP or can make HTTP requests.
What agents can do with it
1. Discover each other
Agents register with a name and capabilities. Others can query who's online and what they're working on.
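The discovery model can be sketched with an in-memory registry. The shape of the real agent-comm registration call is an assumption here; this just shows the idea of registering a name plus capabilities and querying who's online:

```typescript
// In-memory sketch of agent discovery. Method names (`register`, `online`,
// `whoCan`) are illustrative, not necessarily the actual agent-comm tool names.
interface AgentInfo {
  name: string;
  capabilities: string[];
  workingOn?: string;
}

class Registry {
  private agents = new Map<string, AgentInfo>();

  register(info: AgentInfo): void {
    this.agents.set(info.name, info);
  }

  // List the names of every registered agent.
  online(): string[] {
    return [...this.agents.keys()];
  }

  // Find agents advertising a given capability.
  whoCan(capability: string): string[] {
    return [...this.agents.values()]
      .filter((a) => a.capabilities.includes(capability))
      .map((a) => a.name);
  }
}

const registry = new Registry();
registry.register({ name: "implementer", capabilities: ["edit", "refactor"], workingOn: "auth module" });
registry.register({ name: "reviewer", capabilities: ["review"] });

const reviewers = registry.whoCan("review"); // ["reviewer"]
```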
2. Send messages
Direct messages, broadcasts, topic-based channels, threading, reactions. An agent finishing a task can broadcast "auth module refactored, ready for review" and another agent picks it up.
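The broadcast-and-pickup flow can be sketched with a minimal in-memory channel bus (the real agent-comm transport is MCP/HTTP; `subscribe`/`publish` here are illustrative names):

```typescript
// Minimal topic-based pub/sub sketch: one agent broadcasts on a channel,
// any subscribed agent picks it up.
type Handler = (from: string, body: string) => void;

class ChannelBus {
  private subs = new Map<string, Handler[]>();

  subscribe(channel: string, handler: Handler): void {
    const list = this.subs.get(channel) ?? [];
    list.push(handler);
    this.subs.set(channel, list);
  }

  publish(channel: string, from: string, body: string): void {
    for (const h of this.subs.get(channel) ?? []) h(from, body);
  }
}

const bus = new ChannelBus();
const received: string[] = [];

// The reviewer agent listens on a channel...
bus.subscribe("reviews", (from, body) => received.push(`${from}: ${body}`));

// ...and the implementer broadcasts when a task is done.
bus.publish("reviews", "implementer", "auth module refactored, ready for review");
```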
3. Coordinate with shared state
A namespaced key-value store with atomic compare-and-swap. This is the real workhorse — agents use it for:
- Distributed locks: "I'm editing src/auth.ts, back off"
- Progress flags: "migration step 3 of 5 complete"
- Shared config: feature flags, environment state
The CAS operation means two agents can't accidentally grab the same lock — exactly like a mutex, but for AI agents.
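The lock semantics can be sketched with an in-memory stand-in for the store. The real agent-comm store lives server-side, and `compareAndSwap`/`releaseIf` are illustrative names, not necessarily the actual tool names:

```typescript
// Minimal in-memory stand-in illustrating compare-and-swap (CAS) locking.
class KvStore {
  private data = new Map<string, string>();

  get(key: string): string | undefined {
    return this.data.get(key);
  }

  // Atomically set `key` to `next` only if its current value equals `expected`.
  // `expected === undefined` means "only succeed if the key is absent".
  compareAndSwap(key: string, expected: string | undefined, next: string): boolean {
    if (this.data.get(key) !== expected) return false;
    this.data.set(key, next);
    return true;
  }

  // Release: delete the key only if the caller still holds it.
  releaseIf(key: string, expected: string): boolean {
    if (this.data.get(key) !== expected) return false;
    this.data.delete(key);
    return true;
  }
}

const store = new KvStore();
const lockKey = "lock:src/auth.ts";

// Two agents race for the same lock; CAS guarantees exactly one wins.
const agentAWon = store.compareAndSwap(lockKey, undefined, "agent-a");
const agentBWon = store.compareAndSwap(lockKey, undefined, "agent-b");
// agentAWon === true, agentBWon === false

// Only the holder can release.
store.releaseIf(lockKey, "agent-a");
```

Because the set only happens when the expected value still matches, the second acquire attempt fails cleanly instead of silently overwriting the first.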
4. Watch it all happen
A real-time dashboard (WebSocket-powered) shows agents, messages, channels, and state — useful for debugging and understanding what your agents are actually doing.
Setup
```shell
npm install -g agent-comm
```
Add to your MCP config:
```json
{
  "mcpServers": {
    "agent-comm": {
      "command": "npx",
      "args": ["agent-comm"]
    }
  }
}
```
The dashboard launches automatically at http://localhost:3421.
A real coordination pattern
Here's how I use it with three Claude Code sessions:
- Implementer registers, checks shared state for the current task queue
- Reviewer watches for "ready-for-review" state changes, picks up completed work
- Tester monitors the "reviewed" channel, runs tests on approved changes
They coordinate through channels and state locks. No merge conflicts, no duplicate work. Each agent knows what the others are doing.
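The implementer-to-reviewer hand-off boils down to a status flag claimed via compare-and-swap. Key and status names below ("task:42:status", "ready-for-review") are made up for illustration; the point is that claiming work is atomic, so two reviewers can never pick up the same task:

```typescript
// Sketch of the hand-off through shared state.
const state = new Map<string, string>();

// CAS: set `key` to `next` only if its current value equals `expected`.
function cas(key: string, expected: string | undefined, next: string): boolean {
  if (state.get(key) !== expected) return false;
  state.set(key, next);
  return true;
}

// Implementer finishes and flips the flag.
state.set("task:42:status", "ready-for-review");

// Reviewer polls and claims the task; a second claim attempt loses the race.
const claimed = cas("task:42:status", "ready-for-review", "in-review");
const claimedTwice = cas("task:42:status", "ready-for-review", "in-review");
// claimed === true, claimedTwice === false
```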
Technical details
- TypeScript, SQLite (WAL mode + FTS5 full-text search)
- 33 MCP tools, 22 REST endpoints
- 214 tests across 11 suites
- MIT licensed
The MCP tools cover agent management, messaging, channels, and shared state. The REST API mirrors everything for non-MCP clients.
What's next
I'm using this daily and iterating. I'd love feedback from anyone else running multi-agent setups.
GitHub: github.com/keshrath/agent-comm