# Let Your Claude Code Agents Talk to Each Other: Introducing agent-dispatch
TL;DR: An MCP server that lets Claude Code agents delegate tasks to specialized agents in other project directories. No more context-copying or permission sprawl. GitHub | PyPI
## The Problem: Agents Live in Silos
Working with Claude Code is fantastic until you need to cross project boundaries. If I'm debugging backend/ but need to check infra/ logs or query a staging DB, I'm forced to:
- **Copy-paste context** → fragile, and it loses project-specific config.
- **Give one agent access to everything** → a security risk plus prompt bloat.
- **Switch terminals manually** → breaks flow and defeats the point of an AI assistant.
I wanted specialized agents that collaborate without sharing credentials, configs, or context pollution. So I built agent-dispatch.
## What Is It?
agent-dispatch is an MCP server that turns each project directory into an isolated, callable agent. When you dispatch a task, it spawns a fresh `claude -p` session in that directory, inherits its local `CLAUDE.md`, `.mcp.json`, and tools, executes the task, and returns a structured result.
Think of it as microservices for AI workflows: composable, cached, and sandboxed.
## Quick Start (30s)

```bash
pip install agent-dispatch
agent-dispatch init
agent-dispatch add infra ~/projects/infra
agent-dispatch test infra "List running Docker containers"
```

That's it. Now every Claude session can dispatch tasks to your infra agent.
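Under the hood, `add` registers the agent in a local registry file (`~/.config/agent-dispatch/agents.yaml`). As a purely illustrative sketch (the field names here are assumptions, not the tool's actual schema), an entry might look like:

```yaml
# Hypothetical agents.yaml entry; key names are illustrative only.
agents:
  infra:
    path: ~/projects/infra
    description: "Infrastructure project: Docker, logs, staging DB"
```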
## How It Works

```
Your Claude session (e.g., backend/)
  │
  ├── dispatch("infra", "find scheduler errors", caller="backend")
  │
  ▼
agent-dispatch MCP server
  ├── Cache check → hit? return instantly
  ├── Safety check (depth, budget, concurrency)
  └── subprocess.run("claude -p ...", cwd=~/projects/infra/)
        │
        ▼
Fresh Claude session in infra/
  ├── Loads infra's CLAUDE.md, .mcp.json, tools
  ├── Receives structured prompt: goal + caller + context + task
  └── Executes → returns result → cached for next time
```
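The flow above can be modeled in a few lines of Python. This is a simplified sketch, not agent-dispatch's actual source: the cache-key scheme, prompt format, and exact `claude -p` invocation are assumptions based on the description above.

```python
import hashlib
import json
import subprocess

CACHE: dict[str, str] = {}

def cache_key(agent: str, task: str, context: str = "") -> str:
    # Identical (agent, task, context) triples map to the same key.
    payload = json.dumps([agent, task, context])
    return hashlib.sha256(payload.encode()).hexdigest()

def build_prompt(goal: str, caller: str, context: str, task: str) -> str:
    # The structured prompt the callee session receives (format assumed).
    return f"Goal: {goal}\nCaller: {caller}\nContext: {context}\nTask: {task}"

def dispatch(agent_dir: str, agent: str, task: str, *,
             goal: str = "", caller: str = "", context: str = "",
             timeout: int = 300) -> str:
    key = cache_key(agent, task, context)
    if key in CACHE:
        return CACHE[key]  # cache hit: return instantly
    prompt = build_prompt(goal, caller, context, task)
    # Fresh headless session run from the target project's directory,
    # so it picks up that project's CLAUDE.md, .mcp.json, and tools.
    proc = subprocess.run(["claude", "-p", prompt], cwd=agent_dir,
                          capture_output=True, text=True, timeout=timeout)
    if proc.returncode == 0:
        CACHE[key] = proc.stdout  # only successes are cached
    return proc.stdout
```

The real server adds the safety and budget checks described below before spawning anything.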
## The Toolkit

| Tool | Use Case |
|---|---|
| `dispatch` | One-shot task. Cached by default. |
| `dispatch_session` | Multi-turn conversation with context retention. |
| `dispatch_parallel` | Fan-out to multiple agents simultaneously. |
| `dispatch_stream` | Live token streaming for long tasks. |
| `dispatch_dialogue` | Two agents collaborate until `[RESOLVED]`. |
Example call:

```json
{
  "agent": "db",
  "task": "Are all migrations applied?",
  "caller": "backend",
  "goal": "Debug startup failure"
}
```
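To illustrate the fan-out semantics of `dispatch_parallel`: the real tool is invoked over MCP, not through a Python API, but a bounded worker pool is a reasonable mental model of the behavior. `fake_dispatch` here is a hypothetical stand-in for a real dispatch call.

```python
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENCY = 4  # mirrors the max_concurrency cap described below

def fake_dispatch(agent: str, task: str) -> str:
    # Stand-in for a real dispatch; returns a canned answer.
    return f"{agent}: done ({task})"

def dispatch_parallel(tasks: dict[str, str]) -> dict[str, str]:
    # Fan a {agent: task} map out to a bounded pool and collect
    # one result per agent.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
        futures = {agent: pool.submit(fake_dispatch, agent, task)
                   for agent, task in tasks.items()}
        return {agent: f.result() for agent, f in futures.items()}
```

For example, `dispatch_parallel({"infra": "check disk", "db": "check migrations"})` returns one answer per agent without the caller waiting on them sequentially.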
Safety & Cost Control π‘οΈ
I didn't want a demo toyβI wanted production-ready tooling. Built-in safeguards:
-
Recursion protection:
max_dispatch_depth(default: 3) -
Cost limits:
max_budget_usdper agent or globally -
Concurrency caps:
max_concurrencylimits parallelclaude -pprocesses - Timeouts: kills stuck sessions (default: 300s)
-
Caching: identical
(agent, task, context)requests return instantly. Only successes cached. -
Hot-reload config: add/remove agents via
~/.config/agent-dispatch/agents.yamlwithout restarting.
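A sketch of how those guardrails might compose before a dispatch is allowed to run. The limit names follow the list above; the enforcement logic itself is an assumption, not the project's actual code.

```python
class DispatchRejected(Exception):
    """Raised when a dispatch would violate a safety limit."""

MAX_DISPATCH_DEPTH = 3   # recursion protection
MAX_BUDGET_USD = 5.00    # example budget; set per agent or globally
MAX_CONCURRENCY = 4      # parallel claude -p processes

def check_safety(depth: int, spent_usd: float, est_cost_usd: float,
                 active_processes: int) -> None:
    # Reject up front rather than killing a session mid-flight.
    if depth >= MAX_DISPATCH_DEPTH:
        raise DispatchRejected(f"dispatch depth {depth} >= {MAX_DISPATCH_DEPTH}")
    if spent_usd + est_cost_usd > MAX_BUDGET_USD:
        raise DispatchRejected("budget exhausted")
    if active_processes >= MAX_CONCURRENCY:
        raise DispatchRejected("too many concurrent sessions")
```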
## When to Use (and When Not To)

✅ **Do dispatch when:**

- The task needs tools, files, or context from another project
- You want to leverage project-specific MCP servers or `CLAUDE.md`
- You need parallel answers from multiple codebases

❌ **Don't dispatch when:**

- The task is simple and local (just use your current agent)
- You need sub-second latency
- You're in a tight loop (use `dispatch_parallel` instead)
## Try It & Let Me Know!

```bash
pip install agent-dispatch
agent-dispatch init
agent-dispatch add my-agent ~/path/to/project
```

I'd love your feedback:

- What agent collaboration patterns would you use this for?
- What safety features are missing?
- Would you prefer a different abstraction (e.g., HTTP API vs. MCP)?

Drop a comment below or open an issue. This is very much a "build in public" project.