You know that moment when Claude Code has been spinning on the same TypeScript error for the third time? You paste the same context, try rephrasing your prompt, and it still misses the fix.
What if Claude could just... ask Codex for help?
That's not a hypothetical anymore. I've been running a setup where my AI coding agents collaborate with each other, and it's changed how I work.
## The Problem: Single-Agent Tunnel Vision
Every AI model has blind spots. Claude is great at architectural reasoning but sometimes overthinks simple fixes. Codex is fast and practical but can miss edge cases. Gemini has strong research capabilities but may not know your codebase conventions.
When you're stuck, you switch between agents manually — copy context from one, paste it into another, translate the answer back. It works, but it's slow and painful.
## The Fix: Let Agents Talk to Each Other
I built agent-link-mcp, an MCP server that lets any AI coding agent spawn other agents as collaborators. The key insight: only the host agent needs the MCP server installed. The other agents are just CLI subprocesses.
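Under the hood, the idea is simple: a "spawned agent" is just a child process that gets the prompt on stdin and streams its answer back on stdout. Here's a rough TypeScript sketch of that mechanic (the `runAgent` helper is illustrative, not agent-link-mcp's actual internals):

```typescript
import { spawn } from "node:child_process";

// Minimal sketch: run an agent CLI as a subprocess, pipe the prompt to
// stdin, and collect stdout as the agent's answer. The command name and
// args are whatever agent CLI you have installed.
function runAgent(cmd: string, args: string[], prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { stdio: ["pipe", "pipe", "inherit"] });
    let out = "";
    child.stdout.on("data", (chunk) => (out += chunk));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`agent exited with ${code}`))
    );
    child.stdin.write(prompt);
    child.stdin.end();
  });
}
```

Because the collaborators are plain subprocesses, they don't need any MCP configuration of their own, which is why only the host agent installs the server.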
Here's what it looks like in practice:
```shell
# Install in Claude Code (one command)
claude mcp add agent-link npx agent-link-mcp
```
That's it. Now Claude Code can talk to any other agent CLI you have installed.
## Real Example: Debugging With a Second Opinion
I was building a WebSocket reconnection handler. Claude kept suggesting the same approach that wasn't working. So I had it ask Codex:
```json
{
  "agent": "codex",
  "task": "This WebSocket reconnection logic causes duplicate connections. Why?",
  "context": {
    "files": ["src/ws-client.ts"],
    "error": "MaxListenersExceededWarning: Possible EventEmitter memory leak"
  }
}
```
Codex came back in 20 seconds with the answer: I was registering new event listeners on every reconnect without removing the old ones. Classic mistake that Claude kept missing because it was focused on the reconnection logic itself, not the listener cleanup.
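The bug is worth seeing in miniature. This is a toy reproduction I wrote for this post (my own simplified `WsClient`, not the actual handler from that session), showing why listeners pile up and what the fix looks like:

```typescript
import { EventEmitter } from "node:events";

// Toy reproduction: each "reconnect" registers a fresh listener without
// removing the old one, so listeners accumulate and eventually trigger
// MaxListenersExceededWarning (Node's default cap is 10 per event).
class WsClient extends EventEmitter {
  private handleMessage = (data: string) => {
    console.log("got:", data);
  };

  reconnectBuggy() {
    // Leaks: adds one extra listener on every reconnect
    this.on("message", this.handleMessage);
  }

  reconnectFixed() {
    // Clean up first, so there is always exactly one listener
    this.off("message", this.handleMessage);
    this.on("message", this.handleMessage);
  }
}

const buggy = new WsClient();
for (let i = 0; i < 3; i++) buggy.reconnectBuggy();
console.log(buggy.listenerCount("message")); // 3, and climbing

const fixed = new WsClient();
for (let i = 0; i < 3; i++) fixed.reconnectFixed();
console.log(fixed.listenerCount("message")); // always 1
```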
## Cross-Model Code Review
This is where it gets really useful. Before merging a feature branch, I have Claude ask a different model to review:
```json
{
  "agent": "codex",
  "task": "Review these changes for bugs, edge cases, and performance issues",
  "context": {
    "files": ["src/api.ts", "src/handler.ts"],
    "intent": "Code review before merge"
  }
}
```
Different models catch different classes of bugs. It's like having a second pair of eyes on demand, without waiting on a human reviewer.
## Bidirectional Conversations
The spawned agent can ask questions back. Claude answers, and work continues:
```
Claude: spawn_agent("codex", "Add Redis caching to the API layer")
Codex:  [QUESTION] Should I use Redis or in-memory cache?
Claude: reply("codex-a1b2c3", "Redis — it's in our docker-compose.yml")
Codex:  [RESULT] Added Redis caching with 5-minute TTL. Here's what changed...
```
## The Two-Strike Rule
Here's my workflow: if I ask my primary agent to fix something and it fails twice, it automatically asks another agent. No more banging my head against the same wall.
You can set this up by adding to your CLAUDE.md:
```
When you fail to solve the same issue twice, use spawn_agent to ask
another agent (codex, gemini) for a fresh perspective. Pass the error
message and relevant files as context.
```
## What Agents Can You Use?
Anything with a CLI:
| Agent | Install |
|---|---|
| Claude Code | `npm i -g @anthropic-ai/claude-code` |
| Codex | `npm i -g @openai/codex` |
| Gemini CLI | `npm i -g @google/gemini-cli` |
| Aider | `pip install aider-chat` |
agent-link-mcp auto-detects what's installed. You can also add custom agents (local LLMs via Ollama, etc.) through a config file.
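For custom agents, the config boils down to mapping a name to a command and its args. The field names below are my guess at the shape (check the repo's docs for the real schema), wiring up a local model served by Ollama:

```json
{
  "_note": "hypothetical shape, see the repo docs for the real schema",
  "agents": {
    "local-llama": {
      "command": "ollama",
      "args": ["run", "llama3"]
    }
  }
}
```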
## Multi-Agent Pipelines
For larger tasks, I split the work across multiple agents, one per phase:
```
# Research phase
spawn_agent("gemini", "Find best practices for rate limiting in Node.js")

# Implementation (using research results)
spawn_agent("codex", "Implement token bucket rate limiter", {
  files: ["src/middleware/"]
})

# Review
spawn_agent("claude", "Review for production readiness", {
  files: ["src/middleware/rate-limiter.ts"]
})
```
Each agent brings its strengths. Gemini researches, Codex implements, Claude reviews.
## Setup in 2 Minutes
```shell
# 1. Install the MCP server
claude mcp add agent-link npx agent-link-mcp

# 2. Make sure you have at least one other agent CLI
npm i -g @openai/codex

# 3. That's it. Try it:
# In Claude Code, ask: "Use list_agents to see what's available"
```
The GitHub repo has full docs, including templates for CLAUDE.md and AGENTS.md that you can drop into your projects.
## Is This Actually Useful?
After two weeks of using this daily: yes. The biggest wins are:
- Debugging — When one agent is stuck, another usually spots the issue immediately
- Code review — Different models catch different classes of bugs
- Learning — Seeing how different models approach the same problem is educational
The biggest limitation is speed — spawning a CLI agent takes 10-30 seconds depending on the model. But when you're truly stuck, that's nothing compared to the hours you'd spend otherwise.
agent-link-mcp is open source (MIT). It works with any MCP-compatible host and any CLI-based AI agent. Install with `npx agent-link-mcp`.