One of the most common mistakes in AI system architecture is building point-to-point integrations with specific LLM providers.
You choose Anthropic. You integrate Claude directly. Six months later you want to benchmark against GPT-4.1, or a new model drops that changes the playing field. Now you're rewriting integration code.
a2a-opencode solves this with a different approach:
- Wrap OpenCode — which already supports Anthropic, OpenAI, GitHub Copilot, and more — behind the A2A protocol
- Your orchestration layer speaks A2A, not "Claude API" or "OpenAI API"
- Swap model providers in one config line. Your orchestrator never changes.
GitHub: https://github.com/shashikanth-gs/a2a-opencode
npm: https://www.npmjs.com/package/a2a-opencode
## What is A2A?
The A2A (Agent-to-Agent) protocol is an open standard for agent interoperability. It defines how agents:
- Advertise capabilities via Agent Cards (`/.well-known/agent-card.json`)
- Accept tasks via JSON-RPC and REST
- Stream responses via SSE
- Maintain task lifecycle (submitted → working → completed)
When an agent speaks A2A, any orchestrator that understands the protocol can discover and call it — without knowing anything about the underlying model or provider.
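The concepts above can be sketched in a few lines of TypeScript. These are illustrative shapes, not the official A2A SDK types: the `AgentCard` fields mirror the card shown later in this post, and `canTransition` is a hypothetical helper modeling the lifecycle.

```typescript
// Minimal, illustrative shapes for the A2A concepts above (not the official SDK types).
interface AgentSkill {
  id: string;
  name: string;
  description: string;
  tags: string[];
}

interface AgentCard {
  name: string;
  description: string;
  version: string;
  protocolVersion: string;
  url: string;
  capabilities: { streaming: boolean };
  skills: AgentSkill[];
}

// Task lifecycle: submitted → working → completed (terminal).
type TaskState = "submitted" | "working" | "completed";

const transitions: Record<TaskState, TaskState[]> = {
  submitted: ["working"],
  working: ["completed"],
  completed: [], // terminal state: no further transitions
};

function canTransition(from: TaskState, to: TaskState): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("submitted", "working")); // true
console.log(canTransition("completed", "working")); // false
```

An orchestrator only needs these protocol-level shapes; nothing here mentions a model or a provider.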
## Prerequisites
- Node.js 18+
- OpenCode installed (`npm install -g opencode-ai` or equivalent)
- A supported LLM provider API key (Anthropic, OpenAI, or GitHub Copilot via `gh auth login`)
## Step 1: Start OpenCode
```shell
opencode serve
# OpenCode starts on http://localhost:4096 by default
```
## Step 2: Install a2a-opencode
```shell
npm install -g a2a-opencode
```
Or run without installing:
```shell
npx a2a-opencode --config path/to/config.json
```
## Step 3: Configure Your Agent
Create `my-agent/config.json`:
```json
{
  "agentCard": {
    "name": "My OpenCode Agent",
    "description": "A vendor-neutral AI agent with MCP tool support",
    "version": "1.0.0",
    "protocolVersion": "0.3.0",
    "streaming": true,
    "skills": [
      {
        "id": "code-review",
        "name": "Code Review",
        "description": "Analyze, review, and refactor code",
        "tags": ["code", "review", "refactor", "security"]
      }
    ]
  },
  "server": {
    "port": 3001,
    "advertiseHost": "localhost"
  },
  "opencode": {
    "baseUrl": "http://localhost:4096",
    "model": "anthropic/claude-sonnet-4-20250514",
    "systemPrompt": "You are a code review expert. Analyze code for bugs, performance, and security issues.",
    "autoApprove": true,
    "autoAnswer": true
  }
}
```
## Step 4: Start the A2A Wrapper
```shell
a2a-opencode --config my-agent/config.json
```
Output:
```
[info] A2A server started
[info] Agent Card: http://localhost:3001/.well-known/agent-card.json
[info] JSON-RPC: http://localhost:3001/a2a/jsonrpc
[info] REST: http://localhost:3001/a2a/rest
[info] Health: http://localhost:3001/health
```
## Step 5: Discover Your Agent
```shell
curl http://localhost:3001/.well-known/agent-card.json | jq .
```
Response:
```json
{
  "name": "My OpenCode Agent",
  "description": "A vendor-neutral AI agent with MCP tool support",
  "version": "1.0.0",
  "protocolVersion": "0.3.0",
  "url": "http://localhost:3001",
  "capabilities": { "streaming": true },
  "skills": [...]
}
```
This is the A2A Agent Card — the agent's identity and capability manifest. Any A2A-compatible orchestrator can read this and immediately understand what the agent can do.
## Step 6: Send a Task
Via REST:
```shell
curl -X POST http://localhost:3001/a2a/rest \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Review this TypeScript function for bugs and performance issues."}]
    }
  }'
```
Via JSON-RPC:
```shell
curl -X POST http://localhost:3001/a2a/jsonrpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Explain the difference between debounce and throttle."}]
      }
    }
  }'
```
The response streams back in full A2A format with task lifecycle events and artifacts — identical to any other A2A agent, regardless of which LLM is powering it.
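From an orchestrator's side, constructing that request body is mechanical. A minimal TypeScript sketch, where `buildSendTask` is a hypothetical helper (not part of the a2a-opencode API) that mirrors the JSON-RPC curl example above:

```typescript
// Build a JSON-RPC 2.0 envelope for a text task, mirroring the curl example above.
function buildSendTask(id: string, text: string) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tasks/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text }],
      },
    },
  };
}

// An orchestrator would POST this body to http://localhost:3001/a2a/jsonrpc.
const body = JSON.stringify(buildSendTask("1", "Explain debounce vs throttle."));
console.log(body);
```

Note that nothing in the envelope names a model or provider; the same payload works against any A2A agent.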
## Switching Providers = One Line Change
Want to switch from Claude to GPT-4.1?
```json
"opencode": {
  "model": "openai/gpt-4.1"
}
```
Switch to GitHub Copilot?
```json
"opencode": {
  "model": "github/gpt-4.1"
}
```
The A2A interface is identical. Restart the agent. Your orchestrator doesn't change.
## Multi-Agent Example
*One orchestrator routing tasks to three specialized agents across providers*

Run three specialized agents on different ports:
```shell
# Terminal 1: Code review agent (Claude)
a2a-opencode --config agents/code-review/config.json --port 3001

# Terminal 2: Documentation agent (GPT-4.1)
a2a-opencode --config agents/docs/config.json --port 3002

# Terminal 3: Security analysis (Claude Opus)
a2a-opencode --config agents/security/config.json --port 3003
```
Your orchestrator routes to whichever agent fits the task — by capability, skill tags, or load. Each agent is independently swappable.
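Routing by skill tag can be as simple as matching a task's tag against each agent's advertised skills. A sketch, where the card shapes follow the Agent Card from Step 5 and `routeByTag` is a hypothetical orchestrator helper (the three agents and their tags below are illustrative):

```typescript
// Simplified slices of the Agent Cards an orchestrator would have fetched.
interface Skill { id: string; tags: string[] }
interface Agent { url: string; skills: Skill[] }

// Pick the first agent advertising a skill with the requested tag.
function routeByTag(agents: Agent[], tag: string): Agent | undefined {
  return agents.find((a) => a.skills.some((s) => s.tags.includes(tag)));
}

const agents: Agent[] = [
  { url: "http://localhost:3001", skills: [{ id: "code-review", tags: ["code", "review"] }] },
  { url: "http://localhost:3002", skills: [{ id: "docs", tags: ["documentation"] }] },
  { url: "http://localhost:3003", skills: [{ id: "security", tags: ["security", "audit"] }] },
];

console.log(routeByTag(agents, "security")?.url); // http://localhost:3003
```

Because routing keys off Agent Card metadata rather than provider names, swapping the model behind any of these ports leaves the router untouched.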
## Architecture
*Full request flow from A2A client through OpenCode to any LLM provider and MCP tools*

```
A2A Client / Orchestrator
        │
        │ JSON-RPC / REST / SSE
        ▼
a2a-opencode (Express + A2A SDK)
  ├─ SessionManager (contextId → OpenCode session)
  ├─ EventStreamManager (SSE polling + auto-reconnect)
  ├─ PermissionHandler (auto-approves tool calls)
  └─ EventPublisher (OpenCode events → A2A events)
        │
        │ HTTP + SSE
        ▼
OpenCode Server (opencode serve)
  ├─ LLM inference (Anthropic / OpenAI / GitHub Copilot / ...)
  └─ MCP tool execution
        │
        │ MCP Protocol
        ▼
MCP Servers (filesystem, database, custom tools...)
```
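The EventPublisher's job in the diagram, translating backend events into A2A task lifecycle events, amounts to a small mapping. The OpenCode-side event names below are hypothetical placeholders chosen for illustration; the real wrapper's event vocabulary may differ:

```typescript
// Hypothetical OpenCode-side event names (placeholders, not the wrapper's real vocabulary).
type BackendEvent = "session.started" | "message.delta" | "session.idle";
type A2AState = "submitted" | "working" | "completed";

// Map backend events onto the A2A task lifecycle (submitted → working → completed).
const stateFor: Record<BackendEvent, A2AState> = {
  "session.started": "submitted",
  "message.delta": "working",
  "session.idle": "completed",
};

function toA2AEvent(ev: BackendEvent) {
  return { kind: "status-update", state: stateFor[ev] };
}

console.log(toA2AEvent("message.delta").state); // working
```

The orchestrator only ever sees the right-hand column: standard A2A lifecycle states, regardless of what the backend emits.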
## Adding MCP Tools (Optional)
Want your agent to read and write files, query databases, or call custom APIs? Add an MCP section to your config:

stdio (child process):
```json
"mcp": {
  "filesystem": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
  }
}
```
HTTP MCP server:
```json
"mcp": {
  "my-api-tools": {
    "type": "http",
    "url": "http://localhost:8002/mcp"
  }
}
```
Restart the agent. OpenCode handles MCP execution, and tool results flow back through the A2A response stream.
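For reference, the two shapes can coexist in one config. This sketch assumes the `mcp` section sits at the top level of `config.json` alongside `opencode`, and the server names (`filesystem`, `my-api-tools`) are arbitrary labels:

```json
"opencode": {
  "baseUrl": "http://localhost:4096",
  "model": "anthropic/claude-sonnet-4-20250514"
},
"mcp": {
  "filesystem": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
  },
  "my-api-tools": {
    "type": "http",
    "url": "http://localhost:8002/mcp"
  }
}
```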
## a2a-copilot vs a2a-opencode — Which Should You Use?

| | a2a-copilot | a2a-opencode |
|---|---|---|
| LLM backend | GitHub Copilot models only | Any provider via OpenCode |
| Auth | GitHub account / `gh` CLI token | Provider-specific (set in OpenCode) |
| External dependency | None (uses `gh` CLI) | OpenCode server must be running |
| Multi-provider support | No | Yes |
| Best for | Teams already on GitHub Copilot | Multi-provider or vendor-neutral setups |
Both expose the same A2A interface. Your orchestrator integrates once and can use either — or both.
## What You Just Built
You now have a vendor-neutral AI agent running as a standalone, fully A2A-compliant service:
- Discoverable via Agent Card
- Callable via JSON-RPC and REST
- Streaming via SSE
- Multi-turn conversations via persistent sessions
- Any LLM provider swappable in one config line
- MCP tool access (if configured)
- Docker-deployable
Any A2A orchestrator can call it without any provider-specific code.
## What's Next
- Switch models: Change `"model"` to `"openai/gpt-4.1"` or `"github/gpt-4.1"` in your config
- Add MCP tools: Filesystem connectors, database clients, HTTP API tools
- Run multiple specialized agents on different ports, each with a different provider and system prompt
- Use an A2A orchestrator to route tasks dynamically across agents by capability or load
- Check out a2a-copilot for a zero-dependency alternative that wraps GitHub Copilot directly
GitHub: https://github.com/shashikanth-gs/a2a-opencode
npm: npm install -g a2a-opencode
Also see: https://github.com/shashikanth-gs/a2a-copilot (for the GitHub Copilot variant)

