Shashi Kanth
Build a Vendor-Neutral A2A Agent That Works With Any LLM Provider

One of the most common mistakes in AI system architecture is building point-to-point integrations with specific LLM providers.

You choose Anthropic. You integrate Claude directly. Six months later you want to benchmark against GPT-4.1, or a new model drops that changes the playing field. Now you're rewriting integration code.

a2a-opencode solves this with a different approach:

  1. Wrap OpenCode — which already supports Anthropic, OpenAI, GitHub Copilot, and more — behind the A2A protocol
  2. Your orchestration layer speaks A2A, not "Claude API" or "OpenAI API"
  3. Swap model providers in one config line. Your orchestrator never changes.

GitHub: https://github.com/shashikanth-gs/a2a-opencode
npm: https://www.npmjs.com/package/a2a-opencode


What is A2A?

The A2A (Agent-to-Agent) protocol is an open standard for agent interoperability. It defines how agents:

  • Advertise capabilities via Agent Cards (/.well-known/agent-card.json)
  • Accept tasks via JSON-RPC and REST
  • Stream responses via SSE
  • Maintain task lifecycle (submitted → working → completed)

When an agent speaks A2A, any orchestrator that understands the protocol can discover and call it — without knowing anything about the underlying model or provider.
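Discovery is the key move: every A2A agent publishes its card at the same well-known path, so an orchestrator only ever needs a base URL. A minimal TypeScript sketch — the `agentCardUrl` helper is mine for illustration, not part of any SDK:

```typescript
// Build the well-known Agent Card URL from an agent's base URL.
function agentCardUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/$/, "")}/.well-known/agent-card.json`;
}

// With an agent running (Step 4 below), the card can be fetched like this:
// const card = await (await fetch(agentCardUrl("http://localhost:3001"))).json();

console.log(agentCardUrl("http://localhost:3001/"));
// → http://localhost:3001/.well-known/agent-card.json
```
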

[Diagram: A2A Agent Card discovery flow]


Prerequisites

  • Node.js 18+
  • OpenCode installed (npm install -g opencode-ai or equivalent)
  • A supported LLM provider API key (Anthropic, OpenAI, or GitHub Copilot via gh auth login)

Step 1: Start OpenCode

opencode serve
# OpenCode starts on http://localhost:4096 by default

Step 2: Install a2a-opencode

npm install -g a2a-opencode

Or run without installing:

npx a2a-opencode --config path/to/config.json

Step 3: Configure Your Agent

Create my-agent/config.json:

{
  "agentCard": {
    "name": "My OpenCode Agent",
    "description": "A vendor-neutral AI agent with MCP tool support",
    "version": "1.0.0",
    "protocolVersion": "0.3.0",
    "streaming": true,
    "skills": [
      {
        "id": "code-review",
        "name": "Code Review",
        "description": "Analyze, review, and refactor code",
        "tags": ["code", "review", "refactor", "security"]
      }
    ]
  },
  "server": {
    "port": 3001,
    "advertiseHost": "localhost"
  },
  "opencode": {
    "baseUrl": "http://localhost:4096",
    "model": "anthropic/claude-sonnet-4-20250514",
    "systemPrompt": "You are a code review expert. Analyze code for bugs, performance, and security issues.",
    "autoApprove": true,
    "autoAnswer": true
  }
}

Step 4: Start the A2A Wrapper

a2a-opencode --config my-agent/config.json

Output:

[info] A2A server started
[info] Agent Card: http://localhost:3001/.well-known/agent-card.json
[info] JSON-RPC:   http://localhost:3001/a2a/jsonrpc
[info] REST:       http://localhost:3001/a2a/rest
[info] Health:     http://localhost:3001/health

Step 5: Discover Your Agent

curl http://localhost:3001/.well-known/agent-card.json | jq .

Response:

{
  "name": "My OpenCode Agent",
  "description": "A vendor-neutral AI agent with MCP tool support",
  "version": "1.0.0",
  "protocolVersion": "0.3.0",
  "url": "http://localhost:3001",
  "capabilities": { "streaming": true },
  "skills": [...]
}

This is the A2A Agent Card — the agent's identity and capability manifest. Any A2A-compatible orchestrator can read this and immediately understand what the agent can do.
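An orchestrator can filter agents on this card without touching any provider API. Here is a hedged TypeScript sketch using the card shape shown above — the `AgentCard` interface and `supportsAnyTag` helper are illustrative, not taken from the A2A SDK:

```typescript
// Minimal view of the Agent Card fields used here (shape mirrors the
// /.well-known/agent-card.json response above).
interface AgentCard {
  name: string;
  url: string;
  capabilities: { streaming: boolean };
  skills: { id: string; name: string; tags: string[] }[];
}

// True when the card advertises a skill matching any requested tag.
function supportsAnyTag(card: AgentCard, tags: string[]): boolean {
  return card.skills.some(skill => skill.tags.some(t => tags.includes(t)));
}

const card: AgentCard = {
  name: "My OpenCode Agent",
  url: "http://localhost:3001",
  capabilities: { streaming: true },
  skills: [
    { id: "code-review", name: "Code Review", tags: ["code", "review", "refactor", "security"] },
  ],
};

console.log(supportsAnyTag(card, ["security"])); // true
```
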


Step 6: Send a Task

Via REST:

curl -X POST http://localhost:3001/a2a/rest \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "role": "user",
      "parts": [{"kind": "text", "text": "Review this TypeScript function for bugs and performance issues."}]
    }
  }'

Via JSON-RPC:

curl -X POST http://localhost:3001/a2a/jsonrpc \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"kind": "text", "text": "Explain the difference between debounce and throttle."}]
      }
    }
  }'
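If you'd rather call the agent from code than from curl, the same JSON-RPC envelope can be built in TypeScript. `buildTaskRequest` is a hypothetical helper name; the payload shape matches the curl example above:

```typescript
// Build a tasks/send JSON-RPC 2.0 request for an A2A agent.
function buildTaskRequest(id: string, text: string) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tasks/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text }],
      },
    },
  };
}

const body = buildTaskRequest("1", "Explain the difference between debounce and throttle.");
// With the agent from Step 4 running:
// await fetch("http://localhost:3001/a2a/jsonrpc", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```
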

The response streams back in full A2A format with task lifecycle events and artifacts — identical to any other A2A agent, regardless of which LLM is powering it.


Switching Providers = One Line Change

Want to switch from Claude to GPT-4.1?

"opencode": {
  "model": "openai/gpt-4.1"
}

Switch to GitHub Copilot?

"opencode": {
  "model": "github/gpt-4.1"
}

The A2A interface is identical. Restart the agent. Your orchestrator doesn't change.


Multi-Agent Example

[Diagram: Multi-agent routing with a2a-opencode — one orchestrator routing tasks to three specialized agents across providers]

Run three specialized agents on different ports:

# Terminal 1: Code review agent (Claude)
a2a-opencode --config agents/code-review/config.json --port 3001

# Terminal 2: Documentation agent (GPT-4.1)
a2a-opencode --config agents/docs/config.json --port 3002

# Terminal 3: Security analysis (Claude Opus)
a2a-opencode --config agents/security/config.json --port 3003

Your orchestrator routes to whichever agent fits the task — by capability, skill tags, or load. Each agent is independently swappable.
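Routing by skill tags can be as simple as scanning each agent's card. A sketch under stated assumptions — the agent list and the skill tags for the docs and security agents are invented for illustration; in practice you'd fetch each agent's `/.well-known/agent-card.json`:

```typescript
// Orchestrator-side routing: pick the first agent whose card advertises
// a skill tag matching the task. URLs match the three terminals above;
// the skill lists are illustrative assumptions.
interface Skill { id: string; tags: string[] }
interface Agent { url: string; skills: Skill[] }

const agents: Agent[] = [
  { url: "http://localhost:3001", skills: [{ id: "code-review", tags: ["code", "review"] }] },
  { url: "http://localhost:3002", skills: [{ id: "docs", tags: ["documentation"] }] },
  { url: "http://localhost:3003", skills: [{ id: "security-analysis", tags: ["security"] }] },
];

function route(taskTag: string): Agent | undefined {
  return agents.find(a => a.skills.some(s => s.tags.includes(taskTag)));
}

console.log(route("security")?.url); // http://localhost:3003
```

Because every agent exposes the same A2A surface, the router never needs to know which provider sits behind a given port.
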


Architecture

Full request flow, from A2A client through OpenCode to any LLM provider and MCP tools:

A2A Client / Orchestrator
        │
        │  JSON-RPC / REST / SSE
        ▼
a2a-opencode (Express + A2A SDK)
  ├─ SessionManager     (contextId → OpenCode session)
  ├─ EventStreamManager (SSE polling + auto-reconnect)
  ├─ PermissionHandler  (auto-approves tool calls)
  └─ EventPublisher     (OpenCode events → A2A events)
        │
        │  HTTP + SSE
        ▼
OpenCode Server (opencode serve)
  ├─ LLM inference (Anthropic / OpenAI / GitHub Copilot / ...)
  └─ MCP tool execution
        │
        │  MCP Protocol
        ▼
MCP Servers (filesystem, database, custom tools...)
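To make the component list concrete, here is an illustrative sketch of what the SessionManager role implies — mapping each A2A `contextId` to one OpenCode session so multi-turn conversations reuse state. The real implementation inside a2a-opencode may differ:

```typescript
// Illustrative SessionManager: one OpenCode session per A2A contextId.
class SessionManager {
  private sessions = new Map<string, string>(); // contextId → OpenCode session id

  // Reuse the session for a known contextId; otherwise create one.
  getOrCreate(contextId: string, createSession: () => string): string {
    let id = this.sessions.get(contextId);
    if (id === undefined) {
      id = createSession();
      this.sessions.set(contextId, id);
    }
    return id;
  }
}

const mgr = new SessionManager();
let counter = 0;
const create = () => `session-${++counter}`;

console.log(mgr.getOrCreate("ctx-A", create)); // session-1
console.log(mgr.getOrCreate("ctx-A", create)); // session-1 (reused)
console.log(mgr.getOrCreate("ctx-B", create)); // session-2
```
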

Adding MCP Tools (Optional)

Want your agent to read and write files, query databases, or call custom APIs? Add an MCP section to your config:

stdio (child process):

"mcp": {
  "filesystem": {
    "type": "stdio",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
  }
}

HTTP MCP server:

"mcp": {
  "my-api-tools": {
    "type": "http",
    "url": "http://localhost:8002/mcp"
  }
}

Restart the agent. OpenCode handles MCP execution, and tool results flow back through the A2A response stream.


a2a-copilot vs a2a-opencode — Which Should You Use?

|                        | a2a-copilot                     | a2a-opencode                            |
| ---------------------- | ------------------------------- | --------------------------------------- |
| LLM backend            | GitHub Copilot models only      | Any provider via OpenCode               |
| Auth                   | GitHub account / gh CLI token   | Provider-specific (set in OpenCode)     |
| External dependency    | None (uses gh CLI)              | OpenCode server must be running         |
| Multi-provider support | No                              | Yes                                     |
| Best for               | Teams already on GitHub Copilot | Multi-provider or vendor-neutral setups |

Both expose the same A2A interface. Your orchestrator integrates once and can use either — or both.


What You Just Built

You now have a vendor-neutral AI agent running as a standalone, fully A2A-compliant service:

  • Discoverable via Agent Card
  • Callable via JSON-RPC and REST
  • Streaming via SSE
  • Multi-turn conversations via persistent sessions
  • Any LLM provider swappable in one config line
  • MCP tool access (if configured)
  • Docker-deployable

Any A2A orchestrator can call it without any provider-specific code.


What's Next

  • Switch models: Change "model" to "openai/gpt-4.1" or "github/gpt-4.1" in your config
  • Add MCP tools: Filesystem connectors, database clients, HTTP API tools
  • Run multiple specialized agents on different ports, each with a different provider and system prompt
  • Use an A2A orchestrator to route tasks dynamically across agents by capability or load
  • Check out a2a-copilot for a zero-dependency alternative that wraps GitHub Copilot directly

GitHub: https://github.com/shashikanth-gs/a2a-opencode
npm: npm install -g a2a-opencode
Also see: https://github.com/shashikanth-gs/a2a-copilot (for the GitHub Copilot variant)
