
Vahap Ogut


How I Built a Universal MCP ↔ A2A Bridge: Architecture, Protocol Mapping, and What I Learned

The AI agent ecosystem is fragmenting into two incompatible standards. On one side, Anthropic's
MCP (Model Context Protocol) lets AI models use tools. On the other, Google's A2A
(Agent-to-Agent) lets agents collaborate. They can't talk to each other.

I built Nexarion, a runtime bridge that translates between these protocols in real-time. Here's
the architecture, the protocol mapping, and what I learned.

The Problem

Claude Desktop integrates with MCP servers to access tools like databases, APIs, and file
systems. But Google's A2A ecosystem has a different design. A2A agents expose "Agent Cards" at
/.well-known/agent-card.json, describe their skills, and communicate via JSON-RPC.
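To make the discovery surface concrete, here's a simplified sketch of what an Agent Card might look like, modeled as TypeScript types. The field names loosely follow the A2A spec, but treat the exact shape (and the weather agent) as illustrative:

```typescript
// Simplified sketch of an A2A Agent Card as served from
// /.well-known/agent-card.json. Field names are illustrative.
interface AgentSkill {
  id: string;
  name: string;
  description: string;
  tags: string[];
}

interface AgentCard {
  name: string;
  description: string;
  url: string; // JSON-RPC endpoint
  skills: AgentSkill[];
}

// A hypothetical weather agent's card:
const card: AgentCard = {
  name: "weather-agent",
  description: "Current conditions and forecasts",
  url: "https://weather.agent.ai/rpc",
  skills: [
    {
      id: "get-forecast",
      name: "Get Forecast",
      description: "7-day forecast for a location",
      tags: ["weather"],
    },
  ],
};
```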

They're solving complementary problems but they don't interoperate. If I want Claude to call an
A2A weather agent, I have to write a custom integration. For every single agent.

This is the same fragmentation problem computer networking faced before TCP/IP unified it:
every system speaking its own protocol.

Architecture

Nexarion sits between MCP clients and A2A agents as a translation runtime:

MCP CLIENTS (Claude, Cursor, VS Code)

│ tools/list, tools/call (stdio or HTTP)

NEXARION RUNTIME
┌─ Discovery: Agent Cards, caching, validation
├─ Translation: MCP ↔ A2A, schema mapping
└─ Routing: Auth passthrough, rate limiting

│ message/send, task/submit (JSON-RPC or REST)

A2A AGENTS (Weather, Code, Research, Custom)

Three layers:

  1. Discovery Layer fetches /.well-known/agent-card.json from A2A agents. Caches results with
    configurable TTL. Validates cards against a schema to catch malformed responses early.

  2. Translation Layer converts A2A agent skills into MCP tools with dynamically generated
    inputSchema. Each skill's description and tags become the tool's metadata. A generic send_message
    tool is also generated for every agent.

  3. Routing Layer resolves MCP tool names back to agent endpoints. Handles auth passthrough from
    MCP OAuth to A2A bearer tokens. Supports both JSON-RPC (A2A native) and REST/ACP (lighter
    weight).
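The discovery layer's fetch-validate-cache loop can be sketched like this. The names (`CardCache`, `discoverAgent`) and the minimal validation are my illustration, not Nexarion's actual API:

```typescript
// Sketch of the discovery layer: fetch an Agent Card, validate it,
// and cache it with a configurable TTL. Names are illustrative.
type Card = { name: string; skills: { id: string; description: string }[] };

class CardCache {
  private entries = new Map<string, { card: Card; expires: number }>();
  constructor(private ttlMs: number) {}

  get(url: string): Card | undefined {
    const hit = this.entries.get(url);
    if (!hit || hit.expires < Date.now()) return undefined; // expired or missing
    return hit.card;
  }

  set(url: string, card: Card): void {
    this.entries.set(url, { card, expires: Date.now() + this.ttlMs });
  }
}

async function discoverAgent(baseUrl: string, cache: CardCache): Promise<Card> {
  const cached = cache.get(baseUrl);
  if (cached) return cached;

  const res = await fetch(`${baseUrl}/.well-known/agent-card.json`);
  if (!res.ok) throw new Error(`Discovery failed: HTTP ${res.status}`);
  const card = (await res.json()) as Card;

  // Minimal validation: catch malformed cards before they reach the translator.
  if (!card.name || !Array.isArray(card.skills)) {
    throw new Error("Malformed Agent Card");
  }
  cache.set(baseUrl, card);
  return card;
}
```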

Killer Feature: Dynamic Tool Synthesis

This is what makes the bridge feel like magic:

  1. A2A agent detected at https://weather.agent.ai
  2. GET /.well-known/agent-card.json
  3. Agent Card parsed, skills extracted
  4. Each skill auto-converted to MCP tool with inputSchema
  5. Instantly usable in Claude Desktop, Cursor, VS Code

No configuration. No manual mapping. Just discover and use.

A single tools/list call shows every A2A agent as if it's a local MCP server.
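The per-skill conversion in step 4 can be sketched as a pure function. The namespacing scheme and the generic single-field `inputSchema` are assumptions on my part; the real schema generation may be richer:

```typescript
// Sketch of dynamic tool synthesis: one A2A skill becomes one MCP tool
// descriptor. The "agent__skill" naming and the generic free-text schema
// are illustrative assumptions.
interface Skill { id: string; name: string; description: string; tags: string[] }
interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required: string[];
  };
}

function skillToTool(agentName: string, skill: Skill): McpTool {
  return {
    // Namespace by agent so tool names stay unique across agents.
    name: `${agentName}__${skill.id}`,
    description: `${skill.description} [${skill.tags.join(", ")}]`,
    inputSchema: {
      type: "object",
      properties: { message: { type: "string", description: "Request text" } },
      required: ["message"],
    },
  };
}
```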

Protocol Mapping

The translation is conceptually simple, but the details are where the work is:

  • tools/list fetches the Agent Card from /.well-known/agent-card.json and synthesizes one tool per skill
  • tools/call(name, args) converts to a JSON-RPC message/send
  • Tool description and metadata come from the skill's description and tags
  • Text response is extracted from message.parts[].text
  • Tool error is raised when status.state = failed
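The request/response translation can be sketched in a few lines. The payload shape loosely follows A2A's message structure; treat the exact field names as illustrative:

```typescript
// Sketch: translate an MCP tools/call into an A2A message/send JSON-RPC
// request, and fold the response back into a tool result. Field names
// loosely follow the A2A message structure; treat them as illustrative.
function toMessageSend(toolArgs: { message: string }, id = 1) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "message/send",
    params: {
      message: {
        role: "user",
        parts: [{ kind: "text", text: toolArgs.message }],
      },
    },
  };
}

// Extract text parts, and surface a failed task state as a tool error.
function fromResponse(result: {
  status?: { state: string };
  message?: { parts: { text?: string }[] };
}): { text: string; isError: boolean } {
  if (result.status?.state === "failed") {
    return { text: "Agent task failed", isError: true };
  }
  const text = (result.message?.parts ?? []).map((p) => p.text ?? "").join("");
  return { text, isError: false };
}
```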

The tricky part was handling SSE streaming. A2A agents can stream partial results via Server-Sent
Events, but MCP expects a single JSON response. The streaming adapter buffers SSE events and
converts them into MCP progress notifications.
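A minimal sketch of that buffering, assuming the SSE stream has already been decoded into an async iterable of `data:` payloads and that `notifyProgress` stands in for MCP's progress-notification mechanism:

```typescript
// Sketch of the streaming adapter: buffer decoded SSE events, emit an
// MCP progress notification per chunk, and return one final result.
// `notifyProgress` is a stand-in for MCP's notifications/progress.
async function bufferSse(
  events: AsyncIterable<string>,        // decoded SSE `data:` payloads
  notifyProgress: (chunk: string) => void,
): Promise<string> {
  const parts: string[] = [];
  for await (const data of events) {
    parts.push(data);
    notifyProgress(data);               // partial result -> progress note
  }
  return parts.join("");                // single JSON-friendly response for MCP
}
```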

Plugin System

The bridge supports four middleware hooks:

beforeTranslate to intercept MCP to A2A translation
afterTranslate to intercept A2A to MCP translation
onDiscover to intercept agent discovery
onError to intercept bridge errors

This means you can add logging, rate limiting, custom auth, or any transformation without
touching core code.
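As a TypeScript interface, the four hooks might look like this. The hook names come from the bridge; the signatures are my assumption:

```typescript
// Sketch of the plugin interface. Hook names match the post; the
// signatures are illustrative assumptions.
interface BridgePlugin {
  beforeTranslate?(mcpRequest: unknown): unknown;  // MCP -> A2A
  afterTranslate?(a2aResponse: unknown): unknown;  // A2A -> MCP
  onDiscover?(agentCard: unknown): void;
  onError?(err: Error): void;
}

// Example: a logging plugin, no core changes needed. Note the logs go to
// stderr (console.error) so they can't corrupt a stdio transport.
const loggingPlugin: BridgePlugin = {
  beforeTranslate(req) {
    console.error("[mcp->a2a]", JSON.stringify(req));
    return req;
  },
  onError(err) {
    console.error("[bridge error]", err.message);
  },
};
```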

What I Learned

  1. Agent Cards are inconsistent. Every A2A agent formats its card slightly differently. Some
    have capabilities, some don't. Some use endpoints.rest, some only endpoints.jsonRpc. Schema
    validation at the discovery layer is not optional. It's essential.

  2. Stdio transport is a double-edged sword. Claude Desktop communicates via stdin/stdout. This is
    elegant but fragile. A single console.log in the wrong place breaks JSON-RPC parsing. Every log
    message must go to stderr.

  3. HMAC token signing is table stakes. If you're bridging two auth systems, you need
    cryptographic verification. Base64 encoding a secret isn't enough. Discovered that one the hard
    way.

  4. Rate limiting needs to be everywhere. The bridge, the HTTP server, per-agent, per-client. We
    hit our own npm publish rate limit 5 times during development.

  5. The timing is right. MCP and A2A are both growing fast. MCP has 35+ active specification
    proposals. A2A has 23K+ GitHub stars. A bridge between them is inevitable. I just built it first.
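On lesson 3, this is roughly what "cryptographic verification" means in practice: sign with a shared secret, verify with a constant-time comparison. A minimal sketch using Node's built-in crypto (not Nexarion's actual auth code):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of HMAC token signing: base64-encoding a secret provides no
// integrity; an HMAC-SHA256 signature does. Verification uses a
// constant-time comparison to avoid timing side channels.
function sign(payload: string, secret: string): string {
  return createHmac("sha256", secret).update(payload).digest("hex");
}

function verify(payload: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(sign(payload, secret), "hex");
  const given = Buffer.from(signature, "hex");
  // timingSafeEqual requires equal-length buffers, so guard first.
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```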

Stack and Numbers

  • TypeScript (strict mode), pnpm monorepo, tsup build
  • 8 packages: core, server, cli, sdk, registry, web, vscode
  • 102 tests, 0 build warnings
  • Apache-2.0 license
  • 5 npm packages published
  • Docker support with healthcheck

Links

GitHub: https://github.com/vahapogut/nexarion
npm: nexarion-core, nexarion-server, nexarion-cli, nexarion-sdk, nexarion-registry

If you're building MCP servers or A2A agents, I'd love to hear how you're handling the interoperability question. Is this bridge a stopgap, or is it a long-term piece of the ecosystem?
