DEV Community

SANJEEVA KUMAR SSK

MCP Connects Agents to Tools. A2A Connects Agents to Each Other. Here's Why That Distinction Changes Everything

Google Cloud NEXT '26 Challenge Submission

This is a submission for the Google Cloud NEXT Writing Challenge


I've spent the past week digging into every major announcement from Google Cloud
NEXT '26. Gemini Enterprise Agent Platform. TPU v8. ADK v1.0 stable. Project Mariner.
There's a lot to love.

But one thing kept nagging me: everyone is building multi-agent systems without
actually understanding the communication layer underneath them.

At NEXT '26, Google made two protocol-level announcements that most keynote
recaps buried in footnotes:

  • A2A v1.0 is now production-stable, running at 150 organizations, and governed by the Linux Foundation
  • MCP (Model Context Protocol) is now natively integrated across Google Cloud services (BigQuery, Maps, GKE, Compute Engine)

These are NOT the same thing. They are NOT competing. And if you're building
agents without understanding both, you are going to have a very bad time in production.

Let me explain the difference — and then show you exactly how to use them together.


The Confusion Everyone Has

When Google announced MCP support in December 2025 and then doubled down on
A2A at NEXT '26, a lot of developers asked: "So which one do I use?"

This is the wrong question. Here's a cleaner mental model:

MCP = how your agent connects to tools and data sources
A2A = how your agent communicates with other agents

Think of it like a company:

  • MCP is the internal tool stack — Slack, Notion, your CRM, your database
  • A2A is the org chart — who delegates to whom, who can request what from whom

If you only have MCP, your agent is a brilliant solo worker drowning in tasks.
If you only have A2A, your agents can coordinate perfectly but have no tools to
actually do anything.

You need both.


Why A2A v1.0 Matters More Than You Think

The original A2A launch (early 2025) had 50+ partners. At NEXT '26, it's 150
organizations running it in production — not pilots, not proofs of concept.

Microsoft, AWS, Salesforce, SAP, and ServiceNow are all routing real tasks through A2A.

What changed in v1.2 (announced at NEXT '26):

  • Signed Agent Cards — cryptographic signatures for domain verification, so you can trust the agent you're talking to
  • Linux Foundation governance — this is no longer a Google protocol. It's an open standard with independent oversight
  • Native ADK integration — A2A support is now built into Google's ADK, LangGraph, CrewAI, LlamaIndex Agents, Semantic Kernel, and AutoGen

That last point is huge. It means the ecosystem around A2A just became
language-agnostic and framework-agnostic on the same day ADK hit stable v1.0
in Python, Go, Java, and TypeScript.


A Real Example: The Procurement Agent Problem

Here's a scenario I've seen in almost every enterprise I've worked with:

A company has:

  • A finance agent that knows about budgets and purchase limits
  • An HR agent that knows about team headcount and contractor approvals
  • A procurement agent that actually places orders

In the old world, these three systems talk through custom API glue code that
someone wrote in 2023 and nobody fully understands anymore.

With A2A + ADK, the procurement agent can:

  1. Receive a purchase request via MCP (connected to your Slack or email tool)
  2. Use A2A to ask the finance agent: "Is this within budget?"
  3. Use A2A to ask the HR agent: "Is this contractor already approved?"
  4. Get structured responses from both agents
  5. Make a decision and place the order via MCP (connected to your ERP system)

No custom API code. No parsing each system's proprietary response format.
Secure handoffs with signed agent cards. Full audit trail.
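To make that five-step flow concrete, here's a toy sketch of the delegation logic in plain Python. Everything here is invented for illustration: the message shape is a simplified stand-in for a real A2A task object, and the agent names, skills, budget figure, and contractor roster are hypothetical. In production, each function call would be an A2A request to a remote agent verified via its signed Agent Card.

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class A2ATask:
    """Simplified stand-in for a structured A2A task request."""
    skill: str   # a capability the target agent advertises in its Agent Card
    params: dict # typed input, not free text
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def finance_agent(task: A2ATask) -> dict:
    # Hypothetical budget check behind the finance agent's A2A endpoint
    budget_remaining = 12_000
    ok = task.params["amount"] <= budget_remaining
    return {"task_id": task.task_id, "approved": ok}

def hr_agent(task: A2ATask) -> dict:
    # Hypothetical contractor roster behind the HR agent's A2A endpoint
    approved_contractors = {"acme-consulting"}
    ok = task.params["contractor"] in approved_contractors
    return {"task_id": task.task_id, "approved": ok}

def procurement_decision(amount: int, contractor: str) -> bool:
    # Steps 2-4 above: delegate both checks, proceed only if both confirm
    finance_ok = finance_agent(A2ATask("budget_check", {"amount": amount}))
    hr_ok = hr_agent(A2ATask("contractor_check", {"contractor": contractor}))
    return finance_ok["approved"] and hr_ok["approved"]
```

The key property the sketch shows: the orchestrator reasons over structured responses (`approved: bool`) rather than parsing prose, which is exactly what the protocol buys you.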


Building It: ADK + A2A in Practice

Here's a minimal Python example using ADK v1.0 that shows what this actually
looks like in code:

from google.adk.agents import LlmAgent
from google.adk.tools.a2a import A2AClient

# Initialize the A2A client pointing to your finance agent
finance_agent = A2AClient(
    agent_card_url="https://finance.yourcompany.com/.well-known/agent.json"
)

# Initialize the A2A client for your HR agent
hr_agent = A2AClient(
    agent_card_url="https://hr.yourcompany.com/.well-known/agent.json"
)

# Your procurement orchestrator
procurement_orchestrator = LlmAgent(
    model="gemini-3-flash",
    name="procurement_orchestrator",
    instruction="""
        You handle purchase requests. Before approving any request:
        1. Verify budget availability with the finance agent
        2. Verify contractor approval status with the HR agent
        3. Only proceed if both agents confirm
    """,
    tools=[finance_agent.as_tool(), hr_agent.as_tool()]
)

The agent_card_url points to a JSON file that describes what the agent can do,
what data it accepts, and — in v1.2 — carries a cryptographic signature so you
know you're talking to the right agent.
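For a feel of what that file contains, here's a rough sketch of an agent card built and sanity-checked in Python. The field names follow the public A2A Agent Card spec as I understand it, but the specific values are invented, and the `signatures` entry is shown schematically: real signed cards carry a JWS structure, not placeholder strings.

```python
import json

# Illustrative agent card for the finance agent from the example above
card = {
    "name": "finance-agent",
    "description": "Answers budget and purchase-limit questions.",
    "url": "https://finance.yourcompany.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "budget_check",
            "name": "Budget check",
            "description": "Verifies a purchase amount against budget.",
        }
    ],
    # Schematic only: real signed cards embed a JWS, not bare strings
    "signatures": [{"protected": "<base64 JWS header>", "signature": "<sig>"}],
}

def sanity_check(card: dict) -> bool:
    """Minimal client-side validation before trusting a fetched card."""
    required = {"name", "url", "skills"}
    return required.issubset(card) and len(card["skills"]) > 0

serialized = json.dumps(card)  # what /.well-known/agent.json would serve
```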

This is what Google means by "structured and secure" agent-to-agent
communication. It's not agents sending each other free-text messages.
It's a typed, verifiable protocol.


The MCP Side: Connecting to Real Tools

While A2A handles agent-to-agent coordination, your agents still need to
connect to actual tools. This is where MCP comes in.

At NEXT '26, Google launched fully managed MCP servers for:

  • BigQuery — agents query your data warehouse directly
  • Google Maps — agents get location intelligence
  • Compute Engine — agents can provision and manage VMs
  • Kubernetes Engine (GKE) — agents manage container workloads

Using ADK, connecting to these is straightforward:

from google.adk.tools.mcp import MCPToolset, SseServerParams

# Connect your agent to BigQuery via managed MCP
bigquery_tools = MCPToolset(
    connection_params=SseServerParams(
        url="https://bigquery.googleapis.com/mcp/v1",
    )
)

# Your analytics agent now has live BigQuery access
analytics_agent = LlmAgent(
    model="gemini-3-flash",
    name="analytics_agent",
    instruction="You answer data questions using live BigQuery data.",
    tools=[*bigquery_tools.tools]
)

Now combine this with A2A, and your procurement orchestrator from earlier
can delegate data lookups to an analytics agent — which uses MCP to hit
BigQuery — and get structured results back through A2A.

That's the full stack.
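It helps to remember what's on the wire underneath the `MCPToolset` abstraction: MCP speaks JSON-RPC 2.0, and tool invocations go through a `tools/call` method. Here's a minimal sketch of the kind of request the analytics agent's toolset would emit; the tool name and SQL are invented for illustration.

```python
import json

def mcp_tool_call(req_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as used by MCP."""
    request = {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical invocation of a BigQuery-backed MCP tool
msg = mcp_tool_call(1, "bigquery.query",
                    {"sql": "SELECT COUNT(*) FROM orders"})
```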


The Part I'm Most Excited About (And Most Worried About)

Excited: Agent Gateway.

This was announced quietly at NEXT '26 but it's one of the most important pieces
of the whole system. Agent Gateway is essentially a managed proxy that sits in
front of your entire agent ecosystem. It understands both MCP and A2A protocols
natively, enforces your security policies in real time, and integrates with
Model Armor to block prompt injection attacks before they reach your agents.

Think of it as the API Gateway of the agentic era. You wouldn't expose your
REST APIs without a gateway. You shouldn't expose your agent network without one either.

Worried: The signed agent cards in A2A v1.2 are a step in the right direction
for trust, but key rotation and revocation aren't fully specified yet. In a world
where your finance agent is making real budget decisions, the security model needs
to be as rigorous as your OAuth token lifecycle. This is something the Linux
Foundation governance will need to address fast.


What This Means For You Right Now

If you're building anything with agents in 2026, here's the practical checklist:

1. Stop building custom agent communication layers.
A2A v1.0 is stable. The ecosystem support (LangGraph, CrewAI, LlamaIndex) means
you're not locking into Google. Build to the protocol, not to a vendor.

2. Use managed MCP servers where you can.
Google's managed MCP servers for BigQuery, Maps, GKE, and Compute Engine save you
from running your own MCP infrastructure. Start there.

3. Put Agent Gateway in front of everything.
Even if you're just running a two-agent prototype, get in the habit of routing
through a gateway. Your future production self will thank you.

4. Read the Agent Card specification.
The /.well-known/agent.json format is where A2A actually lives. Understanding
this file — what capabilities your agent advertises, how trust is established —
is the most important 10 minutes you'll spend on agentic architecture this year.


The Underrated Takeaway from NEXT '26

Everyone's talking about Gemini 3.1 Pro. The Vertex AI rebrand. The TPU v8
Sunfish and Zebrafish chips.

But the announcement with the longest tail isn't a model or a chip. It's a
protocol reaching production maturity and independent governance.

A2A v1.0 governed by the Linux Foundation is the HTTP moment for multi-agent
systems. We'll look back at April 2026 the same way we look back at 1991:
the point where the infrastructure became standardized enough for everything
else to be built on top of it.

The platform wars for AI are interesting. The protocol wars are what
actually determine who wins.

Start building to the protocol.


Have you started experimenting with A2A or MCP in your projects?
What's blocking you from moving to production? Drop it in the comments —
I'm curious what real blockers look like out in the wild.
