DEV Community

Saquib Shahid

A2A + MCP — The Two Protocols That Were the Actual Story of Google Cloud NEXT '26

Google Cloud NEXT '26 Challenge Submission

This is a submission for the Google Cloud NEXT Writing Challenge


Hey, I'm Saquib. I've been deep in the backend trenches for a couple of years now, mostly building out Node and Go microservices. Like pretty much every dev in 2026, I spent the last few months messing around with AI. Wiring LLMs into APIs. Breaking stuff. Fixing it. Getting a prototype working on the first try and immediately assuming something is horribly wrong.

When Google Cloud NEXT '26 kicked off, I actually blocked out some calendar time to watch. My feed was absolutely flooded with hot takes on the Gemini Enterprise Agent Platform and the new 8th-gen TPUs. Everyone also had a very loud opinion about that "75% of Google's code is AI-generated" stat.

But honestly? I kept getting distracted by two boring acronyms.

A2A and MCP.

They barely got any stage time. No hype reel. But I genuinely think they were the most important things announced at the whole event. Let me try to explain why.

the problem nobody wanted to say out loud

Every vendor on stage was showing off agents. Salesforce has Agentforce. ServiceNow's got one. SAP too. Microsoft Copilot, obviously. Google's Gemini Enterprise. Basically everyone is shipping agents right now. Which is fine.

But the question I kept coming back to was way less glamorous: how do any of these things actually talk to each other?

In a normal backend setup, when two microservices need to chat, we have a playbook. REST, gRPC, Kafka queues, OpenAPI specs. The whole mess is solved. It's boring and it works.

But with agents? A year ago, the answer was they just didn't. Or worse, they did it through some horrible custom webhook adapter that someone has to babysit over Slack at 2am. Every system spoke its own language. Integration wasn't a feature; it was somebody's entire job.

A2A and MCP fix that exact problem. Google finally stopped treating them like research experiments and started treating them like actual infrastructure.

the mental model that finally made it click

I had to read the docs a few times before I really got it. Here is the summary I wish I had on day one.

MCP is HTTP for agents. A2A is DNS plus HTTP for agents talking to other agents.

MCP (Model Context Protocol) was Anthropic's baby originally. It standardizes how a model reaches out to your tools and databases. Before MCP, if you wanted Gemini to query a database, you had to write a bunch of glue code and pray the model didn't invent a table name. Now, the model speaks MCP, the server speaks MCP, and they just shake hands.
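That handshake is plainer than it sounds: MCP is ordinary JSON-RPC 2.0 underneath. Here's a rough sketch of the first two messages an MCP client sends. The method names (`initialize`, `tools/list`) come from the public MCP spec; the protocol version string and the capability payload here are illustrative, and a managed server may expect more than this.

```python
import json

# Sketch of the MCP handshake on the wire. The method names are from the
# MCP spec; protocolVersion and the capabilities payload are illustrative.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "demo-agent", "version": "0.1.0"},
    },
}

# After initializing, the client asks what tools the server exposes.
list_tools = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
    "params": {},
}

print(json.dumps(list_tools, indent=2))
```

The model never "knows" about your database schema up front; it discovers tools at runtime through that `tools/list` call, which is why the glue code disappears.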

A2A (Agent2Agent) came from Google and got donated to the Linux Foundation. It standardizes how one agent talks to another autonomous agent.

The A2A docs say it best: build with whatever framework, equip with MCP, and communicate with A2A. They aren't competing standards. MCP is how your agent talks down to the database. A2A is how it talks sideways to another agent. That realization saved me a massive headache.

okay, so what did google actually ship?

A lot of this trickled out before NEXT, but the conference is where they put it all together into an actual strategy.

On the MCP side, we got fully managed servers for things like BigQuery, Cloud SQL, and Pub/Sub. You don't deploy anything. You just point your agent at an endpoint. They also turned Apigee into an MCP bridge. This means any REST API you already built instantly becomes a discoverable agent tool with your existing auth layered on top. As a guy who spent way too long last year hand-wrapping APIs for LLMs, that was a huge relief. Auth is all IAM-backed now too, so no more passing API keys around.

On the A2A side, it hit production grade with LangGraph and CrewAI support. Donating it to the Linux Foundation was a big move. They also announced Agent Registry (basically DNS for an internet of agents) and Agent Gateway.

Put it all together, and Google is basically saying they don't need to own the agent itself. They just want to own the highways the agents drive on.

show me the code

I hate architecture diagrams. I need to see the actual implementation. So here is a really minimal setup: a planner agent analyzing data, handing work off to a reporter agent in a completely different service.

Planner first. We'll set it up to hit BigQuery via MCP:

# planner_agent.py
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool import MCPToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPConnectionParams

# Point at Google Cloud's managed BigQuery MCP server.
# No infra to deploy. Auth handled by IAM.
bigquery_tools = MCPToolset(
    connection_params=StreamableHTTPConnectionParams(
        url="https://bigquery.googleapis.com/mcp/v1",
    ),
)

planner = LlmAgent(
    name="sales_planner",
    model="gemini-3.1-pro",
    instruction=(
        "You analyze sales data in BigQuery and prepare a brief "
        "for the reporter agent. Findings as bullet points only."
    ),
    tools=[bigquery_tools],
)

Notice what's missing there. There are no BigQuery SDK imports or crazy prompt schemas. The managed MCP server just exposes the database as an agent tool. When I ran this the first time, I kept looking for the missing step. There isn't one.

Next, we expose the reporter agent over A2A:

# reporter_server.py
from a2a.server import A2AServer
from a2a.types import AgentCard, AgentSkill

# The Agent Card is the "business card" that other agents fetch.
# Served at /.well-known/agent-card.json automatically.
card = AgentCard(
    name="QuarterlyReporter",
    description="Turns sales briefs into formatted exec reports.",
    version="1.0.0",
    url="https://reporter.example.com",
    skills=[
        AgentSkill(
            id="write_exec_report",
            name="Write executive report",
            description="Takes bullet findings and produces a polished report.",
            input_modes=["text"],
            output_modes=["text", "application/pdf"],
        )
    ],
)

# Minimal stub so this file runs standalone; a real handler would turn
# the incoming brief into a formatted report.
async def my_report_handler(task):
    return {"parts": [{"type": "text", "text": "Report goes here."}]}

server = A2AServer(agent_card=card, handler=my_report_handler)
server.run(host="0.0.0.0", port=8080)

That Agent Card is the magic part. It's just a tiny JSON doc at a predictable URL. It tells any other agent from any cloud exactly what this bot can do and how to authenticate. It's basically robots.txt for AI.
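To make that concrete, here's roughly what the reporter's card could serialize to. The field names mirror the `AgentCard` object above; a real card carries more than this (capabilities, security schemes) that I'm leaving out, so treat it as a sketch, not the full schema.

```python
import json

# Roughly what the QuarterlyReporter's Agent Card serializes to.
# Fields mirror the AgentCard above; a real card has more (capabilities,
# security schemes) that are omitted here.
card = {
    "name": "QuarterlyReporter",
    "description": "Turns sales briefs into formatted exec reports.",
    "version": "1.0.0",
    "url": "https://reporter.example.com",
    "skills": [
        {
            "id": "write_exec_report",
            "name": "Write executive report",
            "inputModes": ["text"],
            "outputModes": ["text", "application/pdf"],
        }
    ],
}

# Any client can fetch it from a predictable path:
well_known = card["url"] + "/.well-known/agent-card.json"
print(well_known)
print(json.dumps(card, indent=2))
```

Notice there's nothing agent-specific about fetching this: it's a GET request for a JSON doc, which is exactly why any framework on any cloud can consume it.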

Finally, we make them talk:

# orchestrator.py
import asyncio

from a2a.client import A2AClient, A2ACardResolver

from planner_agent import planner


async def main():
    # Discover what the reporter can do.
    resolver = A2ACardResolver(base_url="https://reporter.example.com")
    card = await resolver.get_agent_card()

    # Open a client against it.
    reporter = A2AClient(agent_card=card)

    # Run the planner...
    findings = await planner.run("Analyze Q1 2026 revenue by region.")

    # ...and hand off to the reporter over A2A.
    task = await reporter.send_task(
        message={"parts": [{"type": "text", "text": findings}]},
    )

    # A2A tasks have a real lifecycle, so we can stream updates back.
    async for update in reporter.stream_task(task.id):
        print(update.status, update.artifacts)


asyncio.run(main())

Three files and we have a working cross-service handoff. The crazy part is that the reporter could be running on AWS or it could be a Salesforce agent. The planner code wouldn't need to change at all.

the detail most write-ups miss

A lot of the coverage I've seen just says "these are cool." True. But the interesting part is the underlying design.

A2A is basically stealing every good idea from how the early web scaled. Agent Cards live at /.well-known/agent-card.json. That's RFC 8615, the same pattern we use for security.txt. They're using JSON-RPC 2.0 over HTTP and Server-Sent Events. If your system speaks HTTP, it speaks A2A.
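To see how ordinary it is, here's a sketch of an A2A request envelope. The `message/send` method name is from the A2A spec; the params are trimmed down to the essentials, so the real payload carries more fields than this.

```python
import json
import uuid

# A minimal A2A request: plain JSON-RPC 2.0 over HTTP.
# "message/send" is the method name from the A2A spec; the params here
# are simplified to the essentials.
request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Q1 findings: revenue up 12% in EMEA."}],
        }
    },
}

print(json.dumps(request, indent=2))
```

If you've ever built a JSON-RPC client, you already know how to build an A2A client. That's the whole point.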

Tasks have a normal lifecycle too. Submitted, working, completed. It's the same shape as AWS Step Functions or GitHub Actions. It's boring, and coming from backend land that's basically the nicest thing I can say about a technology. Clever protocols usually die because nobody wants to actually implement them. MCP and A2A are predictable.
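That lifecycle is small enough to sketch as a state machine. The "submitted", "working", and "completed" states are the ones called out above; the "failed" branch and the exact transition rules here are my own simplification, so check the spec before relying on them.

```python
# Tiny model of an A2A-style task lifecycle. "submitted", "working", and
# "completed" are the states mentioned above; "failed" and the allowed
# transitions are my own simplification of the spec.
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed"},
    "completed": set(),
    "failed": set(),
}


class Task:
    def __init__(self):
        self.state = "submitted"

    def advance(self, new_state: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state


task = Task()
task.advance("working")
task.advance("completed")
print(task.state)  # completed
```

Because the states are enumerable like this, you can poll or stream a task from another vendor's agent and still know exactly where you stand. That's the Step Functions vibe.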

stuff I'm still not sold on

A few things still worry me.

The security surface area here is terrifying. An agent that can auto-discover other agents and pass tasks around is an absolute nightmare for prompt injection or data exfiltration. I'd need a very solid threat model before letting this touch real production data.

Debugging is also going to be awful. Google says everything lands in Cloud Audit Logs. Cool. But tracing a failed task across three different agents built by three different vendors? Prepare for a lot of late nights.

Also, the spec is still moving. A2A is at the Linux Foundation, which means it will evolve. If you build heavily on it today, you will probably be rewriting parts of it next year.

what I'd actually do this week

If you want to get ahead of this, here is my unsolicited advice.

Go run the A2A Python quickstart. It takes maybe an hour and you get a working agent in 50 lines. Then hook a basic agent up to a managed MCP server. BigQuery is probably the easiest to test. Read a real Agent Card—just go look at the JSON and see how the auth and skills are structured. It grounds the whole concept. Just don't overbuild right away. Start with two agents and see what breaks.

Next '26 was Google quietly admitting that no single company will own the agent ecosystem. So they're building the infrastructure instead. A2A is the DNS. MCP is the HTTP. If you learn these protocols now, future-you is going to be really happy about it. That's my bet anyway.

If you are building with A2A or MCP, drop a comment. I'd love to swap notes, especially if you've hit weird OAuth snags between managed MCP servers and non-Google clients. I definitely spent a few hours stuck on that this week.

— Saquib
