
Nilam Bora
Forget the Flashy Keynote — The A2A Protocol Is the Real Revolution From Google Cloud Next '26

Google Cloud NEXT '26 Challenge Submission

This is a submission for the Google Cloud NEXT Writing Challenge

Everyone's Talking About the Wrong Thing

Google Cloud Next '26 dropped like a thunderstorm. The internet exploded over the Apple partnership, the slick Gemini Enterprise Agent Platform demos, and 8th-Gen TPUs. And look — those are legitimately exciting. But after watching the keynotes, reading the docs, and spending a few hours actually digging into what shipped, I'm convinced the announcement that will reshape how we build software didn't even make the front page of Hacker News:

The Agent2Agent (A2A) Protocol is now in production at 150+ organizations, it's at v1.2, and it's officially governed by the Linux Foundation.

If you're a developer who builds anything that talks to other services — and let's be honest, that's all of us — this is the announcement you should be losing sleep over.

A Quick Primer: What Is A2A, and Why Should You Care?

Think about how services communicate today. We write REST endpoints. We wrangle GraphQL schemas. We negotiate API contracts across teams, build custom SDKs, and maintain brittle integration layers that eat 20-40% of our development cycles. Now scale that problem to AI agents.

In an agentic world, you don't just have your service calling their API. You have autonomous systems — agents — that need to discover each other, negotiate capabilities, delegate tasks, and report results, all without a human choreographing every step.

That's the problem A2A solves. And here's how it works, in plain English:

1. Agent Cards: The Business Card for Software

Every A2A-compliant agent publishes a discoverable "Agent Card" at a well-known URL (/.well-known/agent-card.json). This card describes:

  • Who the agent is — name, description, version
  • What it can do — its capabilities and skills
  • How to talk to it — endpoints, auth requirements, supported protocols

Think of it as a combination of OpenAPI spec and DNS record, but purpose-built for autonomous AI systems. Any agent on the network can discover and evaluate another agent's capabilities without a human ever writing an integration doc.
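To make the shape concrete, here's a sketch of what such a card might contain. The field names below are illustrative guesses modeled on the ideas above (identity, skills, auth), not copied from the published schema — check the spec for the real structure.

```python
import json

# Illustrative agent card -- field names are a sketch, not the
# official A2A schema; consult the spec for the real structure.
agent_card = {
    "name": "crm-agent",
    "description": "Looks up customer records and ticket history",
    "version": "1.0.0",
    "url": "https://crm.example.com/a2a",       # where to send requests
    "capabilities": {"streaming": True},        # e.g. supports SSE streaming
    "skills": [
        {"id": "customer-lookup",
         "description": "Fetch a customer record by id"},
    ],
    "authentication": {"schemes": ["oauth2"]},  # how callers must auth
}

# This is what would be served as JSON at /.well-known/agent-card.json
print(json.dumps(agent_card, indent=2))
```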

2. Communication Over Familiar Rails

A2A doesn't reinvent the wheel. It uses HTTP/HTTPS, JSON-RPC 2.0, and Server-Sent Events (SSE) for streaming. If you've written a webhook handler this decade, you already know 80% of the transport layer.

This is a deliberate and brilliant design choice. By building on existing web infrastructure, A2A inherits decades of tooling: load balancers, API gateways, observability stacks, WAFs — all of it works out of the box.
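Because the envelope is plain JSON-RPC 2.0, building a request needs nothing beyond the standard library. A minimal sketch — the method name `tasks/send` and the params shape are assumptions for illustration, not necessarily the spec's exact names:

```python
import json
import uuid

def jsonrpc_request(method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request envelope, the format A2A uses on the wire."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # correlates the eventual response
        "method": method,
        "params": params,
    }

# Hypothetical method/params -- the real names live in the A2A spec.
req = jsonrpc_request("tasks/send", {"message": "look up customer 42"})
body = json.dumps(req)  # POST this over HTTPS like any other API call
print(body)
```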

3. Task Lifecycle Management for Long-Running Work

Here's where A2A separates itself from anything that came before. It includes a structured task lifecycle with explicit states:

  • pending → task received, queued for execution
  • in-progress → agent is actively working
  • completed → results ready
  • failed → something broke, and here's why

This isn't just status tracking. It's a contract that enables agents to manage complex, multi-step workflows across organizational boundaries. A client agent can kick off a task, go handle other work, and poll or stream for results — exactly like a well-designed async job system, but standardized across the entire ecosystem.
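That polling pattern is ordinary async-job code. A toy sketch of the loop, with a fake status function standing in for a real A2A client:

```python
import time

TERMINAL_STATES = {"completed", "failed"}

def poll_task(get_state, task_id, interval=0.0, max_polls=100):
    """Poll a task until it reaches a terminal lifecycle state."""
    for _ in range(max_polls):
        state = get_state(task_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"task {task_id} never reached a terminal state")

# Fake remote agent: walks pending -> in-progress -> completed.
states = iter(["pending", "in-progress", "completed"])
result = poll_task(lambda _id: next(states), "task-1")
print(result)  # completed
```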

4. Security That Enterprises Actually Need

The v1.2 update (which dropped alongside Cloud Next) added cryptographically signed agent cards for domain verification, alongside OAuth 2.0 and mTLS support. This isn't a research protocol being bolted onto production systems. It was built for production from day one.

Combined with Google Cloud's Model Armor for inline traffic sanitization, you get a security story that doesn't require security teams to reinvent the wheel for every agent deployment.

Why This Matters More Than the Gemini Enterprise Agent Platform

Don't get me wrong — the Gemini Enterprise Agent Platform (the thing that used to be Vertex AI) is impressive. The Agent Designer's no-code canvas, the Inbox for managing long-running agent workflows, the Agent Registry — all genuinely useful tools.

But here's my hot take: platforms are proprietary; protocols are permanent.

The Gemini Enterprise Agent Platform is a Google product. It's excellent, and if you're in the Google Cloud ecosystem, you should absolutely use it. But the A2A protocol is an open standard under the Linux Foundation. It's already integrated into:

  • Google's Agent Development Kit (ADK)
  • LangGraph (LangChain)
  • CrewAI
  • LlamaIndex Agents
  • Microsoft Semantic Kernel
  • AutoGen

This is the rare case where a major cloud vendor released something that helps everyone, including developers on competing platforms. That's not altruism — it's a bet that standardization grows the pie faster than lock-in. And historically, that bet tends to be right (see: HTTP, TCP/IP, OAuth, OpenTelemetry).

The Developer Impact: What Changes Right Now

Let me paint a practical picture. Say you're building a customer support system. Today, your architecture probably looks like:

```
User → Your App → Custom LLM Integration → Custom CRM API Wrapper
                                          → Custom Billing API Wrapper
                                          → Custom Knowledge Base Search
```

Every integration is bespoke. Every connection is a maintenance liability. Every new data source requires a new adapter, new auth handling, new error recovery logic.

With A2A in the mix, it looks more like this:

```
User → Your Orchestrator Agent
        → discovers CRM Agent (via Agent Card)
        → discovers Billing Agent (via Agent Card)
        → discovers Knowledge Agent (via Agent Card)
        → delegates tasks via standard A2A protocol
        → monitors lifecycle states
        → composes results
```

The orchestrator doesn't need to know how the CRM agent works internally. It just needs to read the Agent Card, understand the capabilities, and communicate via the standard protocol. When Salesforce ships their own A2A-compliant agent tomorrow, your system picks it up without a single line of new integration code.

That's the revolution. Not any single agent being smarter, but all agents being able to work together without us hand-wiring every connection.

A2A vs. MCP: Complementary, Not Competing

I've seen some confusion about how A2A relates to the Model Context Protocol (MCP), so let me clarify:

|              | MCP                              | A2A                                    |
| ------------ | -------------------------------- | -------------------------------------- |
| Purpose      | Connect agents to tools and data | Connect agents to other agents         |
| Relationship | Agent ↔ Resource                 | Agent ↔ Agent                          |
| Analogy      | USB port for peripherals         | TCP/IP for networked systems           |
| Use Case     | "Query my database"              | "Hey CRM Agent, look up this customer" |

They're complementary layers. MCP gives your agent hands and eyes. A2A gives it colleagues. The Agentic Data Cloud and Knowledge Catalog (also announced at Next '26) sit at the MCP layer — providing the context and grounding agents need. A2A sits above, orchestrating the collaboration between specialized agents.

If you're building anything non-trivial in the agentic space, you'll need both.

What I Think Is Still Missing

No protocol is perfect at v1.2, and A2A has some gaps I'd love to see addressed:

1. Discovery at Scale

Agent Cards at well-known URLs work great when you know where to look. But what about discovering agents you don't know exist? There's no standardized registry or marketplace protocol yet. Google's Agent Registry helps within the GCP ecosystem, but the open protocol needs a decentralized discovery mechanism — something like DNS for agents.

2. Economic Primitives

When Agent A delegates a task to Agent B, who pays? A2A has no built-in concept of metering, billing, or cost negotiation. As we move toward agent marketplaces (Google mentioned one in Project Mariner's Q4 2026 roadmap), this will become critical.

3. Semantic Versioning for Capabilities

Agent Cards describe capabilities, but there's no standard for versioning those capabilities. When an agent updates its skills, how do clients know what changed? We need something like semver for agent capabilities.

4. Debugging Multi-Agent Workflows

Tracing a single agent is hard enough. Tracing a conversation across 5 agents from 3 different vendors? The observability story needs work. OpenTelemetry integration for A2A traces would be a game-changer.

The Bigger Picture: Google's Bet on the Agentic Enterprise

Zoom out, and the entire Cloud Next '26 narrative clicks into place:

  • Gemini Enterprise Agent Platform = the factory where you build agents
  • Agent Designer = the blueprinting tool for non-engineers
  • Knowledge Catalog + Agentic Data Cloud = the fuel (trusted context from enterprise data)
  • Model Armor + Agentic Defense = the guardrails (security and governance)
  • A2A Protocol = the roads connecting everything together
  • 8th-Gen TPUs + Virgo Network = the power grid underneath it all

The Apple partnership? It's validation that Google's AI infrastructure is best-in-class — Apple choosing Google Cloud to build its next-gen foundation models is a vote of confidence in the Virgo fabric and TPU architecture. But for us as developers, it doesn't change what we build or how we build it.

A2A does. It changes the architecture of collaboration between intelligent systems. And that's a shift that will compound for years.

What You Should Do This Week

If any of this resonated, here's my practical advice:

  1. Read the A2A spec. It's well-written and surprisingly short. Start at google.github.io/A2A.

  2. Build a toy Agent Card. Publish a /.well-known/agent-card.json for one of your existing services. Even if you don't build the full A2A server, the exercise of describing your service's capabilities in a machine-readable format is incredibly clarifying.

  3. Try the ADK. Google's Agent Development Kit has native A2A support out of the box. Spin up two agents and watch them talk. There's something magical about seeing autonomous systems discover and delegate to each other.

  4. Think about your integration tax. Look at your current codebase. How much code exists purely to connect System A to System B? That's the code A2A is designed to eliminate. Start identifying the integration seams where a standardized protocol could replace bespoke glue code.

  5. Watch the Developer Keynote replay. The showcase of Agent Designer building a multi-agent workflow in natural language is legitimately impressive, and it demonstrates the full A2A lifecycle in action.
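For step 2 above, the whole exercise fits in a few lines: describe one of your services and write the card to the well-known path. Field names here are illustrative, not the official schema — refine against the spec afterward.

```python
import json
import pathlib

# A hypothetical card for an existing internal service.
card = {
    "name": "reporting-service",
    "description": "Wraps our internal reporting API",
    "version": "0.1.0",
    "skills": [{"id": "generate-report"}],
}

# Write it where A2A clients expect to find it, relative to your web root.
well_known = pathlib.Path(".well-known")
well_known.mkdir(exist_ok=True)
(well_known / "agent-card.json").write_text(json.dumps(card, indent=2))
print("wrote", well_known / "agent-card.json")
```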

Final Thought

Every major platform shift has been catalyzed by a protocol, not a product. The web wasn't built on Netscape — it was built on HTTP. Mobile wasn't defined by the iPhone — it was enabled by LTE. Cloud computing wasn't created by AWS — it was powered by APIs and OAuth.

The agentic era will be no different. And A2A is the protocol that makes it possible.

Google Cloud Next '26 was packed with flashy demos and blockbuster partnerships. But the most important thing they shipped was a boring, beautiful, open protocol that lets AI agents work together without asking permission from any single vendor.

That's the one worth your attention. That's the one worth building on.


What's your take? Is A2A the game-changer I think it is, or am I overreacting? Have you tried building with the protocol yet? I'd love to hear about your experience in the comments.
