Let me paint you a picture.
I have a portfolio site — shaurya.online — and like most developers building in 2026, I've been thinking about layering AI agents into it. A hiring agent that surfaces my work to recruiters. A code review agent that pre-screens my GitHub repos. A scheduling agent that handles interview slots through Google Calendar.
Three agents. Three jobs. Simple enough, right?
Except here's where it breaks down: what if the hiring agent (living inside Salesforce) needs to hand off candidate context to my scheduling agent (living inside Google Workspace), which then needs to notify my code review agent (running on a custom Vertex AI deployment) to pull the relevant repos?
Before last week, that was three separate integrations. Three custom APIs. Three authentication handshakes. Three points of failure. And that's just three agents. Imagine fifty.
This is the problem A2A solves — and it's the most important announcement that came out of Google Cloud NEXT '26 that almost nobody is talking about.
The Problem Nobody Mentions in the Keynote Hype
Everyone walked away from the NEXT '26 opening keynote buzzing about Gemini Enterprise, the 8th-gen TPUs, the Apple partnership. Fair. Those are flashy.
But buried underneath the demos and the Shaun White cameo was something more structurally significant: the Agent2Agent (A2A) protocol reaching production maturity with 150 organisations actively routing real tasks through it.
Here's the problem it's solving. As enterprises deploy AI agents — and they are, fast — each agent arrives with its own framework, its own API assumptions, its own authentication model. You've got LangGraph agents, CrewAI agents, Salesforce Agentforce agents, ServiceNow agents, custom ADK agents. They're all smart individually. But they can't talk to each other.
So what happens? Developers write glue code. Integration after integration. And here's the brutal mathematical reality:
Integration complexity grows as O(N²).
With 5 agents, you're managing 10 custom integrations. With 10 agents, it's 45. With 20, it's 190. Each new agent doesn't add a fixed cost; it adds N−1 new connections to maintain. Systems become brittle. Bugs compound. Teams spend more time on plumbing than on the actual product.
Without A2A — the spaghetti reality:
AgentA ←——→ AgentB
↕ ╲ ╱ ↕
AgentC ←——→ AgentD
↕ ╱ ╲ ↕
AgentE ←——→ AgentF
Every connection = a custom integration.
N agents = N(N-1)/2 integrations.
With A2A — the clean reality:
AgentA ──┐
AgentB ──┤
AgentC ──┼──[ A2A Protocol ]──→ Any Agent, Any Vendor
AgentD ──┤
AgentE ──┘
One protocol. Universal handshake.
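Those counts are easy to verify. A quick back-of-the-envelope sketch, plain Python with nothing A2A-specific:

```python
def pairwise_integrations(n: int) -> int:
    """Point-to-point world: every pair of agents needs its own integration."""
    return n * (n - 1) // 2


def protocol_integrations(n: int) -> int:
    """Shared-protocol world: each agent implements the protocol exactly once."""
    return n


for n in (5, 10, 20, 50):
    print(f"{n:>2} agents: {pairwise_integrations(n):>4} point-to-point "
          f"vs {protocol_integrations(n)} protocol implementations")
```

At 50 agents the point-to-point count is already 1,225. That gap is the whole argument.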
This isn't an incremental improvement. It's a category shift.
What A2A Actually Is (Beyond the Marketing)
The Agent2Agent protocol is an open standard for how AI agents — regardless of who built them, what framework they run on, or what cloud they live in — discover, communicate with, and delegate tasks to each other.
Four things it enables, technically:
- Capability Discovery — agents find out what other agents can do
- Modality Negotiation — agents agree on how to exchange information (text, files, structured JSON, media)
- Task Management — agents delegate, track, and receive results from each other
- Secure Exchange — all of this happens without any agent ever exposing its internal state, memory, or tools to another
The foundation of discovery is something called an Agent Card — a JSON metadata document that every A2A-compliant agent publishes at a well-known URI:
/.well-known/agent.json
Here's what a minimal Agent Card looks like:
```json
{
  "name": "SchedulingAgent",
  "description": "Manages calendar bookings and interview coordination",
  "version": "1.0.0",
  "url": "https://agents.shaurya.online/scheduler",
  "capabilities": {
    "streaming": true,
    "pushNotifications": true
  },
  "skills": [
    {
      "id": "book_interview",
      "name": "Book Interview Slot",
      "description": "Schedules a technical interview given a candidate and time window",
      "inputModes": ["text", "data"],
      "outputModes": ["text", "data"]
    }
  ],
  "authentication": {
    "schemes": ["bearer"]
  }
}
```
Think of it as a robots.txt — but for agents. Any other agent on the network can hit that endpoint, parse the card, and know exactly what your agent can do and how to talk to it. No documentation needed. No SDK required. Just the protocol.
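Discovery in practice really is just an HTTP GET plus JSON parsing. A minimal sketch using only the standard library; the base URL mirrors the example above, and `list_skills` is a hypothetical helper, not part of any SDK:

```python
import json
import urllib.request


def fetch_agent_card(base_url: str) -> dict:
    """Discovery step: GET the Agent Card from the well-known URI and parse it."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json", timeout=10) as resp:
        return json.load(resp)


def list_skills(card: dict) -> list[str]:
    """Pull out the skill ids another agent could delegate to."""
    return [skill["id"] for skill in card.get("skills", [])]


# Offline demo using a trimmed version of the card above (no network needed):
sample = json.loads('{"name": "SchedulingAgent", "skills": [{"id": "book_interview"}]}')
print(list_skills(sample))  # ['book_interview']
```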
The transport layer is deliberately boring: HTTP with JSON-RPC 2.0. No proprietary message bus. No new infrastructure to learn. It runs on the same web stack your team already understands, which is a very deliberate design decision — enterprise adoption requires meeting engineers where they are.
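To make "deliberately boring" concrete, here's roughly what a delegation request envelope looks like on the wire. This is a sketch: the `message/send` method name and the `params` shape follow recent versions of the spec, but treat the exact schema as illustrative and check the spec before relying on it:

```python
import json
import uuid


def make_send_request(text: str) -> str:
    """Build a JSON-RPC 2.0 envelope delegating a task to a remote agent.

    The method name and params shape are modelled on the A2A spec's
    message/send call; the spec is the authority on the exact schema.
    """
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # correlates the eventual response
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }
    return json.dumps(request)


body = make_send_request("Book a 45-minute interview slot next Tuesday")
print(json.loads(body)["method"])  # message/send
```

Nothing exotic: POST that body to the agent's endpoint over plain HTTPS and you've delegated a task.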
For long-running tasks (think: a research agent that might take hours, or a procurement workflow that involves human approvals), A2A uses Server-Sent Events for real-time streaming and defines clear task lifecycle states:
SUBMITTED → WORKING → (INPUT_REQUIRED) → COMPLETED
↘ FAILED
↘ CANCELED
An agent can submit a task, receive a tracking ID, subscribe to status updates, and retrieve results asynchronously — without polling, without blocking, without timing out.
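One way to read that lifecycle is as a small state machine. The transition set below is my reading of the diagram above, not a normative list from the spec:

```python
from enum import Enum


class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"


# Legal transitions as drawn in the diagram; terminal states have no outgoing edges.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
                        TaskState.FAILED, TaskState.CANCELED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
    TaskState.CANCELED: set(),
}


def can_transition(current: TaskState, nxt: TaskState) -> bool:
    return nxt in TRANSITIONS[current]


print(can_transition(TaskState.WORKING, TaskState.COMPLETED))   # True
print(can_transition(TaskState.COMPLETED, TaskState.WORKING))   # False
```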
A2A vs. MCP: Stop Conflating Them
This is the single most common point of confusion I've seen in the coverage this week, so let's settle it cleanly.
| | MCP (Model Context Protocol) | A2A (Agent2Agent Protocol) |
|---|---|---|
| Made by | Anthropic | Google (now Linux Foundation) |
| Connects | Agent ↔ Tools & Data Sources | Agent ↔ Agent |
| Analogy | Nervous system | Postal network |
| Use case | "Give this agent access to my database" | "Tell this agent to handle the billing part" |
| Scope | Within one agent's environment | Across organisational/vendor boundaries |
The way I think about it:
MCP is how an agent reaches inward — connecting to its tools, its memory, its data. It's how my scheduling agent knows how to read my Google Calendar.
A2A is how agents reach outward — across company lines, across cloud vendors, across framework boundaries. It's how my scheduling agent hands a completed booking back to the Salesforce hiring agent that originally asked for it.
They're not competing. They're complementary layers in the same stack. Google officially adopted MCP across its own services at the end of 2025, running fully managed MCP servers for Maps, BigQuery, Compute Engine, and Kubernetes. A2A sits one level up.
┌─────────────────────────────────────────────┐
│ USER / ENTERPRISE APP │
├─────────────────────────────────────────────┤
│ A2A — Agent ↔ Agent Layer │ ← Cross-vendor orchestration
├──────────────┬──────────────────────────────┤
│ Agent A │ Agent B │
│ (ADK/GCP) │ (LangGraph / AWS) │
├──────────────┼──────────────────────────────┤
│ MCP ↓ │ MCP ↓ │ ← Tool & data connections
│ Tools, DBs │ Tools, DBs │
└──────────────┴──────────────────────────────┘
A2A is the coordination layer that MCP never tried to be.
What Actually Shipped at NEXT '26
Here's where things get concrete. A2A isn't a roadmap item announced to applause and then shipped in eighteen months. At NEXT '26, this is what landed:
ADK v1.0 — Stable across four languages
Google's Agent Development Kit reached stable v1.0 releases in Python, Go, Java, and TypeScript. A2A is now a first-class citizen in ADK — you can expose any ADK agent as an A2A server or consume any A2A agent as a sub-agent in just a few lines of code. It's code-first, model-agnostic, and deployable to any container or Kubernetes environment.
Native A2A support in every major framework
This is the moat. Native support is now built into:
- LangGraph
- CrewAI
- LlamaIndex Agents
- Microsoft Semantic Kernel
- AutoGen
The implication: it doesn't matter which framework your team chose. If you're building agents in 2026, A2A is already in your dependency tree.
The cross-vendor demo that should've gotten more airtime
The real-world example that quietly played during the keynote: a Salesforce Agentforce agent hands off a task to a Google Vertex AI agent, which queries a ServiceNow agent for IT asset data. Three companies. Three platforms. Three agent frameworks. Zero custom integration code. That's not a concept — that's 150 organisations doing it in production today.
Deployment paths for scale
Three production deployment paths are now available:
- Agent Engine — fully managed, agent-optimised
- Cloud Run — containerised, serverless at scale
- GKE — full Kubernetes control for complex orchestration
And new in v0.3: gRPC support, cryptographically signed Agent Cards for domain verification, and extended Python SDK client support.
The Linux Foundation Move Is the Real Story
Here's the strategic signal that I think most coverage missed.
Google contributed A2A to the Linux Foundation's Agentic AI Foundation. It now sits alongside HTTP, OAuth, and OpenAPI as an open, vendor-neutral, community-governed standard.
Why does that matter? Because it's Google explicitly saying: we don't want to own this. We want everyone to adopt it.
That's the same playbook as Android — open-source the platform, dominate the ecosystem. It's the same move that made OAuth the default auth standard. When a protocol moves to the Linux Foundation, it stops being a product and becomes infrastructure.
The HTTP of the agentic internet. That's the bet Google is making with A2A.
And the momentum backs it up. At launch in April 2025, A2A had 50 partner organisations. At NEXT '26, it's 150 — in production, routing real enterprise tasks, not just in pilot programmes. Microsoft, AWS, Salesforce, SAP, and ServiceNow are all running it. That's not adoption. That's standardisation.
The Honest Critique — Things That Are Still Unresolved
I'd be doing you a disservice if I left this as a hype piece. There are real open questions.
Agent identity at scale is unsolved. Signed Agent Cards using cryptographic domain verification are a good start, but the governance question remains: who vouches for an Agent Card published by a third party? How does an enterprise audit the provenance of an agent it's receiving delegated tasks from? This is the SSL certificate problem for agents, and we don't have a Let's Encrypt equivalent yet.
150 orgs is impressive, but verify before you commit. Production is a strong word. Some of those 150 organisations are running A2A only on non-critical workflows. And the spec itself is still at v0.3, which means breaking changes are possible. Run a proof of concept before building critical business pipelines on top of it.
The OpenAI wild card. Google and Microsoft have aligned behind A2A. But OpenAI has been quiet. If they ship a competing inter-agent spec — and they have both the incentive and the resources — we could end up with a fragmented standard landscape. Worth watching.
Developer tooling is still maturing. The observability story — tracing an intent as it moves across three agents from three vendors — is conceptually supported but practically immature. Most monitoring tools don't yet give you a clean flame graph of a cross-vendor agent workflow. This will improve, but it's a current pain point.
None of these invalidate A2A. They're the honest shape of an early standard that's moving fast. Know what you're getting into.
What You Should Do Right Now
Whether you're building a side project like shaurya.online or architecting an enterprise AI platform, here's the practical playbook:
1. Read the spec. Not the marketing blog — the actual spec at a2a-protocol.org. It's well-written, relatively short, and will give you the vocabulary to make architectural decisions with confidence.
2. Expose one agent as A2A-compliant. Pick the smallest useful agent you have. Add an Agent Card at /.well-known/agent.json. Deploy it. You'll understand the protocol ten times better from building than from reading.
3. Install ADK v1.0 and build the "Hello, World!" cross-agent handoff. The quickstart in the documentation has you running a client agent calling a remote A2A agent in under 30 minutes. Do this. It reframes how you think about multi-agent architecture.
4. Audit your current integration graph. Count how many custom agent-to-agent connections you're maintaining right now. If that number is more than two, A2A deserves a serious architectural evaluation.
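Step 2 of that playbook needs nothing beyond the standard library. A minimal sketch that publishes a toy Agent Card at the well-known URI; the card contents are placeholders, and a real deployment would sit behind TLS and proper auth:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A toy Agent Card — fill in your agent's real skills, URL, and auth schemes.
AGENT_CARD = {
    "name": "HelloAgent",
    "description": "Smallest possible A2A-discoverable agent",
    "version": "0.1.0",
    "url": "http://localhost:8000",
    "skills": [],
}


class AgentCardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/agent.json":
            body = json.dumps(AGENT_CARD).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep demo output quiet
        pass


def serve_in_background() -> int:
    """Start the server on an ephemeral port and return that port."""
    server = HTTPServer(("127.0.0.1", 0), AgentCardHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_port


port = serve_in_background()
with urllib.request.urlopen(f"http://127.0.0.1:{port}/.well-known/agent.json") as resp:
    print(json.load(resp)["name"])  # HelloAgent
```

Once that endpoint is live, any A2A-aware client can discover the agent with a single GET. Everything else is layered on top of this handshake.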
As for shaurya.online — if I were rebuilding my agent stack today, I'd design it A2A-first. Hiring agent, code review agent, scheduling agent — each one an independent A2A server with its own Agent Card. Each one deployable, replaceable, and interoperable with any future agent I add. No glue code. No N² integrations. Just the protocol.
That's the promise of A2A. And based on what shipped at NEXT '26, it's no longer a promise — it's a working standard.
The agentic internet needed a lingua franca. We might just have found it.
If you found this useful, I write about building with AI and developer tooling at shaurya.online. Drop your thoughts in the comments — especially if you've already got A2A running in production. I'd love to hear what the rough edges actually look like.