Muhammad Asim Hanif

Google Finally Answered the Question Nobody Was Asking Out Loud

Google Cloud NEXT '26 Challenge Submission

There's a thing that happens at big tech conferences. You sit through an hour of polished demos, applause lines, and customer success stories, and somewhere in the middle of it all, a single slide quietly destroys a problem you'd been working around for months.

That happened to me while watching Google Cloud NEXT '26.

The announcement wasn't the flashiest one. It wasn't the 8th-gen TPUs (though those are genuinely wild). It wasn't Gemini 3.1 Pro. It was something called the Agent2Agent protocol — and if you've spent any time trying to build real multi-agent systems, you probably just sat up a little straighter.


The problem everyone's been ignoring

Let me back up.

For the past year or so, the developer narrative around AI agents has been: "build your agent, make it smart, deploy it." And tools have gotten genuinely good at that part. But there's a messy reality underneath the demos — what happens when your agent needs to talk to another agent?

Not your agent calling a REST API. Not your agent hitting a database. Your agent needing to hand off a task to a completely different agent, built by a different team, running on a different platform, with different internal logic.

Right now, that looks like a lot of custom glue code. HTTP calls with manually agreed-upon schemas. Hoping the other team's agent returns something predictable. Debugging failures that could be anywhere in a chain of three or four systems.

I've been in that situation. It's not fun. And nobody's really been talking about it as a protocol problem — it's been treated as an integration problem you just solve case by case.

Google's answer at NEXT '26: stop solving it case by case.


What A2A actually is

The Agent2Agent (A2A) protocol is an open standard for agent-to-agent communication. The idea is straightforward — give agents a common language for handing off tasks, sharing context, and reporting status, regardless of what platform they're built on.

Here's what struck me about it: A2A isn't a Google-only thing. It's already built into LangGraph, CrewAI, LlamaIndex, Semantic Kernel, and AutoGen. The Agent Development Kit (ADK) hit stable v1.0 across Python, Go, and Java with TypeScript available too. This isn't a vendor lock-in play disguised as an open standard — or at least, it's not only that.

The practical picture they painted: a Salesforce agent built on Agentforce hands off a task to a Google agent on Vertex AI (now "Gemini Enterprise Agent Platform"), which queries a ServiceNow agent for IT asset data — all through A2A, without any of the three systems needing to understand each other's internals. No custom schema negotiation. No fragile adapter layers.
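To make that handoff concrete, here's a rough sketch of what an A2A-style task request looks like on the wire. The field names follow the general shape of the published spec (JSON-RPC 2.0, a task carrying a message made of parts), but treat them as illustrative rather than authoritative; check the spec for the exact schema:

```python
import json

# Illustrative sketch of an A2A-style task handoff envelope.
# Field names are simplified from the spec's general shape
# (JSON-RPC 2.0, task + message + parts) -- not the exact schema.
def build_handoff(task_id: str, text: str) -> str:
    envelope = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",          # task handoff request
        "params": {
            "id": task_id,               # caller-chosen task identifier
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }
    return json.dumps(envelope)

request = build_handoff("task-001", "Fetch IT asset data for laptop L-42")
print(request)
```

The point of the common envelope is that the receiving agent never needs to know who built the caller; it only needs to parse this one shape.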

If that actually works as advertised in production, it changes the economics of multi-agent system design pretty dramatically.


The part that's easy to miss

What I think is genuinely underrated in the NEXT '26 announcements is the security layer sitting underneath all of this.

A2A without trust guarantees is just chaos at scale. If agents can call each other freely, you need to know which agent called what, with what permissions, and be able to audit the whole chain.

Google's answer is Agent Identity — every agent gets a unique cryptographic ID. Agent Gateway handles traffic control between agents and data. Model Armor adds runtime protection against prompt injection and tool poisoning.

These aren't afterthoughts bolted on. According to the docs, they're baked into the Agent Platform from the ground up, which means if you build on it, you get that traceability by default rather than having to engineer it yourself.

I'll be honest — I was skeptical when I read "secure-by-design" in the keynote. That phrase gets used a lot. But the architecture around Agent Identity is specific enough that it reads less like marketing and more like a genuine engineering decision. Cryptographic IDs per agent. Audit logging through Cloud IAM. Centralized observability.
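The underlying idea is simple enough to sketch. This is a conceptual illustration only, not Google's Agent Identity API: each agent holds a key, signs every outbound call, and a verifier can later audit the chain for tampering:

```python
import hashlib
import hmac
import json

# Conceptual sketch only -- NOT Google's Agent Identity API.
# Illustrates the idea: an agent signs each outbound call record,
# so the full call chain can be audited later.
def sign_call(agent_key: bytes, caller: str, callee: str, action: str) -> dict:
    record = {"caller": caller, "callee": callee, "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(agent_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_call(agent_key: bytes, record: dict) -> bool:
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(agent_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

key = b"data-fetcher-secret"
entry = sign_call(key, "coordinator", "data_fetcher", "fetch_assets")
print(verify_call(key, entry))  # True for an untampered record
```

A real system would use asymmetric keys and a central identity service rather than shared secrets, but the auditability property is the same.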

Whether it holds up when you actually try to build something complex on it — that's a different question. But the intent is at least coherent.


Let's actually try it — ADK in under 5 minutes

This is where I'll stop summarizing announcements and show you something concrete. If you want to form your own opinion, the fastest way is to run something.

Install the ADK:

```bash
pip install google-adk
```

Here's a minimal multi-agent setup — a coordinator that delegates to two specialized sub-agents. This is the same pattern A2A is designed to extend across platforms:

```python
from google.adk.agents import LlmAgent

# A specialized agent that only fetches data
data_agent = LlmAgent(
    name="data_fetcher",
    model="gemini-2.5-flash",
    instruction="""You are a data retrieval specialist.
    When given a topic, return a concise structured summary of relevant facts.
    Keep responses under 150 words."""
)

# A specialized agent that only writes summaries
writer_agent = LlmAgent(
    name="report_writer",
    model="gemini-2.5-flash",
    instruction="""You are a technical writer.
    Take raw data points and turn them into a clean, readable paragraph.
    Avoid jargon. Write for a developer audience."""
)

# Coordinator that routes between them
coordinator = LlmAgent(
    name="coordinator",
    model="gemini-2.5-flash",
    description="I coordinate data fetching and report writing tasks.",
    instruction="""You manage a small team of agents.
    For any research request: first delegate to data_fetcher,
    then pass those results to report_writer for a clean output.
    Do not do either task yourself.""",
    sub_agents=[data_agent, writer_agent]
)
```

Run it from your terminal:

```bash
adk run .
```

Or spin up the dev UI to see the full agent trace visually:

```bash
adk web
```

The dev UI is actually one of the underrated parts — you get a real-time view of which sub-agent handled what, what it returned, and how long each step took. That kind of observability is what's been missing from most agent frameworks.

What's notable here is that data_agent and writer_agent could each be running on entirely different infrastructure — or even built by different teams using different frameworks — and with A2A, the coordinator would still hand off tasks the same way. That's the point.


What this actually means for developers

Let me be concrete about what changes if A2A gains real adoption:

Building a pipeline of specialized agents becomes viable. Right now, chaining agents usually means one team owns the whole chain. With A2A, you could have a data-fetching agent from one team, a reasoning agent from another, and a summarization agent from a third — all interoperating without a massive integration project.

The ADK is worth actually looking at now. It's model-agnostic, deployable to any container or Kubernetes environment, and optimized for Gemini but not exclusive to it. The v1.0 stable release across multiple languages means this is past the "experimental" phase.

Agent simulation before you ship. The new Agent Simulation tool lets you stress-test agents against real-world scenarios before deployment. I'm more interested in this than most of the headline features because it addresses one of the most painful parts of agent development — you genuinely don't know how your agent behaves until something weird happens in production.
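The shape of that workflow is easy to sketch. This is a conceptual harness, not Google's Agent Simulation tool: replay scripted scenarios against an agent function and flag unexpected behavior before anything ships:

```python
# Conceptual simulation harness, NOT Google's Agent Simulation tool.
# Replays scripted scenarios against an agent and collects failures.
def run_simulation(agent, scenarios):
    failures = []
    for prompt, check in scenarios:
        reply = agent(prompt)
        if not check(reply):
            failures.append((prompt, reply))
    return failures

# Stand-in "agent": a trivial function; in practice this would wrap
# a deployed agent endpoint.
def toy_agent(prompt: str) -> str:
    return "I can't help with that." if "password" in prompt else f"OK: {prompt}"

scenarios = [
    ("summarize the Q3 report", lambda r: r.startswith("OK")),
    ("what's the admin password?", lambda r: "can't" in r),  # refusal expected
]

print(run_simulation(toy_agent, scenarios))  # [] -> all scenarios passed
```

The scenarios double as a regression suite: rerun them after every prompt or model change and diff the failure list.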


My honest take

Google's keynote framing was "the era of the pilot is over, the era of the agent is here." I think that's a little optimistic. Most teams I know are still figuring out how to make a single reliable agent, let alone orchestrating fleets of them.

But the infrastructure they're building at NEXT '26 — particularly A2A and the identity/governance layer — is the right bet. The bottleneck in multi-agent systems isn't model intelligence anymore. It's interoperability and trust. And those are fundamentally protocol and infrastructure problems.

The Danfoss example they shared (80% of email-based order processing automated, response times cut from 42 hours to near real-time) and the Suzano one (95% reduction in query time for natural-language SQL) suggest at least some organizations are past the pilot stage. But enterprise manufacturers and large corporates operate in a different environment than most of us are building in.

The question for the average developer isn't "is Google's agentic vision compelling." It is. The question is whether A2A becomes a genuine standard or a Google-flavored standard that only really works well in Google's ecosystem. That's determined by adoption, not announcement.

Worth watching. Worth experimenting with. The ADK is free to try, Agent Platform gives $300 in credits, and the A2A spec is open.

That's enough to form your own opinion, which is always better than taking mine.



Top comments (3)

Noman Sharif:
good

MUHAMMAD SAAD HANIF:
The interoperability between Vertex AI, Salesforce, and CrewAI through A2A is a huge step toward a truly decentralized AI ecosystem. This isn't just a feature; it's a foundation. Great breakdown.

Aliyan Imran:
Finally, agents have been given social skills. No more playing the awkward middleman in a 2-week integration meeting. That Agent Identity layer is the real MVP here.