This is a submission for the Google Cloud NEXT Writing Challenge
Read enough keynote recaps and the shape of them becomes familiar. Model names, benchmark numbers, a CEO quote about whatever this year's era is called. You close the tab, write a Jira ticket, and wonder whether any of it was about your job.
Today's Google Cloud NEXT '26 opening keynote had all of that. Thomas Kurian in Las Vegas, Sundar Pichai on video, Apple's logo unexpectedly behind the Google CEO's head. TPU generation eight. "The agentic cloud." The Gemini Enterprise Agent Platform — which is mostly what Vertex AI used to be, renamed and consolidated.
Here's what I kept coming back to, though: the announcements that will actually affect what developers ship this year weren't the ones with applause breaks.
They were the boring ones. The plumbing.
What I Mean By Plumbing
The boring infrastructure almost always decides whether a technology ships at scale. HTTP wasn't exciting. TCP/IP wasn't a keynote moment. Nobody clapped for DNS. But that's the layer where things either work reliably or don't work at all.
AI agents are at exactly that point right now. Everyone roughly agrees on what they want agents to do. The part that has quietly killed a hundred enterprise AI projects is different: getting agents to talk to each other across systems, hold context between sessions, and do it without becoming a security nightmare your team has to clean up later.
That's most of what Google actually shipped today. Dressed up in model demos and stage lighting, but the substance is infrastructure.
The A2A Protocol Reaching v1.0
The Agent2Agent (A2A) protocol reached v1.0 today and got handed to the Linux Foundation's Agentic AI Foundation for governance. It got maybe thirty seconds on stage.
A2A answers a question that has made multi-agent architectures genuinely painful: how does an agent on Platform A discover, trust, and delegate to an agent on Platform B, when neither platform knows anything about the other's internals?
The answer is Agent Cards. Each agent publishes a signed card — cryptographically verified via domain signatures — declaring what it can do, what inputs it accepts, and how to reach it. Another agent fetches that card, checks the signature, and delegates with some real basis for trusting that the capability is what it claims.
Before this, "multi-agent" usually meant custom glue code, bespoke APIs, or just hoping two SDKs from the same vendor happened to compose without breaking. The production signal is worth noting: 150 organizations are running A2A in production right now, not in pilots — routing real workloads between agents built on different vendors' stacks. It launched roughly a year ago with 50 partner organizations.
Native A2A support now ships in ADK, LangGraph, CrewAI, LlamaIndex, Semantic Kernel, and AutoGen. That's not a Google-curated list of close partners. That's where developers are actually building agent systems.
The Linux Foundation move deserves more credit than it's getting. When a protocol lives in one company's GitHub, every potential adopter carries a quiet question in the back of their head: what happens when Google gets bored with this? That friction is real, and it's killed protocols before. Handing it to neutral governance before mass adoption removes the question. It's the right call — and it wasn't the obviously self-interested one, since Google could have used the protocol as a lock-in mechanism instead.
ADK v1.0 and What "Stable" Actually Buys You
The Agent Development Kit hit stable v1.0 releases today across Python, Go, and Java, with TypeScript available as well. The announcement was brief. The implications are less so.
The 0.x releases were experimentally useful — people shipped real things with them. But "production-ready" means something specific when your agents are taking autonomous actions: stable APIs you can actually depend on, predictable versioning, and a security model you can explain to someone who isn't you.
v1.0 ships with Model Armor, which defends against indirect prompt injection. This is the attack vector most agent systems ignore until it becomes a real problem — a malicious payload hidden in retrieved content that hijacks agent behavior mid-task. It also puts zero-trust architecture at the protocol level, with access managed through Cloud IAM and full audit logging. When an agent does something unexpected at 2am, you can find out what it did and why, rather than guessing.
If you've been waiting for ADK to stabilize before committing: the spec is frozen, the security model exists, and the governance is neutral. That's what stable means.
A Line from the Keynote Worth Sitting With
Thomas Kurian said this during his talk: "You have moved beyond the pilot. The experimental phase is behind us."
I've been thinking about that framing. It's not a description of where enterprises actually are. It's a description of where Google needs them to be.
Most enterprise AI projects are still in pilot. The gap between a working demo and something that runs reliably across production data, security policies, and organizational complexity has ended more AI initiatives than bad models ever did. That gap is exactly what makes today's less-glamorous announcements worth attention.
Knowledge Catalog grounds agents in actual business context across an entire data estate. Memory Bank gives agents persistent state across sessions, so they don't start from scratch on every interaction. Agent Identity manages agent credentials through the same IAM system that manages human credentials — which means your security team can audit them the same way.
None of this demos well. "Agent credentials managed through IAM with audit logging" doesn't generate applause. But it's what makes an agent your CISO will let near production data, rather than one that stays permanently in sandbox.
Where I'm Skeptical
The word "open" appears a lot today. A2A is an open protocol. ADK is open source. The Model Garden includes 200+ models from multiple vendors, including Anthropic Claude.
All true. And also: the smoothest path through every one of these tools runs directly through Google Cloud. Agent Engine for managed hosting. Apigee as the API-to-agent gateway. Vertex AI as the deployment target.
The protocol is portable. The operational infrastructure is not.
This isn't necessarily a problem — someone has to build the runtime, and Google's is genuinely good. But developers should be clear with themselves about what "open" covers here. The code you write on ADK travels with you. The observability tooling, the managed hosting, the audit trail — those are Google Cloud products. That's a real dependency. Know what you're choosing.
What to Actually Do with This
If you're building agents right now: read the A2A spec before the SDK docs. Understanding Agent Cards — what goes into them, how signing works, what a well-defined skill description looks like — shapes how you design agents from the start. Adding discoverability to a system you built as closed is miserable. The official ADK A2A docs are genuinely readable and worth an hour.
If you're choosing a multi-agent framework: A2A v1.0 in production at 150 organizations, across every major framework, is a meaningful signal about where multi-agent interoperability is actually converging. MCP is worth understanding too — the two solve different layers of the same problem. But A2A is where cross-platform agent composition is happening in production, not in demos.
If you're speccing an enterprise AI project: look at Memory Bank and Agent Identity before you finalize the architecture. Persistent agent state and proper credential management are the two things that most demo architectures quietly skip. If yours skips them too, you'll add them later, under pressure, and it won't go cleanly.
The Part That's Easy to Miss
The keynote demo that got the biggest reaction showed a Gemini agent pulling data from thousands of ingredient PDFs, catching a soy allergen buried in one of them, then calling research agents to build a full market projection — autonomously, while the presenter talked.
That's a real capability and it's impressive. But it works because of things that weren't in the demo: agents that can find each other by capability, verify each other's identity, maintain context between calls, and do it inside an auditable security boundary.
The conference was loud today. TPU naming conventions, Apple on a Google slide, Sundar Pichai explaining that 75% of Google's new code is now AI-generated. That's all interesting. The part that matters for what developers actually ship is quieter: a protocol standard under neutral governance, running in production, with a security story you can defend.
Infrastructure doesn't announce itself. It just works, until the day you need it and it's not there.
The developer keynote is tomorrow at 10:30 AM PT on the DEV homepage. Worth catching for how the ADK and A2A story gets told to a technical audience rather than an executive one.
Top comments (0)