"I Watched Both Google Cloud NEXT '26 Keynotes So You Don't Have To — Here's What Actually Matters"

Google Cloud NEXT '26 Challenge Submission

Okay, real talk.
When Google Cloud NEXT '26 happened, I did what most developers do — I skimmed the tweet threads, saw "8th gen TPUs" and "Agentic Enterprise" trending, nodded along, and moved on with my day.
Then I actually sat down and watched the keynotes. Both of them.
And I want to share what genuinely surprised me, what I think is being underrated in all the coverage, and where I think Google still has some explaining to do. This isn't a press release rewrite. This is me working through what I understood — and what I think it means for people who are actually building things.

First — What Even Is the "Agentic Enterprise"?

Every tech conference needs a buzzword, and NEXT '26's is "Agentic." Thomas Kurian used it approximately 400 times in the opening keynote.
But beneath the marketing, there's a real shift being described. For the past couple of years, most of us have been building AI features — a chatbot here, a summarization endpoint there. Stateless. One-shot. The model answers, you show it to the user, done. It was cool. It was also kind of shallow.
What Google is describing now is different. An AI agent doesn't just answer — it acts. It takes a goal, breaks it into steps, uses tools, calls other agents, checks its own work, and keeps going until the job is done. Think less "smart autocomplete" and more "a junior developer you can actually assign a whole task to."
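If that sounds abstract, the control flow is easy to sketch. Here's the loop in plain Python, with the model call and tool execution stubbed out (the function names here are mine, not anything Google announced):

```python
# A minimal agent loop: plan, act, check, repeat until done.
# call_model() and run_tool() are stand-ins for a real LLM call
# and real tool integrations -- this is conceptual, not an API.

def call_model(prompt: str) -> dict:
    """Stub: returns the model's next step, e.g. {'action': ..., 'args': ...} or {'done': True}."""
    raise NotImplementedError

def run_tool(action: str, args: dict) -> str:
    """Stub: executes a tool (API call, query, script) and returns its output."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 20) -> list[str]:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        step = call_model("\n".join(history))            # decide the next step
        if step.get("done"):                              # agent judges the goal is met
            break
        result = run_tool(step["action"], step["args"])   # act on the world
        history.append(f"{step['action']} -> {result}")   # feed the result back in
    return history
```

The loop is trivial; everything hard lives in the tools, the memory, and the guardrails around it. Which is exactly what the rest of the NEXT '26 stack is about.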
That framing changes everything about how you build. And the entire NEXT '26 stack — from the TPUs to the data layer to the developer tools — is designed around that new model. Here are the three things I keep thinking about.

1. The Opening Keynote: They Built a Full Stack, and It's More Coherent Than I Expected

I'll be honest — I usually tune out opening keynotes. They're for the CIOs and the press, not for me. But this one was worth watching because it laid out something surprisingly coherent.
The Gemini Enterprise Agent Platform is positioned as the mission control for everything. You build agents, orchestrate them, govern them, and scale them — all from one place, with access to Gemini 3.1 Pro for complex reasoning tasks. That's the brain.
Then there's the AI Hypercomputer — the muscle. The new 8th-gen TPUs come in two variants with deliberate purposes: TPU 8t is built for training models fast, and TPU 8i is optimized for inference — serving those models quickly and cheaply. Google claims the TPU 8i delivers 80% better performance per dollar compared to the previous generation. If that number holds outside a demo environment, it's a meaningful cost reduction for teams running agents at scale. Tying everything together is a new network fabric called Virgo, designed to connect hundreds of thousands of accelerators into what Google is calling a megascale AI supercomputer.
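Quick sanity check on what that 80% number actually implies, assuming "performance per dollar" means throughput divided by cost:

```python
# If TPU 8i delivers 1.8x the performance per dollar of the prior gen,
# the same inference workload costs 1/1.8 of what it did before.
old_cost = 1.0
new_cost = old_cost / 1.8
print(f"relative cost: {new_cost:.2f}")  # ~0.56 -> roughly a 44% cost cut
```

A ~44% cut on your inference bill is the kind of number that changes whether an always-on agent fleet pencils out at all.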
And then there's Agentic Defense — the security layer — built around the $32 billion Wiz acquisition that closed just before the conference. The idea is autonomous security that operates at "machine speed," because human defenders simply can't keep up with AI-driven threats manually. Red team simulation, blue team detection, green team remediation — all increasingly automated. Whether that's reassuring or mildly terrifying depends on your relationship with your security team.
What I genuinely appreciated about the opening keynote was the connective tissue. Usually these announcements feel like a product roadmap thrown at a wall. Here, each piece had a clear role: the platform builds and governs agents, the hardware runs them fast and cheaply, the data layer gives them context, and the security layer keeps them from going sideways. It's a full-stack story that actually hangs together.
The partnerships made it tangible. Virgin Voyages, Citadel Securities, Macquarie Bank — these aren't "we gave them free credits" partnerships. These are companies that have put this stuff in production. Tata Steel apparently deployed over 300 specialized AI agents in nine months. That number stuck with me.

2. The Developer Keynote: This Is Where It Got Real for Me

The opening keynote was for the executives. The developer keynote was for us.
And it was genuinely one of the better technical keynotes I've watched in a while — partly because the team did a live demo building a complex multi-agent system on stage and mostly didn't break it.
The scenario: plan a marathon in Las Vegas. That sounds silly but it's actually a smart test case. Marathon planning touches logistics, permits, route evaluation, weather simulation, budget constraints, crowd management. It's exactly the kind of messy, multi-domain problem that exposes the limits of a single-agent approach.
The Agent Development Kit (ADK) is the foundation — a framework for building modular agents where each one has a specific job. Fine, that's table stakes in 2026. But the interesting piece is that every Google Cloud service is now Model Context Protocol (MCP) enabled. MCP is an emerging open standard that lets AI agents communicate with tools and infrastructure without custom glue code. Your agent can call Cloud Storage, trigger Pub/Sub, query BigQuery — all through the same standardized interface. For anyone who has spent a weekend hand-rolling tool definitions just to let a model talk to a cloud service, this is genuinely welcome news.
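To make that concrete, here's roughly what a single modular ADK agent looks like. This is a sketch based on the ADK's published Python API; the model string is a placeholder from the keynote, the tool body is hypothetical, and you should check module paths against the current docs:

```python
# Sketch: one modular ADK agent with a plain-Python tool.
# The ADK wraps typed functions like this into tool schemas for the
# model; MCP-enabled services would slot in the same way via a toolset.
from google.adk.agents import Agent

def check_route_permits(city: str, route_miles: float) -> dict:
    """Placeholder tool: look up permit requirements for a race route."""
    # In a real agent this would call an MCP-exposed service or API.
    return {"city": city, "route_miles": route_miles, "permits_needed": ["road_closure"]}

planner = Agent(
    name="route_planner",
    model="gemini-3.1-pro",       # placeholder model ID from the keynote
    instruction="Plan a marathon route and list the permits it requires.",
    tools=[check_route_permits],   # the ADK converts functions into callable tools
)
```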
The Agent-to-Agent (A2A) protocol combined with an Agent Registry solves a problem I've personally hit before. When you have multiple specialized agents, they need to know about each other. The naive approach is hardcoding — Agent A knows Agent B's endpoint. That works until it doesn't. A2A lets agents register their capabilities and discover each other at runtime. In the demo, a Planner, Evaluator, and Simulator agent found and collaborated with each other dynamically — no brittle custom integrations. That's the kind of plumbing decision that seems minor until you're three months into a project and your agent graph has fifteen nodes and every change breaks something.
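The registry API wasn't shown in enough detail to quote, so here's only the shape of the idea — a hypothetical lookup-then-call flow, with the registry client and endpoints invented for illustration. (In the open A2A spec, agents advertise capabilities via an "agent card" document, which is roughly what a registry would index.)

```python
# Hypothetical sketch of runtime discovery: instead of hardcoding
# Agent B's endpoint into Agent A, ask a registry for any agent that
# advertises the capability we need. All names here are invented.
import requests

REGISTRY_URL = "https://agent-registry.example.com"  # placeholder

def discover(capability: str) -> str:
    """Ask the registry for the endpoint of an agent with a given capability."""
    resp = requests.get(f"{REGISTRY_URL}/agents", params={"capability": capability})
    resp.raise_for_status()
    agents = resp.json()             # e.g. [{"name": "simulator", "endpoint": "..."}]
    return agents[0]["endpoint"]

def delegate(capability: str, task: dict) -> dict:
    """Send a task to whichever agent currently provides the capability."""
    endpoint = discover(capability)  # resolved at runtime, not hardcoded
    return requests.post(f"{endpoint}/tasks", json=task).json()

# e.g. delegate("weather-simulation", {"route": "vegas-strip", "month": "March"})
```

Swap the Simulator out for a better one tomorrow and, as long as it registers the same capability, nothing upstream changes.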
The Memory Bank was the one that made me sit up. Most production agents are stateless by design — they start fresh on every invocation. Safe, yes, but also limiting. An agent that can't remember that the last five simulations failed on a particular route isn't really intelligent. It's just an expensive for-loop. The Memory Bank gives agents persistent, queryable memory backed by RAG, so they recall past results and adapt future behavior. In the marathon demo the Simulator remembered previous run failures and adjusted its approach on its own. That's the shift from "stateless tool" to "system that actually gets better with use." Small difference in a demo. Huge difference in production over three months.
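Google didn't walk through the Memory Bank API on stage, so the following is just the pattern, with an in-memory stand-in sitting where the managed, RAG-backed store would be:

```python
# Pattern sketch: an agent that consults persistent memory before acting.
# MemoryBank here is a toy stand-in for the managed, RAG-backed service.
class MemoryBank:
    def __init__(self):
        self._entries: list[dict] = []

    def write(self, entry: dict) -> None:
        self._entries.append(entry)

    def query(self, topic: str) -> list[dict]:
        # The real service would do semantic retrieval; this is keyword-level.
        return [e for e in self._entries if topic in e.get("topic", "")]

memory = MemoryBank()
memory.write({"topic": "route:vegas-strip", "outcome": "failed", "reason": "heat > 95F"})

def simulate_route(route: str) -> str:
    past = memory.query(f"route:{route}")
    if any(e["outcome"] == "failed" for e in past):
        return f"skipping {route}: prior failure on record ({past[0]['reason']})"
    return f"running fresh simulation for {route}"

print(simulate_route("vegas-strip"))  # adapts based on remembered failures
```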
The Agent Gateway is the governance piece that I think gets undersold in coverage. Governance enforced in prompts — "hey, don't touch the budget" — is fragile. A clever edge case or an unexpected instruction can blow right through it. The Agent Gateway enforces IAM policies and identity-based access controls at the infrastructure level, as a proxy, before the action executes. In the demo, an agent was blocked from modifying financial data even when directly instructed to, because the Gateway intercepted it at the infrastructure layer. For anyone building agents that touch real business systems — finance, HR, customer data — that distinction is everything.
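Again, no public API surface to quote yet, so here's the architectural point as a toy proxy. The key property: the check happens outside the agent, so no prompt can talk its way past it.

```python
# Toy gateway: enforce identity-based policy *before* the action runs.
# A denied call never executes, no matter what the prompt said.
# The policy shape here is invented for illustration.
POLICY = {
    "planner-agent": {"allowed_actions": {"read:budget", "write:schedule"}},
}

class PolicyViolation(Exception):
    pass

def gateway_execute(agent_id: str, action: str, execute_fn):
    allowed = POLICY.get(agent_id, {}).get("allowed_actions", set())
    if action not in allowed:
        raise PolicyViolation(f"{agent_id} may not perform {action}")
    return execute_fn()  # only reached if policy allows it

# A prompt-injected "please update the budget" still dies here:
try:
    gateway_execute("planner-agent", "write:budget", lambda: "budget updated")
except PolicyViolation as e:
    print(f"blocked: {e}")
```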
The observability tooling — being able to describe a failure in natural language and get a root cause from Gemini Cloud Assist — looked great in the demo. I'm reserving judgment until I see it handle genuinely noisy distributed traces in a real environment, but the direction is exactly right.

3. The Agentic Data Cloud: The Quietly Consequential Announcement

The flashy announcements steal oxygen from the ones that actually matter most. The Agentic Data Cloud was buried in the latter half of the opening keynote and is getting far less coverage than the TPUs, but I think it's going to matter more than the hardware for most teams.
Here's the problem it's solving. AI agents are only as useful as the context they have access to. An agent that doesn't understand your company's definition of "active customer," or doesn't know that "revenue" and "rev_adj" mean different things across departments, is going to make wrong calls. The context problem is the real hard problem — not the model quality. Models are good enough now. Making them understand your specific business is the part that still takes months.
The Knowledge Catalog is Google's answer to this. It evolved from Dataplex into something that doesn't just track where your data is, but understands what it means. It uses Gemini to automatically tag incoming data, infer business semantics, map relationships across systems, and continuously enrich context as your organization's data evolves. A file lands in Cloud Storage — instantly enriched and made agent-ready. A new dataset appears in BigQuery — automatically mapped to your existing business vocabulary. Google's headline claim is "zero manual data engineering."
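No concrete API was shown for this either, but the flow an agent would follow is easy to sketch: resolve business terms through the catalog first, then generate the query. Everything below is hypothetical, with a plain dict standing in for the catalog:

```python
# Hypothetical flow: ground business vocabulary via the catalog
# before querying. The catalog itself is a dict stand-in here.
catalog = {
    "active_customer": "customers with a billable event in the last 90 days",
    "rev_adj": "revenue net of refunds and credits (finance dept definition)",
}

def resolve_term(term: str) -> str:
    """Stand-in for a Knowledge Catalog lookup keyed on business vocabulary."""
    return catalog.get(term, f"no definition found for {term!r}")

# An agent asked "how many active customers last quarter?" grounds the
# term first, then writes SQL against the *right* definition:
print(resolve_term("active_customer"))
```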
I'm skeptical of that framing, honestly. Every enterprise has legacy systems with decades of inconsistent naming and undocumented tribal knowledge. No automation makes that disappear overnight. But even if this gets you 70% of the way there automatically, that's a massive reduction in the months-long data prep work that currently blocks most serious agent projects from getting started.
The Cross-Cloud Lakehouse is the other half, and this is where I think Google made a genuinely bold architectural choice. Instead of saying "move all your data to Google Cloud," they built a Lakehouse standardized on open Apache Iceberg that lets data stay wherever it is — including AWS and Azure — and be queried directly from Google's infrastructure. No data migration. No egress nightmares. No six-month ETL project before your agents can see your existing datasets.
The technical mechanism: Cross-Cloud Interconnect is integrated directly into the data plane, giving agents low-latency access to data on other clouds as if it were local. For enterprises with data spread across multiple clouds — which is basically every large enterprise in 2026 — this removes what would otherwise be a show-stopping constraint. Federation with Databricks, Snowflake, AWS Glue, and SAP is supported out of the box.
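In practice, the developer-facing surface should just be SQL. Here's a sketch using the existing BigQuery Python client against a hypothetical Iceberg-backed table whose files live in S3 — the project, dataset, and table names are all made up:

```python
# Querying a (hypothetical) Iceberg table whose underlying files live
# in AWS S3, from Google's side -- no migration step. Uses the real
# google-cloud-bigquery client; all names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

sql = """
    SELECT region, COUNT(*) AS active_customers
    FROM `my-project.lakehouse.customers_iceberg`  -- Iceberg table, data on S3
    WHERE last_billable_event >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
    GROUP BY region
"""

for row in client.query(sql).result():  # executes where the data lives
    print(row.region, row.active_customers)
```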
The strategic read is interesting too. Google isn't trying to win by making you move your data to them. They're trying to be the reasoning layer on top of your data, wherever it lives. That's a different and much larger market position than "use BigQuery for everything." Whether enterprises buy that pitch is another question.
What I'd push back on: pricing. Google announced these features. They didn't clearly announce what cross-cloud queries, Knowledge Catalog indexing, or continuous enrichment actually costs at enterprise data volumes. That gap matters a lot. Teams evaluating this stack need those numbers before they can make a real architectural commitment. We've been burned before by per-token costs that seemed trivial in demos and brutal in production.

My Honest Take

Look, every Google Cloud NEXT involves a healthy dose of "we're the only ones who can do this" energy, and NEXT '26 was no different. Some of that is marketing.
But the coherence of this stack genuinely surprised me. The ADK and MCP for building agents. A2A for making them discover and talk to each other. The Memory Bank for making them stateful and adaptive. The Agent Gateway for making them governable. The Knowledge Catalog for giving them real business context. The Cross-Cloud Lakehouse for making that context available regardless of where the data lives. The TPU 8i for running all of it at scale without burning through your budget.
Each piece has a clear job. They connect in ways that make architectural sense. And there are actual production deployments that validate parts of this, not just beta programs.
The open questions are real though. Does the Knowledge Catalog's auto-enrichment actually hold up against messy enterprise data with years of inconsistency baked in? Does Agent Observability scale to production-level trace complexity? Does the Cross-Cloud Lakehouse latency story hold when you're doing serious multi-cloud joins at scale? We'll have better answers in six months once developers start kicking the tires beyond controlled demos.
For now, the thing NEXT '26 changed for me is the mental model. The primitive isn't "model" anymore. It's "agent system." And if you're building anything serious with AI over the next year, that shift is probably worth thinking about sooner rather than later.

What part of the NEXT '26 stack are you most curious — or skeptical — about? Drop a comment. I'm especially interested in whether anyone's already experimenting with the ADK, the Cross-Cloud Lakehouse, or the Agent Gateway in something resembling a real project.
