This is a submission for the Google Cloud NEXT Writing Challenge
I'm at Google Cloud Next this week, and something someone said in passing between sessions stuck with me:
"Agents are the new microservices."
At first it felt like conference hyperbole. By the end of the keynotes, I wasn't so sure.
We've Been Here Before
Cast your mind back to 2012–2015. We were breaking apart monoliths. The pitch was compelling — smaller, focused services, independently deployable, easier to reason about. The reality took years to catch up. We needed service meshes to manage traffic, distributed tracing to debug across boundaries, circuit breakers to handle failures gracefully, and IAM policies to figure out which service could talk to which.
The decomposition was easy. The discipline was hard.
I think we're at the exact same inflection point with AI agents — and what Google shipped at Next '26 is the clearest signal yet.
The Announcements That Made It Click
Google didn't just talk about agents in the abstract. They shipped the infrastructure primitives:
Agent Identity — Every agent gets a unique cryptographic ID with scoped permissions and audit trails. Agents can operate autonomously but within defined authorization boundaries, and escalate to a human when they hit the edge of their scope.
Agent Gateway — Centralized policy enforcement for all agent-to-agent and agent-to-tool interactions. It understands MCP and A2A natively, inspects every interaction, and integrates with Model Armor for runtime protection against prompt injection and tool poisoning.
Agent Registry — They literally called it "the DNS of your internet of agents." Agents advertise capabilities via signed Agent Cards. Other agents discover and route to them. Sound familiar?
A2A Protocol v1.2 — Now in production at 150+ organizations including Microsoft, AWS, Salesforce, SAP, and ServiceNow. Governed by the Linux Foundation. IBM's competing ACP protocol voluntarily merged into it last year. The standard is settling.
The Architecture That's Forming
Here's the mental model I came away with:
- MCP = how an agent connects to tools and data (think: the service's internal dependencies)
- A2A = how agents communicate with each other across platforms and orgs (think: the inter-service API contract)
- Agent Identity = trust and authorization (think: mTLS + IAM, but for agents)
- Agent Gateway = policy enforcement at the boundary (think: your API gateway / service mesh)
- Agent Registry = discovery (think: service registry / DNS)

If you squint, it's a microservice architecture. The components map almost 1:1. The difference is that the nodes in this network are nondeterministic — they don't return a predictable response to a given input the way a REST endpoint does. That changes the failure modes significantly.
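To make the mapping concrete, here's what one agent-to-agent call looks like when identity and gateway are in play. This is purely illustrative: the class names, scope format, and escalation behavior are my sketch of the pattern, not Google's actual API surface:

```python
from dataclasses import dataclass, field

# Illustrative types mirroring the roles above; not a real Google Cloud API.

@dataclass
class AgentIdentity:
    """Trust and authorization: a unique ID plus scoped permissions."""
    agent_id: str
    scopes: set[str] = field(default_factory=set)  # authorization boundary

@dataclass
class Gateway:
    """Policy enforcement at the boundary: the service-mesh role."""
    audit_log: list[str] = field(default_factory=list)

    def call(self, caller: AgentIdentity, target_url: str,
             scope: str, payload: dict) -> dict:
        # Every interaction is inspected and recorded, allowed or not.
        self.audit_log.append(f"{caller.agent_id} -> {target_url} [{scope}]")
        if scope not in caller.scopes:
            # Edge of the authorization boundary: escalate, don't proceed.
            return {"status": "escalate_to_human",
                    "reason": f"missing scope: {scope}"}
        # A real system would make an A2A request over HTTP here.
        return {"status": "ok", "echo": payload}

ledger_bot = AgentIdentity("ledger-bot", scopes={"invoices.read"})
gw = Gateway()

allowed = gw.call(ledger_bot, "https://agents.example.com/invoices",
                  "invoices.read", {"q": "Q3"})
blocked = gw.call(ledger_bot, "https://agents.example.com/payments",
                  "payments.write", {"amount": 100})
print(allowed["status"], blocked["status"])  # ok escalate_to_human
```

Note that the gateway logs both calls: the audit trail exists whether or not the policy allowed the interaction, which is exactly the property you want when debugging across boundaries.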
What's Actually Different (And Harder)
With microservices, failure is binary and observable. A service either returns 200 or it doesn't. You can write deterministic tests, set SLAs, and build dashboards around clear metrics.
With agents, failure is semantic. An agent can return something that looks correct and is completely wrong for the task. Observability isn't just logs and latency anymore — it's "did the agent do the right thing?", which is fundamentally harder to instrument.
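The contrast is easy to show side by side: a service health check asserts on a status code, while an agent check has to assert on meaning. In this toy sketch the "judge" is just a keyword heuristic standing in for a real evaluator — the scenario, the policy terms, and both answers are invented for illustration:

```python
def check_service(status_code: int) -> bool:
    # Microservice failure: binary and observable.
    return status_code == 200

def check_agent(answer: str) -> bool:
    # Agent failure: semantic. A fluent answer can still be wrong for the task.
    # Toy stand-in for an LLM-as-judge or rubric-based evaluator.
    required_terms = {"refund", "30 days"}
    return all(term in answer.lower() for term in required_terms)

# Both answers would "return 200" — only one actually answers the question.
good = "Customers may request a refund within 30 days of purchase."
bad = "Our refund policy is generous and customer-friendly."

print(check_service(200))  # True
print(check_agent(good))   # True
print(check_agent(bad))    # False — plausible-sounding, but missing the policy
```

Real semantic evaluation is far messier than a keyword set, which is the point: the assertion itself becomes a judgment call, and that's what the current observability tooling hasn't solved yet.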
Google's response to this is Agent Observability and Model Armor — but we're early. The tooling for semantic correctness is nowhere near as mature as distributed tracing eventually became for microservices.
Who Has the Advantage
The engineers who survived the microservices wars — who built retry logic, designed for idempotency, drew service boundary diagrams, and debugged cascading failures at 2am — are going to have a serious edge here.
The instincts transfer: keep agents focused on a single responsibility, design for failure at every boundary, don't share mutable state between agents, make every interaction traceable.
What doesn't transfer is the assumption that a system that runs is a system that works. That's the new discipline we're building.
Where I Think This Goes
A2A reaching the Linux Foundation with 150+ production orgs in under a year is legitimately impressive. For context, it took years for microservice tooling (Kubernetes, Istio, Jaeger) to reach equivalent adoption. The compression is real.
I left the keynotes thinking the analogy isn't just rhetorical anymore. The same organizational forces that pushed teams toward microservices — scale, team autonomy, independent deployment — are pushing toward multi-agent architectures. And Google just built the Kubernetes-equivalent control plane for it.
The question isn't if this becomes the dominant enterprise architecture. It's whether your team builds the discipline before you're debugging agent spaghetti at 2am.
Written from the conference floor at Google Cloud Next '26, Las Vegas.