This is a submission for the Google Cloud NEXT Writing Challenge
I've been obsessing over Clinical Decision Support (CDS) systems for years.
And if you've ever built one — a real one, one that has to watch over a patient in the ICU at 2 AM, react to a blood pressure crash, cross-reference incoming lab results, parse a nursing note written in shorthand, and still deliver an actionable alert before it's too late — you know the dark secret at the heart of most of them.
They are not real-time.
They are very fast batch jobs wearing a real-time costume.
You pre-configure thresholds based on clinical guidelines written three years ago. You poll the EHR every two minutes. You fire alerts based on data that is already stale. And you spend half your architectural energy managing the gap between what the patient's state actually is and what the system thinks it is right now.
I've been sitting with that problem for a long time. Then I watched the Google Cloud NEXT '26 Opening and Developer Keynotes. And something finally clicked.
What Google Announced — And Why It's Bigger Than It Looks
The big headline from the Opening Keynote is the Gemini Enterprise Agent Platform — Google's transformation of Vertex AI into an end-to-end system for building, governing, and deploying autonomous AI agents at scale. Over 32,000 attendees were at Next '26 in Las Vegas, and the energy in the room during these announcements made it clear this wasn't just incremental.
Most people read it as an enterprise productivity story. Summarize your discharge reports. Build a chatbot for scheduling.
I read it as a clinical infrastructure story. Specifically: the missing architectural layer that finally makes a reasoning-based, real-time CDS system buildable — without a team of 20 engineers and a five-year roadmap.
Let me show you what I mean.
The Real Problem Is Not Compute. It's Orchestration.
Picture this: a post-surgical patient in Bed 4, ICU, Day 2. It's 3 AM.
- Heart rate climbs slowly — subtle, but there.
- Blood pressure ticks down by 12% over 45 minutes.
- The night nurse notes "patient seems agitated" in a free-text field.
- Lab results from two hours ago show lactate trending upward.
- The on-call doc prescribed a new antibiotic 30 minutes ago.
Should a sepsis alert fire right now?
Not in five minutes when the batch job runs. Not based on a cached lab from two hours ago. Right now.
In most CDS systems I've seen, nobody actually knows. The system fires whatever rule last evaluated as "true" and hopes the clinical staff connect the dots.
The problem is not that we lack data. The problem is that the data is trapped in silos — vitals in one system, labs in another, clinical notes in a third, medications in a fourth — and getting them to converge into a single reasoning step, in near-real-time, is an orchestration nightmare we've been duct-taping together for decades.
That is not a database problem. It is a reasoning problem.
The Announcement That Changes Everything: Agentic Data Cloud
The keynote highlight for me wasn't a faster GPU or a cheaper API. It was the Agentic Data Cloud.
Google described it as "a new, AI-native architecture that allows your data to be utilized at the speed and scale required by agentic AI." At its core is the Knowledge Catalog — a universal context engine that maps and infers business meaning across your entire data estate using aggregation, enrichment, and search to help agents execute tasks accurately.
For healthcare, this is transformational.
Today, your clinical alert logic must explicitly know the schema of your EHR, the format of your HL7 vitals feed, and the API surface of your LIMS. Someone wrote all those connections into code. Someone maintains them. Every system upgrade breaks something.
With the Knowledge Catalog, a clinical agent doesn't need to be told: "When blood pressure drops more than 10% AND lactate is above 2 mmol/L AND a new antibiotic was prescribed in the last four hours, evaluate for sepsis."
It can reason over the connected data estate and arrive at that logic itself. And when clinical guidelines change — when the Surviving Sepsis Campaign releases a new protocol — the agent's understanding can update without a code deploy.
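For contrast, here is what the hardcoded version of that rule looks like in most CDS systems today. This is an illustrative sketch; the field names, thresholds, and the `PatientSnapshot` shape are hypothetical, but the pattern of frozen-in-code logic is the one being replaced:

```python
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    """Point-in-time view, assembled by hand from separate source systems."""
    bp_drop_pct: float          # systolic BP drop vs. baseline, percent
    lactate_mmol_l: float       # most recent lactate, mmol/L
    hours_since_new_abx: float  # hours since a new antibiotic order

def sepsis_rule_fires(p: PatientSnapshot) -> bool:
    # Every threshold is frozen into code. When the protocol changes,
    # this function needs a code change, a review, and a redeploy.
    return (
        p.bp_drop_pct > 10.0
        and p.lactate_mmol_l > 2.0
        and p.hours_since_new_abx < 4.0
    )

print(sepsis_rule_fires(PatientSnapshot(12.0, 2.4, 0.5)))  # True
```

The brittleness is the point: the rule is only as current as its last deploy, which is exactly the gap a reasoning agent over a live data estate closes.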
That is a fundamentally different model.
What a Real "Reasoning" Triage Agent Looks Like
The Developer Keynote introduced several tools that bring this to life. Here's the architecture I'm now designing using what was announced this week:
The Triage Agent — Built on ADK with Memory Bank
Google announced the Agent Development Kit (ADK), which "unlocks more powerful reasoning by organizing agents into a network of sub-agents" using a graph-based framework that defines "clear, reliable logic for how agents work together."
A Triage Agent built on this would be persistent and stateful — not a model you call per request, but one that lives in the ICU alongside the care team.
- Agent Memory Bank (now GA): Maintains continuous context about each patient — baseline vitals, medication history, the reasoning behind every previous alert. When the patient's heart rate climbs, the agent doesn't start from scratch. It already knows this patient's normal range. Google specifically highlighted that Memory Bank uses "Memory Profiles" for "high-accuracy details with low latency."
- Multi-Agent Coordination: The Triage Agent is not alone. A Vitals Monitor Agent, a Lab Parsing Agent, and a Pharmacy Interaction Agent each watch their slice of the data and push real-time events up the chain. No polling. No batch jobs.
- Real-Time Data via Cross-Cloud Lakehouse: Announced this week and built on Apache Iceberg REST Catalog, this enables agents to "seamlessly access data across AWS, Azure, and a vast partner ecosystem" — meaning your vitals feed, your LIMS, and your EHR can all be queried without building fragile ETL pipelines.
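The coordination pattern in those three bullets, sub-agents pushing events to a stateful triage agent instead of the triage agent polling each source, can be sketched in plain Python. This is not ADK's actual API (which will look different); the class and event names here are my own illustration of the topology:

```python
import queue
from dataclasses import dataclass

@dataclass
class ClinicalEvent:
    source: str      # e.g. "vitals", "labs", "pharmacy"
    patient_id: str
    payload: dict

class TriageAgent:
    """Stateful consumer: sub-agents push events as data arrives."""
    def __init__(self) -> None:
        self.inbox: "queue.Queue[ClinicalEvent]" = queue.Queue()
        # Per-patient accumulated context -- the role Memory Bank
        # would play in the real architecture.
        self.context: dict = {}

    def publish(self, event: ClinicalEvent) -> None:
        self.inbox.put(event)

    def step(self) -> ClinicalEvent:
        event = self.inbox.get()
        self.context.setdefault(event.patient_id, []).append(event)
        return event

triage = TriageAgent()
# Monitor agents push the moment something changes -- no timer,
# no batch window, no polling loop.
triage.publish(ClinicalEvent("vitals", "bed-4", {"hr": 112}))
triage.publish(ClinicalEvent("labs", "bed-4", {"lactate": 2.4}))
triage.step()
triage.step()
print(len(triage.context["bed-4"]))  # 2
```

The design choice that matters is the direction of data flow: the triage agent never asks "what changed?" because every change arrives as an event with full provenance attached.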
The Governance Layer (Non-Negotiable in Healthcare)
Here is the part most AI architecture posts skip — and the part that determines whether a hospital ever lets you deploy this in production.
Google announced Agent Observability and Agent Evaluation tools that "visually trace complex reasoning to debug issues as they happen" and can "evaluate the logic of an entire conversation, not just a single response." For every decision the Triage Agent makes, we get: what signals it considered, what reasoning path it followed, and why it fired (or did not fire) an alert.
They also announced Agent Identity, which "assigns every agent a verifiable identity in the form of a unique cryptographic ID" and creates "a clear, auditable trail for every action an agent takes." In a regulated healthcare environment, an AI system that cannot explain itself — and prove who authorized its actions — is a liability, not an asset.
Confidence thresholds matter here too. If the Triage Agent's sepsis probability falls in the uncertainty zone, it doesn't auto-alert. It routes to a charge nurse. This is not a workaround. This is the right clinical architecture.
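That routing logic is simple enough to write down. The threshold values below are placeholders, not clinical recommendations; the point is the shape, where uncertainty routes to a human instead of resolving to silence or noise:

```python
def route_alert(sepsis_probability: float,
                alert_threshold: float = 0.85,
                review_threshold: float = 0.50) -> str:
    """Route by confidence. Only high-confidence findings auto-alert;
    the uncertainty zone goes to a human, never to the void."""
    if sepsis_probability >= alert_threshold:
        return "fire_alert"             # page the rapid-response team
    if sepsis_probability >= review_threshold:
        return "route_to_charge_nurse"  # human-in-the-loop review
    return "log_and_continue"           # keep watching, no interruption

print(route_alert(0.91))  # fire_alert
print(route_alert(0.62))  # route_to_charge_nurse
print(route_alert(0.20))  # log_and_continue
```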
The Part That Surprised Me: Managed MCP Servers
I expected the agent platform. I did not expect Managed Model Context Protocol (MCP) Servers for databases.
Google announced these specifically to "securely manage the infrastructure to connect AI models directly to your operational data, eliminating the burden of hosting MCP servers." For anyone unfamiliar, MCP is the protocol that lets agents call external tools in a standardized way — essentially the USB-C for AI integrations.
We have spent years writing bespoke API wrappers to connect AI systems to Epic, Cerner, and Meditech. Fragile. Expensive. Breaks on every software update.
With managed MCP, the Triage Agent can call clinical tools — order a STAT lab, page a specialist, flag a chart for review — through a governed, observable, rate-limited interface. Standard protocol. No custom wrappers. No integration nightmares for every new hospital system.
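Under the hood, an MCP tool invocation is a JSON-RPC 2.0 request using the protocol's `tools/call` method. The tool name and arguments below are hypothetical (no hospital exposes `order_stat_lab` today), but the envelope is the standardized part that replaces the bespoke wrappers:

```python
import json

def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, the envelope MCP
    defines for tool invocation. Transport, auth, rate limiting, and
    audit logging are what the managed server layer would own."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical clinical tool exposed by a hospital's MCP server:
request = mcp_tool_call("order_stat_lab",
                        {"patient_id": "bed-4", "panel": "lactate"})
print(json.loads(request)["method"])  # tools/call
```

Because every hospital system speaks the same envelope, adding a new tool means registering it with the MCP server, not writing and maintaining another integration client.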
Google also announced a Cloud Storage MCP server, a managed MCP server for Looker, and a Workspace MCP server this week. The pattern is clear: MCP is becoming the universal API layer for the entire agentic ecosystem.
My Honest Take
Google Cloud NEXT '26 gave us something we've been waiting for: the connective tissue.
Not a better model. Not cheaper inference. A complete, production-grade architecture — ADK, Memory Bank, Knowledge Catalog, managed MCP, Agent Identity, Agent Observability — for building AI systems that reason in real time over fragmented, heterogeneous, regulated data.
One stat from the keynote sticks with me: Google updated their running list of real-world AI use cases at Next '26 to 1,302 customer stories. Healthcare is already in there — Highmark Health's AI assistant delivered $27.9 million in value in 2025 alone. Merck is deploying an agentic platform across their entire R&D and manufacturing operations.
The enterprise isn't waiting. Healthcare can't afford to either.
But here is what I want to be honest about.
None of this works if you hand the keys to the agent and walk away. The technology is ready. The domain discipline is still ours to provide.
You have to decide what "correct triage" means for your patient population. You have to define the guardrails, the confidence thresholds, the escalation paths. The agent acts faster than any human on better information — but the strategic clinical reasoning still belongs to the care team.
That has always been the contract in medical informatics. You build the best possible decision support, and then you make sure the clinician is still the one making the decision.
It is just that, after this week, the "best possible" got a whole lot better.
Have you worked on clinical AI or real-time medical data systems? I'd love to hear how you're thinking about agents in this space — drop it in the comments.