This is a submission for the Google Cloud NEXT Writing Challenge
Most of my projects didn’t fail in dramatic ways.
No AI going rogue. No crazy bugs.
They failed in boring, frustrating ways.
Things like:
- features working… but not actually useful
- systems breaking when the internet disappears
- users giving inputs that don’t reflect reality
While watching Google Cloud NEXT ‘26, I kept thinking:
This would’ve saved me a lot of trial and error.
Not because the tech is flashy — but because it actually addresses the kind of problems you run into when you try to build something real.
Where I Started: Building With Constraints
One of my projects was an offline communication system for disaster management (EchoRelief).
The idea was simple — allow people to communicate during disasters without internet.
But building it wasn’t simple at all.
The biggest problems weren’t “technical” in the usual sense. They were constraints:
- devices not connecting properly over local networks
- dependencies silently failing because they relied on the internet
- figuring out how to keep things simple but still usable
At some point I realised:
building for the real world is mostly about handling what doesn’t work.
A Completely Different Problem: Humans
Another project I worked on (MindGuard) tried to estimate cognitive fatigue.
Sounds straightforward — until you realise the inputs are things like:
- sleep
- stress
- screen time
All self-reported.
Which basically means: not reliable.
At first I tried a simple weighted model.
Then I realised combining things like “sleep” and “stress” into a single score isn’t clean: they sit on different scales and interact in ways fixed weights can’t capture.
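That first attempt looked roughly like this. The weights, scales, and the 8-hour baseline are invented for illustration, not my actual values:

```python
# A naive weighted fatigue score - roughly where I started.
# All weights and scales here are made up for illustration.

def fatigue_score(sleep_hours: float, stress: int, screen_hours: float) -> float:
    """Combine self-reported inputs into a 0-100 fatigue estimate."""
    sleep_deficit = max(0.0, 8.0 - sleep_hours)  # hours under an 8-hour baseline
    score = (
        sleep_deficit * 8.0      # each missing hour of sleep
        + stress * 5.0           # stress on a 1-10 scale
        + screen_hours * 2.0     # hours of screen time
    )
    return min(score, 100.0)

print(fatigue_score(sleep_hours=5.5, stress=7, screen_hours=9.0))  # → 73.0
```

Every input gets flattened into one number, and the weights never adapt to the person. That's the wall I hit.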
So I moved toward a more adaptive approach using an LLM — not as a predictor, but as a reasoning layer.
That worked better.
But it also introduced a new problem:
How do you trust the output?
What NEXT ‘26 Actually Gets Right
A lot of announcements at NEXT ‘26 were impressive.
But a few things stood out to me because they directly relate to problems I’ve already faced.
1. MCP (Model Context Protocol) — Fixing Bad Inputs
One of the biggest limitations in my projects was input quality.
Everything depended on what the user entered.
The idea behind MCP — giving systems structured access to external tools and data — would completely change that.
Instead of relying on:
- “How many hours did you sleep?”
You could pull:
- actual device data
- screen time APIs
- wearable inputs
That shift — from assumed data to real data — is huge.
Because most systems don’t fail due to logic.
They fail because the input itself is flawed.
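In code, the shift looks something like this. This is a sketch of the idea, not the actual MCP SDK, and the device store below is a hypothetical stand-in for whatever source an MCP tool would wrap:

```python
# Sketch of the shift: instead of parsing a free-text answer,
# the system calls a declared tool with a known shape.
# The device "store" dict is a hypothetical stand-in for a real API.

from dataclasses import dataclass

@dataclass
class SleepReading:
    hours: float
    source: str  # where the number came from, so it can be audited

def sleep_from_user(answer: str) -> SleepReading:
    """The old way: trust whatever the user typed."""
    return SleepReading(hours=float(answer), source="self-reported")

def sleep_from_device(store: dict) -> SleepReading:
    """The MCP-style way: pull the measurement from a structured source."""
    return SleepReading(hours=store["sleep_hours"], source="device")

print(sleep_from_user("8"))                     # whatever they claimed
print(sleep_from_device({"sleep_hours": 5.8}))  # what actually happened
```

Same downstream logic, but now every number carries its provenance, and the gap between "claimed" and "measured" is visible instead of invisible.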
2. Observability & Evals — Catching Silent Failures
Another thing I struggled with:
Systems don’t always break loudly.
Sometimes they just… behave incorrectly.
And you don’t notice until much later.
The idea of integrated evals and observability for agents is something I wish I had earlier.
Not just:
- “Is it running?”
But:
- “Is it doing what it’s supposed to do?”
That’s a big difference.
Especially when systems become more autonomous or rely on reasoning instead of fixed logic.
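A minimal version of that check, as a sketch. The system under test and the expected ranges are made up here; the point is that each case states what "doing its job" means, not just "did it return":

```python
# A minimal eval loop - the kind of check I wish I'd had earlier.
# estimate_fatigue and the expected ranges are hypothetical.

def estimate_fatigue(sleep_hours: float, stress: int) -> float:
    # stand-in for the real system under test
    return max(0.0, (8.0 - sleep_hours) * 10 + stress * 3)

# Each case declares expected behaviour, not just "it ran".
EVAL_CASES = [
    {"inputs": {"sleep_hours": 8.0, "stress": 1}, "max_expected": 10.0},
    {"inputs": {"sleep_hours": 4.0, "stress": 9}, "min_expected": 40.0},
]

def run_evals() -> list:
    failures = []
    for case in EVAL_CASES:
        score = estimate_fatigue(**case["inputs"])
        if "max_expected" in case and score > case["max_expected"]:
            failures.append((case, score))
        if "min_expected" in case and score < case["min_expected"]:
            failures.append((case, score))
    return failures

print(run_evals())  # → [] when behaviour matches expectations
```

Run on every change, this catches the "behaving incorrectly" failures long before a user does.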
3. A2A Protocol — Powerful, But Still Early
The Agent-to-Agent (A2A) protocol is probably one of the most talked-about ideas.
And yeah — the idea is strong.

Agents discovering each other, delegating tasks, collaborating without hardcoded integrations.
But honestly, it also feels like something that’s still evolving.
From what I’ve seen while building:
the hardest part isn’t communication — it’s clarity.
Defining:
- what a system actually does
- what inputs it expects
- what outputs it guarantees
That’s not trivial.
And without that clarity, even the best protocol won’t fully solve the problem.
So while A2A is exciting, I think the real challenge is still ahead — standardising meaning, not just communication.
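For me, that clarity starts with writing the contract down. This sketch is loosely inspired by A2A's agent-card idea, but the schema is my own invention, not the spec:

```python
# What "clarity" means in practice: a declared contract, written down.
# This schema is my own sketch, loosely inspired by A2A agent cards -
# it is NOT the actual spec.

AGENT_CARD = {
    "name": "fatigue-estimator",
    "does": "Estimates cognitive fatigue from daily activity data.",
    "inputs": {
        "sleep_hours": "float, 0-24",
        "stress": "int, 1-10",
    },
    "outputs": {
        "fatigue_score": "float, 0-100",
        "confidence": "float, 0-1",
    },
}

def validate_request(card: dict, request: dict) -> list:
    """Reject anything the contract doesn't promise to handle."""
    return [key for key in request if key not in card["inputs"]]

print(validate_request(AGENT_CARD, {"sleep_hours": 6.0, "mood": "ok"}))
# unknown fields surface immediately instead of failing silently later
```

Two agents can only delegate safely once both sides agree on this kind of contract, which is exactly the "standardising meaning" part that protocols alone don't solve.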
The Bigger Shift (This Is What Matters)
What stood out to me isn’t just the tools.
It’s the shift in mindset.
We’re moving from:
“Can the model give the right answer?”
to:
“Can the system actually work in the real world?”
That includes:
- unreliable inputs
- edge cases
- system constraints
- imperfect environments
And that’s where things get hard.
Final Thought
I’m still early in my journey, and most of what I build is far from perfect.
But working on real projects taught me something simple:
Building something that works in theory is easy.
Building something that works in reality is where the real effort is.
And for the first time, it feels like a lot of what was announced at NEXT ‘26 is actually moving in that direction.
Not just making models better —
but making systems usable.
If you’ve tried building anything beyond tutorials, you probably know exactly what I mean.