This is a submission for the Google Cloud NEXT Writing Challenge
A few days ago, I shipped PlanetLedger — a weekend hackathon project that turns bank transactions into environmental impact insights.
It has:
- an event-driven pipeline (OpenClaw)
- an agent layer with memory
- RAG-grounded insights
- deterministic scoring + AI fallback
On paper, it looks like a modern AI system.
Then I watched Google Cloud NEXT ‘26.
And something didn’t sit right.
## The uncomfortable realisation
While going through the announcements —
the Gemini Enterprise Agent Platform, Agentic Data Cloud, and long-running autonomous agents — I had this thought:
I didn’t build an AI system.
I built a pipeline that treats AI as the final step — not the decision-maker.
That sounds subtle.
It’s not.
## My architecture (before NEXT)
PlanetLedger today works like this:
upload → parse → categorise → score → build context → generate insights → notify
Internally:
event → OpenClaw → workflows → AI → UI
It’s clean, predictable, and works well.
But it’s also fully deterministic until the very last step.
AI is where the pipeline ends, not where decisions begin.
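That flow can be sketched in a few lines. Everything below is an illustrative stand-in, not PlanetLedger code: the categorisation rules, CO2e factors, and function names are invented, and the AI step is stubbed as a plain summary string.

```python
# Sketch of the current pipeline shape: every step is deterministic,
# and the AI call would only ever appear at the final step.

def parse(raw_rows):
    # Deterministic: split "merchant,amount" strings into records.
    return [{"merchant": m, "amount": float(a)}
            for m, a in (row.split(",") for row in raw_rows)]

def categorise(txns):
    # Deterministic: keyword lookup, no model involved.
    rules = {"shell": "fuel", "tesco": "groceries"}
    for t in txns:
        t["category"] = rules.get(t["merchant"].lower(), "other")
    return txns

def score(txns):
    # Deterministic: fixed CO2e factor per category (made-up numbers).
    factors = {"fuel": 2.3, "groceries": 0.5, "other": 0.1}
    for t in txns:
        t["co2e"] = round(t["amount"] * factors[t["category"]], 2)
    return txns

def generate_insights(txns):
    # The ONLY step where AI would sit; stubbed here as a summary string.
    total = sum(t["co2e"] for t in txns)
    return f"Estimated footprint: {total} kg CO2e across {len(txns)} transactions"

def run_pipeline(raw_rows):
    # upload → parse → categorise → score → generate insights
    return generate_insights(score(categorise(parse(raw_rows))))

print(run_pipeline(["Shell,40.0", "Tesco,20.0"]))
```

Nothing in this chain ever makes a decision; the shape of execution is fixed before any data arrives.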
## 🤖 What NEXT ‘26 changes
The biggest shift across announcements wasn’t better models.
It was this:
We’ve entered the agentic era.
AI is no longer something you call.
It’s something that:
- acts
- reasons over data
- runs workflows autonomously
Three announcements made that click for me:
- The Gemini Enterprise Agent Platform → build and scale real agents
- The Agentic Data Cloud → agents reason directly over structured data
- Long-running agents in serverless environments → agents don’t just respond, they operate
## What this actually looks like on Google Cloud
Mapping my system to Google Cloud made the gap obvious:
| What I built | Google Cloud direction |
|---|---|
| OpenClaw event triggers | Pub/Sub / Eventarc |
| Hardcoded workflows | Workflows / agent execution |
| RAG context builder | Agentic Data Cloud |
| LLM calls for insights | Gemini agents |
| Cron-based automation | Long-running autonomous agents |
What I built locally is essentially a proto-version of a cloud-native agent system — but missing the intelligence layer at the core.
## Reimagining PlanetLedger
So I asked:
What if PlanetLedger wasn’t a pipeline… but an agent?
### 1. Events → from triggers to signals
Today:
transactions_uploaded → trigger workflows
In an agent-first system:
transactions_uploaded → agent decides what to do
Same event.
Completely different meaning.
Events stop being instructions.
They become inputs for reasoning.
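The difference can be made concrete with a toy handler. The event shape and the small decision policy below are hypothetical, not from PlanetLedger; the point is only that the pipeline version always returns the same plan, while the agent version lets the event's content change what happens.

```python
# Same event, two meanings: instruction vs. signal.
EVENT = {"type": "transactions_uploaded", "count": 3, "anomalies": 1}

def pipeline_handler(event):
    # Event as instruction: the system always runs the fixed sequence.
    return ["parse", "categorise", "score", "generate_insights"]

def agent_handler(event):
    # Event as signal: the agent reasons about what should happen next.
    plan = ["parse", "categorise", "score"]
    if event.get("anomalies", 0) > 0:
        plan.append("raise_alert")        # something a fixed pipeline never decides
    else:
        plan.append("generate_insights")  # the routine case
    return plan

print(pipeline_handler(EVENT))
print(agent_handler(EVENT))
```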
### 2. Pipeline → replaced by a Financial Agent
Today, I explicitly define:
- parse
- categorise
- score
- generate insights
In an agent-based system:
Financial Agent:
- understands transactions
- detects patterns
- decides what matters
- chooses actions
Instead of:
“run this sequence”
It becomes:
“figure out what needs to happen”
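As a sketch, such an agent might look like the following. The hand-written rules and thresholds stand in for model reasoning; in a real build, those decisions would come from Gemini-style calls rather than `if` statements.

```python
# Toy "Financial Agent": decides what needs to happen instead of
# running a fixed sequence. All rules and thresholds are invented.

def financial_agent(transactions):
    decisions = []
    fuel_spend = sum(t["amount"] for t in transactions
                     if t["category"] == "fuel")
    # "detects patterns": a spike in one category
    if fuel_spend > 100:
        decisions.append("alert:fuel_spike")
    # "decides what matters": only summarise when there is enough data
    if len(transactions) >= 3:
        decisions.append("generate_insights")
    return decisions or ["wait_for_more_data"]

txns = [
    {"category": "fuel", "amount": 80.0},
    {"category": "fuel", "amount": 45.0},
    {"category": "groceries", "amount": 20.0},
]
print(financial_agent(txns))
```

Note that the output is a set of decisions, not a sequence of steps: the same function, given different data, would choose to do nothing at all.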
### 3. RAG → becomes native data reasoning
Right now, I manually construct context:
- last 7 days
- top categories
- detected patterns
Then inject it into prompts.
With something like the Agentic Data Cloud:
- the agent queries data directly
- builds its own context
- adapts dynamically
Less glue code. More intelligence.
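Here is roughly what that glue code looks like today, with invented field names. This is exactly the layer that agent-native data access would replace: the agent would run these queries itself instead of having context pre-baked into the prompt.

```python
# Manual context construction: filter to the last 7 days, rank
# categories, then inline everything into the prompt as text.
from datetime import date, timedelta
from collections import Counter

def build_context(txns, today):
    week_ago = today - timedelta(days=7)
    recent = [t for t in txns if t["date"] >= week_ago]
    top = Counter(t["category"] for t in recent).most_common(2)
    return {"window": "7d", "top_categories": [c for c, _ in top]}

def build_prompt(context):
    # Glue code: context injected as a string for the model to read.
    return f"Summarise spending. Top categories: {', '.join(context['top_categories'])}."

today = date(2026, 4, 15)
txns = [
    {"category": "fuel", "date": date(2026, 4, 14)},
    {"category": "fuel", "date": date(2026, 4, 12)},
    {"category": "groceries", "date": date(2026, 4, 13)},
    {"category": "travel", "date": date(2026, 3, 1)},  # outside the window
]
print(build_prompt(build_context(txns, today)))
```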
### 4. Workflows → become optional
OpenClaw is intentionally simple:
- sequential
- deterministic
- easy to debug
But it’s still explicit orchestration.
The direction from NEXT suggests:
- long-running agents
- dynamic tool usage
- adaptive execution paths
Instead of:
step A → step B → step C
You get:
goal → agent decides steps → executes tools
More powerful.
Also harder to control.
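A minimal sketch of that shape, with a stubbed planner and a dictionary of fake tools. A real long-running agent would plan with a model and call real tool APIs; the structure here, goal in, tool choices out, an auditable trace of what ran, is the part that carries over.

```python
# Goal-directed execution: the agent is handed a goal and a toolbox,
# and chooses which tools to run and in what order. Tool names and
# the planner policy are invented for illustration.

TOOLS = {
    "fetch_transactions": lambda state: {**state, "txns": 5},
    "score_footprint":    lambda state: {**state, "co2e": 12.5},
    "notify_user":        lambda state: {**state, "notified": True},
}

def plan(goal, state):
    # Stub planner: returns the next tool name, or None when done.
    if "txns" not in state:
        return "fetch_transactions"
    if goal == "weekly_report" and "co2e" not in state:
        return "score_footprint"
    if not state.get("notified"):
        return "notify_user"
    return None

def run_agent(goal):
    state, trace = {}, []
    while (tool := plan(goal, state)) is not None:
        trace.append(tool)           # auditable execution path
        state = TOOLS[tool](state)
    return trace, state

trace, state = run_agent("weekly_report")
print(trace)
```

The loss of control the post mentions lives in `plan`: once a model replaces the stub, the execution path is no longer knowable in advance, which is why the trace matters.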
### 5. The new problem: trust
In my current system:
- AI generates insights
- but doesn’t act
In an agent system:
- AI can trigger workflows
- influence decisions
- shape outcomes
Which introduces something new:
You now need to trust your architecture — not just your code.
That means:
- validation layers
- auditability
- explainability
Especially for something tied to financial behaviour.
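One way to sketch such a decision boundary: an allow-list plus an audit log sitting between the agent's proposed actions and actual execution. The action names and policy below are hypothetical.

```python
# Decision boundary around an agent: every proposed action passes
# through validation and leaves an audit record before execution.

ALLOWED_ACTIONS = {"generate_insight", "send_alert"}  # "move_money" is not

audit_log = []

def guarded_execute(action, reason):
    record = {"action": action, "reason": reason,
              "allowed": action in ALLOWED_ACTIONS}
    audit_log.append(record)           # auditability: every decision is logged
    if not record["allowed"]:
        return f"blocked: {action}"    # the agent proposes; the boundary decides
    return f"executed: {action}"

print(guarded_execute("send_alert", "fuel spend spiked week-over-week"))
print(guarded_execute("move_money", "auto-offset carbon via donation"))
```

The `reason` field is the explainability hook: every entry in the log says not just what the agent did, but why it claimed to be doing it.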
## The real shift
If I compress everything I learned into one line:
I went from designing workflows → to designing decision boundaries
## Old vs New mental model
| PlanetLedger Today | PlanetLedger (NEXT-style) |
|---|---|
| Event triggers workflows | Event triggers reasoning |
| Pipeline-first | Agent-first |
| RAG context builder | Native data reasoning |
| Deterministic flow | Adaptive execution |
| Insights | Actions |
## What I’d do next
If I were to rebuild PlanetLedger today using these ideas:
- Introduce an agent layer using Gemini-style reasoning
- Let the agent decide when to generate insights vs alerts
- Replace static RAG with dynamic data querying
- Add explainability for every AI-driven decision
- Keep events — but demote them to signals, not drivers
Not because the current system is wrong.
But because the direction is clear:
The future isn’t event-driven systems with AI.
It’s AI systems that use events.
## Final thought
Google Cloud NEXT ‘26 didn’t just introduce new tools.
It exposed a shift:
We’re moving from systems that process data
to systems that interpret and act on it.
PlanetLedger didn’t break after NEXT.
But the way I think about building it did.
And that’s a much bigger change.
If you’ve built something similar — pipelines, workflows, event buses — try this:
Remove the pipeline.
Replace it with an agent.
See what breaks.
That’s probably where the next version lives.


