This is a submission for the Google Cloud NEXT Writing Challenge
Introduction
I've been eyeing DEV.to hackathons for a while now, and when I saw this one come across my timeline, I jumped at it excitedly. I want the badge of honor. The big focus of the Google Cloud NEXT event is what Google is debuting in the A.I. race, especially agents. The future of developing on the cloud is exciting!
Keynote & Demos
In this article, I'll break down the Developer Keynote, the session that stood out most to me, and explain why.
I tuned in for the Developer Keynote (April 23), which walked through building a marathon-planning agent. A Gemini Enterprise agent planned an entire Las Vegas marathon experience, coordinating hotels, routes, schedules, and logistics through multiple connected agents.
Demo 1 (Mofi Rahman) - Building agents with Agent Platform
We can design an agent in Agent Designer. When building an agent, you need three things: instructions, which help the agent understand its role as the marathon planner; skills, which progressively tell the agent what is available to it to complete the work, such as using Google Maps or handling geospatial data (map skills can make your agent an expert at using Google Maps); and tools, which the agent executes to do the actual work.
Recap
- Give the agent a prompt
- Agent loads the skills
- Agent executes the tools to find viable routes.
What this means for developers
For developers, this means they can build specialized AI agents by composing three core components: instructions, skills, and tools, rather than writing complex logic from scratch. The Agent Designer platform handles how the agent loads skills, processes prompts, and decides which tools to call, so developers can focus on defining what the agent should do.
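To make that concrete, here's a minimal sketch of the same instructions-plus-tools pattern using Google's open-source Agent Development Kit (ADK). The keynote built this in the Agent Designer UI rather than in code, and the marathon-specific tool and instruction text below are my own illustration, not the demo's actual setup.

```python
# pip install google-adk
from google.adk.agents import Agent

# Hypothetical tool; in the demo this role was played by a Google Maps skill.
def find_routes(start: str, distance_km: float) -> list[str]:
    """Return candidate marathon routes starting near `start`."""
    return [
        f"{start} loop ({distance_km} km, flat)",
        f"{start} out-and-back ({distance_km} km, hilly)",
    ]

# The core components from the demo: instructions plus tools, on a Gemini model.
marathon_planner = Agent(
    name="marathon_planner",
    model="gemini-2.0-flash",
    instruction=(
        "You are a marathon planner for Las Vegas. "
        "Use your tools to find viable routes and explain trade-offs."
    ),
    tools=[find_routes],
)
```

Given a prompt like "plan a route near the Strip", the model reads the instructions and decides when to call `find_routes`, which mirrors the prompt → skills → tools flow in the recap above.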
Demo 2 (Ivan Nardini & Casey West) - Creating multi-agent systems
We have a planner agent that needs to talk to two other agents: a simulator agent and an evaluator agent. The planner agent creates potential routes for the race. The evaluator agent judges routes based on specific criteria that we pick. The simulator agent takes approved routes from the planner, runs them, and shows the results. To facilitate communication between all of these agents, we need two components: the A2A protocol and an agent registry.
A2A Protocol: Google introduced A2A (Agent2Agent), an open-source protocol that eliminates custom API code for connecting agents. Instead, each agent shares a card that represents its capabilities.
Agent Registry: This is a directory where agents get registered; it resolves each agent's identity and maps its specific skill set across the agent network.
What this means for developers
For developers, this means they can build multi-agent systems where agents communicate and collaborate without writing custom API integration code. The A2A protocol lets agents share capability cards with each other, while the agent registry acts as a directory that tracks every agent's identity and skillset across the network, so the planner, simulator, and evaluator agents can find and talk to each other automatically.
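To give a feel for one of those capability cards, here's a simplified agent card for the evaluator agent, expressed as a Python dict. The field names follow the public A2A spec (simplified), but the values, skill, and endpoint URL are my own illustration.

```python
# A simplified A2A agent card. A registry indexes cards like this so
# other agents can discover who offers which skills. Values are illustrative.
evaluator_card = {
    "name": "route_evaluator",
    "description": "Judges candidate marathon routes against chosen criteria.",
    "url": "https://agents.example.com/evaluator",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "evaluate_route",
            "name": "Evaluate route",
            "description": "Scores a route on safety, elevation, and logistics.",
            "tags": ["evaluation", "routes"],
        }
    ],
}
```

The planner never needs evaluator-specific API code: it asks the registry for an agent whose skills match what it needs, then talks to that agent over A2A.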
Demo 3 (Lucia Subatin & Jack Wotherspoon) - Enhancing agents with memory
When using Gemini to do my job at the office, I usually copy files one by one and shove them into a single prompt, in a bid to give Gemini as much context as possible. For our planner agent, we improve on this by giving it the ability to manage sessions, add memory, and access other data sources. In this demo, we saw how a skill can make the agent an expert at using AlloyDB and vector functions. What this demo showed is that by managing context efficiently and adding memory to the agent, the simulator agent adjusts its route based on the memory and data it now has. The memory comes from the agent using AlloyDB together with vector functions to store and recall context from previous sessions, so AlloyDB behaves like a memory bank.
What this means for developers
For developers, this means they can replace the manual process of dumping files into a prompt by giving agents structured memory and database access instead. By connecting the agent to tools like AlloyDB and vector functions, the agent automatically manages its own context across sessions, so it can make smarter, more informed decisions without the developer having to feed it everything upfront.
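As a rough sketch of the memory-bank idea, here's what storing and recalling context could look like with AlloyDB's pgvector support. The table schema, connection string, and embed() helper are my own stand-ins, not the demo's actual setup; embed() would wrap whatever embedding model you use.

```python
# pip install psycopg
import psycopg

def embed(text: str) -> list[float]:
    """Stand-in for an embedding model (e.g. a Vertex AI text-embedding model)."""
    raise NotImplementedError("plug in your embedding model here")

conn = psycopg.connect("host=... dbname=agent_memory")  # hypothetical DSN
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")   # pgvector, available on AlloyDB
conn.execute("""
    CREATE TABLE IF NOT EXISTS agent_memory (
        id BIGSERIAL PRIMARY KEY,
        session_id TEXT,
        content TEXT,
        embedding vector(768)
    )
""")

# Store a piece of context from the current session.
note = "Runner prefers flat routes and an early start."
conn.execute(
    "INSERT INTO agent_memory (session_id, content, embedding) VALUES (%s, %s, %s::vector)",
    ("session-42", note, str(embed(note))),  # str() gives pgvector's '[...]' text form
)

# Recall: nearest neighbors by cosine distance, pulled into the agent's context.
rows = conn.execute(
    "SELECT content FROM agent_memory ORDER BY embedding <=> %s::vector LIMIT 3",
    (str(embed("route preferences")),),
).fetchall()
memories = [content for (content,) in rows]
conn.commit()
```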
Demo 4 (Megan O'Keefe) - Debugging agents at scale
We all use console.log("here") to debug a point in our codebase, but with AI agents we can't do that. Google provides tools for us to debug: agent observability and the Cloud Assist investigation agent. Using these tools, we find and investigate the root cause of the error, then deploy a proactive fix. In this demo, we hit an error and the agent suggested a fix.
Error: The simulator agent is failing to call the Gemini model API due to a request error. Something is wrong with the payloads we are sending to the model.
Cloud Assist prompt: Resume the Gemini Cloud Assist investigation about the simulator agent. What is wrong with agent.py?
Fix: The agent suggests that we add a token threshold parameter to our event compaction config so that we periodically compress context with each invocation. This is because we have a 1 million token context limit.
From this demo, I learned that Google Cloud provides a full observability suite for agents, including Cloud Assist and a coding agent.
What this means for developers
For developers, this means they have a proper debugging suite for AI agents rather than guessing what went wrong. Google Cloud's observability tools let developers investigate agent failures, understand why an agent misbehaved, and get suggested fixes, so in this case the tool spotted that the simulator agent was hitting Gemini's 1 million token context limit and recommended compressing context more frequently to stay within that boundary.
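The demo didn't show the exact config change, so here's a purely hypothetical sketch of what a token-threshold compaction setting might look like. The class and field names are invented for illustration; they are not a documented Google API.

```python
from dataclasses import dataclass

@dataclass
class EventCompactionConfig:
    """Hypothetical config: compact older events past a token threshold."""
    token_threshold: int          # compact once accumulated context exceeds this
    keep_last_n_events: int = 20  # always keep the most recent events verbatim

compaction = EventCompactionConfig(token_threshold=200_000)

def maybe_compact(events: list[str], count_tokens) -> list[str]:
    """Summarize older events whenever the threshold is crossed, keeping
    each invocation comfortably under the model's 1M-token context limit."""
    if sum(count_tokens(e) for e in events) <= compaction.token_threshold:
        return events
    old = events[:-compaction.keep_last_n_events]
    recent = events[-compaction.keep_last_n_events:]
    summary = "SUMMARY: " + " | ".join(e[:40] for e in old)  # stand-in for an LLM-written summary
    return [summary] + recent
```

The point is the mechanism, not the names: compress context periodically instead of letting every event accumulate until the model rejects the payload.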
Conclusion
Google Cloud NEXT made one thing clear: building AI agents is becoming a first-class developer experience. You no longer have to stitch together APIs, manually manage context, or guess why your agent broke. Google is giving developers a full stack for agent development, from designing and connecting agents, to giving them memory, to debugging them when things go wrong. The A2A protocol, agent registry, AlloyDB memory, and Cloud observability tools all point in the same direction: agents are becoming software components that developers can build, scale, and maintain just like any other part of their stack. The marathon planner was just a demo, but the underlying tools are real and they are ready to be used.