I'm a final-year computer science student. I spend most of my days training deep learning models on image datasets, debugging tensor shape errors at 2am, and convincing myself that 67% accuracy is "a solid baseline."
I do not, normally, build AI agents.
But when Google Cloud NEXT '26 dropped last week and I saw the announcements around ADK 2.0 and the new Gemini Enterprise Agent Platform, I got genuinely curious. Not marketing-brochure curious — actually curious. Because the thing they kept saying was: "You can now build multi-step autonomous agents that coordinate with each other."
That sounded either really powerful or really overhyped. I wanted to find out which.
So I spent a day building something with it. This is what actually happened.
What Even Is ADK?
Before I get into the friction, a quick explainer for anyone who hasn't seen the announcements.
ADK — Agent Development Kit — is Google's open-source Python framework for building AI agents. Not chatbots. Agents — programs that take a goal, break it into steps, use tools, and figure out how to get things done autonomously.
The ADK 2.0 alpha (released March 2026) brought in graph-based workflows, collaborative multi-agent support, and native Vertex AI integration. The stable version (1.x) already supports multi-agent coordination and tool use. That's what I ended up using, and I'll explain why in a moment.
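If "takes a goal, breaks it into steps, uses tools" sounds abstract, here's the control flow stripped down to a toy in plain Python. No ADK, no LLM, everything is a stand-in: a real framework replaces `plan_next_step` with a model call and wires the tools in for you.

```python
# Toy agent loop: a "model" picks the next action, tools execute it,
# and results feed back into the next decision.

def search_tool(query: str) -> str:
    """Stand-in for a real web search tool."""
    return f"raw findings about {query!r}"

def plan_next_step(goal: str, history: list) -> dict:
    """Stand-in for the model deciding what to do next."""
    if not history:
        return {"action": "search", "arg": goal}
    return {"action": "finish", "arg": f"summary of {history[-1]}"}

def run_agent(goal: str) -> str:
    history = []
    while True:
        step = plan_next_step(goal, history)
        if step["action"] == "finish":
            return step["arg"]
        history.append(search_tool(step["arg"]))

print(run_agent("ADK 2.0"))
```

That loop is the whole trick; frameworks differ in how much of it they manage for you.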
What I Decided to Build
I wanted to build a Research Assistant Agent — you give it a topic, it searches the web, structures the findings, and suggests what to explore next.
The twist: instead of one agent doing everything, I'd build it as a multi-agent pipeline with specialist sub-agents, the way ADK is actually designed to be used:
- web_searcher → hits Google Search, returns raw findings
- analyst_summarizer → structures those findings for developers
- research_coordinator → orchestrates both, delivers the final answer
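In plain-Python terms (a toy sketch, not ADK code), the handoff I was aiming for looks like this, with each "agent" reduced to a function:

```python
# Toy version of the three-agent pipeline. ADK's coordinator does this
# handoff with real LLM calls and tool use; the data flow is the same.

def web_searcher(topic: str) -> str:
    return f"raw findings on {topic}"

def analyst_summarizer(findings: str) -> str:
    return f"## Key Findings\n- {findings}"

def research_coordinator(topic: str) -> str:
    # Step 1: delegate the search, Step 2: delegate the structuring.
    return analyst_summarizer(web_searcher(topic))

print(research_coordinator("AI agents"))
```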
Simple enough concept. Let's talk about what happened when I actually tried to set it up.
The Setup: Where Things Got Interesting
Step 1 — Getting the API Key
Go to aistudio.google.com, sign in, click "Get API Key." This part was genuinely smooth. Took maybe 3 minutes. Free tier gives you enough to build and experiment.
Step 2 — Installing ADK
pip install google-adk
Simple. Worked on the first try. The install is clean and the dependencies are sensible.
Step 3 — Creating the Project Structure
adk create research_agent
This gave me a folder with agent.py, .env, and __init__.py already stubbed out. That's a nice touch — you're not hunting for the right structure.
research_agent/
    agent.py
    .env
    __init__.py
Step 4 — The Part Where I Hit a Wall
I was excited by ADK 2.0 after reading about the new workflow engine, so I tried installing it first:
pip install google-adk --pre
And here's the honest thing nobody's blog post tells you: ADK 2.0 is a proper alpha. The docs say it. The PyPI page says it. But you don't fully feel it until you're staring at import errors because the API surface has breaking changes from 1.x.
I spent about 40 minutes trying to make 2.0 work before I made the practical call: the stable 1.31.x release already supports multi-agent orchestration. The thing I wanted to build was fully doable without the alpha. So I went back to stable.
pip install google-adk # without --pre
Lesson learned: ADK 2.0 is genuinely exciting for what it brings (graph-based workflows, better debugging tooling, stateful multi-step support), but right now it's for people who want to be on the bleeding edge and don't mind patching things. If you want to build and ship something this week, use 1.x.
Building the Agent
Here's the full code. I'll walk you through each piece.
The Project Setup
Add your API key to .env:
GOOGLE_API_KEY=your_key_here
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_GENAI_USE_VERTEXAI=FALSE means you're using AI Studio (free), not Vertex AI (Google Cloud). Keep it false unless you've set up a Cloud project.
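A quick way to sanity-check the config before launching anything. This is a throwaway helper of my own, not part of ADK, and it assumes the key lives in `GOOGLE_API_KEY`, which is what current ADK quickstarts use:

```python
import os

def check_env() -> list:
    """Return a list of problems with the Gemini env config, empty if OK."""
    problems = []
    if not os.environ.get("GOOGLE_API_KEY"):
        problems.append("GOOGLE_API_KEY is not set")
    if os.environ.get("GOOGLE_GENAI_USE_VERTEXAI", "FALSE").upper() == "TRUE":
        problems.append("Vertex AI mode is on -- needs a Cloud project")
    return problems

for p in check_env() or ["env looks good"]:
    print(p)
```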
The Sub-Agents
from google.adk.agents import Agent
from google.adk.tools import google_search
# Sub-Agent 1: Does the actual web searching
searcher_agent = Agent(
    name="web_searcher",
    model="gemini-2.0-flash",
    description="Searches the web for up-to-date information on a given topic.",
    instruction="""
    You are a web research specialist. Your only job is to search for
    accurate, recent information on the topic given to you.
    Always use the google_search tool — never answer from memory alone.
    Prioritize sources from 2025-2026.
    Return a clear summary of what you found, including source context.
    """,
    tools=[google_search],
)
# Sub-Agent 2: Turns raw findings into structured output
summarizer_agent = Agent(
    name="analyst_summarizer",
    model="gemini-2.0-flash",
    description="Structures raw research into clear developer-friendly summaries.",
    instruction="""
    You are a technical writer for a developer audience.
    Structure your response as:

    ## Key Findings
    [3-4 bullet points of the most important facts]

    ## The Most Surprising Thing
    [One insight that might be unexpected]

    ## What to Watch Out For
    [Caveats, limitations, or gotchas]

    ## 3 Follow-Up Questions
    [Specific questions a developer might want to explore next]

    Keep the tone honest, direct, and useful. No marketing fluff.
    """,
)
One thing I noticed: the description field matters more than I expected. The coordinator uses it to decide which sub-agent to delegate to. If your description is vague, the orchestration gets confused. Lesson from 30 minutes of head-scratching.
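To build intuition for why that matters, here's a toy router that picks a sub-agent by crude word overlap between the task and each description. The real coordinator uses the LLM's judgment, not word matching, but the failure mode is the same: a vague description gives it nothing to go on.

```python
# Toy delegation by description. All names here mirror the agents above;
# the scoring is a deliberate oversimplification of what the LLM does.

AGENT_DESCRIPTIONS = {
    "web_searcher": "searches the web for up-to-date information",
    "analyst_summarizer": "structures raw research into developer summaries",
}

def pick_agent(task: str) -> str:
    def overlap(desc: str) -> int:
        return len(set(task.lower().split()) & set(desc.split()))
    return max(AGENT_DESCRIPTIONS, key=lambda name: overlap(AGENT_DESCRIPTIONS[name]))

print(pick_agent("search the web for recent ADK news"))
```

Swap either description for something generic like "helps with research" and the routing falls apart, which is roughly what I watched happen in the real thing.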
The Coordinator
root_agent = Agent(
    name="research_coordinator",
    model="gemini-2.0-flash",
    description="Coordinates multi-step research by delegating to specialist sub-agents.",
    instruction="""
    You coordinate a research pipeline:
    Step 1 — Delegate to web_searcher to find current information.
    Step 2 — Pass those findings to analyst_summarizer to structure them.
    Step 3 — Present the final structured output to the user.
    Always complete every step before responding. Do not skip the search step.
    """,
    sub_agents=[searcher_agent, summarizer_agent],
)
That sub_agents=[...] parameter is doing the heavy lifting here. You're giving the coordinator a roster of sub-agents it can delegate to. It decides when to call which one based on the task and their descriptions.
Running It
adk web research_agent/
Open http://localhost:8000 and you get a chat interface with full event inspection — you can see every step the agent takes, every tool call, every sub-agent handoff. For a framework aimed at developers, this is actually thoughtful UX.
What I Actually Asked It
Test 1: Something I Already Knew the Answer To
"What is Google ADK and what was announced at Cloud NEXT '26?"
The agent searched, found the NEXT '26 announcements, and structured them cleanly. The output was accurate. It correctly identified ADK 2.0's graph-based workflows and the Gemini Enterprise Agent Platform rebrand. It cited things from 2026, not 2023.
Test 2: Something Niche
"What's the current state of AI agents in manufacturing quality control?"
This is where it got more interesting. The search results were mixed — some solid, some generic. The summarizer was honest about the limitations of what it found. It flagged one follow-up question I hadn't considered: whether outcome-based pricing (one of NEXT '26's announcements) changes the economics of running vision AI at manufacturing scale. I hadn't thought about that angle.
Test 3: Pushing It
"What's the A2A protocol and why does it matter for a student building their first AI project?"
Best output of the three. The framing of "for a student" changed the register of the summary — it explained the A2A protocol in practical terms (agents from different companies can talk to each other without custom integration code) rather than enterprise-speak. The follow-up questions were specific and genuinely useful.
What Genuinely Impressed Me
The multi-agent handoff is seamless. I expected some clunkiness at the boundary between searcher and summarizer. There wasn't any. The coordinator passes context cleanly, and the summarizer clearly received structured findings rather than raw text. I don't know exactly what's happening under the hood, but the output quality was noticeably better than a single-agent approach I tested alongside it.
The web UI for debugging. Being able to see the full event trace — which agent ran, what tool it called, what it returned — is not a small thing. When something goes wrong (and it will), you can actually see where. This is the kind of tooling that makes the difference between framework adoption and abandonment.
google_search as a first-class tool. You import it, you add it to tools=[], and it works. No API key management, no rate limit configuration to figure out upfront. For getting started, that's exactly right.
What I'd Push Back On
ADK 2.0 alpha is not ready for a tutorial. I understand why Google announced it at NEXT '26 — the graph-based workflow engine is a genuine step forward in how you structure complex agents. But the breaking changes from 1.x, combined with sparse alpha docs, mean the announcement is ahead of the developer experience right now. If your use case needs stateful multi-step workflows or the new debugging tooling, keep watching it. If you need to build something today, use 1.x.
The instruction prompt is load-bearing. The quality of your agent's output is almost entirely determined by how clearly you write the instruction field. ADK doesn't abstract that away — it amplifies it. A vague instruction gives you a vague agent. I rewrote mine three times before the summarizer stopped adding unnecessary corporate-sounding hedges to its output. That's not a framework problem, but it's worth knowing going in.
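One habit that helped me here: treat the instruction like a contract and check it in code. A tiny, hypothetical helper of my own (nothing ADK-specific) that fails fast if an instruction forgot the output sections I expect:

```python
# Minimal "instruction linter": report which required output sections
# are missing from an agent's instruction string.

REQUIRED_SECTIONS = ["## Key Findings", "## What to Watch Out For"]

def check_instruction(instruction: str) -> list:
    """Return the required sections missing from an instruction."""
    return [s for s in REQUIRED_SECTIONS if s not in instruction]

vague = "Summarize the findings clearly."
specific = """
Structure your response as:
## Key Findings
## What to Watch Out For
"""

print(check_instruction(vague))     # both sections missing
print(check_instruction(specific))  # []
```

It won't catch a bad prompt, but it catches the "I deleted a section while rewriting" mistake I made twice.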
Memory across sessions is still your problem. Each conversation starts fresh. If you want stateful agents that remember context across sessions, you need to wire that up yourself. ADK 2.0's improvements here are in the roadmap, but they're not fully baked yet.
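Until that lands, a file-backed history is easy to bolt on yourself. A rough sketch in plain Python (the helper names are mine, not an ADK API): load the history at startup, prepend it to the agent's context, and append a turn after each exchange.

```python
import json
from pathlib import Path

HISTORY_FILE = Path("session_history.json")

def load_history() -> list:
    """Read prior turns back in, or start empty."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def append_turn(role: str, text: str) -> None:
    """Persist one conversation turn to disk."""
    history = load_history()
    history.append({"role": role, "text": text})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))

# Demo: start fresh, record two turns, read them back.
HISTORY_FILE.unlink(missing_ok=True)
append_turn("user", "What is ADK?")
append_turn("agent", "An agent framework from Google.")
print(len(load_history()))  # prints 2
```

It's crude (no truncation, no summarization of old turns), but it's enough to make a demo feel stateful across restarts.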
My Honest Verdict
ADK is the right direction. The multi-agent pattern it encourages — specialists coordinated by a root agent — produces noticeably better results than stuffing everything into one massive system prompt. The tooling is clean, the Google Search integration works, and the web UI for inspection is genuinely developer-friendly.
For a first-year student or someone new to agents: start here. You'll be running something real in under two hours.
For someone wanting to use the ADK 2.0 graph workflows specifically: give it another month or two. The alpha is progressing fast, but it's not ready to be the foundation of a tutorial you'll publish and stand behind.
The most interesting thing NEXT '26 signalled to me isn't any single announcement — it's the pattern. Google is betting that the future of cloud AI isn't individual models you call via API, but coordinated networks of specialist agents running on managed infrastructure. ADK is their framework for that future. Whether you agree with the bet or not, it's worth understanding how it works.
Get the Code
The full project is on GitHub: https://github.com/SimranShaikh20/Research-Assistant-Agent
Research-Assistant-Agent/
├── Research-Assistant-Agent/
│ ├── agent.py ← all the agent logic
│ └── __init__.py
├── .env.example ← copy this to .env, add your key
├── .gitignore
└── README.md
To run it yourself:
git clone https://github.com/SimranShaikh20/Research-Assistant-Agent
cd Research-Assistant-Agent
python -m venv venv
venv\Scripts\activate # Windows
# source venv/bin/activate # Mac/Linux
pip install google-adk
# copy .env.example to .env, add your Gemini API key
adk web Research-Assistant-Agent/
Resources
- ADK Documentation
- ADK 2.0 Alpha docs
- Google AI Studio — free API key
- ADK Samples on GitHub
- Google Cloud NEXT '26 Announcements
Built for the Google Cloud NEXT '26 Writing Challenge on DEV Community. I'm a final-year BE Computer Science student at The Maharaja Sayajirao University of Baroda, where my major project is an AI-based defect detection system — so building agents like this is a bit of a departure from my usual ResNet-50 territory. Turned out to be worth the detour.