In the past 4-5 months, two powerful AI agent development frameworks have been released:
- Google Agent Development Kit (ADK)
- AWS Strands Agents
You can view the other posts in the Google ADK series above. In this post, we'll dive into the Google Agent Development Kit (ADK) and show how to build a sequential workflow of three agents using ADK, Gemini 2.5, FastAPI, and a Streamlit interface.
Table of Contents
- What is Google Agent Development Kit?
- What are Multi-Agent, Workflow, and Sequential Agents?
- Agent App
- Conclusion
- References
What is Google Agent Development Kit?
- Agent Development Kit (ADK) is an open-source framework for developing AI agents that can run anywhere:
- VSCode, Terminal
- Docker Container
- Google Cloud Run
- Kubernetes
What are Multi-Agent, Workflow, and Sequential Agents?
The SequentialAgent is a workflow agent that executes its sub-agents in the order they are specified in the list. Use the SequentialAgent when you want execution to occur in a fixed, strict order.
When the SequentialAgent's run_async method is called, it performs the following actions:
- Iteration: It iterates through the sub-agents list in the order they were provided.
- Sub-Agent Execution: For each sub-agent in the list, it calls the sub-agent's run_async method.
Details: https://google.github.io/adk-docs/agents/workflow-agents/sequential-agents/
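The control flow described above is simple enough to sketch without the framework. Here is an illustrative, framework-free simulation of the sequential pattern (the function names are made up for this example; the real SequentialAgent additionally manages sessions, events, and shared state):

```python
# Framework-free sketch of the sequential pattern: run each sub-agent
# in list order, with all of them reading/writing a shared state dict.
from typing import Callable

def run_sequential(sub_agents: list[Callable[[dict], None]], state: dict) -> dict:
    # Iterate through sub-agents in the order they were provided
    for agent in sub_agents:
        agent(state)
    return state

# Toy sub-agents mirroring the writer -> reviewer roles used later in this post
def writer(state: dict) -> None:
    state["generated_code"] = "print('hello')"

def reviewer(state: dict) -> None:
    state["review_comments"] = f"Reviewed: {state['generated_code']}"

result = run_sequential([writer, reviewer], {})
print(result["review_comments"])  # → Reviewed: print('hello')
```

Because the state dict is passed through in order, each later agent can see everything the earlier agents produced; that is exactly the property the ADK pipeline below relies on.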
Agent App
Sample project on GitHub:
Installing Dependencies & Accessing the Gemini Model
- Go to: https://aistudio.google.com/
- Get an API key to access the Gemini API
- Add a .env file with your Gemini API key:
# .env
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=PASTE_YOUR_ACTUAL_API_KEY_HERE
- Install the requirements:
fastapi
uvicorn
google-adk
google-generativeai
Frontend - Streamlit UI
# app.py
import streamlit as st
import requests

st.set_page_config(page_title="Agent Chat", layout="centered")

if "messages" not in st.session_state:
    st.session_state.messages = []

st.title("Multi-Agent Sequential")

# Replay the chat history on each rerun
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

user_query = st.chat_input("Ask to search for real-time data or anything...")

# Send the query to the backend and display user + assistant messages
if user_query:
    st.chat_message("user").markdown(user_query)
    st.session_state.messages.append({"role": "user", "content": user_query})
    try:
        response = requests.post(
            "http://localhost:8000/ask",
            json={"query": user_query},
        )
        response.raise_for_status()
        # The response is not a single string but a list, because each
        # agent in the pipeline produces its own reply.
        all_replies = response.json().get("responses", ["No response."])
        for reply in all_replies:
            st.chat_message("assistant").markdown(reply)
            st.session_state.messages.append({"role": "assistant", "content": reply})
    except Exception as e:
        error_msg = f"Error: {str(e)}"
        st.chat_message("assistant").markdown(error_msg)
        st.session_state.messages.append({"role": "assistant", "content": error_msg})
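Note the contract the frontend relies on: the backend wraps all sub-agent replies in a "responses" list. A tiny helper capturing that contract (illustrative only; extract_replies is my name, not part of the app):

```python
def extract_replies(payload: dict) -> list[str]:
    # The /ask endpoint returns {"responses": [...]}, one entry per
    # sub-agent; fall back to a placeholder when nothing came back.
    return payload.get("responses") or ["No response."]

print(extract_replies({"responses": ["draft code", "review notes"]}))  # → ['draft code', 'review notes']
print(extract_replies({}))                                             # → ['No response.']
```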
Backend - Agent
# agent.py
from fastapi import FastAPI
from pydantic import BaseModel
from dotenv import load_dotenv
from google.genai import types
from google.adk.agents.llm_agent import LlmAgent
from google.adk.agents.sequential_agent import SequentialAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.memory import InMemoryMemoryService

load_dotenv()

MODEL = "gemini-2.5-flash"
APP_NAME = "search_memory_app"
USER_ID = "user123"
SESSION_ID = "session123"
# Code Writer Agent: takes the initial specification (from the user query) and writes code.
code_writer_agent = LlmAgent(
    name="CodeWriterAgent",
    model=MODEL,
    instruction="""You are a Python Code Generator.
Based *only* on the user's request, write Python code that fulfills the requirement.
Output *only* the complete Python code block, enclosed in triple backticks (```python ... ```).
Do not add any other text before or after the code block.
""",
    description="Writes initial Python code based on a specification.",
    output_key="generated_code",  # Stores output in state['generated_code']
)
# Code Reviewer Agent: reads the code generated by the previous agent from state and provides feedback.
code_reviewer_agent = LlmAgent(
    name="CodeReviewerAgent",
    model=MODEL,
    instruction="""You are an expert Python Code Reviewer.
Your task is to provide constructive feedback on the provided code.

**Code to Review:**
```python
{generated_code}
```

**Review Criteria:**
1. **Correctness:** Does the code work as intended? Are there logic errors?
2. **Readability:** Is the code clear and easy to understand? Follows PEP 8 style guidelines?
3. **Efficiency:** Is the code reasonably efficient? Any obvious performance bottlenecks?
4. **Edge Cases:** Does the code handle potential edge cases or invalid inputs gracefully?
5. **Best Practices:** Does the code follow common Python best practices?

**Output:**
Provide your feedback as a concise, bulleted list. Focus on the most important points for improvement.
If the code is excellent and requires no changes, simply state: "No major issues found."
Output *only* the review comments or the "No major issues" statement.
""",
    description="Reviews code and provides feedback.",
    output_key="review_comments",  # Stores output in state['review_comments']
)
# Code Refactorer Agent: reads the original code and the review comments from state and refactors the code.
code_refactorer_agent = LlmAgent(
    name="CodeRefactorerAgent",
    model=MODEL,
    instruction="""You are a Python Code Refactoring AI.
Your goal is to improve the given Python code based on the provided review comments.

**Original Code:**
```python
{generated_code}
```

**Review Comments:**
{review_comments}

**Task:**
Carefully apply the suggestions from the review comments to refactor the original code.
If the review comments state "No major issues found," return the original code unchanged.
Ensure the final code is complete, functional, and includes necessary imports and docstrings.

**Output:**
Output *only* the final, refactored Python code block, enclosed in triple backticks (```python ... ```).
Do not add any other text before or after the code block.
""",
    description="Refactors code based on review comments.",
    output_key="refactored_code",  # Stores output in state['refactored_code']
)
# This agent orchestrates the pipeline by running the sub_agents in order.
code_pipeline_agent = SequentialAgent(
    name="CodePipelineAgent",
    sub_agents=[code_writer_agent, code_reviewer_agent, code_refactorer_agent],
    description="Executes a sequence of code writing, reviewing, and refactoring.",
    # The agents will run in the order provided: Writer -> Reviewer -> Refactorer
)

root_agent = code_pipeline_agent
session_service = InMemorySessionService()
memory_service = InMemoryMemoryService()

app = FastAPI()

class QueryRequest(BaseModel):
    query: str

@app.on_event("startup")
async def startup_event():
    # Create the session once and build the runner that drives the pipeline
    await session_service.create_session(
        app_name=APP_NAME,
        user_id=USER_ID,
        session_id=SESSION_ID,
    )
    global runner
    runner = Runner(
        agent=root_agent,
        app_name=APP_NAME,
        session_service=session_service,
        memory_service=memory_service,
    )
@app.post("/ask")
async def ask_agent(req: QueryRequest):
    content = types.Content(role="user", parts=[types.Part(text=req.query)])
    responses = []
    # run_async streams events; collect each sub-agent's final response in order
    async for event in runner.run_async(user_id=USER_ID, session_id=SESSION_ID, new_message=content):
        if event.is_final_response() and event.content and event.content.parts:
            responses.append(event.content.parts[0].text)
    # Save the session to long-term memory after each turn
    # (the session and memory service methods are async, like create_session above)
    session = await session_service.get_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID)
    await memory_service.add_session_to_memory(session)
    return {"responses": responses or ["No response received."]}
if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
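It is worth highlighting how data flows through this pipeline: each agent's output_key stores its reply in session state, and the {placeholders} in the next agent's instruction are filled from that state before the LLM is called. ADK does this internally; the following is a minimal simulation of the substitution step (render_instruction is an illustrative stand-in, not an ADK API):

```python
# Simulate ADK's state-driven prompt templating: an agent's output_key
# writes into session state, and {key} placeholders in a later agent's
# instruction are read back from that state.
def render_instruction(template: str, state: dict) -> str:
    return template.format(**state)

state = {}
# What CodeWriterAgent's output_key="generated_code" would store:
state["generated_code"] = "def add(a, b):\n    return a + b"

# What CodeReviewerAgent's instruction looks like after substitution:
reviewer_prompt = render_instruction("Review this code:\n{generated_code}", state)
print(reviewer_prompt)
```

This is why the reviewer and refactorer instructions above can reference {generated_code} and {review_comments} without any explicit wiring between the agents.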
Run & Demo
Run backend (agent.py):
$ uvicorn agent:app --host 0.0.0.0 --port 8000
INFO: Started server process [5037]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Run frontend (app.py):
$ python3 -m streamlit run app.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://172.28.246.163:8501
Test Prompt:
I want to create a space shuttle game where meteors fall from the top of
the screen. The player controls the shuttle using the arrow keys and can
shoot at the meteors by pressing the spacebar.
The Code Writer agent generated the code, and the Reviewer agent gave the following feedback:
- Game Over Event Handling: The event loop for quitting (for event in
pygame.event.get():) inside the if game_over: block is problematic.
pygame.event.get() consumes all events in the queue. Calling it twice
(once in the main loop, once in the game_over block) means the inner loop
might miss events already processed by the outer loop or only catch events
that occurred after the outer loop for that frame. It's better to handle
all events in the main loop and apply conditions based on game_over. For
example, change if event.key == pygame.K_q: within the inner loop to elif
event.key == pygame.K_q and game_over: in the main event loop.
- Magic Numbers/Constants: Many literal values (e.g., 50 for shuttle size,
7 for speed, 10 for score, 1000 for meteor spawn time) are used directly
in the code. Defining these as named constants at the top would improve
readability, make them easier to modify, and reduce potential errors.
- Player Vertical Movement Limit: The SCREEN_HEIGHT // 2 boundary for the
player's vertical movement is an arbitrary "magic number". If this is a
design choice, consider making it a named constant (e.g.,
PLAYER_TOP_BOUND) for clarity.
- Visuals: While functional for a prototype, the game uses simple colored
squares. In a more complete game, consider loading actual image assets for
the shuttle, meteors, and bullets for a better visual experience.
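The reviewer's first point, that pygame.event.get() drains the whole queue so a second call in the same frame sees nothing, can be illustrated without pygame (the deque and get_events helper below are stand-ins for the real event queue):

```python
# Why calling pygame.event.get() twice per frame loses events:
# the call returns *and removes* everything queued, so a second
# drain in the same frame comes back empty.
from collections import deque

events = deque(["KEYDOWN_LEFT", "KEYDOWN_Q"])

def get_events(queue: deque) -> list[str]:
    # Stand-in for pygame.event.get(): drain the queue completely
    drained = list(queue)
    queue.clear()
    return drained

first_pass = get_events(events)   # main loop drains everything
second_pass = get_events(events)  # inner game-over loop finds nothing left
print(first_pass)   # → ['KEYDOWN_LEFT', 'KEYDOWN_Q']
print(second_pass)  # → []
```

Hence the reviewer's suggestion: keep a single event loop and branch on game_over inside it, rather than draining the queue a second time in the game-over block.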
The Code Refactorer agent then applies the fixes: generated code on GitHub
Demo GIF: GIF on GitHub
Conclusion
In this post, we covered:
- how to access Google Gemini 2.5,
- how to implement a multi-agent sequential workflow.
If you found the tutorial interesting, I’d love to hear your thoughts in the blog post comments. Feel free to share your reactions or leave a comment. I truly value your input and engagement 😉
For other posts 👉 https://dev.to/omerberatsezer 🧐
Follow for Tips, Tutorials, Hands-On Labs:
References
- https://google.github.io/adk-docs/
- https://google.github.io/adk-docs/agents/workflow-agents/sequential-agents
- https://github.com/omerbsezer/Fast-LLM-Agent-MCP/
Your comments 🤔
- Which tools are you using to develop AI agents (e.g. Google ADK, AWS Strands, CrewAI, LangChain, etc.)?
- What do you think about Google ADK?
- Did you implement a multi-agent workflow before?
=> Welcome to any comments below related to AI and agents for brainstorming 🤯
Top comments (3)
I'm evaluating several AI multi-agent frameworks, including CrewAI, AWS Strands, and Google ADK, by applying them to a range of use cases: multi-agent collaboration, MCP tool integration, support for different language models, and workflow orchestration. Of these, Google ADK and AWS Strands stood out for their ease of implementation. There are no explicit task implementations in Google ADK and AWS Strands like in CrewAI. Both offer similar feature sets and integrate smoothly with open-source tools such as LiteLLM, MCP components like StdioServerParameters, and session memory management (short and long term).
Absolute masterclass on building multi-agent workflows with Google ADK.
I agree with you. If you're exploring multi-agent systems, definitely check out AWS Strands Agents (Swarm) and CrewAI as well. They also bring some powerful tools to the table. 3 popular multi-agent frameworks: Google ADK, AWS Strands, CrewAI.