Taking the Next Step
After successfully creating my first basic agent, I was ready to explore more advanced capabilities. The next logical step was to build an agent that could actually do things - not just answer questions, but perform actions using tools. This led me to create a multi-tool agent that can retrieve weather information and get the current time for different cities.
What is a Multi-Tool Agent?
A multi-tool agent is an agent that has access to multiple functions (tools) that it can call to perform specific tasks. Instead of relying solely on the LLM's knowledge, the agent can execute code, make API calls, or perform calculations to provide accurate, real-time information.
Building the Weather & Time Agent
I followed the official ADK quickstart guide for multi-tool agents and created a weather and time agent. Here's how I built it:
Step 1: Project Structure
I created a new directory structure for my multi-tool agent:
```
multi_tool_agent/
├── __init__.py
├── agent.py
└── .env
```
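For completeness, here is roughly what the two supporting files contain. The __init__.py is just a one-line re-export so ADK can discover the agent module, and .env holds the credentials. The variable names below follow the quickstart's Google AI Studio setup, so treat them as a template and double-check them against your own configuration:

```python
# multi_tool_agent/__init__.py
# A one-line re-export so ADK can find the agent module.
from . import agent
```

```
# multi_tool_agent/.env  (placeholder values, not real credentials)
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=paste-your-api-key-here
```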
Step 2: Creating the Tools
The agent needs two tools: one for weather and one for time. Let's look at the weather function:
```python
def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city for which to retrieve the weather report.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        return {
            "status": "success",
            "report": (
                "The weather in New York is sunny with a temperature of 25 degrees"
                " Celsius (77 degrees Fahrenheit)."
            ),
        }
    else:
        return {
            "status": "error",
            "error_message": f"Weather information for '{city}' is not available.",
        }
```
This function takes a city name and returns weather information. For now, it only supports New York, but it demonstrates how the agent can call functions to get information.
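As a side note, if I want to add more cities later without changing the function's contract, one simple option (my own sketch, not part of the quickstart code) is a lookup table keyed by lowercase city name; the London report below is just sample data:

```python
# Sketch: hard-coded sample reports keyed by lowercase city name.
# The status/report/error_message contract stays exactly the same.
CITY_WEATHER_REPORTS = {
    "new york": "The weather in New York is sunny, 25 degrees Celsius (77 degrees Fahrenheit).",
    "london": "The weather in London is cloudy, 15 degrees Celsius (59 degrees Fahrenheit).",
}


def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city."""
    report = CITY_WEATHER_REPORTS.get(city.lower())
    if report is None:
        return {
            "status": "error",
            "error_message": f"Weather information for '{city}' is not available.",
        }
    return {"status": "success", "report": report}
```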
The time function follows the same pattern:
```python
import datetime
from zoneinfo import ZoneInfo  # standard-library timezone support; these imports sit at the top of agent.py


def get_current_time(city: str) -> dict:
    """Returns the current time in a specified city.

    Args:
        city (str): The name of the city for which to retrieve the current time.

    Returns:
        dict: status and result or error msg.
    """
    if city.lower() == "new york":
        tz_identifier = "America/New_York"
    else:
        return {
            "status": "error",
            "error_message": (
                f"Sorry, I don't have timezone information for {city}."
            ),
        }

    tz = ZoneInfo(tz_identifier)
    now = datetime.datetime.now(tz)
    report = (
        f'The current time in {city} is {now.strftime("%Y-%m-%d %H:%M:%S %Z%z")}'
    )
    return {"status": "success", "report": report}
```
This function uses Python's zoneinfo module to get the current time in a specific timezone. It properly handles timezone conversion and formatting.
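Because both tools are ordinary Python functions, they can be sanity-checked on their own before the agent ever calls them, for example by temporarily adding something like this to the bottom of agent.py and running the file directly:

```python
# Manual spot-check of the tools, run directly with Python (no agent involved).
if __name__ == "__main__":
    print(get_weather("New York"))       # expect a "success" dict with a report
    print(get_weather("Dhaka"))          # expect an "error" dict
    print(get_current_time("new york"))  # expect the current New York time
```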
Step 3: Creating the Agent
The key difference from my first agent is that this agent includes tools:
```python
from google.adk.agents import Agent  # imported at the top of agent.py

root_agent = Agent(
    name="weather_time_agent",
    model="gemini-2.0-flash",
    description=(
        "Agent to answer questions about the time and weather in a city."
    ),
    instruction=(
        "You are a helpful agent who can answer user questions about the time and weather in a city."
    ),
    tools=[get_weather, get_current_time],
)
```
By passing tools=[get_weather, get_current_time], I'm giving the agent access to these functions. The LLM will automatically decide when to call these tools based on the user's query.
Running the Agent
Terminal Mode
I ran the agent using the terminal interface:
```
(.venv) PS C:\Agent> adk run .\multi_tool_agent\
```
The agent started successfully and I could interact with it:
```
Running agent weather_time_agent, type exit to exit.
[user]: what can you do?
[weather_time_agent]: I can provide you with the current weather and time for a city. Just let me know which city you're interested in!
[weather_time_agent]: I am sorry, I cannot fulfill this request. I do not have the weather and time information for Dhaka.
[weather_time_agent]: I am sorry, I cannot fulfill this request. I do not have the weather and time information for London.
[user]: new york
[weather_time_agent]: OK. The weather in New York is sunny with a temperature of 25 degrees Celsius (77 degrees Fahrenheit). The current time in new york is 2025-12-02 09:36:51 EST-0500.
```
Notice how the agent:
- Understood the query - When asked "what can you do?", it explained its capabilities
- Attempted to help - For Dhaka and London, it tried to use the tools but gracefully handled the error
- Successfully executed tools - For New York, it called both the get_weather and get_current_time functions and provided accurate information
Web UI Mode
I also launched the web-based development UI:
```
(.venv) HP@DESKTOP-HSUOTBB MINGW64 /c/Agent (main)
$ adk web --port 8000
```
The server started successfully:
```
+-----------------------------------------------------------------------------+
| ADK Web Server started                                                       |
|                                                                              |
| For local testing, access at http://127.0.0.1:8000.                          |
+-----------------------------------------------------------------------------+
```
The web UI provides several advantages:
- Visual Interface: A clean, chat-based interface for interacting with the agent
- Event Inspection: You can see individual function calls, responses, and model responses
- Trace Logs: View detailed trace logs showing the latency of each function call
- Debugging: Easily inspect what the agent is doing behind the scenes
When I asked questions in the web UI, I could see in the Events tab:
- The user's message
- The agent's decision to call tools
- The function calls made (get_weather, get_current_time)
- The responses from those functions
- The final response to the user
This visibility is incredibly helpful for understanding how the agent works and debugging issues.
How It Works
When you ask the agent a question like "What's the weather in New York?", here's what happens:
- User Query: The question is sent to the agent
- LLM Analysis: The Gemini model analyzes the query and determines that it needs to call the get_weather tool
- Tool Execution: The agent calls get_weather("new york")
- Response Processing: The function returns weather data
- Final Response: The LLM formats the tool's response into a natural language answer
The beauty of this system is that the LLM automatically decides which tools to use and when. You don't need to write complex logic to route queries - the model handles it intelligently.
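To make that flow concrete, here is a deliberately oversimplified sketch of the request-execute-respond cycle. It is not ADK's actual implementation, just the shape of what happens between the model and the tools:

```python
# Conceptual sketch only: ADK and Gemini handle all of this for you.
def handle_query(query: str, tools: dict) -> str:
    # 1. The model reads the query plus each tool's docstring and decides
    #    whether to answer directly or request a tool call (name + args).
    #    The decision is hard-coded here purely to illustrate the flow.
    decision = {"tool": "get_weather", "args": {"city": "new york"}}

    # 2. The framework runs the requested Python function with those args.
    result = tools[decision["tool"]](**decision["args"])

    # 3. The structured result goes back to the model, which turns it into
    #    the natural-language answer the user finally sees.
    return f"(model rewrites this for the user) {result}"


# e.g. handle_query("What's the weather in New York?", {"get_weather": get_weather})
```

In reality the model can request several tool calls in a single turn, as it did for New York when it used both tools, but the cycle is the same.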
Key Learnings
1. Tool Function Design
The tool functions need clear docstrings. The LLM uses these docstrings to understand:
- What the function does
- What parameters it takes
- What it returns
Good docstrings are crucial for the agent to use tools correctly.
2. Error Handling
The tools return structured responses with status and either report or error_message. This allows the agent to:
- Know when a tool call succeeded or failed
- Provide appropriate error messages to users
- Handle edge cases gracefully
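Concretely, the caller can branch on the status field instead of trying to parse free-form text, which is what makes the error path predictable:

```python
# Branching on the structured response instead of parsing free-form text.
result = get_weather("Dhaka")
if result["status"] == "success":
    print(result["report"])
else:
    # The agent relays something like this when a tool reports an error.
    print(result["error_message"])
```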
3. Tool Selection
The agent intelligently selects which tools to use. When asked "What's the weather and time in New York?", it calls both get_weather and get_current_time automatically.
4. Web UI Benefits
The web UI is excellent for development because:
- You can see exactly what tools are being called
- Trace logs show performance metrics
- The Events tab provides detailed debugging information
- It's easier to test multiple queries quickly
What's Next?
This multi-tool agent is just the beginning! I'm planning to:
- Expand Tool Capabilities: Add more cities and real weather API integration (a rough sketch of what that could look like follows this list)
- Add More Tools: Create additional tools for different use cases
- Explore Tool Chaining: Build agents that use multiple tools in sequence
- Real-World Applications: Build agents that solve actual problems
- Advanced Features: Explore memory, sessions, and state management
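For that first item, here is a hedged sketch of what a real-API version of get_weather might look like. The endpoint URL and the JSON field names are placeholders for whichever weather service I end up choosing, not a specific real API:

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder endpoint: swap in the real weather service you choose.
WEATHER_API_URL = "https://api.example.com/v1/current"


def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city via an HTTP API."""
    try:
        response = requests.get(WEATHER_API_URL, params={"city": city}, timeout=10)
        response.raise_for_status()
        data = response.json()  # assumed shape: {"summary": "...", "temp_c": 25}
        report = f"The weather in {city} is {data['summary']}, {data['temp_c']} degrees Celsius."
        return {"status": "success", "report": report}
    except Exception as exc:
        # Keep the same structured error contract the agent already understands.
        return {"status": "error", "error_message": f"Could not fetch weather for '{city}': {exc}"}
```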
Key Takeaways
- Tools enable action: Agents can do more than just answer questions - they can perform actions
- Automatic tool selection: The LLM intelligently decides which tools to use based on the query
- Clear documentation matters: Good docstrings help the agent understand and use tools correctly
- Error handling is important: Structured responses help agents handle failures gracefully
- Web UI is powerful: The development UI provides excellent debugging and inspection capabilities
Resources
- Google ADK Multi-Tool Agent Documentation
- Google ADK Documentation
- GitHub Repository - Check out the code for this agent
- YouTube Tutorial - Watch the video walkthrough