Your AI agent can answer questions, but it can't do anything. It can't check the weather, look up a user, or query a database. Without tools, it's just an expensive autocomplete.
PydanticAI fixes this with one decorator. Here's how to give your agent two working tools in under 10 minutes.
The Code
```python
import httpx
from dataclasses import dataclass

from pydantic import BaseModel
from pydantic_ai import Agent, RunContext


@dataclass
class Deps:
    http_client: httpx.Client


class CityWeather(BaseModel):
    city: str
    temperature_f: float
    summary: str
    recommendation: str


agent = Agent(
    "openai:gpt-4o-mini",
    deps_type=Deps,
    output_type=CityWeather,
    instructions="You help users check weather conditions. Use the tools provided to fetch real data before answering.",
)


@agent.tool
def get_coordinates(ctx: RunContext[Deps], city: str) -> str:
    """Get latitude and longitude for a city."""
    response = ctx.deps.http_client.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": city, "count": 1},
    )
    data = response.json()
    if not data.get("results"):
        return f"City '{city}' not found."
    result = data["results"][0]
    return f"{result['name']}: lat={result['latitude']}, lon={result['longitude']}"


@agent.tool
def get_weather(ctx: RunContext[Deps], latitude: float, longitude: float) -> str:
    """Get current weather for coordinates."""
    response = ctx.deps.http_client.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": latitude,
            "longitude": longitude,
            "current": "temperature_2m,wind_speed_10m",
            "temperature_unit": "fahrenheit",
        },
    )
    data = response.json()["current"]
    return f"Temperature: {data['temperature_2m']}F, Wind: {data['wind_speed_10m']} km/h"


result = agent.run_sync(
    "What's the weather like in Tokyo?",
    deps=Deps(http_client=httpx.Client()),
)
print(result.output.city)
print(f"{result.output.temperature_f}F")
print(result.output.summary)
print(result.output.recommendation)
```
Install and run:

```shell
pip install pydantic-ai httpx
export OPENAI_API_KEY="sk-..."
python weather_agent.py
```
How It Works
Define dependencies with a dataclass. The Deps class holds your HTTP client. PydanticAI injects it into every tool call through RunContext -- no globals, no singletons. Swap in a mock client for testing and your tools work identically.
Set the output type with a Pydantic model. CityWeather defines the exact shape of the agent's response. PydanticAI forces the LLM to return data matching this schema. You get a typed Python object back -- not a string you have to parse.
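The schema enforcement is plain Pydantic underneath, so you can see the behavior without any LLM in the loop. A minimal standalone sketch (the sample values are made up for illustration):

```python
from pydantic import BaseModel, ValidationError


class CityWeather(BaseModel):
    city: str
    temperature_f: float
    summary: str
    recommendation: str


# Lax-mode validation coerces the string "58.4" into a float,
# the same way an LLM's JSON output gets validated.
weather = CityWeather.model_validate({
    "city": "Tokyo",
    "temperature_f": "58.4",
    "summary": "Mild",
    "recommendation": "Light jacket",
})
assert weather.temperature_f == 58.4

# Missing fields raise ValidationError instead of slipping through silently.
try:
    CityWeather.model_validate({"city": "Tokyo"})
except ValidationError as exc:
    print(f"{exc.error_count()} validation errors")
```

When validation fails at the agent level, PydanticAI can feed the errors back to the model for a retry rather than handing you malformed data.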
Register tools with @agent.tool. Each decorated function becomes a tool the LLM can call. The function signature becomes the tool's parameter schema. The docstring becomes the tool's description -- the model reads this to decide when to call it, so write it clearly.
The agent chains tools automatically. Ask "What's the weather in Tokyo?" and the agent calls get_coordinates first to get lat/lon, then calls get_weather with those coordinates. You didn't write that orchestration logic -- the LLM figured out the sequence from the tool descriptions.
What You'll See
```
Tokyo
58.4F
Mild and calm conditions with light winds.
Light jacket weather -- comfortable for walking around the city.
```
The response is a validated CityWeather object. Access .city, .temperature_f, .summary, and .recommendation directly -- no JSON parsing, no string extraction.
Key Details
Docstrings matter. The LLM uses your tool's docstring to decide when to call it. "Get latitude and longitude for a city" tells the model this tool takes a city name and returns coordinates. Vague docstrings lead to wrong tool selection.
Type hints drive the schema. city: str and latitude: float become the tool's JSON schema. PydanticAI validates the LLM's arguments before your function runs -- no type-checking code needed.
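You can see the idea with plain Pydantic. This is a rough sketch of the kind of parameter schema a framework could derive from `get_weather`'s signature -- not PydanticAI's actual internal representation:

```python
from pydantic import create_model

# Hypothetical reconstruction of a parameter model for
# get_weather(latitude: float, longitude: float).
GetWeatherParams = create_model(
    "GetWeatherParams",
    latitude=(float, ...),   # `...` marks the field as required
    longitude=(float, ...),
)

schema = GetWeatherParams.model_json_schema()
assert schema["properties"]["latitude"]["type"] == "number"
assert schema["required"] == ["latitude", "longitude"]
```

That JSON schema is what gets sent to the model as the tool definition, which is why precise type hints matter: they are the contract the LLM's arguments are checked against.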
Dependencies keep tools testable. Instead of creating an HTTP client inside get_weather, you receive it through ctx.deps. In tests, pass a mock client that returns fixed responses. Same pattern FastAPI developers already use.
Going Further
- Add `retries=3` to the `Agent()` constructor for automatic retry on validation failures
- Use `async` tool functions with `httpx.AsyncClient` for concurrent API calls
- Combine with `agent.run_stream()` for real-time streamed responses
Check out the other posts in the AI Agent Quick Tips series: retry logic, structured outputs, testing tool calls, and human approval gates.
Building agents that need tool orchestration, scheduling, and memory without the infrastructure work? Nebula handles it so you can focus on the tools.