PydanticAI is a powerful Python framework designed to streamline the development of production-grade applications using Generative AI. It is built by the same team behind Pydantic, a widely used data validation library, and aims to bring the innovative and ergonomic design of FastAPI to the field of AI application development. PydanticAI focuses on type safety, modularity, and seamless integration with other Python tools.
Core Concepts
PydanticAI revolves around several key concepts:
Agents
Agents are the primary interface for interacting with Large Language Models (LLMs). An agent acts as a container for various components, including:
- System prompts: Instructions for the LLM, defined as static strings or dynamic functions.
- Function tools: Functions that the LLM can call to get additional information or perform actions.
- Structured result types: Data types that the LLM must return at the end of a run.
- Dependency types: Data or services that system prompt functions, tools and result validators may use.
- LLM models: The LLM that the agent will use, which can be set at agent creation or at runtime.
Agents are designed for reusability and are typically instantiated once and reused throughout an application.
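For example (a minimal sketch; the model name and prompt are illustrative, not from the source), an agent is typically created once at module level and reused across requests:
from pydantic_ai import Agent

# One reusable agent: the model, instructions and dependency type live together.
support_agent = Agent(
    'openai:gpt-4o',
    deps_type=str,
    system_prompt='Be concise and polite.',
)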
System Prompts
System prompts are instructions provided to the LLM by the developer. They can be:
- Static system prompts: Defined when the agent is created, using the system_prompt parameter of the Agent constructor.
- Dynamic system prompts: Defined by functions decorated with @agent.system_prompt. These can access runtime information, such as dependencies, via the RunContext object.
A single agent can use both static and dynamic system prompts, which are appended in the order they are defined at runtime.
from pydantic_ai import Agent, RunContext
from datetime import date
agent = Agent(
    'openai:gpt-4o',
    deps_type=str,
    system_prompt="Use the customer's name while replying to them.",
)

@agent.system_prompt
def add_the_users_name(ctx: RunContext[str]) -> str:
    return f"The user's name is {ctx.deps}."

@agent.system_prompt
def add_the_date() -> str:
    return f'The date is {date.today()}.'
result = agent.run_sync('What is the date?', deps='Frank')
print(result.data)
#> Hello Frank, the date today is 2032-01-02.
Function Tools
Function tools enable LLMs to access external information or perform actions not available within the system prompt itself. Tools can be registered in several ways:
- @agent.tool decorator: For tools that require access to the agent's context via RunContext.
- @agent.tool_plain decorator: For tools that do not need access to the agent's context.
- tools keyword argument in the Agent constructor: Can take plain functions or instances of the Tool class, giving more control over tool definitions.
import random
from pydantic_ai import Agent, RunContext
agent = Agent(
    'gemini-1.5-flash',
    deps_type=str,
    system_prompt=(
        "You're a dice game, you should roll the die and see if the number "
        "you get back matches the user's guess. If so, tell them they're a winner. "
        "Use the player's name in the response."
    ),
)

@agent.tool_plain
def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))

@agent.tool
def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps
dice_result = agent.run_sync('My guess is 4', deps='Anne')
print(dice_result.data)
#> Congratulations Anne, you guessed correctly! You're a winner!
Tool parameters are extracted from the function signature and are used to build the tool's JSON schema. The docstrings of functions are used to generate the descriptions of the tool and the parameter descriptions within the schema.
Dependencies
Dependencies provide data and services to the agent's system prompts, tools, and result validators via a dependency injection system. Dependencies are accessed through the RunContext object. They can be any Python type, but dataclasses are a convenient way to manage multiple dependencies.
from dataclasses import dataclass
import httpx
from pydantic_ai import Agent, RunContext
@dataclass
class MyDeps:
    api_key: str
    http_client: httpx.AsyncClient

agent = Agent(
    'openai:gpt-4o',
    deps_type=MyDeps,
)

@agent.system_prompt
async def get_system_prompt(ctx: RunContext[MyDeps]) -> str:
    response = await ctx.deps.http_client.get(
        'https://example.com',
        headers={'Authorization': f'Bearer {ctx.deps.api_key}'},
    )
    response.raise_for_status()
    return f'Prompt: {response.text}'

async def main():
    async with httpx.AsyncClient() as client:
        deps = MyDeps('foobar', client)
        result = await agent.run('Tell me a joke.', deps=deps)
        print(result.data)
        #> Did you hear about the toothpaste scandal? They called it Colgate.
Results
Results are the final values returned from an agent run. They are wrapped in RunResult (for synchronous and asynchronous runs) or StreamedRunResult (for streamed runs), providing access to usage data and message history. Results can be plain text or structured data and are validated using Pydantic.
from pydantic import BaseModel
from pydantic_ai import Agent
class CityLocation(BaseModel):
    city: str
    country: str
agent = Agent('gemini-1.5-flash', result_type=CityLocation)
result = agent.run_sync('Where were the olympics held in 2012?')
print(result.data)
#> city='London' country='United Kingdom'
Result validators, added via the @agent.result_validator decorator, provide a way to add further validation logic, particularly when the validation requires IO and is asynchronous.
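As a rough sketch of what that can look like (the validation rule here is illustrative, not from the source), a result validator receives the model's output and can raise ModelRetry to ask the model to try again:
from pydantic_ai import Agent, ModelRetry

agent = Agent('openai:gpt-4o', result_type=str)

@agent.result_validator
async def not_empty(result: str) -> str:
    # Reject empty answers and ask the model to retry.
    if not result.strip():
        raise ModelRetry('The answer must not be empty, please try again.')
    return result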
Key Features
PydanticAI boasts several key features that make it a compelling choice for AI application development:
- Model Agnostic: PydanticAI supports a variety of LLMs, including OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral. It also provides a simple interface for implementing support for other models.
- Type Safety: Designed to work seamlessly with static type checkers like mypy and pyright. It allows for type checking of dependencies and result types.
- Python-Centric Design: Leverages familiar Python control flow and agent composition to build AI projects, making it easy to apply standard Python practices.
- Structured Responses: Uses Pydantic to validate and structure model outputs, ensuring consistent responses.
- Dependency Injection System: Offers a dependency injection system to provide data and services to an agent’s components, enhancing testability and iterative development.
- Streamed Responses: Supports streaming LLM outputs with immediate validation, allowing for rapid and accurate results.
Working with Agents
Running Agents
Agents can be run in several ways:
- run_sync(): For synchronous execution.
- run(): For asynchronous execution.
- run_stream(): For streaming responses.
from pydantic_ai import Agent
agent = Agent('openai:gpt-4o')
# Synchronous run
result_sync = agent.run_sync('What is the capital of Italy?')
print(result_sync.data)
#> Rome
# Asynchronous run
async def main():
    result = await agent.run('What is the capital of France?')
    print(result.data)
    #> Paris

    # Streamed run
    async with agent.run_stream('What is the capital of the UK?') as response:
        print(await response.get_data())
        #> London
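Beyond fetching the complete answer with get_data(), a streamed run can also yield text incrementally. A minimal sketch, assuming a plain-text result type and using stream_text (behaviour differs for structured results):
async def stream_capitals():
    async with agent.run_stream('Name three European capitals.') as response:
        # Print each validated chunk of text as it arrives.
        async for chunk in response.stream_text(delta=True):
            print(chunk, end='', flush=True)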
Conversations
An agent run might represent an entire conversation, but conversations can also be composed of multiple runs, especially when maintaining state between interactions. You can pass messages from previous runs using the message_history argument to continue a conversation.
from pydantic_ai import Agent
agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
result1 = agent.run_sync('Tell me a joke.')
print(result1.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.
result2 = agent.run_sync('Explain?', message_history=result1.new_messages())
print(result2.data)
#> This is an excellent joke invented by Samuel Colvin, it needs no explanation.
Usage Limits
PydanticAI provides a settings.UsageLimits structure to limit the number of tokens and requests. You can apply these settings via the usage_limits argument to the run functions.
from pydantic_ai import Agent
from pydantic_ai.settings import UsageLimits
from pydantic_ai.exceptions import UsageLimitExceeded
agent = Agent('claude-3-5-sonnet-latest')
try:
    result_sync = agent.run_sync(
        'What is the capital of Italy? Answer with a paragraph.',
        usage_limits=UsageLimits(response_tokens_limit=10),
    )
except UsageLimitExceeded as e:
    print(e)
    #> Exceeded the response_tokens_limit of 10 (response_tokens=32)
Model Settings
The settings.ModelSettings structure allows you to fine-tune model behaviour through parameters such as temperature, max_tokens, and timeout. You can apply these via the model_settings argument in the run functions.
from pydantic_ai import Agent
agent = Agent('openai:gpt-4o')
result_sync = agent.run_sync(
    'What is the capital of Italy?',
    model_settings={'temperature': 0.0},
)
print(result_sync.data)
#> Rome
Function Tools in Detail
Tool Registration
Tools can be registered using the @agent.tool decorator (for tools needing context), the @agent.tool_plain decorator (for tools without context), or via the tools argument in the Agent constructor.
import random
from pydantic_ai import Agent, RunContext

def roll_die() -> str:
    """Roll a six-sided die and return the result."""
    return str(random.randint(1, 6))

def get_player_name(ctx: RunContext[str]) -> str:
    """Get the player's name."""
    return ctx.deps

agent_a = Agent(
    'gemini-1.5-flash',
    deps_type=str,
    tools=[roll_die, get_player_name],
)
Tool Schema
Parameter descriptions are extracted from docstrings and added to the tool’s JSON schema. If a tool has a single parameter that can be represented as an object in JSON schema, the schema is simplified to be just that object.
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelResponse
from pydantic_ai.models.function import AgentInfo, FunctionModel
agent = Agent()
@agent.tool_plain
def foobar(a: int, b: str, c: dict[str, list[float]]) -> str:
    """Get me foobar.

    Args:
        a: apple pie
        b: banana cake
        c: carrot smoothie
    """
    return f'{a} {b} {c}'
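The FunctionModel and AgentInfo imports above are what the PydanticAI docs use to inspect the generated schema: the agent is run against a FunctionModel whose handler prints the tool definition instead of calling a real LLM. The sketch below follows that pattern; treat attribute names such as function_tools and parameters_json_schema as version-dependent.
from pydantic_ai.messages import TextPart

async def print_schema(messages: list[ModelMessage], info: AgentInfo) -> ModelResponse:
    tool = info.function_tools[0]
    print(tool.description)              # taken from the docstring summary
    print(tool.parameters_json_schema)   # built from the signature and the Args section
    return ModelResponse(parts=[TextPart('foobar')])

agent.run_sync('hello', model=FunctionModel(print_schema))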
Dynamic Tools
Tools can be customised with a prepare function, which is called at each step to modify the tool definition or omit the tool from that step.
from typing import Union
from pydantic_ai import Agent, RunContext
from pydantic_ai.tools import ToolDefinition
agent = Agent('test')
async def only_if_42(
    ctx: RunContext[int], tool_def: ToolDefinition
) -> Union[ToolDefinition, None]:
    if ctx.deps == 42:
        return tool_def

@agent.tool(prepare=only_if_42)
def hitchhiker(ctx: RunContext[int], answer: str) -> str:
    return f'{ctx.deps} {answer}'
result = agent.run_sync('testing...', deps=41)
print(result.data)
#> success (no tool calls)
result = agent.run_sync('testing...', deps=42)
print(result.data)
#> {"hitchhiker":"42 a"}
Messages and Chat History
Accessing Messages
Messages exchanged during an agent run can be accessed via the all_messages() and new_messages() methods on RunResult and StreamedRunResult objects.
from pydantic_ai import Agent
agent = Agent('openai:gpt-4o', system_prompt='Be a helpful assistant.')
result = agent.run_sync('Tell me a joke.')
print(result.data)
#> Did you hear about the toothpaste scandal? They called it Colgate.
print(result.all_messages())
Message Reuse
Messages can be passed to the message_history parameter to continue conversations across multiple agent runs. When message_history is set and not empty, a new system prompt is not generated.
Message Format
The message format is model-independent, allowing messages to be reused across different agents or with the same agent running different models.
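As a quick illustration of both points (continuing from the snippet above; the second model name is just an example), the same message history can be replayed with a per-run model override:
result2 = agent.run_sync(
    'Explain the joke differently.',
    model='gemini-1.5-flash',
    message_history=result.new_messages(),
)
print(result2.data)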
Debugging and Monitoring
Pydantic Logfire
PydanticAI integrates with Pydantic Logfire, an observability platform that allows you to monitor and debug your entire application. Logfire can be used for:
- Real-time debugging: To see what's happening in your application in real-time.
- Monitoring application performance: Using SQL queries and dashboards.
To use PydanticAI with Logfire, install it with the logfire optional group: pip install 'pydantic-ai[logfire]'. You then need to configure a Logfire project and authenticate your environment.
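Once installed and authenticated, wiring Logfire into your application is usually a one-time call at startup. A minimal sketch (project creation and authentication happen via the logfire CLI and are not shown here):
import logfire

# Send traces from this process to the Logfire project configured for this environment.
logfire.configure()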
Installation and Setup
Installation
PydanticAI can be installed using pip:
pip install pydantic-ai
A slim install is also available if you only need support for specific models, for example:
pip install 'pydantic-ai-slim[openai]'
Logfire Integration
To use PydanticAI with Logfire, install it with the logfire
optional group:
pip install 'pydantic-ai[logfire]'
Examples
Examples are available as a separate package:
pip install 'pydantic-ai[examples]'
Testing and Evaluation
Unit Tests
Unit tests verify that your application code behaves as expected. For PydanticAI, follow these strategies:
- Use pytest as your test harness.
- Use TestModel or FunctionModel in place of your actual model.
- Use Agent.override to replace your model inside your application logic.
- Set ALLOW_MODEL_REQUESTS=False globally to prevent accidental calls to non-test models (see the snippet after the example test below).
import pytest
from datetime import datetime
from typing_extensions import TypedDict

from pydantic_ai import Agent, RunContext
from pydantic_ai.messages import UserPromptPart
from pydantic_ai.models.test import TestModel

@pytest.mark.anyio  # requires the anyio pytest plugin
async def test_weather():
    model = TestModel()

    class WeatherResult(TypedDict):
        temperature: int
        location: str
        time: datetime

    weather_agent = Agent(
        model=model,
        result_type=WeatherResult,
        system_prompt='Return a valid WeatherResult object.',
    )

    @weather_agent.tool
    def get_current_time(ctx: RunContext) -> datetime:
        return datetime.now()

    @weather_agent.tool
    def get_location(ctx: RunContext) -> str:
        return 'london'

    with weather_agent.override(model=model):
        result = await weather_agent.run('What is the weather?')

    # TestModel fills the structured result with synthetic, schema-valid data.
    assert isinstance(result.data['temperature'], int)
    assert isinstance(result.data['location'], str)
    assert isinstance(result.data['time'], datetime)

    messages = result.all_messages()
    # Messages alternate between requests to the model and its responses:
    # user prompt, tool calls, tool returns, then the final structured result.
    assert messages[0].kind == 'request'
    assert messages[1].kind == 'response'
    assert messages[2].kind == 'request'
    assert isinstance(messages[0].parts[-1], UserPromptPart)
    # TestModel calls every registered tool before producing the result.
    called_tools = {part.tool_name for part in messages[1].parts if hasattr(part, 'tool_name')}
    assert {'get_current_time', 'get_location'} <= called_tools
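The last strategy from the list above, blocking accidental requests to real models, is a one-line global switch. A minimal sketch, typically placed in a conftest.py:
from pydantic_ai import models

# Any attempt to call a real model API during the test session will now raise an error.
models.ALLOW_MODEL_REQUESTS = False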
Evals
Evals are used to measure the performance of the LLM and are more like benchmarks than unit tests. Evals focus on measuring how the LLM performs for a specific application. This can be done through end-to-end tests, synthetic self-contained tests, using LLMs to evaluate LLMs, or by measuring agent performance in production.
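A self-contained, synthetic eval can be as simple as running the agent over a small set of labelled prompts and scoring the answers; the cases and exact-match scoring below are purely illustrative:
from pydantic_ai import Agent

capital_agent = Agent('openai:gpt-4o', system_prompt='Answer with just the capital city.')

CASES = [
    ('What is the capital of Italy?', 'Rome'),
    ('What is the capital of France?', 'Paris'),
]

def run_eval() -> float:
    # Fraction of cases where the agent's answer exactly matches the expected city.
    correct = 0
    for prompt, expected in CASES:
        result = capital_agent.run_sync(prompt)
        correct += int(result.data.strip() == expected)
    return correct / len(CASES)

print(f'accuracy: {run_eval():.0%}')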
Example Use Cases
PydanticAI can be used in a wide variety of use cases:
- Roulette Wheel: Simulating a roulette wheel using an agent with an integer dependency and a boolean result.
- Chat Application: Creating a chat application with multiple runs, passing previous messages using message_history.
- Bank Support Agent: Building a support agent for a bank using tools, dependency injection, and structured responses.
- Weather Forecast: Creating an application that returns a weather forecast based on location and date using function tools and dependencies.
- SQL Generation: Generating SQL queries from user prompts, with validation using the result validator.
Conclusion
PydanticAI offers a robust and flexible framework for developing AI applications with a strong emphasis on type safety and modularity. The use of Pydantic for data validation and structuring, coupled with its dependency injection system, makes it an ideal tool for building reliable and maintainable AI applications. With its broad LLM support and seamless integration with tools like Pydantic Logfire, PydanticAI enables developers to build powerful, production-ready AI-driven projects efficiently.