Tool Use AI Agents Python: Build Smart Agents That Call Functions

You're building an AI agent, but it feels like you've hit a wall. Your agent can chat, but it can't actually do anything — no weather lookups, no database queries, no real-world interactions. It's stuck in conversation limbo while your users need an agent that can take action.
This is where tool use (also called function calling) transforms your basic chatbot into a powerful AI agent. Instead of just generating text, your agent can now call Python functions, query APIs, manipulate data, and interact with external systems. It's the difference between an agent that talks about weather and one that actually fetches today's forecast.
Related: Complete RAG Tutorial Python: Build Your First Agent
In this guide, you'll learn how to implement tool use in Python AI agents from scratch. We'll build a practical weather station agent that can fetch real weather data and make decisions based on sensor readings — perfect for that Raspberry Pi project gathering dust on your shelf.
Table of Contents
- Understanding Tool Use in AI Agents
- Setting Up Your Python Environment
- Building Your First Tool Use Agent
- Advanced Tool Use Patterns
- Handling Tool Use Errors and Edge Cases
- Best Practices for Production AI Agents
- Frequently Asked Questions
Understanding Tool Use in AI Agents
Tool use enables AI agents to execute functions based on natural language instructions. Instead of generating a response like "I would check the weather for you," your agent actually calls a get_weather() function and returns real data.
Here's how the magic happens: the language model analyzes the user's request, determines which tools it needs, extracts the required parameters, and then executes the appropriate function calls. The results flow back into the conversation context, allowing the agent to provide informed responses.
This architecture powers everything from simple calculators to complex multi-agent systems. Your agent becomes truly autonomous when it can perceive (through sensors), reason (via the LLM), and act (through tools).
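Stripped of any particular SDK, that perceive-reason-act loop reduces to a dispatch table: the model names a function and supplies arguments, and your code routes the call. Here's a minimal, framework-free sketch (the tool registry and the `get_weather` stub are hypothetical stand-ins; a real agent would get the name and arguments back from the model's structured response):

```python
def get_weather(location: str) -> dict:
    # Stand-in tool; a real implementation would call a weather API
    return {"location": location, "temperature": 21.0}

# Registry mapping tool names (as the LLM sees them) to Python callables
TOOLS = {"get_weather": get_weather}

def dispatch(tool_name: str, arguments: dict) -> dict:
    """Route a model-requested call to the matching Python function."""
    if tool_name not in TOOLS:
        return {"error": f"Unknown tool: {tool_name}"}
    return TOOLS[tool_name](**arguments)

result = dispatch("get_weather", {"location": "Berlin"})
```

Everything else in this guide, from schemas to error handling, is scaffolding around that one routing step.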
Setting Up Your Python Environment
Before diving into tool use implementation, you need the right foundation. Start with a clean Python environment and install the essential packages:
# requirements.txt
openai>=1.0.0
pydantic>=2.0.0
requests>=2.28.0
typing-extensions>=4.4.0
Install these dependencies:
pip install openai pydantic requests typing-extensions
The key insight here is that modern AI agents rely heavily on structured data validation. Pydantic ensures your tool definitions are rock-solid, preventing runtime errors when the LLM calls your functions with unexpected parameters.
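To see what that buys you, here's a hedged sketch of validating the raw JSON arguments an LLM hands back before any tool runs. The `WeatherParams` model mirrors the one we define later in the article; `safe_parse` is an illustrative helper, not part of any SDK:

```python
from pydantic import BaseModel, Field, ValidationError

class WeatherParams(BaseModel):
    location: str = Field(min_length=1, description="City name or coordinates")
    units: str = Field(default="celsius")

def safe_parse(raw_arguments: str):
    """Parse the LLM's JSON arguments; return a model or an error dict."""
    try:
        return WeatherParams.model_validate_json(raw_arguments)
    except ValidationError as e:
        # Surface the first validation message back to the agent loop
        return {"error": e.errors()[0]["msg"]}

ok = safe_parse('{"location": "Tokyo"}')
bad = safe_parse('{"units": "celsius"}')  # missing required "location"
```

Feeding the error dict back to the model often lets it self-correct on the next turn instead of crashing your agent.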
Building Your First Tool Use Agent
Let's build a weather station agent that can fetch weather data and analyze sensor readings. This agent will demonstrate the core concepts of tool use while solving a real problem.
First, define your tools using Pydantic for type safety:
import json
import random
from datetime import datetime, timezone
from typing import List, Optional

import requests
from openai import OpenAI
from pydantic import BaseModel, Field


class WeatherParams(BaseModel):
    location: str = Field(description="City name or coordinates")
    units: str = Field(default="celsius", description="Temperature units")


class SensorParams(BaseModel):
    sensor_id: str = Field(description="Sensor identifier")
    reading_type: str = Field(description="Type of reading: temperature, humidity, pressure")


class WeatherAgent:
    def __init__(self, api_key: str, weather_api_key: str):
        self.client = OpenAI(api_key=api_key)
        self.weather_api_key = weather_api_key

    def get_weather(self, location: str, units: str = "celsius") -> dict:
        """Fetch current weather for a location"""
        url = "https://api.openweathermap.org/data/2.5/weather"
        params = {
            "q": location,
            "appid": self.weather_api_key,
            "units": "metric" if units == "celsius" else "imperial",
        }
        try:
            response = requests.get(url, params=params, timeout=10)
            response.raise_for_status()
            data = response.json()
            return {
                "location": data["name"],
                "temperature": data["main"]["temp"],
                "humidity": data["main"]["humidity"],
                "description": data["weather"][0]["description"],
                "units": units,
            }
        except Exception as e:
            return {"error": f"Failed to fetch weather: {str(e)}"}

    def read_sensor(self, sensor_id: str, reading_type: str) -> dict:
        """Simulate reading from a Raspberry Pi sensor"""
        # In a real implementation, this would interface with actual hardware
        sensor_data = {
            "rpi_temp_01": {"temperature": round(random.uniform(18, 25), 1)},
            "rpi_humid_01": {"humidity": round(random.uniform(40, 80), 1)},
            "rpi_pressure_01": {"pressure": round(random.uniform(1010, 1030), 1)},
        }
        if sensor_id in sensor_data and reading_type in sensor_data[sensor_id]:
            return {
                "sensor_id": sensor_id,
                "reading_type": reading_type,
                "value": sensor_data[sensor_id][reading_type],
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
        return {"error": f"Sensor {sensor_id} or reading type {reading_type} not found"}
Now implement the tool use logic:
    # These methods continue the WeatherAgent class defined above.
    def get_tools_definition(self):
        """Define available tools for the LLM"""
        return [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get current weather information for a specific location",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {
                                "type": "string",
                                "description": "City name or coordinates",
                            },
                            "units": {
                                "type": "string",
                                "enum": ["celsius", "fahrenheit"],
                                "description": "Temperature units",
                            },
                        },
                        "required": ["location"],
                    },
                },
            },
            {
                "type": "function",
                "function": {
                    "name": "read_sensor",
                    "description": "Read data from a Raspberry Pi sensor",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "sensor_id": {
                                "type": "string",
                                "description": "Sensor identifier (e.g., rpi_temp_01)",
                            },
                            "reading_type": {
                                "type": "string",
                                "enum": ["temperature", "humidity", "pressure"],
                                "description": "Type of sensor reading",
                            },
                        },
                        "required": ["sensor_id", "reading_type"],
                    },
                },
            },
        ]

    def execute_tool_call(self, tool_call):
        """Execute a function call from the LLM"""
        function_name = tool_call.function.name
        arguments = json.loads(tool_call.function.arguments)
        if function_name == "get_weather":
            return self.get_weather(**arguments)
        elif function_name == "read_sensor":
            return self.read_sensor(**arguments)
        else:
            return {"error": f"Unknown function: {function_name}"}

    def chat(self, message: str) -> str:
        """Main chat interface with tool use capabilities"""
        messages = [
            {
                "role": "system",
                "content": (
                    "You are a helpful weather station assistant. You can fetch "
                    "weather data and read sensor values from Raspberry Pi devices. "
                    "Always provide actionable insights based on the data you gather."
                ),
            },
            {"role": "user", "content": message},
        ]

        # Initial LLM call
        response = self.client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
            tools=self.get_tools_definition(),
            tool_choice="auto",
        )

        assistant_message = response.choices[0].message
        messages.append(assistant_message)

        # Execute any tool calls
        if assistant_message.tool_calls:
            for tool_call in assistant_message.tool_calls:
                result = self.execute_tool_call(tool_call)
                messages.append({
                    "role": "tool",
                    "content": json.dumps(result),
                    "tool_call_id": tool_call.id,
                })

            # Get final response with tool results
            final_response = self.client.chat.completions.create(
                model="gpt-4-turbo",
                messages=messages,
            )
            return final_response.choices[0].message.content

        return assistant_message.content
Advanced Tool Use Patterns
Once you've mastered basic tool use, several advanced patterns can make your AI agents more capable and reliable.
Parallel Tool Execution: Instead of calling tools sequentially, execute independent calls in parallel to reduce latency:
import asyncio

import aiohttp


class AsyncWeatherAgent(WeatherAgent):
    async def get_weather_async(self, location: str) -> dict:
        """Async counterpart of get_weather, using aiohttp"""
        url = "https://api.openweathermap.org/data/2.5/weather"
        params = {"q": location, "appid": self.weather_api_key, "units": "metric"}
        async with aiohttp.ClientSession() as session:
            async with session.get(url, params=params) as response:
                data = await response.json()
                return {"location": data["name"], "temperature": data["main"]["temp"]}

    async def parallel_weather_check(self, locations: List[str]):
        """Check weather for multiple locations simultaneously"""
        tasks = [self.get_weather_async(loc) for loc in locations]
        return await asyncio.gather(*tasks, return_exceptions=True)
Tool Chaining: Design tools that can use outputs from other tools as inputs. This creates powerful workflows where agents can break down complex tasks into manageable steps.
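As a toy illustration of chaining, one tool's output can feed directly into the next tool's input. Both tools below are hypothetical stand-ins with canned data; the point is the shape of the pipeline, not the data source:

```python
def geocode(city: str) -> dict:
    # Stand-in geocoder with canned coordinates
    coords = {"Paris": (48.86, 2.35)}
    return {"lat": coords[city][0], "lon": coords[city][1]}

def forecast(lat: float, lon: float) -> dict:
    # Stand-in forecast tool keyed by coordinates
    return {"lat": lat, "lon": lon, "summary": "clear"}

def chained_forecast(city: str) -> dict:
    """Step 1: resolve coordinates. Step 2: feed them to the forecast tool."""
    location = geocode(city)
    return forecast(location["lat"], location["lon"])
```

In a real agent, the LLM itself usually performs this chaining across turns: it calls `geocode`, reads the result from the tool message, then requests `forecast` with those coordinates.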
Error Recovery: Implement robust error handling that allows agents to retry failed operations, use fallback tools, or gracefully degrade functionality when tools are unavailable.
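One possible shape for that recovery logic is a generic retry wrapper with exponential backoff and an optional fallback. `with_retry` is illustrative, not from any library, and the delays here are kept tiny for demonstration:

```python
import time

def with_retry(tool, *args, retries=3, base_delay=0.01, fallback=None):
    """Call `tool`, retrying on exceptions; use `fallback` if all attempts fail."""
    for attempt in range(retries):
        try:
            return tool(*args)
        except Exception:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return fallback(*args) if fallback else {"error": "all retries failed"}

# A tool that fails twice before succeeding, to exercise the wrapper
calls = {"n": 0}
def flaky(x):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return {"value": x}

result = with_retry(flaky, 42)
```

Because the wrapper returns a dict either way, the agent loop can always serialize the result into a tool message instead of crashing mid-conversation.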
Handling Tool Use Errors and Edge Cases
Real-world AI agents face numerous challenges. Network failures, invalid parameters, rate limits, and unexpected data formats can break your agent's functionality. Here's how to build resilience:
Input Validation: Always validate tool parameters before execution. Pydantic helps, but add custom validation for business logic:
def validate_location(self, location: str) -> bool:
    """Validate location parameter before weather API call"""
    if len(location.strip()) == 0:
        return False
    # Add more validation logic
    return True
Graceful Degradation: When tools fail, provide alternative responses instead of generic errors:
def get_weather_with_fallback(self, location: str) -> dict:
    """Weather fetching with fallback strategies.

    Assumes a get_weather variant that raises on failure rather than
    returning an error dict.
    """
    try:
        return self.get_weather(location)
    except requests.ConnectionError:
        return {
            "location": location,
            "error": "Weather service unavailable. Try checking a local weather app.",
            "fallback": True,
        }
    except Exception as e:
        return {
            "location": location,
            "error": f"Unable to fetch weather data: {str(e)}",
            "suggestion": "Please verify the location name and try again.",
        }
Rate Limiting: Implement intelligent rate limiting to prevent API abuse and ensure stable performance during high usage periods.
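A token bucket is one common way to do this. Here's a minimal sketch; the rate and capacity are illustrative, and a production agent would likely share one limiter across workers rather than instantiating a bucket per request:

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # burst ceiling
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
```

Before executing a tool call, check `bucket.allow()`; when it returns False, either queue the call or return a "rate limited, try again shortly" result the LLM can relay to the user.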
Best Practices for Production AI Agents
Building production-ready AI agents with tool use requires attention to security, performance, and maintainability.
Security: Never expose sensitive functions directly to the LLM. Create safe wrappers that validate permissions and sanitize inputs. Use environment variables for API keys and implement proper authentication for external tools.
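For example, a safe wrapper might whitelist sensor IDs before the underlying function ever sees untrusted input. The sensor names follow this article's examples; the regex and whitelist are illustrative:

```python
import re

ALLOWED_SENSORS = {"rpi_temp_01", "rpi_humid_01", "rpi_pressure_01"}

def _read_sensor_raw(sensor_id: str) -> dict:
    # Imagine this touches real hardware; it must never see untrusted input
    return {"sensor_id": sensor_id, "value": 21.5}

def read_sensor_safe(sensor_id: str) -> dict:
    """Sanitize and whitelist before delegating to the raw tool."""
    sensor_id = sensor_id.strip()
    if not re.fullmatch(r"[a-z0-9_]+", sensor_id) or sensor_id not in ALLOWED_SENSORS:
        return {"error": "sensor not permitted"}
    return _read_sensor_raw(sensor_id)
```

Only `read_sensor_safe` gets registered as a tool; the raw function stays private to the module.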
Monitoring: Log all tool calls, their parameters, execution times, and results. This data helps you optimize performance and debug issues when they arise.
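A decorator is a lightweight way to get that logging on every tool without touching their bodies. This sketch uses the standard logging module; the fields in the log line are illustrative:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tools")

def logged_tool(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            # Runs on success and failure alike
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s args=%s kwargs=%s status=%s %.1fms",
                     fn.__name__, args, kwargs, status, elapsed_ms)
    return wrapper

@logged_tool
def get_weather(location: str) -> dict:
    return {"location": location}
```

Stack the decorator on every registered tool and you get a uniform audit trail of what the agent actually did.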
Testing: Create comprehensive test suites for your tools. Mock external APIs in tests to ensure consistent behavior across different environments.
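One pattern that makes mocking painless is injecting the HTTP client instead of importing it inside the tool; the test then substitutes a `MagicMock`. This refactor is a suggestion, not how the earlier `WeatherAgent` is written:

```python
from unittest.mock import MagicMock

def get_weather(location: str, http_get) -> dict:
    """`http_get` is injected; production passes requests.get, tests pass a mock."""
    response = http_get("https://api.openweathermap.org/data/2.5/weather",
                        params={"q": location})
    data = response.json()
    return {"location": data["name"], "temperature": data["main"]["temp"]}

# In a test: no network, fully deterministic
mock_get = MagicMock()
mock_get.return_value.json.return_value = {"name": "Lagos", "main": {"temp": 30.2}}
result = get_weather("Lagos", http_get=mock_get)
```

The same injection point lets you simulate timeouts and malformed payloads, which is exactly where agent tools tend to break.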
Documentation: Maintain clear documentation for each tool, including expected inputs, outputs, and error conditions. This helps both developers and the LLM understand how to use tools effectively.
Your AI agents become truly powerful when they can reliably interact with the world through well-designed tools. Start simple, test thoroughly, and gradually add complexity as your confidence grows.
Frequently Asked Questions
Q: How do I prevent my AI agent from calling tools with invalid parameters?
Use Pydantic models for strict type validation and add custom validation logic in your tool functions. Always validate parameters before execution and return descriptive error messages when validation fails.
Q: Can I limit which tools an AI agent can access based on user permissions?
Yes, implement a permission system that filters the available tools list based on user roles or authentication levels. Pass only authorized tools to the LLM's tools parameter in each request.
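A minimal sketch of that filtering, where the roles and the trimmed-down tool entries are illustrative:

```python
# Full tool definitions, abbreviated to the fields that matter here
ALL_TOOLS = [
    {"type": "function", "function": {"name": "get_weather"}},
    {"type": "function", "function": {"name": "read_sensor"}},
]

ROLE_TOOLS = {
    "guest": {"get_weather"},
    "admin": {"get_weather", "read_sensor"},
}

def tools_for(role: str) -> list:
    """Return only the tool definitions this role may use."""
    allowed = ROLE_TOOLS.get(role, set())
    return [t for t in ALL_TOOLS if t["function"]["name"] in allowed]
```

Pass `tools=tools_for(user.role)` in each completion request, and also re-check permissions at execution time in case the model hallucinates a tool name.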
Q: What's the best way to handle rate limits when using external APIs in tools?
Implement exponential backoff, queue tool calls, and cache results when possible. Consider using async execution for non-dependent tool calls and provide fallback responses when rate limits are hit.
Q: How can I make my tool use agents work offline or with limited connectivity?
Design tools with local fallbacks, cache frequently accessed data, and implement offline modes for critical functionality. Use local models when internet connectivity is unreliable for your use case.
Need a server? Get $200 free credits on DigitalOcean to deploy your AI apps.
Resources I Recommend
If you're serious about building production AI agents, these AI and LLM engineering books provide deep insights into advanced patterns and architectural decisions that'll save you months of trial and error.
You Might Also Like
- Complete RAG Tutorial Python: Build Your First Agent
- LlamaIndex Tutorial: Build AI Agents with RAG
- Build Chatbot with RAG: Beyond Basic Q&A in 2026
Tool use transforms static chatbots into dynamic agents that can actually solve problems. Start with simple functions, focus on error handling, and gradually build more sophisticated tool ecosystems. Your users will notice the difference between an agent that talks about actions and one that takes them.
📘 Go Deeper: Building AI Agents: A Practical Developer's Guide
185 pages covering autonomous systems, RAG, multi-agent workflows, and production deployment — with complete code examples.
Also check out: *AI-Powered iOS Apps: CoreML to Claude*
Enjoyed this article?
I write daily about iOS development, AI, and modern tech — practical tips you can use right away.
- Follow me on Dev.to for daily articles
- Follow me on Hashnode for in-depth tutorials
- Follow me on Medium for more stories
- Connect on Twitter/X for quick tips
If this helped you, drop a like and share it with a fellow developer!