An Azure AI agent without tools is just a chatbot. Function calling gives your agent the ability to invoke your code when it needs real data or actions. Here's how to define functions, handle tool calls, and wire up the infrastructure with Terraform.
In the previous post, we deployed an Azure AI agent that can reason and hold multi-turn conversations. But it can only generate text from what the model already knows. Ask it for a live exchange rate or your account balance, and it either hallucinates or admits it doesn't know.
Function calling changes that. You define functions with descriptions and parameters. When the agent determines it needs data or an action, it returns a structured function call request. Your code executes the function and sends the result back. The agent then uses that real data to generate its response. The model decides what to call; your code does the work.
How Function Calling Works
User: "What's the exchange rate from USD to EUR?"
        ↓
Agent reads function definitions → selects get_exchange_rate
        ↓
Agent returns: call get_exchange_rate(currency_from="USD", currency_to="EUR")
        ↓
Your code executes the function, calls external API
        ↓
Your code sends result back to the agent
        ↓
Agent: "The current rate is 1 USD = 0.92 EUR"
Unlike AWS, where a Lambda function executes the tool remotely, Azure function calling is a round trip: the agent proposes the call, your application code executes it locally, and you submit the output back. This gives you full control over execution, error handling, and security.
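Stripped of SDK details, that round trip is just JSON-in, string-out dispatch. A minimal sketch, with a hand-built `tool_call` dict standing in for what the service actually returns (the field names here are illustrative, not the SDK's):

```python
import json

# Hypothetical shape of the agent's tool call request.
tool_call = {
    "id": "call_001",
    "name": "get_exchange_rate",
    "arguments": '{"currency_from": "USD", "currency_to": "EUR"}',
}

def get_exchange_rate(currency_from: str, currency_to: str) -> str:
    # Stub: a real implementation would hit an FX API.
    return json.dumps({"rate": 0.92, "from": currency_from, "to": currency_to})

TOOLS = {"get_exchange_rate": get_exchange_rate}

# Your side of the round trip: parse arguments, execute, pair output with call id.
args = json.loads(tool_call["arguments"])
output = TOOLS[tool_call["name"]](**args)
result = {"tool_call_id": tool_call["id"], "output": output}
print(result["output"])
```

Everything the real run loop does later in this post is this pattern, wrapped in polling and SDK objects.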
Azure Tool Types
The Foundry Agent Service offers several tool categories:
| Tool Type | What It Does | Best For |
|---|---|---|
| Function calling | Your code executes locally | Custom logic, internal APIs |
| Azure Functions | Hosted serverless execution | Stateful, long-running tools |
| OpenAPI Spec tool | Auto-generated from spec | Existing documented APIs |
| Code Interpreter | Agent writes and runs code | Math, data analysis, charts |
| File Search | Searches uploaded documents | RAG over attached files |
| Bing Grounding | Real-time web search | Current events, live data |
This post focuses on function calling, the most flexible and common pattern.
Terraform: Provision the Infrastructure
The Foundry resource, project, and model deployment from Day 9 remain the same. Add a Key Vault for the API keys your tools need:
# agent/secrets.tf
resource "azurerm_key_vault" "agent" {
  name                      = "${var.environment}-agent-kv"
  location                  = azurerm_resource_group.this.location
  resource_group_name       = azurerm_resource_group.this.name
  tenant_id                 = data.azurerm_client_config.current.tenant_id
  sku_name                  = "standard"
  enable_rbac_authorization = true
  purge_protection_enabled  = false
}

resource "azurerm_key_vault_secret" "tool_api_keys" {
  for_each     = var.tool_api_keys
  name         = each.key
  value        = each.value
  key_vault_id = azurerm_key_vault.agent.id
}

# Grant your app access to read secrets
resource "azurerm_role_assignment" "kv_reader" {
  scope                = azurerm_key_vault.agent.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = data.azurerm_client_config.current.object_id
}
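On the application side, tools can read those keys at runtime with the `azure-keyvault-secrets` client. A small sketch; the vault and secret names are placeholders matching the Terraform above:

```python
def vault_url(vault_name: str) -> str:
    """Build the public vault URL from the name Terraform wrote into the config."""
    return f"https://{vault_name}.vault.azure.net"


def get_tool_api_key(vault_name: str, secret_name: str) -> str:
    """Read one of the secrets provisioned by azurerm_key_vault_secret."""
    # Imported lazily so this module loads even without the Azure SDK installed.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    # DefaultAzureCredential works locally (az login) and in Azure (managed identity).
    client = SecretClient(
        vault_url=vault_url(vault_name),
        credential=DefaultAzureCredential(),
    )
    return client.get_secret(secret_name).value


# Usage (requires the Key Vault Secrets User role granted above):
# api_key = get_tool_api_key("dev-agent-kv", "weather-api-key")
```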
Config Output for the Agent Code
# agent/config.tf
resource "local_file" "agent_config" {
  filename = "${path.module}/agent_code/agent_config.json"
  content = jsonencode({
    foundry_endpoint = azurerm_cognitive_account.ai_foundry.endpoint
    deployment_name  = azurerm_cognitive_deployment.agent_model.name
    agent_name       = "${var.environment}-${var.agent_name}"
    instruction      = var.agent_instruction
    key_vault_name   = azurerm_key_vault.agent.name
  })
}
Define Functions
Define your functions as plain Python, then describe them as tool definitions for the agent:
# agent_code/tools.py
import json
import urllib.request


def get_exchange_rate(currency_from: str, currency_to: str) -> str:
    """Get the current exchange rate between two currencies."""
    url = f"https://api.frankfurter.dev/v1/latest?base={currency_from}&symbols={currency_to}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.loads(resp.read())
    rate = data["rates"].get(currency_to, "N/A")
    return json.dumps({"rate": rate, "from": currency_from, "to": currency_to})


def get_weather(city: str, units: str = "celsius") -> str:
    """Get current weather for a city."""
    # In production, call a real weather API
    return json.dumps({"city": city, "temp": 22, "units": units, "condition": "sunny"})


# Map function names to implementations
TOOL_FUNCTIONS = {
    "get_exchange_rate": get_exchange_rate,
    "get_weather": get_weather,
}
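That dispatch map is what keeps execution generic: the run loop looks functions up by name and never needs to know what it is calling. A self-contained sanity check of the pattern, using a stub copy of the weather tool:

```python
import json

def get_weather(city: str, units: str = "celsius") -> str:
    """Stub copy of the weather tool from tools.py."""
    return json.dumps({"city": city, "temp": 22, "units": units, "condition": "sunny"})

TOOL_FUNCTIONS = {"get_weather": get_weather}

# Simulate what the run loop does: look up by name, unpack JSON arguments.
name, raw_args = "get_weather", '{"city": "Tokyo", "units": "fahrenheit"}'
output = TOOL_FUNCTIONS[name](**json.loads(raw_args))
print(output)
```

Because argument names in the schema match the Python parameter names, `**json.loads(...)` is all the glue required.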
Create Agent with Function Tools
# agent_code/create_agent.py
import json

from azure.ai.agents import AgentsClient
from azure.ai.agents.models import FunctionDefinition, FunctionToolDefinition
from azure.identity import DefaultAzureCredential

with open("agent_config.json") as f:
    config = json.load(f)

client = AgentsClient(
    endpoint=config["foundry_endpoint"],
    credential=DefaultAzureCredential(),
)

# Define function tool schemas
exchange_rate_tool = FunctionToolDefinition(
    function=FunctionDefinition(
        name="get_exchange_rate",
        description=(
            "Get the current exchange rate between two currencies. "
            "Use when the user asks about currency conversion."
        ),
        parameters={
            "type": "object",
            "properties": {
                "currency_from": {
                    "type": "string",
                    "description": "Source currency code (e.g., USD, EUR, GBP)",
                },
                "currency_to": {
                    "type": "string",
                    "description": "Target currency code (e.g., USD, EUR, GBP)",
                },
            },
            "required": ["currency_from", "currency_to"],
        },
    )
)

weather_tool = FunctionToolDefinition(
    function=FunctionDefinition(
        name="get_weather",
        description="Get current weather for a city. Use when the user asks about weather conditions.",
        parameters={
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name (e.g., Seattle, London, Tokyo)",
                },
                "units": {
                    "type": "string",
                    "description": "Temperature units: celsius or fahrenheit",
                    "default": "celsius",
                },
            },
            "required": ["city"],
        },
    )
)

# Create agent with tools
agent = client.create_agent(
    model=config["deployment_name"],
    name=config["agent_name"],
    instructions=config["instruction"],
    tools=[exchange_rate_tool, weather_tool],
)
print(f"Agent created: {agent.id}")
The description fields drive tool selection. The agent reads these to decide which function to call. Be specific about when each function should and should not be used.
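To make that concrete, compare a vague schema with a specific one for the same tool (illustrative dictionaries, not tied to any SDK type):

```python
# Two versions of the same tool schema. The model only sees this text,
# so the second version gives it far more to reason with.
vague = {
    "name": "get_rate",
    "description": "Gets a rate.",
    "parameters": {"type": "object", "properties": {"a": {"type": "string"}}},
}

specific = {
    "name": "get_exchange_rate",
    "description": (
        "Get the current exchange rate between two currencies. "
        "Use for currency conversion questions. Do NOT use for "
        "historical rates or cryptocurrency prices."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "currency_from": {"type": "string", "description": "ISO 4217 code, e.g. USD"},
            "currency_to": {"type": "string", "description": "ISO 4217 code, e.g. EUR"},
        },
        "required": ["currency_from", "currency_to"],
    },
}
```

With the vague schema, the model must guess what "a rate" means and what `a` should contain; with the specific one, both tool selection and argument filling are nearly deterministic.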
Handle the Function Call Loop
The key difference from simple chat: you need a loop to handle tool calls:
# agent_code/run_agent.py
import json
import time

from azure.ai.agents import AgentsClient
from azure.identity import DefaultAzureCredential

from tools import TOOL_FUNCTIONS

with open("agent_config.json") as f:
    config = json.load(f)

client = AgentsClient(
    endpoint=config["foundry_endpoint"],
    credential=DefaultAzureCredential(),
)

# Use the agent id printed by create_agent.py
agent_id = "<your-agent-id>"

# Create a thread and add a user message
thread = client.threads.create()
client.messages.create(
    thread_id=thread.id,
    role="user",
    content="What's the exchange rate from USD to JPY?",
)

# Create a run
run = client.runs.create(thread_id=thread.id, agent_id=agent_id)

# Poll for completion, handling tool calls
while run.status in ("queued", "in_progress", "requires_action"):
    if run.status == "requires_action":
        tool_outputs = []
        for tool_call in run.required_action.submit_tool_outputs.tool_calls:
            func_name = tool_call.function.name
            func_args = json.loads(tool_call.function.arguments)

            # Execute the function locally
            if func_name in TOOL_FUNCTIONS:
                result = TOOL_FUNCTIONS[func_name](**func_args)
            else:
                result = json.dumps({"error": f"Unknown function: {func_name}"})

            tool_outputs.append({
                "tool_call_id": tool_call.id,
                "output": result,
            })

        # Submit results back to the agent
        run = client.runs.submit_tool_outputs(
            thread_id=thread.id,
            run_id=run.id,
            tool_outputs=tool_outputs,
        )
    else:
        time.sleep(1)
        run = client.runs.get(thread_id=thread.id, run_id=run.id)

# Get the final response
messages = client.messages.list(thread_id=thread.id)
for msg in messages:
    if msg.role == "assistant":
        print(msg.content[0].text.value)
The requires_action status is the key. When the agent decides to call a function, the run pauses with this status. Your code executes the function, submits the output, and the run continues. The agent can chain multiple function calls in a single run.
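Because a single requires_action step can carry several calls, it helps to factor execution into a helper that always returns one output per call id. A sketch with plain dicts standing in for the SDK's tool-call objects:

```python
import json

def build_tool_outputs(tool_calls, registry):
    """Execute every proposed call; pair each output with its call id."""
    outputs = []
    for call in tool_calls:
        fn = registry.get(call["name"])
        if fn is None:
            result = json.dumps({"error": f"Unknown function: {call['name']}"})
        else:
            result = fn(**json.loads(call["arguments"]))
        outputs.append({"tool_call_id": call["id"], "output": result})
    return outputs

# Two calls in one requires_action step, e.g. the agent comparing two rates.
registry = {
    "get_exchange_rate": lambda currency_from, currency_to: json.dumps(
        {"rate": 0.92, "from": currency_from, "to": currency_to}
    )
}
calls = [
    {"id": "c1", "name": "get_exchange_rate",
     "arguments": '{"currency_from": "USD", "currency_to": "EUR"}'},
    {"id": "c2", "name": "get_exchange_rate",
     "arguments": '{"currency_from": "USD", "currency_to": "GBP"}'},
]
outputs = build_tool_outputs(calls, registry)
print(len(outputs))  # one entry per call id
```

If you drop an output, the run stalls in requires_action until it expires, so "one output per call id" is worth enforcing structurally rather than by convention.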
Gotchas and Tips
10-minute run expiration. Runs expire 10 minutes after creation. If your functions take longer, consider Azure Functions (hosted execution) instead of local function calling.
Return JSON strings. Function outputs must be strings. Always json.dumps() your return values, even for simple responses.
Multiple tool calls per run. The agent can request several function calls in a single requires_action step. Always iterate over all tool_calls and submit outputs for each.
Error handling. If a function fails, return an error message as the output rather than raising an exception. The agent can reason about the error and try a different approach.
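One way to enforce that rule everywhere is a small decorator that converts exceptions into JSON error payloads. A sketch (the decorator is my own, not part of any SDK):

```python
import json
import functools

def safe_tool(fn):
    """Wrap a tool so failures come back as JSON the agent can reason about."""
    @functools.wraps(fn)
    def wrapper(**kwargs):
        try:
            return fn(**kwargs)
        except Exception as exc:  # surface everything to the agent, never raise
            return json.dumps({"error": type(exc).__name__, "detail": str(exc)})
    return wrapper

@safe_tool
def get_exchange_rate(currency_from: str, currency_to: str) -> str:
    raise TimeoutError("FX API did not respond")  # simulate an outage

result = json.loads(get_exchange_rate(currency_from="USD", currency_to="EUR"))
print(result["error"])  # TimeoutError
```

Given that payload, the agent can apologize, retry with different arguments, or fall back to another tool instead of the whole run failing.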
OpenAPI Spec tool alternative. If you have existing APIs with OpenAPI documentation, use the OpenAPI Spec tool instead of manually defining function schemas. The agent auto-generates tool definitions from the spec, similar to ADK's OpenAPIToolset.
Azure Functions for stateful tools. For tools that need database connections, queue access, or long execution times, use Azure Functions as the execution backend. The agent calls the hosted function directly without your application code in the loop.
What's Next
This is Post 2 of the Azure AI Agents with Terraform series.
- Post 1: Deploy First Azure AI Agent
- Post 2: Function Calling - Connect to APIs (you are here)
- Post 3: Multi-Agent with AI Agent Service
- Post 4: Agent + Bing Grounding
Your agent can now take action. Define functions, describe their parameters, and the agent decides when to call them. The model reasons; your code executes. Real data, real actions, real results.
Found this helpful? Follow for the full AI Agents with Terraform series!