Power Your Azure AI Agents: Function Calling, Code Interpreter, and File Search
In Part 1, we built a customer support agent that could have conversations—but it couldn't actually do anything. It couldn't look up orders, analyze data, or search through documentation. It was all persona, no power.
Time to fix that.
In this article, we'll extend our agent with the three built-in tool types that transform it from a chatbot into a capable assistant:
- Function Calling — Connect to your own APIs and databases
- Code Interpreter — Execute Python for data analysis and calculations
- File Search — RAG-powered search across your documents
By the end, your agent will be able to look up real order data, analyze CSV files, and answer questions from your product documentation.
Understanding Tools in Azure AI Agent Service
Tools are what give agents their superpowers. When an agent encounters a request it can't handle with just language, it can invoke tools to take action in the real world.
The Tool Execution Flow
Agent Run Flow

1. User: "What's the status of order ORD-12345?"
        │
        ▼
2. Agent thinks: "I need to call get_order_status"
        │
        ▼
3. Run status: requires_action
   Required: { function: "get_order_status",
               arguments: { order_id: "ORD-12345" } }
        │
        ▼
4. YOUR CODE executes the function
   Result: { status: "shipped", tracking: "1Z999..." }
        │
        ▼
5. Submit tool output back to the run
        │
        ▼
6. Agent: "Your order ORD-12345 has shipped!
           Tracking number: 1Z999..."
The key insight: you execute the tools. The agent decides what to call and with what arguments, but your code does the actual work. This keeps you in control of security, data access, and business logic.
Function Calling: Connect to Your APIs
Function calling lets you define custom tools that the agent can invoke. This is how you connect agents to your databases, APIs, and business systems.
Defining a Function Tool
Let's create a function to look up order status:
using Azure.AI.Projects;
using System.Text.Json;
// Define the function schema
var getOrderStatusTool = new FunctionToolDefinition(
name: "get_order_status",
description: "Get the current status of a customer order including shipping information",
parameters: BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
order_id = new
{
type = "string",
description = "The order ID (e.g., ORD-12345)"
}
},
required = new[] { "order_id" }
})
);
The parameters field uses JSON Schema to describe what arguments the function accepts. Be specific in your descriptions—the agent uses them to understand when and how to call the function.
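To make the wire format concrete: once `BinaryData.FromObjectAsJson` serializes the anonymous object above, the model receives a standard JSON Schema along these lines:

```json
{
  "type": "object",
  "properties": {
    "order_id": {
      "type": "string",
      "description": "The order ID (e.g., ORD-12345)"
    }
  },
  "required": ["order_id"]
}
```

If you prefer, you can hand-write this JSON and pass it via `BinaryData.FromString` instead—the anonymous-object approach is just a convenient way to build the same schema.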
Creating an Agent with Functions
// Define multiple function tools
var tools = new List<ToolDefinition>
{
new FunctionToolDefinition(
name: "get_order_status",
description: "Get the current status of a customer order",
parameters: BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
order_id = new { type = "string", description = "The order ID" }
},
required = new[] { "order_id" }
})
),
new FunctionToolDefinition(
name: "get_product_info",
        description: "Get detailed information about a product",
parameters: BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
product_id = new { type = "string", description = "The product SKU" },
include_inventory = new { type = "boolean", description = "Whether to include stock levels" }
},
required = new[] { "product_id" }
})
),
new FunctionToolDefinition(
name: "create_return_request",
description: "Initiate a return request for an order",
parameters: BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
order_id = new { type = "string", description = "The order ID to return" },
reason = new { type = "string", description = "Reason for the return" },
items = new
{
type = "array",
items = new { type = "string" },
description = "List of item IDs to return"
}
},
required = new[] { "order_id", "reason" }
})
)
};
// Create agent with tools
var agent = await projectClient.CreateAgentAsync(new CreateAgentOptions(
model: "gpt-4o",
name: "CustomerSupportAgent",
instructions: """
You are a helpful customer support agent for Contoso Electronics.
You have access to the following capabilities:
- Look up order status and shipping information
- Get product details and inventory
- Create return requests for customers
Always confirm the order ID before looking it up.
For returns, explain the return policy (30 days, original packaging preferred).
""",
tools: tools
));
Handling Tool Calls
When the agent decides to use a tool, the run enters the requires_action state. Your code must:
- Extract the tool calls from the run
- Execute each function
- Submit the results back
// Start a run
var run = await projectClient.CreateRunAsync(thread.Id, agent.Id);
// Poll until we need to take action or it completes
while (run.Value.Status == RunStatus.Queued ||
run.Value.Status == RunStatus.InProgress ||
run.Value.Status == RunStatus.RequiresAction)
{
if (run.Value.Status == RunStatus.RequiresAction)
{
// Handle the tool calls
var toolOutputs = await HandleToolCallsAsync(run.Value);
// Submit the results
run = await projectClient.SubmitToolOutputsToRunAsync(
thread.Id,
run.Value.Id,
toolOutputs
);
}
else
{
await Task.Delay(500);
run = await projectClient.GetRunAsync(thread.Id, run.Value.Id);
}
}
// Helper method to execute tool calls
async Task<IEnumerable<ToolOutput>> HandleToolCallsAsync(ThreadRun run)
{
var toolCalls = run.RequiredAction?.SubmitToolOutputs?.ToolCalls
?? Enumerable.Empty<RequiredToolCall>();
var outputs = new List<ToolOutput>();
foreach (var toolCall in toolCalls)
{
if (toolCall is RequiredFunctionToolCall functionCall)
{
Console.WriteLine($"🔧 Executing: {functionCall.Name}");
Console.WriteLine($" Arguments: {functionCall.Arguments}");
var result = await ExecuteFunctionAsync(
functionCall.Name,
functionCall.Arguments
);
outputs.Add(new ToolOutput(functionCall.Id, result));
}
}
return outputs;
}
// Execute the actual function logic
async Task<string> ExecuteFunctionAsync(string functionName, string argumentsJson)
{
var args = JsonDocument.Parse(argumentsJson);
return functionName switch
{
"get_order_status" => await GetOrderStatusAsync(
args.RootElement.GetProperty("order_id").GetString()!
),
"get_product_info" => await GetProductInfoAsync(
args.RootElement.GetProperty("product_id").GetString()!,
args.RootElement.TryGetProperty("include_inventory", out var inv) && inv.GetBoolean()
),
"create_return_request" => await CreateReturnRequestAsync(
args.RootElement.GetProperty("order_id").GetString()!,
args.RootElement.GetProperty("reason").GetString()!,
args.RootElement.TryGetProperty("items", out var items)
? items.EnumerateArray().Select(i => i.GetString()!).ToList()
: null
),
_ => JsonSerializer.Serialize(new { error = $"Unknown function: {functionName}" })
};
}
Implementing the Function Logic
Here's how you might implement the actual business logic:
// Simulated order database
private static readonly Dictionary<string, Order> _orders = new()
{
["ORD-12345"] = new Order
{
Id = "ORD-12345",
Status = "shipped",
TrackingNumber = "1Z999AA10123456784",
EstimatedDelivery = DateTime.Now.AddDays(2),
Items = new[] { "Wireless Mouse", "USB-C Hub" }
},
["ORD-67890"] = new Order
{
Id = "ORD-67890",
Status = "processing",
TrackingNumber = null,
EstimatedDelivery = DateTime.Now.AddDays(5),
Items = new[] { "Mechanical Keyboard" }
}
};
async Task<string> GetOrderStatusAsync(string orderId)
{
// In production, this would hit your actual database/API
await Task.Delay(100); // Simulate API call
if (_orders.TryGetValue(orderId, out var order))
{
return JsonSerializer.Serialize(new
{
order_id = order.Id,
status = order.Status,
tracking_number = order.TrackingNumber,
estimated_delivery = order.EstimatedDelivery?.ToString("yyyy-MM-dd"),
items = order.Items,
message = order.Status switch
{
"shipped" => "Your order is on the way!",
"processing" => "Your order is being prepared.",
"delivered" => "Your order has been delivered.",
_ => "Status information available."
}
});
}
return JsonSerializer.Serialize(new
{
error = "Order not found",
order_id = orderId,
suggestion = "Please verify the order ID and try again."
});
}
async Task<string> CreateReturnRequestAsync(string orderId, string reason, List<string>? items)
{
await Task.Delay(100);
if (!_orders.ContainsKey(orderId))
{
return JsonSerializer.Serialize(new { error = "Order not found" });
}
// Create return in your system
    var returnId = $"RET-{Guid.NewGuid().ToString("N")[..8].ToUpper()}";
return JsonSerializer.Serialize(new
{
success = true,
return_id = returnId,
order_id = orderId,
reason = reason,
status = "initiated",
next_steps = new[]
{
"You'll receive a prepaid shipping label via email within 24 hours",
"Pack items in original packaging if possible",
"Drop off at any UPS location",
"Refund processed within 5-7 business days of receipt"
}
});
}
record Order
{
public string Id { get; init; } = "";
public string Status { get; init; } = "";
public string? TrackingNumber { get; init; }
public DateTime? EstimatedDelivery { get; init; }
public string[] Items { get; init; } = Array.Empty<string>();
}
Streaming with Tool Calls
For real-time UX, you'll want to handle tool calls during streaming:
var toolOutputs = new List<ToolOutput>();
RequiredAction? pendingAction = null;
string? runId = null;
await foreach (var update in projectClient.CreateRunStreamingAsync(thread.Id, agent.Id))
{
    switch (update)
    {
        case RunUpdate runUpdate when runUpdate.Value.Status == RunStatus.RequiresAction:
            // Capture the run ID so we can submit tool outputs to it below
            runId = runUpdate.Value.Id;
            pendingAction = runUpdate.Value.RequiredAction;
            break;
case MessageContentUpdate contentUpdate:
Console.Write(contentUpdate.Text);
break;
}
}
// If we have pending tool calls, handle them
if (pendingAction?.SubmitToolOutputs?.ToolCalls is { } toolCalls)
{
foreach (var call in toolCalls.OfType<RequiredFunctionToolCall>())
{
var result = await ExecuteFunctionAsync(call.Name, call.Arguments);
toolOutputs.Add(new ToolOutput(call.Id, result));
}
// Submit and continue streaming
await foreach (var update in projectClient.SubmitToolOutputsToRunStreamingAsync(
thread.Id, runId, toolOutputs))
{
if (update is MessageContentUpdate content)
{
Console.Write(content.Text);
}
}
}
Code Interpreter: Python-Powered Analysis
Code Interpreter is a sandboxed Python environment that agents can use for:
- Data analysis and visualization
- Mathematical calculations
- File format conversions
- Generating charts and reports
Enabling Code Interpreter
var agent = await projectClient.CreateAgentAsync(new CreateAgentOptions(
model: "gpt-4o",
name: "DataAnalysisAgent",
instructions: """
You are a data analysis assistant. You can:
- Analyze CSV and Excel files
- Create visualizations and charts
- Perform statistical analysis
- Generate reports
When given data, provide clear insights and visualizations.
Always explain your methodology.
""",
tools: new List<ToolDefinition>
{
new CodeInterpreterToolDefinition()
}
));
Uploading Files for Analysis
To analyze files, you need to upload them and attach them to the thread or message:
// Upload a file
using var fileStream = File.OpenRead("sales_data.csv");
var uploadedFile = await projectClient.UploadFileAsync(
fileStream,
FilePurpose.Agents,
"sales_data.csv"
);
Console.WriteLine($"📁 Uploaded: {uploadedFile.Value.Filename} (ID: {uploadedFile.Value.Id})");
// Create a message with the file attached
await projectClient.CreateMessageAsync(
thread.Id,
MessageRole.User,
"Analyze this sales data. Show me the top products and monthly trends.",
new MessageCreationOptions
{
Attachments = new List<MessageAttachment>
{
new MessageAttachment(uploadedFile.Value.Id, new List<ToolDefinition>
{
new CodeInterpreterToolDefinition()
})
}
}
);
Handling Code Interpreter Output
Code Interpreter can generate both text and images. Here's how to handle them:
var messages = await projectClient.GetMessagesAsync(thread.Id);
foreach (var message in messages.Value.Data.Where(m => m.Role == MessageRole.Assistant))
{
foreach (var content in message.ContentItems)
{
switch (content)
{
case MessageTextContent textContent:
Console.WriteLine(textContent.Text);
break;
case MessageImageFileContent imageContent:
// Download the generated image
var imageFile = await projectClient.GetFileContentAsync(imageContent.FileId);
var imagePath = $"output_{imageContent.FileId}.png";
using (var fileStream = File.Create(imagePath))
{
await imageFile.Value.CopyToAsync(fileStream);
}
Console.WriteLine($"📊 Chart saved: {imagePath}");
break;
}
}
}
Example: Complete Analysis Workflow
using Azure.AI.Projects;
using Azure.Identity;
var projectClient = new AIProjectClient(
new Uri(Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_PROJECT_ENDPOINT")!),
new DefaultAzureCredential()
);
// Create a data analyst agent
var agent = await projectClient.CreateAgentAsync(new CreateAgentOptions(
model: "gpt-4o",
name: "SalesAnalyst",
instructions: """
You are a sales data analyst. When analyzing data:
1. First, explore the data structure and quality
2. Identify key metrics and trends
3. Create clear visualizations
4. Provide actionable insights
Use charts to make data easy to understand.
""",
tools: new[] { new CodeInterpreterToolDefinition() }
));
var thread = await projectClient.CreateThreadAsync();
// Upload sales data
var csvContent = """
date,product,category,revenue,units
2024-01-01,Widget A,Electronics,1500,30
2024-01-01,Widget B,Electronics,2000,40
2024-01-02,Gadget X,Accessories,800,20
2024-01-02,Widget A,Electronics,1200,24
...more data...
""";
var tempFile = Path.GetTempFileName() + ".csv";
await File.WriteAllTextAsync(tempFile, csvContent);
using var fs = File.OpenRead(tempFile);
var file = await projectClient.UploadFileAsync(fs, FilePurpose.Agents, "sales.csv");
// Ask for analysis
await projectClient.CreateMessageAsync(
thread.Value.Id,
MessageRole.User,
"Analyze this sales data. What are the top products? Show me a chart of daily revenue.",
new MessageCreationOptions
{
Attachments = new[]
{
new MessageAttachment(file.Value.Id, new[] { new CodeInterpreterToolDefinition() })
}
}
);
// Run and get results
var run = await projectClient.CreateRunAsync(thread.Value.Id, agent.Value.Id);
while (run.Value.Status == RunStatus.Queued || run.Value.Status == RunStatus.InProgress)
{
await Task.Delay(1000);
run = await projectClient.GetRunAsync(thread.Value.Id, run.Value.Id);
Console.WriteLine($"Status: {run.Value.Status}");
}
// Display results
var messages = await projectClient.GetMessagesAsync(thread.Value.Id);
var assistantMessages = messages.Value.Data
.Where(m => m.Role == MessageRole.Assistant)
.OrderBy(m => m.CreatedAt);
foreach (var message in assistantMessages)
{
foreach (var content in message.ContentItems)
{
if (content is MessageTextContent text)
{
Console.WriteLine(text.Text);
}
else if (content is MessageImageFileContent image)
{
var imageData = await projectClient.GetFileContentAsync(image.FileId);
var path = $"chart_{DateTime.Now:yyyyMMddHHmmss}.png";
using var outFile = File.Create(path);
await imageData.Value.CopyToAsync(outFile);
Console.WriteLine($"📈 Saved chart: {path}");
}
}
}
// Cleanup
await projectClient.DeleteAgentAsync(agent.Value.Id);
File.Delete(tempFile);
File Search: RAG Without the Infrastructure
File Search provides built-in retrieval-augmented generation (RAG). You upload documents, Azure creates vector embeddings automatically, and the agent can search through them to answer questions.
This is huge—you get RAG without managing vector databases, embedding models, or chunking strategies.
Creating a Vector Store
// Upload documents
var files = new List<string>();
foreach (var docPath in Directory.GetFiles("./product-docs", "*.md"))
{
using var stream = File.OpenRead(docPath);
var uploaded = await projectClient.UploadFileAsync(
stream,
FilePurpose.Agents,
Path.GetFileName(docPath)
);
files.Add(uploaded.Value.Id);
Console.WriteLine($"📄 Uploaded: {Path.GetFileName(docPath)}");
}
// Create a vector store from the files
var vectorStore = await projectClient.CreateVectorStoreAsync(
new VectorStoreCreationOptions
{
Name = "product-documentation",
FileIds = files
}
);
Console.WriteLine($"🗂️ Vector store created: {vectorStore.Value.Id}");
Console.WriteLine($" Files: {vectorStore.Value.FileCounts.Total}");
Console.WriteLine($" Status: {vectorStore.Value.Status}");
// Wait for processing to complete
while (vectorStore.Value.Status == VectorStoreStatus.InProgress)
{
await Task.Delay(1000);
vectorStore = await projectClient.GetVectorStoreAsync(vectorStore.Value.Id);
}
if (vectorStore.Value.Status == VectorStoreStatus.Completed)
{
Console.WriteLine("✅ Vector store ready!");
}
Creating an Agent with File Search
// Create agent with file search enabled
var agent = await projectClient.CreateAgentAsync(new CreateAgentOptions(
model: "gpt-4o",
name: "ProductExpert",
instructions: """
You are a product documentation expert for Contoso Electronics.
Use the file search tool to find accurate information from our docs.
Always cite which document your information comes from.
If you can't find an answer in the docs, say so clearly.
Never make up product specifications—only use documented information.
""",
tools: new[] { new FileSearchToolDefinition() },
toolResources: new ToolResources
{
FileSearch = new FileSearchToolResource
{
VectorStoreIds = new[] { vectorStore.Value.Id }
}
}
));
How File Search Works
When the agent uses file search:
- Your query is converted to a vector embedding
- Azure searches the vector store for similar chunks
- Relevant chunks are returned to the agent's context
- The agent synthesizes an answer from the chunks
// Create thread and ask a question
var thread = await projectClient.CreateThreadAsync();
await projectClient.CreateMessageAsync(
thread.Value.Id,
MessageRole.User,
"What is the warranty policy for the Pro Wireless Keyboard? How do I make a claim?"
);
var run = await projectClient.CreateRunAsync(thread.Value.Id, agent.Value.Id);
// File search runs automatically—no RequiresAction handling needed
while (run.Value.Status == RunStatus.Queued || run.Value.Status == RunStatus.InProgress)
{
await Task.Delay(500);
run = await projectClient.GetRunAsync(thread.Value.Id, run.Value.Id);
}
// Get response with citations
var messages = await projectClient.GetMessagesAsync(thread.Value.Id);
var response = messages.Value.Data
.Where(m => m.Role == MessageRole.Assistant)
.OrderByDescending(m => m.CreatedAt)
.First();
foreach (var content in response.ContentItems)
{
if (content is MessageTextContent textContent)
{
Console.WriteLine(textContent.Text);
// Show citations if available
foreach (var annotation in textContent.Annotations ?? Enumerable.Empty<MessageTextAnnotation>())
{
if (annotation is MessageTextFileCitationAnnotation citation)
{
Console.WriteLine($"\n📎 Source: {citation.FileId}");
Console.WriteLine($" Quote: \"{citation.Text}\"");
}
}
}
}
Attaching Files to Threads
You can also attach files directly to a thread for temporary context:
// Upload a specific document
using var policyDoc = File.OpenRead("return_policy_2024.pdf");
var policyFile = await projectClient.UploadFileAsync(
policyDoc,
FilePurpose.Agents,
"return_policy.pdf"
);
// Create a vector store for this thread
var threadVectorStore = await projectClient.CreateVectorStoreAsync(
new VectorStoreCreationOptions
{
Name = "thread-specific-docs",
FileIds = new[] { policyFile.Value.Id }
}
);
// Create thread with the vector store attached
var thread = await projectClient.CreateThreadAsync(new ThreadCreationOptions
{
ToolResources = new ToolResources
{
FileSearch = new FileSearchToolResource
{
VectorStoreIds = new[] { threadVectorStore.Value.Id }
}
}
});
Combining Multiple Tools
Real agents often need multiple tools working together. Here's a complete example that combines all three:
using Azure;
using Azure.AI.Projects;
using Azure.Identity;
using System.Text.Json;
var projectClient = new AIProjectClient(
new Uri(Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_PROJECT_ENDPOINT")!),
new DefaultAzureCredential()
);
// Upload documentation for file search
var docFiles = new List<string>();
foreach (var doc in Directory.GetFiles("./docs", "*.md"))
{
using var stream = File.OpenRead(doc);
var uploaded = await projectClient.UploadFileAsync(stream, FilePurpose.Agents, Path.GetFileName(doc));
docFiles.Add(uploaded.Value.Id);
}
var vectorStore = await projectClient.CreateVectorStoreAsync(new VectorStoreCreationOptions
{
Name = "support-docs",
FileIds = docFiles
});
// Wait for vector store to be ready
while ((await projectClient.GetVectorStoreAsync(vectorStore.Value.Id)).Value.Status == VectorStoreStatus.InProgress)
{
await Task.Delay(1000);
}
// Define function tools
var functionTools = new List<ToolDefinition>
{
new FunctionToolDefinition(
"get_order_status",
"Look up the status of a customer order",
BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new { order_id = new { type = "string" } },
required = new[] { "order_id" }
})
),
new FunctionToolDefinition(
"create_support_ticket",
"Create a support ticket for issues that need human follow-up",
BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
subject = new { type = "string" },
description = new { type = "string" },
priority = new { type = "string", @enum = new[] { "low", "medium", "high" } }
},
required = new[] { "subject", "description" }
})
)
};
// Create the ultimate support agent
var agent = await projectClient.CreateAgentAsync(new CreateAgentOptions(
model: "gpt-4o",
name: "UltimateSupportAgent",
instructions: """
You are an expert customer support agent for Contoso Electronics.
Your capabilities:
1. **Order Lookup**: Use get_order_status to check order information
2. **Documentation**: Use file_search to find product info, policies, and procedures
3. **Data Analysis**: Use code_interpreter for calculations or data analysis
4. **Escalation**: Use create_support_ticket for issues needing human attention
Approach:
- First, understand the customer's need
- Use the appropriate tool(s) to gather information
- Provide clear, helpful responses
- Escalate if you can't fully resolve the issue
Always be friendly, professional, and thorough.
""",
tools: new List<ToolDefinition>(functionTools)
{
new CodeInterpreterToolDefinition(),
new FileSearchToolDefinition()
},
toolResources: new ToolResources
{
FileSearch = new FileSearchToolResource
{
VectorStoreIds = new[] { vectorStore.Value.Id }
}
}
));
Console.WriteLine($"🤖 Created agent with {agent.Value.Tools.Count} tools");
// Main conversation loop
var thread = await projectClient.CreateThreadAsync();
while (true)
{
Console.Write("\n👤 You: ");
var input = Console.ReadLine();
if (string.IsNullOrWhiteSpace(input) || input.ToLower() == "quit") break;
await projectClient.CreateMessageAsync(thread.Value.Id, MessageRole.User, input);
var run = await projectClient.CreateRunAsync(thread.Value.Id, agent.Value.Id);
while (run.Value.Status != RunStatus.Completed &&
run.Value.Status != RunStatus.Failed &&
run.Value.Status != RunStatus.Cancelled)
{
if (run.Value.Status == RunStatus.RequiresAction)
{
var outputs = new List<ToolOutput>();
foreach (var call in run.Value.RequiredAction!.SubmitToolOutputs.ToolCalls
.OfType<RequiredFunctionToolCall>())
{
Console.WriteLine($" 🔧 {call.Name}");
var result = call.Name switch
{
"get_order_status" => GetOrderStatus(call.Arguments),
"create_support_ticket" => CreateSupportTicket(call.Arguments),
_ => JsonSerializer.Serialize(new { error = "Unknown function" })
};
outputs.Add(new ToolOutput(call.Id, result));
}
run = await projectClient.SubmitToolOutputsToRunAsync(
thread.Value.Id, run.Value.Id, outputs
);
}
else
{
await Task.Delay(500);
run = await projectClient.GetRunAsync(thread.Value.Id, run.Value.Id);
}
}
// Get and display the response
var messages = await projectClient.GetMessagesAsync(thread.Value.Id);
var lastAssistant = messages.Value.Data
.Where(m => m.Role == MessageRole.Assistant)
.OrderByDescending(m => m.CreatedAt)
.FirstOrDefault();
if (lastAssistant != null)
{
Console.WriteLine();
foreach (var content in lastAssistant.ContentItems.OfType<MessageTextContent>())
{
Console.WriteLine($"🤖 Agent: {content.Text}");
}
}
}
// Cleanup
await projectClient.DeleteAgentAsync(agent.Value.Id);
await projectClient.DeleteVectorStoreAsync(vectorStore.Value.Id);
// Function implementations
string GetOrderStatus(string argsJson)
{
var args = JsonDocument.Parse(argsJson);
var orderId = args.RootElement.GetProperty("order_id").GetString();
return JsonSerializer.Serialize(new
{
order_id = orderId,
status = "shipped",
tracking = "1Z999AA10123456784",
carrier = "UPS",
estimated_delivery = "2024-01-15"
});
}
string CreateSupportTicket(string argsJson)
{
var args = JsonDocument.Parse(argsJson);
var ticketId = $"TKT-{Random.Shared.Next(10000, 99999)}";
return JsonSerializer.Serialize(new
{
success = true,
ticket_id = ticketId,
message = $"Support ticket {ticketId} created. A team member will respond within 24 hours."
});
}
Best Practices for Tool Design
1. Write Clear Descriptions
The model uses your descriptions to decide when to call functions:
// ❌ Bad: Vague description
new FunctionToolDefinition(
"get_info",
"Gets information", // Too vague!
...
);
// ✅ Good: Specific and actionable
new FunctionToolDefinition(
"get_order_status",
"Retrieves the current status, tracking number, and estimated delivery date for a customer order. Use this when a customer asks about their order status, shipping, or delivery.",
...
);
2. Return Structured Data
Return JSON with context the model can use:
// ❌ Bad: Just the status
return "shipped";
// ✅ Good: Rich context
return JsonSerializer.Serialize(new
{
status = "shipped",
tracking_number = "1Z999...",
carrier = "UPS",
shipped_date = "2024-01-10",
estimated_delivery = "2024-01-15",
delivery_status = "In Transit - On Schedule",
last_location = "Chicago, IL",
suggested_response = "Your order shipped on Jan 10 and is on its way!"
});
3. Handle Errors Gracefully
try
{
var order = await _orderService.GetOrderAsync(orderId);
return JsonSerializer.Serialize(new { success = true, data = order });
}
catch (OrderNotFoundException)
{
return JsonSerializer.Serialize(new
{
success = false,
error = "Order not found",
order_id = orderId,
suggestion = "Please verify the order ID. It should be in format ORD-XXXXX."
});
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to get order {OrderId}", orderId);
return JsonSerializer.Serialize(new
{
success = false,
error = "Unable to retrieve order information at this time",
retry = true
});
}
4. Limit Tool Count
Too many tools can confuse the model. Group related functionality:
// ❌ Bad: Separate tools for each action
new FunctionToolDefinition("get_order", ...),
new FunctionToolDefinition("update_order", ...),
new FunctionToolDefinition("cancel_order", ...),
new FunctionToolDefinition("get_order_items", ...),
new FunctionToolDefinition("get_order_tracking", ...),
// 20 more order tools...
// ✅ Good: Consolidated tool with action parameter
new FunctionToolDefinition(
"order_management",
"Manage customer orders: get status, update, cancel, or retrieve details",
BinaryData.FromObjectAsJson(new
{
type = "object",
properties = new
{
action = new {
type = "string",
@enum = new[] { "get_status", "cancel", "update_address", "get_tracking" }
},
order_id = new { type = "string" },
update_data = new { type = "object" }
},
required = new[] { "action", "order_id" }
})
)
What's Next
We now have a powerful single agent with real capabilities. But what happens when tasks get complex enough that one agent can't handle everything? That's where multi-agent orchestration comes in.
In Part 3, we'll use Semantic Kernel to coordinate multiple specialized agents:
- A Research Agent that gathers information
- A Writer Agent that creates content
- An Editor Agent that reviews and improves
You'll learn how agents can collaborate, hand off tasks, and produce results that no single agent could achieve alone.
Summary
In this article, we extended our agent with powerful tools:
- Function Calling: Connect to your APIs and databases with custom functions
- Code Interpreter: Let the agent write and execute Python for analysis
- File Search: Add RAG capabilities without managing vector infrastructure
- Combined Tools: Build agents that can use multiple tools together
Your agent is no longer just a chatbot—it's a capable assistant that can take real action.
Next up: Part 3 — Multi-Agent Systems: Orchestrating Azure AI Agents with Semantic Kernel