
Brian Spann
Azure AI Agent Service: Your First Production-Ready AI Agent in C#

Building AI agents from scratch is hard. You need to manage conversation state, handle tool calling, implement streaming, deal with rate limits, and somehow make it all production-ready. What if Microsoft had already solved most of these problems for you?

That's exactly what Azure AI Agent Service offers—a managed platform for building, deploying, and running AI agents in production. In this series, we'll take you from your first agent to a fully orchestrated multi-agent system deployed on Azure AI Foundry.

What You'll Learn in This Series

  1. Part 1 (this article): Introduction to Azure AI Agent Service and your first agent
  2. Part 2: Extending agents with tools, function calling, and file search
  3. Part 3: Multi-agent orchestration with Semantic Kernel
  4. Part 4: Production patterns—state management, sessions, and observability
  5. Part 5: Deploying multi-agent systems to Azure AI Foundry

Let's dive in.


Why Azure AI Agent Service?

If you've built AI applications before, you've probably used the Azure OpenAI SDK or similar libraries to make direct completion calls. That works great for simple Q&A scenarios, but agents are different. They need:

  • Persistent conversation state across multiple turns
  • Tool execution with the ability to call external functions
  • Streaming responses for better user experience
  • Automatic context management to handle long conversations
  • Built-in capabilities like code interpretation and file search

Building all of this yourself is a significant undertaking. Azure AI Agent Service provides these capabilities out of the box, letting you focus on what makes your agent unique.

Azure AI Agent Service vs. Building from Scratch

| Concern | From Scratch | Azure AI Agent Service |
| --- | --- | --- |
| Conversation state | You build it | Managed threads |
| Tool calling loop | You implement it | Automatic with runs |
| Code execution | You sandbox it | Code Interpreter built-in |
| File search/RAG | You build vectors | Vector stores included |
| Streaming | You handle chunks | Native support |
| Scaling | Your infrastructure | Azure-managed |

The Core Architecture

Before writing code, let's understand the key concepts:

Agents

An Agent is your AI entity with a specific persona and capabilities. It has:

  • A model (like gpt-4o) that powers its reasoning
  • Instructions that define its behavior and personality
  • Tools it can use (functions, code interpreter, file search)

Agents are persistent resources—you create them once and reuse them across many conversations.

Threads

A Thread represents a conversation. It's the container for all messages between users and agents. Threads:

  • Persist automatically (no database needed on your side)
  • Support multiple messages from both users and assistants
  • Can be shared across agent invocations

Messages

Messages are the individual turns in a conversation. Each message has:

  • A role (user, assistant, or tool)
  • Content (text, images, or file references)
  • Metadata for your application needs

Runs

A Run is a single execution of an agent against a thread. When you create a run:

  1. The agent reads all messages in the thread
  2. It generates a response (potentially calling tools)
  3. The response is automatically added to the thread

Runs can be synchronous, asynchronous, or streamed.
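In code, what mostly matters about a run's status is whether it's terminal (stop polling) or still moving. A tiny sketch of that distinction — the status names mirror the service's run model, but treat the exact set as an assumption rather than the SDK's own type:

```csharp
// Run statuses you typically see while polling; "terminal" means stop polling.
// Hypothetical enum for illustration — the SDK exposes its own RunStatus type.
public enum AgentRunStatus { Queued, InProgress, RequiresAction, Completed, Failed, Cancelled, Expired }

public static class RunStatusExtensions
{
    public static bool IsTerminal(this AgentRunStatus s) =>
        s is AgentRunStatus.Completed or AgentRunStatus.Failed
          or AgentRunStatus.Cancelled or AgentRunStatus.Expired;
}
```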

┌─────────────────────────────────────────────────────────────┐
│                     Azure AI Foundry Project                │
│  ┌─────────────┐                                            │
│  │   Agent     │ ──── model: gpt-4o                         │
│  │             │ ──── instructions: "You are..."            │
│  │             │ ──── tools: [function, code_interpreter]   │
│  └─────────────┘                                            │
│         │                                                   │
│         ▼                                                   │
│  ┌─────────────┐     ┌──────────────────────┐               │
│  │   Thread    │────▶│     Messages         │               │
│  └─────────────┘     │ • User: "Help me..." │               │
│         │            │ • Assistant: "Sure!" │               │
│         ▼            │ • User: "Also..."    │               │
│  ┌─────────────┐     └──────────────────────┘               │
│  │    Run      │                                            │
│  │(in_progress)│                                            │
│  └─────────────┘                                            │
└─────────────────────────────────────────────────────────────┘

Setting Up Your Environment

Prerequisites

  1. Azure subscription with access to Azure AI services
  2. .NET 8 SDK or later
  3. Azure CLI installed and logged in
  4. An Azure AI Foundry project (we'll create this)

Creating an Azure AI Foundry Project

Azure AI Foundry is the umbrella platform for Azure AI services. Your agents live inside a Foundry project.

# Login to Azure
az login

# Create a resource group
az group create --name rg-ai-agents --location eastus2

# Create an Azure AI Foundry hub
az ml workspace create \
  --name hub-ai-agents \
  --resource-group rg-ai-agents \
  --kind hub

# Create a project within the hub
az ml workspace create \
  --name project-customer-support \
  --resource-group rg-ai-agents \
  --kind project \
  --hub-id /subscriptions/{sub-id}/resourceGroups/rg-ai-agents/providers/Microsoft.MachineLearningServices/workspaces/hub-ai-agents

Once created, get your project endpoint from the Azure Portal (it looks like https://{region}.api.azureml.ms/agents/v1.0/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.MachineLearningServices/workspaces/{project}).

Installing the SDK

Create a new .NET project and add the Azure.AI.Projects package:

dotnet new console -n AgentDemo
cd AgentDemo
dotnet add package Azure.AI.Projects --prerelease
dotnet add package Azure.Identity

Note: As of early 2025, the Azure.AI.Projects package may still be in preview. Check NuGet for the latest stable version.


Your First Agent: A Customer Support Bot

Let's build a customer support agent that can answer questions about orders. We'll start simple and add capabilities throughout this series.

Step 1: Initialize the Project Client

The AIProjectClient is your gateway to Azure AI Agent Service:

using Azure;
using Azure.AI.Projects;
using Azure.Identity;

// Store your endpoint in environment variables (never hardcode!)
var endpoint = Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("Set AZURE_AI_FOUNDRY_PROJECT_ENDPOINT");

// Use DefaultAzureCredential for seamless local dev + production auth
var credential = new DefaultAzureCredential();

var projectClient = new AIProjectClient(new Uri(endpoint), credential);

Console.WriteLine("✅ Connected to Azure AI Foundry project");

DefaultAzureCredential automatically handles authentication:

  • Locally: Uses your Azure CLI login, Visual Studio credentials, or environment variables
  • In Azure: Uses Managed Identity (no secrets to manage!)

Step 2: Create an Agent

Now let's create our customer support agent:

// Define the agent's persona and capabilities
var agentOptions = new CreateAgentOptions(
    model: "gpt-4o",  // Or your deployed model name
    name: "CustomerSupportAgent",
    instructions: """
        You are a helpful customer support agent for Contoso Electronics.

        Your responsibilities:
        - Answer questions about orders, shipping, and returns
        - Provide product information and recommendations
        - Escalate complex issues to human support when needed

        Guidelines:
        - Be friendly, professional, and concise
        - If you don't know something, say so honestly
        - Always confirm order numbers before providing order-specific info
        - For refund requests over $500, recommend speaking to a supervisor

        Important: Never share internal policies or speculate about inventory.
        """
);

var agent = await projectClient.CreateAgentAsync(agentOptions);

Console.WriteLine($"✅ Created agent: {agent.Value.Name} (ID: {agent.Value.Id})");

A Note on Instructions: Your instructions are crucial. They define:

  • Persona: Who is this agent?
  • Scope: What can/can't it help with?
  • Guidelines: How should it behave?
  • Guardrails: What are the limits?

Spend time crafting good instructions—they're the foundation of agent quality.
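One way to keep those four concerns easy to review and edit independently is to compose the instructions string from labeled sections. A minimal sketch — `InstructionsBuilder` is a hypothetical helper, pure string handling, not an SDK type:

```csharp
using System.Text;

// Composes agent instructions from labeled sections (Persona, Scope,
// Guidelines, Guardrails) so each concern can be edited on its own.
public static class InstructionsBuilder
{
    public static string Build(params (string Heading, string[] Items)[] sections)
    {
        var sb = new StringBuilder();
        foreach (var (heading, items) in sections)
        {
            sb.AppendLine($"{heading}:");
            foreach (var item in items) sb.AppendLine($"- {item}");
            sb.AppendLine();
        }
        return sb.ToString().TrimEnd();
    }
}
```

The result is just a string, so it drops straight into the `instructions:` parameter above.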

Step 3: Create a Thread

Threads hold the conversation history. Create one per user session:

var thread = await projectClient.CreateThreadAsync();

Console.WriteLine($"✅ Created thread: {thread.Value.Id}");

In a real application, you'd store the thread ID and reuse it when the same user returns. We'll cover session management patterns in Part 4.
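A minimal in-memory version of that session mapping looks like this — `ThreadSessionStore` is a hypothetical helper, not an SDK type, and in production you'd back it with a database:

```csharp
using System.Collections.Concurrent;

// Maps a user/session key to its Azure AI Agent Service thread ID.
// Hypothetical helper — the SDK itself does not provide this mapping.
public class ThreadSessionStore
{
    private readonly ConcurrentDictionary<string, string> _threadsByUser = new();

    // Returns the existing thread ID for a user, or stores a newly created one.
    // Note: two concurrent first calls may each create a thread; GetOrAdd keeps
    // exactly one, and the loser's thread would need cleanup.
    public async Task<string> GetOrCreateThreadIdAsync(
        string userId, Func<Task<string>> createThreadAsync)
    {
        if (_threadsByUser.TryGetValue(userId, out var existing))
            return existing;

        var threadId = await createThreadAsync();
        return _threadsByUser.GetOrAdd(userId, threadId);
    }
}
```

You'd pass a delegate that wraps the real call, e.g. `() => projectClient.CreateThreadAsync()` unwrapped to its ID.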

Step 4: Add a User Message

Let's ask our agent a question:

var userMessage = "Hi! I placed an order last week but haven't received any shipping updates. My order number is ORD-12345. Can you help?";

await projectClient.CreateMessageAsync(
    thread.Value.Id,
    MessageRole.User,
    userMessage
);

Console.WriteLine($"📤 User: {userMessage}");

Step 5: Run the Agent

Now we execute the agent against the thread:

// Create a run - this triggers the agent to process all messages and respond
var run = await projectClient.CreateRunAsync(thread.Value.Id, agent.Value.Id);

Console.WriteLine($"🔄 Run started: {run.Value.Id} (Status: {run.Value.Status})");

// Poll until the run completes (bounded, so a stuck run can't loop forever)
var deadline = DateTime.UtcNow.AddMinutes(2);
while ((run.Value.Status == RunStatus.Queued || run.Value.Status == RunStatus.InProgress)
       && DateTime.UtcNow < deadline)
{
    await Task.Delay(500);  // Don't hammer the API
    run = await projectClient.GetRunAsync(thread.Value.Id, run.Value.Id);
    Console.WriteLine($"   Status: {run.Value.Status}");
}

if (run.Value.Status == RunStatus.Completed)
{
    Console.WriteLine("✅ Run completed!");
}
else
{
    Console.WriteLine($"❌ Run failed with status: {run.Value.Status}");
    Console.WriteLine($"   Error: {run.Value.LastError?.Message}");
}
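A fixed 500 ms delay is fine for a demo, but under load an exponential backoff is gentler on the API and on your rate limits. A small helper for computing the delay — `PollingBackoff` is hypothetical, not part of the SDK:

```csharp
// Computes a capped exponential backoff delay for the Nth polling attempt:
// 500 ms, 1 s, 2 s, 4 s, then capped at 5 s. Hypothetical helper.
public static class PollingBackoff
{
    public static TimeSpan DelayFor(int attempt, int baseMs = 500, int maxMs = 5000)
    {
        if (attempt < 0) throw new ArgumentOutOfRangeException(nameof(attempt));
        // Clamp the exponent so the shift can't overflow for large attempt counts.
        var exp = Math.Min(attempt, 20);
        var ms = Math.Min((long)baseMs << exp, maxMs);
        return TimeSpan.FromMilliseconds(ms);
    }
}
```

Inside the polling loop you'd then write `await Task.Delay(PollingBackoff.DelayFor(attempt++));` instead of the fixed delay.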

Step 6: Get the Response

Retrieve the messages to see what the agent said:

var messages = await projectClient.GetMessagesAsync(thread.Value.Id);

// Messages are returned in reverse chronological order
foreach (var message in messages.Value.Data.OrderBy(m => m.CreatedAt))
{
    var role = message.Role == MessageRole.User ? "👤 User" : "🤖 Agent";

    foreach (var content in message.ContentItems)
    {
        if (content is MessageTextContent textContent)
        {
            Console.WriteLine($"{role}: {textContent.Text}");
        }
    }
}

Putting It All Together

Here's the complete first agent implementation:

using Azure;
using Azure.AI.Projects;
using Azure.Identity;

// Configuration
var endpoint = Environment.GetEnvironmentVariable("AZURE_AI_FOUNDRY_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("Set AZURE_AI_FOUNDRY_PROJECT_ENDPOINT");

var projectClient = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential());

try
{
    // Create the agent
    var agent = await projectClient.CreateAgentAsync(new CreateAgentOptions(
        model: "gpt-4o",
        name: "CustomerSupportAgent",
        instructions: """
            You are a helpful customer support agent for Contoso Electronics.
            Be friendly, professional, and concise.
            If you don't have order lookup capabilities yet, let the user know
            and offer to help with general questions.
            """
    ));
    Console.WriteLine($"✅ Agent created: {agent.Value.Id}");

    // Create a conversation thread
    var thread = await projectClient.CreateThreadAsync();
    Console.WriteLine($"✅ Thread created: {thread.Value.Id}");

    // Add user message
    var userInput = "Hi! I need help with my order ORD-12345";
    await projectClient.CreateMessageAsync(thread.Value.Id, MessageRole.User, userInput);
    Console.WriteLine($"📤 User: {userInput}");

    // Run the agent
    var run = await projectClient.CreateRunAsync(thread.Value.Id, agent.Value.Id);

    while (run.Value.Status == RunStatus.Queued || run.Value.Status == RunStatus.InProgress)
    {
        await Task.Delay(500);
        run = await projectClient.GetRunAsync(thread.Value.Id, run.Value.Id);
    }

    // Get and display response
    if (run.Value.Status == RunStatus.Completed)
    {
        var messages = await projectClient.GetMessagesAsync(thread.Value.Id);
        var assistantMessage = messages.Value.Data
            .Where(m => m.Role == MessageRole.Assistant)
            .OrderByDescending(m => m.CreatedAt)
            .First();

        foreach (var content in assistantMessage.ContentItems)
        {
            if (content is MessageTextContent textContent)
            {
                Console.WriteLine($"🤖 Agent: {textContent.Text}");
            }
        }
    }

    // Cleanup: delete the agent when done (optional for demos)
    await projectClient.DeleteAgentAsync(agent.Value.Id);
    Console.WriteLine("🧹 Agent deleted");
}
catch (RequestFailedException ex)
{
    Console.WriteLine($"Azure error: {ex.Message}");
    Console.WriteLine($"Error code: {ex.ErrorCode}");
}

Streaming Responses

Polling works, but for a better user experience, you'll want streaming. This shows words as they're generated:

// Create a streaming run
await foreach (var update in projectClient.CreateRunStreamingAsync(
    thread.Value.Id, 
    agent.Value.Id))
{
    switch (update)
    {
        case RunUpdate runUpdate:
            Console.WriteLine($"Run status: {runUpdate.Value.Status}");
            break;

        case MessageContentUpdate contentUpdate:
            // This fires for each chunk of text
            Console.Write(contentUpdate.Text);
            break;

        case RunStepUpdate stepUpdate:
            Console.WriteLine($"\nStep: {stepUpdate.Value.Type}");
            break;
    }
}
Console.WriteLine();  // Final newline

Streaming provides a much more responsive feel—users see the agent "thinking" in real-time rather than waiting for a complete response.
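In practice you usually also want the complete response text after streaming finishes — to persist it to the thread store, log it, or feed it to a next step. A small accumulator that forwards chunks to the UI while collecting them, written as pure C# independent of the SDK's update types (a hypothetical helper):

```csharp
using System.Text;

// Collects streamed text chunks while optionally forwarding each one to a sink
// (e.g., Console.Write). Hypothetical helper, not an SDK type.
public class StreamedMessageBuffer
{
    private readonly StringBuilder _text = new();
    private readonly Action<string>? _onChunk;

    public StreamedMessageBuffer(Action<string>? onChunk = null) => _onChunk = onChunk;

    public void Append(string chunk)
    {
        _text.Append(chunk);          // keep the full text for later
        _onChunk?.Invoke(chunk);      // forward to the UI as it arrives
    }

    public string FullText => _text.ToString();
}
```

In the streaming loop above, the `MessageContentUpdate` case would call `buffer.Append(contentUpdate.Text)` instead of writing to the console directly.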


Agent Lifecycle Management

In production, you need to think about agent lifecycle:

Creating Agents

  • Startup creation: Create agents when your service starts, reuse across requests
  • On-demand creation: Create specialized agents per use case
  • Hybrid: Create base agents at startup, clone and customize for specific needs

public class AgentFactory
{
    private readonly AIProjectClient _client;
    private Agent? _supportAgent;

    public AgentFactory(AIProjectClient client) => _client = client;

    public async Task<Agent> GetSupportAgentAsync()
    {
        // Lazily create the agent on first use, then reuse it for every request
        if (_supportAgent == null)
        {
            var response = await _client.CreateAgentAsync(new CreateAgentOptions(
                model: "gpt-4o",
                name: $"SupportAgent-{Guid.NewGuid():N}",
                instructions: "...support agent instructions..."
            ));
            _supportAgent = response.Value;
        }
        return _supportAgent;
    }
}
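One caveat with a lazy factory like this: if two requests hit it concurrently before the agent is cached, both will create an agent. A `SemaphoreSlim` guard with a double-check avoids that. Here's the pattern as a generic sketch — pure C#, so it runs without Azure; in the factory, `T` would be `Agent` and the delegate would wrap `CreateAgentAsync`:

```csharp
// Lazily creates a value exactly once, even under concurrent first calls.
// Generic sketch of the guarded-lazy-init pattern.
public class AsyncLazyCache<T>
{
    private readonly SemaphoreSlim _gate = new(1, 1);
    private readonly Func<Task<T>> _factory;
    private T? _value;
    private bool _created;

    public AsyncLazyCache(Func<Task<T>> factory) => _factory = factory;

    public async Task<T> GetAsync()
    {
        if (_created) return _value!;   // fast path once the value exists
        await _gate.WaitAsync();
        try
        {
            if (!_created)               // double-check inside the lock
            {
                _value = await _factory();
                _created = true;
            }
            return _value!;
        }
        finally
        {
            _gate.Release();
        }
    }
}
```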

Cleaning Up

Agents persist until deleted. In development, always clean up:

// Delete a specific agent
await projectClient.DeleteAgentAsync(agentId);

// List and delete all your agents (careful in production!)
var agents = await projectClient.GetAgentsAsync();
foreach (var agent in agents.Value.Data)
{
    Console.WriteLine($"Deleting: {agent.Name}");
    await projectClient.DeleteAgentAsync(agent.Id);
}

Thread Management

Threads also persist. For long-running applications:

// Delete threads when conversations end
await projectClient.DeleteThreadAsync(threadId);

// Or set up automatic cleanup for old threads
// (You'll need to track thread ages yourself)
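Since the service won't expire threads for you, one pattern is to record when each thread was created and sweep periodically. A minimal tracker — a hypothetical helper; in the sweep you'd call `DeleteThreadAsync` for each returned ID:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Tracks when each thread was created so stale ones can be found and deleted.
// Hypothetical helper — pair it with projectClient.DeleteThreadAsync.
public class ThreadAgeTracker
{
    private readonly ConcurrentDictionary<string, DateTimeOffset> _createdAt = new();

    public void Register(string threadId, DateTimeOffset? createdAt = null)
        => _createdAt[threadId] = createdAt ?? DateTimeOffset.UtcNow;

    // Returns (and forgets) thread IDs older than maxAge, ready to be deleted.
    public IReadOnlyList<string> TakeStale(TimeSpan maxAge, DateTimeOffset? now = null)
    {
        var cutoff = (now ?? DateTimeOffset.UtcNow) - maxAge;
        var stale = _createdAt.Where(kv => kv.Value < cutoff)
                              .Select(kv => kv.Key)
                              .ToList();
        foreach (var id in stale) _createdAt.TryRemove(id, out _);
        return stale;
    }
}
```

Note this only survives process restarts if you persist the timestamps; a database column alongside your stored thread IDs does the same job more durably.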

When to Use Azure AI Agent Service

Azure AI Agent Service shines when you need:

  • Multi-turn conversations with automatic state management
  • Tool execution (function calling, code interpreter, file search)
  • Quick prototyping of agent experiences
  • Azure-native deployment with managed scaling
  • Enterprise features (authentication, audit logs, compliance)

When to Consider Alternatives

  • Simple completion requests — use the Azure OpenAI SDK directly
  • Non-Azure infrastructure — consider OpenAI's Assistants API or build custom
  • Maximum control needed — build on Microsoft Agent Framework or Semantic Kernel directly
  • Latency-critical applications — direct model calls have less overhead


What's Next

We've built a basic agent, but it can't actually look up orders—it just pretends! In Part 2, we'll fix that by adding:

  • Function calling to connect to your order database
  • Code Interpreter for data analysis tasks
  • File Search to answer questions from documentation

You'll learn how to give your agent real capabilities that make it genuinely useful.


Summary

In this article, we:

  • Explored why Azure AI Agent Service exists and when to use it
  • Understood the core architecture: Agents, Threads, Messages, and Runs
  • Set up an Azure AI Foundry project and installed the SDK
  • Built a complete customer support agent from scratch
  • Implemented both polling and streaming response patterns
  • Discussed lifecycle management for production scenarios

The foundation is laid. Your first agent is running. Now let's make it powerful.



Next up: Part 2 — Power Your Azure AI Agents: Function Calling, Code Interpreter, and File Search
