Introduction
If you've been building AI applications in .NET, 2025 was a whirlwind year. We had Semantic Kernel reaching maturity, AutoGen capturing attention with its multi-agent patterns, and then... the announcement that changed everything: Microsoft Agent Framework—the unification of both projects into a single, cohesive SDK.
This isn't just a rebrand. It's a fundamental rethinking of how we build AI agents in .NET, combining the production-grade foundations of Semantic Kernel with the research-driven orchestration patterns of AutoGen.
In this first article of our 4-part series, we'll explore:
- Why Microsoft unified these projects
- What Agent Framework inherits from each predecessor
- How to get started with your first agent
- Migration paths if you're coming from SK or AutoGen
The Evolution: A Brief History
Semantic Kernel (2023-2025)
Semantic Kernel emerged as Microsoft's answer to LangChain—a strongly-typed, enterprise-ready framework for building AI applications. It brought us:
- Plugins and native functions: Type-safe tool calling
- Planners: AI-driven task decomposition
- Memory connectors: Vector store integrations
- Filters and middleware: Production observability
- Multi-model support: OpenAI, Azure OpenAI, local models
SK found its niche in enterprise applications where C# developers needed to integrate LLMs into existing .NET ecosystems. It was stable, well-documented, and played nicely with dependency injection.
AutoGen (2023-2025)
While Semantic Kernel focused on single-agent patterns, AutoGen (from Microsoft Research) explored a different frontier: multi-agent orchestration. Its innovations included:
- Conversational agents: Agents that talk to each other
- GroupChat patterns: Multiple agents collaborating on tasks
- Human-in-the-loop: Seamless integration of human oversight
- Code execution: Agents that could write and run code
AutoGen was experimental, research-driven, and pushed boundaries. But it wasn't always production-ready, and its Python-first approach left .NET developers wanting.
The Unification (October 2025)
At .NET Conf 2025, Microsoft announced what many had hoped for: a unified agent framework that combines the best of both worlds. The key insight? These weren't competing projects—they were complementary pieces of a larger puzzle.
What Agent Framework Brings to the Table
From Semantic Kernel:
| Feature | What It Means for You |
|---|---|
| Type-safe tools | Define functions with attributes, get automatic schema generation |
| State management | Session persistence, conversation history, context windows |
| Filters & middleware | Logging, caching, content moderation at the pipeline level |
| Telemetry integration | OpenTelemetry support out of the box |
| Model abstraction | Swap providers without changing agent code |
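To make the "filters & middleware" and "model abstraction" rows concrete, here is a minimal sketch of the decorator pattern that underlies both. The `IChat`, `EchoChat`, and `LoggingChat` types are illustrative stand-ins, not framework types; the real pipeline composes Microsoft.Extensions.AI's `IChatClient` in the same shape.

```csharp
using System;
using System.Collections.Generic;

// Compose the pipeline: the logging layer wraps the inner client.
var logs = new List<string>();
IChat chat = new LoggingChat(new EchoChat(), logs.Add);

Console.WriteLine(chat.Complete("hi")); // echo: hi
Console.WriteLine(logs.Count);          // 2 (request + response logged)

// Illustrative stand-in for the chat abstraction.
public interface IChat
{
    string Complete(string prompt);
}

// Stand-in "model": any provider could sit here, which is the
// point of the abstraction.
public sealed class EchoChat : IChat
{
    public string Complete(string prompt) => $"echo: {prompt}";
}

// One middleware layer adds one concern (here, logging) without
// the calling code knowing it is present. Caching, moderation,
// and telemetry layer on the same way.
public sealed class LoggingChat : IChat
{
    private readonly IChat _inner;
    private readonly Action<string> _log;

    public LoggingChat(IChat inner, Action<string> log)
    {
        _inner = inner;
        _log = log;
    }

    public string Complete(string prompt)
    {
        _log($"-> {prompt}");
        var result = _inner.Complete(prompt);
        _log($"<- {result}");
        return result;
    }
}
```

Because the agent only sees the outermost `IChat`, you can swap the provider or reorder the middleware without touching agent code.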
From AutoGen:
| Feature | What It Means for You |
|---|---|
| Multi-agent patterns | Pre-built orchestration: round-robin, selector, broadcast |
| Workflows | Explicit, durable task orchestration |
| Agent-to-Agent (A2A) | Standardized protocol for agent communication |
| Magentic One | Research-proven patterns for complex tasks |
| Research foundation | Patterns validated through academic research |
New in Agent Framework:
- MCP Integration: First-class support for Model Context Protocol
- Workflow Engine: Visual, durable, checkpointed workflows
- Azure AI Foundry: Seamless cloud deployment
- Microsoft.Extensions.AI: Built on the new AI abstractions
Getting Started
Installation
# Create a new project
dotnet new console -n MyFirstAgent
cd MyFirstAgent
# Add the Agent Framework package
dotnet add package Microsoft.Agents.AI --prerelease
# Add Azure OpenAI support
dotnet add package Microsoft.Extensions.AI.AzureAIInference
Your First Agent
Let's build a simple assistant agent:
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using Azure.Identity;
using Azure.AI.Inference;
// Create the chat client
IChatClient chatClient = new ChatCompletionsClient(
new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
new DefaultAzureCredential()
).AsChatClient("gpt-4o");
// Create an agent
var agent = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Assistant",
Instructions = """
You are a helpful AI assistant. You provide clear,
accurate answers and ask clarifying questions when needed.
"""
});
// Invoke the agent
var response = await agent.InvokeAsync(
"What are the key differences between async and parallel programming in C#?");
Console.WriteLine(response.Content);
This looks deceptively simple, but there's a lot happening under the hood:
- The agent manages conversation context
- Token counting and context window management are handled automatically
- The response includes usage metrics for cost tracking
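A toy model of that bookkeeping shows the shape of what happens on each turn. `TinyAgent` below is a hypothetical, framework-free stand-in: it appends each turn to the history and keeps a running token estimate. The real framework uses a proper tokenizer, not the rough 4-characters-per-token heuristic used here.

```csharp
using System;
using System.Collections.Generic;

// A fake "model" so the sketch runs without any service.
var agent = new TinyAgent(prompt => $"You asked: {prompt}");
agent.Invoke("What is async?");
agent.Invoke("And parallel?");

Console.WriteLine(agent.History.Count);       // 4 (two user + two assistant turns)
Console.WriteLine(agent.EstimatedTokens > 0); // True

// Hypothetical stand-in illustrating the bookkeeping an agent
// does for you: history management plus usage estimation.
public sealed class TinyAgent
{
    private readonly Func<string, string> _model;
    public List<(string Role, string Content)> History { get; } = new();
    public int EstimatedTokens { get; private set; }

    public TinyAgent(Func<string, string> model) => _model = model;

    public string Invoke(string userMessage)
    {
        Add("user", userMessage);
        var reply = _model(userMessage);
        Add("assistant", reply);
        return reply;
    }

    private void Add(string role, string content)
    {
        History.Add((role, content));
        EstimatedTokens += content.Length / 4; // naive chars-per-token estimate
    }
}
```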
Adding Tools
Agents become powerful when they can take actions. Let's add some tools:
using System.ComponentModel;
public class CalculatorTools
{
[AgentTool]
[Description("Adds two numbers together")]
public double Add(
[Description("The first number")] double a,
[Description("The second number")] double b)
=> a + b;
[AgentTool]
[Description("Multiplies two numbers")]
public double Multiply(
[Description("The first number")] double a,
[Description("The second number")] double b)
=> a * b;
[AgentTool]
[Description("Calculates the square root of a number")]
public double SquareRoot(
[Description("The number to calculate the square root of")] double n)
{
if (n < 0)
throw new ArgumentException("Cannot calculate square root of negative number");
return Math.Sqrt(n);
}
}
Now register the tools with the agent:
var agent = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "MathAssistant",
Instructions = "You are a math assistant. Use the provided tools to perform calculations."
});
// Add tools from a class
agent.AddTools<CalculatorTools>();
// Or add individual functions
agent.AddTool(
"get_current_time",
"Gets the current UTC time",
() => DateTime.UtcNow.ToString("O"));
// Now the agent can use these tools
var response = await agent.InvokeAsync(
"What is the square root of 144, multiplied by 3?");
Console.WriteLine(response.Content);
// Output: The square root of 144 is 12, multiplied by 3 equals 36.
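The "automatic schema generation" behind those tools is worth demystifying. Below is a plausible sketch of the mechanism using only reflection over the same `[Description]` attributes; the actual schema the framework sends to the model will differ in shape, but the raw material comes from exactly this metadata.

```csharp
using System;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

// Describe the Add method the way a framework might when building
// the tool manifest it sends to the model.
var method = typeof(Calc).GetMethod("Add")!;
var desc = method.GetCustomAttribute<DescriptionAttribute>()?.Description;
var parameters = method.GetParameters()
    .Select(p => $"{p.Name}: {p.ParameterType.Name} " +
                 $"({p.GetCustomAttribute<DescriptionAttribute>()?.Description})");

Console.WriteLine(desc);                          // Adds two numbers together
Console.WriteLine(string.Join(", ", parameters)); // a: Double (The first number), b: Double (The second number)

public class Calc
{
    [Description("Adds two numbers together")]
    public double Add(
        [Description("The first number")] double a,
        [Description("The second number")] double b) => a + b;
}
```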
Async Tools and Dependency Injection
Real-world tools often need to call external services:
public class WeatherTools
{
private readonly IWeatherService _weatherService;
private readonly ILogger<WeatherTools> _logger;
public WeatherTools(
IWeatherService weatherService,
ILogger<WeatherTools> logger)
{
_weatherService = weatherService;
_logger = logger;
}
[AgentTool]
[Description("Gets the current weather for a city")]
public async Task<WeatherInfo> GetCurrentWeather(
[Description("The city name, e.g., 'Seattle' or 'London, UK'")]
string city,
CancellationToken cancellationToken = default)
{
_logger.LogInformation("Fetching weather for {City}", city);
return await _weatherService.GetCurrentAsync(city, cancellationToken);
}
[AgentTool]
[Description("Gets the weather forecast for the next N days")]
public async Task<IReadOnlyList<WeatherForecast>> GetForecast(
[Description("The city name")] string city,
[Description("Number of days to forecast (1-7)")] int days = 3,
CancellationToken cancellationToken = default)
{
days = Math.Clamp(days, 1, 7);
return await _weatherService.GetForecastAsync(city, days, cancellationToken);
}
}
// With dependency injection
var services = new ServiceCollection()
.AddSingleton<IWeatherService, OpenMeteoWeatherService>()
.AddLogging()
.BuildServiceProvider();
var weatherTools = ActivatorUtilities.CreateInstance<WeatherTools>(services);
agent.AddTools(weatherTools);
Conversation Management
Agents maintain conversation history automatically, but you have full control:
// Create a conversation
var conversation = new AgentConversation();
// Add messages
conversation.AddUserMessage("My name is Alex.");
var response1 = await agent.InvokeAsync(conversation);
conversation.AddAssistantMessage(response1);
conversation.AddUserMessage("What's my name?");
var response2 = await agent.InvokeAsync(conversation);
// The agent remembers: "Your name is Alex."
// Inspect the conversation
foreach (var message in conversation.Messages)
{
Console.WriteLine($"[{message.Role}]: {message.Content}");
}
// Serialize for persistence
var json = JsonSerializer.Serialize(conversation);
// Clear or trim as needed
conversation.TrimToTokenLimit(maxTokens: 4000);
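The trimming step can be approximated in a few lines. This is a hypothetical stand-in for `TrimToTokenLimit`, assuming a rough 4-characters-per-token estimate; the real implementation presumably uses an actual tokenizer and preserves system messages.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var messages = new List<string>
{
    new string('a', 4000),   // old, large message
    "short follow-up",
    "another short message"
};

var trimmed = Trim(messages, maxTokens: 100);
Console.WriteLine(trimmed.Count); // 2 (the oldest message no longer fits)

// Walk backwards from the newest message, keeping turns until the
// naive token estimate exceeds the budget.
static List<string> Trim(List<string> messages, int maxTokens)
{
    var kept = new List<string>();
    var tokens = 0;
    foreach (var m in Enumerable.Reverse(messages))
    {
        tokens += m.Length / 4; // naive chars-per-token estimate
        if (tokens > maxTokens) break;
        kept.Insert(0, m);
    }
    return kept;
}
```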
System Messages and Context
var agent = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "CustomerSupport",
Instructions = """
You are a customer support agent for TechCorp.
Company policies:
- Refunds are available within 30 days
- Premium members get priority support
- Never share internal pricing formulas
Always be polite and professional.
""",
// Additional context injection
ContextProvider = async (conversation, ct) =>
{
var userId = conversation.Metadata.GetValueOrDefault("userId");
if (userId is string id)
{
var user = await userService.GetAsync(id, ct);
return $"Current user: {user.Name}, Tier: {user.Tier}, Member since: {user.CreatedAt:yyyy}";
}
return null;
}
});
Migration from Semantic Kernel
If you're coming from Semantic Kernel, the migration is straightforward. Most patterns transfer directly:
Before (Semantic Kernel)
// Semantic Kernel approach
var kernel = Kernel.CreateBuilder()
.AddAzureOpenAIChatCompletion(
deploymentName: "gpt-4o",
endpoint: config["AzureOpenAI:Endpoint"]!,
credentials: new DefaultAzureCredential())
.Build();
kernel.Plugins.AddFromType<MyPlugin>();
var agent = new ChatCompletionAgent
{
Kernel = kernel,
Name = "Assistant",
Instructions = "You are helpful."
};
var history = new ChatHistory();
history.AddUserMessage("Hello!");
await foreach (var content in agent.InvokeStreamingAsync(history))
{
Console.Write(content.Content);
}
After (Agent Framework)
// Agent Framework approach
IChatClient chatClient = new ChatCompletionsClient(
new Uri(config["AzureOpenAI:Endpoint"]!),
new DefaultAzureCredential()
).AsChatClient("gpt-4o");
var agent = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Assistant",
Instructions = "You are helpful."
});
agent.AddTools<MyPlugin>(); // Same plugin, renamed attribute
var conversation = new AgentConversation();
conversation.AddUserMessage("Hello!");
await foreach (var chunk in agent.InvokeStreamingAsync(conversation))
{
Console.Write(chunk.Content);
}
Key Differences
| Semantic Kernel | Agent Framework | Notes |
|---|---|---|
| `Kernel` | `IChatClient` | Uses M.E.AI abstractions |
| `KernelPlugin` | `AgentTool` | Attribute renamed |
| `ChatHistory` | `AgentConversation` | Similar API, more features |
| `ChatCompletionAgent` | `ChatClientAgent` | Simplified construction |
| `kernel.InvokeAsync<T>` | `agent.InvokeAsync` | More consistent API |
What Still Works
- Your existing plugins work with minimal changes (rename `KernelFunction` to `AgentTool`)
- Dependency injection patterns are identical
- Memory connectors are available through adapters
- OpenTelemetry integration is enhanced, not replaced
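If it helps to see the rename in one place, the snippet below marks a plugin method the Agent Framework way and discovers it by attribute. `AgentToolAttribute` is declared locally here as a stand-in so the sample compiles without the package; in a real project you would use the attribute shipped with the framework.

```csharp
using System;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

// Discover tool methods the way a framework does: by attribute.
var tools = typeof(TextPlugin).GetMethods()
    .Where(m => m.GetCustomAttribute<AgentToolAttribute>() != null)
    .Select(m => m.Name)
    .ToArray();
Console.WriteLine(string.Join(",", tools)); // Reverse

// Local stand-in so this sample compiles without the package.
[AttributeUsage(AttributeTargets.Method)]
public sealed class AgentToolAttribute : Attribute { }

public class TextPlugin
{
    // Before (Semantic Kernel): [KernelFunction]
    // After (Agent Framework):
    [AgentTool]
    [Description("Reverses the input string")]
    public string Reverse(string input) =>
        new string(input.Reverse().ToArray());
}
```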
Migration from AutoGen
AutoGen developers will find familiar multi-agent patterns:
Before (AutoGen)
# AutoGen (Python)
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy")
user_proxy.initiate_chat(assistant, message="Hello!")
After (Agent Framework)
// Agent Framework (C#)
var assistant = new ChatClientAgent(chatClient, new ChatClientAgentOptions
{
Name = "Assistant",
Instructions = "You are a helpful assistant."
});
var userProxy = new UserProxyAgent(new UserProxyAgentOptions
{
Name = "User",
HumanInputMode = HumanInputMode.Always
});
var chat = new RoundRobinGroupChat(new[] { userProxy, assistant });
await chat.RunAsync("Hello!");
AutoGen's orchestration patterns are now first-class citizens in Agent Framework, with added type safety and production-ready durability.
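The round-robin pattern itself is simple enough to sketch in plain C#, with delegates standing in for agents. This is illustrative only; the framework's `RoundRobinGroupChat` adds termination conditions, streaming, and durability on top of the same scheduling idea.

```csharp
using System;
using System.Collections.Generic;

// Two "agents" as delegates; real agents carry model calls,
// but the scheduling logic has the same shape.
var transcript = new List<string>();
var agents = new (string Name, Func<string, string> Reply)[]
{
    ("Planner", last => "plan for: " + last),
    ("Coder",   last => "code for: " + last),
};

// Round-robin: each agent takes a turn responding to the
// previous message, cycling through the roster.
var message = "Hello!";
for (var turn = 0; turn < 4; turn++)
{
    var (name, reply) = agents[turn % agents.Length];
    message = reply(message);
    transcript.Add($"{name}: {message}");
}

Console.WriteLine(transcript.Count); // 4
Console.WriteLine(transcript[0]);    // Planner: plan for: Hello!
```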
What's Next in This Series
In the upcoming articles, we'll dive deeper:
Part 2: Workflows — Learn how to orchestrate complex multi-step tasks with explicit workflows, conditional branching, and checkpointing.
Part 3: MCP Integration — Build and consume Model Context Protocol tools for cross-platform interoperability.
Part 4: Production Deployment — Deploy to Azure AI Foundry with observability, scaling, and security best practices.
Resources
- Microsoft Agent Framework Documentation
- GitHub Repository
- Microsoft.Extensions.AI
- Migration Guide from Semantic Kernel
Conclusion
Microsoft Agent Framework represents a maturation of .NET's AI capabilities. By unifying Semantic Kernel's production-ready foundations with AutoGen's research-driven patterns, we finally have a comprehensive toolkit for building everything from simple chatbots to complex multi-agent systems.
The migration path is smooth, the patterns are familiar, and the future is unified. Whether you're starting fresh or migrating existing projects, now is the perfect time to embrace Agent Framework.
In Part 2, we'll explore Workflows—the powerful orchestration engine that enables complex, durable, multi-step agent tasks.
Top comments (2)
The unification of SK + AutoGen makes a lot of sense — the split always felt artificial when you actually needed both single-agent tool calling AND multi-agent orchestration in the same project.
One thing I'm curious about for Part 2: how does the workflow engine handle cross-framework agent communication? The A2A protocol mention is interesting because in practice, most production agent deployments I've seen end up needing agents built on different stacks to talk to each other. Your .NET agent might need to coordinate with a Python-based CrewAI crew or an OpenAI Swarm.
The broader challenge right now is that the agent ecosystem is fragmenting fast — not just frameworks, but the platforms agents deploy onto. There are 100+ dedicated agent platforms now (social networks, marketplaces, governance systems) each with their own API patterns. We've been tracking this in an open-source list: github.com/profullstack/awesome-agent-platforms
Would be great to see how Agent Framework's MCP integration plays with that cross-platform reality in Part 3.
Great points, and you're hitting on exactly what Part 2 covers.
Cross-framework communication: Agent Framework uses the A2A (Agent-to-Agent) protocol as the interop layer. It's HTTP/JSON-based, so your .NET agent doesn't care if the other side is CrewAI, Swarm, or a shell script — it just speaks the protocol. Think of it like how REST doesn't care what language your server is written in.
The key abstractions:
• AgentCard — discovery/capability advertisement
• Task — the unit of work passed between agents
• Artifact — outputs agents produce for each other
So a .NET Agent Framework agent can hand off to a Python CrewAI crew, get results back, and continue orchestration — as long as both speak A2A.
MCP integration: MCP is the "tool calling" layer (agent ↔ tools), while A2A is the "agent coordination" layer (agent ↔ agent). They're complementary. Your agent uses MCP to call a database or API, and A2A to delegate to another agent that might be running anywhere.
Platform fragmentation: You're right that this is accelerating. The awesome-agent-platforms list is useful — I'll check it out. Microsoft's bet seems to be that A2A + MCP become the common protocols that abstract over platform differences, similar to how HTTP abstracted over network differences.
Part 3 will dig into the MCP registry and how agents discover each other's capabilities across deployments.