Hamed Hajiloo
MCP for .NET: AI Agents That Can Actually Do Things

Crysta is an AI app where you can build your own AI agents for real business use.

What makes it powerful is this: you are not limited to the default behavior of a chatbot.
You can connect different MCP servers and give your agent practical capabilities like scheduling, reservations, and data access.

In short, Crysta helps you build an AI agent that does more than talk: it actually gets work done.

What Users Can Connect

With MCP, an agent can connect to tools such as:

  • Google Calendar workflows
  • Reservation and scheduling systems
  • Business data tools exposed through MCP
  • Knowledge tools like DeepWiki (through MCP)

So instead of only answering a request like "book a meeting," the agent can actually do it through a connected tool.

Why This Matters for .NET Developers

MCP gives .NET developers a straightforward way to add real capabilities to an AI agent without building every integration yourself. Instead of wiring each API directly into your app, you connect MCP‑compatible tools and let the agent use them through the same function‑calling flow.

This leads to two big benefits:

  • Faster development: new features come from adding MCP tools, not writing new integrations.
  • Cleaner architecture: your agent grows by adding tools, not by adding more prompt logic or custom glue code.

How We Built It (High Level)

We kept the flow simple:

  1. User adds an MCP server link to an agent.
  2. System validates the link and reads tool metadata.
  3. Connected MCP tools become part of that agent’s available actions.
  4. During chat, the model can call those tools when needed.
  5. We track failures and connection health so users can fix broken links quickly.
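Step 5 is the piece the later code samples don't show, so here's a minimal sketch of how connection-health tracking can look. The `McpHealthTracker` type and its members are illustrative, not part of the ModelContextProtocol SDK or our exact production code.

```csharp
using System;
using System.Collections.Concurrent;

// One snapshot of a server's health, updated after each connection attempt.
public sealed record McpHealthStatus(
    bool IsHealthy, int ConsecutiveFailures, DateTimeOffset LastChecked, string? LastError);

public sealed class McpHealthTracker
{
    private readonly ConcurrentDictionary<string, McpHealthStatus> _statuses = new();

    // Record the outcome of a connect or tool-listing attempt for a server URL.
    public void Report(string mcpUrl, bool success, string? error = null)
    {
        _statuses.AddOrUpdate(
            mcpUrl,
            _ => new McpHealthStatus(success, success ? 0 : 1, DateTimeOffset.UtcNow, error),
            (_, prev) => new McpHealthStatus(
                success,
                success ? 0 : prev.ConsecutiveFailures + 1,
                DateTimeOffset.UtcNow,
                error));
    }

    // Surface servers that failed repeatedly (threshold of 3 is arbitrary here)
    // so the UI can prompt the user to fix or remove the link.
    public bool NeedsAttention(string mcpUrl) =>
        _statuses.TryGetValue(mcpUrl, out var s) && s.ConsecutiveFailures >= 3;
}
```

Calling `Report` from the validation and tool-loading paths keeps the health data current without adding a separate monitoring loop.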

Packages We Use for This

These are the key packages behind this MCP workflow:

  • ModelContextProtocol
  • Microsoft.Extensions.AI
  • Microsoft.Extensions.AI.OpenAI
  • Azure.AI.OpenAI

This combination gives us MCP tool discovery plus a solid function-calling pipeline in .NET.
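For reference, the packages can be added with the dotnet CLI. A sketch; the `--prerelease` flag applies to packages that are still in preview at the time of writing, so check the current versions on NuGet:

```shell
dotnet add package ModelContextProtocol --prerelease
dotnet add package Microsoft.Extensions.AI
dotnet add package Microsoft.Extensions.AI.OpenAI --prerelease
dotnet add package Azure.AI.OpenAI
```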

Small Code Examples (Based on Our Code)

Here are a few snippets that reflect the structure we use in production.

1) MCP client + validation service

using ModelContextProtocol.Client;

public class McpServerService : IMcpServerService
{
    public async Task<StoredMcpServer> ValidateAndGetMcpInfoAsync(string mcpUrl)
    {
        if (!Uri.TryCreate(mcpUrl, UriKind.Absolute, out _))
        {
            return new StoredMcpServer { IsValid = false, ErrorMessage = "Invalid URL format" };
        }

        try
        {
            var transport = new HttpClientTransport(
                new HttpClientTransportOptions { Endpoint = new Uri(mcpUrl) },
                loggerFactory: null);

            var client = await McpClient.CreateAsync(transport);
            var serverInfo = client.ServerInfo;

            return new StoredMcpServer
            {
                Url = mcpUrl,
                Title = serverInfo?.Name ?? "Unknown MCP Server",
                Description = serverInfo?.Title ?? "No description available",
                AddedDate = DateTime.Now,
                IsValid = serverInfo is not null
            };
        }
        catch (Exception ex)
        {
            return new StoredMcpServer { IsValid = false, ErrorMessage = ex.Message };
        }
    }

    public async Task<McpClient> CreateClientAsync(string mcpUrl, CancellationToken cancellationToken = default)
    {
        var transport = new HttpClientTransport(
            new HttpClientTransportOptions { Endpoint = new Uri(mcpUrl) },
            loggerFactory: null);
        // Honor the caller's cancellation token instead of silently ignoring it.
        return await McpClient.CreateAsync(transport, cancellationToken: cancellationToken);
    }
}

2) Load MCP tools from agent skills (runtime)

using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

private async Task<List<AIFunction>> GetMcpToolsAsync()
{
    if (interactiveAgent?.Skills == null) return [];

    var allTools = new List<AIFunction>();
    using var scope = ServiceScopeFactory.CreateScope();
    var mcpServerService = scope.ServiceProvider.GetRequiredService<IMcpServerService>();

    foreach (var skill in interactiveAgent.Skills)
    {
        if (skill.SkillType != SkillType.MCP) continue;
        if (skill.SkillJson is null) continue;

        var check = await mcpServerService.ValidateAndGetMcpInfoAsync(skill.SkillJson);
        if (!check.IsValid) continue;

        var client = await mcpServerService.CreateClientAsync(skill.SkillJson);
        var tools = await client.ListToolsAsync();
        allTools.AddRange(tools);
    }

    return allTools;
}

3) Attach MCP tools to Microsoft.Extensions.AI chat pipeline

using System.ClientModel;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var azureChatClient = new AzureOpenAIClient(
    new Uri(endpoint),
    new ApiKeyCredential(apiKey))
    .GetChatClient(deployment)
    .AsIChatClient();

var mcpTools = await GetMcpToolsAsync();

var chatClient = new ChatClientBuilder(azureChatClient)
    .ConfigureOptions(options =>
    {
        options.Tools ??= [];
        foreach (var tool in mcpTools)
        {
            options.Tools.Add(tool);
        }
    })
    .UseFunctionInvocation(configure: c => c.AllowConcurrentInvocation = true)
    .Build();

This is the exact idea we use: validate MCP links, discover tools, then pass those tools into the chat client so the agent can call them.
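To round out the picture, here is a hedged usage sketch of the chat client built above. The prompt text is a placeholder we made up for illustration; `GetResponseAsync` is the Microsoft.Extensions.AI convenience overload that accepts a plain string:

```csharp
// With MCP tools attached and function invocation enabled, the model can
// decide to call a connected tool (e.g. a calendar MCP server) on its own.
var response = await chatClient.GetResponseAsync(
    "Book a 30-minute meeting with the design team tomorrow at 10:00.");

Console.WriteLine(response.Text);
```

The function-invoking client handles the tool-call round trips internally, so the application code stays a single request/response call.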

Closing

MCP moved our product from "AI that answers" to "AI that acts."

That is the difference users notice immediately.

If you want to see the end result, check it here: https://crystacode.ai
