Elvin Suleymanov
MCP in .NET: The Protocol That Changes How Your Code Talks to AI

Everything you need to know about Model Context Protocol in .NET. What it is, why it exists, how it works, and how to build your first MCP server from scratch with a real-world OpenWeatherMap example.



There is a moment every developer hits when building AI features. You have your application. You have an AI model. And now you need them to talk to each other. So you write an adapter. Then another. Then another. And before you know it, you are maintaining a pile of brittle glue code that breaks every time someone at OpenAI, Anthropic, or Google ships an update.

This is not a .NET problem. This is not a Python problem. This is an industry problem. And in 2026, it finally has a real solution.

Model Context Protocol. MCP.

If you write software that touches AI in any way, this article is for you. If you are a .NET developer, this article is especially for you, because the C# SDK just hit v1.0 and the ecosystem is moving fast. But even if you have never written a line of C# in your life, the ideas here will change how you think about connecting software to AI.

Let us start from the beginning.


The Problem Nobody Wanted to Talk About

Here is a story that will sound familiar.

A team builds a customer support application. They integrate OpenAI so the chatbot can answer questions. It works. Then the product manager asks: "Can it also look up order status?" So the team writes a function-calling adapter using OpenAI's specific schema. It works. Then leadership says: "We are switching to Claude for cost reasons." So the team rewrites the adapter using Anthropic's tool-use specification, which has different schema conventions, different invocation patterns, and different error handling.

Six months later they add Copilot integration. Another adapter. Then Google Gemini for a different use case. Another adapter. Each one has its own authentication flow, its own discovery mechanism, its own way of describing what tools are available.

The application now has four separate integration layers doing fundamentally the same thing: letting an AI model call a function in the application. Each layer has its own bugs, its own test suite, and its own maintenance burden. Adding a new tool (say, "check inventory") means implementing it four times.

This is the integration tax. And until recently, every team paid it silently.


What MCP Actually Is

Model Context Protocol is an open standard that eliminates the integration tax. It defines a single, universal way for AI applications to discover and use tools provided by external software.

The analogy that works best: think about what happened with web APIs. Before REST became the dominant pattern, every web service had its own protocol. SOAP. XML-RPC. Custom binary formats. If you wanted to integrate with three services, you learned three different protocols. REST standardized the conversation. One pattern. Any client. Any server.

MCP does the same thing, but for AI-to-tool communication. You build one server that exposes your capabilities. Any MCP-compatible AI client can discover and use those capabilities without custom integration code. Claude, Copilot, Semantic Kernel, custom applications. It does not matter. The protocol is the contract.

MCP was originally created by Anthropic (the company behind Claude) and has since been contributed to the Linux Foundation for open governance. It is not a proprietary technology tied to one vendor. It is an open protocol with an open specification, open SDKs, and a growing ecosystem of implementations.

As of March 2026, the official C# SDK reached v1.0, built collaboratively by Microsoft and Anthropic. This is not a preview or a beta. It is production-ready infrastructure.


Why Should You Care (Even If You Are Not a .NET Developer)

MCP matters because it solves a coordination problem that transcends any single language or framework.

If you are a backend developer, MCP gives you a way to expose your existing APIs to AI agents without rewriting anything. Your database queries, your business logic, your internal services. All of it becomes AI-accessible through a thin protocol layer.

If you are a frontend developer, MCP means the AI features in your application can talk to any backend tool without you building custom bridges for each one.

If you are an architect, MCP is a new presentation layer concern that fits cleanly into existing patterns (Clean Architecture, CQRS, microservices) without disrupting what you have already built.

If you are a product manager, MCP means switching AI providers does not require an engineering sprint. The tools work with any compliant client.

If you are a student, MCP is one of the most important protocols to learn right now because it sits at the intersection of AI and software engineering, which is where the industry is heading at full speed.

The rest of this article focuses on .NET because that is where I live and because the C# SDK is exceptionally well-designed. But the concepts apply everywhere.


The Architecture: Three Roles, One Protocol

MCP uses a client-server model with three clearly defined roles.

MCP Architecture Overview
Figure 1. A single host contains multiple MCP clients, each maintaining a 1:1 session with a separate server over JSON-RPC 2.0.

The Host is the application the user interacts with. It could be an IDE like Visual Studio 2026, a conversational interface like Claude Desktop, or a custom enterprise application your team built. The host is the outer shell that the human sees.

The Client lives inside the host. It implements the MCP protocol, handling the low-level details of connecting to servers, discovering their capabilities, and invoking tools. A single host can manage multiple clients, each connected to a different server. Think of the client as the diplomat that speaks the protocol language on behalf of the host.

The Server is where your code lives. It exposes capabilities through the protocol. When an AI model needs to check the weather, query a database, or cancel an order, the client sends a request to your server, your server executes the logic, and the result flows back through the client to the host.

The communication between client and server uses JSON-RPC 2.0, which provides structured request-response messaging with support for notifications. The protocol lifecycle begins with an initialization handshake where both sides declare their capabilities, so they can gracefully handle cases where one side supports features the other does not.
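To make the handshake concrete, here is a plain-C# sketch of the opening `initialize` request a client sends, serialized with System.Text.Json. The field names follow the MCP specification; the protocol version string and client name are illustrative values, not something your code normally constructs by hand (the SDK does this for you).

```csharp
using System.Text.Json;

// The first message of every MCP session: a JSON-RPC 2.0 "initialize"
// request in which the client declares what it supports.
// protocolVersion and clientInfo values here are illustrative.
var initialize = new
{
    jsonrpc = "2.0",
    id = 1,
    method = "initialize",
    @params = new
    {
        protocolVersion = "2025-06-18",   // spec revision the client speaks
        capabilities = new { },           // features this client supports
        clientInfo = new { name = "demo-client", version = "1.0.0" }
    }
};

Console.WriteLine(JsonSerializer.Serialize(initialize));
// {"jsonrpc":"2.0","id":1,"method":"initialize","params":{...}}
```

The server replies with its own capabilities, and only after this exchange do tool discovery and invocation begin.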

This architecture has an elegant property: your server does not know or care which AI model is on the other end. It exposes tools. Clients call them. The model is irrelevant. Today it might be Claude. Tomorrow it might be GPT-5. Next year it might be something that does not exist yet. Your server does not change.


The Three Primitives: What Your Server Can Expose

Every MCP server exposes capabilities through three types of primitives. Understanding them is the key to designing good MCP servers.

MCP Three Primitives
Figure 2. The three MCP primitives: Tools (actions), Resources (context), and Prompts (templates).

Tools: The Verbs

Tools are functions that AI agents can execute. They are the actions. When a model decides it needs to do something, like check inventory, send an email, run a calculation, or fetch weather data, it calls a tool.

Every tool has three components: a name, a description, and a parameter schema. The name is how the model identifies the tool. The description is how the model decides when to use it. The schema defines what inputs the tool expects.

Here is the critical insight that most people miss: the description is the most important part of a tool. The AI model reads the description to determine whether this tool is the right one for the user's request. A bad description leads to the model calling the wrong tool or not calling any tool at all. Writing descriptions is not documentation. It is engineering.

Resources: The Context

Resources are read-only data that gives the AI context about your domain. They answer the question: "What does the AI need to know before it starts calling tools?"

Imagine hiring a new developer on your team. You would not give them terminal access and say "go fix the bug." You would give them documentation first. Architecture diagrams. Database schemas. Team conventions. Resources serve the same purpose for AI agents.

Each resource has a URI, a name, and a MIME type. They can be static (like a database schema that rarely changes) or dynamic (like a list of active users that is generated on demand). When an AI agent connects to your server, it can read resources first to understand the domain, then make much smarter decisions about which tools to call and how.
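To make this concrete, here is a conceptual plain-C# sketch of what a resource amounts to: identity metadata plus a content provider. The `McpResourceSketch` record is illustrative only, not the SDK's actual resource API.

```csharp
// A static resource: content that rarely changes.
var schema = new McpResourceSketch(
    Uri: "schema://orders",
    Name: "Orders table schema",
    MimeType: "text/plain",
    Read: () => "orders(id INT, customer_id INT, status TEXT, total DECIMAL)");

// A dynamic resource: content generated at read time.
var serverTime = new McpResourceSketch(
    Uri: "data://server-time",
    Name: "Current server time (UTC)",
    MimeType: "text/plain",
    Read: () => DateTime.UtcNow.ToString("O"));

Console.WriteLine(schema.Read());
Console.WriteLine(serverTime.Read());

// Illustrative shape only -- not the SDK's actual resource API.
public record McpResourceSketch(
    string Uri,          // how clients address the resource
    string Name,         // human/model-readable label
    string MimeType,     // how to interpret the content
    Func<string> Read);  // static value or generated on demand
</imports>
```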

Prompts: The Templates

Prompts are reusable message templates that structure how the AI interacts with your system. They are the recipes. A prompt might define a standard workflow for "generate a weekly sales report" by specifying which tools to call in what order and what format to use for the output.

Prompts are optional. Many servers work perfectly with just tools and resources. But for teams that want consistent AI behavior across different users and sessions, prompts are incredibly valuable.
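Conceptually, a prompt is just a named, parameterized message template. A plain-C# sketch of that idea (illustrative shape, not the SDK's prompt API):

```csharp
// A prompt pairs a name with a template the client fills in and
// hands to the model. Illustrative shape, not the SDK's prompt API.
var weeklyReport = new PromptSketch(
    Name: "weekly_sales_report",
    Render: week =>
        $"Call get_sales_data for the week of {week}, " +
        "then summarize totals by region as a markdown table.");

Console.WriteLine(weeklyReport.Render("2026-03-02"));

public record PromptSketch(string Name, Func<string, string> Render);
```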


MCP in .NET: The SDK

The MCP C# SDK ships as two NuGet packages.

ModelContextProtocol is the core library. It contains everything you need for both servers and clients: protocol primitives, transport handling, JSON-RPC communication, and schema generation.

ModelContextProtocol.AspNetCore adds HTTP hosting support. If you want your MCP server to run as a remote service (in the cloud, behind a load balancer, on Azure Container Apps), this package provides the middleware.

The core package targets netstandard2.0, and both packages run on .NET 8, .NET 9, and .NET 10 without compatibility issues.

The SDK was built collaboratively by Microsoft and Anthropic and is maintained on GitHub under the Model Context Protocol organization. It reached v1.0 in March 2026, bringing production-ready features including OAuth authorization, structured output, elicitation (servers can ask users for clarification mid-execution), and long-running request support with progress reporting.

What makes the .NET SDK particularly well-designed is its integration with patterns .NET developers already know. Dependency injection works natively. Tool classes support constructor injection. The server registers through the standard IServiceCollection pipeline. If you know how to build an ASP.NET Core application, you already know 90% of what you need to build an MCP server.


Building a Real MCP Server: OpenWeatherMap from Scratch

Theory is useful. Code is better. Let us build something real.

We are going to create an MCP server that connects any AI agent to live weather data from OpenWeatherMap. When someone asks Claude "What is the weather in Tokyo?", Claude will call our server, our server will hit the OpenWeatherMap API, and Claude will respond with real, live data instead of a hallucinated guess.

You need three things before starting: .NET 8 or later, a free OpenWeatherMap API key (sign up at openweathermap.org), and a code editor.

Create the Project

mkdir WeatherMcp && cd WeatherMcp
dotnet new console -n WeatherMcp.Server
cd WeatherMcp.Server
dotnet add package ModelContextProtocol
dotnet add package ModelContextProtocol.AspNetCore
dotnet add package Microsoft.Extensions.Hosting
dotnet add package Microsoft.Extensions.Http

The Weather Service

First we need a service that talks to OpenWeatherMap. This is plain .NET. No MCP-specific code here at all.

// Services/OpenWeatherMapService.cs

using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace WeatherMcp.Server.Services;

// DTOs matching OpenWeatherMap JSON
public class WeatherResponse
{
    [JsonPropertyName("name")]
    public string City { get; set; } = "";

    [JsonPropertyName("main")]
    public WeatherMain Main { get; set; } = new();

    [JsonPropertyName("weather")]
    public List<WeatherCondition> Weather { get; set; } = new();

    [JsonPropertyName("wind")]
    public WeatherWind Wind { get; set; } = new();

    [JsonPropertyName("visibility")]
    public int Visibility { get; set; }

    [JsonPropertyName("sys")]
    public WeatherSys Sys { get; set; } = new();

    [JsonPropertyName("coord")]
    public WeatherCoord Coord { get; set; } = new();
}

public class WeatherMain
{
    [JsonPropertyName("temp")] public double Temp { get; set; }
    [JsonPropertyName("feels_like")] public double FeelsLike { get; set; }
    [JsonPropertyName("humidity")] public int Humidity { get; set; }
    [JsonPropertyName("pressure")] public int Pressure { get; set; }
}

public class WeatherCondition
{
    [JsonPropertyName("main")] public string Main { get; set; } = "";
    [JsonPropertyName("description")] public string Description { get; set; } = "";
}

public class WeatherWind
{
    [JsonPropertyName("speed")] public double Speed { get; set; }
    [JsonPropertyName("deg")] public int Deg { get; set; }
}

public class WeatherSys
{
    [JsonPropertyName("country")] public string Country { get; set; } = "";
    [JsonPropertyName("sunrise")] public long Sunrise { get; set; }
    [JsonPropertyName("sunset")] public long Sunset { get; set; }
}

public class WeatherCoord
{
    [JsonPropertyName("lat")] public double Lat { get; set; }
    [JsonPropertyName("lon")] public double Lon { get; set; }
}

public class ForecastResponse
{
    [JsonPropertyName("city")] public ForecastCity City { get; set; } = new();
    [JsonPropertyName("list")] public List<ForecastItem> Items { get; set; } = new();
}

public class ForecastCity
{
    [JsonPropertyName("name")] public string Name { get; set; } = "";
    [JsonPropertyName("country")] public string Country { get; set; } = "";
}

public class ForecastItem
{
    [JsonPropertyName("dt")] public long Dt { get; set; }
    [JsonPropertyName("main")] public WeatherMain Main { get; set; } = new();
    [JsonPropertyName("weather")] public List<WeatherCondition> Weather { get; set; } = new();
    [JsonPropertyName("wind")] public WeatherWind Wind { get; set; } = new();
    [JsonPropertyName("dt_txt")] public string DateText { get; set; } = "";
}

// The service interface
public interface IWeatherService
{
    Task<WeatherResponse?> GetCurrentAsync(string city);
    Task<ForecastResponse?> GetForecastAsync(string city);
}

// The implementation
public class OpenWeatherMapService : IWeatherService
{
    private readonly HttpClient _http;
    private readonly string _apiKey;
    private readonly ILogger<OpenWeatherMapService> _logger;
    private const string Base = "https://api.openweathermap.org/data/2.5";

    public OpenWeatherMapService(
        HttpClient http,
        IConfiguration config,
        ILogger<OpenWeatherMapService> logger)
    {
        _http = http;
        _apiKey = config["OpenWeatherMap:ApiKey"]
            ?? throw new InvalidOperationException(
                "OpenWeatherMap:ApiKey not configured.");
        _logger = logger;
    }

    public async Task<WeatherResponse?> GetCurrentAsync(string city)
    {
        var url = $"{Base}/weather?q={Uri.EscapeDataString(city)}" +
                  $"&appid={_apiKey}&units=metric";
        _logger.LogInformation("Fetching weather for {City}", city);

        var res = await _http.GetAsync(url);
        if (!res.IsSuccessStatusCode) return null;

        return JsonSerializer.Deserialize<WeatherResponse>(
            await res.Content.ReadAsStringAsync());
    }

    public async Task<ForecastResponse?> GetForecastAsync(string city)
    {
        var url = $"{Base}/forecast?q={Uri.EscapeDataString(city)}" +
                  $"&appid={_apiKey}&units=metric&cnt=40";
        _logger.LogInformation("Fetching forecast for {City}", city);

        var res = await _http.GetAsync(url);
        if (!res.IsSuccessStatusCode) return null;

        return JsonSerializer.Deserialize<ForecastResponse>(
            await res.Content.ReadAsStringAsync());
    }
}

This is a standard .NET service. HttpClient with DI. Structured logging. Strongly-typed responses. Nothing here is MCP-specific. If you have ever written an API client in C#, this is exactly what you already know.
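Because this layer is MCP-agnostic, it can be unit-tested like any other HTTP client. A sketch using a stubbed `HttpMessageHandler` and trimmed stand-ins for the DTOs above, so no network access or API key is required:

```csharp
using System.Net;
using System.Net.Http;
using System.Text.Json;
using System.Text.Json.Serialization;

// A canned OpenWeatherMap-style payload stands in for the live API.
var handler = new StubHandler("""{"name":"Tokyo","main":{"temp":21.5,"humidity":60}}""");
var http = new HttpClient(handler);

var res = await http.GetAsync("https://api.openweathermap.org/data/2.5/weather?q=Tokyo");
var weather = JsonSerializer.Deserialize<WeatherResponse>(
    await res.Content.ReadAsStringAsync())!;

Console.WriteLine($"{weather.City}: {weather.Main.Temp}C");   // Tokyo: 21.5C

// Trimmed stand-ins for the article's DTOs.
public class WeatherResponse
{
    [JsonPropertyName("name")] public string City { get; set; } = "";
    [JsonPropertyName("main")] public WeatherMain Main { get; set; } = new();
}

public class WeatherMain
{
    [JsonPropertyName("temp")] public double Temp { get; set; }
    [JsonPropertyName("humidity")] public int Humidity { get; set; }
}

// Returns the canned JSON for every request.
public class StubHandler(string json) : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken ct) =>
        Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(json)
        });
}
```

The same approach works for the forecast endpoint: swap in a canned forecast payload and assert on the deserialized result.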

The MCP Tools

Now we wrap this service in MCP tools. This is the layer where your existing code becomes AI-accessible.

// Tools/WeatherTools.cs

using System.ComponentModel;
using System.Text;
using ModelContextProtocol;
using WeatherMcp.Server.Services;

namespace WeatherMcp.Server.Tools;

[McpToolType]
public class WeatherTools
{
    private readonly IWeatherService _weather;

    public WeatherTools(IWeatherService weather)
    {
        _weather = weather;
    }

    [McpTool("get_current_weather")]
    [Description(
        "Get current weather conditions for a city. " +
        "Returns temperature in Celsius, humidity, wind, " +
        "and sky conditions. Use when the user asks about " +
        "current or today's weather.")]
    public async Task<ToolResult> GetCurrentWeather(
        [Description("City name, e.g. 'London' or 'Tokyo,JP'")] string city)
    {
        var data = await _weather.GetCurrentAsync(city);

        if (data is null)
            return ToolResult.Error(
                $"City '{city}' not found. Try adding a country " +
                $"code, e.g. 'Springfield,US'.");

        var sb = new StringBuilder();
        sb.AppendLine($"Weather in {data.City}, {data.Sys.Country}:");
        sb.AppendLine($"  {data.Main.Temp}C (feels like {data.Main.FeelsLike}C)");
        sb.AppendLine($"  Humidity: {data.Main.Humidity}%");
        sb.AppendLine($"  Wind: {data.Wind.Speed} m/s");
        sb.AppendLine($"  Visibility: {data.Visibility / 1000.0} km");
        if (data.Weather.Count > 0)
            sb.AppendLine($"  Sky: {data.Weather[0].Description}");

        return ToolResult.Success(sb.ToString());
    }

    [McpTool("get_forecast")]
    [Description(
        "Get a 5-day weather forecast for a city. " +
        "Returns temperature and conditions every 3 hours. " +
        "Use when the user asks about tomorrow, this week, " +
        "or future weather.")]
    public async Task<ToolResult> GetForecast(
        [Description("City name, e.g. 'Berlin,DE'")] string city,
        [Description("Days ahead, 1 to 5, default 3")] int days = 3)
    {
        if (days < 1 || days > 5)
            return ToolResult.Error("Days must be between 1 and 5.");

        var data = await _weather.GetForecastAsync(city);
        if (data is null)
            return ToolResult.Error($"City '{city}' not found.");

        var sb = new StringBuilder();
        sb.AppendLine($"Forecast for {data.City.Name}, {data.City.Country}:");

        var cutoff = DateTime.UtcNow.AddDays(days);
        var grouped = data.Items
            .Where(i => DateTimeOffset.FromUnixTimeSeconds(i.Dt).UtcDateTime < cutoff)
            .GroupBy(i => DateTimeOffset.FromUnixTimeSeconds(i.Dt).ToString("yyyy-MM-dd"));

        foreach (var day in grouped)
        {
            sb.AppendLine($"\n  {day.Key}:");
            foreach (var item in day)
            {
                var time = DateTimeOffset.FromUnixTimeSeconds(item.Dt).ToString("HH:mm");
                var cond = item.Weather.Count > 0 ? item.Weather[0].Description : "N/A";
                sb.AppendLine($"    {time} - {item.Main.Temp}C, {cond}, wind {item.Wind.Speed} m/s");
            }
        }

        return ToolResult.Success(sb.ToString());
    }
}

Look at what is happening here. The [McpTool] attribute registers a method as a callable tool. The [Description] attributes tell the AI when and how to use it. The constructor receives IWeatherService through dependency injection, which means this tool uses the exact same service you would use in a REST controller.

The tool does not know which AI model is calling it. It does not care. It receives input, calls your service, formats a result, and returns it.

Wiring It Together

// Program.cs

using ModelContextProtocol;
using ModelContextProtocol.AspNetCore;
using WeatherMcp.Server.Services;
using WeatherMcp.Server.Tools;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHttpClient<IWeatherService, OpenWeatherMapService>();
builder.Services.AddMcpServer()
    .WithTools<WeatherTools>();

var app = builder.Build();
app.MapGet("/", () => "Weather MCP Server is running.");
app.MapMcpEndpoint("/mcp");
app.Run();

Add your API key to appsettings.json:

{
  "OpenWeatherMap": {
    "ApiKey": "YOUR_KEY_HERE"
  }
}

Run it with dotnet run. Your server is live.

Connecting to Claude Desktop

Claude Desktop launches local servers as child processes and talks to them over standard input/output, so for this step use the Stdio variant of Program.cs shown in the Transport section below. Then add this to your Claude Desktop config (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "weather": {
      "command": "dotnet",
      "args": ["run", "--project", "/path/to/WeatherMcp.Server"]
    }
  }
}

Restart Claude. Ask: "What is the weather like in London right now?" Claude calls your server, gets live data, and gives a real answer. Ask "Should I bring an umbrella to Tokyo tomorrow?" and it calls the forecast tool, checks for rain in the forecast, and gives you practical advice based on actual data.

That is the magic. Your C# code is now accessible to an AI agent through a standardized protocol. No OpenAI-specific adapter. No Anthropic-specific plugin. Just MCP.


Transport: How Clients and Servers Communicate

MCP defines two transport mechanisms for different deployment scenarios.

Stdio: For Local Servers

Standard input/output. The host launches your server as a child process and talks to it through stdin/stdout pipes. This is how Claude Desktop and Visual Studio connect to local MCP servers. Zero network overhead. Lowest possible latency.

For a Stdio server, you do not need ASP.NET Core. A plain console app works:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using ModelContextProtocol;
using WeatherMcp.Server.Services;
using WeatherMcp.Server.Tools;

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHttpClient<IWeatherService, OpenWeatherMapService>();
builder.Services.AddMcpServer()
    .WithStdioTransport()
    .WithTools<WeatherTools>();
await builder.Build().RunAsync();

HTTP: For Remote Servers

For cloud deployments, multi-tenant environments, and load-balanced architectures, you use HTTP with Server-Sent Events (SSE). This is the ModelContextProtocol.AspNetCore package. The MapMcpEndpoint() method handles all the HTTP plumbing.

Most teams start with Stdio during development (for quick testing with Claude Desktop or Copilot) and deploy with HTTP in production.


Where MCP Lives in Your Architecture

If you follow Clean Architecture, Onion Architecture, or any layered pattern, MCP fits as a presentation layer concern. It sits at the same level as your REST API controllers.

Your Application
  Domain Layer         → Entities, business rules
  Application Layer    → Use cases, commands, queries
  Infrastructure       → Database, external APIs
  Presentation Layer   → REST Controllers (existing)
                       → MCP Server (new)

MCP in Clean Architecture
Figure 3. MCP sits in the presentation layer alongside REST controllers. Both point inward to the application layer.

Your MCP tools should be thin. They accept input, call your application layer (or dispatch commands via MediatR if you use CQRS), and return results. They should contain zero business logic. All validation, authorization, and domain rules stay in the layers where they belong.

This means adding MCP to an existing application is not a rewrite. It is a new entry point into the same application logic. The same service that powers your REST endpoint can power your MCP tool. One codebase. Two interfaces. No duplication.


What v1.0 Brought to the Table

The March 2026 release was not just a version bump. It brought features that make MCP viable for serious production use.

Authorization. OAuth support with Protected Resource Metadata. Your MCP server can sit behind an identity provider and enforce the same role-based access control you use for REST endpoints.

Elicitation. Servers can ask users for clarification during tool execution. A deployment tool can pause and say "You asked me to deploy to production. Please confirm." The user responds through the client, and execution continues.

Structured Output. Instead of returning raw text that the AI has to interpret, tools can return explicitly typed content. The model knows exactly what each field means, which reduces errors and hallucination.

Progress Reporting. Long-running tools can send incremental status updates. "Collecting data... Processing... Generating report... Complete." The client displays these to the user so they know something is happening.

Icon Support. Tools, resources, and prompts can have icons for visual identification in client UIs.


The Ecosystem: What Works with MCP Today

Building an MCP server is an investment. The return on that investment depends on how many clients can use it. Here is what works today.

MCP Ecosystem Compatibility
Figure 4. One MCP server works with every compatible AI client, present and future.

Claude Desktop. Full MCP support. Local Stdio and remote HTTP.

GitHub Copilot in Visual Studio 2026. MCP servers appear as tools in agent mode. Your weather tool, your database tool, your internal API tool. All discoverable in Copilot.

VS Code with Copilot Agent Mode. Same as Visual Studio but in VS Code.

Semantic Kernel. Microsoft's AI orchestration framework consumes MCP servers as plugins automatically. Your MCP tools become Semantic Kernel functions with zero extra code.

Microsoft Agent Framework. Introduced in .NET 10 at .NET Conf 2025. It uses MCP as its primary tool integration mechanism. Autonomous agents discover and invoke tools through the protocol.

Copilot Studio. Enterprise AI workflow integration for organizations using Microsoft 365.

One server. All of these clients. That is the power of implementing a protocol instead of coupling to a vendor.


The Description Engineering Principle

This deserves its own section because it is the single most impactful thing you can do to make your MCP server work well.

When an AI model decides which tool to call, it reads the [Description] attribute. That is it. That is the entire selection mechanism. If your description is vague, the model picks the wrong tool. If your description is incomplete, the model does not pick your tool at all. If your description is excellent, the model calls exactly the right tool with exactly the right parameters.

Bad description:

"Gets weather"

Good description:

"Get current weather conditions for a city. Returns
temperature in Celsius, humidity, wind speed, and sky
conditions. Use when the user asks about current or
today's weather in a specific location."

The good description tells the model three things: what the tool does, what it returns, and when to use it. This is not optional polish. This is functional engineering that directly affects how well your server works.

The same principle applies to parameter descriptions. Instead of "The city", write "City name, optionally with country code, e.g. 'London' or 'Tokyo,JP'". The model uses these to construct correct input.


Error Messages That AI Can Use

When a REST endpoint fails, you return an error for a developer to read. When an MCP tool fails, you return an error for an AI model to interpret and act on.

Bad error:

"Error: 404"

Good error:

"City 'Springfeld' not found. This may be a spelling error.
Try 'Springfield' or add a country code like 'Springfield,US'."

The good error gives the AI enough information to recover. It might correct the spelling and retry. It might ask the user for clarification. It might try a different approach entirely. But it can only do these things if your error message provides context.

This is a mindset shift. You are not writing errors for a developer reading logs at 2 AM. You are writing errors for an AI agent that needs to decide what to do next in real time.
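One lightweight way to enforce this habit is a small helper that refuses to emit an error without at least one suggested next step. The helper name and wording here are illustrative, not part of any SDK:

```csharp
// Every error carries what failed, a likely cause, and concrete
// next steps the agent can act on. Illustrative helper, not SDK API.
static string AgentFacingError(string what, string likelyCause, params string[] nextSteps)
{
    if (nextSteps.Length == 0)
        throw new ArgumentException("An agent-facing error needs at least one next step.");
    return $"{what} {likelyCause} {string.Join(" ", nextSteps)}";
}

Console.WriteLine(AgentFacingError(
    "City 'Springfeld' not found.",
    "This may be a spelling error.",
    "Try 'Springfield'.",
    "Or add a country code like 'Springfield,US'."));
```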


What Makes MCP Different from Function Calling

If you have used OpenAI's function calling or Anthropic's tool use, you might wonder: how is MCP different?

Function calling is a feature of a specific AI provider's API. It lets you define functions that a specific model can call within a specific conversation. The function definitions live in your API request. The execution happens in your application code. It works, but it is tightly coupled to that provider's schema and conventions.

MCP is a protocol. It separates the tool definition from the AI provider entirely. Your tools live in a server that any client can connect to. Discovery is automatic. Schema generation is automatic. The protocol handles lifecycle management, capability negotiation, and communication. You do not embed tool definitions in API calls. You run a server and clients find it.

The practical difference: with function calling, you write tool definitions once per provider. With MCP, you write them once, period.


When Should You Build an MCP Server?

MCP is not the right solution for everything. Here is when it makes sense.

Build an MCP server when you have existing business logic that would be useful to AI agents. Order management. Inventory queries. Customer lookup. Report generation. Deployment automation. If you already have services that do these things, wrapping them in MCP tools is a small investment with a large payoff.

Build an MCP server when you want to decouple from AI providers. If you are tired of rewriting integrations every time you switch models, MCP gives you a stable interface that survives provider changes.

Build an MCP server when you want AI features in your IDE. If your team uses Visual Studio or VS Code with Copilot, an MCP server can expose your internal tools, APIs, and data directly in the developer's workflow.

Do not build an MCP server when you just need a one-off function call in a single API request. If your use case is simple and provider-specific, function calling might be simpler.

Do not build an MCP server when you do not have existing services to expose. MCP is a presentation layer. If there is no application logic behind it, there is nothing to expose.


Getting Started in 5 Minutes

If you want to try MCP right now with the absolute minimum code:

dotnet new console -n MyFirstMcp
cd MyFirstMcp
dotnet add package ModelContextProtocol
dotnet add package Microsoft.Extensions.Hosting
using ModelContextProtocol;
using Microsoft.Extensions.Hosting;
using System.ComponentModel;

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddMcpServer()
    .WithStdioTransport()
    .WithTools<MyTools>();
await builder.Build().RunAsync();

[McpToolType]
public class MyTools
{
    [McpTool("greet")]
    [Description("Generate a personalized greeting for someone.")]
    public Task<ToolResult> Greet(
        [Description("The person's name")] string name)
    {
        return Task.FromResult(
            ToolResult.Success($"Hello, {name}! Welcome."));
    }
}

That is a complete MCP server. Five minutes. One file. Connect it to Claude Desktop and you have an AI agent that can greet people by name using your code.

From here you add real services, real tools, real resources. The weather server we built earlier in this article. A database query tool. An order management tool. A deployment pipeline tool. Each one is just another class with [McpToolType] and [McpTool] attributes.


What Comes Next

MCP is early. The protocol is mature enough for production (v1.0 is stable), but the ecosystem is still expanding rapidly.

Multi-agent orchestration is the next frontier. The Microsoft Agent Framework already supports scenarios where multiple agents coordinate through MCP, each with their own tools, sharing context and delegating tasks. This is where the protocol's design really shines, because each agent is just another client connecting to servers.

Remote server hosting will become standardized. Today most MCP servers run locally. The HTTP transport makes remote hosting possible, and cloud providers are beginning to offer managed MCP hosting.

Security models will evolve. As MCP servers handle more sensitive operations, the community will need to develop patterns for fine-grained permission scoping, audit logging, and anomaly detection for AI-initiated operations.

Tool marketplaces are coming. Just as npm and NuGet transformed how developers share code, MCP server registries will transform how teams share AI-accessible capabilities.


Conclusion

MCP is not a framework. It is not a library. It is a protocol. And protocols change industries.

HTTP changed how computers talk to each other. REST changed how services talk to each other. MCP is changing how AI talks to software. The pattern is the same: standardize the conversation and everything else follows.

If you build software that touches AI in any way, MCP is worth understanding deeply. If you are a .NET developer, the C# SDK is production-ready and the ecosystem support from Microsoft is serious. If you are not a .NET developer, the concepts are identical across every SDK.

The code we wrote today, the weather server, is a starting point. The real value comes when you wrap your own business logic in MCP tools and suddenly every AI agent in your organization can use it.

Build the server. Expose your tools. Let the AI find them.


If this was useful, hit the reaction button and share it with someone who builds software that talks to AI. I write about .NET architecture and AI integration. Follow me for more.

Questions? Ideas for your own MCP server? Drop them in the comments.
