
Matt Anderson

ZeroMCP Has Grown Up: From a Single Attribute to a Production-Ready MCP Platform

A follow-up to "Your Existing ASP.NET Core API is Already an MCP Server — You Just Don't Know It Yet"


When I published the original ZeroMCP article a few weeks ago, the pitch was simple: tag a controller action with [Mcp], add two lines of setup, and your existing ASP.NET Core API becomes an MCP server. Zero duplication, zero new process, zero rewriting.

The response was encouraging — enough that I've kept building. And ZeroMCP has grown considerably since then.

This post is an honest look at what's changed: what got added, why, and what's coming next.


Where We Started

The v1.0 story was about the core insight: your controller pipeline is already doing exactly what an MCP server needs to do. Instead of duplicating it into a separate [McpServerTool] class (where your auth filters don't run, your ModelState doesn't validate, your DI scope is wrong), ZeroMCP dispatches through IActionInvokerFactory — the real pipeline — using a synthetic HttpContext built from the LLM's tool arguments.

That core mechanic hasn't changed. It's still the heart of everything.

What has changed is everything built on top of it.


Observability and Governance

The first thing that became obvious after dogfooding the library on real APIs: you need to be able to see what's happening.

Structured Logging

Every MCP request now emits structured log entries with a scope containing CorrelationId, JsonRpcId, and Method. Tool invocations log ToolName, StatusCode, IsError, DurationMs, and CorrelationId. If you're using Serilog or any structured logging provider, these show up as first-class fields — filterable, queryable, alertable.
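As a sketch of how those fields surface, here's a minimal Serilog bootstrap for a Program.cs (standard Serilog APIs; the ZeroMCP field names are the ones listed above, and nothing here is ZeroMCP-specific):

```csharp
using Serilog;
using Serilog.Formatting.Compact;

// ZeroMCP's log scopes (CorrelationId, JsonRpcId, Method) flow through
// Enrich.FromLogContext() and land as first-class JSON properties,
// so they are filterable in any structured log store.
Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Console(new CompactJsonFormatter())
    .CreateLogger();

builder.Host.UseSerilog();
```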

Correlation IDs

Send X-Correlation-ID on the MCP request and ZeroMCP echoes it in the response, propagates it to the synthetic request's TraceIdentifier, and includes it in every log entry. If you don't send one, a GUID is generated. This sounds like a small thing but it makes debugging agentic workflows — where a single user action can trigger dozens of tool calls — dramatically easier.
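To illustrate, here's what pinning your own correlation ID on an MCP request might look like from a client (the endpoint URL and ID value are hypothetical; the JSON-RPC body follows the standard MCP tools/list shape):

```csharp
using System.Text;

using var http = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Post, "https://localhost:5001/mcp")
{
    Content = new StringContent(
        """{"jsonrpc":"2.0","id":1,"method":"tools/list"}""",
        Encoding.UTF8, "application/json")
};
// This ID is echoed in the response and appears in every log entry
// for the dozens of tool calls this session may trigger.
request.Headers.Add("X-Correlation-ID", "agent-session-42");
var response = await http.SendAsync(request);
```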

OpenTelemetry

Set EnableOpenTelemetryEnrichment = true and ZeroMCP tags Activity.Current with mcp.tool, mcp.status_code, mcp.is_error, mcp.duration_ms, and mcp.correlation_id. Your existing OpenTelemetry pipeline picks it up automatically — no new instrumentation needed.
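For context, wiring this into a standard OpenTelemetry pipeline is a sketch like the following (assuming the AddZeroMcp overload takes an options delegate, as the other snippets in this post suggest; the OTLP exporter is just one example choice):

```csharp
builder.Services.AddZeroMcp(options =>
{
    options.EnableOpenTelemetryEnrichment = true;
});

// Standard OpenTelemetry setup: the ASP.NET Core instrumentation creates
// the Activity that ZeroMCP enriches with mcp.* tags, so no extra
// ActivitySource registration is needed.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter());
```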

Pluggable Metrics

Implement IMcpMetricsSink and register it after AddZeroMcp():

public class MyMetricsSink : IMcpMetricsSink
{
    public void RecordToolInvocation(string toolName, int statusCode, bool isError, long durationMs)
    {
        // Push to Prometheus, Datadog, Application Insights — whatever you use
    }
}

The default is a no-op, so there's no overhead if you don't need it.
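Registration is plain DI. A sketch, assuming the sink is swapped in by registering it after AddZeroMcp() as described above:

```csharp
builder.Services.AddZeroMcp();
// Replaces the default no-op sink. Singleton is a reasonable lifetime
// here since the sink holds no per-request state.
builder.Services.AddSingleton<IMcpMetricsSink, MyMetricsSink>();
```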

Role and Policy-Based Tool Visibility

Phase 1 also solidified the governance story. Tools can now be restricted by role or policy directly on the [Mcp] attribute:

[Mcp("admin_report", Description = "Runs admin report.", Roles = new[] { "Admin" })]
public ActionResult<Report> GetAdminReport() { ... }

[Mcp("sensitive_export", Description = "Exports customer PII.", Policy = "RequireDataSteward")]
public ActionResult<ExportResult> ExportData() { ... }

Tools not visible to the current user don't appear in tools/list and are rejected if called directly. The LLM never knows they exist. And because this is built on ASP.NET Core's standard IAuthorizationService, your existing auth policies and role claims work without any ZeroMCP-specific configuration.

For discovery-time filtering (e.g. strip out internal tools in production), use ToolFilter:

options.ToolFilter = name => !name.StartsWith("internal_");

For per-request filtering (e.g. show beta tools only to beta users), use ToolVisibilityFilter:

options.ToolVisibilityFilter = (name, ctx) =>
    ctx.Request.Headers.TryGetValue("X-Beta-Features", out _) || !name.StartsWith("beta_");

Result Enrichment, Follow-Ups, and Streaming

I then focused on making the LLM smarter about what to do with tool results.

Result Enrichment

Enable EnableResultEnrichment = true and tool call results include metadata alongside the payload: statusCode, durationMs, correlationId, and optional hints. The LLM can use this to reason about whether a call succeeded, how long it took, and what to try next.
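Roughly, an enriched tool result looks like this. The field names mirror the options described above, but treat the envelope layout as illustrative rather than a wire-format guarantee:

```json
{
  "content": [{ "type": "text", "text": "{ \"orderId\": 1017, \"status\": \"Created\" }" }],
  "isError": false,
  "statusCode": 201,
  "durationMs": 38,
  "correlationId": "agent-session-42",
  "hints": ["side-effect:write"]
}
```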

Suggested Follow-Ups

This is the feature I'm most excited about. Set EnableSuggestedFollowUps = true and assign a SuggestedFollowUpsProvider delegate that returns a list of suggested next tools after each invocation:

options.SuggestedFollowUpsProvider = (toolName, result, ctx) => toolName switch
{
    "create_order" => ["get_order", "list_customer_orders"],
    "get_customer" => ["list_customer_orders", "update_customer"],
    _ => []
};

The LLM gets a suggestedFollowUps field in the result. Whether it uses them is up to the model, but in practice this significantly improves multi-step workflow completion — the model has a map of "what makes sense to do next" rather than having to infer it from the tool list alone.

Streaming Tool Results

For large results (think: export endpoints, report generation, search results), you can now stream:

options.EnableStreamingToolResults = true;
options.StreamingChunkSize = 4096;

Results are returned as chunks with chunkIndex and isFinal fields. MCP clients that support streaming can start processing immediately rather than waiting for the full payload.
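A rough picture of the chunked shape, built from the chunkIndex and isFinal fields named above (the surrounding layout is illustrative):

```json
{ "chunkIndex": 0, "isFinal": false, "content": "…first 4096 bytes of the payload…" }
{ "chunkIndex": 1, "isFinal": true,  "content": "…remainder…" }
```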

XmlDoc Support

If you use Swagger for your existing APIs, chances are you have already written XML doc comments for your methods. If you don't specify a description on the [Mcp] attribute, ZeroMCP extracts it from your XmlDoc comments instead.
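For example (the method and summary are hypothetical, and this assumes ZeroMCP reads the compiler-generated XML documentation file, which requires `<GenerateDocumentationFile>true</GenerateDocumentationFile>` in the csproj):

```csharp
/// <summary>
/// Returns the current stock level for a product.
/// </summary>
// With no Description on the attribute, the summary above
// presumably becomes the tool description the LLM sees.
[Mcp("get_stock_level")]
public ActionResult<int> GetStockLevel(int productId) { ... }
```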

Rich Tool Metadata

The [Mcp] attribute now supports Category, Examples, and Hints:

[Mcp(
    name: "create_order",
    Description = "Creates a new order and returns the created record.",
    Category = "orders",
    Examples = new[] { "Create order for Alice, 2 Widgets", "New order: Bob, 1 Gadget, rush" },
    Hints = new[] { "idempotent:false", "cost:low", "side-effect:write" }
)]

Examples give the LLM concrete demonstrations of how to use the tool. Hints are free-form AI-facing signals — use them however makes sense for your domain. Category helps with grouping in tool inspectors and future filtering features.


The Tool Inspector — Including a Full UI

ZeroMCP now ships a browser-based tool inspector UI at /{routePrefix}/tools/ui.

If you've used Swagger UI, you already know what this feels like. Navigate to /mcp/tools/ui in a browser and you get a visual interface listing every registered MCP tool — name, description, category, input schema, tags, required roles, examples. And like Swagger UI, you can execute tools directly from the browser: fill in the arguments, hit invoke, and see the result come back.

This is genuinely useful in ways the JSON endpoint alone isn't:

  • During development, you can verify your [Mcp] attributes were picked up correctly and that your schema looks right, without setting up a full MCP client
  • When debugging, you can reproduce a tool call the LLM made and inspect the response directly
  • When onboarding teammates, you can hand them a URL and say "here are all the things the AI can do with this API" — no tooling required on their end

The underlying JSON endpoint (GET {routePrefix}/tools) is still there for programmatic use:

{
  "serverName": "My Orders API",
  "serverVersion": "2.0.0",
  "protocolVersion": "2024-11-05",
  "toolCount": 12,
  "tools": [
    {
      "name": "create_order",
      "description": "Creates a new order and returns the created record.",
      "httpMethod": "POST",
      "route": "/api/orders",
      "category": "orders",
      "examples": ["Create order for Alice, 2 Widgets"],
      "inputSchema": { ... }
    }
  ]
}

Both the UI and the JSON endpoint are controlled by the same option:

options.EnableToolInspector = true;  // default; exposes /tools and /tools/ui

Disable it in production if your tool list is sensitive:

options.EnableToolInspector = false;

The analogy to Swagger UI is deliberate. Swagger solved a real discoverability problem for REST APIs — before it, you had to read source code or documentation to understand what an API could do. ZeroMCP's inspector solves the same problem for MCP tools: it makes your AI-facing API surface visible, browsable, and testable without an AI client in the loop.


Four Production-Ready Examples

The examples/ folder now ships four standalone projects that represent the range of real-world configurations:

  • Minimal: one controller action, one minimal API, no auth — the fastest path to working
  • WithAuth: API-key auth, role-based tool visibility, [Authorize] filters
  • WithEnrichment: result enrichment, suggested follow-ups, streaming
  • Enterprise: auth + enrichment + observability + ToolFilter + ToolVisibilityFilter together

Run any of them with dotnet run from its folder. The Enterprise example is the reference implementation for production deployments — it shows how all the pieces fit together without being a contrived demo.


The Benchmark Suite

ZeroMCP now ships ZeroMCP.Benchmarks, a BenchmarkDotNet project that measures dispatch overhead, schema generation cost, and throughput at various tool counts. I'll publish numbers in a dedicated post, but the short version: the overhead of the synthetic HttpContext approach is negligible compared to the cost of a real HTTP round-trip, which is what you'd be paying if you separated your MCP server from your API.
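If you want to run them yourself, the usual BenchmarkDotNet convention applies (the project path is assumed from the name above):

```shell
# BenchmarkDotNet requires a Release build for meaningful numbers.
dotnet run -c Release --project ZeroMCP.Benchmarks
```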


What's Still Missing (And What's Next)

Being direct about current limitations:

stdio transport isn't fully supported. ZeroMCP can speak stdio, but only if you run the API endpoint locally, which is usually impractical for this kind of deployment. A standalone relay that bridges stdio clients to a remote HTTP endpoint is currently in development.

Minimal API parameter binding is limited. Route parameters work. Query and body binding on minimal APIs is constrained by what the route pattern exposes. Controller actions have full binding support.

The two highest-impact next additions are stdio transport support and richer minimal API parameter binding — both listed in the Contributing section if you want to take a swing at either.


The Bigger Picture

Something has shifted in how I think about this project since the original article.

ZeroMCP started as a brownfield story: you have an existing API, here's how to add MCP with minimal disruption. That's still true. But what's become clearer is that for greenfield APIs, designing with [Mcp] from the start changes how you think about your endpoints.

When your API endpoint is also an AI tool, you start writing descriptions that matter to a model, not just a human developer reading Swagger. You think about what "suggested follow-ups" make sense after each operation. You consider what hints help the model understand side effects and cost. You think about which tools should be visible to which roles — not just from an access control perspective, but from a task completion perspective.

That's a different design discipline, and I think it's a good one. APIs that are legible to AI agents tend to be better designed APIs in general: cleaner semantics, more explicit contracts, better documentation.

ZeroMCP is becoming a library for building that kind of API.


Get Started

If you're using ZeroMCP in a real project, I'd love to hear about it in the GitHub Discussions. And if something doesn't work, open an issue — the MCP ecosystem for .NET is still early, and the rough edges are worth fixing.


Tags: #mcp #aspdotnet #webdev #llm #dotnet
