
Matt Anderson

Turn Any OpenAPI Spec into an MCP Server with One Command (or how I used other people's APIs as a personal MCP)

Part three of the ZeroMcp series. Part one covered exposing your own ASP.NET Core API as an MCP server. Part two covered everything that's grown since then. This one is about a different problem entirely.


ZeroMcp started with a specific premise: you have an ASP.NET Core API, and you want AI clients to use it without rewriting anything. Slap [Mcp] on your controller actions, add two lines of setup, done.

But what about APIs you don't own?

What about Stripe, GitHub, your internal CRM, a third-party logistics API, a partner's REST service? You have credentials, you have documentation, maybe you have an OpenAPI spec URL — but you don't have source code to decorate with attributes.

That's the problem ZeroMcp.Relay solves.


What It Is

ZeroMcp.Relay is a standalone dotnet tool. You install it globally, point it at one or more OpenAPI spec URLs, configure authentication, and it immediately starts serving those APIs as MCP tools — no code, no compilation, no framework knowledge required.

dotnet tool install -g ZeroMcp.Relay

The command is mcprelay. The concept is simple: ingest a spec, generate tools, relay calls.

The two tools in the ZeroMcp ecosystem cover complementary territory:

ZeroMcp          ← your own ASP.NET Core APIs (in-process, NuGet package)
ZeroMcp.Relay    ← any API with an OpenAPI spec (outbound HTTP relay, dotnet tool)

They speak the same MCP protocol and can be used side-by-side.


Quick Start

# 1. Scaffold a config file
mcprelay configure init

# 2. Add an API
mcprelay configure add -n petstore \
  -s https://petstore3.swagger.io/api/v3/openapi.json

# 3. Run with the visual UI for setup
mcprelay run --enable-ui
# → open http://localhost:5000/ui

# 4. Or run headless for production
mcprelay run

# 5. Or run in stdio mode for Claude Desktop
mcprelay run --stdio

That's the whole workflow. Three commands from nothing to a working MCP server over any documented API.


The Config File

Everything lives in relay.config.json. Here's a realistic multi-API setup:

{
  "$schema": "https://zeromcp.dev/schemas/relay.config.json",
  "serverName": "My API Relay",
  "serverVersion": "1.0.0",
  "apis": [
    {
      "name": "stripe",
      "source": "https://raw.githubusercontent.com/stripe/openapi/master/openapi/spec3.json",
      "prefix": "stripe",
      "auth": {
        "type": "bearer",
        "token": "env:STRIPE_SECRET_KEY"
      },
      "exclude": ["test_helpers.*", "radar.*"]
    },
    {
      "name": "crm",
      "source": "https://internal-crm.company.com/swagger/v1/swagger.json",
      "prefix": "crm",
      "auth": {
        "type": "apikey",
        "header": "X-Api-Key",
        "value": "env:CRM_API_KEY"
      },
      "headers": {
        "X-Tenant-Id": "acme"
      }
    },
    {
      "name": "github",
      "source": "https://github.com/github/rest-api-description/raw/main/descriptions/api.github.com/api.github.com.json",
      "prefix": "gh",
      "auth": {
        "type": "bearer",
        "token": "env:GITHUB_TOKEN"
      }
    }
  ]
}

Notice env:STRIPE_SECRET_KEY — any credential value prefixed with env: is resolved from the environment at startup. If the variable isn't set, that API is disabled with a warning rather than starting with invalid credentials. You can also load a .env file:

mcprelay run --env .env.production

Authentication

Relay supports the auth patterns you'll actually encounter:

Bearer token — Authorization: Bearer {token} on every request.

API key (header) — any named header, e.g. X-Api-Key.

API key (query parameter) — appended to every URL, for APIs that expect ?api_key=....

HTTP Basic — Authorization: Basic {base64(user:pass)}.

None — for public APIs or internal services with network-level auth.

Per-API auth configuration means you can mix all of these in a single relay instance. Your Stripe integration uses bearer, your legacy internal API uses basic auth, your partner's API uses a query parameter — Relay handles each correctly without any cross-contamination.
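The per-request effect of each auth type can be sketched as a function that mutates the outbound headers or URL. The field names mirror the config examples above where possible; the query-parameter and basic-auth field names (`query`, `user`, `password`) are assumptions for illustration, not Relay's documented schema:

```python
import base64
from urllib.parse import urlencode

def apply_auth(url: str, headers: dict, auth: dict) -> str:
    """Apply one API's auth config to an outbound request (illustrative sketch)."""
    kind = auth.get("type", "none")
    if kind == "bearer":
        headers["Authorization"] = f"Bearer {auth['token']}"
    elif kind == "apikey" and "header" in auth:
        headers[auth["header"]] = auth["value"]       # e.g. X-Api-Key
    elif kind == "apikey" and "query" in auth:
        sep = "&" if "?" in url else "?"              # append ?api_key=...
        url = f"{url}{sep}{urlencode({auth['query']: auth['value']})}"
    elif kind == "basic":
        creds = base64.b64encode(
            f"{auth['user']}:{auth['password']}".encode()).decode()
        headers["Authorization"] = f"Basic {creds}"
    return url  # "none": request is unchanged

headers: dict = {}
apply_auth("https://api.example.com/v1/charges", headers,
           {"type": "bearer", "token": "sk_live_x"})
print(headers["Authorization"])  # → Bearer sk_live_x
```

Because the config is per-API, each relayed request only ever sees its own API's credentials.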


How Tool Names Work

Tool names are {prefix}_{operationId}, lowercased, with non-alphanumeric characters replaced by underscores:

operationId: ChargeCreate  →  stripe_charge_create
operationId: GetCustomer   →  crm_get_customer

Operations without an operationId (which is unfortunately common) get a generated name from the HTTP method and path:

GET  /customers/{id}  →  crm_get_customers_id
POST /orders          →  crm_post_orders

The prefix is the key to multi-API sanity. When an LLM sees stripe_charge_create and crm_get_customer and gh_repos_list, it has an immediate signal about which system each tool belongs to. Duplicate prefixes cause a startup error — you can't accidentally collide two APIs' tool namespaces.


Include / Exclude Filtering

Real-world OpenAPI specs are big. Stripe's spec has hundreds of operations. GitHub's has over 500. You probably don't want to expose all of them as MCP tools — both because it's overwhelming for the LLM and because some operations shouldn't be accessible from an AI client at all.

Use glob patterns to control exactly what gets exposed:

{
  "name": "stripe",
  "include": ["stripe_charge_*", "stripe_customer_*", "stripe_invoice_*"],
  "exclude": ["stripe_*_test_*", "stripe_radar_*"]
}

include (if non-empty) is a whitelist. exclude then removes from whatever include allows. Both empty means all operations are included.

This is also where you enforce boundaries. An AI client probably shouldn't have access to your billing API's deletion endpoints, your admin reporting operations, or anything that touches test data. Exclude them at the Relay level and they don't exist as far as the LLM is concerned.
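The include-then-exclude semantics amount to a short predicate. A sketch using Python's shell-style glob matching (`is_exposed` is an illustrative name; Relay's actual glob dialect may differ in edge cases):

```python
from fnmatch import fnmatch

def is_exposed(tool: str, include: list[str], exclude: list[str]) -> bool:
    """include (if non-empty) is a whitelist; exclude then removes from
    whatever include allows. Both empty -> everything is exposed."""
    if include and not any(fnmatch(tool, p) for p in include):
        return False
    return not any(fnmatch(tool, p) for p in exclude)

include = ["stripe_charge_*", "stripe_customer_*", "stripe_invoice_*"]
exclude = ["stripe_*_test_*", "stripe_radar_*"]
print(is_exposed("stripe_charge_create", include, exclude))      # → True
print(is_exposed("stripe_charge_test_clock", include, exclude))  # → False (excluded)
print(is_exposed("stripe_payout_create", include, exclude))      # → False (not included)
```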


The Config UI

When you pass --enable-ui, Relay serves a visual interface at /ui for managing your configuration:

mcprelay run --enable-ui

The UI is deliberately opt-in and only available when explicitly enabled. A plain mcprelay run doesn't register the endpoint at all — not even a 404. This matters for production deployments where you don't want a management interface exposed.

The intended workflow is: use the UI during local setup to configure your APIs and get everything working, then run without --enable-ui in production.

What the UI gives you:

Adding a new API walks you through name, spec URL, auth, prefix, timeout, and include/exclude patterns. The standout feature is Fetch Spec: before you save, you can preview the spec — title, version, operation count, and any warnings (missing operationIds, malformed schemas, unresolvable $refs). You know exactly what you're getting before it's committed to config.

The tool browser lets you search and filter across all configured APIs, click into any tool to see its full input schema, and invoke tools directly from the UI — fill in arguments, hit execute, see the raw response. This is the same "Swagger UI for MCP" experience that the Tool Inspector brings to ZeroMcp proper, now available for externally-relayed APIs too.

Status indicators on the API list give you immediate visibility: green for loaded and healthy, yellow for disabled, red for spec fetch failure or missing auth credentials.


Two Modes: HTTP and stdio

HTTP Server Mode

The default. Relay starts an ASP.NET Core server and exposes:

POST /mcp          JSON-RPC 2.0 — MCP protocol
GET  /mcp          Server info
GET  /mcp/tools    Tool list JSON
GET  /health       Per-API status and tool counts
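Since POST /mcp speaks plain JSON-RPC 2.0, any HTTP client can drive it. A minimal sketch of the request envelopes MCP uses — the tool name and arguments here are illustrative, and the exact result shape depends on the relayed API:

```python
import json

def jsonrpc_call(method: str, params: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for POST /mcp (illustrative sketch)."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

# Enumerate the relayed tools...
list_req = jsonrpc_call("tools/list", {})
# ...or invoke one by its prefixed name
call_req = jsonrpc_call("tools/call", {
    "name": "crm_get_customer",
    "arguments": {"id": "cus_123"},
})
print(call_req)
```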

The health endpoint is worth highlighting:

{
  "status": "degraded",
  "apis": [
    { "name": "stripe",    "status": "ok",    "toolCount": 147 },
    { "name": "crm",       "status": "ok",    "toolCount": 34  },
    { "name": "logistics", "status": "error", "error": "Spec fetch failed" }
  ],
  "totalTools": 181
}

If one spec fails to load, the others keep working. degraded means some APIs are unavailable; error means all of them are. You can wire this into your existing health monitoring infrastructure — it's just an HTTP endpoint.
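The aggregation rule — ok, degraded, or error — reduces to a count over the per-API statuses. A sketch of that rule as described above (`overall_status` is a hypothetical helper):

```python
def overall_status(apis: list[dict]) -> str:
    """"ok" if every API loaded, "degraded" if some failed, "error" if all did."""
    failed = sum(1 for a in apis if a["status"] != "ok")
    if failed == 0:
        return "ok"
    return "error" if failed == len(apis) else "degraded"

apis = [
    {"name": "stripe",    "status": "ok",    "toolCount": 147},
    {"name": "crm",       "status": "ok",    "toolCount": 34},
    {"name": "logistics", "status": "error", "error": "Spec fetch failed"},
]
print(overall_status(apis))  # → degraded
```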

stdio Mode

Pass --stdio and Relay reads JSON-RPC from stdin and writes to stdout, with all logging going to stderr. This is what you use for Claude Desktop and Claude Code:

{
  "mcpServers": {
    "relay": {
      "command": "mcprelay",
      "args": ["run", "--stdio"],
      "env": {
        "STRIPE_SECRET_KEY": "sk_live_...",
        "CRM_API_KEY": "..."
      }
    }
  }
}

Or with a project-specific config:

{
  "mcpServers": {
    "relay": {
      "command": "mcprelay",
      "args": [
        "run", "--stdio",
        "--config", "/path/to/project/relay.config.json"
      ]
    }
  }
}

By default, all specs are fetched and validated before Relay starts reading from stdin. If you have many APIs and startup latency is a concern, --lazy defers spec fetching to the first tool call for each API.


CI Validation

One addition I'm particularly happy about: mcprelay validate.

mcprelay validate --strict --config relay.config.json

This fetches all spec URLs, parses them, resolves all environment variable references, and reports problems — missing operationIds, malformed schemas, unresolvable $refs, missing secrets. Exit code 0 on success, 1 on failure.

In a GitHub Actions workflow:

- name: Validate relay config
  run: mcprelay validate --strict --config relay.config.json
  env:
    STRIPE_SECRET_KEY: ${{ secrets.STRIPE_SECRET_KEY }}
    CRM_API_KEY: ${{ secrets.CRM_API_KEY }}

If someone accidentally commits a config pointing at a dead spec URL, or forgets to add a new secret to the CI environment, the pipeline catches it before deployment. This is the kind of thing that tends to get discovered at 2am in production otherwise.


Deployment

Local Developer (stdio)

The simplest path: install globally, point Claude Desktop at your local config.

dotnet tool install -g ZeroMcp.Relay
mcprelay configure init
mcprelay run --enable-ui   # set up your APIs visually
# then add to claude_desktop_config.json with --stdio

Team Server (HTTP)

Run Relay as a shared HTTP server your team's MCP clients connect to. Everyone gets the same API access without managing local credentials.

FROM mcr.microsoft.com/dotnet/sdk:9.0
RUN dotnet tool install -g ZeroMcp.Relay
ENV PATH="$PATH:/root/.dotnet/tools"
COPY relay.config.json /app/relay.config.json
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["mcprelay", "run", "--host", "0.0.0.0", "--port", "8080"]

Then run it with credentials injected from the environment:

docker run -p 8080:8080 \
  -e STRIPE_SECRET_KEY=sk_live_... \
  -e CRM_API_KEY=... \
  myrelay:latest

TLS termination goes in front of Relay (nginx, Caddy, Traefik — whatever you already use).


Filling the stdio Gap in ZeroMcp

There's one more use case for Relay that deserves its own callout, because it's not obvious at first: Relay in stdio mode is also the answer for your own ZeroMcp APIs when you need stdio.

ZeroMcp (the library) speaks Streamable HTTP only. That's intentional — it runs in-process inside your ASP.NET Core app and dispatches through your real pipeline. But it means Claude Desktop and Claude Code, which expect a stdio process, can't connect to it directly.

Relay closes that gap. Point Relay at your own app's /mcp endpoint — the one ZeroMcp exposes — and run Relay in stdio mode:

{
  "mcpServers": {
    "my-api": {
      "command": "mcprelay",
      "args": ["run", "--stdio"]
    }
  }
}

with a relay.config.json entry that points at your own app:
{
  "name": "my-api",
  "source": "http://localhost:5000/mcp/tools",
  "prefix": "myapi",
  "auth": {
    "type": "bearer",
    "token": "env:MY_API_KEY"
  }
}

Relay reads from ZeroMcp's tool inspector endpoint to get the spec, then proxies tool calls through to /mcp. Claude Desktop gets a stdio process; your app gets normal in-process dispatch through its full pipeline. Both sides are happy.

This makes the full picture:

Claude Desktop / Claude Code (stdio)
        │
        ▼
  mcprelay --stdio           ← ZeroMcp.Relay, reads JSON-RPC from stdin
        │
        │  HTTP  POST /mcp
        ▼
  Your ASP.NET Core app      ← ZeroMcp, in-process dispatch
        │
        ▼
  Your controllers / endpoints

The Bigger Picture

ZeroMcp.Relay and ZeroMcp proper solve adjacent but distinct problems — and Relay bridges the one gap ZeroMcp leaves open.

ZeroMcp is about your own APIs: your source code, your pipeline, your auth filters running in-process. The value is zero duplication and full fidelity to your existing implementation. Its constraint is transport: HTTP only.

ZeroMcp.Relay is about reach: any API with an OpenAPI spec, any transport. Third-party services, internal APIs you don't own, and — via the stdio bridge pattern above — your own ZeroMcp APIs when you need to connect a stdio client.

Together, they cover the full range. Build your internal capabilities with ZeroMcp. Connect to external services and stdio clients with ZeroMcp.Relay. Use both from the same MCP client.


Get Started

This is early — v0.1.1 — and there will be rough edges, especially around OpenAPI specs that use unusual patterns or non-standard extensions. If something doesn't work, open an issue with the spec URL (or a minimal reproduction) and I'll take a look.


Tags: #mcp #dotnet #webdev #llm #openapi
