Hey folks 👋
I've been deep in the AI tooling world lately, and I kept running into the same annoying problem: connecting LLMs to existing APIs is way harder than it should be.
Every time I wanted an LLM to call a REST API, I had to write a custom MCP server from scratch. Define the tools. Map the parameters. Handle auth. Wire up the HTTP calls. Over and over again.
So I built something to fix that.
Meet mcp-server-openapi
mcp-server-openapi is an open-source Go CLI that automatically converts your OpenAPI spec into MCP tools. If your API has an OpenAPI doc (and let's be honest, most do), you can expose it to any MCP client — Claude Desktop, Cursor, VS Code, you name it — in about 30 seconds.
No boilerplate. No hand-wiring. Just point it at your spec and go.
mcp-server-openapi --spec ./your-api.yaml
That's it. Your API endpoints are now tools that any MCP client can call.
Wait, What's MCP?
If you're new to this — Model Context Protocol (MCP) is an open standard that lets AI assistants interact with external tools and data sources. Think of it as a USB-C port for AI: one standard connection that works everywhere.
MCP servers expose "tools" (basically functions) that an MCP client can discover and call. The problem is, building these servers by hand gets tedious fast — especially when your API already describes itself perfectly through OpenAPI.
That's the gap mcp-server-openapi fills.
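For a feel of what the client actually sees, a tool advertised over MCP's tools/list looks roughly like this (field names follow the MCP spec; the specific values are a made-up example):

```json
{
  "name": "getPetById",
  "description": "Get a pet by ID",
  "inputSchema": {
    "type": "object",
    "properties": {
      "petId": { "type": "integer" }
    },
    "required": ["petId"]
  }
}
```

Notice how every field here already exists in a typical OpenAPI operation. That overlap is the whole idea.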
The Processing Pipeline
OpenAPI Spec          mcp-server-openapi               MCP Client
------------  -----------------------------------     -------------
+---------+   +-------+   +--------+   +--------+     +-----------+
|  YAML/  |-->| Parse |-->| Filter |-->| Schema |---->| Discovers |
|  JSON   |   |       |   |        |   |  Gen   |     |   Tools   |
+---------+   +-------+   +--------+   +--------+     +-----+-----+
                                                            |
               +--------------------------+                 |
               | When tool is called:     |<----------------+
               |                          |
               | 1. Map args -> HTTP req  |
               | 2. Inject auth headers   |
               | 3. Execute request       |---> Your API
               | 4. Return response       |<--- Response
               +--------------------------+
How It Works Step by Step
- Parses your spec using kin-openapi — the gold standard for OpenAPI parsing in Go. Full $ref resolution, oneOf/anyOf/allOf — the works.
- Filters operations — only endpoints tagged with mcp get exposed (you choose what's visible).
- Generates JSON Schema for each tool's input — path params, query params, headers, and body are all mapped automatically.
- Serves tools via stdio or Streamable HTTP using the mcp-go SDK.
- Executes HTTP requests when an MCP client calls a tool, mapping responses back cleanly.
The key insight: your OpenAPI spec already has everything needed to generate MCP tools — parameter types, descriptions, required fields, request bodies. Why write it twice?
Quick Start (5 Minutes)
1. Install
go install github.com/soyvural/mcp-server-openapi/cmd/mcp-server-openapi@latest
2. Tag Your Endpoints
Add the mcp tag to any operation you want to expose:
paths:
/pets/{petId}:
get:
tags:
- mcp # <-- This is the magic switch
operationId: getPetById
summary: Get a pet by ID
parameters:
- name: petId
in: path
required: true
schema:
type: integer
3. Configure Your MCP Client
Add this to your MCP client config (works with any MCP-compatible app — Claude Desktop, Cursor, etc.):
{
"mcpServers": {
"my-api": {
"command": "mcp-server-openapi",
"args": ["--spec", "/path/to/your/openapi.yaml"]
}
}
}
Restart your MCP client, and your API tools show up automatically. Done. 🎉
The x-mcp Extensions (My Favorite Part)
You can fine-tune how your API appears to the MCP client using OpenAPI vendor extensions:
/users/{id}:
get:
tags: [mcp]
operationId: getUserById
x-mcp-tool-name: get_user # Friendlier name
x-mcp-description: | # AI-optimized description
Fetch detailed user info including
profile, settings, and activity.
/internal/health:
get:
tags: [mcp]
x-mcp-hidden: true # Hide from AI
/debug/stats:
get:
tags: [internal] # No mcp tag, but...
x-mcp-hidden: false # Force visible anyway
Here's what each extension does:
+-------------------+----------+--------------------------------+
| Extension | Type | What It Does |
+-------------------+----------+--------------------------------+
| x-mcp-tool-name | string | Override the generated name |
| x-mcp-description | string | Write AI-friendly description |
| x-mcp-hidden | boolean | Show/hide regardless of tags |
+-------------------+----------+--------------------------------+
Think of x-mcp-description as writing instructions specifically for the AI. Your OpenAPI summary might say "Get user by ID" (fine for docs), but the MCP description can add context that helps the LLM make smarter decisions about when and how to use the tool.
Authentication Built In
Most real APIs need auth. We've got you covered:
Bearer token:
export GITHUB_TOKEN="ghp_..."
mcp-server-openapi \
--spec ./github-api.yaml \
--auth-type bearer \
--auth-token-env GITHUB_TOKEN
API key (header or query):
export MY_KEY="sk_..."
mcp-server-openapi \
--spec ./api.yaml \
--auth-type api-key \
--auth-key-env MY_KEY \
--auth-key-name X-API-Key \
--auth-key-in header
Credentials stay in environment variables — never in config files or CLI args. 🔒
Error Handling That Makes Sense
When things go wrong (and they will), error mapping is clear:
+----------------+---------------------------------------------+
| What Happened | What the MCP Client Sees |
+----------------+---------------------------------------------+
| 2xx response | The actual response body (success!) |
| 400-499 error | Error with status code + response body |
| 500-599 error | "Upstream server error" + status code |
| Timeout | "Request timed out" |
| Can't connect | "Failed to connect to upstream" |
+----------------+---------------------------------------------+
All errors include enough context for the AI to understand what went wrong and communicate it to the user. No mysterious failures.
Docker Support
If you prefer containers:
docker run --rm -i \
-v $(pwd)/specs:/specs \
mcp-server-openapi --spec /specs/my-api.yaml
Works great for teams who want a standardized MCP setup across environments.
Why I Built This
Honestly? Laziness. The good kind.
I was working on a project where I needed my AI tools to interact with about a dozen internal APIs. Writing a custom MCP server for each one felt insane. The specs were already there. The parameter types were already documented. The auth was already defined.
I just needed something to connect the dots.
Now, adding a new API to my MCP setup takes me less than a minute: tag the endpoints, point the server at the spec, done.
What's Coming Next
The project is MIT-licensed and open for contributions. Here's what's on the roadmap:
- 🔐 OAuth2 client credentials flow
- 📄 Config file support (YAML)
- 🔄 SSE transport
- ⚠️
x-mcp-confirmextension for destructive operations - 🔥 Spec hot-reload
- ⏱️ Rate limiting and retry policies
If any of that sounds interesting, PRs are welcome!
Try It Out
go install github.com/soyvural/mcp-server-openapi/cmd/mcp-server-openapi@latest
mcp-server-openapi --spec examples/petstore/petstore.yaml
I'd love to hear what APIs you connect with this. Drop a comment or open an issue — always happy to chat.
Happy building! 🚀