DEV Community

Kyle Fuehri
I Built an Open-Source Tool That Turns Any REST API into an MCP Server — No Code Required

Every AI agent builder hits the same wall.

You want Claude, Cursor, or Copilot to call the Stripe API. Or the GitHub API. Or your company's internal API. So you sit down and write an MCP server — a bridge between your agent and the REST endpoint.

You write the tool definitions by hand. You map every parameter. You handle auth, rate limiting, error codes. You write tests. And then you do it all over again for the next API.

I got tired of writing glue code. So I built APIFold — an open-source platform that takes any OpenAPI spec and turns it into a live, production-ready MCP server endpoint. No code required.

Paste a spec URL. Get a working MCP server. Connect your agent.


The Problem in 30 Seconds

MCP (Model Context Protocol) is how AI agents talk to external tools. It's a great standard — but connecting a REST API to MCP today means:

  1. Reading the API docs
  2. Manually defining every tool (endpoint) with its parameters
  3. Writing the HTTP proxy logic
  4. Handling authentication injection
  5. Adding rate limiting, circuit breakers, error handling
  6. Deploying and maintaining the server

For one API, that's a weekend project. For ten APIs, it's a full-time job.
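To make that cost concrete, here is roughly what a single hand-written tool definition looks like for one endpoint. This is a hypothetical sketch (the names are illustrative, not APIFold output):

```typescript
// One hand-written MCP tool definition for a single Stripe endpoint.
// Multiply this by every endpoint in the API, and keep it in sync
// as the API evolves.
const getCustomerTool = {
  name: "stripe_getCustomer",
  description: "Retrieve a Stripe customer by ID",
  inputSchema: {
    type: "object",
    properties: {
      customer_id: {
        type: "string",
        description: "Customer ID, e.g. cus_123",
      },
    },
    required: ["customer_id"],
  },
};
```

And that is just the declaration. You still need the proxy logic, auth injection, and error handling behind each tool.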


What APIFold Does

APIFold automates every step:

```shell
# Self-hosted: one command
docker compose up -d

# Or use the hosted version at apifold.dev
```
| Step | What Happens |
|------|--------------|
| 1. Import | Paste your OpenAPI/Swagger spec URL or upload a file |
| 2. Transform | APIFold parses every operation, resolves `$ref` chains, handles `allOf`/`oneOf`/`anyOf` composition, and generates MCP tool definitions |
| 3. Connect | You get a live SSE endpoint; point Claude Desktop, Cursor, or any MCP client at it |

That's it. Your agent can now call every endpoint in the API.
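For reference, connecting a client usually means adding an entry like this to its MCP configuration. The exact shape varies by client (Claude Desktop, Cursor, etc.), so treat this as an illustrative sketch with a placeholder host:

```typescript
// Hypothetical MCP client configuration pointing at an APIFold SSE endpoint.
// Field names differ between clients; check your client's docs for the
// exact shape.
const mcpConfig = {
  mcpServers: {
    "my-stripe-server": {
      url: "https://your-apifold-host/mcp/my-stripe-server/sse",
    },
  },
};
```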


What's Under the Hood

APIFold is a TypeScript monorepo with four components:

1. The Transformer (@apifold/transformer)

The core engine is an MIT-licensed npm package. It's a pure function — no side effects, no network calls, no runtime dependencies. Feed it an OpenAPI 3.0/3.1 spec, get back MCP tool definitions.

```typescript
import { parseSpec } from '@apifold/transformer';

const spec = await fetch('https://api.example.com/openapi.json')
  .then(r => r.json());

const tools = parseSpec(spec);
// → Array of MCP tool definitions with inputSchema,
//   parameter mapping, and metadata
```

It handles the gnarly stuff:

  • Circular $ref resolution with cycle detection
  • Schema composition (allOf, oneOf, anyOf)
  • Operation name sanitization and collision deduplication
  • Parameter mapping across path, query, header, and request body

Tested against 10+ real-world API specs (Stripe, GitHub, Twilio, OpenAI, and more) with 95%+ coverage.
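To illustrate why circular `$ref` resolution needs cycle detection, here is a minimal path-based sketch. It is illustrative only, not the actual `@apifold/transformer` internals:

```typescript
// Cycle-safe resolution of local $ref pointers. Tracks refs on the
// current resolution path; a repeat means a cycle, so we emit a
// placeholder instead of recursing forever.
type Schema = Record<string, unknown>;

function resolveRefs(node: unknown, root: Schema, seen: Set<string> = new Set()): unknown {
  if (node === null || typeof node !== "object") return node;
  if (Array.isArray(node)) return node.map((item) => resolveRefs(item, root, seen));
  const obj = node as Schema;
  const ref = obj.$ref;
  if (typeof ref === "string" && ref.startsWith("#/")) {
    if (seen.has(ref)) return { description: `circular reference to ${ref}` };
    // Walk the JSON pointer, e.g. "#/components/schemas/Node".
    const target = ref
      .slice(2)
      .split("/")
      .reduce<unknown>((acc, key) => (acc as Schema | undefined)?.[key], root);
    return resolveRefs(target, root, new Set(seen).add(ref));
  }
  // Plain object: resolve children recursively.
  const out: Schema = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = resolveRefs(value, root, seen);
  }
  return out;
}
```

The path-based set (copied per branch, not shared globally) matters: two sibling properties referencing the same schema are fine, but a schema referencing itself must terminate.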

You can use the transformer standalone — it's MIT licensed and works in browsers, Node, edge runtimes, wherever.

2. The MCP Runtime

An Express server that hosts live MCP endpoints. For each server you create, you get:

```text
GET  /mcp/my-stripe-server/sse          # SSE connection
POST /mcp/my-stripe-server/sse/message  # Tool calls
```

What it handles for you: credential injection, per-server rate limiting (Redis-backed sliding window), circuit breakers for upstream resilience, and hot-reload via Redis pub/sub — no restarts needed.
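The sliding-window idea can be sketched in a few lines. This version uses an in-memory store for illustration; as noted above, the real limiter is Redis-backed:

```typescript
// Sliding-window rate limiter: keep timestamps of recent hits per key,
// drop anything older than the window, and reject once the window is full.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

A Redis version replaces the `Map` with a sorted set per key (`ZREMRANGEBYSCORE` to expire, `ZCARD` to count), which is what makes the limit consistent across runtime instances.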

Internally, it uses a tiered context loading architecture: L0 in-memory registry for hot servers, L1 tool cache, L2 credential cache. This keeps memory predictable even at scale.
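The lookup pattern behind a tiered architecture like that is a read-through cache: check the fastest layer first, fall back, and promote on a hit. A generic sketch of the pattern (not APIFold's actual internals):

```typescript
// Read-through lookup across ordered cache tiers (fastest first).
// On a hit in a slower tier, the value is promoted into the faster
// tiers so the next lookup is cheap.
interface Tier<T> {
  get(key: string): Promise<T | undefined>;
  set(key: string, value: T): Promise<void>;
}

async function tieredGet<T>(key: string, tiers: Tier<T>[]): Promise<T | undefined> {
  for (let i = 0; i < tiers.length; i++) {
    const value = await tiers[i].get(key);
    if (value !== undefined) {
      // Promote into every faster tier we already missed.
      for (let j = 0; j < i; j++) await tiers[j].set(key, value);
      return value;
    }
  }
  return undefined;
}
```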

3. The Dashboard

A Next.js 15 app where you manage everything:

  • Import specs from URL or file
  • Configure servers — auth mode, base URL, rate limits
  • Enable/disable individual tools per server
  • Test tool calls in an interactive console with JSON Schema form generation
  • View request logs with filtering and syntax highlighting
  • Export standalone code — generate a self-contained TypeScript MCP server you can deploy anywhere

4. Security Built In

| Concern | How It's Handled |
|---------|------------------|
| Credentials | AES-256-GCM encryption at rest (PBKDF2-derived keys) |
| SSRF | DNS resolution checks + private IP blocking on spec URL fetching |
| SQL injection | Parameterized queries via Drizzle ORM |
| Access control | Row-level filtering by userId on every query |
| Rate limiting | Dual-layer: nginx (per-IP) + application (per-server) |
| Scanning | Trivy, gitleaks, and npm audit in CI |
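For readers unfamiliar with that credential scheme, here is a minimal Node sketch of PBKDF2 key derivation plus AES-256-GCM. The iteration count and salt size are my illustrative choices, not necessarily what APIFold ships:

```typescript
import { pbkdf2Sync, randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Derive a 32-byte key (AES-256) from a passphrase and per-record salt.
function deriveKey(passphrase: string, salt: Buffer): Buffer {
  return pbkdf2Sync(passphrase, salt, 600_000, 32, "sha256");
}

// Encrypt with a fresh random 96-bit IV (the recommended size for GCM)
// and keep the auth tag so tampering is detected on decrypt.
function encrypt(plaintext: string, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(payload: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, payload.iv);
  decipher.setAuthTag(payload.tag);
  return Buffer.concat([decipher.update(payload.data), decipher.final()]).toString("utf8");
}
```

The important details are the unique IV per encryption and the authenticated tag: GCM fails closed on decryption if either the ciphertext or the tag was modified.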

The Tech Stack

| Layer | Choice |
|-------|--------|
| Framework | Next.js 15 (App Router) + Express |
| Database | PostgreSQL 16 + Drizzle ORM |
| Cache / PubSub | Redis 7 |
| Auth | Clerk |
| Billing | Stripe |
| UI | Shadcn/UI + Tailwind |
| Docs | Fumadocs (MDX, integrated at /docs) |
| Testing | Vitest + Playwright (55+ E2E tests) |
| CI/CD | GitHub Actions + Docker + GHCR |
| Deploy | Docker Compose + nginx (SSE-optimized) |

Self-Host or Use the Cloud

APIFold runs on a single ~$5/month VPS:

```yaml
# docker-compose.yml (simplified)
services:
  web:
    image: ghcr.io/work90210/apifold-web:latest
    ports: ["3000:3000"]

  runtime:
    image: ghcr.io/work90210/apifold-runtime:latest
    ports: ["4000:4000"]

  postgres:
    image: postgres:16-alpine

  redis:
    image: redis:7-alpine

  nginx:
    image: nginx:alpine
    ports: ["80:80", "443:443"]
```

One docker compose up -d and you own the entire stack. No vendor lock-in, no data leaving your infrastructure.

Or use the hosted version:

| Plan | Servers | Requests/mo | Log Retention | Price |
|------|---------|-------------|---------------|-------|
| Free | 2 | 1,000 | 7 days | Free |
| Starter | 10 | 50,000 | 30 days | $9/mo |
| Pro | Unlimited | 500,000 | 90 days | $49/mo |
| Enterprise | Unlimited | Custom | Custom | Contact us |

What's Coming Next

The v1 launch covers the full loop: import > configure > connect > test > export. But the roadmap has 10 more features planned:

CLI Tool

```shell
# Zero-config quick start — no dashboard needed
npx apifold serve ./stripe-openapi.yaml

# With options
apifold serve ./spec.json \
  --port 3001 \
  --base-url https://api.stripe.com \
  --auth-header "Authorization: Bearer $STRIPE_KEY" \
  --filter-tags payments,customers
```

A lightweight local MCP server that runs from a spec file. No database, no Redis — just the transformer and an Express server. Available via npm, Homebrew, and standalone binaries.


OAuth 2.0 Support

Today APIFold handles API keys and bearer tokens. OAuth 2.0 is next — with PKCE, automatic token refresh, and pre-configured presets for:

Google | Slack | GitHub | Salesforce | HubSpot | Microsoft Graph | Notion | Spotify

You supply your client ID/secret. APIFold handles the dance.
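The PKCE part of that dance is small but easy to get wrong. Here is a sketch of verifier/challenge generation per RFC 7636 (illustrative, not APIFold's code):

```typescript
import { randomBytes, createHash } from "node:crypto";

// PKCE (RFC 7636): generate a random code verifier and its S256 challenge.
// The challenge goes in the authorization request; the verifier is sent
// later in the token exchange, proving both came from the same client.
function generatePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes -> 43 base64url chars, within the 43-128 char spec range.
  const verifier = randomBytes(32).toString("base64url");
  const challenge = createHash("sha256").update(verifier).digest("base64url");
  return { verifier, challenge };
}
```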


Spec Registry

A curated catalog of pre-validated API specs. Browse a grid of API cards, click "Deploy Stripe", enter your API key, and have a live MCP server in under 30 seconds.

Community contributions welcome.


Analytics Dashboard

The data is already being collected. The analytics dashboard will surface it:

  • Time-series charts — call volume over 24h / 7d / 30d
  • Latency percentiles — p50, p95, p99
  • Error explorer — filter by error code, inspect failed requests
  • Per-tool breakdown — which tools get called most, which are slow
  • CSV export — bring your own BI tools

Webhook-to-MCP Bridge

This is the feature I'm most excited about.

Many APIs are event-driven — Stripe webhooks, GitHub push events, Slack messages. APIFold will map OpenAPI 3.1 webhooks to MCP resources and notifications. Your agent will be able to subscribe to real-time events, not just make requests.

```text
# Agent receives real-time notification when a payment succeeds
webhook://stripe/payment_intent.succeeded
```

Per-provider signature validation included (Stripe, GitHub, Slack).
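Signature validation for these providers generally boils down to a constant-time HMAC comparison. A simplified sketch (real providers add provider-specific header formats, and Stripe in particular binds a timestamp into the signed payload to prevent replay):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an HMAC-SHA256 webhook signature against the raw request body.
// timingSafeEqual avoids leaking how many leading characters matched.
function verifySignature(payload: string, secret: string, signature: string): boolean {
  const expected = createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b);
}
```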


Multi-Spec Composition

A "CRM agent" needs HubSpot + Slack + your internal API. Today, that's three separate MCP connections. Composite servers merge tools from multiple specs into a single endpoint:

```text
# One MCP endpoint. One connection. All your APIs.
stripe_getCustomer
stripe_listInvoices
hubspot_getContacts
hubspot_getDeals
slack_postMessage
```

Drag-and-drop builder in the dashboard. Each tool retains its own upstream URL and credentials.
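Conceptually, composition is just a merge with source-prefixed names so tools from different specs can't collide. A minimal sketch of that idea (a hypothetical helper, not the shipped feature):

```typescript
// Merge tools from multiple specs into one namespace, prefixing each
// tool name with its source spec's key.
interface Tool {
  name: string;
  [key: string]: unknown;
}

function composeTools(sources: Record<string, Tool[]>): Tool[] {
  return Object.entries(sources).flatMap(([prefix, tools]) =>
    tools.map((tool) => ({ ...tool, name: `${prefix}_${tool.name}` })),
  );
}
```

The interesting part in a real implementation is routing: each merged tool must remember its own upstream base URL and credentials, which is exactly what the dashboard builder described above would track per tool.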


More on the Roadmap

| Feature | What It Does |
|---------|--------------|
| Swagger 2.0 auto-conversion | Paste a Swagger 2.0 spec and it auto-converts transparently |
| Access control profiles | "Read Only" shows only GET tools; "Billing" shows only charges + invoices |
| Response schema mapping | Tools include return-type metadata for better agent chaining |
| Streamable HTTP transport | Stateless JSON-RPC endpoint for serverless deployment (Vercel, Lambda) |

Why Open Source?

The transformer is MIT — use it anywhere, no strings attached.

The platform is AGPL-3 — self-host freely, but you can't build a closed-source competing SaaS on top of it. A commercial license is available for orgs that need proprietary modifications.

I believe the best developer tools are open source. You can read every line of code, audit the security model, run it on your own infrastructure, and contribute improvements back.


Try It

  • GitHub: github.com/Work90210/APIFold
  • npm: `npm install @apifold/transformer`
  • Self-host: `docker compose up -d`
  • Docs: available at /docs in any running instance

If you're building AI agents and tired of writing MCP servers by hand, give APIFold a try. Star the repo if it's useful — it helps more than you'd think.


Which upcoming feature matters most to you? Drop a comment below — it directly influences what I build next.

Top comments (3)

Narnaiezzsshaa Truong

The Hidden Risk: “Paste an OpenAPI spec → Get a live MCP server”
This is the part that made my hair stand on end.

The platform:

  • Fetches arbitrary URLs (even with SSRF checks, this is not safe).
  • Parses untrusted schemas.
  • Generates executable tool definitions.
  • Exposes them over a persistent SSE channel.
  • Allows agents to call them with user‑injected credentials.

This is schema‑driven remote code execution in everything but name.

Even with AES‑GCM, rate limits, and Clerk auth, the fundamental model is unsafe:

  • Schema = capability
  • Capability = executable surface
  • Executable surface = attack surface

You cannot “secure” a system whose entire premise is auto‑generating attack surface.

Your article frames security as:

  • AES‑256‑GCM for credentials
  • SSRF checks
  • Row‑level access control
  • Rate limiting
  • CI scanning

All of these are perimeter‑level mitigations, not governance‑layer protections.

None of them address:

  • Capability over‑generation
  • Agent overreach
  • Streaming control‑plane exposure
  • Schema‑driven privilege escalation
  • Multi‑spec composition risks
  • Cross‑spec inference
  • Tool‑surface explosion
  • Deterministic replay of agent actions
  • Reverse‑engineering of upstream API structure
  • Tenant boundary collapse under load or drift
Kyle Fuehri • Edited

Really appreciate you taking the time to write this up. Let me respond point by point, because you raise some concerns worth discussing honestly, and a few that I think deserve a closer look at what's actually in the codebase.

The core thesis: "schema = capability = attack surface"

I agree with this as a general principle. Any system that generates tooling from external input expands its surface area proportionally. That's a real tradeoff, and it's one I think about a lot. But I'd push back on the conclusion that you cannot secure such a system — because the same argument would disqualify API gateways, service meshes, and honestly any infrastructure that dynamically routes based on configuration. The question isn't whether attack surface exists, it's whether it's bounded and governable.

On the "perimeter-only" characterization

This is where I think the critique misses some of what Apifold actually does. You describe the security measures as perimeter-level only, with no governance-layer protections. But:

Capability over-generation — Every tool generated from a spec can be individually enabled or disabled from the dashboard. Disabled tools are completely hidden from AI clients — they don't appear in tools/list responses. This is explicit capability governance, not perimeter defense. You control exactly which endpoints the agent can see.

Agent overreach — The runtime is a proxy, not an executor. It can only call upstream endpoints that the user has imported, enabled, and provided credentials for. An agent can't discover or reach anything beyond what the user explicitly exposed. The blast radius is bounded by design.

Reverse-engineering of upstream API structure — The MCP tools expose what the OpenAPI spec already describes. If the spec is public (Stripe, GitHub, etc.), there's nothing to reverse-engineer. If it's private, the user chose to import it — same trust boundary as giving any client your API docs.

Tenant boundary collapse — Credentials are encrypted per-user with unique 12-byte IVs and AES-256-GCM. Sessions are Redis-scoped. Database access uses parameterized queries via Drizzle ORM. I'd welcome a concrete attack vector here — "under load or drift" isn't something I can patch against without specifics.

On the transformer processing untrusted input

This one I take seriously, and it's where the most engineering effort went. The transformer is a pure-function library (no I/O, no side effects) with defense-in-depth specifically for hostile specs: prototype pollution filtering (__proto__, constructor, prototype), bounded recursion (max 50 levels), memoized $ref resolution capped at 1,000 resolutions, array/object size limits (10,000), glob pattern length caps to prevent ReDoS, and null-prototype objects for user-controlled key maps. It's at 96.5% test coverage across 14 real-world API fixtures. Not claiming it's bulletproof, but "parses untrusted schemas" is a solved problem category when you bound it properly.
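For readers curious what that kind of filtering looks like in practice, here is a minimal sketch of a null-prototype copy with blocked keys. It illustrates the technique described above, not the exact transformer code:

```typescript
// Copy untrusted object keys into a null-prototype object, skipping
// the keys that enable prototype pollution. Object.create(null) means
// the result has no inherited prototype to pollute in the first place.
const BLOCKED_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function safeCopy(input: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = Object.create(null);
  for (const [key, value] of Object.entries(input)) {
    if (BLOCKED_KEYS.has(key)) continue;
    out[key] = value;
  }
  return out;
}
```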

What I genuinely agree is worth building toward

A few of your points are fair calls for future work rather than current vulnerabilities:

  • Deterministic replay / audit trail — Full request-level audit logging with replay capability is on the roadmap. Right now you get method, path, status, and duration in the logs, but not a replayable trace.
  • Multi-spec composition risks — Today each server is one spec, one upstream. Composition across specs isn't supported yet, so the risk doesn't currently apply, but it's worth thinking about before it does.
  • Streaming control-plane exposure — The SSE channel follows the MCP spec (JSON-RPC 2.0 over SSE). There's room for tighter validation on the message framing side, and that's something I'm actively looking at.

The broader framing

I think there's a version of this critique that applies to the entire MCP ecosystem, not just Apifold. Any MCP server exposes capabilities to agents. The value proposition of Apifold is that it does this with governance (per-tool visibility, rate limits, credential isolation, circuit breakers) rather than asking developers to hand-roll it every time. Is it a complete governance framework? Not yet. Is it more governed than the alternative of writing ad-hoc MCP servers with no access control at all? I think so.

The codebase is open — github.com/Work90210/APIFold. Security policy is at docs/SECURITY.md and I genuinely welcome responsible disclosures at security@apifold.dev. If you spot a concrete exploit path, I want to hear about it.

Narnaiezzsshaa Truong

Kyle—really appreciate the thoughtful and transparent reply. It’s clear you’ve put serious engineering discipline into APIFold, and that comes through in every detail you shared.

My earlier comment was coming from a different altitude. The controls you’ve built—encryption, SSRF protection, rate limits, per‑tool visibility, transformer hardening—are all important gateway‑level safeguards. They meaningfully reduce risk for anyone exposing API capabilities to agents.

The concerns I raised live at a different layer: the governance layer, where identity drift, lineage continuity, capability envelopes, and control‑plane stability sit. Those aren’t implementation issues so much as substrate‑level dynamics that emerge whenever agents are given a dynamically generated capability surface, regardless of how well the perimeter is secured.

For example, bounding recursion prevents parser drift, not agentic drift. The risks I’m describing emerge after the schema is parsed—when the schema becomes capability, and capability becomes behavior.

So my critique wasn’t about APIFold’s engineering quality—it was about the category of problem it participates in. APIFold is a strong gateway. My work focuses on the layer underneath gateways: the physics that make capability exposure governable in the first place.

Appreciate the conversation—it’s rare to see open, constructive dialogue in this space, and you’re building something that clearly resonates with a lot of developers.