
lorenzosaraiva

How I Turned 1,079 GitHub API Endpoints into 25 AI-Ready Tools

If you've tried connecting an AI assistant to a REST API through MCP (Model Context Protocol), you've probably hit the same wall I did: the tooling either doesn't exist for the API you need, or it generates hundreds of tools that make the LLM choke.

I built MCPForge to fix that. It's a CLI that takes any OpenAPI spec and generates a production-ready MCP server, with an AI optimization layer that curates endpoints down to the ones that actually matter.

The Problem

MCP is blowing up. Claude Desktop, Cursor, and a growing list of AI tools support it. But if you want to connect one of these tools to a REST API, you have two options:

Option 1: Write an MCP server by hand. Define every tool, write HTTP handlers, wire up auth, handle errors, write a README. Hours of boilerplate per API.

Option 2: Auto-generate from the OpenAPI spec. Tools like FastMCP and Stainless can do this. But the output is a 1:1 mapping of endpoints to tools. A big API like GitHub has over 1,000 endpoints. Dumping 1,000 tools on an LLM doesn't work. The context window fills up, tool selection gets confused, and the results are garbage.

Even FastMCP's own documentation says auto-generated servers are better for prototyping than production use with LLMs.
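To see why a 1:1 mapping falls over, it helps to put rough numbers on it. The per-tool figure below is my own back-of-the-envelope assumption (a serialized tool definition with name, description, and JSON schema often lands somewhere in the low hundreds of tokens), not a measurement from MCPForge:

```typescript
// Rough assumption: each tool definition (name + description + JSON
// schema) costs on the order of 150 tokens once serialized into the
// model's context. The exact number varies a lot by API.
const TOKENS_PER_TOOL = 150;

// Estimate how much context window N tool definitions consume
// before the conversation even starts.
function contextCostEstimate(toolCount: number): number {
  return toolCount * TOKENS_PER_TOOL;
}

console.log(contextCostEstimate(1079)); // 161850 -- a huge chunk of most context windows
console.log(contextCostEstimate(25));   // 3750 -- negligible
```

Even if the real per-tool cost is half my guess, a thousand tools still burns tens of thousands of tokens on definitions alone, before the model has read a single user message.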

The Fix: AI-Powered Tool Curation

MCPForge adds an intelligence layer between parsing and generation. After reading the OpenAPI spec, it sends the parsed endpoints to Claude and asks it to curate them like a senior API designer would.

The optimizer:

  • Picks the 25 most useful endpoints for typical use cases
  • Rewrites descriptions to be concise and action-oriented
  • Drops noise like health checks, admin routes, and deprecated endpoints
  • Groups related operations when it makes sense

Here are the real numbers from running it on three popular APIs:

API       Raw Endpoints   After Optimization
GitHub    1,079           25
Stripe    587             25
Spotify   97              25

Strict mode (the default) targets 25 or fewer tools. If you need broader coverage, there's a standard mode that caps at 80, or you can set a custom limit with --max-tools.
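The flag semantics above boil down to a small resolution rule; here's a sketch of it (my reconstruction from the described behavior, not MCPForge's source):

```typescript
type Mode = "strict" | "standard";

// Resolve the effective tool-count cap: an explicit --max-tools value
// wins; otherwise strict (the default) caps at 25 and standard at 80.
function resolveMaxTools(mode: Mode = "strict", maxToolsFlag?: number): number {
  if (maxToolsFlag !== undefined) return maxToolsFlag;
  return mode === "strict" ? 25 : 80;
}

console.log(resolveMaxTools());             // 25 (strict default)
console.log(resolveMaxTools("standard"));   // 80
console.log(resolveMaxTools("strict", 40)); // 40 (explicit override)
```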

How It Works

The whole flow is one command:

npx mcpforge init --optimize https://api.example.com/openapi.json

It parses the spec, runs the optimizer, and generates a complete TypeScript MCP server project with auth handling, error handling, and a README that includes copy-paste config for Claude Desktop and Cursor.
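Under the hood, each generated tool ultimately has to map validated arguments onto an HTTP request against the upstream API. Here's a simplified sketch of what that mapping might look like; the `GeneratedTool` shape and the helper are my illustration, not the actual generated code:

```typescript
// Hypothetical shape of a generated tool definition.
interface GeneratedTool {
  name: string;
  description: string;
  method: string;
  path: string; // may contain {placeholders} for path parameters
}

// Build the concrete request URL for a tool call: substitute path
// parameters, then append everything left over as a query string.
function buildRequestUrl(
  baseUrl: string,
  tool: GeneratedTool,
  args: Record<string, string>,
): string {
  let path = tool.path;
  const query = new URLSearchParams();
  for (const [key, value] of Object.entries(args)) {
    const placeholder = `{${key}}`;
    if (path.includes(placeholder)) {
      path = path.replace(placeholder, encodeURIComponent(value));
    } else {
      query.set(key, value);
    }
  }
  const qs = query.toString();
  return baseUrl + path + (qs ? `?${qs}` : "");
}

const listIssues: GeneratedTool = {
  name: "list_issues",
  description: "List issues for a repository",
  method: "GET",
  path: "/repos/{owner}/{repo}/issues",
};

console.log(
  buildRequestUrl("https://api.github.com", listIssues, {
    owner: "octocat",
    repo: "hello-world",
    state: "open",
  }),
);
// https://api.github.com/repos/octocat/hello-world/issues?state=open
```

The generated server wraps logic like this in the MCP tool handlers, along with auth headers and error handling.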

No OpenAPI Spec? No Problem.

A lot of APIs don't publish an OpenAPI spec. MCPForge has a --from-url mode that scrapes API documentation pages and uses Claude to infer the endpoint structure:

npx mcpforge init --from-url https://docs.some-api.com/reference

It's not as accurate as working from a spec, but it gets you 80% of the way there without any manual work.

Detecting Breaking Changes

One thing that came up repeatedly in feedback: what happens when the upstream API changes its spec after you've generated a server?

MCPForge has a diff command that compares the current spec against what was used during generation and flags changes with risk scoring:

  • High risk: removed endpoints, parameter type changes, new required fields
  • Medium risk: response schema changes, deprecations
  • Low risk: new endpoints added, description updates

mcpforge diff
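The three tiers map naturally onto a classifier over diff entries. The entry kinds below are my own naming for illustration, not MCPForge's real diff model:

```typescript
// Hypothetical kinds of spec changes a diff might surface.
type DiffKind =
  | "endpoint_removed"
  | "param_type_changed"
  | "required_field_added"
  | "response_schema_changed"
  | "deprecation"
  | "endpoint_added"
  | "description_updated";

// Classify a diff entry into the risk tiers described above.
function riskLevel(kind: DiffKind): "high" | "medium" | "low" {
  switch (kind) {
    case "endpoint_removed":
    case "param_type_changed":
    case "required_field_added":
      return "high";
    case "response_schema_changed":
    case "deprecation":
      return "medium";
    default:
      return "low";
  }
}

console.log(riskLevel("endpoint_removed")); // "high"
console.log(riskLevel("endpoint_added"));   // "low"
```

The useful part is the asymmetry: additions are safe to ignore, but a removed endpoint or a changed parameter type will break generated tools at call time, so those deserve a loud warning.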

This came directly from Reddit feedback. Someone pointed out that silent spec drift is worse than a loud failure, and they were right.

What I Learned Building This

Feedback-driven development works. I posted on Reddit and got real, specific feedback within hours. "100 tools is still too many" turned into strict mode. "What happens when specs change?" turned into the diff command. Building in public with fast iteration beats planning in private.

The moat isn't the code, it's the curation. The actual code generation is straightforward. The hard part is making the AI optimizer produce genuinely good tool sets. The prompt engineering for that is where most of the iteration went.

LLMs have strong opinions about tool count. Through testing, I found that 20-30 tools is the sweet spot for most MCP clients. Below 15 and you're missing useful functionality. Above 50 and tool selection degrades noticeably.

Try It

# From an OpenAPI spec
npx mcpforge init --optimize https://petstore3.swagger.io/api/v3/openapi.json

# From a docs page
npx mcpforge init --from-url https://jsonplaceholder.typicode.com/

# Inspect a spec before generating
npx mcpforge inspect https://api.example.com/openapi.json

It's open source and MIT licensed: github.com/lorenzosaraiva/mcpforge

If you try it on an API and something breaks, open an issue. The project is a few days old and there are definitely rough edges, especially on large or unusual specs.
