Stripe. GitHub. Twilio. Slack. Notion. Shopify. Discord. SendGrid. Linear. PagerDuty.
None of them ship MCP tool definitions. Every agent developer connecting to these APIs is hand-wiring tools, writing custom auth, and — most dangerously — giving agents full CRUD access with zero risk classification.
These are the APIs that run production systems. If you're connecting them to AI agents without knowing which endpoints are destructive, you're one hallucination away from a very bad day.
## The problem is worse than "no MCP support"
I took the public OpenAPI specs for 10 of the most commonly used APIs and converted them to MCP tool definitions. Then I counted how many destructive operations each one exposes — endpoints that delete data, cancel subscriptions, revoke access, or mutate state irreversibly.
Here's what I found:
| API | Total Endpoints | Safe (GET) | Moderate (POST/PATCH) | Destructive (DELETE) | Official MCP server? |
|---|---|---|---|---|---|
| Stripe | 314 | 104 | 163 | 47 | No |
| GitHub | 347 | 189 | 111 | 47 | Community only |
| Twilio | 215 | 72 | 108 | 35 | No |
| Slack | 168 | 43 | 112 | 13 | No |
| Notion | 47 | 18 | 22 | 7 | Community only |
| Shopify Admin | 280+ | 95 | 130 | 55+ | No |
| Discord | 190+ | 65 | 89 | 36 | No |
| SendGrid | 120+ | 40 | 55 | 25 | No |
| Linear | 50+ (GraphQL) | 15 | 28 | 7 | Community only |
| PagerDuty | 180+ | 78 | 72 | 30 | No |
That's 300+ destructive endpoints across 10 APIs.
When you convert an API spec to MCP tools and hand them all to an agent, those destructive endpoints are mixed in with everything else. The agent doesn't know that delete_customer is fundamentally different from get_customer. They're both just tools.
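To make that concrete, here are two hypothetical MCP-style tool definitions (the names and schemas are my own illustration, not Stripe's actual spec). Structurally, nothing marks one of them as irreversible:

```typescript
// Two hypothetical MCP-style tool definitions. Same shape, same schema;
// only the name and description hint that one is irreversible.
const getCustomer = {
  name: "get_customer",
  description: "Retrieve a customer by ID",
  inputSchema: {
    type: "object",
    properties: { id: { type: "string" } },
    required: ["id"],
  },
};

const deleteCustomer = {
  name: "delete_customer",
  description: "Permanently delete a customer by ID",
  inputSchema: {
    type: "object",
    properties: { id: { type: "string" } },
    required: ["id"],
  },
};

// The input schemas are byte-for-byte identical.
console.log(
  JSON.stringify(getCustomer.inputSchema) ===
    JSON.stringify(deleteCustomer.inputSchema)
); // prints: true
```

An agent choosing tools sees only this metadata, so any risk signal has to be layered on top of it.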
## What actually goes wrong
Stripe: 47 DELETE endpoints include delete_customer, void_invoice, cancel_subscription, and refund_charge. An agent trying to "clean up test data" could cancel live subscriptions.
GitHub: delete_repository, remove_collaborator, transfer_repository, delete_org. An agent asked to "reorganize the GitHub org" has the tools to do irreversible damage.
Shopify: The admin API lets you delete products, cancel orders, and remove customer data. An agent doing "inventory management" has access to endpoints that can wipe your catalog.
Twilio: delete_message, release_phone_number, delete_recording. An agent optimizing your Twilio usage could release production phone numbers.
The pattern is always the same: the API spec doesn't distinguish between "safe to call freely" and "requires human approval." That distinction only exists if you add it.
## What I'm doing about it
I built ruah conv to automate this entire pipeline — and to add the safety layer that API specs don't have.
```shell
npm i -g @ruah-dev/cli
```
Point it at any API spec, get MCP tools with automatic risk classification:
```shell
ruah conv generate stripe-openapi.yaml --target mcp-ts-server

→ 314 tools generated from stripe-openapi.yaml
→ Risk breakdown: 104 safe, 163 moderate, 47 destructive
→ Destructive: delete_customer, cancel_subscription,
  refund_charge, void_invoice... (+43 more)
```
Every tool gets tagged safe, moderate, or destructive based on:
- HTTP method (GET = safe, DELETE = destructive)
- Endpoint patterns (`/cancel`, `/revoke`, `/destroy`, `/remove`)
- Mutation semantics (state changes that can't be undone)
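Those heuristics are simple enough to sketch. Here's a minimal TypeScript re-implementation; the function name and the exact pattern list are my assumptions, and ruah conv's real classifier may weigh more signals:

```typescript
type Risk = "safe" | "moderate" | "destructive";

// Hypothetical pattern list mirroring the heuristics above.
const DESTRUCTIVE_PATTERNS = ["/cancel", "/revoke", "/destroy", "/remove"];

function classify(method: string, path: string): Risk {
  // DELETE is always destructive.
  if (method.toUpperCase() === "DELETE") return "destructive";
  // Some POSTs are destructive too: cancels, revokes, removals.
  if (DESTRUCTIVE_PATTERNS.some((p) => path.includes(p))) return "destructive";
  // Reads are safe.
  if (method.toUpperCase() === "GET") return "safe";
  // Everything else mutates state but is (usually) recoverable.
  return "moderate";
}

console.log(classify("GET", "/v1/customers/{id}"));             // safe
console.log(classify("POST", "/v1/subscriptions/{id}/cancel")); // destructive
console.log(classify("PATCH", "/v1/customers/{id}"));           // moderate
```

Note the ordering: the pattern check runs before the method check, so a POST to a `/cancel` path is caught even though POST alone would be moderate.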
Then you filter before loading:
```shell
# Only generate safe + moderate tools. No destructive.
ruah conv generate stripe-openapi.yaml \
  --target mcp-ts-server \
  --max-risk moderate
```
Now your agent has 267 tools instead of 314, and none of them can delete anything.
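The same ceiling can also be enforced at load time, after generation. A sketch, where `RiskTaggedTool` and the numeric rank ordering are my own illustration rather than ruah conv's actual output shape:

```typescript
type Risk = "safe" | "moderate" | "destructive";

interface RiskTaggedTool {
  name: string;
  risk: Risk;
}

// Order risks so a ceiling comparison is a single lookup.
const RANK: Record<Risk, number> = { safe: 0, moderate: 1, destructive: 2 };

// Drop every tool above the ceiling before handing the list to the agent.
function filterByMaxRisk(tools: RiskTaggedTool[], max: Risk): RiskTaggedTool[] {
  return tools.filter((t) => RANK[t.risk] <= RANK[max]);
}

const tools: RiskTaggedTool[] = [
  { name: "get_customer", risk: "safe" },
  { name: "update_customer", risk: "moderate" },
  { name: "delete_customer", risk: "destructive" },
];

console.log(filterByMaxRisk(tools, "moderate").map((t) => t.name));
// [ 'get_customer', 'update_customer' ]
```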
## The context window problem (bonus finding)
Here's the other thing nobody warned me about:
Each MCP tool definition runs 200–500 tokens. Stripe's 314 endpoints come to roughly 63,000–157,000 tokens just for tool metadata.
Perplexity's CTO flagged this at Ask 2026 — MCP tool descriptions eat 40–50% of available context windows before agents do any actual work.
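The arithmetic behind that cost is just tool count times tokens per definition:

```typescript
// Multiply tool count by the per-definition token range.
function contextCost(
  toolCount: number,
  [minTokens, maxTokens]: [number, number]
): [number, number] {
  return [toolCount * minTokens, toolCount * maxTokens];
}

console.log(contextCost(314, [200, 500])); // [ 62800, 157000 ]
```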
The fix is the same: filter before you load.
```shell
# Only payment-related tools, nothing destructive
ruah conv generate stripe-openapi.yaml \
  --target mcp-ts-server \
  --include-tags payments,charges \
  --max-risk moderate
```
25 tools instead of 314. Your agent is safer AND faster.
## Input formats → Output targets
ruah conv doesn't just handle OpenAPI. These are all the specs I tested with:
Inputs:
- OpenAPI 3.x / Swagger 2.0
- Postman Collection v2.1
- GraphQL SDL (Linear, Shopify Storefront)
- HAR files (recorded browser traffic)
Outputs:
| Target | What you get |
|---|---|
| `mcp-ts-server` | Full TypeScript MCP server scaffold |
| `mcp-py-server` | Full Python MCP server scaffold |
| `mcp-tools` | Just the tool definitions (JSON) |
| `openai` | OpenAI function-calling schema |
| `anthropic` | Anthropic tool schema |
| `a2a` | Agent-to-Agent service wrapper |
Auth normalization, pagination wrappers, retry logic, and dry-run mode included in every output.
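Dry-run mode matters most for the destructive tier. A minimal sketch of how a generated handler could be guarded; the `withDryRun` wrapper and its log format are my assumptions, not ruah conv's actual generated code:

```typescript
type Handler = (args: Record<string, unknown>) => Promise<unknown>;

// Hypothetical guard: in dry-run mode, log the intended call and skip execution.
function withDryRun(name: string, handler: Handler, dryRun: boolean): Handler {
  return async (args) => {
    if (dryRun) {
      console.log(`[dry-run] ${name}(${JSON.stringify(args)}) was NOT executed`);
      return { dryRun: true, tool: name, args };
    }
    return handler(args);
  };
}

// Usage: wrap a destructive handler before registering it with the server.
const guardedDelete = withDryRun(
  "delete_customer",
  async (args) => ({ deleted: args.id }),
  true // dry-run on
);
guardedDelete({ id: "cus_123" });
```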
## Try it
```shell
npm i -g @ruah-dev/cli
ruah conv generate your-spec.yaml --target mcp-ts-server
```
Node.js 18+. One dependency. MIT licensed.
GitHub: github.com/ruah-dev
Docs: ruah.sh
The converter is part of a larger toolchain (orchestrator for parallel agents, optimizer for cost tracking), but it works completely standalone. No lock-in, no account, no cloud dependency.