You know that moment where you find the perfect API for your agent, and then you spend the next 45 minutes manually writing tool definitions for it?
Mapping every parameter. Copying descriptions from the docs. Realizing you forgot the request body schema. Debugging why the agent keeps hallucinating a parameter that doesn't exist.
I kept doing this. Over and over. For every API. And at some point I thought — the machine-readable spec is right there. Why am I doing this by hand?
So I built Ruah Convert.
What it does
Feed it an OpenAPI spec. Get MCP tool definitions out.
npx @ruah-dev/conv generate petstore.yaml --json
That's it. One command. You get a JSON array of MCP-compatible tools with names, descriptions, and input schemas — ready to drop into any agent framework.
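For reference, a generated definition follows the standard MCP tool shape (name, description, inputSchema, where inputSchema is JSON Schema). The snippet below is illustrative; the actual names and descriptions come from your spec:

```json
{
  "name": "getPet",
  "description": "Returns a single pet by ID.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "petId": { "type": "string", "description": "ID of the pet to fetch" }
    },
    "required": ["petId"]
  }
}
```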
But first, let me inspect
Before generating anything, you probably want to know what's in the spec. The inspect command gives you a quick summary:
npx @ruah-dev/conv inspect petstore.yaml
API Spec Summary
──────────────────────────────────────────────────
Title: Petstore API
Version: 1.0.0
Format: openapi-3.0
Base URL: https://api.petstore.example.com/v1
Auth Schemes (1)
• apiKeyAuth: apiKey (X-API-Key)
Tools (4)
• listPets GET /pets (2 params)
• createPet POST /pets (0 params +body) [moderate]
• getPet GET /pets/{petId} (1 params)
• deletePet DELETE /pets/{petId} (1 params) [destructive]
Types (3)
Pet, NewPet, Error
Every endpoint becomes a tool. Every parameter maps into the input schema. And notice the [moderate] and [destructive] tags — more on that in a second.
Risk classification
This is my favorite part. Every generated tool gets a risk level based on its HTTP method:
| Method | Risk | Why |
|---|---|---|
| GET, HEAD, OPTIONS | safe | Read-only, no side effects |
| POST | moderate | Creates resources |
| PUT, PATCH | moderate | Modifies resources |
| DELETE | destructive | Removes resources |
Your agent knows which tools are dangerous before it calls them. If you're building guardrails or approval workflows, this gives you the classification for free.
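The mapping is simple enough to sketch yourself. Here is a minimal version (a hypothetical helper, not Ruah Convert's actual internals; defaulting unknown methods to "moderate" is my assumption):

```typescript
type Risk = "safe" | "moderate" | "destructive";

// Classify an HTTP method into a risk level, mirroring the table above.
// Unknown methods fall back to "moderate" as a conservative middle ground
// (an assumption, not necessarily what Ruah Convert does).
function classifyRisk(method: string): Risk {
  switch (method.toUpperCase()) {
    case "GET":
    case "HEAD":
    case "OPTIONS":
      return "safe"; // read-only, no side effects
    case "DELETE":
      return "destructive"; // removes resources
    default:
      return "moderate"; // POST/PUT/PATCH create or modify resources
  }
}

console.log(classifyRisk("get")); // "safe"
console.log(classifyRisk("DELETE")); // "destructive"
```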
Validation catches problems early
npx @ruah-dev/conv validate petstore.yaml
This checks your spec for things that will trip up LLMs:
- Missing descriptions — LLMs need these to understand what a tool does
- Unresolved `$ref` — broken references mean broken schemas
- Duplicate tool names — two operations with the same name = confusion
- No operations — a spec with no paths has nothing to convert
Warnings are advisory. The tool still generates output, but you know what to fix.
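Checks like these are easy to reason about in isolation. A rough sketch of the duplicate-name and missing-description rules (assumed shapes, not the package's real IR types):

```typescript
// Simplified stand-in for a parsed tool; the real IR carries more fields.
interface ToolDef {
  name: string;
  description?: string;
}

// Collect advisory warnings without blocking generation,
// mirroring the validate command's behavior described above.
function validateTools(tools: ToolDef[]): string[] {
  const warnings: string[] = [];
  const seen = new Set<string>();
  for (const tool of tools) {
    if (!tool.description) {
      warnings.push(`missing description: ${tool.name}`);
    }
    if (seen.has(tool.name)) {
      warnings.push(`duplicate tool name: ${tool.name}`);
    }
    seen.add(tool.name);
  }
  if (tools.length === 0) warnings.push("no operations to convert");
  return warnings;
}
```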
How it works under the hood
Most converters are one-to-one: OpenAPI in, one specific format out. That works until you need to support Postman collections, or GraphQL, or output in OpenAI's function calling format instead of MCP.
Ruah Convert uses an intermediate representation called the Ruah Tool Schema:
Input Parsers IR Output Generators
───────────── ────────────── ─────────────────────
OpenAPI 3.x ──┐ ┌── MCP Tool Definitions ✓
Swagger 2.0 ──┤ ├── MCP Server (TS)
Postman v2.1 ──┼──→ Ruah Tool Schema ──┼── Function Calling (OpenAI)
GraphQL SDL ──┤ (canonical IR) ├── Function Calling (Anthropic)
HAR files ──┘ └── A2A service wrapper
Every input parser normalizes to the IR. Every output generator reads from it. Adding a new input format means writing one parser. Adding a new output format means writing one generator. Never N×M.
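The N+M structure falls out of a simple registry pattern. A toy sketch of the idea (hypothetical names and a drastically simplified IR, not the library's actual API):

```typescript
// A minimal IR: just enough to show the hub-and-spoke shape.
interface ToolIR {
  tools: { name: string; method: string; path: string }[];
}

type Parser = (input: string) => ToolIR;
type Generator = (ir: ToolIR) => string;

const parsers = new Map<string, Parser>();
const generators = new Map<string, Generator>();

// Adding a format touches exactly one map, never the other side.
parsers.set("openapi", (input) => ({ tools: JSON.parse(input) }));
generators.set("mcp-tool-defs", (ir) =>
  JSON.stringify(ir.tools.map((t) => ({ name: t.name })))
);

function convert(from: string, to: string, input: string): string {
  const parse = parsers.get(from);
  const generate = generators.get(to);
  if (!parse || !generate) throw new Error("unknown format");
  return generate(parse(input)); // always routed through the IR
}
```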
The IR is fully inspectable — `npx @ruah-dev/conv inspect ./spec.yaml --json` dumps the whole thing as JSON, so you can pipe it into whatever you want.
Programmatic API
Not just a CLI. You can use it in your own code:
import { parse, validateIR, generate } from "@ruah-dev/conv";
const ir = parse("./petstore.yaml");
const warnings = validateIR(ir);
const result = generate("mcp-tool-defs", ir);
console.log(result.files[0].content);
What ships today (v0.1)
- OpenAPI 3.0 / 3.1 parser (YAML and JSON)
- Ruah Tool Schema intermediate representation
- MCP tool definitions output (JSON)
- Risk classification per tool
- IR validation with warnings
- CLI: `generate`, `inspect`, `validate`, `targets`
- Programmatic API
What's next
- v0.2: Full MCP TypeScript server scaffold, Swagger 2.0 auto-upgrade, Postman collection parser
- v0.3: MCP Python server (FastMCP), OpenAI + Anthropic function calling schemas, GraphQL SDL parser
Try it
# Run without installing
npx @ruah-dev/conv inspect your-api-spec.yaml
# Or install globally
npm install -g @ruah-dev/conv
One runtime dependency (yaml). Node 18+. MIT licensed.
GitHub: github.com/ruah-dev/ruah-conv
npm: @ruah-dev/conv
This is the first tool in the Ruah ecosystem — an open-source toolchain for building, running, and managing agentic AI systems. Orchestration, planning, safety, observability — all composable, all standalone.
If you're building with agents, I'd love to hear what you think. What APIs are you wiring up? What output formats do you need? Drop a comment or open an issue.