DEV Community

Rajeev Ramani

Originally published at rajeevramani.substack.com

MCP Made Me Rethink Who My Software Serves

_A close-up view of a green plant stem. Photo by notorious v1ruS on Unsplash._

The Hard Part of MCP Isn’t the Protocol

The Model Context Protocol is everywhere. Claude Desktop, Cursor, Windsurf—everyone’s racing to connect AI to tools.

I spent the last few weeks adding MCP to Flowplane. What started as “expose some endpoints to Claude” became something I didn’t expect: a complete rethink of who my software serves.

Two Audiences, Two Servers

MCP itself is simple—JSON-RPC 2.0, well-defined message types. I had it working in a day.

The hard part wasn’t wiring up the protocol. It was deciding what to expose, and how much metadata an agent would need to use the platform effectively.
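For context on why the protocol itself took only a day: an MCP tool invocation is plain JSON-RPC 2.0. A minimal sketch, where the `tools/call` method and `params` shape follow the MCP specification but the tool name and arguments are hypothetical:

```python
import json

# A minimal MCP tool invocation: just a JSON-RPC 2.0 request.
# "tools/call" is the MCP method name; "getUser" and its
# arguments are illustrative, not a real Flowplane tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "getUser",
        "arguments": {"id": "123"},
    },
}

# This is the entire wire format the server has to parse.
wire = json.dumps(request)
print(wire)
```

Everything interesting lives in `params`: which tool, with which arguments. The protocol around it is boring on purpose.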

Flowplane manages Envoy proxies. My first instinct: wrap the infrastructure primitives—clusters, listeners, routes—in MCP tools. Ship it.

Then I realised the MCP layer in Flowplane wasn’t serving one kind of consumer. It had to serve two very different ones.

A customer’s agent doesn’t want to “create a cluster with round-robin load balancing.” It wants to call getUser(id: “123”). It doesn’t care about Envoy; it cares about the APIs behind it. An internal DevOps agent, on the other hand, needs to create and manage APIs and their corresponding resources.

I wasn’t just exposing tools. I was designing an AI-facing platform layer.
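To make that contrast concrete, here is what a customer-facing tool definition might look like. The structure (name, description, `inputSchema` as JSON Schema) follows the MCP tool shape; the concrete values are hypothetical, not Flowplane’s actual output:

```python
# Sketch of a customer-facing MCP tool definition.
# Structure follows the MCP tool shape; values are illustrative.
get_user_tool = {
    "name": "getUser",
    "description": "Fetch a user record by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "id": {"type": "string", "description": "The user ID."},
        },
        "required": ["id"],
    },
}

# An internal control-plane tool would instead speak in Envoy
# primitives: clusters, listeners, routes. Same protocol,
# completely different vocabulary.
print(get_user_tool["name"])
```

The two audiences differ not in message format but in vocabulary, which is why one server couldn’t honestly serve both.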

So I decided to serve two different categories of tools.

Control Plane tools for platform engineering agents who build the gateway—19 tools for managing clusters, listeners, routes, and filters.

Gateway API tools for everyone else—dynamically generated from OpenAPI specs or learned from traffic. When an agent calls these tools, the requests go through Envoy with the same JWT validation and rate limiting as any other client.
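One way to generate those tools dynamically is to derive a tool name from the HTTP method and path template. This naming convention is my own assumption for illustration, not Flowplane’s:

```python
import re

def tool_name(method: str, path: str) -> str:
    """Derive an MCP tool name from an HTTP method and path template.

    A toy convention: lowercase the method, keep literal path
    segments, and turn {param} segments into "by_param".
    """
    parts = [method.lower()]
    for segment in path.strip("/").split("/"):
        m = re.fullmatch(r"\{(\w+)\}", segment)
        if m:
            parts.append(f"by_{m.group(1)}")
        else:
            parts.append(segment)
    return "_".join(parts)

print(tool_name("GET", "/users/{id}"))  # get_users_by_id
print(tool_name("POST", "/orders"))     # post_orders
```

Whatever the convention, it has to be deterministic: the same route must always produce the same tool name, or agents lose track of what they called last time.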

The Metadata Problem

MCP tools need good metadata—names, descriptions, schemas. Without them, AI can’t figure out which tool to use.

The gap between a tool with an OpenAPI spec and a fallback generated from path patterns is brutal. One has rich descriptions and typed parameters. The other technically exists.

I built two approaches to close this gap:

  • OpenAPI extraction — When you import a spec, Flowplane pulls operationId, summaries, and request/response schemas automatically. Every route gets rich metadata immediately. Full confidence.

  • Schema learning — For APIs without specs, Flowplane observes traffic and infers schemas from actual requests and responses. Field types, required parameters, response shapes—all learned over time. Confidence grows with volume.
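The schema-learning idea can be sketched like this: watch JSON bodies, record each field’s type, and treat a field as required only if every observed sample carries it. This is a toy illustration of the approach, not Flowplane’s implementation:

```python
from collections import Counter

# Map Python types to JSON Schema type names.
JSON_TYPES = {str: "string", int: "integer", float: "number",
              bool: "boolean", dict: "object", list: "array"}

def infer_schema(samples: list[dict]) -> dict:
    """Infer a rough JSON Schema from observed request/response bodies."""
    field_types: dict[str, set] = {}
    seen = Counter()
    for body in samples:
        for key, value in body.items():
            seen[key] += 1
            field_types.setdefault(key, set()).add(
                JSON_TYPES.get(type(value), "null"))
    return {
        "type": "object",
        "properties": {
            k: {"type": sorted(t)[0] if len(t) == 1 else sorted(t)}
            for k, t in field_types.items()
        },
        # Required only if present in every observed sample.
        "required": sorted(k for k, n in seen.items() if n == len(samples)),
    }

samples = [{"id": "123", "name": "Ada"}, {"id": "456"}]
print(infer_schema(samples))  # "name" is optional: it appeared in 1 of 2 samples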

The goal: no route should ever be “not ready” for MCP. Whether you have perfect documentation or none at all, the tools should be usable.

It’s not glamorous work. But it’s the difference between tools AI can use effectively and tools that just exist.

The Token Tax

Every MCP tool costs tokens before you use it.

Tool definitions—names, descriptions, parameter schemas—live in the model’s context window. 19 Control Plane tools are manageable. Hundreds of Gateway API tools, one per endpoint, are not.

With a large API surface, you’re spending context just describing what’s available. The model hasn’t done anything yet.

I’m still figuring out the right approach—tool categories, lazy loading, trimming verbose schemas. Good metadata helps AI pick the right tool, but too much crowds out the actual conversation.

The irony : Good metadata helps AI pick the right tool, but too much crowds out the actual conversation. Right now, finding the balance feels more like art than science.

The Point

MCP isn’t complicated. Deciding what to expose is.

The usual API questions still apply—audience, capabilities, permissions. But MCP adds a new one: what information does the model need to choose the right tool, and how do I surface it?

That’s where I spent most of my time. The protocol took a day. The metadata problem is ongoing.

Top comments (0)