logiQode

Posted on • Originally published at dev.to

Don't Build Your MCP Server as an API Wrapper

The Model Context Protocol (MCP) is gaining traction as the standard way to connect language models to external systems. And the first instinct of most engineers who already have a REST API is entirely predictable: wrap it. One tool per endpoint, one parameter per query string, done in an afternoon. It compiles, it runs, the model can technically call it — and it will perform badly in production. Here is why that instinct is wrong, and what to do instead.

The Obvious Trap: One Tool Per Endpoint

Imagine a typical REST API for a customer support platform:

GET /responses
GET /responses/:id
PUT /responses/:id
POST /responses/:id/assign
GET /responses/:id/messages

The naive MCP translation looks like this:

server.tool("list_responses", { status: z.string().optional() }, async ({ status }) => {
  const res = await api.get("/responses", { params: { status } });
  return { content: [{ type: "text", text: JSON.stringify(res.data) }] };
});

server.tool("get_response", { id: z.string() }, async ({ id }) => {
  const res = await api.get(`/responses/${id}`);
  return { content: [{ type: "text", text: JSON.stringify(res.data) }] };
});

server.tool("update_response", { id: z.string(), body: z.string() }, async ({ id, body }) => {
  const res = await api.put(`/responses/${id}`, { body });
  return { content: [{ type: "text", text: JSON.stringify(res.data) }] };
});

This is not an MCP server. It is an HTTP client with extra steps. The model now has to chain three or four tool calls to accomplish something a single well-designed tool could handle in one. Every extra round-trip is latency, token cost, and another surface for the model to misinterpret an intermediate result.

Group Tools Around Intent, Not Endpoints

Anthropic's own guidance on building production agents with MCP makes this explicit: group tools around intent, not endpoints. The question to ask is not "what does the API expose?" but "what does the agent need to accomplish?"

For the support platform above, the intents might be:

  • Triage an incoming request (fetch it, read its thread, decide priority)
  • Draft and send a reply
  • Escalate to a human agent
  • Close a resolved ticket

Each of those is a single cognitive unit for the model. When you map tools to endpoints instead of intents, the model has to reconstruct that unit itself — from a list of low-level primitives — every time it reasons about a task. That reconstruction is error-prone and expensive.
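Sketched as a tool surface, that intent list is four tools, not five endpoints. The names below are illustrative, not from any real SDK:

```typescript
// One tool per intent, not per endpoint. Each name is a single
// cognitive unit the model can reason about directly.
const intentTools = [
  "triage_request",    // fetch the request, read its thread, surface priority signals
  "send_reply",        // draft and send a reply in one call
  "escalate_to_human", // hand off to a human agent
  "close_ticket",      // close a resolved ticket
] as const;

type IntentTool = (typeof intentTools)[number];
```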

A better triage_request tool looks like this:

server.tool(
  "triage_request",
  {
    response_id: z.string().describe("The ID of the support response to triage"),
  },
  async ({ response_id }) => {
    // Fan out internally — the model doesn't pay for these round-trips
    const [response, messages] = await Promise.all([
      api.get(`/responses/${response_id}`),
      api.get(`/responses/${response_id}/messages`),
    ]);

    const summary = {
      id: response.data.id,
      status: response.data.status,
      customer: response.data.customer_email,
      subject: response.data.subject,
      message_count: messages.data.length,
      last_message: messages.data.at(-1)?.body ?? null,
      created_at: response.data.created_at,
    };

    return {
      content: [{ type: "text", text: JSON.stringify(summary) }],
    };
  }
);

The model calls one tool, gets one structured answer, and can reason about what to do next. The two HTTP calls are an implementation detail — invisible to the model, parallelised, and handled with proper error boundaries in one place.
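Those error boundaries can live in one small wrapper shared by every intent tool. A minimal sketch, assuming only the MCP text-content result shape (safeToolCall is a hypothetical helper, not an SDK function):

```typescript
// Shape of an MCP tool result with text content; `isError` signals a
// failure the model can react to instead of parsing a stack trace.
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// Wrap the internal fan-out once, so every intent tool gets the same
// error boundary. (Hypothetical helper, not part of the MCP SDK.)
async function safeToolCall(fn: () => Promise<unknown>): Promise<ToolResult> {
  try {
    const data = await fn();
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return {
      content: [{ type: "text", text: `tool call failed: ${message}` }],
      isError: true,
    };
  }
}
```

A tool handler then becomes `return safeToolCall(async () => { /* fan out, project, return summary */ })`, and a failed upstream request comes back as a readable, flagged result rather than an unhandled exception.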

Tool Descriptions Are Part of the Interface

In a REST API, the contract lives in the URL, the HTTP verb, and the OpenAPI schema. In MCP, the contract lives in the tool name, the parameter descriptions, and the shape of the returned content. The model reads all of it.

A parameter named id with no description forces the model to infer context from surrounding conversation. A parameter named response_id with .describe("The ID of the support response to triage") is self-documenting in the exact context where it matters — the model's prompt.

This has a practical consequence: writing good tool descriptions is not documentation work, it is prompt engineering. A vague description will produce vague tool calls. In practice, teams often discover this only after seeing the model hallucinate parameter values that "made sense" given an ambiguous schema.

Treat every z.string() as an opportunity to be precise:

const TriageInput = z.object({
  response_id: z.string().describe(
    "Unique identifier of the support response. Format: resp_XXXXXXXX"
  ),
  include_internal_notes: z.boolean().optional().describe(
    "Set to true only when the agent has support-staff permissions. Defaults to false."
  ),
});

The description of include_internal_notes does two things: it sets a default expectation and it encodes a permission hint. The model will use that hint when deciding whether to set the flag.
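A description like that is a hint to the model, not access control. The server should still enforce the permission itself, so a hallucinated flag can never leak data. A minimal sketch, where agentHasStaffPermission is a hypothetical session check:

```typescript
interface TriageArgs {
  response_id: string;
  include_internal_notes?: boolean;
}

// Enforce the permission server-side rather than trusting the model to
// follow the schema description. (Sketch; the permission flag would come
// from your own session or auth layer.)
function sanitizeTriageArgs(
  args: TriageArgs,
  agentHasStaffPermission: boolean
): TriageArgs {
  return {
    ...args,
    // Silently drop the flag when the caller lacks permission, so a
    // hallucinated `true` can never expose internal notes.
    include_internal_notes: agentHasStaffPermission
      ? args.include_internal_notes ?? false
      : false,
  };
}
```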

Return Structured Data, Not Raw API Responses

Passing JSON.stringify(res.data) directly to the model is the equivalent of handing a junior analyst a raw database dump and asking for a summary. It works, but it wastes context window on fields the model will never use, and it couples your MCP server's behaviour to your API's internal schema.

Instead, project the response down to what the intent actually requires:

function formatResponseForTriage(raw: any, messages: any[]) {
  // Keep only the fields the triage intent actually uses; everything
  // else stays out of the model's context window.
  return {
    id: raw.id,
    status: raw.status,
    customer: raw.customer_email,
    subject: raw.subject,
    message_count: messages.length,
    last_message: messages.at(-1)?.body ?? null,
    created_at: raw.created_at,
  };
}

The projection now lives in one function. When the upstream API schema changes, this is the only place the MCP server needs to change, and the model never sees a field it was not meant to reason about.
