DEV Community

gary-botlington

How to Give Your API an MCP Server in 10 Minutes

There's a quiet problem happening across SaaS right now.

AI assistants — Claude, ChatGPT, Cursor, and the wave of agent frameworks being built on top of them — are increasingly capable of taking actions on behalf of users. Booking meetings. Filing tickets. Updating records. Running workflows.

But they can only work with products that speak their language.

Most APIs don't. They were built for humans — or at least for developers building human-facing apps. REST endpoints, OAuth flows, session-based auth. All perfectly reasonable for 2018. All invisible to a modern AI agent in 2026.

The Model Context Protocol (MCP), introduced by Anthropic, is the emerging standard for fixing this. It gives AI agents a consistent interface to discover capabilities, call tools, and receive structured responses — without screen-scraping, prompt-stuffing, or praying the model can infer your API structure from a README.

Here's how to add an MCP server to your existing API. The whole thing takes about 10 minutes.


What You're Building

An MCP server is a small adapter layer that sits between your existing API and any AI agent that wants to use it. It exposes your API's capabilities as tools — named, typed, describable actions the agent can discover and call.

You're not rewriting your API. You're wrapping it.

AI Agent → MCP Server → Your Existing API

The MCP server handles the protocol translation. Your API stays exactly as it is.
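One way to picture the adapter: each tool is just a named, typed pointer at an endpoint you already have. A minimal sketch of that mapping — the `ToolRoute` type, the route names, and the paths here are hypothetical placeholders, not part of the MCP spec:

```typescript
// Each tool the agent sees maps to one existing REST endpoint.
interface ToolRoute {
  name: string;               // tool name exposed to the agent
  method: "GET" | "POST";     // HTTP method of the wrapped endpoint
  path: string;               // existing API route it forwards to
}

const routes: ToolRoute[] = [
  { name: "create_task", method: "POST", path: "/tasks" },
  { name: "list_tasks",  method: "GET",  path: "/tasks" },
];

// The MCP server's job reduces to: look up the tool, call the endpoint.
function resolve(toolName: string): ToolRoute | undefined {
  return routes.find((r) => r.name === toolName);
}
```

The rest of this post fills in the protocol plumbing around that lookup.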


Step 1: Install the MCP SDK (2 minutes)

Anthropic publishes an official TypeScript SDK:

npm install @modelcontextprotocol/sdk

There's also a Python SDK:

pip install mcp

Step 2: Define Your Tools (5 minutes)

Each tool maps to one of your API endpoints. You define:

  • A name (snake_case, descriptive)
  • A description (plain English — the agent reads this to decide when to use it)
  • An input schema (JSON Schema)
Here's a minimal server exposing one tool:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-api", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Handlers are registered against request schemas, not raw method strings
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "create_task",
      description: "Create a new task. Use when the user wants to add a task, to-do, or action item.",
      inputSchema: {
        type: "object",
        properties: {
          title: { type: "string", description: "Task title" },
          assignee_email: { type: "string", description: "Email of the assignee" },
          due_date: { type: "string", description: "Due date (YYYY-MM-DD)" }
        },
        required: ["title"]
      }
    }
  ]
}));

The description matters more than you think. Agents use it to decide which tool to call. "Creates a task" is weak. "Create a new task in the workspace. Use when the user wants to add a task, to-do, or action item." gives the model context about when to use it.
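Side by side, the difference is easy to see. These two definitions (same hypothetical tool, only the wording changed) are what the agent actually compares when deciding what to call:

```typescript
// Weak: the agent has to guess when this applies.
const weak = {
  name: "create_task",
  description: "Creates a task.",
};

// Strong: states what it does, when to use it, and what's required.
const strong = {
  name: "create_task",
  description:
    "Create a new task in the workspace. Use when the user wants to " +
    "add a task, to-do, or action item. Requires a title; assignee " +
    "email and due date are optional.",
};
```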


Step 3: Handle Tool Calls (2 minutes)

import { CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js"; // add to your imports

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "create_task") {
    const response = await fetch("https://api.yourapp.com/tasks", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${process.env.API_KEY}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify(args)
    });

    if (!response.ok) {
      // Surface failures to the agent instead of throwing
      return {
        content: [{ type: "text", text: `API error: ${response.status}` }],
        isError: true
      };
    }

    const task = await response.json();

    return {
      content: [{ type: "text", text: `Task created: "${task.title}" (ID: ${task.id})` }]
    };
  }

  throw new Error(`Unknown tool: ${name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);

The response content is what the agent sees. Keep it factual — the agent synthesises it into natural language for the user.
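Once you wrap more than one endpoint, it can help to centralise that result formatting, including the error path. A sketch under the MCP tools/call response shape — the helper name and its parameters are mine, not from the SDK:

```typescript
// Shape of an MCP tool result: a content array, plus an optional
// isError flag so the agent can recover instead of assuming success.
interface ToolResult {
  content: { type: "text"; text: string }[];
  isError?: boolean;
}

// Turn an upstream API outcome into a factual, agent-readable result.
function toToolResult(
  ok: boolean,
  status: number,
  body: { title?: string; id?: string }
): ToolResult {
  if (!ok) {
    return {
      content: [{ type: "text", text: `Request failed with status ${status}` }],
      isError: true,
    };
  }
  return {
    content: [{ type: "text", text: `Task created: "${body.title}" (ID: ${body.id})` }],
  };
}
```

Keeping this in one place also keeps the tone of tool output consistent, which makes the agent's synthesis more predictable.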


Step 4: Test It (1 minute)

npx @modelcontextprotocol/inspector node your-server.js

This opens a local UI to call your tools manually before pointing any real AI at it.


Deploying It

Option A: Expose it over HTTP on a /mcp route alongside your existing server, using the Streamable HTTP transport (or the older SSE transport).

Option B: Deploy as a standalone service — Firebase Functions, a Lambda, Railway — whatever you already use.

For auth, keep credentials on the server side: read your API key from environment variables (for stdio servers) or from the Authorization header on incoming HTTP requests. Never expose credentials through the tool interface itself.
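For the environment-variable approach, it's worth failing fast at startup rather than discovering a missing key mid-conversation. A small sketch (the helper name is mine):

```typescript
// Read the upstream API key from the environment; refuse to start
// without it, and never surface it in a tool schema or tool output.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.API_KEY;
  if (!key) {
    throw new Error("API_KEY is not set; refusing to start the MCP server");
  }
  return key;
}

// At startup: const apiKey = requireApiKey(process.env);
```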


The Bigger Picture

Adding MCP support to your API is 10 minutes of work today. But what it buys you compounds.

Every AI assistant, every agent framework, every coding tool that adds MCP support — and dozens are adding it every month — becomes a potential distribution channel for your API. Users who never touched your docs can now use your product through an AI they already trust.

The products that are agent-ready right now will have a significant head start when AI-driven workflows become the default.

The ones that aren't will be invisible.


If you want to skip the server setup entirely, Botlington MCP Host hosts and manages MCP servers for your API — you define the tools, we handle the infrastructure, auth, and routing.
