DEV Community

AttractivePenguin

Build Your First MCP Server in TypeScript: The Integration Layer Every AI App Needs in 2026


If you've spent any time in the AI development space this year, you've probably heard of the Model Context Protocol (MCP). Anthropic open-sourced it in late 2024. By mid-2025, OpenAI had adopted it across its Agents SDK and API. Google's ADK, LangGraph, CrewAI, and Microsoft's Agent Framework all support it. GitHub's Octoverse report shows over 693,000 new LLM SDK repositories in the last 12 months alone.

MCP isn't a trend anymore. It's infrastructure.

But most articles about MCP stop at "it's like a USB-C port for AI" and never show you how to actually build one. That ends here. In this article, you'll build a working MCP server in TypeScript that exposes real tools to any MCP-compatible AI client — Claude, ChatGPT, Cursor, Windsurf, VS Code Copilot, you name it.

Why MCP Matters Right Now

Before we write code, let's understand the problem MCP solves.

AI agents are only as useful as what they can reach. An LLM that can't query your database, can't check your CI pipeline, can't read your internal docs, and can't file tickets in your issue tracker is a very smart parrot. It can generate text, but it can't do anything.

The old approach was custom integrations: every AI tool built its own connectors to every external service. With n clients and m services, that's n × m pieces of glue code. MCP replaces that with a standard protocol: one server per service, any client can connect, and the integration work drops to n + m.

Think of it this way: before HTTP, every networked application invented its own wire protocol. After HTTP, we got browsers, APIs, and the entire web. MCP is doing the same thing for AI-to-tool communication.

The Three Capabilities an MCP Server Provides

  1. Tools — Functions the LLM can call (with user approval). Think: "search my database," "deploy this commit," "create a ticket."
  2. Resources — File-like data the client can read. Think: API responses, log files, documentation.
  3. Prompts — Pre-written templates that help users accomplish specific tasks with the AI.

We'll focus on tools — they're the most powerful and the most commonly needed.
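To make those three shapes concrete, here is roughly what each looks like when a client lists a server's offerings. These are illustrative objects based on the MCP listing responses; the `tasks://` URI scheme and the `triage_tasks` prompt name are invented for this sketch:

```typescript
// Shape of one entry in a tools/list response: a name, a description, and a
// JSON Schema the client uses to construct valid arguments
const toolListing = {
  name: "create_task",
  description: "Create a new task in the task board",
  inputSchema: {
    type: "object",
    properties: { title: { type: "string" }, priority: { type: "string" } },
    required: ["title"],
  },
};

// Shape of one entry in a resources/list response: addressed by URI, read-only
const resourceListing = {
  uri: "tasks://board/summary", // invented URI scheme for this example
  name: "Task board summary",
  mimeType: "text/plain",
};

// Shape of one entry in a prompts/list response: a reusable template the user
// can invoke, optionally with arguments
const promptListing = {
  name: "triage_tasks", // invented prompt name
  description: "Walk through open tasks and assign priorities",
  arguments: [{ name: "focus", description: "Area to prioritize", required: false }],
};
```

Tools are called by the model, resources are read by the client, and prompts are selected by the user; the distinction is about who initiates the interaction.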

Setting Up Your Project

You'll need Node.js 18+ and a TypeScript environment. Let's scaffold the project:

mkdir mcp-task-server && cd mcp-task-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init

Update your tsconfig.json to use ES modules (required by the MCP SDK):

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}
Also add `"type": "module"` to your package.json so Node runs the compiled output as ES modules, matching the import style used below.

Create the source directory:

mkdir src

Building the Server

We're going to build a task management MCP server — something every team actually needs. It'll expose tools to create tasks, list tasks, and mark tasks complete. The AI can then manage your task board through natural language.

Create src/index.ts:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// In-memory task store (swap this for a real DB in production)
interface Task {
  id: string;
  title: string;
  description: string;
  status: "open" | "in-progress" | "done";
  priority: "low" | "medium" | "high";
  createdAt: string;
}

const tasks: Map<string, Task> = new Map();
let idCounter = 1;

function generateId(): string {
  return `TASK-${String(idCounter++).padStart(4, "0")}`;
}

// Initialize the MCP server
const server = new McpServer({
  name: "task-manager",
  version: "1.0.0",
});

// Tool 1: Create a task
server.tool(
  "create_task",
  "Create a new task in the task board",
  {
    title: z.string().describe("Short title for the task"),
    description: z.string().optional().describe("Detailed description of the task"),
    priority: z.enum(["low", "medium", "high"]).default("medium").describe("Task priority level"),
  },
  async ({ title, description, priority }) => {
    const task: Task = {
      id: generateId(),
      title,
      description: description ?? "",
      status: "open",
      priority,
      createdAt: new Date().toISOString(),
    };
    tasks.set(task.id, task);

    return {
      content: [
        {
          type: "text" as const,
          text: `✅ Task created: ${task.id} — "${task.title}" [${task.priority} priority, ${task.status}]`,
        },
      ],
    };
  }
);

// Tool 2: List all tasks (with optional status filter)
server.tool(
  "list_tasks",
  "List tasks from the task board, optionally filtered by status",
  {
    status: z.enum(["open", "in-progress", "done"]).optional().describe("Filter by task status"),
  },
  async ({ status }) => {
    let filtered = Array.from(tasks.values());
    if (status) {
      filtered = filtered.filter((t) => t.status === status);
    }

    if (filtered.length === 0) {
      return {
        content: [
          {
            type: "text" as const,
            text: status ? `No ${status} tasks found.` : "No tasks found. Create one with create_task!",
          },
        ],
      };
    }

    const taskList = filtered
      .map((t) => `- **${t.id}**: ${t.title} [${t.priority}/${t.status}]`)
      .join("\n");

    return {
      content: [
        {
          type: "text" as const,
          text: `📋 Found ${filtered.length} task(s):\n${taskList}`,
        },
      ],
    };
  }
);

// Tool 3: Update task status
server.tool(
  "update_task",
  "Update a task's status (e.g., mark as in-progress or done)",
  {
    id: z.string().describe("The task ID (e.g., TASK-0001)"),
    status: z.enum(["open", "in-progress", "done"]).describe("New status for the task"),
  },
  async ({ id, status }) => {
    const task = tasks.get(id);
    if (!task) {
      return {
        content: [
          {
            type: "text" as const,
            text: `❌ Task ${id} not found. Use list_tasks to see available IDs.`,
          },
        ],
        isError: true,
      };
    }

    const oldStatus = task.status;
    task.status = status;

    return {
      content: [
        {
          type: "text" as const,
          text: `📝 Task ${id} updated: ${oldStatus} → ${status}`,
        },
      ],
    };
  }
);

// Tool 4: Get task details
server.tool(
  "get_task",
  "Get full details for a specific task",
  {
    id: z.string().describe("The task ID to look up"),
  },
  async ({ id }) => {
    const task = tasks.get(id);
    if (!task) {
      return {
        content: [
          {
            type: "text" as const,
            text: `❌ Task ${id} not found.`,
          },
        ],
        isError: true,
      };
    }

    return {
      content: [
        {
          type: "text" as const,
          text: [
            `📌 **${task.id}**: ${task.title}`,
            `   Priority: ${task.priority}`,
            `   Status: ${task.status}`,
            `   Created: ${task.createdAt}`,
            `   Description: ${task.description || "(none)"}`,
          ].join("\n"),
        },
      ],
    };
  }
);

// Start the server with STDIO transport
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr (stdout is reserved for JSON-RPC messages!)
  console.error("Task Manager MCP server running on stdio");
}

main().catch((err) => {
  console.error("Fatal error:", err);
  process.exit(1);
});

That's it. Four tools, one server, under 200 lines. Let's break down what's happening:

  • McpServer is the high-level SDK class that handles the protocol details — JSON-RPC message framing, capability negotiation, tool registration.
  • server.tool() registers a tool with a name, description, Zod schema for input validation, and an async handler function. The SDK automatically generates the JSON Schema from your Zod definitions and makes it available to clients.
  • StdioServerTransport tells the server to communicate over stdin/stdout. This is the standard transport for local MCP servers — the AI client launches your server as a subprocess and talks to it via pipes.
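To make the framing concrete, here is roughly what one `tools/call` exchange looks like on the wire. This is a hand-written sketch of the JSON-RPC 2.0 envelopes; the SDK builds and parses these for you, and the exact fields are defined by the MCP spec:

```typescript
// Client -> server: invoke the create_task tool (one line of JSON on stdin)
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "create_task",
    arguments: { title: "Deploy hotfix", priority: "high" },
  },
};

// Server -> client: the result our handler returned, wrapped in the
// JSON-RPC response envelope (one line of JSON on stdout)
const response = {
  jsonrpc: "2.0" as const,
  id: 1, // must match the request id
  result: {
    content: [
      {
        type: "text" as const,
        text: `✅ Task created: TASK-0001 — "Deploy hotfix" [high priority, open]`,
      },
    ],
  },
};
```

Each envelope travels as a single line of JSON over the pipes, which is why stdout must stay reserved for protocol messages.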

Running and Testing Your Server

First, compile and run it directly to verify it starts:

npx tsc
node dist/index.js

You should see Task Manager MCP server running on stdio in your terminal (it logs to stderr). The server is now waiting for JSON-RPC messages on stdin.

Testing with the MCP Inspector

The MCP SDK ships with an Inspector tool — a web UI for testing your server:

npx @modelcontextprotocol/inspector node dist/index.js

This opens a browser where you can:

  1. See all registered tools and their schemas
  2. Call tools with custom inputs
  3. Inspect the raw JSON-RPC messages

Try calling create_task with {"title": "Deploy hotfix", "priority": "high"} and watch it return the formatted response.

Connecting to Claude for Desktop

Add your server to Claude's config file. On macOS, edit ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "task-manager": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-task-server/dist/index.js"]
    }
  }
}

On Windows, the config lives at %APPDATA%\Claude\claude_desktop_config.json.

Restart Claude for Desktop. You should see a 🔨 hammer icon with your "task-manager" server listed. Now you can say:

"Create a high-priority task called 'Review PR #347' and then list all open tasks"

Claude will call your create_task and list_tasks tools automatically.

Connecting to VS Code Copilot / Cursor / Windsurf

Most MCP-compatible editors use a similar config pattern. In Cursor, for example, add to .cursor/mcp.json:

{
  "mcpServers": {
    "task-manager": {
      "command": "node",
      "args": ["./mcp-task-server/dist/index.js"]
    }
  }
}

Same server, multiple clients. That's the point.

Real-World Scenarios

Scenario 1: Internal Developer Platform

Your platform team builds an MCP server that exposes tools for:

  • Provisioning preview environments
  • Checking deployment status
  • Rolling back releases
  • Querying service health

Every developer on the team connects their AI assistant to this server. Instead of switching between kubectl, the CI dashboard, and Slack, they just ask their AI: "What's the status of the auth service in staging?" or "Roll back the payment service to the previous release."

Scenario 2: Customer Support Copilot

Build an MCP server that connects to your CRM, ticketing system, and knowledge base. Your support agents' AI assistants can:

  • Look up customer details
  • Check order status
  • Search internal docs for known issues
  • Create follow-up tickets

The AI handles the tool orchestration. The agent handles the conversation.

Scenario 3: Data Pipeline Debugging

An MCP server connected to your data warehouse and orchestration tool (Airflow, Dagster, etc.) gives your AI:

  • Access to query recent pipeline runs
  • Ability to check data freshness
  • Tools to re-run failed jobs
  • Access to schema information

"Which pipelines failed in the last hour? Re-run the ETL for the users table." — one prompt, multiple tool calls, zero context switching.

Production Considerations

STDIO vs. HTTP Transport

Our example uses STDIO transport, which is great for local tools and IDE integrations. For remote/shared servers, use the Streamable HTTP transport:

import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

// In an Express app (with express.json() middleware applied):
app.post("/mcp", async (req, res) => {
  // sessionIdGenerator: undefined runs the transport in stateless mode:
  // a fresh transport per request, no session tracking
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  await server.connect(transport);
  // Pass the parsed body so the transport can read the JSON-RPC message
  await transport.handleRequest(req, res, req.body);
});

Remote servers should implement OAuth 2.1 for authentication — the MCP spec defines this flow explicitly.

Logging Gotcha

Never write to stdout in a STDIO-based MCP server. stdout is reserved for JSON-RPC messages. Writing anything else there will corrupt the protocol stream and crash the connection. Always use stderr or a logging library:

// ❌ BAD — corrupts JSON-RPC stream
console.log("Processing request");

// ✅ GOOD — stderr is safe
console.error("Processing request");

// ✅ GOOD — proper logging
import pino from "pino";
const logger = pino({ transport: { target: "pino/file", options: { destination: "/tmp/mcp.log" } } });
logger.info("Processing request");

Error Handling

Return isError: true when a tool call fails, so the AI client knows not to present the result as a success:

return {
  content: [{ type: "text", text: "Task not found" }],
  isError: true,
};

This is critical. If you return errors as normal text, the AI might treat the error message as valid data and continue down a broken path.
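Beyond returning isError for expected failures, it's worth guarding against unexpected exceptions too. Here is a minimal sketch of a wrapper (the `safeTool` helper is hypothetical, not part of the SDK) that converts anything a handler throws into an isError result instead of letting it crash the server:

```typescript
// Minimal result shape used by MCP tool handlers
type ToolResult = {
  content: { type: "text"; text: string }[];
  isError?: boolean;
};

// safeTool is a hypothetical helper: it catches anything a handler throws
// and converts it into a structured isError result
function safeTool<A>(
  handler: (args: A) => Promise<ToolResult>
): (args: A) => Promise<ToolResult> {
  return async (args) => {
    try {
      return await handler(args);
    } catch (err) {
      const message = err instanceof Error ? err.message : String(err);
      return {
        content: [{ type: "text", text: `❌ Tool failed: ${message}` }],
        isError: true,
      };
    }
  };
}

// Usage: wrap the handler you pass to server.tool(...)
const brittle = safeTool(async ({ id }: { id: string }) => {
  if (!id.startsWith("TASK-")) throw new Error(`Malformed task id: ${id}`);
  return { content: [{ type: "text" as const, text: `Looked up ${id}` }] };
});
```

Wrapping every handler this way means the model always receives a structured failure it can recover from, rather than the connection dropping mid-conversation.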

State Persistence

Our example uses an in-memory Map. For production, swap in a real database:

import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();

// In your tool handler:
const task = await prisma.task.create({
  data: { title, description, status: "open", priority },
});

The MCP server is just a normal Node.js process — use any database, cache, or API you'd use in any other backend service.

FAQ

Q: Do I need to use TypeScript? Can I use Python?
A: Both work. The MCP SDK is available for TypeScript and Python. TypeScript has the advantage of Zod-based type-safe schemas, and the integration with Node.js runtimes is seamless. Python's SDK uses type hints and docstrings for the same purpose. Pick whichever your team already uses.

Q: What's the difference between MCP tools and OpenAI function calling?
A: OpenAI function calling is specific to OpenAI's API — you define tools inline with each request. MCP is a standard protocol that works across all AI providers. An MCP server works with Claude, ChatGPT, Cursor, and any future client without code changes. It also supports persistent connections, resource subscriptions, and server-initiated notifications that function calling doesn't.

Q: Can MCP servers call other MCP servers?
A: Yes, but it's an advanced pattern. An MCP client can connect to multiple servers simultaneously, and you can build "orchestration" servers that act as both a client (connecting to other servers) and a server (exposing aggregated tools). For most use cases, connecting the AI client to multiple servers directly is simpler.

Q: Is MCP secure enough for production?
A: The spec includes user consent (tools require approval before execution), scoped permissions, and OAuth 2.1 for remote servers. But security is your responsibility — validate all inputs with Zod schemas, scope database queries to authorized contexts, and never expose destructive operations without confirmation.

Q: My server doesn't show up in Claude for Desktop. What's wrong?
A: Check these in order: (1) Is the command path in your config absolute? (2) Does the server compile without errors? (3) Check Claude's developer logs (Help > Toggle Developer Tools) for MCP errors. (4) Make sure you restarted Claude after editing the config. (5) Verify your server starts correctly from the terminal.

Q: How do I handle long-running operations?
A: Return an intermediate response immediately (e.g., "Processing started for task X") and use MCP's notification system to send progress updates. The client can also poll by calling a status-check tool. For truly async workflows, consider returning a job ID that the AI can check later.
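The job-ID pattern from that answer can be sketched in a few lines. This is an illustrative in-memory version; the start_export and check_job names are invented for this example:

```typescript
// In-memory job table shared by a "start" tool and a "check" tool
type Job = { id: string; status: "running" | "done"; result?: string };
const jobs = new Map<string, Job>();
let jobCounter = 1;

// Handler for a hypothetical start_export tool: kick off the work,
// return a job ID immediately instead of blocking the tool call
async function startExport(): Promise<{ jobId: string; message: string }> {
  const id = `JOB-${jobCounter++}`;
  jobs.set(id, { id, status: "running" });
  // Fire-and-forget: the actual work completes later
  setTimeout(() => {
    jobs.set(id, { id, status: "done", result: "export.csv (1,204 rows)" });
  }, 50);
  return { jobId: id, message: `Export started. Poll check_job with "${id}".` };
}

// Handler for a hypothetical check_job tool: the AI polls this with the ID
async function checkJob(jobId: string): Promise<string> {
  const job = jobs.get(jobId);
  if (!job) return `❌ Unknown job ${jobId}`;
  return job.status === "done"
    ? `✅ ${jobId} finished: ${job.result}`
    : `⏳ ${jobId} still running`;
}
```

In production the job table would live in a database or queue so polling survives server restarts, but the two-tool shape stays the same.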

Conclusion

MCP is the missing piece that turns AI from a text generator into an actual agent that can do things in your systems. And building an MCP server is surprisingly straightforward — the SDK handles the protocol, you handle the logic.

The server we built today is simple, but the pattern scales. Every internal tool, every database, every API your team uses can be one MCP server away from being accessible to any AI assistant. That's not a minor DX improvement — it's a fundamentally different way of interacting with your infrastructure.

Start small: wrap one internal API in an MCP server this week. Connect it to your AI assistant. See how it changes your workflow. Then add more tools, more servers, more capabilities. The protocol is ready. The ecosystem is ready. The only question is: what will your AI be able to reach?


This article is part of the Architecture Decisions series. Previous: Idempotency Keys in Production, Stop Defaulting to WebSockets.
