DEV Community

HK Lee

Posted on • Originally published at pockit.tools

Model Context Protocol (MCP): The Complete Guide to Building AI Agents That Actually Work

If you've been building AI applications in 2025, you've probably hit the same wall everyone else has: your LLM is brilliant at generating text, but connecting it to real-world data and tools feels like duct-taping APIs together with prompt engineering prayers.

Enter the Model Context Protocol (MCP)—an open standard that's quietly becoming as fundamental to AI development as REST APIs are to web development. Originally developed by Anthropic and now adopted across the industry, MCP is solving one of the biggest headaches in AI engineering: how do you give your AI agent reliable, structured access to the outside world?

In this comprehensive guide, we'll explore what MCP is, why it matters, how it works under the hood, and most importantly—how to implement it in your own AI applications.

The Problem MCP Solves

Before diving into MCP, let's understand the pain it addresses.

The Integration Nightmare

Traditional AI application development looks something like this:

┌─────────────────────────────────────────────────────────────┐
│                    Your AI Application                       │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐             │
│   │ OpenAI   │    │ Database │    │ Slack    │             │
│   │ API      │    │ Queries  │    │ API      │             │
│   └────┬─────┘    └────┬─────┘    └────┬─────┘             │
│        │               │               │                    │
│   Custom Parser   Custom Parser   Custom Parser             │
│        │               │               │                    │
│   Prompt Hack     Prompt Hack     Prompt Hack               │
│        │               │               │                    │
│   Error Handler   Error Handler   Error Handler             │
│        │               │               │                    │
│        └───────────────┴───────────────┘                    │
│                        │                                     │
│              ┌─────────▼─────────┐                          │
│              │   Orchestration   │                          │
│              │   Spaghetti Code  │                          │
│              └───────────────────┘                          │
└─────────────────────────────────────────────────────────────┘

Every integration requires:

  • Custom authentication handling
  • Bespoke response parsing
  • Prompt engineering to explain the tool to the LLM
  • Error handling that differs per integration
  • Manual schema maintenance

Multiply this by the 10+ integrations a typical AI agent needs, and you've got a maintenance nightmare.

The Function Calling Limitation

OpenAI's function calling and similar features help, but they're fundamentally LLM-vendor-specific. Your carefully crafted function definitions for GPT-4 won't work with Claude, Gemini, or the hot new open-source model that just dropped.

// This works for OpenAI...
const tools = [{
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string" }
      }
    }
  }
}];

// But Claude has a different format...
// And Gemini has another...
// And Llama has yet another...

What We Actually Need

The ideal solution would be:

  1. Universal: Work across LLM providers
  2. Standardized: One integration pattern for all data sources
  3. Bidirectional: Let the AI query data AND receive updates
  4. Secure: Built-in authentication and permission handling
  5. Discoverable: AI can learn what tools are available at runtime

This is exactly what MCP provides.

What is Model Context Protocol?

MCP is an open protocol that standardizes how AI applications connect to external data sources and tools. Think of it as "USB for AI"—a universal connector that lets any AI model plug into any data source or tool.

The Architecture

MCP follows a client-server architecture:

┌────────────────────────────────────────────────────────────────┐
│                        MCP Architecture                         │
├────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────┐         ┌─────────────────────────────┐  │
│  │   MCP Client    │         │        MCP Servers          │  │
│  │                 │         │                             │  │
│  │  ┌───────────┐  │         │  ┌─────────┐ ┌─────────┐   │  │
│  │  │ AI Model  │  │ JSON-RPC│  │ GitHub  │ │ Slack   │   │  │
│  │  │(GPT/Claude│◄─┼─────────┼─►│ Server  │ │ Server  │   │  │
│  │  │ /Gemini)  │  │   over  │  └─────────┘ └─────────┘   │  │
│  │  └───────────┘  │  stdio/ │                             │  │
│  │                 │ SSE/WS  │  ┌─────────┐ ┌─────────┐   │  │
│  │  ┌───────────┐  │         │  │Database │ │ Custom  │   │  │
│  │  │Host App   │  │         │  │ Server  │ │ Server  │   │  │
│  │  │(Your App) │  │         │  └─────────┘ └─────────┘   │  │
│  │  └───────────┘  │         │                             │  │
│  └─────────────────┘         └─────────────────────────────┘  │
│                                                                 │
└────────────────────────────────────────────────────────────────┘

Key Components:

  1. MCP Client: Lives in your AI application. Discovers and connects to MCP servers.
  2. MCP Server: Exposes data sources and tools in a standardized format.
  3. Transport Layer: JSON-RPC 2.0 over stdio or HTTP with Server-Sent Events (SSE); custom transports such as WebSockets are also possible.
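Concretely, every message on the transport is a JSON-RPC 2.0 envelope. Here's a sketch of a `tools/list` exchange as plain objects (the `create_todo` tool is illustrative — a server returns whatever it registered):

```typescript
// A tools/list request as the client sends it (id is chosen by the client).
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/list",
};

// A typical response; `result.tools` mirrors what the server registered.
const response = {
  jsonrpc: "2.0" as const,
  id: 1, // must echo the request id
  result: {
    tools: [
      {
        name: "create_todo",
        description: "Create a new todo item",
        inputSchema: {
          type: "object",
          properties: { title: { type: "string" } },
          required: ["title"],
        },
      },
    ],
  },
};

console.log(response.result.tools[0].name); // create_todo
```

Because every server speaks this same envelope, a client can discover and call tools on any server without integration-specific glue.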

The Three Primitives

MCP defines three core primitives that cover virtually all AI-to-external-world interactions:

1. Resources

Static or dynamic data that the AI can read. Think of these as "files" the AI can access.

{
  "uri": "file:///project/README.md",
  "name": "Project README",
  "mimeType": "text/markdown"
}

2. Tools

Functions that the AI can invoke to perform actions.

{
  "name": "create_github_issue",
  "description": "Create a new issue in a GitHub repository",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string" },
      "title": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["repo", "title"]
  }
}

3. Prompts

Reusable prompt templates that can be invoked with parameters.

{
  "name": "code_review",
  "description": "Review code for best practices",
  "arguments": [
    {
      "name": "code",
      "description": "The code to review",
      "required": true
    }
  ]
}

Why MCP Matters Now

The Agentic AI Explosion

2025 is the year of AI agents. From OpenAI's Operator to Claude's computer use capabilities, AI is moving beyond chat into autonomous action. But here's the dirty secret: autonomous AI is only as good as its access to the real world.

An AI agent that can't reliably:

  • Read your codebase
  • Query your database
  • Check your calendar
  • Send messages to your team

...is just a very expensive chatbot.

MCP makes these integrations reliable, consistent, and maintainable.

The Standardization Moment

We're at an inflection point similar to the early 2000s web services era. Back then, we had CORBA, DCOM, and proprietary protocols fighting for dominance. Then REST won, and suddenly everyone could build interoperable web services.

MCP is positioning itself to be the REST of AI integrations. Major players are already on board:

  • Anthropic: Created and maintains the protocol
  • Microsoft: Integrating into Copilot
  • Cursor: Native MCP support in the AI IDE
  • Sourcegraph: MCP servers for code intelligence

Running an MCP Server is the New Running a Web Server

Here's a bold prediction: by 2026, "Can you run an MCP server?" will be as common a developer interview question as "Can you build a REST API?" is today.

Why? Because every company with valuable data will want to expose it to AI agents in a controlled, standardized way. That means MCP servers for:

  • Internal documentation
  • Customer data (with proper authorization)
  • Business processes
  • Domain-specific tools

Building Your First MCP Server

Let's get hands-on. We'll build an MCP server that exposes a simple todo list API.

Project Setup

# Create a new project
mkdir mcp-todo-server
cd mcp-todo-server
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx

Basic Server Structure

Create src/index.ts:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// In-memory todo storage
interface Todo {
  id: string;
  title: string;
  completed: boolean;
  createdAt: Date;
}

const todos: Map<string, Todo> = new Map();

// Create the MCP server
const server = new Server(
  {
    name: "todo-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
      resources: {},
    },
  }
);

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "create_todo",
        description: "Create a new todo item",
        inputSchema: {
          type: "object",
          properties: {
            title: {
              type: "string",
              description: "The title of the todo item",
            },
          },
          required: ["title"],
        },
      },
      {
        name: "complete_todo",
        description: "Mark a todo item as completed",
        inputSchema: {
          type: "object",
          properties: {
            id: {
              type: "string",
              description: "The ID of the todo item to complete",
            },
          },
          required: ["id"],
        },
      },
      {
        name: "delete_todo",
        description: "Delete a todo item",
        inputSchema: {
          type: "object",
          properties: {
            id: {
              type: "string",
              description: "The ID of the todo item to delete",
            },
          },
          required: ["id"],
        },
      },
    ],
  };
});

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args = {} } = request.params;

  switch (name) {
    case "create_todo": {
      const id = crypto.randomUUID();
      const todo: Todo = {
        id,
        title: args.title as string,
        completed: false,
        createdAt: new Date(),
      };
      todos.set(id, todo);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({ success: true, todo }, null, 2),
          },
        ],
      };
    }

    case "complete_todo": {
      const todo = todos.get(args.id as string);
      if (!todo) {
        return {
          content: [
            { type: "text", text: JSON.stringify({ error: "Todo not found" }) },
          ],
          isError: true,
        };
      }
      todo.completed = true;
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({ success: true, todo }, null, 2),
          },
        ],
      };
    }

    case "delete_todo": {
      const deleted = todos.delete(args.id as string);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({ success: deleted }, null, 2),
          },
        ],
      };
    }

    default:
      return {
        content: [
          { type: "text", text: JSON.stringify({ error: "Unknown tool" }) },
        ],
        isError: true,
      };
  }
});

// List available resources
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [
      {
        uri: "todo://list",
        name: "Todo List",
        description: "Current list of all todo items",
        mimeType: "application/json",
      },
    ],
  };
});

// Read resources
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri === "todo://list") {
    const todoList = Array.from(todos.values());
    return {
      contents: [
        {
          uri: "todo://list",
          mimeType: "application/json",
          text: JSON.stringify(todoList, null, 2),
        },
      ],
    };
  }

  throw new Error(`Unknown resource: ${request.params.uri}`);
});

// Start the server
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Todo MCP server running on stdio"); // log to stderr: stdout is reserved for JSON-RPC
}

main().catch(console.error);

Configuration for Claude Desktop

To use this with Claude Desktop, add to your claude_desktop_config.json:

{
  "mcpServers": {
    "todo": {
      "command": "npx",
      "args": ["tsx", "/path/to/mcp-todo-server/src/index.ts"]
    }
  }
}

Now Claude can:

  • Create todos: "Add a todo to buy groceries"
  • Complete todos: "Mark the groceries todo as done"
  • List todos: "What's on my todo list?"

Advanced MCP Patterns

Pattern 1: Database Integration

One of the most powerful MCP applications is giving AI read (and sometimes write) access to databases:

import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "query_database") {
    const { query } = request.params.arguments as { query: string };

    // IMPORTANT: Validate and sanitize the query
    if (!isReadOnlyQuery(query)) {
      return {
        content: [{ type: "text", text: "Only SELECT queries are allowed" }],
        isError: true,
      };
    }

    try {
      const result = await pool.query(query);
      return {
        content: [{
          type: "text",
          text: JSON.stringify(result.rows, null, 2),
        }],
      };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: "text", text: `Query error: ${message}` }],
        isError: true,
      };
    }
  }

  throw new Error(`Unknown tool: ${request.params.name}`);
});

function isReadOnlyQuery(query: string): boolean {
  const normalized = query.trim().toLowerCase();
  return normalized.startsWith('select') &&
         !normalized.includes('into') &&
         !normalized.includes('update') &&
         !normalized.includes('delete') &&
         !normalized.includes('insert') &&
         !normalized.includes('drop') &&
         !normalized.includes('alter');
}
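Note that the substring check in `isReadOnlyQuery` over-blocks: a query touching a `deleted_at` column is rejected because it contains the string `delete`, and it doesn't stop stacked statements. A word-boundary variant behaves better — still a sketch; a real SQL parser or a read-only database role is the robust answer:

```typescript
// Match forbidden SQL keywords only as whole words, so column names like
// `deleted_at` or `update_count` don't trigger false positives.
const FORBIDDEN = /\b(insert|update|delete|drop|alter|truncate|grant|into)\b/i;

function isReadOnlySql(query: string): boolean {
  const normalized = query.trim();
  // Reject stacked queries ("SELECT 1; DROP TABLE ...") by counting
  // non-empty statements separated by semicolons.
  const statements = normalized.split(";").filter((s) => s.trim().length > 0);
  if (statements.length > 1) return false;
  return /^select\b/i.test(normalized) && !FORBIDDEN.test(normalized);
}

console.log(isReadOnlySql("SELECT deleted_at FROM users")); // true
console.log(isReadOnlySql("SELECT 1; DROP TABLE users")); // false
```

Even this can be fooled by comments and string literals, which is why a dedicated read-only database user remains the strongest guardrail.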

Pattern 2: OAuth Integration

For APIs requiring user authentication:

import { google } from 'googleapis';
import { OAuth2Client } from 'google-auth-library';

const oauth2Client = new OAuth2Client(
  process.env.GOOGLE_CLIENT_ID,
  process.env.GOOGLE_CLIENT_SECRET,
  'http://localhost:3000/callback'
);

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "list_calendar_events",
        description: "List upcoming calendar events",
        inputSchema: {
          type: "object",
          properties: {
            maxResults: {
              type: "number",
              description: "Maximum number of events to return",
              default: 10,
            },
          },
        },
      },
    ],
  };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "list_calendar_events") {
    // Token would be stored per-user in a real application
    const calendar = google.calendar({ version: 'v3', auth: oauth2Client });

    const response = await calendar.events.list({
      calendarId: 'primary',
      timeMin: new Date().toISOString(),
      maxResults: (request.params.arguments?.maxResults as number | undefined) ?? 10,
      singleEvents: true,
      orderBy: 'startTime',
    });

    return {
      content: [{
        type: "text",
        text: JSON.stringify(response.data.items, null, 2),
      }],
    };
  }

  throw new Error(`Unknown tool: ${request.params.name}`);
});

Pattern 3: Long-Running Operations with Progress

For operations that take time, MCP supports progress notifications:

server.setRequestHandler(CallToolRequestSchema, async (request, extra) => {
  if (request.params.name === "analyze_codebase") {
    const files = await getAllFiles(request.params.arguments?.path as string);
    const total = files.length;

    for (let i = 0; i < files.length; i++) {
      // Send progress update
      await extra.sendNotification({
        method: "notifications/progress",
        params: {
          progressToken: request.params._meta?.progressToken,
          progress: i,
          total,
        },
      });

      await analyzeFile(files[i]);
    }

    return {
      content: [{
        type: "text",
        text: `Analyzed ${total} files successfully`,
      }],
    };
  }

  throw new Error(`Unknown tool: ${request.params.name}`);
});

MCP Security Best Practices

1. Validate All Inputs

Never trust data coming from the AI. Always validate:

import { z } from 'zod';

const CreateTodoSchema = z.object({
  title: z.string().min(1).max(200),
  priority: z.enum(['low', 'medium', 'high']).optional(),
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "create_todo") {
    const parsed = CreateTodoSchema.safeParse(request.params.arguments);
    if (!parsed.success) {
      return {
        content: [{
          type: "text",
          text: `Validation error: ${parsed.error.message}`,
        }],
        isError: true,
      };
    }
    // Use parsed.data which is now typed and validated
  }
});

2. Implement Rate Limiting

Protect against runaway AI agents:

import { RateLimiter } from 'limiter';

const limiter = new RateLimiter({
  tokensPerInterval: 100,
  interval: 'minute',
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (!await limiter.tryRemoveTokens(1)) {
    return {
      content: [{
        type: "text",
        text: "Rate limit exceeded. Please try again later.",
      }],
      isError: true,
    };
  }
  // Process request...
});
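If you'd rather not pull in a dependency, a minimal token bucket covers the same ground — a sketch only; the `limiter` package above adds queuing and finer-grained control:

```typescript
// Minimal token bucket: roughly `capacity` calls allowed per `refillMs` window.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity: number, private refillMs: number) {
    this.tokens = capacity;
  }

  tryRemove(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity.
    const refill = ((now - this.lastRefill) / this.refillMs) * this.capacity;
    this.tokens = Math.min(this.capacity, this.tokens + refill);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const bucket = new TokenBucket(3, 60_000); // 3 calls per minute
console.log(bucket.tryRemove(), bucket.tryRemove(), bucket.tryRemove()); // true true true
console.log(bucket.tryRemove()); // false
```

The bucket refills continuously rather than in discrete windows, which avoids the burst-at-boundary problem of fixed-window counters.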

3. Audit Logging

Log all tool invocations for security and debugging:

function logToolInvocation(name: string, args: unknown, result: unknown) {
  console.log(JSON.stringify({
    timestamp: new Date().toISOString(),
    tool: name,
    arguments: args,
    result: result,
  }));
}

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const result = await handleTool(request);
  logToolInvocation(
    request.params.name,
    request.params.arguments,
    result
  );
  return result;
});
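One caveat with logging raw arguments: they may contain tokens or credentials. A small redaction pass before logging keeps the audit trail safe to store — the key patterns below are my own guesses at what "sensitive" looks like; tune the list for your domain:

```typescript
// Key names that likely hold secrets (illustrative list, extend as needed).
const SENSITIVE_KEYS = /password|token|secret|api[_-]?key|authorization/i;

// Recursively replace values whose key looks sensitive.
function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.test(k) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}

console.log(redact({ title: "Deploy", apiKey: "sk-123", nested: { password: "x" } }));
// { title: 'Deploy', apiKey: '[REDACTED]', nested: { password: '[REDACTED]' } }
```

Call `redact(args)` and `redact(result)` inside `logToolInvocation` before serializing.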

4. Principle of Least Privilege

Only expose what's necessary:

// BAD: Exposing raw database access
tools: [{
  name: "execute_sql",
  description: "Execute any SQL query",
  // This is a security nightmare!
}]

// GOOD: Exposing specific, scoped operations
tools: [
  {
    name: "get_user_orders",
    description: "Get orders for a specific user",
    inputSchema: {
      type: "object",
      properties: {
        userId: { type: "string" },
        limit: { type: "number", maximum: 100 },
      },
      required: ["userId"],
    },
  },
]
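On the server side, a scoped tool like `get_user_orders` maps to a fixed, parameterized query — the model supplies only values, never SQL. A sketch (table and column names are invented for illustration):

```typescript
// Build a parameterized query for the scoped tool. The AI supplies only
// userId and limit; the SQL text is fixed, so injection is off the table.
function buildUserOrdersQuery(userId: string, limit = 20) {
  // Enforce the schema's `maximum: 100` server-side as well, never trusting the caller.
  const capped = Math.min(Math.max(1, Math.floor(limit)), 100);
  return {
    text: "SELECT id, total, created_at FROM orders WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2",
    values: [userId, capped] as const,
  };
}

console.log(buildUserOrdersQuery("u_42", 500).values); // [ 'u_42', 100 ]
```

The returned `{ text, values }` pair can be passed straight to `pool.query()` from the database pattern above.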

MCP in Production: Lessons Learned

Lesson 1: Design for Failure

AI agents will call tools in unexpected ways. Build defensively:

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {
    const result = await handleTool(request);
    return result;
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    // Return structured error that helps the AI recover
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          error: message,
          suggestion: "Try with different parameters",
          validExamples: [
            { title: "Buy groceries" },
            { title: "Call mom" },
          ],
        }),
      }],
      isError: true,
    };
  }
});

Lesson 2: Provide Rich Descriptions

The quality of your tool descriptions directly impacts how well the AI uses them:

// BAD
{
  name: "search",
  description: "Search for items",
}

// GOOD
{
  name: "search_products",
  description: `Search the product catalog. Returns up to 20 products 
  matching the query. Supports filters for category, price range, 
  and availability. Results include product name, price, stock status, 
  and thumbnail URL. For best results, use specific product names or 
  categories rather than generic terms.`,
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Search query. Examples: 'wireless headphones', 'laptop under $1000'",
      },
      // ...
    },
  },
}

Lesson 3: Version Your Servers

As your MCP server evolves, maintain backwards compatibility:

const server = new Server(
  {
    name: "my-server",
    version: "2.1.0",  // Semantic versioning
  },
  {
    capabilities: {
      tools: {},
      resources: {},
    },
  }
);

// Support both old and new tool names during migration
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const name = request.params.name;

  // Handle legacy tool name
  if (name === "old_tool_name") {
    console.warn("Deprecated: use 'new_tool_name' instead");
    return handleNewTool(request);
  }

  if (name === "new_tool_name") {
    return handleNewTool(request);
  }

  throw new Error(`Unknown tool: ${name}`);
});

The Future of MCP

What's Coming

  1. Streaming Responses: First-class support for streaming tool outputs
  2. Multi-Modal Tools: Tools that return images, audio, or video
  3. Tool Composition: Combining multiple tools into workflows
  4. Enhanced Security: Built-in OAuth flows and permission scopes

MCP vs. Alternatives

| Feature | MCP | OpenAI Functions | LangChain Tools |
|---|---|---|---|
| Vendor-agnostic | ✅ | ❌ | ✅ |
| Standardized protocol | ✅ | ❌ | ❌ |
| Built-in resource access | ✅ | ❌ | ❌ |
| Progress notifications | ✅ | Partial | ❌ |
| Community servers | Growing | N/A | Limited |

Conclusion: The Time to Learn MCP is Now

MCP is still early, but the trajectory is clear. Just as learning REST was essential for any web developer in the 2010s, learning MCP is becoming essential for AI developers in the 2020s.

Key takeaways:

  1. MCP is the universal connector for AI-to-world integration
  2. Start with simple servers and iterate—the SDK makes it easy
  3. Security is paramount—AI agents can be unpredictable
  4. Rich descriptions matter—they're your API documentation for AI
  5. Design for failure—help the AI recover gracefully

The companies that master MCP early will have a significant advantage in the AI-native future. Their AI agents will be more capable, more reliable, and more maintainable than those still duct-taping APIs together.

The question isn't whether you should learn MCP—it's whether you'll be ahead of the curve or playing catch-up.


Quick Reference: MCP Concepts

| Concept | Description | Example |
|---|---|---|
| Server | Exposes tools and resources | Database access server |
| Client | Connects to servers; used by AI | Claude Desktop |
| Tool | Callable function | create_github_issue |
| Resource | Readable data | file:///project/README.md |
| Prompt | Reusable template | Code review template |
| Transport | Communication layer | stdio, SSE, WebSocket |

💡 Note: This article was originally published on the Pockit Blog.

Check out Pockit.tools for 50+ free developer utilities (JSON Formatter, Diff Checker, etc.) that run 100% locally in your browser.
