KazKN
What Is MCP? The Model Context Protocol Explained for Developers

Last updated: February 15, 2026 | Reading time: 15 min

You've seen "MCP" everywhere — in AI tool changelogs, developer forums, and product launches. Claude Desktop supports it. Cursor ships with it. Windsurf integrates it. But nobody has given you a clear, complete explanation of what MCP actually is, why it exists, and how to use it.

MCP (Model Context Protocol) is the USB-C of AI. Before USB-C, every device had its own cable. Before MCP, every AI tool needed custom integrations for every data source. MCP standardizes how AI models connect to external tools and data — one protocol, universal compatibility.

This is the definitive guide. By the end, you'll understand MCP's architecture, build your own MCP server, and see why it's reshaping how developers build AI applications.

What you'll learn:

  • What MCP is and why it was created
  • MCP architecture: clients, servers, and the protocol
  • How to build and use an MCP server (with working code)
  • Real-world MCP examples with Vinted marketplace data
  • Why MCP matters for the future of AI tooling

What Is MCP (Model Context Protocol)?

MCP (Model Context Protocol) is an open standard created by Anthropic that defines how AI applications connect to external data sources and tools. It provides a standardized interface between AI models (like Claude, GPT, or local LLMs) and the systems they need to interact with — databases, APIs, file systems, web services, and more.

In technical terms, MCP is a JSON-RPC 2.0-based protocol that enables bidirectional communication between an MCP client (the AI application) and an MCP server (the data/tool provider). The server exposes "tools" that the AI can call, and "resources" that the AI can read.

Think of it this way: REST APIs standardized how web services talk to each other. MCP standardizes how AI models talk to everything else.

According to Anthropic's MCP specification, the protocol was designed to solve the "M×N integration problem" — where M AI applications each need custom integrations with N data sources, creating M×N total integrations. MCP reduces this to M+N: each application implements the MCP client once, each data source implements the MCP server once, and they all work together.

graph TD
    subgraph "Without MCP: M×N Integrations"
    C1[Claude] -->|Custom| D1[Database]
    C1 -->|Custom| D2[Vinted API]
    C1 -->|Custom| D3[GitHub]
    C2[Cursor] -->|Custom| D1
    C2 -->|Custom| D2
    C2 -->|Custom| D3
    C3[GPT] -->|Custom| D1
    C3 -->|Custom| D2
    C3 -->|Custom| D3
    end

    subgraph "With MCP: M+N Integrations"
    C4[Claude] -->|MCP| P[MCP Protocol]
    C5[Cursor] -->|MCP| P
    C6[Any AI] -->|MCP| P
    P -->|MCP| S1[DB Server]
    P -->|MCP| S2[Vinted Server]
    P -->|MCP| S3[GitHub Server]
    end
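The arithmetic behind that claim is easy to check. A quick sketch (the counts 5 and 20 are made up for illustration):

```typescript
// Illustrative M×N vs M+N integration counts (numbers are hypothetical).
const apps = 5;     // M: AI applications
const sources = 20; // N: data sources

// Without MCP: one custom integration per (app, source) pair.
const withoutMcp = apps * sources; // 100

// With MCP: each app implements a client once, each source a server once.
const withMcp = apps + sources; // 25

console.log(`without MCP: ${withoutMcp}, with MCP: ${withMcp}`);
```

The gap widens fast: doubling both sides quadruples the custom-integration count but only doubles the MCP count.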

Why MCP Was Created

Before MCP, integrating AI tools with external data required:

  1. Custom plugins for each AI platform — OpenAI had ChatGPT Plugins, Anthropic had tool use, every coding assistant had its own extension format
  2. Fragmented ecosystems — a tool built for Claude didn't work in Cursor, and vice versa
  3. Duplicated effort — developers built the same integration multiple times for different platforms
  4. Brittle connections — when one platform changed their plugin format, everything broke

Anthropic released MCP as an open standard in November 2024. By February 2026, it has become the dominant standard for AI-tool integration, with adoption across major platforms:

  • Claude Desktop — native MCP support since launch
  • Cursor — MCP integration for code-aware AI
  • Windsurf — full MCP client implementation
  • Continue.dev — open-source MCP support
  • VS Code Copilot — MCP compatibility layer

The network effect is powerful: the more MCP servers exist, the more valuable every MCP client becomes, and vice versa. According to the MCP server registry, there are now over 500 community-built MCP servers covering databases, APIs, cloud services, and niche platforms.

MCP Architecture Explained

MCP follows a client-server architecture with three core components:

1. MCP Host

The host is the AI application the user interacts with — Claude Desktop, Cursor, or any MCP-compatible tool. The host:

  • Manages user conversations
  • Decides when to call MCP tools
  • Displays results to the user

2. MCP Client

The client runs inside the host and handles the MCP protocol. It:

  • Discovers available MCP servers
  • Sends tool calls to servers
  • Receives responses and passes them to the host
  • Manages server lifecycle (start/stop)

3. MCP Server

The server provides tools and resources to the AI. It:

  • Declares what tools it offers (name, description, parameters)
  • Executes tool calls and returns results
  • Can expose "resources" (read-only data) and "prompts" (reusable templates)

graph LR
    U[User] --> H[MCP Host<br>Claude Desktop]
    H --> CL[MCP Client]
    CL -->|stdio/SSE| S1[MCP Server 1<br>Vinted Data]
    CL -->|stdio/SSE| S2[MCP Server 2<br>Database]
    CL -->|stdio/SSE| S3[MCP Server 3<br>File System]
    S1 --> API1[Vinted API]
    S2 --> DB[PostgreSQL]
    S3 --> FS[Local Files]

Transport Protocols

MCP supports two transport mechanisms:

  • stdio — Server runs as a local process, communicates via standard input/output. Best for local tools (file system, databases).
  • SSE (Server-Sent Events) — Server runs remotely, communicates over HTTP. Best for cloud services and APIs.
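With stdio, the framing is deliberately simple: each JSON-RPC message is serialized as a single line of JSON, delimited by newlines. A minimal sketch of that framing (not the SDK's actual transport code, which handles this for you):

```typescript
// Sketch of stdio framing: one JSON-RPC message per line of JSON.
// The real SDK transports implement this; shown here only to illustrate.
function frameMessage(message: object): string {
  // Compact JSON guarantees no embedded newlines inside a frame.
  return JSON.stringify(message) + "\n";
}

function parseFrames(buffer: string): object[] {
  // Split the incoming stream on newlines and parse each complete line.
  return buffer
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}
```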

How MCP Works: The Protocol

MCP uses JSON-RPC 2.0 as its wire format. Here's the communication flow:

1. Initialization

When a client connects to a server, they exchange capabilities:

// Client → Server: Initialize
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {} },
    "clientInfo": { "name": "claude-desktop", "version": "1.0.0" }
  }
}

// Server → Client: Capabilities
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": { "listChanged": true } },
    "serverInfo": { "name": "vinted-mcp-server", "version": "1.2.0" }
  }
}
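You rarely hand-write these messages — the SDK performs the handshake for you — but constructing one as a plain object is a good way to internalize the shape. A sketch, with no SDK involved:

```typescript
// Sketch: building the initialize request shown above as a plain object.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

function buildInitialize(
  id: number,
  clientName: string,
  clientVersion: string
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "initialize",
    params: {
      protocolVersion: "2024-11-05",
      capabilities: { tools: {} },
      clientInfo: { name: clientName, version: clientVersion },
    },
  };
}
```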

2. Tool Discovery

The client asks what tools the server provides:

// Client → Server
{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }

// Server → Client
{
  "result": {
    "tools": [
      {
        "name": "search_vinted",
        "description": "Search Vinted listings across 19 EU countries",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": { "type": "string", "description": "Search term" },
            "country": { "type": "string", "description": "Country code (fr, de, es...)" },
            "maxPrice": { "type": "number", "description": "Maximum price filter" }
          },
          "required": ["query"]
        }
      }
    ]
  }
}
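A well-behaved server rejects calls that don't match its declared inputSchema. Here's a minimal sketch of that check for flat schemas like the one above — a hypothetical helper, not part of the SDK; production servers use a real JSON Schema validator or, as in the TypeScript SDK, zod:

```typescript
// Sketch: validating tool arguments against a flat object inputSchema.
interface Schema {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
}

function validateArgs(schema: Schema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  // Every required property must be present.
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required: ${key}`);
  }
  // Every supplied property must be declared and match its primitive type.
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) {
      errors.push(`unknown argument: ${key}`);
      continue;
    }
    if (typeof value !== prop.type) errors.push(`${key}: expected ${prop.type}`);
  }
  return errors;
}
```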

3. Tool Execution

When the AI decides to use a tool, it sends a call:

// Client → Server: Call tool
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_vinted",
    "arguments": {
      "query": "Nike Air Force 1",
      "country": "fr",
      "maxPrice": 30
    }
  }
}

// Server → Client: Results
{
  "result": {
    "content": [
      {
        "type": "text",
        "text": "[{\"title\":\"Nike AF1 - Taille 42\",\"price\":22,\"country\":\"FR\"}...]"
      }
    ]
  }
}

The AI then interprets the results and responds to the user in natural language: "I found 12 Nike AF1 listings in France under €30. The cheapest is €18 in size 43, posted 2 hours ago..."
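On the client side, pulling structured data back out of that result is a few lines. A sketch, assuming the single text-content shape shown above:

```typescript
// Sketch: extracting the JSON payload from a tools/call result whose
// content is a "text" item carrying serialized listings.
interface ToolResult {
  content: { type: string; text?: string }[];
}

function parseListings(result: ToolResult): unknown[] {
  // Find the first text item; fall back to an empty array if none exists.
  const text = result.content.find((c) => c.type === "text")?.text ?? "[]";
  return JSON.parse(text) as unknown[];
}
```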

Building Your First MCP Server

Here's a minimal MCP server in TypeScript using the official SDK:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-first-mcp-server",
  version: "1.0.0",
});

// Define a tool
server.tool(
  "get_weather",
  "Get current weather for a city",
  {
    city: z.string().describe("City name"),
    unit: z.enum(["celsius", "fahrenheit"]).optional().default("celsius"),
  },
  async ({ city, unit }) => {
    // Your logic here — call a weather API, database, etc.
    const temp = unit === "celsius" ? "22°C" : "72°F";
    return {
      content: [
        { type: "text", text: `Weather in ${city}: ${temp}, partly cloudy` }
      ],
    };
  }
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

Register in Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "my-weather": {
      "command": "npx",
      "args": ["tsx", "/path/to/your/server.ts"]
    }
  }
}

Restart Claude Desktop. You can now ask: "What's the weather in Paris?" and Claude will call your MCP server automatically.

Real-World Example: Vinted MCP Server

The Vinted MCP Server is a production MCP server that connects Vinted marketplace data to any MCP-compatible AI tool. It exposes tools for searching listings, comparing prices across countries, and analyzing market trends.

What You Can Do

Instead of manually configuring scraper parameters, you ask questions in natural language:

You: "Find me PS5 consoles under €170 in Germany and compare with Netherlands prices"

Claude (via MCP): "I found 23 PS5 listings in Germany averaging €153 and 18 in the Netherlands averaging €224. The price gap is 46% — buying in Germany and reselling in the Netherlands yields roughly €50-70 profit per unit after shipping."

Installation

npm install -g vinted-mcp-server

Or configure in Claude Desktop:

{
  "mcpServers": {
    "vinted": {
      "command": "npx",
      "args": ["-y", "vinted-mcp-server"],
      "env": {
        "APIFY_TOKEN": "your_apify_token"
      }
    }
  }
}

The full source code is available on GitHub and npm.

For a complete walkthrough of what you can do with it, read How to Use Vinted Data in Claude, Cursor, and Any AI Tool and I Built an MCP Server for Vinted — Here's How Cross-Border Price Gaps Make You Money.

MCP Clients: Where You Can Use MCP

Claude Desktop

Anthropic's flagship app has native MCP support. Add servers to claude_desktop_config.json and Claude can use them in any conversation.

Cursor

The AI-first code editor supports MCP servers, allowing you to connect databases, APIs, and custom tools directly into your coding workflow.

Windsurf

Codeium's IDE with built-in MCP client. Configure servers the same way as Claude Desktop.

Continue.dev

Open-source AI coding assistant with MCP support for VS Code and JetBrains.

Custom Applications

Using the MCP SDK, you can build MCP clients into any Node.js or Python application.

MCP vs Traditional API Integrations

| Aspect | Traditional API | MCP |
| --- | --- | --- |
| Integration effort | Custom code per API per AI tool | One server, all AI tools |
| Discovery | Manual documentation reading | Automatic tool listing |
| Schema | Varies (REST, GraphQL, SOAP) | Standardized JSON-RPC 2.0 |
| AI awareness | AI needs custom prompting | AI natively understands tools |
| Reusability | Platform-locked | Universal across MCP clients |
| Maintenance | N integrations to maintain | 1 server to maintain |
| User experience | Developer configures | End user asks questions |

The key insight: APIs are designed for developers. MCP is designed for AI models. An API returns raw data that a developer must interpret and present. An MCP server returns structured data that an AI model interprets and presents to the user in natural language.

graph TD
    subgraph "Traditional: Developer in the Loop"
    A1[User has question] --> A2[Developer writes code]
    A2 --> A3[Code calls API]
    A3 --> A4[Developer parses response]
    A4 --> A5[Developer builds UI]
    A5 --> A6[User gets answer]
    end

    subgraph "MCP: AI in the Loop"
    B1[User asks question] --> B2[AI calls MCP tool]
    B2 --> B3[MCP server returns data]
    B3 --> B4[AI interprets & responds]
    B4 --> B5[User gets answer]
    end

The MCP Ecosystem in 2026

The MCP ecosystem has grown rapidly since Anthropic's November 2024 launch. Key developments:

  • 500+ community MCP servers on the official registry
  • Database servers — PostgreSQL, MySQL, MongoDB, SQLite
  • Cloud service servers — AWS, GCP, Azure management
  • Developer tool servers — GitHub, GitLab, Jira, Linear
  • Data servers — web scrapers, market data, weather, news
  • File system servers — local files, Google Drive, Dropbox

For developers building AI applications, MCP is becoming the standard way to give AI models access to external tools. Early movers in the MCP ecosystem — both server builders and adopters — have an advantage as the ecosystem matures.

Specific MCP servers already driving real business value:

  • Vinted MCP Server — marketplace price research via natural language
  • GitHub MCP Server — repository management through AI conversation
  • PostgreSQL MCP Server — database queries in plain English

FAQ

What does MCP stand for?

MCP stands for Model Context Protocol. It's an open standard created by Anthropic for connecting AI models to external data sources and tools. The "context" in the name refers to giving AI models the ability to access contextual data beyond their training — real-time information, databases, APIs, and user-specific data.

Is MCP only for Claude?

No. MCP is an open standard that any AI application can implement. While Anthropic created it, MCP is supported by Claude Desktop, Cursor, Windsurf, Continue.dev, and any application using the open-source MCP SDKs. It's designed to be model-agnostic — an MCP server works with any MCP client regardless of the underlying AI model.

How is MCP different from OpenAI function calling?

OpenAI function calling is a proprietary feature specific to OpenAI's API. MCP is an open standard that works across any AI platform. Function calling defines tools inline with each API request; MCP defines tools in a separate server that any client can discover and use. MCP also supports resources (read-only data) and prompts (reusable templates), which function calling doesn't offer.

Do I need to be a developer to use MCP?

To use existing MCP servers, you need basic technical skills — installing npm packages and editing a JSON config file. Most MCP servers can be set up in under 5 minutes. To build an MCP server, you need programming experience in TypeScript or Python. The official SDK provides helpers that make building a server straightforward.

What programming languages support MCP?

The official MCP SDKs are available for TypeScript/JavaScript and Python. Community implementations exist for Go, Rust, and Java. TypeScript is the most mature SDK with the broadest adoption.

Is MCP secure?

MCP servers run locally by default (via stdio transport), meaning data never leaves your machine. For remote servers (SSE transport), security depends on the server implementation — use HTTPS, authenticate API tokens, and review server code before granting access. The MCP specification includes capability negotiation so clients and servers agree on what's allowed.

Can I build my own MCP server?

Yes. The MCP TypeScript SDK lets you build a server in under 50 lines of code. Define your tools, implement the logic, and register the server in your MCP client's configuration. See our code example above for a minimal working server.

How does MCP handle rate limiting?

MCP itself doesn't define rate limiting — that's left to individual server implementations. A well-built MCP server should implement its own rate limiting based on the underlying API's constraints. The Vinted MCP Server, for example, respects Apify's rate limits and caches recent results to minimize API calls.
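The caching half of that can be as little as an in-memory map with a time-to-live. An illustrative sketch of the idea (not the Vinted server's actual implementation):

```typescript
// Sketch: a tiny TTL cache for memoizing recent tool results so repeated
// identical queries don't hit the upstream API (illustrative only).
function makeCache<T>(ttlMs: number) {
  const store = new Map<string, { value: T; expires: number }>();
  return {
    get(key: string): T | undefined {
      const hit = store.get(key);
      // Serve the cached value only while it is still fresh.
      if (hit && hit.expires > Date.now()) return hit.value;
      store.delete(key);
      return undefined;
    },
    set(key: string, value: T): void {
      store.set(key, { value, expires: Date.now() + ttlMs });
    },
  };
}
```

A server would check the cache before each upstream call, keyed on the serialized tool arguments, and fall through to the API only on a miss.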

What's the relationship between MCP and Apify?

Apify actors can be wrapped in MCP servers, making any Apify scraper accessible through natural language via AI tools. The Vinted MCP Server is an example — it wraps the Vinted Smart Scraper actor in an MCP interface. This pattern can be applied to any of Apify's 2,000+ actors.

Will MCP replace REST APIs?

No. MCP complements REST APIs rather than replacing them. REST APIs are designed for machine-to-machine communication between web services. MCP is designed for AI-to-tool communication. Many MCP servers are thin wrappers around existing REST APIs — they translate natural language queries into API calls and format the responses for AI consumption.

Get Started with MCP

MCP is the infrastructure layer that makes AI tools genuinely useful beyond conversation. Whether you're a developer building AI applications or a power user who wants Claude to access real-world data, MCP is the bridge.

To use MCP:

  1. Install an MCP client (Claude Desktop, Cursor, or Windsurf)
  2. Find an MCP server for your use case (see MCP server registry)
  3. Add it to your client configuration
  4. Start asking questions in natural language

To build an MCP server:

  1. Install the SDK: npm install @modelcontextprotocol/sdk
  2. Define your tools with input schemas
  3. Implement the tool logic
  4. Test with Claude Desktop

Try it now: Install the Vinted MCP Server and ask Claude about cross-country price differences on Vinted. It takes under 5 minutes to set up.


Resources: Vinted MCP Server on Apify | GitHub | npm | Vinted Smart Scraper | MCP Specification
