Alex Spinov
MCP Explained in 5 Minutes — The Protocol That Makes AI Agents Actually Useful

MCP (Model Context Protocol) is everywhere right now. Claude Code uses it. Cursor supports it. Every AI agent framework is adding MCP support.

But most explanations make it way more complicated than it needs to be. Here's the simple version.

What MCP Is (In One Sentence)

MCP is a standard way for AI models to connect to external tools and data — like USB, but for AI.

The Problem MCP Solves

Before MCP, every AI integration was custom:

Claude + GitHub → custom code
Claude + Slack → different custom code
Claude + Database → yet another custom integration
GPT + GitHub → completely different custom code

With MCP:

Any AI + MCP Server (GitHub) → works
Any AI + MCP Server (Slack) → works
Any AI + MCP Server (Database) → works

Write the integration once, use it with any AI model.

How It Works

┌──────────┐     MCP Protocol     ┌──────────────┐
│  AI Host │ ◄──────────────────► │  MCP Server  │
│ (Claude) │   JSON-RPC over      │ (GitHub,     │
│          │   stdio/SSE          │  Slack, DB)  │
└──────────┘                      └──────────────┘

The MCP server exposes tools (functions the AI can call) and resources (data the AI can read). The AI host connects to multiple servers simultaneously.
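On the wire it's all plain JSON-RPC 2.0. Here's a rough sketch of what a single tool call looks like (shapes simplified — a real session also starts with an `initialize` handshake, and responses carry more fields):

```python
import json

# What the host sends when the model wants to invoke a tool (method: tools/call)
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# What the server sends back: a result holding a list of content blocks
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Weather in Berlin: 18C"}]},
}

# Over stdio, these are serialized as newline-delimited JSON messages
wire_out = json.dumps(request)
reply = json.loads(json.dumps(response))
print(reply["result"]["content"][0]["text"])
```

That's the whole trick: any host that can speak this JSON shape can talk to any server that answers it.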

Build a Simple MCP Server (Python)

import asyncio

import httpx
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

app = Server("weather-server")

@app.list_tools()
async def list_tools():
    return [
        Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name, arguments):
    if name == "get_weather":
        city = arguments["city"]
        async with httpx.AsyncClient() as client:
            # Resolve the city name to coordinates via Open-Meteo's geocoding API
            geo = await client.get(
                "https://geocoding-api.open-meteo.com/v1/search",
                params={"name": city, "count": 1},
            )
            place = geo.json()["results"][0]
            # Fetch current weather for those coordinates
            resp = await client.get(
                "https://api.open-meteo.com/v1/forecast",
                params={
                    "latitude": place["latitude"],
                    "longitude": place["longitude"],
                    "current_weather": True,
                },
            )
            weather = resp.json()["current_weather"]
            return [TextContent(
                type="text",
                text=f"Weather in {city}: {weather['temperature']}C, "
                     f"wind {weather['windspeed']} km/h"
            )]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # stdio_server() yields (read, write) streams; run the server over them
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
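Those `@app.list_tools()` and `@app.call_tool()` decorators are just registering handlers that the SDK dispatches to when the matching JSON-RPC method arrives. A toy re-implementation of that pattern — not the real SDK, just the idea:

```python
import asyncio

class ToyServer:
    """Minimal stand-in for mcp.server.Server: decorators register handlers
    keyed by JSON-RPC method name; dispatch() routes incoming calls."""
    def __init__(self, name):
        self.name = name
        self._handlers = {}

    def list_tools(self):
        def register(fn):
            self._handlers["tools/list"] = fn
            return fn
        return register

    def call_tool(self):
        def register(fn):
            self._handlers["tools/call"] = fn
            return fn
        return register

    async def dispatch(self, method, params):
        # Route an incoming JSON-RPC method to the registered handler
        if method == "tools/call":
            return await self._handlers[method](params["name"], params["arguments"])
        return await self._handlers[method]()

app = ToyServer("demo")

@app.list_tools()
async def list_tools():
    return ["echo"]

@app.call_tool()
async def call_tool(name, arguments):
    return f"{name}: {arguments['text']}"

result = asyncio.run(app.dispatch("tools/call", {"name": "echo", "arguments": {"text": "hi"}}))
print(result)  # echo: hi
```

The real SDK adds schema validation, error handling, and the stdio/SSE transport on top, but the core is this registry-and-dispatch shape.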

Use It With Claude Code

Add it to your Claude Code config (e.g. a `.mcp.json` file in your project root):

{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["weather_server.py"]
    }
  }
}

Now Claude can check the weather as part of any conversation.

Real-World MCP Servers

Here are MCP servers that are actually useful:

| Server | What It Does |
| --- | --- |
| Playwright MCP | Browser automation |
| GitHub MCP | Repo management, PRs, issues |
| Notion MCP | Read/write Notion pages |
| Slack MCP | Send/read messages |
| PostgreSQL MCP | Query databases |
| Filesystem MCP | Read/write files |

I've curated 15+ MCP servers that work out of the box.

Why MCP Will Win

  1. Anthropic backs it — and other companies are adopting it
  2. It's simple — JSON-RPC over stdio. No complex auth.
  3. It's composable — connect multiple servers to one AI
  4. It separates concerns — tool makers build servers, AI makers build hosts
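Composability in practice is just more entries in the same config. A sketch of a multi-server setup — the `npx` package names here are illustrative and may have changed, so check each server's README before copying:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["weather_server.py"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```

With that config, one conversation can read local files, open a PR, and check the weather — each capability coming from a different server.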

Are you building with MCP? What servers have you found most useful?

