Build Your First MCP Server in Python: Give AI Superpowers in 30 Minutes

MCP (Model Context Protocol) is the hottest thing in AI tooling right now. It lets AI assistants like Claude connect to your local tools, APIs, and databases. Think of it as USB-C for AI — one standard protocol to plug anything in.

Here's how to build one from scratch. No frameworks, no magic. Just Python.


What is MCP?

MCP is a protocol that lets AI models:

  • Read data from your systems through resources (databases, files, APIs)
  • Execute actions through tools (run scripts, send messages, create files)
  • Use prompts, reusable templates that guide common workflows

Without MCP, your AI is limited to its training data. With MCP, it can interact with the real world.

┌──────────┐     MCP Protocol     ┌──────────────┐
│  AI Host │ ◄──────────────────► │  MCP Server  │
│ (Claude) │   JSON-RPC over      │  (Your Code) │
└──────────┘   stdio/HTTP         └──────┬───────┘
                                         │
                                    ┌────▼────┐
                                    │ Your    │
                                    │ Tools,  │
                                    │ Data,   │
                                    │ APIs    │
                                    └─────────┘
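Under the hood, every arrow in that diagram is a JSON-RPC 2.0 message. Here's a rough sketch of what a tool call looks like on the wire (the method and result shape follow the MCP spec; the id and argument values are made up for illustration):

```python
import json

# What the host sends when the model decides to call a tool
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calculate",
        "arguments": {"expression": "2+2"},
    },
}

# Roughly what your server answers with: a matching id and a list of content blocks
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "2+2 = 4"}],
    },
}

print(json.dumps(request, indent=2))
```

You never build these by hand; the SDK does it for you. But knowing the shape helps when you're staring at raw protocol traffic in a debugger.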

Prerequisites

pip install mcp

That's it. One dependency: the official MCP Python SDK. Note that it requires Python 3.10 or newer.


Step 1: The Simplest MCP Server

Let's build a server that gives AI access to a simple calculator:

# calculator_server.py
from mcp.server import Server
from mcp.types import Tool, TextContent
import mcp.server.stdio

server = Server("calculator")


@server.list_tools()
async def list_tools():
    """Tell the AI what tools are available."""
    return [
        Tool(
            name="calculate",
            description="Evaluate an arithmetic expression. Examples: '2+2', '3**4', '(1+5)*2'",
            inputSchema={
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The math expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        )
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict):
    """Execute a tool when the AI calls it."""
    if name == "calculate":
        expr = arguments["expression"]

        # Safety: allow only digits and arithmetic operators, so no names,
        # attributes, or imports can ever reach eval()
        allowed = set("0123456789+-*/.() ")
        if not all(c in allowed for c in expr):
            return [TextContent(type="text", text="Error: invalid characters in expression")]

        try:
            result = eval(expr)  # restricted by the whitelist above
            return [TextContent(type="text", text=f"{expr} = {result}")]
        except Exception as e:
            return [TextContent(type="text", text=f"Error: {e}")]

    return [TextContent(type="text", text=f"Unknown tool: {name}")]


async def main():
    async with mcp.server.stdio.stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())


if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Run it: python calculator_server.py. The process will sit silently waiting for JSON-RPC on stdin; that's normal for a stdio server.

That's a working MCP server. The AI can now call calculate("2+2") and get 4.
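If you want to sanity-check the validation logic without any MCP plumbing, the core of the handler boils down to this standalone sketch (not part of the server file, just the same whitelist-then-eval idea in isolation):

```python
def calculate(expr: str) -> str:
    # Whitelist: digits and arithmetic operators only, so names like
    # __import__ or os can never reach eval()
    allowed = set("0123456789+-*/.() ")
    if not all(c in allowed for c in expr):
        return "Error: invalid characters in expression"
    try:
        return f"{expr} = {eval(expr)}"
    except Exception as e:
        return f"Error: {e}"

print(calculate("2+2"))               # 2+2 = 4
print(calculate("__import__('os')"))  # Error: invalid characters in expression
```

Note the second call: anything containing letters or quotes is rejected before eval() ever sees it.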


Step 2: Add Resources (Read-Only Data)

Resources let the AI read data without executing code:

from mcp.types import Resource
import json

# Sample data our AI can access
CONTACTS = {
    "alice": {"email": "alice@example.com", "role": "Engineer"},
    "bob": {"email": "bob@example.com", "role": "Designer"},
    "carol": {"email": "carol@example.com", "role": "PM"}
}


@server.list_resources()
async def list_resources():
    return [
        Resource(
            uri="contacts://list",
            name="Contact List",
            description="All contacts in the system",
            mimeType="application/json"
        )
    ]


@server.read_resource()
async def read_resource(uri):
    # The SDK may hand you a pydantic AnyUrl rather than a plain str,
    # so compare against its string form
    if str(uri) == "contacts://list":
        return json.dumps(CONTACTS, indent=2)
    raise ValueError(f"Unknown resource: {uri}")

Now the AI can read your contact list and use it in conversations.
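Resources don't have to be one big blob, either. A common pattern is one URI per record. Here's a sketch of that idea with plain functions (the per-contact URIs are my illustration, not part of the server above):

```python
import json

CONTACTS = {
    "alice": {"email": "alice@example.com", "role": "Engineer"},
    "bob": {"email": "bob@example.com", "role": "Designer"},
}

def list_contact_uris() -> list[str]:
    # One resource URI per contact, e.g. "contacts://alice"
    return [f"contacts://{name}" for name in CONTACTS]

def read_contact(uri: str) -> str:
    # Strip the scheme to recover the contact key
    name = uri.removeprefix("contacts://")
    if name not in CONTACTS:
        raise ValueError(f"Unknown contact: {name}")
    return json.dumps(CONTACTS[name], indent=2)

print(list_contact_uris())  # ['contacts://alice', 'contacts://bob']
```

In a real server you'd return Resource objects from list_resources() and wire read_contact into read_resource(), but the URI-to-record mapping is the whole trick.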


Step 3: A Practical Example — File Search Server

Here's something actually useful: an MCP server that searches your local files:

# file_search_server.py
from mcp.server import Server
from mcp.types import Tool, TextContent
from pathlib import Path
import mcp.server.stdio

server = Server("file-search")

# Configure which directories to search
SEARCH_DIRS = [
    Path.home() / "Documents",
    Path.home() / "Projects",
]


@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="search_files",
            description="Search for files by name pattern. Returns matching file paths.",
            inputSchema={
                "type": "object",
                "properties": {
                    "pattern": {
                        "type": "string",
                        "description": "Glob pattern, e.g. '*.py', '**/*.md', 'README*'"
                    },
                    "max_results": {
                        "type": "integer",
                        "description": "Maximum results to return (default 20)",
                        "default": 20
                    }
                },
                "required": ["pattern"]
            }
        ),
        Tool(
            name="read_file",
            description="Read the contents of a file.",
            inputSchema={
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Absolute path to the file"
                    }
                },
                "required": ["path"]
            }
        ),
        Tool(
            name="grep",
            description="Search for a text pattern inside files.",
            inputSchema={
                "type": "object",
                "properties": {
                    "pattern": {
                        "type": "string",
                        "description": "Text to search for"
                    },
                    "file_glob": {
                        "type": "string",
                        "description": "File pattern to search in, e.g. '*.py'",
                        "default": "*"
                    }
                },
                "required": ["pattern"]
            }
        )
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "search_files":
        pattern = arguments["pattern"]
        max_results = arguments.get("max_results", 20)
        matches = []

        for search_dir in SEARCH_DIRS:
            # Stop scanning further directories once we hit the cap;
            # the inner break alone would only end the current directory
            if len(matches) >= max_results:
                break
            if search_dir.exists():
                for match in search_dir.rglob(pattern):
                    matches.append(str(match))
                    if len(matches) >= max_results:
                        break

        result = "\n".join(matches) if matches else "No files found"
        return [TextContent(type="text", text=result)]

    elif name == "read_file":
        # resolve() collapses ".." segments, so a path like
        # ~/Documents/../../etc/passwd can't sneak past the check below
        path = Path(arguments["path"]).resolve()

        # Security: only allow reading from configured directories
        if not any(path.is_relative_to(d) for d in SEARCH_DIRS):
            return [TextContent(type="text",
                    text="Error: Path outside allowed directories")]

        if not path.exists():
            return [TextContent(type="text", text=f"File not found: {path}")]

        content = path.read_text(errors="replace")
        # Truncate large files
        if len(content) > 10000:
            content = content[:10000] + "\n\n... (truncated)"

        return [TextContent(type="text", text=content)]

    elif name == "grep":
        pattern = arguments["pattern"]
        file_glob = arguments.get("file_glob", "*")
        matches = []

        for search_dir in SEARCH_DIRS:
            if not search_dir.exists():
                continue
            for file_path in search_dir.rglob(file_glob):
                # Enforce the 50-match cap across all files and directories,
                # not just within a single file
                if len(matches) >= 50:
                    break
                if not file_path.is_file():
                    continue
                try:
                    content = file_path.read_text(errors="replace")
                    for i, line in enumerate(content.splitlines(), 1):
                        if pattern.lower() in line.lower():
                            matches.append(f"{file_path}:{i}: {line.strip()}")
                            if len(matches) >= 50:
                                break
                except (OSError, UnicodeDecodeError):
                    continue

        result = "\n".join(matches) if matches else f"No matches for '{pattern}'"
        return [TextContent(type="text", text=result)]

    return [TextContent(type="text", text=f"Unknown tool: {name}")]


async def main():
    async with mcp.server.stdio.stdio_server() as (read, write):
        await server.run(read, write, server.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
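One subtlety in the glob tools above: Path.rglob(pattern) effectively prepends "**/" to the pattern itself, so rglob("*.py") already searches recursively. A throwaway demo in a temp directory:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "sub").mkdir()
    (root / "a.py").write_text("print('a')")
    (root / "sub" / "b.py").write_text("print('b')")

    # rglob is recursive by default: both files match "*.py",
    # including the one nested under sub/
    found = sorted(p.name for p in root.rglob("*.py"))
    print(found)  # ['a.py', 'b.py']
```

So when the AI passes '**/*.md' it still works, but a plain '*.md' would have found the nested files too.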

Step 4: Connect to Claude Desktop

Add your server to Claude Desktop's config file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "file-search": {
      "command": "python",
      "args": ["/path/to/file_search_server.py"]
    },
    "calculator": {
      "command": "python",
      "args": ["/path/to/calculator_server.py"]
    }
  }
}

Restart Claude Desktop. Your tools now appear in the conversation.


Step 5: Testing Without Claude

You don't need Claude to test your server. Use the MCP inspector:

npx @modelcontextprotocol/inspector python calculator_server.py

This opens a web UI where you can:

  • List tools and resources
  • Call tools with test inputs
  • See raw JSON-RPC messages

Security Best Practices

MCP servers run on your machine with your permissions. Be careful:

# GOOD: Restrict file access to specific directories
ALLOWED_DIRS = [Path.home() / "Documents"]

# BAD: Allow reading any file
# path = Path(arguments["path"])  # Could read /etc/passwd

# GOOD: Validate and sanitize inputs
if not all(c in allowed_chars for c in expr):
    return error

# BAD: Blindly eval() user input
# result = eval(arguments["code"])  # Remote code execution!

# GOOD: Set resource limits
if len(content) > MAX_SIZE:
    content = content[:MAX_SIZE] + "... (truncated)"
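One more detail on that file-access rule: Path.is_relative_to() is a purely lexical check, so combine it with resolve() to defuse ".." tricks. A sketch with a made-up allowed root:

```python
from pathlib import Path

ALLOWED_DIRS = [Path("/tmp/docs")]  # hypothetical allowed root for this demo

def is_allowed(raw: str) -> bool:
    # resolve() collapses ".." (and follows symlinks) before the containment
    # check; a purely lexical is_relative_to() would wave through
    # "/tmp/docs/../../etc/passwd" because the string starts with /tmp/docs
    p = Path(raw).resolve()
    return any(p.is_relative_to(d.resolve()) for d in ALLOWED_DIRS)

print(is_allowed("/tmp/docs/notes.txt"))         # True: inside the allowed root
print(is_allowed("/tmp/docs/../../etc/passwd"))  # False after resolution
```

This is the difference between a speed bump and an actual fence.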

Real-World MCP Server Ideas

Here are practical servers you can build:

  Server           What it does                        Complexity
  ───────────────  ──────────────────────────────────  ──────────
  Todoist MCP      AI manages your todo list           Easy
  Git MCP          AI reads repo history, diffs        Medium
  Database MCP     AI queries your PostgreSQL/SQLite   Medium
  Slack MCP        AI reads/sends Slack messages       Medium
  Email MCP        AI reads/drafts emails              Medium
  Monitoring MCP   AI checks server health             Easy
  Calendar MCP     AI manages your schedule            Medium

Common Gotchas

  1. Async everywhere: MCP is async. Use async def for all handlers.
  2. Schema validation: If your inputSchema is wrong, the AI won't call your tool correctly.
  3. Error handling: Always return TextContent with error messages, never raise exceptions.
  4. Large responses: Truncate output. AI models have context limits.
  5. stdin/stdout: The stdio transport uses stdin/stdout for communication. Don't print() debug info — it'll corrupt the protocol. Use logging with stderr instead.
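For gotcha #5, here's a minimal stderr-only logging setup using just the standard library (the "calculator" logger name is arbitrary):

```python
import logging
import sys

# stdout carries the JSON-RPC stream, so all diagnostics must go to stderr
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)

log = logging.getLogger("calculator")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("server starting")  # written to stderr; stdout stays clean
```

Drop this at the top of your server file and use log.info() everywhere you'd be tempted to print().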

What's Next

  • Add authentication for sensitive operations
  • Implement streaming for long-running tools
  • Build composite servers that combine multiple data sources
  • Add prompts (pre-built templates) for common workflows
  • Publish your server on the MCP server registry

TL;DR

  1. pip install mcp
  2. Define tools with @server.list_tools() and @server.call_tool()
  3. Add to Claude Desktop config
  4. Your AI now has superpowers

MCP is the bridge between AI and your world. Build the bridge.


Built something cool? Tag #MCPServer and share it with the community.
