DEV Community

klement Gunndu
Build Your First MCP Server in Python — 3 Patterns You Need

Your LLM can generate text. It cannot read your database, call your API, or check your calendar. That gap between "knows language" and "does useful work" is what the Model Context Protocol (MCP) closes.

MCP is a standardized protocol — maintained by Anthropic — that lets LLM-powered applications connect to external data and tools through servers you build. One protocol, any client. Claude Desktop, Cursor, VS Code Copilot, and custom agents all speak the same language.

This tutorial builds three MCP servers from scratch. Each one teaches a different primitive: tools (functions the LLM can call), resources (data the LLM can read), and prompts (reusable templates that guide the LLM). By the end, you will have working code for all three.

Setup: One Install, One Import

The official Python SDK ships everything you need. Install it:

pip install "mcp[cli]"

As of v1.26.0, the SDK includes FastMCP — a high-level class that handles JSON-RPC protocol details, schema generation from type hints, and transport negotiation. You write Python functions. FastMCP turns them into MCP-compliant endpoints.

Every server starts the same way:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

The string "my-server" is your server's name — clients display it when listing available connections.

Pattern 1: Tools — Let the LLM Call Your Functions

Tools are the most common MCP primitive. The LLM sees a tool's name, description, and parameter schema. When it decides a tool is relevant, it calls it. The result goes back into the conversation.

Here is a complete MCP server that exposes two tools — one for fetching the current price of a stock ticker, and one for calculating compound interest:

from mcp.server.fastmcp import FastMCP
import httpx

mcp = FastMCP("finance-tools")


@mcp.tool()
async def get_stock_price(ticker: str) -> str:
    """Get the current stock price for a given ticker symbol.

    Args:
        ticker: Stock ticker symbol (e.g. AAPL, GOOGL, MSFT)
    """
    url = f"https://query1.finance.yahoo.com/v8/finance/chart/{ticker}"
    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(
                url,
                headers={"User-Agent": "MCP-Server/1.0"},
                timeout=10.0,
            )
    except httpx.HTTPError as exc:
        # Network failures and timeouts should not crash the server
        return f"Error: Could not fetch data for {ticker} ({exc})"

    if response.status_code != 200:
        return f"Error: Could not fetch data for {ticker}"

    meta = response.json()["chart"]["result"][0]["meta"]
    return f"{ticker.upper()}: {meta['regularMarketPrice']} {meta['currency']}"


@mcp.tool()
def compound_interest(
    principal: float,
    annual_rate: float,
    years: int,
    compounds_per_year: int = 12,
) -> str:
    """Calculate compound interest on an investment.

    Args:
        principal: Initial investment amount in dollars
        annual_rate: Annual interest rate as a decimal (e.g. 0.05 for 5%)
        years: Number of years to compound
        compounds_per_year: How many times interest compounds per year
    """
    amount = principal * (1 + annual_rate / compounds_per_year) ** (
        compounds_per_year * years
    )
    interest = amount - principal
    return (
        f"Principal: ${principal:,.2f}\n"
        f"Rate: {annual_rate:.1%}\n"
        f"Period: {years} years\n"
        f"Final amount: ${amount:,.2f}\n"
        f"Interest earned: ${interest:,.2f}"
    )


if __name__ == "__main__":
    mcp.run(transport="stdio")

Three things to notice:

  1. Type hints become the schema. ticker: str tells the LLM this parameter is a string. compounds_per_year: int = 12 means the parameter is optional with a default. FastMCP reads your signature and generates the JSON Schema automatically.

  2. Docstrings become the description. The LLM reads the docstring to decide when to call the tool. Write clear, specific descriptions. "Get the current stock price" is better than "Stock function."

  3. Sync and async both work. compound_interest is synchronous. get_stock_price is async. FastMCP handles both.
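To make point 1 concrete, here is a hand-written approximation of the JSON Schema that FastMCP derives from compound_interest's signature. This is an illustrative sketch — the SDK's actual output may name or nest fields differently:

```python
import json

# Hand-written approximation of the schema FastMCP generates from
# compound_interest(principal: float, annual_rate: float,
#                   years: int, compounds_per_year: int = 12)
schema = {
    "type": "object",
    "properties": {
        "principal": {"type": "number"},
        "annual_rate": {"type": "number"},
        "years": {"type": "integer"},
        # Default value makes the parameter optional
        "compounds_per_year": {"type": "integer", "default": 12},
    },
    "required": ["principal", "annual_rate", "years"],
}

print(json.dumps(schema, indent=2))
```

The LLM never sees your Python source — only this schema plus the docstring, which is why both need to be precise.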

Running and Testing

Start the server:

python finance_server.py

Or use the MCP CLI inspector for interactive testing:

mcp dev finance_server.py

The inspector opens a browser UI where you can call tools, inspect schemas, and see responses — without connecting a full LLM client.

Pattern 2: Resources — Let the LLM Read Your Data

Resources are read-only data endpoints. The LLM (or the user through the client) requests a resource by URI, and your server returns the content. Resources work like GET endpoints in a REST API.

Two types exist: static resources with fixed URIs, and resource templates with dynamic segments.

from mcp.server.fastmcp import FastMCP
import json
from pathlib import Path
from datetime import datetime

mcp = FastMCP("project-data")

PROJECT_DIR = Path("./projects")


@mcp.resource("config://app-settings")
def get_app_settings() -> str:
    """Return current application settings."""
    settings = {
        "debug": False,
        "log_level": "INFO",
        "max_retries": 3,
        "timeout_seconds": 30,
        "version": "2.1.0",
    }
    return json.dumps(settings, indent=2)


@mcp.resource("status://health")
def health_check() -> str:
    """Return current system health status."""
    return json.dumps({
        "status": "healthy",
        "timestamp": datetime.now().isoformat(),
        "uptime_hours": 142.5,
        "active_connections": 7,
    })


@mcp.resource("file://projects/{project_name}/readme")
def get_project_readme(project_name: str) -> str:
    """Read the README file for a specific project.

    Args:
        project_name: Name of the project directory
    """
    readme_path = PROJECT_DIR / project_name / "README.md"
    if not readme_path.exists():
        return f"No README found for project '{project_name}'"
    return readme_path.read_text()


@mcp.resource("file://projects/{project_name}/structure")
def get_project_structure(project_name: str) -> str:
    """Get the directory structure of a project.

    Args:
        project_name: Name of the project directory
    """
    project_path = PROJECT_DIR / project_name
    if not project_path.exists():
        return f"Project '{project_name}' not found"

    lines = []
    for item in sorted(project_path.rglob("*")):
        if ".git" in item.parts or "__pycache__" in item.parts:
            continue
        relative = item.relative_to(project_path)
        depth = len(relative.parts) - 1
        prefix = "  " * depth + "├── " if depth > 0 else ""
        lines.append(f"{prefix}{item.name}")
    return "\n".join(lines) if lines else "Empty project"


if __name__ == "__main__":
    mcp.run(transport="stdio")

The URI scheme is yours to define. config://, status://, file:// — pick whatever makes the data hierarchy clear. Clients list available resources and let users or the LLM browse them.

The key difference from tools: resources are not called by the LLM directly. The client application decides when to fetch a resource, often in response to user action or as context injection. Tools are LLM-initiated. Resources are client-initiated.

Pattern 3: Prompts — Reusable Templates for the LLM

Prompts are pre-built templates that guide the LLM toward a specific task. The client exposes them as options the user can select — like slash commands. When a user picks a prompt, the client sends the expanded template as messages.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("dev-prompts")


@mcp.prompt()
def code_review(code: str, language: str = "python") -> str:
    """Review code for bugs, security issues, and improvements.

    Args:
        code: The source code to review
        language: Programming language of the code
    """
    return f"""Review the following {language} code. Focus on:

1. **Bugs**: Logic errors, off-by-one, null references
2. **Security**: Injection, hardcoded secrets, unsafe deserialization
3. **Performance**: Unnecessary allocations, N+1 queries, missing indexes
4. **Readability**: Naming, function length, dead code

For each finding, state the severity (CRITICAL/HIGH/MEDIUM/LOW) and suggest a fix.

```{language}
{code}
```"""


@mcp.prompt()
def explain_error(error_message: str, context: str = "") -> str:
    """Explain an error message and suggest fixes.

    Args:
        error_message: The error message or traceback
        context: Optional context about what the code does
    """
    base_prompt = f"""Explain this error in plain language. Then provide 3 possible fixes, ranked by likelihood.

Error:
{error_message}"""

    if context:
        base_prompt += f"\n\nContext about the code:\n{context}"
    return base_prompt


@mcp.prompt()
def write_tests(
    function_code: str,
    framework: str = "pytest",
    edge_cases: bool = True,
) -> str:
    """Generate tests for a Python function.

    Args:
        function_code: The function to write tests for
        framework: Test framework to use
        edge_cases: Whether to include edge case tests
    """
    content = f"""Write {framework} tests for this function:

```python
{function_code}
```

Requirements:
- Test the happy path with at least 2 examples
- Test input validation (wrong types, missing args)"""

    if edge_cases:
        content += """
- Test edge cases: empty input, None, boundary values
- Test error conditions"""

    content += f"""

Use {framework} conventions. Each test function should test exactly one behavior.
Name tests descriptively: test_<function>_<scenario>_<expected>."""

    return content


if __name__ == "__main__":
    mcp.run(transport="stdio")



Each prompt function returns a string. FastMCP converts it into a user message automatically. For multi-turn prompts with alternating user/assistant messages, the SDK also supports returning a list of message objects — but a string covers most use cases.
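For illustration, a multi-turn prompt might look like the sketch below, with plain dicts standing in for the SDK's message objects (the function name and dict shape here are assumptions for demonstration, not the SDK API):

```python
def debug_error_prompt(error_message: str) -> list[dict]:
    """Sketch: a multi-turn prompt as alternating user/assistant messages.

    Plain dicts stand in for the SDK's message classes.
    """
    return [
        {"role": "user", "content": "I'm seeing this error:"},
        {"role": "user", "content": error_message},
        {"role": "assistant", "content": "Let's debug it. What have you tried so far?"},
    ]
```

The pre-seeded assistant turn steers the model's opening move, which is the main reason to reach for the list form over a single string.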

Connecting to a Client

Your server runs. Now connect it. The standard configuration file for Claude Desktop is claude_desktop_config.json:

{
  "mcpServers": {
    "finance-tools": {
      "command": "python",
      "args": ["/absolute/path/to/finance_server.py"]
    },
    "project-data": {
      "command": "python",
      "args": ["/absolute/path/to/project_server.py"]
    },
    "dev-prompts": {
      "command": "python",
      "args": ["/absolute/path/to/prompts_server.py"]
    }
  }
}

On macOS, this file lives at ~/Library/Application Support/Claude/claude_desktop_config.json. On Windows, it is at %APPDATA%\Claude\claude_desktop_config.json.

After saving and restarting the client, your tools, resources, and prompts appear in the interface.

For Claude Code (the CLI), register MCP servers in your project's .mcp.json file or with the claude mcp add command.

When to Use Each Primitive

| Primitive | Who initiates | Data flow | Use case |
|-----------|---------------|-----------|----------|
| Tool | LLM decides to call | LLM → Server → LLM | Actions: API calls, calculations, database queries |
| Resource | Client/user requests | Server → Client | Context: config files, project structure, system status |
| Prompt | User selects | Server → Client → LLM | Workflows: code review, debugging, test generation |

Most servers start with tools. Add resources when you need to inject context that the LLM should read but not actively call. Add prompts when you find yourself typing the same instructions repeatedly.

Production Considerations

Three things that matter once your server leaves localhost:

Error handling. Every tool should catch exceptions and return a meaningful error string instead of crashing the server. The get_stock_price example returns "Error: Could not fetch data" instead of letting httpx exceptions propagate.
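One way to enforce this consistently is a small wrapper — a sketch, not part of the SDK — that converts any exception into an error string before it reaches the transport:

```python
import functools


def safe_tool(func):
    """Wrap a synchronous tool so exceptions become error strings."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            # A tool should report failure, never crash the server
            return f"Error in {func.__name__}: {exc}"
    return wrapper


@safe_tool
def divide(a: float, b: float) -> str:
    return f"{a / b}"
```

An async tool would need an async version of the wrapper; functools.wraps preserves the name, docstring, and signature so schema generation keeps working.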

Logging. For stdio-based servers, never print to stdout — it corrupts the JSON-RPC messages. Use logging (which defaults to stderr) or print("debug info", file=sys.stderr).
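A minimal stderr-only logging setup, using just the standard library (the logger name here is arbitrary):

```python
import logging
import sys

# Route all log output to stderr so stdout stays clean for JSON-RPC.
# force=True replaces any handlers an imported library already installed.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    force=True,
)
log = logging.getLogger("finance-tools")
log.info("server starting")  # arrives on stderr, not stdout
```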

Transport. The examples use transport="stdio" — the client spawns your server as a subprocess and communicates over stdin/stdout. For remote servers, use transport="streamable-http" instead. The protocol is the same; only the transport layer changes.

What You Can Build From Here

The three patterns compose. A single server can expose tools, resources, and prompts together. A project management server might have:

  • Tools: create_task(), assign_task(), update_status()
  • Resources: project://tasks/active, project://team/members
  • Prompts: sprint_planning(goals), daily_standup(team)

MCP servers are composable across clients. Build once, connect from Claude Desktop, Cursor, VS Code, or your own custom application. The protocol does not change.

The official Python SDK documentation is at modelcontextprotocol.io/docs/develop/build-server. The source code is at github.com/modelcontextprotocol/python-sdk.


Follow @klement_gunndu for more AI engineering content. We're building in public.
