Raghava Joijode

Building Your First MCP Server: A Practical Guide

Part 2 of 3 in the MCP Server Series

In Part 1, we explored what MCP servers are and why they matter. Now it's time to build one.

By the end of this tutorial, you'll have a working MCP server that connects your AI to real data and tools. We'll cover both Python and TypeScript implementations, so you can choose what fits your stack.

Series Navigation:

  • Part 1: Understanding MCP conceptually
  • Part 2 (This post): Building your first MCP server
  • Part 3: Security, performance, and production deployment

Who This Guide Is For

Prerequisites:

  • Basic Python or TypeScript/Node.js knowledge
  • Familiarity with async/await patterns
  • Understanding of JSON and APIs (REST experience is helpful)
  • 30-45 minutes to follow along

You'll learn:

  • How to choose between Python and TypeScript for MCP
  • Building a complete, working MCP server
  • Connecting your server to Claude
  • Testing with Claude Desktop

Python vs TypeScript: Which Should You Use?

This is the most common question I see. Both languages have official MCP SDK support, but they excel in different scenarios.

Python: The AI-Native Choice

Why Python dominates MCP development:

  • FastMCP (part of the official SDK) has 3.7k+ GitHub stars
  • Minimal boilerplate - get servers running in minutes
  • Direct integration with AI/ML libraries (TensorFlow, PyTorch, pandas)
  • Perfect for data science and analytics workflows
  • Cleaner syntax for rapid prototyping

Best for:

  • Data-heavy operations (analytics, ML inference, data transformation)
  • Scientific computing integrations
  • Existing Python ecosystems (Django, FastAPI applications)
  • Teams prioritizing development speed

Popular frameworks:

  • FastMCP - Official, minimal config, best community support
  • FastAPI-MCP - Zero-config for existing FastAPI apps

TypeScript/Node.js: The Integration Powerhouse

Why TypeScript shines for MCP:

  • Superior I/O performance for concurrent operations
  • Event-driven architecture handles multiple simultaneous AI requests
  • Exceptional protocol handling (STDIO, SSE, WebSocket)
  • Massive npm ecosystem (over two million packages)
  • Perfect for IDE extensions and developer tools

Best for:

  • High-concurrency scenarios (many AI agents making requests)
  • Real-time streaming applications
  • VS Code extensions and developer tooling
  • Orchestrating multiple external APIs
  • Network-heavy operations

Popular frameworks:

  • EasyMCP - Express-like API, great developer experience
  • Official MCP TypeScript SDK - Production-ready reference

My Recommendation

Choose Python if:

  • You're processing data, running ML models, or doing scientific computing
  • Your team works primarily in Python
  • You value faster development and cleaner code

Choose TypeScript if:

  • You need high concurrency and raw I/O performance
  • You're building IDE extensions or developer tools
  • You're primarily orchestrating external API calls

Reality check: Both work excellently. For this tutorial, I'll show Python (it's more beginner-friendly), but I'll link to TypeScript equivalents.

Building the Customer Data MCP Server (Python)

Let's build a practical MCP server that gives AI access to customer data. This mirrors real-world use cases like customer support, CRM integration, or sales automation.

Step 1: Install Dependencies

# Create a new directory
mkdir customer-mcp-server
cd customer-mcp-server

# Install MCP SDK
pip install mcp

# Optional: Create virtual environment first
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install mcp

Step 2: Create the Server

Create customer_server.py:

from mcp.server import Server
from mcp.types import Tool, TextContent
import mcp.server.stdio
import json

# Sample in-memory database (in production, this would be a real database)
CUSTOMERS = {
    "1": {
        "name": "Alice Johnson",
        "email": "alice@example.com",
        "tier": "premium",
        "orders": 24
    },
    "2": {
        "name": "Bob Smith",
        "email": "bob@example.com",
        "tier": "standard",
        "orders": 5
    },
    "3": {
        "name": "Carol Davis",
        "email": "carol@example.com",
        "tier": "premium",
        "orders": 47
    }
}

# Create the MCP server instance
app = Server("customer-data-server")

@app.list_tools()
async def list_tools() -> list[Tool]:
    """
    Define the tools available to AI.
    This is like writing API documentation that the AI can read.
    """
    return [
        Tool(
            name="get_customer",
            description="Retrieve detailed customer information by their ID. "
                       "Use this when you need to look up a specific customer's "
                       "email, tier status, or order count.",
            inputSchema={
                "type": "object",
                "properties": {
                    "customer_id": {
                        "type": "string",
                        "description": "The customer's unique identifier (e.g., '1', '2', '3')"
                    }
                },
                "required": ["customer_id"]
            }
        ),
        Tool(
            name="list_premium_customers",
            description="Get a list of all customers with premium tier status. "
                       "Useful for identifying high-value customers or sending "
                       "premium-only communications.",
            inputSchema={
                "type": "object",
                "properties": {}
            }
        ),
        Tool(
            name="search_customers",
            description="Search for customers by name (case-insensitive). "
                       "Returns all customers whose names contain the search query.",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The name or partial name to search for"
                    }
                },
                "required": ["query"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """
    Handle tool execution requests from the AI.
    This is where the actual work happens.
    """

    if name == "get_customer":
        customer_id = arguments["customer_id"]
        customer = CUSTOMERS.get(customer_id)

        if customer:
            result = json.dumps(customer, indent=2)
        else:
            result = f"Customer with ID '{customer_id}' not found"

        return [TextContent(type="text", text=result)]

    elif name == "list_premium_customers":
        premium = [
            {"id": cid, **cdata}
            for cid, cdata in CUSTOMERS.items()
            if cdata["tier"] == "premium"
        ]
        return [TextContent(type="text", text=json.dumps(premium, indent=2))]

    elif name == "search_customers":
        query = arguments["query"].lower()
        results = [
            {"id": cid, **cdata}
            for cid, cdata in CUSTOMERS.items()
            if query in cdata["name"].lower()
        ]

        if results:
            result_text = json.dumps(results, indent=2)
        else:
            result_text = f"No customers found matching '{arguments['query']}'"

        return [TextContent(type="text", text=result_text)]

    else:
        raise ValueError(f"Unknown tool: {name}")

async def main():
    """
    Run the MCP server using stdio transport.
    This allows it to communicate with AI clients through standard input/output.
    """
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await app.run(
            read_stream,
            write_stream,
            app.create_initialization_options()
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Step 3: Test the Server Standalone

Before connecting to AI, let's verify it runs:

python customer_server.py

The server will start and wait for input. If there are no errors, you're good! Press Ctrl+C to stop it.

What we just built:

  • ✅ Three tools the AI can use
  • ✅ Clear descriptions so AI knows when to use each tool
  • ✅ Type-safe input validation
  • ✅ Helpful error messages
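Under the hood, clients discover these tools with a JSON-RPC `tools/list` request over stdio. Here's a simplified sketch of that round trip — message shapes are abridged for illustration, and the real protocol also includes an initialization handshake:

```python
import json

# Illustrative, abridged JSON-RPC messages -- the real MCP exchange also
# negotiates protocol versions during an initialize handshake.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Roughly what our server answers with (trimmed to one tool):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_customer",
                "description": "Retrieve detailed customer information by their ID.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"customer_id": {"type": "string"}},
                    "required": ["customer_id"],
                },
            }
        ]
    },
}

# Each message travels as a single line of JSON over stdin/stdout.
wire = json.dumps(request)
tool_names = [t["name"] for t in response["result"]["tools"]]
print(wire)
print(tool_names)
```

This is why clear descriptions matter so much: the description string is literally all the AI sees when deciding which tool to call.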

Connecting Your MCP Server to Claude

Now let's connect this server to Claude so it can actually use these tools.

Method 1: Claude Desktop (Easiest)

This is the fastest way to see your MCP server in action.

Step 1: Find your Claude Desktop config file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Step 2: Edit the config (create the file if it doesn't exist):

{
  "mcpServers": {
    "customer-data": {
      "command": "python",
      "args": ["/absolute/path/to/customer_server.py"]
    }
  }
}

Important: Use the absolute path to your Python file!

Step 3: Restart Claude Desktop completely (Quit and reopen)

Step 4: Test it! In Claude Desktop, try:

Who are our premium customers?

Claude should automatically use your list_premium_customers tool and return the results!

Try these prompts:

  • "Look up customer 1"
  • "Search for customers with 'smith' in their name"
  • "What's Alice's email address?"

You'll see Claude automatically choosing the right tools based on your descriptions.

Method 2: Programmatic Connection (For Custom Apps)

If you're building your own AI application, here's how to connect programmatically:

import anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import asyncio
import json

ANTHROPIC_API_KEY = "your-api-key-here"

async def run_ai_with_mcp():
    """Connect Claude to your MCP server programmatically"""

    # 1. Configure the MCP server connection
    server_params = StdioServerParameters(
        command="python",
        args=["customer_server.py"]
    )

    # 2. Start the MCP server and create a session
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:

            # 3. Initialize and discover available tools
            await session.initialize()
            tools_response = await session.list_tools()

            print(f"Connected! Available tools: {[t.name for t in tools_response.tools]}\n")

            # 4. Convert MCP tools to Claude's expected format
            claude_tools = [{
                "name": tool.name,
                "description": tool.description,
                "input_schema": tool.inputSchema
            } for tool in tools_response.tools]

            # 5. Create Claude client
            client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)

            # 6. Send a user query
            user_message = "Can you find all premium customers and tell me their names?"
            print(f"User: {user_message}\n")

            messages = [{"role": "user", "content": user_message}]

            # 7. Conversation loop with tool usage
            while True:
                response = client.messages.create(
                    model="claude-sonnet-4-20250514",
                    max_tokens=4096,
                    tools=claude_tools,
                    messages=messages
                )

                # Add Claude's response to conversation
                messages.append({
                    "role": "assistant",
                    "content": response.content
                })

                # Check if Claude wants to use tools
                if response.stop_reason == "tool_use":
                    tool_results = []

                    # Execute each tool call through MCP
                    for block in response.content:
                        if block.type == "tool_use":
                            print(f"Claude is calling: {block.name}")
                            print(f"With arguments: {json.dumps(block.input, indent=2)}\n")

                            # Execute via MCP server
                            result = await session.call_tool(block.name, block.input)

                            print(f"Result: {result.content[0].text}\n")

                            tool_results.append({
                                "type": "tool_result",
                                "tool_use_id": block.id,
                                "content": result.content[0].text
                            })

                    # Send tool results back to Claude
                    messages.append({
                        "role": "user",
                        "content": tool_results
                    })
                else:
                    # Claude provided final answer
                    final_text = next(
                        (block.text for block in response.content if hasattr(block, "text")),
                        None
                    )

                    if final_text:
                        print(f"Claude: {final_text}")
                    break

if __name__ == "__main__":
    asyncio.run(run_ai_with_mcp())

Save this as test_client.py and run:

python test_client.py

Expected output:

Connected! Available tools: ['get_customer', 'list_premium_customers', 'search_customers']

User: Can you find all premium customers and tell me their names?

Claude is calling: list_premium_customers
With arguments: {}

Result: [
  {
    "id": "1",
    "name": "Alice Johnson",
    ...
  },
  ...
]

Claude: I found 2 premium customers: Alice Johnson and Carol Davis.

TypeScript Version (For Node.js Developers)

If you prefer TypeScript, here's the equivalent server:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const CUSTOMERS: Record<string, any> = {
  "1": { name: "Alice Johnson", email: "alice@example.com", tier: "premium" },
  "2": { name: "Bob Smith", email: "bob@example.com", tier: "standard" },
};

const server = new Server(
  { name: "customer-data-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_customer",
      description: "Retrieve customer information by ID",
      inputSchema: {
        type: "object",
        properties: {
          customer_id: { type: "string", description: "Customer's unique ID" }
        },
        required: ["customer_id"]
      }
    },
    {
      name: "list_premium_customers",
      description: "Get all premium tier customers",
      inputSchema: { type: "object", properties: {} }
    }
  ]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args = {} } = request.params;

  if (name === "get_customer") {
    const customer = CUSTOMERS[args.customer_id as string];
    return {
      content: [{
        type: "text",
        text: customer ? JSON.stringify(customer, null, 2) : "Customer not found"
      }]
    };
  }

  if (name === "list_premium_customers") {
    const premium = Object.entries(CUSTOMERS)
      .filter(([_, data]) => data.tier === "premium")
      .map(([id, data]) => ({ id, ...data }));
    return {
      content: [{ type: "text", text: JSON.stringify(premium, null, 2) }]
    };
  }

  throw new Error(`Unknown tool: ${name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);

Install dependencies:

npm init -y
npm install @modelcontextprotocol/sdk zod

Expanding Your Server: Adding Real Functionality

Our example uses in-memory data, but here's how to connect to real systems:

Connecting to PostgreSQL

import asyncpg

async def get_customer_from_db(customer_id: str):
    conn = await asyncpg.connect('postgresql://user:pass@localhost/db')
    try:
        result = await conn.fetchrow(
            'SELECT * FROM customers WHERE id = $1',
            customer_id
        )
        return dict(result) if result else None
    finally:
        await conn.close()

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_customer":
        customer = await get_customer_from_db(arguments["customer_id"])
        # ... rest of the code

Calling External APIs

import httpx

async def create_support_ticket(title: str, description: str):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            'https://api.your-ticketing-system.com/tickets',
            headers={'Authorization': f'Bearer {TICKET_API_KEY}'},
            json={'title': title, 'description': description}
        )
        return response.json()

Reading Files

import aiofiles

async def read_log_file(filename: str):
    async with aiofiles.open(f'/var/log/{filename}', 'r') as f:
        return await f.read()
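Be careful here: the filename argument will come straight from the model, so validate it before touching the filesystem. A minimal guard against path traversal might look like this (the directory name is illustrative):

```python
from pathlib import Path

LOG_DIR = Path("/var/log")

def safe_log_path(filename: str) -> Path:
    """Resolve the requested name inside LOG_DIR and reject anything
    that escapes it (e.g. '../../etc/passwd')."""
    candidate = (LOG_DIR / filename).resolve()
    if LOG_DIR.resolve() not in candidate.parents:
        raise ValueError(f"Refusing path outside {LOG_DIR}: {filename}")
    return candidate
```

We'll go deeper on input validation in Part 3, but even a quick check like this closes the most obvious hole.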

Testing Best Practices

Before deploying, test your MCP server thoroughly:

1. Unit Test Individual Tools

import pytest

@pytest.mark.asyncio
async def test_get_customer():
    result = await call_tool("get_customer", {"customer_id": "1"})
    data = json.loads(result[0].text)
    assert data["name"] == "Alice Johnson"
    assert data["tier"] == "premium"

@pytest.mark.asyncio
async def test_customer_not_found():
    result = await call_tool("get_customer", {"customer_id": "999"})
    assert "not found" in result[0].text.lower()

2. Test with Claude Desktop First

Before writing complex client code, always test with Claude Desktop. It's the fastest way to:

  • Verify tool descriptions are clear
  • Catch input schema issues
  • See how Claude interprets your tools

3. Add Logging

import logging

logging.basicConfig(level=logging.INFO)  # writes to stderr by default
logger = logging.getLogger(__name__)

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    logger.info(f"Tool called: {name} with args: {arguments}")
    # ... rest of implementation

One caveat for stdio servers: never print() to stdout. Stdout carries the JSON-RPC protocol stream, so stray output will corrupt the connection. Python's logging writes to stderr, which is safe.

Common Pitfalls & Solutions

Issue: "Server not responding"

Solution: Check absolute paths in config. Use which python or where python to find the correct Python path.

Issue: "Tool not being called"

Solution: Improve tool descriptions. Be specific about when to use each tool.

Issue: "Arguments are wrong"

Solution: Make your inputSchema more detailed with better descriptions for each property.
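For example, a stricter schema for get_customer might look like this — the pattern and additionalProperties constraints are illustrative additions, not part of the server we built above:

```python
import re

# A tightened inputSchema: precise descriptions plus format constraints
# help the model supply well-formed arguments on the first attempt.
strict_schema = {
    "type": "object",
    "properties": {
        "customer_id": {
            "type": "string",
            "pattern": "^[0-9]+$",  # numeric string IDs only
            "description": "Numeric customer ID as a string, e.g. '42'",
        },
    },
    "required": ["customer_id"],
    "additionalProperties": False,  # reject unexpected arguments
}

def matches_id_pattern(value: str) -> bool:
    """Mirror the schema's pattern check in plain Python -- the server
    shouldn't trust the client to have validated for it."""
    return re.fullmatch(r"[0-9]+", value) is not None
```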

Issue: "Server crashes on certain inputs"

Solution: Add proper error handling and validation in call_tool().
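One way to make call_tool() crash-proof is to funnel every dispatch through a wrapper that turns exceptions into error payloads. This sketch uses plain dicts and a hypothetical handler table so it stands alone; in the real server you would wrap the TextContent responses the same way:

```python
def safe_call_tool(name: str, arguments: dict, handlers: dict) -> dict:
    """Dispatch to a tool handler, converting missing tools, missing
    arguments, and handler exceptions into error payloads instead of
    crashing the server process."""
    handler = handlers.get(name)
    if handler is None:
        return {"is_error": True, "text": f"Unknown tool: {name}"}
    missing = [k for k in handler["required"] if k not in arguments]
    if missing:
        return {"is_error": True, "text": f"Missing arguments: {missing}"}
    try:
        return {"is_error": False, "text": handler["fn"](arguments)}
    except Exception as exc:  # surface the failure to the model, don't crash
        return {"is_error": True, "text": f"Tool failed: {exc}"}

# Hypothetical handler table mirroring our server's tools:
handlers = {
    "get_customer": {
        "required": ["customer_id"],
        "fn": lambda args: f"looked up {args['customer_id']}",
    }
}
```

Returning the error as text lets the model read what went wrong and often recover on its own (for example, by retrying with corrected arguments).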

What's Next?

You now have a working MCP server! But before deploying to production, there are critical considerations around security, authentication, and performance.

In Part 3, we'll cover:

  • Production security best practices
  • Authentication and authorization
  • Rate limiting and error handling
  • Performance optimization
  • Deployment strategies
  • Monitoring and debugging

Resources

Questions? Drop them in the comments. Share your MCP server projects—I'd love to see what you're building!


Follow for Part 3 where we make this production-ready with security, performance optimization, and deployment strategies.
