You've heard MCP is the 'USB-C for AI.' But what does it take to actually build one? A hands-on walkthrough of creating an MCP server from scratch using Python and FastMCP — with tools your LLM can call.
Build Your First MCP Server in Python — A Weekend Project That Actually Impresses
Everyone talks about MCP. Very few people have actually built a server. Here's how to be one of them — in about an hour.
The Moment Your LLM Gets Hands
You've been playing with ChatGPT or Claude for months. Asking questions, generating code, summarizing documents. But there's always a wall: the model can only work with what you give it. Want it to check today's weather? You copy-paste from a browser. Want it to query your database? You run the query yourself and paste the results.
Now imagine you told your AI: "What's the weather in Chennai right now?" — and it actually went and fetched the answer. Not from training data. Not from a cached response. It called a real API, got real-time data, and gave you the result.
That's what an MCP server lets you do. You build a small Python service that exposes "tools" — functions your LLM can discover and call. The LLM sees what tools are available, decides which one to use, passes the right parameters, and gets the response back. No copy-pasting. No manual plumbing.
And the best part? The server you build works with any MCP-compatible client — Claude Desktop, Cursor, VS Code, or any custom app.
Why Should You Care?
Two reasons. First, MCP is becoming the standard way AI agents interact with external tools, adopted by both Anthropic and OpenAI, and governed by the Linux Foundation. Knowing how to build MCP servers is a genuinely useful skill for any AI-focused role.
Second — and more practically — this is one of the best portfolio projects you can build right now. Most people's AI projects are "I wrapped an API call in a chatbot." Building an MCP server shows you understand protocols, tool design, and how AI agents actually connect to the real world. That's a very different conversation in an interview.
Let Me Back Up — What Are We Building?
We're going to build a Python MCP server that exposes three tools to any LLM client:
- get_weather — Fetches current weather for any city using a free API
- calculate — Evaluates a math expression safely
- random_fact — Returns a random fun fact (because why not)
The server will run locally on your machine. We'll connect it to Claude Desktop so you can actually chat with an AI that uses your tools. The whole thing takes about 50 lines of Python.
What we're building: a Python server that exposes tools to Claude Desktop via MCP.
Okay, Let's Build It — Step by Step
Step 1: Set Up Your Environment
You need Python 3.10 or higher. If you're on a Mac or Linux machine, you probably already have it. Check with python3 --version.
Create a project folder and set up a virtual environment:
```bash
mkdir my-mcp-server && cd my-mcp-server
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
Install the MCP SDK. The recommended way is using pip:
```bash
pip install "mcp[cli]" httpx
```
mcp is the official MCP Python SDK, and httpx is for making HTTP requests to external APIs. We'll use FastMCP, a high-level server class bundled inside the SDK, which gives us a clean decorator-based API for defining tools.
Step 2: Write the Server
Create a file called server.py. Here's the entire thing:
```python
from mcp.server.fastmcp import FastMCP
import httpx
import random

# Create the MCP server
mcp = FastMCP("my-first-server")

# Tool 1: Get weather for a city
@mcp.tool()
async def get_weather(city: str) -> str:
    """Get current weather for a city. Returns temperature and conditions."""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://wttr.in/{city}?format=j1")
        data = response.json()
    current = data["current_condition"][0]
    temp = current["temp_C"]
    desc = current["weatherDesc"][0]["value"]
    return f"{city}: {temp}°C, {desc}"

# Tool 2: Safe calculator
@mcp.tool()
async def calculate(expression: str) -> str:
    """Evaluate a math expression safely. Example: '2 + 3 * 4'"""
    allowed = set("0123456789+-*/.() ")
    if not all(c in allowed for c in expression):
        return "Error: Only numbers and basic operators allowed"
    # The character filter blocks names and quotes, but "**" would still
    # slip through, and something like 2**9999999 could hang the server.
    if "**" in expression or len(expression) > 100:
        return "Error: Expression too long or uses exponentiation"
    try:
        result = eval(expression)  # Input is filtered above; eval is still a blunt tool
        return f"{expression} = {result}"
    except Exception as e:
        return f"Error: {e}"

# Tool 3: Random fun fact
@mcp.tool()
async def random_fact() -> str:
    """Return a random fun fact about technology or science."""
    facts = [
        "The first computer bug was an actual bug — a moth found in a Harvard Mark II computer in 1947.",
        "The first 1GB hard drive, introduced in 1980, weighed about 250 kg and cost $40,000.",
        "About 90% of the world's data was created in the last two years.",
        "The average smartphone today has more computing power than NASA had for the Apollo 11 moon landing.",
        "The first website ever created is still online at info.cern.ch.",
    ]
    return random.choice(facts)

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
That's it. Seriously. Let's break down what's happening.
Step 3: Understand What You Just Wrote
FastMCP is a wrapper from the official SDK that makes defining tools dead simple. You create an instance, decorate your functions with @mcp.tool(), and FastMCP handles all the MCP protocol stuff — JSON-RPC messages, tool discovery, parameter validation.
Each tool function has three parts:
- Type hints — city: str tells the LLM what parameters the tool expects
- A docstring — this is critical. The LLM reads it to decide when to use the tool. Write it like you're explaining the tool to a person.
- A return value — a string that gets sent back to the LLM as the observation
The mcp.run(transport="stdio") at the bottom starts the server using standard input/output — this is how Claude Desktop communicates with local MCP servers. No HTTP, no ports, just stdin/stdout.
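Under the hood this is plain JSON-RPC. When Claude decides to call get_weather, a request roughly like the one below arrives on the server's stdin (the id and argument values here are illustrative; the method name and params shape follow the MCP spec):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Chennai" }
  }
}
```

FastMCP parses the message, validates the arguments against your type hints, runs your function, and writes the JSON-RPC response back to stdout. That's the "protocol stuff" you never had to write.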
The full flow: you ask a question, Claude decides to use your tool, the server calls the API, and the result flows back.
Step 4: Connect to Claude Desktop
Now the fun part — making Claude Desktop actually use your server.
Open Claude Desktop and go to Settings > Developer > Edit Config. This opens a JSON file. Add your server to it:
```json
{
  "mcpServers": {
    "my-first-server": {
      "command": "python3",
      "args": ["/full/path/to/your/server.py"],
      "env": {}
    }
  }
}
```
Replace /full/path/to/your/server.py with the actual path to your file. Save, and restart Claude Desktop.
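One gotcha worth knowing: Claude Desktop runs the command directly rather than through your shell, so a virtual environment you activated in a terminal is not in effect. If the server fails to start because mcp isn't importable, point command at the venv's own interpreter (the paths below are illustrative):

```json
{
  "mcpServers": {
    "my-first-server": {
      "command": "/full/path/to/my-mcp-server/venv/bin/python",
      "args": ["/full/path/to/my-mcp-server/server.py"]
    }
  }
}
```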
You should now see a small hammer icon in the chat input area — that means Claude has discovered your tools. Click it to see the three tools listed.
Step 5: Test It
Type into Claude: "What's the weather in Bangalore right now?"
Claude should recognize that it needs to use the get_weather tool, call your server, and return the live weather data. Try the calculator: "What's 15 * 37 + 42?" Try the fun fact: "Tell me a random tech fact."
Each time, you'll see Claude decide which tool to use, call it through your MCP server, and incorporate the result into its response. You've just given an LLM the ability to do things it couldn't do before.
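You don't need Claude in the loop to smoke-test your tools, either. In the official SDK, @mcp.tool() registers the function and hands it back unchanged, so each tool stays an ordinary Python coroutine you can call yourself. Here's a standalone sketch of that idea, with the article's calculate logic reproduced without the decorator so it runs with no dependencies:

```python
import asyncio

# Same logic as the article's `calculate` tool, minus the @mcp.tool()
# decorator, so this sketch runs on its own.
async def calculate(expression: str) -> str:
    """Evaluate a math expression safely. Example: '2 + 3 * 4'"""
    allowed = set("0123456789+-*/.() ")
    if not all(c in allowed for c in expression):
        return "Error: Only numbers and basic operators allowed"
    try:
        result = eval(expression)  # Charset is filtered above; keep inputs short
        return f"{expression} = {result}"
    except Exception as e:
        return f"Error: {e}"

print(asyncio.run(calculate("15 * 37 + 42")))  # 15 * 37 + 42 = 597
```

In your real server.py the decorated functions can be imported and called the same way, which makes unit testing tools straightforward.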
Making It Better — Ideas for Your Next Steps
The basic server works, but here are a few directions to take it further:
Add more tools. Wrap any API you use regularly — a to-do list API, a movie database, your college's timetable system, a GitHub API for checking your repos. Each tool is just another decorated function.
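For instance, wrapping the GitHub API takes only a few lines. The sketch below is illustrative: it uses the standard library's urllib instead of httpx so it has no extra dependencies, and the function is shown undecorated; in server.py you'd register it with @mcp.tool() (FastMCP accepts plain sync functions as well as async ones):

```python
import json
import urllib.request

# Hypothetical extra tool; in server.py, decorate it with @mcp.tool().
def list_repos(username: str) -> str:
    """List a GitHub user's public repositories by name."""
    url = f"https://api.github.com/users/{username}/repos"
    with urllib.request.urlopen(url) as resp:  # unauthenticated, rate-limited
        repos = json.load(resp)
    names = [repo["name"] for repo in repos]
    return ", ".join(names) if names else f"No public repos for {username}"
```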
Add resources. Tools let the LLM do things. Resources let it read things. You can expose file contents, database records, or API responses as read-only resources that the LLM can pull into its context.
Try the inspector. The MCP SDK ships with a built-in inspector tool. Run mcp dev server.py to get a web UI where you can test your tools interactively without needing Claude Desktop. Super useful for debugging.
Deploy it remotely. Local servers are great for development, but for production you'd want an HTTP-based transport; the Python SDK supports SSE and the newer Streamable HTTP. The FastMCP docs cover this, and it's essentially a one-line change, e.g. mcp.run(transport="sse") instead of stdio.
Mistakes That Bite — Things That Trip Up Beginners
"My tools don't show up in Claude Desktop." The most common issue. Double-check: is the path in the config JSON absolute? Is the virtual environment activated? Did you restart Claude Desktop after editing the config? The server needs to start successfully for tools to appear.
"The docstring doesn't matter, right?" Wrong. The LLM uses the docstring to decide whether and when to use your tool. A vague docstring like "does something with weather" will confuse the model. Be specific: "Get current weather for a city. Returns temperature in Celsius and conditions."
"I'll just expose my entire database as a tool." Resist this urge. Each tool should do one specific thing. A tool called query_anything that accepts raw SQL is both a security nightmare and confusing for the LLM. Instead, create focused tools like get_user_by_email or list_recent_orders. Smaller, focused tools get used correctly more often.
Now Go Break Something
You've just built something that most developers only read about. An MCP server — the standard protocol that major AI labs are converging on — running on your machine, giving an LLM real-world capabilities.
Here's what to explore next:
- The official MCP docs at modelcontextprotocol.io have a quickstart guide and full API reference
- The free Hugging Face MCP course walks through building servers and connecting them to agents, with hands-on exercises
- FastMCP's GitHub repo has examples for advanced patterns like authentication, streaming, and resource management
- Search for "MCP server examples" on GitHub — the community has built servers for Notion, Kubernetes, Spotify, and hundreds of other services. Reading other people's servers is one of the fastest ways to learn
Remember staring at your LLM, wishing it could just check the weather or run a calculation instead of making you do it? Fifty lines of Python later, it can. That's what MCP is about — not replacing what LLMs do well, but giving them the tools to do what they couldn't. Your server is small. The pattern scales to anything.
Author: Shibin

