Why this article?
In this article, I explain MCP from scratch using practical examples. I also build a real MCP server that fetches weather data and show how to integrate it with Cursor. After reading this article, you will be able to build your own MCP server.
Many beginners hear “MCP server” and think it is something super advanced.
It is actually a very practical idea: it lets AI assistants use your tools in a safe, structured way.
Think of it like this:
- Your AI assistant is a smart person.
- Your app/tool (weather API, database, calculator, etc.) is a machine in your workshop.
- MCP is the standard plug/socket that lets the smart person use the machine correctly.
What is an MCP Server?
MCP stands for Model Context Protocol: an open standard that defines how AI clients talk to external tools and data sources. It is what lets us build our own MCP servers.
An MCP server is a small program that exposes tools (functions) to an AI client (like Cursor).
In simple words:
- You write normal functions.
- Mark them as tools.
- Cursor can call those tools when needed.
Why do we need MCP?
Without MCP, the assistant can only guess from text.
With MCP, it can actually do useful actions.
Daily life example:
- Without MCP: asking a friend to “estimate” your room temperature.
- With MCP: giving your friend a real thermometer.
So MCP gives:
- Real data instead of guesses
- Structured calls (clear input/output)
- Reusable integration across tools and apps
Importance of MCP
1) Standard way to connect AI and tools
You do not build custom glue code for every tool/client combination.
2) Safer and clearer than random scripts
Tools are explicit: name, input, output.
3) Better developer productivity
Your assistant can fetch live info, run internal utilities, and help faster.
4) Easy to scale
Start with one tool (weather), later add more (news, calendar, tasks, DB lookups).
How MCP server works (simple flow)
- The client (Cursor in our case) starts your MCP server.
- Server announces available tools.
- User asks something (e.g., weather in Paris).
- Cursor calls the matching tool with structured arguments.
- Tool runs, gets data from API, returns a result.
- Cursor shows the answer.
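Under the hood, each tool call in this flow is a JSON-RPC message. As a rough sketch in Python, the request a client sends when invoking a tool looks like this (the field names follow the MCP `tools/call` method; the id and arguments are invented for illustration):

```python
import json

# Illustrative, simplified JSON-RPC request a client like Cursor sends
# when it invokes a tool on the server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_current_weather",
        "arguments": {"location": "Paris"},
    },
}
print(json.dumps(request, indent=2))
```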
Ways to Call an MCP Server
There are three main ways an MCP server is typically invoked:
⚙️ 1. stdio (Our Case ✅)
👉 Most common for local tools
How it Works
Client starts process → communicates via stdin/stdout
✅ Characteristics:
- No port required
- Fast (local IPC)
🌐 2. HTTP (Alternative)
👉 MCP server runs like an API server
Client → HTTP → MCP Server
✅ Characteristics:
- Needs port
- Remote access possible
- Easier for web apps
🔌 3. WebSocket (Realtime)
👉 For streaming / realtime interaction
Client ↔ WebSocket ↔ MCP Server
✅ Characteristics:
- Persistent connection
- Supports streaming responses
- More complex setup
Below is the full code example (a weather MCP server), which we can integrate with Cursor.
"""
MCP server: current weather via Open-Meteo (geocoding + forecast APIs, no API key).
Run with: python weather_server.py
Connect Cursor using stdio (see project instructions for mcp.json).
"""
from __future__ import annotations
import httpx
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("Weather")
# WMO Weather interpretation codes (Open-Meteo)
_WMO: dict[int, str] = {
0: "Clear sky",
1: "Mainly clear",
2: "Partly cloudy",
3: "Overcast",
45: "Fog",
48: "Depositing rime fog",
51: "Light drizzle",
53: "Moderate drizzle",
55: "Dense drizzle",
56: "Light freezing drizzle",
57: "Dense freezing drizzle",
61: "Slight rain",
63: "Moderate rain",
65: "Heavy rain",
66: "Light freezing rain",
67: "Heavy freezing rain",
71: "Slight snow fall",
73: "Moderate snow fall",
75: "Heavy snow fall",
77: "Snow grains",
80: "Slight rain showers",
81: "Moderate rain showers",
82: "Violent rain showers",
85: "Slight snow showers",
86: "Heavy snow showers",
95: "Thunderstorm",
96: "Thunderstorm with slight hail",
99: "Thunderstorm with heavy hail",
}
def _describe_code(code: int | None) -> str:
if code is None:
return "Unknown"
return _WMO.get(int(code), f"Code {code}")
async def _geocode(place: str) -> tuple[float, float, str, str | None]:
place = place.strip()
if not place:
raise ValueError("Location is empty.")
async with httpx.AsyncClient() as client:
geo = await client.get(
"https://geocoding-api.open-meteo.com/v1/search",
params={"name": place, "count": 1},
timeout=20.0,
)
geo.raise_for_status()
body = geo.json()
results = body.get("results") or []
if not results:
raise ValueError(f'No coordinates found for "{place}". Try another spelling.')
r0 = results[0]
lat = float(r0["latitude"])
lon = float(r0["longitude"])
name = r0.get("name", place)
admin = r0.get("admin1")
country = r0.get("country_code") or r0.get("country")
region_parts = [p for p in (admin, country) if p]
region = ", ".join(region_parts) if region_parts else None
label = f"{name}" + (f" ({region})" if region else "")
return lat, lon, label, region
@mcp.tool()
async def get_current_weather(location: str) -> str:
"""Current weather for a place name or city (worldwide). Examples: Tokyo, Paris France, 90210."""
try:
lat, lon, label, _ = await _geocode(location)
except ValueError as e:
return str(e)
except httpx.HTTPError as e:
return f"Geocoding request failed: {e}"
params = {
"latitude": lat,
"longitude": lon,
"current": ",".join(
[
"temperature_2m",
"relative_humidity_2m",
"apparent_temperature",
"precipitation",
"weather_code",
"wind_speed_10m",
"wind_direction_10m",
]
),
"wind_speed_unit": "ms",
"timezone": "auto",
}
try:
async with httpx.AsyncClient() as client:
resp = await client.get(
"https://api.open-meteo.com/v1/forecast",
params=params,
timeout=20.0,
)
resp.raise_for_status()
data = resp.json()
except httpx.HTTPError as e:
return f"Weather request failed: {e}"
cur = data.get("current") or {}
temp = cur.get("temperature_2m")
feels = cur.get("apparent_temperature")
rh = cur.get("relative_humidity_2m")
precip = cur.get("precipitation")
code = cur.get("weather_code")
wspd = cur.get("wind_speed_10m")
wdir = cur.get("wind_direction_10m")
summary = _describe_code(code)
lines = [
f"**{label}**",
"",
f"- Conditions: {summary}",
f"- Temperature: {temp} °C (feels like {feels} °C)" if temp is not None else "- Temperature: n/a",
]
if rh is not None:
lines.append(f"- Relative humidity: {rh} %")
if precip is not None:
lines.append(f"- Precipitation (current interval): {precip} mm")
if wspd is not None:
wind_line = f"- Wind: {wspd} m/s"
if wdir is not None:
wind_line += f" from {wdir}°"
lines.append(wind_line)
lines.extend(
[
"",
"_Data: [Open-Meteo](https://open-meteo.com/) (no API key)._",
]
)
return "\n".join(lines)
if __name__ == "__main__":
mcp.run()
Now let’s walk through the code.
1) Create MCP app
mcp = FastMCP("Weather")
This creates the server object and names it "Weather".
2) Keep weather code meanings
_WMO dictionary maps numeric weather codes to text like "Clear sky".
3) Helper to convert code to text
_describe_code(code) returns readable weather condition.
4) Convert place name to coordinates
_geocode(place) calls Open-Meteo geocoding API.
Example: "Lahore" -> latitude/longitude.
Why needed?
Weather API needs coordinates, not just city name.
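For intuition, a geocoding response has roughly this shape (abridged; the coordinates here are approximate and the extra fields of the live API are omitted, but the key names match what `_geocode` reads):

```python
# Abridged, illustrative geocoding response for "Lahore"
# (the live API returns more fields; coordinates are approximate).
body = {
    "results": [
        {
            "name": "Lahore",
            "latitude": 31.55,
            "longitude": 74.34,
            "admin1": "Punjab",
            "country_code": "PK",
        }
    ]
}
r0 = body["results"][0]
lat, lon = float(r0["latitude"]), float(r0["longitude"])
label = f'{r0["name"]} ({r0["admin1"]}, {r0["country_code"]})'
print(label, lat, lon)  # → Lahore (Punjab, PK) 31.55 74.34
```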
5) Expose a tool with decorator
@mcp.tool()
This makes get_current_weather(location) available to Cursor.
6) Fetch weather data
Inside get_current_weather:
First geocode the location.
Then call forecast endpoint with requested current fields:
- temperature
- humidity
- precipitation
- wind speed/direction
- weather code
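The `current` block of the forecast response then has roughly this shape (values invented for illustration; the keys match the fields requested above), and the WMO code is mapped to text the same way `_describe_code` does:

```python
# Invented example values; key names match the "current" fields we request.
cur = {
    "temperature_2m": 21.4,
    "apparent_temperature": 20.1,
    "relative_humidity_2m": 55,
    "precipitation": 0.0,
    "weather_code": 2,
    "wind_speed_10m": 3.6,
    "wind_direction_10m": 240,
}
wmo = {0: "Clear sky", 1: "Mainly clear", 2: "Partly cloudy", 3: "Overcast"}
summary = wmo.get(cur["weather_code"], f'Code {cur["weather_code"]}')
print(summary)  # → Partly cloudy
```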
7) Format response
Builds a markdown-style response list and returns one final string.
If errors happen (bad location/network), returns friendly messages.
8) Start server when run directly
`if __name__ == "__main__": mcp.run()`
This starts the MCP server loop and waits for a client to connect over stdio.
How to integrate this with Cursor
Steps:
- Open Cursor's MCP settings.
- Click "Add Custom MCP".
- This opens the mcp.json file.
- Paste the following into it:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["e:\\Python Projects\\weatherMCP\\weather_server.py"]
    }
  }
}
```

Replace the path in "args" with the path to weather_server.py on your machine.
- Restart Cursor, and the weather MCP server will appear under Tools & MCPs.
Now, if you ask for the weather in Cursor, it will call the weather tool to provide the answer.