MCP (Model Context Protocol) is the standard Anthropic built to let AI agents use tools. 17,000+ MCP servers exist, but most are API wrappers. You can build one in 30 minutes that actually adds value.
This is a complete walkthrough based on shipping FinanceKit (17 tools) and SiteAudit (11 tools) in 2 weeks solo.
The stack
- FastMCP 3.2 — Python framework for MCP servers. Handles stdio + HTTP transport, tool registration, and schema generation automatically.
- uv — Fast Python package manager. Way better than pip for this.
- Pydantic — For parameter validation.
Step 1: Init the project (2 min)
```shell
uv init my-mcp
cd my-mcp
uv add fastmcp
```
Your pyproject.toml:
```toml
[project]
name = "my-mcp"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["fastmcp>=3.2.0"]

[project.scripts]
my-mcp = "my_mcp.server:main"
```
Step 2: Write your first tool (5 min)
src/my_mcp/server.py:
"""My first MCP server — weather tool."""
import os
from typing import Annotated
import requests
from fastmcp import FastMCP
from pydantic import Field
mcp = FastMCP(
name="my-weather",
instructions="Provides current weather for any city.",
version="0.1.0",
)
@mcp.tool(
tags={"weather"},
annotations={"readOnlyHint": True},
)
def get_weather(
city: Annotated[str, Field(description="City name (e.g., 'Madrid')")],
) -> dict:
"""Get current weather for a city."""
resp = requests.get(
f"https://wttr.in/{city}?format=j1",
timeout=10,
)
data = resp.json()
current = data["current_condition"][0]
return {
"city": city,
"temp_c": current["temp_C"],
"feels_like_c": current["FeelsLikeC"],
"description": current["weatherDesc"][0]["value"],
"humidity_pct": current["humidity"],
"wind_kph": current["windspeedKmph"],
}
def main():
"""Entry point — auto stdio/HTTP based on PORT env."""
if os.environ.get("PORT"):
port = int(os.environ["PORT"])
mcp.run(transport="http", host="0.0.0.0", port=port)
else:
mcp.run() # stdio default
if __name__ == "__main__":
main()
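Note that wttr.in's `j1` format returns every value as a string inside a `current_condition` array, which is why the tool indexes `[0]` and leaves the values unconverted. Here is an offline sketch of that extraction against a canned response (the sample values are made up):

```python
# Canned slice of a wttr.in ?format=j1 response (values are illustrative)
sample = {
    "current_condition": [{
        "temp_C": "18",
        "FeelsLikeC": "17",
        "humidity": "65",
        "windspeedKmph": "12",
        "weatherDesc": [{"value": "Partly cloudy"}],
    }]
}

# Same extraction as get_weather, minus the HTTP call
current = sample["current_condition"][0]
summary = {
    "temp_c": current["temp_C"],
    "feels_like_c": current["FeelsLikeC"],
    "description": current["weatherDesc"][0]["value"],
    "humidity_pct": current["humidity"],
    "wind_kph": current["windspeedKmph"],
}
print(summary["description"])  # → Partly cloudy
```

Keeping the extraction this simple also means the tool breaks loudly (with a `KeyError`) if the upstream format shifts, which is easier to debug than silently wrong values.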
Step 3: Test locally (3 min)
```shell
uv run python -c "
import asyncio
from my_mcp.server import mcp

tools = asyncio.run(mcp.list_tools())
print(f'Tools: {len(tools)}')
for t in tools:
    print(f'  {t.name}: {t.description[:60]}')
"
```
Expected output:

```
Tools: 1
  get_weather: Get current weather for a city.
```
Step 4: Connect to Claude Desktop (2 min)
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (Mac) or equivalent:
```json
{
  "mcpServers": {
    "my-weather": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/my-mcp", "run", "my-mcp"]
    }
  }
}
```
Restart Claude Desktop. Ask: "What's the weather in Madrid?" — Claude will call your tool.
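If you ever script this step for your users, merge into the existing file rather than overwriting it, since it may already list other servers. A minimal sketch (the helper name `add_server` is mine):

```python
import json
from pathlib import Path


def add_server(config_path: Path, name: str, command: str, args: list[str]) -> dict:
    """Merge one MCP server entry into claude_desktop_config.json,
    preserving any servers that are already registered."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})[name] = {"command": command, "args": args}
    config_path.write_text(json.dumps(config, indent=2))
    return config
```

`setdefault` is what keeps the other entries intact: it only creates `mcpServers` when the key is missing, then overwrites just your own server's entry.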
Step 5: Make it BETTER than an API wrapper (10 min)
The hack that separates monetizable MCPs from throwaways:
Don't return raw data. Return verdicts.
Bad (what most MCPs do):

```json
{"temp_c": "18", "humidity": "65", "condition": "Cloudy"}
```

Good (the verdict pattern FinanceKit's technical_analysis tool uses, applied here to weather):

```json
{
  "overall_verdict": "PLEASANT",
  "recommendation": "Good weather for outdoor activities",
  "alerts": [],
  "raw_data": {"temp_c": 18, ...}
}
```
LLMs consume verdicts better than numbers. Users feel the value immediately.
Here's the pattern applied to weather:
```python
@mcp.tool()
def get_weather_smart(city: str) -> dict:
    """Weather with activity recommendations."""
    # Reuses the tool from Step 2. If your FastMCP version wraps tools so
    # they aren't directly callable, call the underlying function instead
    # (e.g., via get_weather.fn(city)).
    raw = get_weather(city)
    temp = int(raw["temp_c"])
    wind = int(raw["wind_kph"])

    if temp < 5 or temp > 35:
        verdict = "EXTREME"
        recommendation = "Stay indoors"
    elif wind > 40:
        verdict = "WINDY"
        recommendation = "Outdoor activities not recommended"
    elif 18 <= temp <= 25:
        verdict = "PERFECT"
        recommendation = "Ideal for any outdoor activity"
    else:
        verdict = "OK"
        recommendation = "Fine for outdoor activities with appropriate clothing"

    return {
        "city": city,
        "verdict": verdict,
        "recommendation": recommendation,
        "conditions": raw,
    }
```
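The thresholds are easiest to sanity-check if you factor them into a pure function with no network call (the helper name `classify_weather` is mine; the thresholds match the tool above):

```python
def classify_weather(temp_c: int, wind_kph: int) -> tuple[str, str]:
    """Map raw conditions to (verdict, recommendation) using the same thresholds."""
    if temp_c < 5 or temp_c > 35:
        return "EXTREME", "Stay indoors"
    if wind_kph > 40:
        return "WINDY", "Outdoor activities not recommended"
    if 18 <= temp_c <= 25:
        return "PERFECT", "Ideal for any outdoor activity"
    return "OK", "Fine for outdoor activities with appropriate clothing"


# Boundary checks: temperature extremes win over wind, wind wins over comfort
assert classify_weather(40, 10)[0] == "EXTREME"
assert classify_weather(30, 50)[0] == "WINDY"
assert classify_weather(20, 10)[0] == "PERFECT"
assert classify_weather(12, 10)[0] == "OK"
```

Splitting judgment logic out of the tool body like this also keeps the MCP tool itself a thin wrapper: fetch, classify, return.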
Step 6: Publish to PyPI (5 min)
```shell
uv build
uv publish  # needs a PyPI token
```
Now anyone can install: pip install my-mcp.
Step 7: Distribute (3 min)
Submit via GitHub issues or PRs to:
- punkpeye/awesome-mcp-servers (84K stars) — one server per PR
- mcp.so — GitHub issue
- modelcontextprotocol/servers — official community servers
- Smithery — auto-discovers from GitHub if you add smithery.yaml
- Glama — auto-indexes, but requires glama.json for ownership
Also publish to the Official MCP Registry (feeds PulseMCP, Smithery, Anthropic):
```shell
brew install mcp-publisher
mcp-publisher init
# edit server.json
mcp-publisher login github
mcp-publisher publish
```
Step 8: Monetize (optional)
Three paths:
- Subscription — MCPize.com (85% rev share, Stripe Connect included)
- Pay-per-call — x402 protocol (Coinbase SDK shipped April 2026)
- Sponsors — GitHub Sponsors via a .github/FUNDING.yml file
Tool description quality = discoverability
Glama and Smithery rank MCPs by Tool Description Quality Score (TDQS). Your worst-scoring tool drags down the whole score (it carries 40% of the weight). So:
- Every tool needs a clear description arg (not just the docstring)
- Parameters need Field(description=...)
- Use annotations={"readOnlyHint": True} for non-mutating tools
- Return structured data with well-named keys, not flat strings
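You can lint your own tool specs against this checklist before submitting. The following is an illustrative heuristic of my own, not Glama's or Smithery's actual scoring algorithm:

```python
def lint_tool_spec(tool: dict) -> list[str]:
    """Flag common discoverability problems in an MCP tool spec.
    (Heuristic sketch only; not the real TDQS algorithm.)"""
    problems = []
    if len(tool.get("description", "")) < 20:
        problems.append("description missing or too short")
    props = tool.get("inputSchema", {}).get("properties", {})
    for name, schema in props.items():
        if not schema.get("description"):
            problems.append(f"parameter '{name}' lacks a description")
    if "readOnlyHint" not in tool.get("annotations", {}):
        problems.append("no readOnlyHint annotation")
    return problems


spec = {
    "description": "Get current weather for a city.",
    "inputSchema": {"properties": {"city": {"type": "string"}}},
    "annotations": {"readOnlyHint": True},
}
print(lint_tool_spec(spec))  # → ["parameter 'city' lacks a description"]
```

Run it over every tool your server registers; remember the worst one sets the ceiling for your whole score.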
What took me longer than expected
- Distribution is 30% of the work. Building took 10 days. Distributing to 27 directories + first Reddit/Twitter posts took another 3 days.
- Smithery scoring is brutal. A score below 60 means you're invisible in their search; iterate on descriptions until you hit an AAA rating.
- Monetization UX. Users don't want to sign up for a new platform. MCPize one-click install is the highest converter.
Next steps
- Look at FinanceKit and SiteAudit as reference implementations
- Read the Coinbase x402 examples for payment integration
- Join the Anthropic Discord #mcp channel
Links
- FastMCP: https://gofastmcp.com
- MCP spec: https://modelcontextprotocol.io
- My MCPs (reference):
  - FinanceKit — 17 financial tools
  - SiteAudit — 11 web audit tools
  - Try them instantly on MCPize — free tier
If you build something with this, drop the link in the comments. Happy to help debug or boost it on Twitter.