I Built 2 Production MCP Servers — Here's What I Learned
Most MCP tutorials stop at "hello world." Here's what happens when you build the real thing.
I've built two production MCP (Model Context Protocol) servers over the past few months:
- OathScore — 8 tools for AI trading agents. Real-time exchange status, volatility data, economic events, API quality ratings. Live at api.oathscore.dev.
- Curistat — 10 tools for volatility forecasting. Regime detection, directional signals, session planning. Launching March 2026.
Both are built with Python, FastMCP, and FastAPI. Both are deployed on Railway. Both are listed in MCP directories. Here's everything I learned that the tutorials don't tell you.
1. Tool Design Matters More Than Code
The hardest part of building an MCP server isn't the code — it's deciding what the tools should be.
AI agents read your tool descriptions to decide when to call them. If the description is vague, the agent won't use it. If you make too many tools, the agent gets confused. Too few, and it can't do anything useful.
What worked: One "dashboard" tool that returns everything in a single call. OathScore's get_now combines exchange status, volatility, events, and data health into one response. An agent asks "what's happening in markets?" and gets a complete answer without chaining 4 separate calls.
```python
@mcp.tool()
def get_now() -> str:
    """Get current world state: exchange status, volatility
    (VIX/VVIX/SKEW/term structure), economic event countdowns,
    and data health. One call replaces 4-6 separate API calls."""
    data = _get("/now")  # _get wraps the shared httpx client
    return json.dumps(data, indent=2)
```
The rule: If an agent would always call tools A, B, and C together, combine them into one tool. Agents are better at using fewer, richer tools than many granular ones.
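A minimal sketch of that consolidation, with hypothetical `fetch_*` helpers standing in for the real upstream data sources:

```python
import json

# Hypothetical fetchers standing in for the real upstream calls.
def fetch_exchanges() -> dict:
    return {"nyse": "open", "cme": "open"}

def fetch_volatility() -> dict:
    return {"vix": 14.2, "vvix": 82.0}

def fetch_events() -> list:
    return [{"event": "FOMC", "in_hours": 6}]

def get_now() -> str:
    """One dashboard call instead of three granular tools."""
    return json.dumps(
        {
            "exchanges": fetch_exchanges(),
            "volatility": fetch_volatility(),
            "events": fetch_events(),
        },
        indent=2,
    )
```

The agent makes one call and gets a complete, parseable picture, instead of deciding between three tools and chaining them.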
2. Docstrings Are Your API Contract
The tool's docstring is the only thing an AI agent sees before deciding to call it. This is your API documentation, marketing copy, and user manual in one string.
Bad:
```python
def get_data() -> str:
    """Get data."""
```
Good:
```python
def get_score(api_name: str) -> str:
    """Get OathScore quality rating for a specific API.
    Available APIs: curistat, alphavantage, polygon, finnhub,
    twelvedata, eodhd, fmp. Returns composite score (0-100),
    letter grade, and component breakdown."""
```
The good version tells the agent:
- What APIs are valid inputs
- What the response looks like
- What the score means
I rewrote every docstring 3-4 times before agents used them reliably.
3. Return JSON Strings, Not Objects
MCP tools return strings. Always return json.dumps(data, indent=2) — not raw dicts, not plain text, not markdown.
Why: Agents can parse JSON reliably. They struggle with unstructured text. And indent=2 makes it readable when the agent shows it to the user.
```python
# Do this
return json.dumps({"score": 85, "grade": "B+"}, indent=2)

# Not this
return "The score is 85 (B+)"
```
4. HTTP Clients Need Timeouts
This sounds obvious, but I shipped without one once. An upstream API hung for 30 seconds, which froze the MCP tool, which froze the agent, which froze the user's Claude Desktop.
```python
_client = httpx.Client(timeout=15)
```
15 seconds is generous. Most API calls finish in 1-2 seconds. If it takes longer than 15, something is wrong and the agent should get an error, not hang.
5. Deployment: Docker + Railway Is the Fastest Path
My stack for both servers:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Shell-form CMD so ${PORT} expands at runtime
CMD uvicorn src.main:app --host 0.0.0.0 --port ${PORT:-8000}
```
Railway auto-deploys on git push. $5/month for a hobby project. Custom domain with Cloudflare DNS. The whole deploy pipeline took an afternoon to set up.
For MCP specifically, the server needs to support both:
- stdio transport — for Claude Desktop (local)
- HTTP transport — for remote agents
FastMCP handles both out of the box.
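A sketch of how a single entry point might select between them. The commented `FastMCP` calls assume FastMCP 2.x transport names ("stdio", "http"), so check your version's docs before copying:

```python
import sys

def pick_transport(argv: list[str]) -> str:
    """stdio for local Claude Desktop, HTTP for remote agents."""
    return "http" if "--http" in argv else "stdio"

# Hypothetical entry point (requires fastmcp):
# from fastmcp import FastMCP
# mcp = FastMCP("oathscore")
# if __name__ == "__main__":
#     if pick_transport(sys.argv) == "http":
#         mcp.run(transport="http", host="0.0.0.0", port=8000)
#     else:
#         mcp.run()  # stdio is the default
```

One binary, two transports: Claude Desktop launches it directly over stdio, while Railway runs it with `--http` behind the public domain.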
6. Get Listed in Directories Early
MCP directories are how agents discover your server. I submitted to:
- Glama (glama.ai/mcp/servers) — approved, biggest directory
- awesome-mcp-servers (GitHub) — PR submitted
- mcp.so — submitted
- mcpservers.org — submitted
Glama was the most impactful. They verify your server actually works (Docker build, tool inspection) which gives you credibility.
Submit early, even before your server is "done." The review process takes days, and you want to be listed when buyers are searching.
7. Billing Is a Separate Problem (and Worth Solving)
OathScore has three monetization layers:
- Free tier — rate-limited (10 calls/day for /now, 5/day for /score)
- Stripe API keys — $9/month founding member pricing
- x402 micropayments — pay-per-request with USDC (for agents with wallets)
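The free tier above can be enforced with something as small as an in-memory daily counter. This is a sketch only; a real deployment would back it with Redis or the database so limits survive restarts:

```python
from collections import defaultdict
from datetime import date

# Free-tier limits from above; endpoints not listed get no free quota.
DAILY_LIMITS = {"/now": 10, "/score": 5}
_usage: dict[tuple, int] = defaultdict(int)

def allow_request(api_key: str, endpoint: str) -> bool:
    """Return True if this key still has free-tier quota today."""
    key = (api_key, endpoint, date.today())
    if _usage[key] >= DAILY_LIMITS.get(endpoint, 0):
        return False
    _usage[key] += 1
    return True
```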
The x402 layer is the interesting one. As AI agents get their own wallets, they'll pay for data directly without human intervention. OathScore is ready for that future.
If you're building an MCP server for a client, billing is where the real engineering challenge is — not the MCP tools themselves.
8. What I'd Do Differently
Start with 3 tools, not 8. I over-built OathScore's first version. Half the tools were decomposed views of the same data (get_exchanges, get_volatility, get_events are all subsets of get_now). Start small, add tools when agents actually need them.
Write the docstrings first. Before any code, write the tool names and descriptions. Share them with someone (or an AI) and ask "would you know when to use each of these?" If not, redesign.
Test with Claude Desktop from day one. Don't wait until the server is "ready." Deploy a single working tool and test the full loop: Claude Desktop config → tool discovery → tool call → response. Finding integration bugs early saves days.
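The Claude Desktop side of that loop is a stdio entry in `claude_desktop_config.json`. The command and module path below are placeholders for your server's actual entry point:

```json
{
  "mcpServers": {
    "oathscore": {
      "command": "python",
      "args": ["-m", "src.server"]
    }
  }
}
```

Restart Claude Desktop after editing the config, then confirm the tool shows up before writing any more code.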
The Template
I extracted a starter template from OathScore: mcp-server-template. It has:
- 5 example tools showing common patterns
- 2 standalone examples (weather API, SQLite database)
- Dockerfile for production deployment
- Claude Desktop integration config
- pyproject.toml for pip installation
If you're building your first MCP server, start there instead of from scratch.
The Source
OathScore is open source: github.com/moxiespirit/oathscore
Hit the live API right now: api.oathscore.dev/now
I build production MCP servers for companies and developers who need AI agents to access their APIs and data. If you need one built, find me on Fiverr or GitHub.