DEV Community

vdalhambra

I built an MCP server in one weekend — here's what FastMCP made easy (and what it didn't)

Last weekend I built two MCP servers from scratch — FinanceKit (17 financial data tools) and SiteAudit (11 website analysis tools). Both are live, free, and MIT licensed. Here's an honest account of what FastMCP 3.2 made easy, what it didn't, and the architecture pattern I landed on.


Why FastMCP instead of the raw SDK

The official MCP Python SDK works, but it's verbose. Defining a tool requires a schema object, a handler function, and registration boilerplate — three separate things for what should be one.

FastMCP collapses all of that into a decorator. Here's what a tool looks like:

from fastmcp import FastMCP

mcp = FastMCP("financekit")

@mcp.tool()
def get_stock_quote(ticker: str) -> dict:
    """Get real-time stock quote for a given ticker symbol."""
    price, change = 0.0, 0.0  # placeholder -- fetch these from your data source
    return {"ticker": ticker, "price": price, "change_pct": change}

That's it. FastMCP reads your type hints, generates the JSON schema, registers the tool, and handles the transport. The docstring becomes the tool description that Claude reads when deciding which tool to call.

For 17+ tools across two servers, this matters. The alternative — maintaining separate schema definitions — would have added hours of tedious work and introduced drift between schemas and implementations.


The 3 things that worked great

1. Auto-discovery of tools via decorators

FastMCP scans your module for @mcp.tool() decorated functions and registers them automatically. Add a function, it becomes a tool. Delete a function, it disappears from the server. No manifest file to update, no registration list to maintain.

This made iterating on FinanceKit's tool set fast. I added get_options_chain at 11pm and it was callable in Claude within 30 seconds of saving the file.

2. Transport detection

FastMCP handles stdio, SSE, and HTTP transports automatically based on how the server is invoked. When running locally via claude mcp add, it uses stdio. When deployed to a server, you switch to HTTP with a single argument to run():

if __name__ == "__main__":
    mcp.run(transport="streamable-http", host="0.0.0.0", port=8000)

No changes to your tool implementations. The same code runs in both contexts. This turned out to be important for the mcpize.com hosted deployment — I didn't have to maintain two versions of each server.
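If you want a single entrypoint that serves both contexts, one option is to pick the transport from the environment. A minimal sketch — MCP_TRANSPORT and PORT are hypothetical variable names I'm assuming here, not FastMCP conventions:

```python
import os

def pick_transport(env) -> dict:
    """Map environment variables to mcp.run() keyword arguments.

    MCP_TRANSPORT and PORT are made-up names for this sketch; use
    whatever fits your deployment.
    """
    if env.get("MCP_TRANSPORT") == "http":
        return {
            "transport": "streamable-http",
            "host": "0.0.0.0",
            "port": int(env.get("PORT", "8000")),
        }
    # Default: stdio, which is what a local `claude mcp add` install expects
    return {"transport": "stdio"}

# if __name__ == "__main__":
#     mcp.run(**pick_transport(os.environ))
```

Locally nothing is set and you get stdio; in the container you export MCP_TRANSPORT=http and the same image serves HTTP.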

3. Type-safe tool signatures

FastMCP validates inputs against your type hints before your function is ever called. If Claude sends a malformed request (wrong types, missing required params), FastMCP rejects it with a clear error. This saved me from writing defensive validation in every tool.

For complex inputs I used Pydantic models:

from pydantic import BaseModel
from typing import List

class PortfolioInput(BaseModel):
    tickers: List[str]
    weights: List[float]
    lookback_days: int = 30

@mcp.tool()
def calculate_portfolio_risk(portfolio: PortfolioInput) -> dict:
    """Calculate VaR, Sharpe, Sortino, Beta, and correlation matrix."""
    # FastMCP handles nested Pydantic models correctly
    ...
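Because the input is plain Pydantic, you can exercise the validation in a normal unit test without starting a server. A minimal sketch reusing the model above:

```python
from typing import List
from pydantic import BaseModel, ValidationError

class PortfolioInput(BaseModel):
    tickers: List[str]
    weights: List[float]
    lookback_days: int = 30

# Valid payload: the default lookback applies
p = PortfolioInput(tickers=["AAPL", "MSFT"], weights=[0.6, 0.4])
assert p.lookback_days == 30

# Malformed payload: rejected before your tool code would ever run
try:
    PortfolioInput(tickers=["AAPL"], weights=["not-a-number"])
    raised = False
except ValidationError:
    raised = True
assert raised
```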

The 2 things that were annoying

1. Deployment documentation is thin

The FastMCP docs are great for local development but thin on production deployment. I spent more time than I should have figuring out the right Dockerfile structure and how to handle the streamable-http transport in a containerized environment.

The working pattern I landed on:

FROM python:3.11-slim
WORKDIR /app
COPY pyproject.toml .
RUN pip install uv  # uvx ships as part of uv
COPY src/ ./src/
CMD ["uvx", "--from", ".", "financekit", "--transport", "streamable-http", "--port", "8000"]

This wasn't obvious from the docs. uvx handles the venv isolation cleanly, but getting the entrypoint right required reading the FastMCP source.
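For completeness, building and running that image locally is the usual two commands (the image tag is arbitrary):

```shell
docker build -t financekit-mcp .
docker run --rm -p 8000:8000 financekit-mcp
```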

2. Versioning and changelogs are your problem

FastMCP has no opinions about versioning. Your pyproject.toml version is what gets published to PyPI. That's correct behavior — but it means when you update a tool's signature, you need to manually track what changed, bump the version, and communicate breaking changes.

For MCP servers distributed via uvx, callers always get the latest version unless they pin it. I ended up adding a get_server_info tool that returns the current version:

@mcp.tool()
def get_server_info() -> dict:
    """Returns server version and available tool list."""
    return {
        "version": "0.3.1",
        "tools": [t.name for t in mcp.list_tools()],
        "description": "FinanceKit MCP — real-time financial data"
    }

Not elegant, but it works.
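On the caller side, pinning works with the standard pip-style version specifier in --from (the version number here is just the one from my get_server_info example — substitute whatever you've published):

```shell
# Pin FinanceKit to a known release instead of always pulling latest
claude mcp add financekit -- uvx --from 'financekit-mcp==0.3.1' financekit
```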


The architecture pattern

Both FinanceKit and SiteAudit follow the same structure:

financekit-mcp/
├── src/
│   └── financekit_mcp/
│       ├── __init__.py
│       ├── server.py        ← FastMCP instance + tool definitions
│       ├── tools/
│       │   ├── quotes.py    ← get_stock_quote, get_crypto_price
│       │   ├── technical.py ← get_technical_analysis (RSI/MACD/BB/ADX)
│       │   ├── portfolio.py ← calculate_portfolio_risk
│       │   └── options.py   ← get_options_chain
│       └── utils/
│           └── formatters.py
├── pyproject.toml
└── README.md

server.py imports and registers tools from each module:

from fastmcp import FastMCP
from .tools import quotes, technical, portfolio, options

mcp = FastMCP("financekit")
mcp.include_module(quotes)
mcp.include_module(technical)
mcp.include_module(portfolio)
mcp.include_module(options)

This keeps tool implementations isolated and testable. Each module is just functions — no FastMCP dependencies in the tool files themselves, which made unit testing straightforward.
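A tool module in this layout is just plain functions. Here's a sketch of what quotes.py might look like — _fetch_quote is a stand-in name I've invented for the real data-source call:

```python
# src/financekit_mcp/tools/quotes.py -- plain functions, no FastMCP import

def _fetch_quote(ticker: str):
    # Placeholder: a real implementation would call a market-data API here.
    return 0.0, 0.0

def get_stock_quote(ticker: str) -> dict:
    """Get real-time stock quote for a given ticker symbol."""
    price, change = _fetch_quote(ticker)
    return {"ticker": ticker.upper(), "price": price, "change_pct": change}
```

Unit tests can then import and call get_stock_quote directly, with _fetch_quote monkeypatched, no server process involved.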


Lessons

Return structured verdicts, not just data. The most useful tools in FinanceKit aren't the ones that return raw numbers — they're the ones that include a verdict field. When get_technical_analysis returns {"rsi": 68.4, "rsi_signal": "approaching_overbought"}, Claude can synthesize a useful answer. When it just returns 68.4, Claude has to do the interpretation itself, which is less reliable.
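The verdict layer is just a small classification step on top of the raw number. A hypothetical sketch of the RSI case — the thresholds are illustrative, not FinanceKit's actual values:

```python
def rsi_signal(rsi: float) -> str:
    """Map a raw RSI value to a verdict string Claude can use directly."""
    if rsi >= 70:
        return "overbought"
    if rsi >= 65:
        return "approaching_overbought"
    if rsi <= 30:
        return "oversold"
    if rsi <= 35:
        return "approaching_oversold"
    return "neutral"

def technical_summary(rsi: float) -> dict:
    # Return both the raw value and the verdict, as get_technical_analysis does
    return {"rsi": rsi, "rsi_signal": rsi_signal(rsi)}
```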

Keep tools narrow. I initially built a full_analysis tool that ran RSI, MACD, Bollinger, and ADX in one call. Claude rarely used it — the narrow tools gave it more control over what to fetch. I kept full_analysis but the individual tools see 4x more usage.

Test with Claude, not just pytest. Unit tests verify correctness. Only testing with Claude tells you whether the tool descriptions are clear enough for the model to use them correctly. I rewrote three docstrings after Claude consistently misused those tools.


Both servers install in one command and require no API keys:

claude mcp add financekit -- uvx --from financekit-mcp financekit
claude mcp add siteaudit -- uvx --from siteaudit-mcp siteaudit

If you're thinking about building your own MCP server, FastMCP is the right starting point. The decorator-based approach removes 80% of the boilerplate. The remaining 20% — deployment, versioning, observability — you'll figure out as you go.
