Hello, I'm Shrijith Venkatramana. I'm building git-lrc, an AI code reviewer that runs on every commit. Star us on GitHub to help devs discover the project. Do give it a try and share your feedback so we can improve the product.
Imagine you want your AI coding assistant or agent to actually do useful things in your world—read the latest database records, run calculations against your data, update tickets, or even show an interactive dashboard—without you writing pages of glue code, managing JSON schemas by hand, or worrying about credential leaks every time you expose a capability.
FastMCP is the Python framework that makes this straightforward. It turns ordinary Python functions, classes, and data sources into fully compliant Model Context Protocol (MCP) servers that any compatible AI host (Claude Desktop, Cursor, custom agents, etc.) can discover and use safely.
## What Problem Does MCP (and FastMCP) Actually Solve?
Traditional tool-calling approaches have friction:
- Raw function calling often requires manual schema definitions.
- CLI "skills" can expose too much (credentials, full shell access).
- Custom HTTP endpoints mean reinventing discovery, validation, auth, and state management.
- Connecting interactive UIs back to your backend is painful.
MCP standardizes how LLMs discover and call tools, read dynamic resources, and use prompt templates. FastMCP implements that standard with Pythonic decorators and sensible defaults so you stay focused on business logic.
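To make the "manual schema definitions" pain concrete, here is a deliberately simplified, stdlib-only sketch of the discovery metadata a host needs for each tool and how it can be derived from type hints. The `describe_tool` and `TYPE_NAMES` names are illustrative, not library API; FastMCP does this for you automatically (and far more robustly, via Pydantic).

```python
import inspect

# Map Python annotations to JSON-Schema-style type names (simplified).
TYPE_NAMES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def describe_tool(fn) -> dict:
    """Build a minimal, MCP-style tool description from a typed function."""
    sig = inspect.signature(fn)
    props = {
        name: {"type": TYPE_NAMES.get(p.annotation, "string")}
        for name, p in sig.parameters.items()
    }
    required = [
        name for name, p in sig.parameters.items()
        if p.default is inspect.Parameter.empty
    ]
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "inputSchema": {"type": "object", "properties": props, "required": required},
    }

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

schema = describe_tool(add)
print(schema["inputSchema"]["properties"])
# → {'a': {'type': 'integer'}, 'b': {'type': 'integer'}}
```

Maintaining this by hand for every function (and keeping it in sync as signatures change) is exactly the glue code FastMCP eliminates.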
## Installation and Your First Working Server (5 Minutes)
Use uv (recommended) or pip:
```bash
uv pip install fastmcp
# or
pip install fastmcp
```
Create `server.py`:
```python
from fastmcp import FastMCP

mcp = FastMCP("demo")  # Human-friendly name for the server

@mcp.tool
def add(a: int, b: int) -> int:
    """Add two numbers. Useful for any math the agent needs to perform accurately."""
    return a + b

@mcp.tool
def greet(name: str, formal: bool = False) -> str:
    """Generate a greeting message."""
    title = "Mr. " if formal else ""
    return f"Hello, {title}{name}!"

if __name__ == "__main__":
    mcp.run(transport="http")  # Streamable HTTP; omit the argument for the default STDIO transport
```
Run it:
```bash
python server.py
```
The server starts on http://localhost:8000, serving the MCP endpoint at `/mcp` (host and port are configurable via `mcp.run`). You now have a live MCP endpoint with schemas generated automatically from your type hints and docstrings.
## Exploring Your Server: Tools, Resources, and Prompts
FastMCP supports three core primitives.
- **Tools** — executable functions the LLM can call.
- **Resources** — readable data (static or dynamic) the LLM can fetch.
- **Prompts** — reusable prompt templates with parameters.
Add these to the same file:
```python
from typing import List, Dict
import datetime

# Dynamic resource example
@mcp.resource("resource://time/now")
def current_time() -> str:
    """Return the current UTC time as an ISO 8601 string."""
    return datetime.datetime.now(datetime.timezone.utc).isoformat()

# List resource
@mcp.resource("resource://users")
def list_users() -> List[Dict]:
    """Return a list of sample users. In production, query your DB here."""
    return [
        {"id": 1, "name": "Alice", "role": "engineer"},
        {"id": 2, "name": "Bob", "role": "designer"},
    ]

# Prompt template
@mcp.prompt
def research_topic(topic: str, depth: str = "medium") -> str:
    """Generate a structured research prompt."""
    return f"""
Research {topic} at {depth} depth.
1. Use available tools to gather the latest information.
2. Summarize key findings.
3. List open questions.
"""
```
These are automatically discoverable. Resources are great for live data (JIRA tickets, GitHub issues, database views) without forcing the LLM to call a tool every time.
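Resources can also be parameterized: FastMCP supports templated URIs such as `resource://users/{user_id}`, so a host can request a single record instead of the whole list. The matching idea behind such templates can be sketched in plain stdlib Python (the `match_template` helper below is illustrative, not FastMCP API):

```python
import re

def match_template(template, uri):
    """Match a concrete URI against a template like 'resource://users/{user_id}'.

    Returns a dict of extracted parameters, or None if the URI doesn't match.
    """
    # Turn each "{name}" placeholder into a named capture group.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\g<1>>[^/]+)", template)
    m = re.fullmatch(pattern, uri)
    return m.groupdict() if m else None

params = match_template("resource://users/{user_id}", "resource://users/42")
print(params)  # → {'user_id': '42'}
```

In FastMCP itself you simply declare the placeholder as a function parameter and the framework handles the routing.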
## Connecting Clients and Testing Locally
FastMCP includes a rich client library. Create `client.py`:
```python
import asyncio
from fastmcp import Client

async def main():
    async with Client("http://localhost:8000/mcp") as client:  # Adjust URL/port to your server
        # List capabilities
        tools = await client.list_tools()
        print("Available tools:", [t.name for t in tools])

        # Call a tool
        result = await client.call_tool("add", {"a": 15, "b": 27})
        print("15 + 27 =", result)

        # Read a resource
        time_data = await client.read_resource("resource://time/now")
        print("Current time:", time_data)

asyncio.run(main())
```
For local development with interactive UIs, use the built-in dev server (more on that below).
## Building Interactive Apps Inside the Conversation
One of FastMCP’s most powerful features is Apps — tools that return rich, interactive UIs rendered directly in the host’s conversation.
Mark a tool with `app=True` and return Prefab components (or custom HTML).
Example `dashboard.py`:
```python
from fastmcp import FastMCP
from prefab import Column, Header, Chart, DataTable, Button, CallTool  # Prefab components

mcp = FastMCP("sales-demo")

def fetch_sales_data(region: str):
    # Simulate or query real data
    return [
        {"month": "Jan", "revenue": 12000},
        {"month": "Feb", "revenue": 15000},
        # ...
    ]

@mcp.tool(app=True)
def sales_dashboard(region: str = "Global"):
    """Interactive sales dashboard for the selected region."""
    data = fetch_sales_data(region)
    return Column([
        Header(f"Sales Dashboard - {region}"),
        Chart(data, type="bar", title="Monthly Revenue"),
        DataTable(data, searchable=True),
        Button(
            "Export CSV",
            on_click=CallTool("export_sales", {"region": region})
        ),
    ])

# Backend tool used by the UI (hidden from the LLM by default)
@mcp.tool
def export_sales(region: str):
    # Generate and return a file or link
    return {"status": "success", "message": f"CSV for {region} ready"}
```
Run with `fastmcp dev apps dashboard.py` to preview locally. The dev UI gives you a picker, auto-generated forms, live rendering, and an MCP inspector showing all traffic. Changes hot-reload.
## Advanced Patterns and Production Considerations
### Server Composition & Namespacing
```python
from fastmcp import FastMCP

main_mcp = FastMCP("main")

# Mount another server under a prefix
github_provider = ...  # another FastMCP instance
main_mcp.mount(github_provider, prefix="github")
```
### Authentication & Security
FastMCP supports multiple auth methods. You keep credentials server-side; the LLM only sees tool results. This is a major advantage over exposing raw API keys or full shell access.
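The "credentials stay server-side" pattern is easy to see in code. Here is a minimal sketch (all names are hypothetical: `fetch_ticket` and `TRACKER_API_TOKEN` are illustration, not any real API): the tool reads its token from the server's environment, uses it internally, and returns only data, so the LLM never sees the secret.

```python
import os

def fetch_ticket(ticket_id: str) -> dict:
    """Look up a ticket. The API token never appears in the tool's output."""
    token = os.environ.get("TRACKER_API_TOKEN")  # read server-side only
    if not token:
        return {"error": "TRACKER_API_TOKEN is not configured on the server"}
    # In a real tool, call your tracker's API here using `token`.
    return {"id": ticket_id, "status": "open"}  # stubbed response, no secrets

# Register it like any other tool, e.g. with @mcp.tool on a FastMCP server.
```

Contrast this with giving an agent shell access or a raw API key, where nothing prevents the credential from leaking into the conversation.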
### Deployment
- Run locally or on any VPS with `mcp.run()`.
- For production, Prefect Horizon offers managed hosting, GitHub-based deploys, branch previews, SSO, RBAC, audit logs, etc.
- Containerize easily with Docker for self-hosting.
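A minimal Dockerfile sketch for the `server.py` from earlier, assuming the HTTP transport (note that inside a container you'll want to bind to all interfaces, e.g. `mcp.run(transport="http", host="0.0.0.0")`, so the published port is reachable):

```dockerfile
# Sketch only: adjust the Python version and dependency management to taste.
FROM python:3.12-slim
WORKDIR /app
RUN pip install --no-cache-dir fastmcp
COPY server.py .
EXPOSE 8000
CMD ["python", "server.py"]
```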
### Debugging Tips
- Use `fastmcp dev apps` for UI tools.
- Append `.md` to any docs URL on gofastmcp.com for a markdown version.
- The server at https://gofastmcp.com/mcp lets you query the docs via MCP itself.
## Common Real-World Use Cases
- Internal Tools: Expose company DB queries, ticket systems, or monitoring data safely.
- Personal Agents: Connect to your calendar, email summaries, or note-taking app.
- Data Analysis: Dynamic resources for CSVs, live API feeds, or vector search results.
- Workflow Automation: Tools that trigger Prefect flows or other orchestration.
- E-commerce / CRM Demos: As seen in community examples with order management and dashboards.
## Next Steps and Resources
- Read the official quickstart and tutorials at gofastmcp.com.
- Explore example apps in the FastMCP GitHub repo.
- Try connecting your server to Claude Desktop or other MCP hosts.
- Experiment with `fastmcp dev apps`: it's the fastest way to iterate on interactive tools.
FastMCP removes most of the repetitive work so you can focus on what the AI should actually do with access to your systems. The code you write looks like normal Python, yet it becomes a set of powerful, discoverable, and interactive capabilities for any MCP-compatible agent.
Start small with a couple of tools today. You’ll quickly see how natural it feels to give your AI real work to do. What domain or dataset will you connect first?
*AI agents write code fast. They also silently remove logic, change behavior, and introduce bugs -- without telling you. You often find out in production.
git-lrc fixes this. It hooks into git commit and reviews every diff before it lands. 60-second setup. Completely free.*
Any feedback or contributors are welcome! It's online, source-available, and ready for anyone to use.