SatStack

Connect Your AI Agent to Anything: Building MCP Servers with Python and FastMCP

The problem with local AI agents isn't intelligence — it's isolation. Your LLM can reason about code, but it can't run it. It can discuss your database, but it can't query it. It's a brain in a jar.

The Model Context Protocol (MCP), developed by Anthropic, solves this. It's a standard that lets AI agents connect to external tools — filesystems, APIs, databases, CLIs — through a consistent interface. Build an MCP server, and any MCP-compatible client (Claude, a local agent, your own code) can use your tool.

FastMCP is the Python library that makes building MCP servers straightforward. Here's how it works.


What MCP Actually Is

MCP defines a standard protocol between:

  • MCP Client: the AI agent/assistant making requests (e.g., Claude Desktop, your agent code)
  • MCP Server: a service that exposes tools the agent can call

When an agent needs to do something real — read a file, query an API, run a command — it calls an MCP tool. The server executes it and returns the result. The agent uses that result to continue reasoning.

Think of it as a standardized plugin system for AI agents.
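On the wire, that call is a JSON-RPC 2.0 message. A minimal sketch of a tools/call request, with the tool name and arguments made up for illustration:

```python
import json

# An MCP tool call is a JSON-RPC 2.0 request; the server answers with a
# matching "result" message. The envelope follows the MCP spec's tools/call
# method; the tool name and arguments here are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "README.md"},
    },
}

wire = json.dumps(request)
print(wire)
```

The server's reply carries the tool's output in a result field keyed by the same id, which is how a client matches responses to requests.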


Install FastMCP

pip install fastmcp

That's the whole dependency. FastMCP handles the MCP protocol, tool registration, and server lifecycle.


Your First MCP Server: 20 Lines

from fastmcp import FastMCP

mcp = FastMCP("My First Server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

@mcp.tool()
def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().isoformat()

if __name__ == "__main__":
    mcp.run()

Run it:

python server.py

Any MCP client can now call add and get_current_time as native tools. The docstrings become the tool descriptions the agent uses to decide when to call them.
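FastMCP derives each tool's input schema from the function signature and type hints. A rough stdlib-only sketch of that mapping (not FastMCP's actual implementation) for the add tool above:

```python
import inspect
from typing import get_type_hints

def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# Map Python annotations to JSON Schema types (simplified subset).
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Build a rough MCP-style tool description from a Python function."""
    hints = get_type_hints(fn)
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "description": fn.__doc__,
        "inputSchema": {
            "type": "object",
            "properties": {
                name: {"type": PY_TO_JSON.get(hints.get(name), "string")}
                for name in params
            },
            "required": list(params),
        },
    }

print(tool_schema(add))
```

This is why type hints and docstrings matter in MCP servers: they are not just documentation, they are the interface the agent sees.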


Building Something Useful: A Filesystem + Shell Server

Here's a practical MCP server that gives an AI agent safe, controlled access to your filesystem and shell:

import subprocess
from pathlib import Path
from fastmcp import FastMCP

mcp = FastMCP("Dev Tools Server")

# ── Filesystem tools ──────────────────────────────────────────

@mcp.tool()
def read_file(path: str) -> str:
    """Read the contents of a file."""
    p = Path(path).expanduser().resolve()
    if not p.exists():
        return f"Error: {path} does not exist"
    if not p.is_file():
        return f"Error: {path} is not a file"
    try:
        return p.read_text(encoding="utf-8", errors="replace")
    except Exception as e:
        return f"Error reading file: {e}"


@mcp.tool()
def write_file(path: str, content: str) -> str:
    """Write content to a file, creating it if it doesn't exist."""
    p = Path(path).expanduser().resolve()
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(content, encoding="utf-8")
    return f"Wrote {len(content)} chars to {path}"


@mcp.tool()
def list_directory(path: str = ".", pattern: str = "*") -> list[str]:
    """List files in a directory matching a glob pattern."""
    p = Path(path).expanduser().resolve()
    if not p.is_dir():
        return [f"Error: {path} is not a directory"]
    return sorted(str(f.relative_to(p)) for f in p.glob(pattern))


@mcp.tool()
def file_exists(path: str) -> bool:
    """Check whether a file or directory exists."""
    return Path(path).expanduser().resolve().exists()


# ── Shell tools ───────────────────────────────────────────────

ALLOWED_COMMANDS = {"git", "python3", "pip", "ls", "cat", "grep",
                    "find", "wc", "head", "tail", "echo", "pwd"}

@mcp.tool()
def run_command(command: str, cwd: str = ".") -> dict:
    """
    Run a shell command and return stdout, stderr, and exit code.
    Only allowlisted commands are permitted for safety.
    """
    import shlex
    parts = shlex.split(command.strip())  # handles quoted arguments
    if not parts:
        return {"error": "Empty command"}

    base_cmd = parts[0]
    if base_cmd not in ALLOWED_COMMANDS:
        return {"error": f"Command '{base_cmd}' not in allowlist: {sorted(ALLOWED_COMMANDS)}"}

    try:
        result = subprocess.run(
            parts,
            cwd=Path(cwd).expanduser().resolve(),
            capture_output=True,
            text=True,
            timeout=30
        )
    except subprocess.TimeoutExpired:
        return {"error": "Command timed out after 30 seconds"}
    return {
        "stdout": result.stdout,
        "stderr": result.stderr,
        "returncode": result.returncode
    }


@mcp.tool()
def git_status(repo_path: str = ".") -> str:
    """Get git status for a repository."""
    result = subprocess.run(
        ["git", "status", "--short"],
        cwd=Path(repo_path).expanduser().resolve(),
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        return f"Error: {result.stderr.strip()}"
    return result.stdout or "Clean working tree"


@mcp.tool()
def git_diff(repo_path: str = ".", staged: bool = False) -> str:
    """Get git diff for a repository."""
    cmd = ["git", "diff"]
    if staged:
        cmd.append("--cached")
    result = subprocess.run(
        cmd,
        cwd=Path(repo_path).expanduser().resolve(),
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        return f"Error: {result.stderr.strip()}"
    return result.stdout or "No changes"


if __name__ == "__main__":
    mcp.run()
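One hardening step worth considering, not part of the server above and purely illustrative: confine every path to a sandbox root so the agent can't read or write outside it.

```python
from pathlib import Path

# Hypothetical sandbox root; pick whatever directory you want to expose.
SANDBOX_ROOT = Path.home() / "agent-workspace"

def safe_path(path: str) -> Path:
    """Resolve a path and refuse anything that escapes the sandbox root."""
    p = (SANDBOX_ROOT / path).resolve()
    # is_relative_to (Python 3.9+) catches ../ escapes and absolute paths
    # after resolution, since joining with an absolute path replaces the root.
    if not p.is_relative_to(SANDBOX_ROOT.resolve()):
        raise ValueError(f"Path escapes sandbox: {path}")
    return p
```

The filesystem tools would then call safe_path(path) instead of Path(path).expanduser().resolve(), turning "controlled access" from a convention into an enforced boundary.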

Building an API Gateway MCP Server

Give your agent access to external APIs through a controlled interface:

import requests
from fastmcp import FastMCP

mcp = FastMCP("API Gateway")

@mcp.tool()
def http_get(url: str, params: dict | None = None) -> dict:
    """
    Make an HTTP GET request and return the response.
    Returns status code, headers subset, and parsed body.
    """
    try:
        resp = requests.get(url, params=params, timeout=15)
        body = None
        try:
            body = resp.json()
        except Exception:
            body = resp.text[:2000]  # Cap text responses

        return {
            "status": resp.status_code,
            "content_type": resp.headers.get("content-type", ""),
            "body": body
        }
    except requests.RequestException as e:
        return {"error": str(e)}


@mcp.tool()
def search_web(query: str) -> list[dict]:
    """
    Search the web using Brave Search API.
    Returns list of {title, url, description} results.
    """
    import os
    api_key = os.environ.get("BRAVE_API_KEY", "")
    if not api_key:
        return [{"error": "BRAVE_API_KEY not set"}]

    resp = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        headers={"Accept": "application/json",
                 "X-Subscription-Token": api_key},
        params={"q": query, "count": 5},
        timeout=10
    )
    if resp.status_code != 200:
        return [{"error": f"Search failed: HTTP {resp.status_code}"}]
    results = resp.json().get("web", {}).get("results", [])
    return [{"title": r.get("title", ""), "url": r.get("url", ""),
             "description": r.get("description", "")}
            for r in results]


@mcp.tool()
def fetch_page(url: str, max_chars: int = 4000) -> str:
    """Fetch a web page and return its text content."""
    try:
        resp = requests.get(url, timeout=15,
                            headers={"User-Agent": "Mozilla/5.0"})
        # Very basic HTML strip
        import re
        text = re.sub(r'<[^>]+>', ' ', resp.text)
        text = re.sub(r'\s+', ' ', text).strip()
        return text[:max_chars]
    except Exception as e:
        return f"Error: {e}"


if __name__ == "__main__":
    mcp.run()

Connecting to a Local Ollama Agent

Wire up your MCP server to the local agent from Article #2:

import subprocess
import json
import requests

OLLAMA_URL = "http://localhost:11434"
MODEL = "qwen2.5:14b"

# Start the MCP server as a subprocess speaking MCP over stdio.
# NOTE: simplified for illustration. A spec-compliant client must first send
# an initialize request and an initialized notification before calling tools;
# the official mcp SDK (or FastMCP's client) handles that handshake for you.
server_proc = subprocess.Popen(
    ["python3", "dev_tools_server.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True
)

def call_mcp_tool(tool_name: str, **kwargs) -> str:
    """Send a tool call to the MCP server via JSON-RPC."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": tool_name,
            "arguments": kwargs
        }
    }
    server_proc.stdin.write(json.dumps(request) + "\n")
    server_proc.stdin.flush()
    response = json.loads(server_proc.stdout.readline())
    return response.get("result", {}).get("content", [{}])[0].get("text", "")


def agent_with_tools(task: str) -> str:
    """Run agent with MCP tool access."""
    # First pass: ask what tools are needed
    planning_prompt = f"""Task: {task}

Available tools: read_file, write_file, list_directory, file_exists, run_command, git_status, git_diff

Which tools do you need and in what order? Reply as JSON:
{{"steps": [{{"tool": "tool_name", "args": {{"key": "value"}}}}]}}"""

    plan_response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": MODEL, "prompt": planning_prompt,
              "stream": False, "options": {"temperature": 0.1}}
    ).json()["response"]

    # Parse and execute plan
    try:
        import re
        json_match = re.search(r'\{.*\}', plan_response, re.DOTALL)
        if json_match:
            plan = json.loads(json_match.group())
            results = []
            for step in plan.get("steps", []):
                tool = step["tool"]
                args = step.get("args", {})
                result = call_mcp_tool(tool, **args)
                results.append(f"[{tool}]: {result}")

            # Final synthesis
            synthesis_prompt = f"""Task: {task}

Tool results:
{chr(10).join(results)}

Based on these results, provide your final answer:"""

            return requests.post(
                f"{OLLAMA_URL}/api/generate",
                json={"model": MODEL, "prompt": synthesis_prompt,
                      "stream": False}
            ).json()["response"]
    except Exception as e:
        return f"Error executing plan: {e}"

    return plan_response

MCP Resources and Prompts

Beyond tools, MCP supports two more primitives:

import subprocess
from fastmcp import FastMCP

mcp = FastMCP("Full Server")

# Resources: static or dynamic data the agent can read
@mcp.resource("config://app")
def get_app_config() -> str:
    """Return the current application configuration."""
    with open("config.yaml") as f:
        return f.read()

@mcp.resource("logs://recent")
def get_recent_logs() -> str:
    """Return the last 100 lines of application logs."""
    result = subprocess.run(
        ["tail", "-100", "/var/log/app.log"],
        capture_output=True, text=True
    )
    return result.stdout

# Prompts: reusable prompt templates
@mcp.prompt()
def code_review_prompt(diff: str) -> str:
    """Generate a code review prompt for the given diff."""
    return f"""Review this code change for bugs, security issues, and style:

{diff}

Provide structured feedback with severity levels."""

Running in Production

For a persistent MCP server that survives reboots, run it under systemd. Note that the default stdio transport expects the client to spawn the process, so a long-running service pairs best with one of FastMCP's network transports (SSE/HTTP, selectable via the transport argument to mcp.run):

sudo tee /etc/systemd/system/mcp-devtools.service > /dev/null << 'EOF'
[Unit]
Description=MCP Dev Tools Server
After=network.target

[Service]
Type=simple
User=YOUR_USER
WorkingDirectory=/home/YOUR_USER/mcp-servers
ExecStart=/usr/bin/python3 /home/YOUR_USER/mcp-servers/dev_tools_server.py
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now mcp-devtools

The Full Local AI Stack

With MCP in place, the architecture is complete:

Ollama (qwen2.5:14b)          — Local LLM
    ↓
Memory Agent (SQLite)          — Persistent context
    ↓
RAG System (ChromaDB)          — Document knowledge
    ↓
MCP Server (FastMCP)           — Real-world tool access
    ↓
Your actual filesystem, APIs, databases

Each layer from this series stacks on the previous:

  1. Bitcoin CLI + Lightning — payment integration
  2. Local AI Agent (Ollama) — the core LLM layer
  3. Lightning Network Analysis — ecosystem context
  4. RAG System — document knowledge
  5. Agent Memory (SQLite) — persistent state
  6. AI Code Review Bot — applied tooling
  7. MCP Server (this post) — real-world connectivity

The result: a fully local, zero-subscription AI agent that can read files, query APIs, review code, remember conversations, search documents, and reason about all of it — on hardware you already own.

Top comments

Renato Marinho
Great post! FastMCP Python is a solid choice for getting started quickly. One thing worth considering as your MCP servers grow in complexity: the output format matters a lot to the consuming agent.

For example, returning { amount: 45000, currency: 'USD' } is ambiguous — is that $45k or $450.00 in cents? A human would check the UI context, but an AI agent has no such luxury.

I've been exploring this problem with a TypeScript framework called mcp-fusion (github.com/vinkius-labs/mcp-fusion) that introduces a Presenter layer specifically for this — think of it as an agent-first "view" layer that adds semantic clarity and action affordances to tool responses. Interesting complement to what you're building here.