# MCP in practice
Model Context Protocol (MCP) is a way to expose tools (capabilities) to an LLM client in a consistent, inspectable format. In practice that means:
- you run an MCP server that advertises tools and implements them
- an MCP client (often embedded in an AI app) connects, lists tools, and calls them with structured arguments
- you keep the “tool boundary” crisp: inputs/outputs are explicit, side effects are controlled, and failures are predictable
This article is intentionally practical: a minimal code example you can copy/paste, plus a checklist for making it safe(‑ish) and maintainable.
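For orientation before the example: on the wire, MCP clients and servers speak JSON-RPC 2.0, and tool discovery and invocation use the `tools/list` and `tools/call` methods. The sketch below shows the rough shape of a call round trip; field names are simplified, so check the spec for the exact envelope:

```python
import json

# A tools/call request a client might send (simplified sketch; see the
# MCP spec for the full envelope and capability negotiation).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "math", "arguments": {"op": "add", "a": 2, "b": 3}},
}

# The matching response: results come back as a list of structured
# content items rather than a bare string.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": json.dumps({"result": 5})}]},
}

print(json.dumps(request, indent=2))
```

The point to internalize is that both sides exchange structured, inspectable payloads; everything below is about keeping your side of that exchange predictable.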
## A minimal code example (Python)
The goal of this example is not to be feature-complete—it’s to show the shape of a real MCP server:
- a couple of tools with typed inputs
- clear validation and error handling
- a small “allowlist” security posture (no arbitrary code execution)
Note: MCP ecosystems move quickly. Treat this as a reference implementation pattern, not a promise that every client/server library uses identical APIs.
"""mcp_example.py
A minimal MCP-style tool server example.
What it demonstrates:
- A tiny tool registry (name -> callable)
- JSON-schema-like argument definitions
- Input validation with clear errors
- An allowlist approach to side effects
This is intentionally lightweight so you can adapt it to the MCP Python SDK
or your preferred framework.
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional, Tuple
class ToolError(Exception):
"""Raised when tool invocation fails in a user-visible way."""
@dataclass(frozen=True)
class ToolSpec:
name: str
description: str
# Minimal "schema" to keep this example dependency-free.
# In production, prefer JSON Schema and a validator.
params: Dict[str, Dict[str, Any]]
handler: Callable[[Dict[str, Any]], Any]
def _require_str(args: Dict[str, Any], key: str) -> str:
val = args.get(key)
if not isinstance(val, str) or not val.strip():
raise ToolError(f"'{key}' must be a non-empty string")
return val
def _require_int(args: Dict[str, Any], key: str, *, min_value: Optional[int] = None) -> int:
val = args.get(key)
if not isinstance(val, int):
raise ToolError(f"'{key}' must be an integer")
if min_value is not None and val < min_value:
raise ToolError(f"'{key}' must be >= {min_value}")
return val
def tool_echo(args: Dict[str, Any]) -> Dict[str, Any]:
message = _require_str(args, "message")
return {"message": message}
# A tiny, explicitly allowlisted "math" tool.
_ALLOWED_OPS: Dict[str, Callable[[int, int], int]] = {
"add": lambda a, b: a + b,
"sub": lambda a, b: a - b,
"mul": lambda a, b: a * b,
}
def tool_math(args: Dict[str, Any]) -> Dict[str, Any]:
op = _require_str(args, "op")
a = _require_int(args, "a")
b = _require_int(args, "b")
if op not in _ALLOWED_OPS:
raise ToolError(f"Unsupported op '{op}'. Allowed: {sorted(_ALLOWED_OPS)}")
result = _ALLOWED_OPS[op](a, b)
return {"op": op, "a": a, "b": b, "result": result}
TOOLS: Dict[str, ToolSpec] = {
"echo": ToolSpec(
name="echo",
description="Return the provided message.",
params={
"message": {"type": "string", "description": "Message to echo"},
},
handler=tool_echo,
),
"math": ToolSpec(
name="math",
description="Perform a simple allowlisted math operation.",
params={
"op": {"type": "string", "enum": sorted(_ALLOWED_OPS), "description": "Operation"},
"a": {"type": "integer", "description": "Left operand"},
"b": {"type": "integer", "description": "Right operand"},
},
handler=tool_math,
),
}
def list_tools() -> Dict[str, Any]:
"""Return a representation a client could display/inspect."""
return {
"tools": [
{
"name": spec.name,
"description": spec.description,
"params": spec.params,
}
for spec in TOOLS.values()
]
}
def call_tool(name: str, args: Dict[str, Any]) -> Tuple[bool, Any]:
"""Invoke a tool safely.
Returns: (ok, payload)
- ok=True -> payload is the result
- ok=False -> payload is an error dict safe to show to a user/model
"""
spec = TOOLS.get(name)
if spec is None:
return False, {"error": f"Unknown tool '{name}'"}
try:
return True, spec.handler(args)
except ToolError as e:
return False, {"error": str(e)}
except Exception:
# Avoid leaking internals; log server-side in real deployments.
return False, {"error": "Tool execution failed"}
if __name__ == "__main__":
# Demo “client” interactions
print("== list_tools ==")
print(list_tools())
print("\n== call_tool: echo ==")
print(call_tool("echo", {"message": "hello"}))
print("\n== call_tool: math ==")
print(call_tool("math", {"op": "mul", "a": 6, "b": 7}))
print("\n== call_tool: math (bad op) ==")
print(call_tool("math", {"op": "rm -rf /", "a": 1, "b": 2}))
## What to adapt when you wire this to a real MCP server

- Replace `list_tools()` with your server's tool discovery mechanism.
- Replace `call_tool()` with the library's tool invocation entry point.
- Keep the patterns:
  - validate inputs
  - constrain side effects
  - return structured errors
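As tools multiply, the per-field helpers (`_require_str`, `_require_int`) stop scaling; the same `params` dicts that drive discovery can drive validation too. A minimal sketch, under the assumption that you keep the article's tiny schema format rather than adopting full JSON Schema (`TYPE_CHECKS` and `validate_args` are illustrative names, not part of any SDK):

```python
from typing import Any, Dict


class ToolError(Exception):
    """User-visible validation failure (mirrors the article's class)."""


# Map the example's "type" strings to runtime checks. Booleans are
# excluded from "integer" because isinstance(True, int) is True in Python.
TYPE_CHECKS = {
    "string": lambda v: isinstance(v, str) and bool(v.strip()),
    "integer": lambda v: isinstance(v, int) and not isinstance(v, bool),
}


def validate_args(params: Dict[str, Dict[str, Any]], args: Dict[str, Any]) -> None:
    """Check args against a minimal params schema; raise ToolError on failure."""
    for key, schema in params.items():
        if key not in args:
            raise ToolError(f"missing required field '{key}'")
        check = TYPE_CHECKS.get(schema.get("type"))
        if check is not None and not check(args[key]):
            raise ToolError(f"'{key}' must be a valid {schema['type']}")
        enum = schema.get("enum")
        if enum is not None and args[key] not in enum:
            raise ToolError(f"'{key}' must be one of {sorted(enum)}")
    extra = set(args) - set(params)
    if extra:
        raise ToolError(f"unexpected fields: {sorted(extra)}")


params = {
    "op": {"type": "string", "enum": ["add", "sub", "mul"]},
    "a": {"type": "integer"},
    "b": {"type": "integer"},
}
validate_args(params, {"op": "add", "a": 1, "b": 2})  # passes silently
```

With this in place, `call_tool()` can run `validate_args(spec.params, args)` before dispatching, and every tool gets validation for free. When you outgrow it, swap in real JSON Schema with a validator library instead of extending this by hand.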
## Implementation checklist (ship it without surprises)
Use this checklist when turning a demo MCP server into something you can run for teammates or customers.
### Tool surface
- [ ] Each tool has a single, well-defined responsibility
- [ ] Inputs are validated (types, ranges, required fields)
- [ ] Outputs are deterministic and structured (no “stringly-typed” blobs)
- [ ] Tool names/descriptions are stable and versioned if clients depend on them
### Safety & security
- [ ] Prefer allowlists over blocklists (operations, file paths, domains, commands)
- [ ] No arbitrary code execution and no shelling out without strict constraints
- [ ] Secrets never appear in tool output (or logs)
- [ ] Add rate limiting / timeouts for long-running tools
- [ ] Audit external network calls (domain allowlist, SSRF protections)
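The domain-allowlist item is a few lines of standard library. A minimal sketch (the hostnames are placeholders; a production check should also resolve DNS and reject private IP ranges, which this does not do):

```python
from urllib.parse import urlparse

# Illustrative allowlist -- replace with the hosts your tools actually need.
ALLOWED_HOSTS = {"api.example.com", "docs.example.com"}


def is_url_allowed(url: str) -> bool:
    """Allow only https URLs whose exact hostname is on the allowlist."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS


print(is_url_allowed("https://api.example.com/v1/data"))      # True
print(is_url_allowed("https://evil.com/?x=api.example.com"))  # False
print(is_url_allowed("http://api.example.com/"))              # False (not https)
```

Note the second case: the allowed hostname appears in the query string, but `urlparse` correctly reports the real host, which is why you check parsed components rather than doing substring matching on the raw URL.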
### Reliability
- [ ] Tools are idempotent where possible; where not, document side effects
- [ ] Errors are explicit and user-safe (don’t leak stack traces)
- [ ] Add structured logging (tool name, latency, status, request id)
- [ ] Add tests per tool: happy path + validation failures
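The per-tool tests are cheap to write because tools are plain functions. A sketch of the two cases worth covering for the article's `math` tool, with a condensed copy of the tool declared inline so the snippet is self-contained:

```python
from typing import Any, Dict


class ToolError(Exception):
    pass


def tool_math(args: Dict[str, Any]) -> Dict[str, Any]:
    # Condensed version of the article's tool, inlined for a standalone test.
    if args.get("op") not in {"add", "sub", "mul"}:
        raise ToolError("unsupported op")
    a, b = args["a"], args["b"]
    return {"result": {"add": a + b, "sub": a - b, "mul": a * b}[args["op"]]}


def test_math_happy_path():
    assert tool_math({"op": "mul", "a": 6, "b": 7}) == {"result": 42}


def test_math_rejects_unknown_op():
    try:
        tool_math({"op": "rm -rf /", "a": 1, "b": 2})
    except ToolError:
        return
    raise AssertionError("expected ToolError")


test_math_happy_path()
test_math_rejects_unknown_op()
print("ok")
```

Under pytest these run as-is (drop the two direct calls at the bottom); the pattern is one happy-path test plus one test per validation rule, so a schema change that silently loosens validation fails loudly.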
### Operability
- [ ] Health check endpoint (or equivalent) and basic metrics
- [ ] Pin dependency versions; automate updates
- [ ] Document configuration and environment variables
- [ ] Create a minimal runbook: “what to do when tool X fails”
## Sources / further reading
- Model Context Protocol (MCP) — official site: https://modelcontextprotocol.io/
- MCP specification (overview): https://spec.modelcontextprotocol.io/
- Anthropic documentation (MCP concepts & ecosystem entry points): https://docs.anthropic.com/
If you’re using an SDK (Python/TypeScript), also read that SDK’s README and examples end-to-end—small API differences matter when you’re wiring tool schemas and streaming results.