You've seen the demos. Claude reads your filesystem, queries your database, fires off API calls — all from a chat window. You try to replicate it. Three hours later you have 12 broken servers, a config file that looks like JSON vomited on YAML, and nothing actually works.
MCP (Model Context Protocol) is genuinely powerful. The setup story is genuinely terrible. This is the guide I wish existed when I started.
## What MCP Actually Is (In One Paragraph)
MCP is Anthropic's open protocol for giving LLMs structured access to external tools and data. Think of it as a standardized USB interface: your AI client (Claude Desktop, Cursor, Windsurf, Zed) is the host, and MCP servers are the peripherals. Each server exposes tools (functions the model can call), resources (data it can read), and prompts (reusable templates).
The key insight most tutorials miss: MCP servers are just processes. They communicate over stdio or SSE. No magic, no cloud dependency. That simplicity is what makes it powerful — and what makes misconfiguration so easy.
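That "just processes" claim is easy to verify: the wire format is plain JSON-RPC 2.0 over stdin/stdout. As a sketch, here's roughly what the `initialize` request a client sends to a freshly spawned server looks like (the `protocolVersion` string and client name are illustrative; check the spec for the current values):

```python
import json

def initialize_request(client_name: str, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 initialize message an MCP client
    sends to a server it has just spawned over stdio."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # illustrative; use the current spec version
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "0.1.0"},
        },
    }
    return json.dumps(msg)

print(initialize_request("debug-client"))
```

Pipe that into any MCP server's stdin and you'll get a JSON response back on stdout. No magic.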
## The Config File Is Where Everyone Dies
Every MCP client has a JSON config that registers your servers. For Claude Desktop on macOS it lives at `~/Library/Application Support/Claude/claude_desktop_config.json`. For Cursor, it's `.cursor/mcp.json` in your project root or `~/.cursor/mcp.json` globally.
The structure is simple. The gotchas are not.
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"],
      "env": {}
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://localhost/mydb"
      }
    }
  }
}
```
Common failure modes:

- `command` must be an absolute path or a binary in your `$PATH`. `npx` works. `node` works if it's in PATH. A bare script name doesn't.
- Environment variables in `env` override your shell env, so secrets you export in `.bashrc` won't be visible unless you re-declare them here.
- The client inherits a minimal shell environment. If your server needs nvm-managed Node, point `command` at the full path: `/Users/you/.nvm/versions/node/v20.0.0/bin/node`.
- Restart the client after every config change. Claude Desktop doesn't hot-reload.
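Putting those together, an nvm-backed server entry ends up looking like this (paths and the env var are illustrative; run `which node` in your own shell to find yours):

```json
{
  "mcpServers": {
    "my-server": {
      "command": "/Users/you/.nvm/versions/node/v20.0.0/bin/node",
      "args": ["/Users/you/servers/my_server.js"],
      "env": {
        "API_KEY": "re-declare secrets here; .bashrc exports are not inherited"
      }
    }
  }
}
```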
## Choosing and Vetting Servers
The official reference servers live at github.com/modelcontextprotocol/servers. They cover filesystem, Git, PostgreSQL, SQLite, Brave Search, Puppeteer, and about 20 others. These are your safe starting point.
For everything else, vet before you run. An MCP server runs as your user, with your credentials, on your machine. Before npx-installing any community server:
- Check the npm package page — weekly downloads, publish date, maintainer history.
- Read the source. Seriously. Most good servers are under 500 lines.
- Run it in a sandbox first: `npx @whatever/server --dry-run`, or just read what tools it registers before giving it live credentials.
For production use, prefer pinned versions over `latest`:

```json
"args": ["-y", "@modelcontextprotocol/server-filesystem@0.6.2", "/projects"]
```
## The 10 Servers That Actually Matter
Most developers don't need 40 MCP servers. They need 10 that work reliably. Here's the practical stack:
| Server | Use Case | Install |
|---|---|---|
| `@mcp/filesystem` | Read/write local files | npx |
| `@mcp/git` | Commit, diff, branch | npx |
| `@mcp/postgres` | Query prod/dev DB | npx |
| `@mcp/brave-search` | Web search without scraping | npx + API key |
| `@mcp/puppeteer` | Headless browser automation | npx |
| `@mcp/github` | Issues, PRs, repos | npx + PAT |
| `@mcp/slack` | Post messages, read channels | npx + Bot token |
| `@mcp/memory` | Persistent knowledge graph | npx |
| `@mcp/sequential-thinking` | Complex reasoning chains | npx |
| Custom HTTP wrapper | Your internal APIs | ~50 lines Python |
Run this stack and you cover 90% of real dev workflows: read docs, query data, push code, search the web, automate browsers, coordinate with your team.
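As a sketch, registering the two API-key entries from the table looks like this. The env var names below follow the official reference servers, but double-check each server's README before relying on them:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": { "BRAVE_API_KEY": "your-key-here" }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    }
  }
}
```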
## Building Your First Custom Server in 20 Minutes
If your internal API isn't covered by existing servers, writing a minimal MCP server is faster than you think. Here's a Python skeleton using the official SDK:
```python
import asyncio

import httpx
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

app = Server("my-api")

@app.list_tools()
async def list_tools():
    return [Tool(
        name="get_order",
        description="Fetch order by ID from internal API",
        inputSchema={
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    )]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_order":
        async with httpx.AsyncClient() as client:
            r = await client.get(f"http://api.internal/orders/{arguments['order_id']}")
            return [TextContent(type="text", text=r.text)]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # stdio_server() yields the (read, write) streams; app.run() drives the protocol.
    async with stdio_server() as (read, write):
        await app.run(read, write, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```
Register it:
```json
"my-api": {
  "command": "/path/to/venv/bin/python",
  "args": ["/path/to/my_server.py"]
}
```
That's it. The model can now call `get_order` like any other tool. Add authentication, rate limiting, and error handling before anyone depends on it; as a proof of concept, this is all you need.
## Debugging When It All Goes Wrong
When a server fails silently, you have three tools:
1. Enable MCP logging in Claude Desktop: Add `"logging": {"level": "debug"}` to your config. Logs appear in `~/Library/Logs/Claude/`.
2. Run the server manually: Copy the exact command from your config and run it in a terminal. If it exits immediately or throws an error, you'll see it. Most issues are missing env vars or bad paths.
3. Use MCP Inspector: `npx @modelcontextprotocol/inspector <your-server-command>` spins up a local web UI where you can call tools directly, without an AI client in the loop. Invaluable for development.
The #1 cause of "server not showing up" in practice: the process crashes on startup because of a missing dependency or a bad connection string. The client silently drops it. Always verify your server runs standalone before debugging the client integration.
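You can automate that standalone check. This is a sketch (the `preflight` helper is mine, not part of any SDK): spawn the command exactly as configured and see whether it survives a few seconds. A healthy stdio server blocks waiting on stdin; an immediate exit means a startup crash.

```python
import subprocess

def preflight(command, args, env=None, timeout=5.0):
    """Spawn a server command the way an MCP client would and check
    whether it stays alive long enough to serve requests."""
    proc = subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        env=env,  # pass a minimal dict here to mimic the client's stripped-down env
        text=True,
    )
    try:
        proc.wait(timeout=timeout)
        # Exited before the timeout: startup crash. Surface stderr for debugging.
        return False, proc.stderr.read()
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
        return True, ""  # still running after the timeout: started cleanly

# Example: check an entry copied from your config (arguments illustrative).
# ok, err = preflight("npx", ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"])
```

If `ok` is False, fix whatever `err` reports before touching the client config again.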
## From Setup to Production Workflow
The gap between "I got it working locally" and "my whole team uses this reliably" comes down to three things: reproducible config, secret management, and process stability.
Keep your mcp.json in version control with env var placeholders. Store actual secrets in your existing secret manager (1Password CLI, AWS Secrets Manager, Vault — whatever you use). For stability, wrap long-running servers in a process supervisor if you need them persistent; for on-demand servers, the stdio spawn model handles this automatically.
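With the 1Password CLI, for instance, that pattern can look like the sketch below. The vault path is illustrative; `op run` resolves `op://` secret references in the environment before exec'ing the real command, so the plaintext secret never touches your config:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "op",
      "args": ["run", "--", "npx", "-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "op://dev-vault/postgres/connection-string"
      }
    }
  }
}
```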
Once your stack stabilizes at 10 well-chosen servers, you'll find you stop fighting config and start actually building. The model has consistent, reliable access to your tools. You have one config file to maintain. Debugging takes minutes instead of hours.
I compiled everything into a practical guide: *MCP Mastery: Zero to Production in 48h*.