# The Bartender's Guide to Model Context Protocol: Mixing the Perfect API Cocktail 🍸
Last month, I was building an AI assistant that needed to interact with our company's internal tools: Jira, Confluence, our PostgreSQL database, and our custom CRM. Each integration was a special snowflake of pain:
```python
# The nightmare before MCP
class MyFragileAgent:
    def __init__(self):
        self.jira_client = JiraAPI(auth=weird_oauth_flow)
        self.confluence = ConfluenceWrapper(needs_custom_session)
        self.db = PostgresConnection(with_connection_pooling)
        self.crm = SomeRESTClient(uses_different_auth_again)
        # ...200 lines of glue code later
```
Every tool needed its own authentication dance, error handling strategy, and data transformation layer. My AI agent was less "intelligent assistant" and more "integration hell supervisor."
The real kicker? When Claude or GPT-4 wanted to use these tools, I had to write even more code to translate between their function-calling formats and my janky integrations.
## Enter MCP: The Universal Bartender's Handbook
Model Context Protocol is Anthropic's answer to this chaos. Think of it as a standardized way for AI models to interact with external tools, data sources, and services. Instead of every AI application reinventing the wheel, MCP provides:
- A standard server protocol (the bar's layout)
- Resource discovery (the menu)
- Tool definitions (the recipes)
- Streaming context (the conversation with your bartender)
Here's the beautiful part: write an MCP server once, use it with any MCP-compatible client.
## The Anatomy of an MCP Server: Building Your First "Bar"
Let's build a simple MCP server that exposes a GitHub repository as a resource. I'll show you the surprisingly elegant structure:
```python
import os

import httpx
from mcp.server import Server
from mcp.types import Resource, Tool, TextContent

# Pull the token from the environment - never hardcode credentials
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]

# Initialize our "bar"
app = Server("github-mcp-server")

# Define what's "on the menu" - our resources
@app.list_resources()
async def list_resources() -> list[Resource]:
    return [
        Resource(
            uri="github://repos/user/repo",
            name="Repository Info",
            mimeType="application/json",
            description="Access repository details"
        )
    ]

# The bartender knows how to serve what you order
@app.read_resource()
async def read_resource(uri: str) -> str:
    if uri.startswith("github://repos/"):
        repo_path = uri.replace("github://repos/", "")
        async with httpx.AsyncClient() as client:
            response = await client.get(
                f"https://api.github.com/repos/{repo_path}",
                headers={"Authorization": f"token {GITHUB_TOKEN}"}
            )
        return response.text
    raise ValueError(f"Unknown resource: {uri}")

# Define tools (the bartender's special skills)
@app.list_tools()
async def list_tools() -> list[Tool]:
    return [
        Tool(
            name="create_issue",
            description="Create a GitHub issue",
            inputSchema={
                "type": "object",
                "properties": {
                    "repo": {"type": "string"},
                    "title": {"type": "string"},
                    "body": {"type": "string"}
                },
                "required": ["repo", "title"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    if name == "create_issue":
        # Mix the cocktail (create the issue)
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"https://api.github.com/repos/{arguments['repo']}/issues",
                json={"title": arguments["title"], "body": arguments.get("body", "")},
                headers={"Authorization": f"token {GITHUB_TOKEN}"}
            )
        return [TextContent(
            type="text",
            text=f"Created issue #{response.json()['number']}"
        )]
    raise ValueError(f"Unknown tool: {name}")
```
## The Three Core Ingredients: Resources, Prompts, and Tools
MCP servers can serve three types of "drinks":
### 1. Resources (The Ingredients)
Resources are data sources that the AI can read:
```text
github://repos/anthropics/mcp  → Repository metadata
file:///home/docs/api.md       → Documentation
postgres://localhost/users     → Database tables
```
Think of resources as the bottles behind the bar. The AI can ask, "What do you have?" and get a structured list.
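Because every "bottle" is addressed by a URI, routing comes down to parsing the scheme. Here's a minimal sketch of that idea - `route_resource` and the handler names are hypothetical, not part of the MCP SDK:

```python
from urllib.parse import urlparse

# Hypothetical helper: pick a backend for a resource URI by its scheme.
def route_resource(uri: str) -> str:
    handlers = {
        "github": "GitHub API",
        "file": "local filesystem",
        "postgres": "database query",
    }
    scheme = urlparse(uri).scheme
    if scheme not in handlers:
        raise ValueError(f"Unknown resource scheme: {scheme}")
    return handlers[scheme]

print(route_resource("github://repos/anthropics/mcp"))  # GitHub API
```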
### 2. Prompts (The House Specials)
Prompts are pre-defined templates that guide the AI:
```python
from mcp.types import Prompt, PromptArgument

@app.list_prompts()
async def list_prompts() -> list[Prompt]:
    return [
        Prompt(
            name="debug_issue",
            description="Help debug a GitHub issue",
            arguments=[
                PromptArgument(
                    name="issue_number",
                    description="Issue number to debug",
                    required=True
                )
            ]
        )
    ]

@app.get_prompt()
async def get_prompt(name: str, arguments: dict):
    if name == "debug_issue":
        # fetch_issue is a helper that calls the GitHub API (not shown)
        issue = await fetch_issue(arguments["issue_number"])
        return f"""Analyze this GitHub issue and suggest solutions:

Title: {issue['title']}
Body: {issue['body']}
Comments: {len(issue['comments'])} comments
Labels: {', '.join(issue['labels'])}

Provide a structured debugging approach."""
    raise ValueError(f"Unknown prompt: {name}")
```
### 3. Tools (The Bartender's Skills)
Tools are actions the AI can perform:
```python
# Think of tools as verbs: create, update, delete, search, analyze
create_issue()
search_codebase()
run_tests()
deploy_service()
```
## The Real Magic: Client-Side Integration
Here's where MCP shines. On the client side (Claude, your custom AI app, etc.), connecting to an MCP server is ridiculously simple:
```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Connect to our "bar"
server_params = StdioServerParameters(
    command="python",
    args=["github_mcp_server.py"]
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        # Initialize - like walking into the bar
        await session.initialize()

        # Ask what's available - check the menu
        resources = await session.list_resources()
        tools = await session.list_tools()

        # Order something - use a tool
        result = await session.call_tool("create_issue", {
            "repo": "anthropics/mcp",
            "title": "MCP is awesome!",
            "body": "Seriously, this protocol rocks."
        })
```
## The Plot Twist: Why This Changes Everything
Remember my fragile agent from the beginning? Here's what it looks like with MCP:
```python
class MyElegantAgent:
    def __init__(self):
        # Connect to MCP servers - that's it!
        # (MCPClient is shorthand for an MCP client session wrapper)
        self.servers = [
            MCPClient("jira-mcp-server"),
            MCPClient("confluence-mcp-server"),
            MCPClient("postgres-mcp-server"),
            MCPClient("crm-mcp-server")
        ]

    async def ask_claude(self, question: str):
        # Claude automatically discovers and uses tools
        # from all connected MCP servers
        tools = []
        for server in self.servers:
            tools.extend(await server.list_tools())

        response = await anthropic_client.messages.create(
            model="claude-3-5-sonnet-20241022",
            tools=tools,
            messages=[{"role": "user", "content": question}]
        )
        return response
```
Zero custom glue code. The MCP servers handle their own authentication, error handling, and data formatting. Claude natively understands how to call MCP tools.
## The Secret Sauce: Composability
Here's the bartender analogy's payoff: just like a good bar can source ingredients from multiple suppliers, an MCP client can connect to multiple servers simultaneously:
```text
┌─────────────┐
│   Claude    │
│  (or GPT)   │
└──────┬──────┘
       │
       ├──────► MCP Server: GitHub
       ├──────► MCP Server: PostgreSQL
       ├──────► MCP Server: Slack
       ├──────► MCP Server: Filesystem
       └──────► MCP Server: Your Custom API
```
Each server is an independent service. Add or remove them without touching your AI code.
## Real-World Recipe: A Multi-Tool Development Assistant
Let's build something practical - a dev assistant that can:
- Read your codebase (filesystem MCP server)
- Search Stack Overflow (web search MCP server)
- Create issues in GitHub (GitHub MCP server)
- Query your production database (PostgreSQL MCP server)
```python
# File: dev_assistant.py
import asyncio
from contextlib import AsyncExitStack

from anthropic import Anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def dev_assistant():
    # Start multiple MCP servers
    servers = {
        "filesystem": StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "/path/to/code"]
        ),
        "github": StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-github"]
        ),
        "postgres": StdioServerParameters(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-postgres",
                  "postgresql://localhost/mydb"]
        )
    }

    sessions = {}
    all_tools = []

    # Connect to all servers; the exit stack keeps every connection
    # open until we're done, then closes them all cleanly
    async with AsyncExitStack() as stack:
        for name, params in servers.items():
            read, write = await stack.enter_async_context(stdio_client(params))
            session = await stack.enter_async_context(ClientSession(read, write))
            await session.initialize()
            sessions[name] = session

            # Gather all available tools
            tools_result = await session.list_tools()
            all_tools.extend(tools_result.tools)

        # Now Claude can use any tool from any server!
        client = Anthropic()
        question = """
        Look at the recent error logs in logs/error.log,
        search for similar issues on Stack Overflow,
        check if this affects our user_sessions table,
        and create a GitHub issue if it's a bug.
        """
        response = client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=4096,
            tools=[  # translate MCP tool definitions into Anthropic's format
                {"name": t.name, "description": t.description,
                 "input_schema": t.inputSchema}
                for t in all_tools
            ],  # Claude sees tools from ALL servers
            messages=[{"role": "user", "content": question}]
        )

        # Handle tool calls (simplified)
        for block in response.content:
            if block.type == "tool_use":
                # Route to the server that owns this tool (helper not shown)
                session = find_server_for_tool(block.name, sessions)
                result = await session.call_tool(block.name, block.input)
                print(f"Tool result: {result}")

if __name__ == "__main__":
    asyncio.run(dev_assistant())
```
## The Gotchas (Things Your Bartender Won't Tell You)
### 1. Authentication Management
MCP servers handle their own auth, but you need to configure it:
```json
// claude_desktop_config.json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "your_token_here"
      }
    }
  }
}
```
Pro tip: Use environment variables or secret managers, never hardcode tokens.
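One way to enforce that tip in your own servers (a sketch, not part of the MCP SDK) is to read credentials from the environment at startup and fail fast with a clear message, instead of surfacing a cryptic 401 on the first tool call:

```python
import os

# Hypothetical helper: fail fast at startup if a credential is missing.
def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before starting the MCP server")
    return value

# At server startup:
# GITHUB_TOKEN = require_env("GITHUB_TOKEN")
```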
### 2. Error Handling
MCP servers can fail. Always implement retry logic:
```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=2, max=10))
async def call_tool_with_retry(session, tool_name, args):
    try:
        return await session.call_tool(tool_name, args)
    except Exception as e:
        print(f"Tool call failed: {e}")
        raise  # re-raise so tenacity can retry
```
### 3. Performance Considerations
Each MCP server is a separate process. Connection overhead matters:
```python
# BAD: Connect for every request
async def bad_pattern():
    for i in range(100):
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                await session.call_tool("some_tool", {})

# GOOD: Reuse connections
async def good_pattern():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            for i in range(100):
                await session.call_tool("some_tool", {})
```
## The Future Bar: Where MCP is Heading
The MCP ecosystem is exploding. Here's what's brewing:
- Pre-built servers for popular services (already 30+ official implementations)
- Sampling support - servers can request AI completions (mind-bending recursion!)
- Streaming resources - real-time data feeds
- Multi-modal resources - images, audio, video
Imagine:
```python
# Future MCP server for security scanning
await session.call_tool("scan_dependencies", {
    "project_path": "/my/app",
    "ai_explanation": True  # Server asks Claude to explain vulnerabilities!
})
```
## Your Takeaway Menu
**Start Here:**

- Install the MCP SDK: `pip install mcp`
- Try an existing server: GitHub's MCP Explorer
- Build a simple server for your most-used API
**Quick Wins:**
- Wrap your internal APIs as MCP servers
- Connect Claude to your company's knowledge base
- Build tool chains without custom integration code
**Watch Out For:**
- Credential management (use env vars!)
- Connection pooling for production
- Tool naming conflicts when running multiple servers
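On that last gotcha: if two servers both expose a tool called `search`, the merged tool list is ambiguous. A common workaround (my sketch, not an MCP built-in) is to namespace each tool with its server's name before handing the combined list to the model, then split the prefix back off when routing the call:

```python
# Prefix tool names with their server so "search" from two servers can't collide.
def namespace_tools(server_name: str, tools: list[dict]) -> list[dict]:
    return [{**tool, "name": f"{server_name}__{tool['name']}"} for tool in tools]

# When the model calls a tool, recover which server owns it.
def route_tool_call(namespaced_name: str) -> tuple[str, str]:
    server_name, _, tool_name = namespaced_name.partition("__")
    return server_name, tool_name

merged = (namespace_tools("github", [{"name": "search"}])
          + namespace_tools("postgres", [{"name": "search"}]))
print([t["name"] for t in merged])  # ['github__search', 'postgres__search']
```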
## The Last Call
Model Context Protocol isn't just another API standard - it's a fundamental shift in how we build AI applications. Instead of building monolithic AI systems with hardcoded integrations, we're moving toward a composable ecosystem where:
- Servers are reusable, focused, and maintained independently
- Clients (AI models) discover and use tools dynamically
- Developers assemble capabilities like mixing cocktails
The bartender doesn't need to know how to make every drink ever invented. They just need to know the fundamentals and have access to the right ingredients.
Your AI agents are the same way.