The Model Context Protocol (MCP) is revolutionizing how AI assistants interact with external tools. But building MCP servers from scratch? That's tedious.
In this post, I'll show you how we built ConnectSafely's MCP server that automatically generates 12+ LinkedIn automation tools from an existing OpenAPI specification—in under 50 lines of Python.
## What We're Building
An MCP server that:
- ✅ Auto-generates tools from any OpenAPI spec
- ✅ Handles per-request authentication (API keys)
- ✅ Works with Claude Desktop, Cursor, and any MCP client
- ✅ Deploys as a standard web service with health checks
The result? AI assistants can now execute LinkedIn actions like posting comments, fetching profiles, and managing engagement—all through natural language.
## The Tech Stack
| Component | Purpose |
|---|---|
| FastMCP | MCP server framework with OpenAPI support |
| FastAPI | Web framework for middleware & health checks |
| httpx | Async HTTP client with event hooks |
| ContextVar | Thread-safe per-request state management |
## The Architecture

```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│  Claude Desktop │─────▶│   MCP Server    │─────▶│  LinkedIn API   │
│  / Cursor / AI  │      │   (FastMCP)     │      │   (OpenAPI)     │
└─────────────────┘      └─────────────────┘      └─────────────────┘
        │                        │
        │  ?apiKey=xxx           │  Authorization: Bearer xxx
        └────────────────────────┘
               API Key Flow
```
The magic happens in the middle layer—our MCP server reads an OpenAPI spec and automatically exposes every endpoint as an MCP tool.
## The Code, Step by Step

Let's break the server down piece by piece.
### 1. Setting Up Per-Request Authentication

The challenge: MCP servers handle multiple concurrent requests, each potentially with a different API key. We need state management that keeps those keys isolated per request.

```python
from contextvars import ContextVar

import httpx

# Store the API key per request using a context variable (thread-safe)
api_key_context: ContextVar[str] = ContextVar('api_key', default='')

async def add_auth_header(request: httpx.Request):
    """
    Async event hook that injects the Authorization header
    from the context variable. Called for every outgoing request.
    """
    api_key = api_key_context.get()
    if api_key:
        request.headers["Authorization"] = f"Bearer {api_key}"
```
Why ContextVar? Unlike global variables, ContextVar maintains separate values for each async context. When Request A sets an API key, it won't leak into Request B.
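To see that isolation concretely, here's a minimal, self-contained sketch (separate from the server code, with made-up keys): two concurrent tasks set the same `ContextVar`, and each reads back only its own value.

```python
import asyncio
from contextvars import ContextVar

api_key_context: ContextVar[str] = ContextVar('api_key', default='')

async def handle(key: str) -> str:
    api_key_context.set(key)   # set in this task's own context
    await asyncio.sleep(0)     # yield so the tasks interleave
    return api_key_context.get()  # still this task's value

async def main():
    # gather() wraps each coroutine in a task with its own context copy
    results = await asyncio.gather(handle("key-A"), handle("key-B"))
    print(results)  # ['key-A', 'key-B'] — no cross-task leakage

asyncio.run(main())
```

A plain module-level global would fail this test: whichever task set it last would win for both.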
### 2. Creating the HTTP Client with Event Hooks

```python
# Create the HTTP client with an event hook for dynamic header injection
api_client = httpx.AsyncClient(
    base_url="https://api.connectsafely.ai/linkedin",
    timeout=300.0,  # 5-minute timeout for long-running operations
    event_hooks={"request": [add_auth_header]}
)
```
The `event_hooks` pattern does the heavy lifting here: every outgoing request automatically gets the correct Authorization header injected, without modifying any endpoint code.
### 3. Loading the OpenAPI Spec

```python
import sys

try:
    print("Loading OpenAPI spec...")
    openapi_spec = httpx.get(
        "https://api.connectsafely.ai/linkedin/openapi.json",
        timeout=5.0
    ).json()
    print("OpenAPI spec loaded successfully!")
except httpx.ConnectError:
    print("ERROR: Could not connect to API")
    sys.exit(1)
except Exception as e:
    print(f"ERROR: Failed to load OpenAPI spec: {e}")
    sys.exit(1)
```
We fetch the OpenAPI spec at startup. This means:
- Zero manual tool definitions – endpoints become tools automatically
- Always in sync – update your API, MCP tools update too
- Self-documenting – tool descriptions come from OpenAPI
### 4. The Magic: FastMCP from OpenAPI

```python
from fastmcp import FastMCP

# Create the MCP server from the OpenAPI specification
mcp = FastMCP.from_openapi(
    openapi_spec=openapi_spec,
    client=api_client,
    name="LinkedIn API Server"
)
```

This single call converts your entire OpenAPI spec into MCP tools. Each endpoint becomes a callable tool with:
- Proper parameter validation
- Type hints from the schema
- Descriptions from OpenAPI docs
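For intuition, here's a hypothetical fragment of the kind of spec `from_openapi` consumes. The endpoint is invented for illustration; the comments mark where each field ends up on the generated tool.

```python
# A minimal (hypothetical) OpenAPI fragment with one GET endpoint
minimal_spec = {
    "openapi": "3.1.0",
    "info": {"title": "LinkedIn API", "version": "1.0.0"},
    "paths": {
        "/profiles/{handle}": {
            "get": {
                "operationId": "get_profile",  # -> tool name
                "summary": "Fetch a LinkedIn profile by handle.",  # -> tool description
                "parameters": [{
                    "name": "handle",          # -> typed tool argument
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},  # -> parameter validation
                }],
            }
        }
    },
}
```

The richer your `operationId`s, summaries, and parameter descriptions, the better the AI understands when and how to call each tool.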
### 5. FastAPI Integration for Production Features

```python
from fastapi import FastAPI, Request

# Create the MCP ASGI app
mcp_app = mcp.http_app(path='/')

# Create the FastAPI app with the MCP lifespan
app = FastAPI(lifespan=mcp_app.lifespan)

@app.get("/health")
async def health_check():
    """Health check endpoint for Docker and load balancers."""
    return {"status": "healthy"}
```
We wrap the MCP app in FastAPI to add:
- Health check endpoints (essential for Kubernetes/Docker)
- Custom middleware
- Additional REST endpoints if needed
### 6. API Key Extraction Middleware

```python
@app.middleware("http")
async def extract_api_key(request: Request, call_next):
    """
    Middleware that extracts the API key from a query parameter
    or the Authorization header. Supports both formats:
    - ?apiKey=your-key
    - Authorization: Bearer your-key
    """
    # Try the query parameter first
    api_key = request.query_params.get("apiKey", "")
    if not api_key:
        # Fall back to the Authorization header
        auth_header = request.headers.get("Authorization", "")
        if auth_header.startswith("Bearer "):
            api_key = auth_header.split(" ", 1)[1]
    # Store in the context variable for downstream use
    api_key_context.set(api_key)
    return await call_next(request)

# Mount the MCP server at the root path
app.mount("/", mcp_app)
```
This middleware intercepts every request, extracts the API key, and stores it in our ContextVar. The httpx event hook then reads it for outgoing API calls.
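The precedence rules are easiest to check in isolation. Here's the same logic pulled out as a pure function — a sketch for testing, not part of the server:

```python
def extract_key(query_params: dict, headers: dict) -> str:
    """Query parameter wins; Bearer header is the fallback."""
    api_key = query_params.get("apiKey", "")
    if not api_key:
        auth_header = headers.get("Authorization", "")
        if auth_header.startswith("Bearer "):
            api_key = auth_header.split(" ", 1)[1]
    return api_key

print(extract_key({"apiKey": "abc"}, {}))                # abc
print(extract_key({}, {"Authorization": "Bearer xyz"}))  # xyz
print(extract_key({}, {"Authorization": "Basic xyz"}))   # (empty string)
```

Note that a non-Bearer scheme yields an empty key, so the upstream API's 401 response surfaces naturally.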
### 7. Running the Server

```python
if __name__ == "__main__":
    import uvicorn
    print("\nStarting MCP server on http://0.0.0.0:3011")
    print("Connect with: http://localhost:3011?apiKey=your-api-key")
    uvicorn.run(app, host="0.0.0.0", port=3011)
```
## Connecting to Claude Desktop

Add this to your Claude Desktop config (`claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "connectsafely": {
      "url": "http://localhost:3011?apiKey=YOUR_API_KEY"
    }
  }
}
```
Now Claude can execute LinkedIn automation tasks through natural conversation:
"Post a comment on the latest post from @naval saying 'Great insight on leverage!'"
Claude will use the MCP tools to:
- Search for the user's posts
- Find the latest one
- Post the comment via the API
## Why This Pattern Is Powerful

### 1. Zero Duplication
Your REST API and MCP tools share the same OpenAPI source of truth. Update once, both update.
### 2. Instant AI Integration
Any existing API with an OpenAPI spec can become AI-accessible in minutes.
### 3. Production Ready
- Health checks for orchestrators
- Proper async handling
- Thread-safe authentication
- Configurable timeouts
### 4. Client Flexibility
Works with any MCP-compatible client:
- Claude Desktop
- Cursor IDE
- Custom LangChain agents
- n8n AI nodes
## The Complete Code

Here's everything in one copy-paste block:

```python
import sys
from contextvars import ContextVar

import httpx
from fastapi import FastAPI, Request
from fastmcp import FastMCP

# Store the API key per request using a context variable (thread-safe)
api_key_context: ContextVar[str] = ContextVar('api_key', default='')

# Event hook to inject the Authorization header from context
async def add_auth_header(request: httpx.Request):
    api_key = api_key_context.get()
    if api_key:
        request.headers["Authorization"] = f"Bearer {api_key}"

# Create the HTTP client with an event hook for dynamic header injection
api_client = httpx.AsyncClient(
    base_url="https://api.connectsafely.ai/linkedin",
    timeout=300.0,
    event_hooks={"request": [add_auth_header]}
)

# Load the OpenAPI spec
try:
    print("Loading OpenAPI spec...")
    openapi_spec = httpx.get(
        "https://api.connectsafely.ai/linkedin/openapi.json",
        timeout=5.0
    ).json()
    print("OpenAPI spec loaded successfully!")
except Exception as e:
    print(f"ERROR: Failed to load OpenAPI spec: {e}")
    sys.exit(1)

# Create the MCP server from the OpenAPI specification
mcp = FastMCP.from_openapi(
    openapi_spec=openapi_spec,
    client=api_client,
    name="LinkedIn API Server"
)

# Create the MCP ASGI app
mcp_app = mcp.http_app(path='/')

# Create the FastAPI app with the MCP lifespan
app = FastAPI(lifespan=mcp_app.lifespan)

@app.get("/health")
async def health_check():
    return {"status": "healthy"}

@app.middleware("http")
async def extract_api_key(request: Request, call_next):
    api_key = request.query_params.get("apiKey", "")
    if not api_key:
        auth_header = request.headers.get("Authorization", "")
        if auth_header.startswith("Bearer "):
            api_key = auth_header.split(" ", 1)[1]
    api_key_context.set(api_key)
    return await call_next(request)

# Mount the MCP server at the root path
app.mount("/", mcp_app)

if __name__ == "__main__":
    import uvicorn
    print("\nStarting MCP server on http://0.0.0.0:3011")
    print("Connect with: http://localhost:3011?apiKey=your-api-key")
    uvicorn.run(app, host="0.0.0.0", port=3011)
```
## What's Next?
This pattern opens up possibilities:
- Multi-tenant SaaS: Each user's API key routes to their data
- Rate limiting: Add middleware to track usage per key
- Caching: Cache frequently-requested data at the MCP layer
- Logging: Track which AI tools are used most
## Try It Yourself
- Get a ConnectSafely API key: connectsafely.ai
- Or adapt for your own API: Just point to your OpenAPI spec URL
- Full documentation: connectsafely.ai/integrations/mcp-server