Last week, Google shipped a Colab MCP Server. Anthropic's Claude can now talk to databases, APIs, and file systems through MCP. If you've been paying attention, you know MCP (Model Context Protocol) is eating the AI tooling world.
But here's the thing — most tutorials show you "hello world" examples that are useless in production. Let's fix that.
In this guide, we'll build a real MCP server that gives AI agents the ability to query databases, manage deployments, and monitor services. By the end, you'll understand the protocol deeply enough to connect AI to anything.
What is MCP (and Why Should You Care)?
MCP is an open protocol that standardizes how AI models interact with external tools and data. Think of it as USB for AI — a universal plug that lets any AI model connect to any tool.
Before MCP, every AI integration was bespoke. Want Claude to read your database? Custom code. Want GPT to deploy your app? Different custom code. MCP says: "Here's a standard way to expose tools, and any AI can use them."
The architecture is simple:
AI Model (Client) <--JSON-RPC over stdio/SSE--> MCP Server <--> Your Tools/Data
An MCP server exposes three primitives:
- Tools: Functions the AI can call (like `query_database` or `deploy_service`)
- Resources: Data the AI can read (like file contents or API docs)
- Prompts: Reusable prompt templates with parameters
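Under the hood, every tool invocation is a JSON-RPC 2.0 exchange. Here's a sketch of the request/response shapes for a tool call — the field names follow the spec, but the payload values are invented for illustration:

```typescript
// Sketch of the JSON-RPC traffic behind a single tool call.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "query_database",
    arguments: { query: "SELECT 1", database: "logs" },
  },
};

const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '[{"?column?": 1}]' }],
  },
};

console.log(request.method);                  // "tools/call"
console.log(response.result.content[0].type); // "text"
```

The SDK handles all of this serialization for you — you only write handlers.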
Setting Up the Project
mkdir mcp-ops-server && cd mcp-ops-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
Create tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./dist",
"strict": true,
"esModuleInterop": true
},
"include": ["src/**/*"]
}
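One thing the setup leaves implicit: the tsconfig above targets Node16 ESM, and the SDK's documented quick-starts use ES modules, so it's worth also setting `"type": "module"` in package.json so the compiled `import` statements run as ESM. A minimal sketch (the scripts are just a convenience):

```json
{
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```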
Building the Server
Here's our src/index.ts — a realistic MCP server that exposes database and deployment tools (the actual DB and CI/CD integrations are stubbed out at the bottom):
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "ops-server",
version: "1.0.0",
});
// Tool 1: Query a database safely
server.tool(
"query_database",
"Execute a read-only SQL query against the application database",
{
query: z.string().describe("SQL SELECT query to execute"),
database: z.enum(["analytics", "users", "logs"]).describe("Target database"),
},
async ({ query, database }) => {
// Validate it's truly read-only
const normalized = query.trim().toUpperCase();
if (!normalized.startsWith("SELECT")) {
return {
content: [{ type: "text", text: "Error: Only SELECT queries are allowed." }],
isError: true,
};
}
// Block dangerous keywords (word boundaries avoid false positives on
// column names like updated_at)
const forbidden = /\b(DROP|DELETE|UPDATE|INSERT|ALTER|EXEC)\b/;
if (forbidden.test(normalized)) {
return {
content: [{ type: "text", text: "Error: Query contains forbidden operations." }],
isError: true,
};
}
try {
// In production, use your actual DB client here
const results = await executeQuery(database, query);
return {
content: [
{
type: "text",
text: JSON.stringify(results, null, 2),
},
],
};
} catch (err) {
return {
content: [{ type: "text", text: `Query failed: ${(err as Error).message}` }],
isError: true,
};
}
}
);
// Tool 2: Check service health
server.tool(
"check_service",
"Check the health status of a deployed service",
{
service: z.string().describe("Service name (e.g., 'api', 'worker', 'web')"),
environment: z.enum(["staging", "production"]).default("production"),
},
async ({ service, environment }) => {
const healthUrl = `https://${service}.${environment}.internal/health`;
try {
const response = await fetch(healthUrl, {
signal: AbortSignal.timeout(5000),
});
const data = await response.json();
return {
content: [
{
type: "text",
text: `## ${service} (${environment})\n` +
`Status: ${response.ok ? "✅ Healthy" : "❌ Unhealthy"}\n` +
`Response time: ${data.latency_ms ?? "N/A"}ms\n` +
`Version: ${data.version ?? "unknown"}\n` +
`Uptime: ${data.uptime ?? "unknown"}`,
},
],
};
} catch (err) {
return {
content: [
{
type: "text",
text: `❌ ${service} (${environment}) is unreachable: ${(err as Error).message}`,
},
],
isError: true,
};
}
}
);
// Tool 3: Trigger a deployment
server.tool(
"deploy",
"Trigger a deployment for a service. Requires confirmation for production.",
{
service: z.string().describe("Service to deploy"),
environment: z.enum(["staging", "production"]),
version: z.string().describe("Git tag or commit SHA to deploy"),
confirm: z.boolean().describe("Must be true for production deployments"),
},
async ({ service, environment, version, confirm }) => {
if (environment === "production" && !confirm) {
return {
content: [
{
type: "text",
text: "⚠️ Production deployment requires explicit confirmation. " +
"Set confirm=true to proceed.",
},
],
isError: true,
};
}
// Call your CI/CD API (GitHub Actions, ArgoCD, etc.)
const deploymentId = await triggerDeployment(service, environment, version);
return {
content: [
{
type: "text",
text: `🚀 Deployment initiated\n` +
`Service: ${service}\n` +
`Environment: ${environment}\n` +
`Version: ${version}\n` +
`Deployment ID: ${deploymentId}\n\n` +
`Track progress: https://deploys.internal/${deploymentId}`,
},
],
};
}
);
// Resource: Expose runbook documentation via a URI template
// (ResourceTemplate is imported from "@modelcontextprotocol/sdk/server/mcp.js")
server.resource(
"runbook",
new ResourceTemplate("runbook://{service}", { list: undefined }),
{ description: "Operational runbook for a service", mimeType: "text/markdown" },
async (uri, { service }) => {
const runbook = await loadRunbook(String(service));
return {
contents: [
{
uri: uri.href,
mimeType: "text/markdown",
text: runbook,
},
],
};
}
);
// Start the server
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("OPS MCP Server running on stdio");
}
main().catch(console.error);
// Placeholder implementations — replace with your actual integrations
async function executeQuery(db: string, query: string): Promise<unknown[]> {
// Replace with: pg.query(), mysql2, better-sqlite3, etc.
return [{ placeholder: "Connect your actual database here" }];
}
async function triggerDeployment(
service: string, env: string, version: string
): Promise<string> {
// Replace with: GitHub Actions API, ArgoCD, Kubernetes API, etc.
return `deploy-${Date.now()}`;
}
async function loadRunbook(service: string): Promise<string> {
// Replace with: file system, Notion API, Confluence, etc.
return `# ${service} Runbook\n\nNo runbook found for ${service}.`;
}
Production Patterns That Matter
1. Authentication & Authorization
Never expose an MCP server without auth. For internal tools, use mTLS or token-based auth:
// Validate auth before processing any request
server.tool("admin_action", "...", { token: z.string() }, async ({ token }) => {
if (!await validateToken(token, "admin")) {
return {
content: [{ type: "text", text: "Unauthorized" }],
isError: true,
};
}
// proceed...
});
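The snippet above assumes a `validateToken` helper. Here's one hedged sketch, assuming a shared secret per role stored in environment variables — a real deployment would instead verify a signed token (JWT, PASETO) and check its role claim:

```typescript
import { timingSafeEqual } from "node:crypto";

// Hypothetical validateToken: constant-time comparison against a per-role
// secret taken from the environment (e.g. MCP_TOKEN_ADMIN).
async function validateToken(token: string, role: string): Promise<boolean> {
  const expected = process.env[`MCP_TOKEN_${role.toUpperCase()}`];
  if (!expected) return false;
  const a = Buffer.from(token);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so compare lengths first
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The constant-time comparison matters: a naive `token === expected` leaks timing information about how many leading characters matched.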
2. Rate Limiting
AI agents can be chatty. Protect your downstream services:
const rateLimiter = new Map<string, { count: number; reset: number }>();
function checkRateLimit(tool: string, limit = 30, windowMs = 60000): boolean {
const now = Date.now();
const entry = rateLimiter.get(tool);
if (!entry || now > entry.reset) {
rateLimiter.set(tool, { count: 1, reset: now + windowMs });
return true;
}
if (entry.count >= limit) return false;
entry.count++;
return true;
}
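To see the window behave, here's the same limiter exercised with a limit of 3 (the function is repeated so the snippet runs standalone):

```typescript
// Self-contained copy of the limiter, for demonstration only.
const rateLimiter = new Map<string, { count: number; reset: number }>();

function checkRateLimit(tool: string, limit = 30, windowMs = 60000): boolean {
  const now = Date.now();
  const entry = rateLimiter.get(tool);
  if (!entry || now > entry.reset) {
    rateLimiter.set(tool, { count: 1, reset: now + windowMs });
    return true;
  }
  if (entry.count >= limit) return false;
  entry.count++;
  return true;
}

// With limit=3, the first three calls pass and the fourth is rejected.
const results = [1, 2, 3, 4].map(() => checkRateLimit("deploy", 3));
console.log(results); // [ true, true, true, false ]
```

Call `checkRateLimit(toolName)` at the top of each handler and return an `isError` result when it returns false.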
3. Structured Logging
When an AI agent breaks something, you need to know exactly what happened:
function logToolCall(tool: string, params: unknown, result: unknown) {
console.error(JSON.stringify({
timestamp: new Date().toISOString(),
tool,
params,
success: !(result as any).isError,
// Don't log full results — they can be huge
resultPreview: JSON.stringify(result).slice(0, 200),
}));
}
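One way to apply this without editing every handler is a small higher-order wrapper. `withLogging` here is a hypothetical helper, not part of the SDK — note it logs to stderr, since stdout is reserved for the stdio transport:

```typescript
type ToolResult = { content: { type: string; text: string }[]; isError?: boolean };

// Hypothetical wrapper: logs each call's params and outcome to stderr,
// then passes the handler's result through unchanged.
function withLogging<P>(
  tool: string,
  handler: (params: P) => Promise<ToolResult>
): (params: P) => Promise<ToolResult> {
  return async (params) => {
    const result = await handler(params);
    console.error(JSON.stringify({
      timestamp: new Date().toISOString(),
      tool,
      params,
      success: !result.isError,
      resultPreview: JSON.stringify(result).slice(0, 200),
    }));
    return result;
  };
}
```

Then register `withLogging("deploy", handler)` with `server.tool` instead of the bare handler.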
4. Input Sanitization
AI models can hallucinate dangerous inputs. Always validate:
// Use Zod for runtime validation (MCP SDK does this automatically)
// Add business logic validation on top
const MAX_QUERY_LENGTH = 1000;
const ALLOWED_TABLES = ["events", "metrics", "users_public"];
function validateQuery(query: string): string | null {
if (query.length > MAX_QUERY_LENGTH) return "Query too long";
// Simple table allowlist check
const tables = query.match(/FROM\s+(\w+)/gi);
if (tables?.some(t => {
const name = t.replace(/FROM\s+/i, "");
return !ALLOWED_TABLES.includes(name);
})) {
return "Access to that table is not allowed";
}
return null;
}
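Exercised against the allowlist, the helper behaves like this (repeated here so the snippet runs standalone):

```typescript
const MAX_QUERY_LENGTH = 1000;
const ALLOWED_TABLES = ["events", "metrics", "users_public"];

// Returns an error message, or null when the query passes.
function validateQuery(query: string): string | null {
  if (query.length > MAX_QUERY_LENGTH) return "Query too long";
  const tables = query.match(/FROM\s+(\w+)/gi);
  if (tables?.some((t) => !ALLOWED_TABLES.includes(t.replace(/FROM\s+/i, "")))) {
    return "Access to that table is not allowed";
  }
  return null;
}

console.log(validateQuery("SELECT count(*) FROM events")); // null
console.log(validateQuery("SELECT * FROM users_private")); // "Access to that table is not allowed"
```

A regex allowlist is a heuristic, not a parser — for anything beyond simple queries, enforce access at the database level with a restricted role.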
Connecting to Claude Desktop
Add your server to claude_desktop_config.json (on macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"ops": {
"command": "node",
"args": ["/path/to/mcp-ops-server/dist/index.js"],
"env": {
"DATABASE_URL": "postgresql://...",
"DEPLOY_API_KEY": "..."
}
}
}
}
Now Claude can:
- "Query the analytics database for yesterday's signup count"
- "Check if the API service is healthy in production"
- "Deploy version v2.3.1 to staging"
Deploying as a Remote Server (SSE)
For team-wide access, serve over HTTP with Server-Sent Events:
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";
const app = express();
// One transport per connected client, keyed by session ID, so the POST
// endpoint can route each message back to the right SSE stream.
const transports: Record<string, SSEServerTransport> = {};
app.get("/sse", async (_req, res) => {
const transport = new SSEServerTransport("/messages", res);
transports[transport.sessionId] = transport;
res.on("close", () => delete transports[transport.sessionId]);
await server.connect(transport);
});
app.post("/messages", async (req, res) => {
// The client echoes back the sessionId it was handed on /sse
const transport = transports[req.query.sessionId as string];
if (!transport) {
res.status(400).send("Unknown session");
return;
}
await transport.handlePostMessage(req, res);
});
app.listen(3001, () => {
console.error("MCP SSE server on :3001");
});
What's Next
MCP is moving fast. Here's what to watch:
- Streamable HTTP transport — replacing SSE for better reliability
- OAuth 2.1 integration — standardized auth for remote MCP servers
- Elicitation — servers can ask the user for input mid-tool-call
- Multi-server composition — chaining MCP servers together
The ecosystem is exploding. VS Code, JetBrains, Cursor, Windsurf — they all support MCP now. Building an MCP server today means your tool works with every AI-powered IDE tomorrow.
The best part? You don't need permission from any AI company. MCP is open. Build a server, share it, and every AI agent in the world can use it.
What MCP server are you building? Drop a comment — I'd love to hear what tools you're connecting to AI.
If this helped you build something cool, buy me a coffee ☕