Bilal Saeed

Building Your First MCP Server: A Developer's Honest Guide

8 million downloads. 5,800+ servers in the ecosystem. Every major AI company on board.

Model Context Protocol is everywhere right now. But most tutorials skip the parts that actually trip you up in production. After building three MCP servers for internal tools at my company, I want to share what I wish someone had told me on day one.

What MCP Actually Does (Skip If You Know This)

MCP standardizes how AI models talk to external tools. Before MCP, connecting Claude or GPT to your database meant writing custom integration code. Every. Single. Time. Different formats, different auth flows, different everything.

MCP fixes that with a simple client-server architecture over JSON-RPC 2.0. Your AI assistant becomes the client. Your data sources become servers. One protocol handles the communication.

Think of it like USB-C for AI integrations. Plug anything into anything.

The Architecture You'll Actually Build

Here's the mental model that clicked for me:

┌─────────────────┐    JSON-RPC 2.0     ┌─────────────────┐
│                 │ ◄─────────────────► │                 │
│   MCP Client    │                     │   MCP Server    │
│  (Claude, GPT)  │    stdio / HTTP     │   (Your Code)   │
│                 │ ◄─────────────────► │                 │
└─────────────────┘                     └─────────────────┘
                                                │
                                                ▼
                                       ┌─────────────────┐
                                       │  Your Database  │
                                       │   APIs, Files   │
                                       │    Whatever     │
                                       └─────────────────┘

The server exposes three things: Tools (functions the AI can call), Resources (data the AI can read), and Prompts (reusable templates). Most tutorials focus on tools. That's where we'll start.
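We won't build prompts in this post, but for reference, registering one looks roughly like this with the TypeScript SDK (a sketch: summarize_table is a made-up prompt, and server is the instance we create below):

import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Advertise one prompt template (also declare prompts: {} in the server capabilities)
server.setRequestHandler(ListPromptsRequestSchema, async () => ({
  prompts: [
    {
      name: "summarize_table",
      description: "Summarize the contents of a database table",
      arguments: [{ name: "table", description: "Table to summarize", required: true }],
    },
  ],
}));

// Resolve the template into concrete messages when the client requests it
server.setRequestHandler(GetPromptRequestSchema, async (request) => ({
  messages: [
    {
      role: "user",
      content: {
        type: "text",
        text: `Summarize the ${request.params.arguments?.table} table: row count, key columns, anything unusual.`,
      },
    },
  ],
}));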

Setting Up a Real MCP Server

Let's build something useful: a server that queries your PostgreSQL database. Not a toy example—something you'd actually deploy.

First, install the SDK:

npm install @modelcontextprotocol/sdk pg

Now the server skeleton:

// src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { Pool } from "pg";

// Database connection
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

// Initialize the MCP server
const server = new Server(
  {
    name: "postgres-query-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "query_database",
        description: "Execute a read-only SQL query against the database",
        inputSchema: {
          type: "object",
          properties: {
            query: {
              type: "string",
              description: "SQL SELECT query to execute",
            },
          },
          required: ["query"],
        },
      },
      {
        name: "list_tables",
        description: "List all tables in the database",
        inputSchema: {
          type: "object",
          properties: {},
        },
      },
    ],
  };
});

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "query_database") {
    return await executeQuery(args.query as string);
  }

  if (name === "list_tables") {
    return await listTables();
  }

  throw new Error(`Unknown tool: ${name}`);
});

async function executeQuery(query: string) {
  // Security: Only allow SELECT statements
  const normalizedQuery = query.trim().toUpperCase();
  if (!normalizedQuery.startsWith("SELECT")) {
    return {
      content: [
        {
          type: "text",
          text: "Error: Only SELECT queries are allowed for safety.",
        },
      ],
    };
  }

  try {
    const result = await pool.query(query);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(result.rows, null, 2),
        },
      ],
    };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: `Query error: ${error.message}`,
        },
      ],
    };
  }
}

async function listTables() {
  const query = `
    SELECT table_name 
    FROM information_schema.tables 
    WHERE table_schema = 'public'
    ORDER BY table_name;
  `;

  try {
    const result = await pool.query(query);
    const tables = result.rows.map((row) => row.table_name);
    return {
      content: [
        {
          type: "text",
          text: `Available tables:\n${tables.join("\n")}`,
        },
      ],
    };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: `Error listing tables: ${error.message}`,
        },
      ],
    };
  }
}

// Start the server
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Postgres MCP server running on stdio");
}

main().catch(console.error);

That's about 150 lines for a working database query server. Not bad.
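
One step that's easy to skip: the client config below points at compiled JavaScript, so build the TypeScript first. Assuming typescript and @types/pg are installed as dev dependencies and your tsconfig.json outputs to dist/:

npx tsc
node dist/index.js   # prints the startup line to stderr, then waits on stdio (Ctrl-C to exit)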

The Part Everyone Skips: Configuration

Your shiny new server won't do anything until you tell Claude Desktop (or whatever client) where to find it. On macOS, the config file lives at ~/Library/Application Support/Claude/claude_desktop_config.json (on Windows: %APPDATA%\Claude\claude_desktop_config.json):

{
  "mcpServers": {
    "postgres": {
      "command": "node",
      "args": ["/absolute/path/to/your/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
      }
    }
  }
}

Restart Claude Desktop after editing this. I've lost count of how many times I forgot that step and wondered why nothing worked.

Where Things Get Tricky

Here's the stuff that bit me in production.

Problem 1: Error Handling That Actually Helps

The basic example above returns generic error messages. In practice, you want structured errors the AI can reason about:

interface QueryError {
  code: string;
  message: string;
  suggestion?: string;
}

function formatError(error: any): QueryError {
  // PostgreSQL-specific error codes
  if (error.code === "42P01") {
    return {
      code: "TABLE_NOT_FOUND",
      message: `Table does not exist: ${error.message}`,
      suggestion: "Use list_tables to see available tables",
    };
  }

  if (error.code === "42703") {
    return {
      code: "COLUMN_NOT_FOUND",
      message: `Column does not exist: ${error.message}`,
      suggestion: "Check column names in the table schema",
    };
  }

  if (error.code === "28P01") {
    return {
      code: "AUTH_FAILED",
      message: "Database authentication failed",
      suggestion: "Check DATABASE_URL environment variable",
    };
  }

  return {
    code: "UNKNOWN_ERROR",
    message: error.message || "An unexpected error occurred",
  };
}
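
To put it to use, swap the generic catch in executeQuery for the structured version. A sketch: the isError flag is the tool-result field the MCP spec uses to signal a failed call, so the client knows this is an error rather than data.

  try {
    const result = await pool.query(query);
    return {
      content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }],
    };
  } catch (error) {
    return {
      content: [
        { type: "text", text: JSON.stringify(formatError(error), null, 2) },
      ],
      isError: true,
    };
  }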

The AI can now understand why something failed and suggest fixes. This matters more than you'd think.

Problem 2: Query Timeouts

Long-running queries will hang your server. Always set timeouts:

async function executeQuery(query: string) {
  const client = await pool.connect();

  try {
    // Set a 30-second timeout for this query. One caveat: SET sticks to the
    // pooled connection after release, so the setting carries over to whoever
    // grabs that connection next (see the Pool-level alternative below).
    await client.query("SET statement_timeout = 30000");

    const result = await client.query(query);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(result.rows, null, 2),
        },
      ],
    };
  } catch (error) {
    if (error.message.includes("statement timeout")) {
      return {
        content: [
          {
            type: "text",
            text: "Query timed out after 30 seconds. Try adding LIMIT or optimizing the query.",
          },
        ],
      };
    }
    throw error;
  } finally {
    client.release();
  }
}
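
If you'd rather not manage this per client, node-postgres also accepts statement_timeout in the Pool config, which applies it to every connection the pool creates and avoids the lingering-SET problem:

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  statement_timeout: 30_000, // milliseconds, enforced server-side per statement
});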

Problem 3: Result Size Limits

Return 10,000 rows and watch everything grind to a halt. The AI's context window can't handle it, and you're wasting tokens anyway.

async function executeQuery(query: string) {
  const MAX_ROWS = 100;

  // Inject LIMIT if not present (strip any trailing semicolon first,
  // otherwise the appended clause lands after the statement terminator)
  let safeQuery = query.trim().replace(/;+\s*$/, "");
  if (!safeQuery.toUpperCase().includes("LIMIT")) {
    safeQuery = `${safeQuery} LIMIT ${MAX_ROWS}`;
  }

  const result = await pool.query(safeQuery);

  // Declare note up front so TypeScript allows the conditional assignment below
  const response: {
    rowCount: number;
    data: unknown[];
    truncated: boolean;
    note?: string;
  } = {
    rowCount: result.rows.length,
    data: result.rows,
    truncated: result.rows.length >= MAX_ROWS,
  };

  if (response.truncated) {
    response.note = `Results limited to ${MAX_ROWS} rows. Add specific filters for complete data.`;
  }

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(response, null, 2),
      },
    ],
  };
}

Adding Resources for Schema Context

Tools let the AI do things. Resources let it know things. For a database server, exposing the schema as a resource helps the AI write better queries:

import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Update server capabilities
const server = new Server(
  {
    name: "postgres-query-server",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
      resources: {},
    },
  }
);

// List available resources
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  const tables = await pool.query(`
    SELECT table_name 
    FROM information_schema.tables 
    WHERE table_schema = 'public'
  `);

  return {
    resources: tables.rows.map((row) => ({
      uri: `postgres://schema/${row.table_name}`,
      name: `${row.table_name} schema`,
      description: `Column definitions for the ${row.table_name} table`,
      mimeType: "application/json",
    })),
  };
});

// Read a specific resource
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const uri = request.params.uri;
  const tableName = uri.replace("postgres://schema/", "");

  // Validate table name to prevent injection
  if (!/^[a-zA-Z_][a-zA-Z0-9_]*$/.test(tableName)) {
    throw new Error("Invalid table name");
  }

  const schema = await pool.query(
    `
    SELECT 
      column_name,
      data_type,
      is_nullable,
      column_default
    FROM information_schema.columns
    WHERE table_schema = 'public' AND table_name = $1
    ORDER BY ordinal_position
  `,
    [tableName]
  );

  return {
    contents: [
      {
        uri: uri,
        mimeType: "application/json",
        text: JSON.stringify(schema.rows, null, 2),
      },
    ],
  };
});

Now when someone asks "what columns does the users table have?", the AI can read the schema resource directly instead of running a query.

Security Considerations (Don't Skip This)

Running arbitrary SQL—even read-only—against production databases is risky. Here's my security checklist:

1. Use a read-only database user:

CREATE USER mcp_readonly WITH PASSWORD 'secure_password';
GRANT CONNECT ON DATABASE mydb TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
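
One catch: GRANT SELECT ON ALL TABLES only covers tables that exist when you run it. If tables created later should be readable too (noting that default privileges only apply to objects created by the role running this command), add:

ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO mcp_readonly;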

2. Whitelist allowed tables:

const ALLOWED_TABLES = ["users", "orders", "products"];

function validateQuery(query: string): boolean {
  // Reject write operations (word-boundary match, so column names
  // like updated_at don't trip the UPDATE check)
  const forbidden = /\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE)\b/i;
  if (forbidden.test(query)) {
    return false;
  }

  // Check table references against the whitelist, including JOINs,
  // which a FROM-only pattern would miss
  const tablePattern = /\b(?:FROM|JOIN)\s+([a-zA-Z_][a-zA-Z0-9_]*)/gi;
  const matches = [...query.matchAll(tablePattern)];

  for (const match of matches) {
    if (!ALLOWED_TABLES.includes(match[1].toLowerCase())) {
      return false;
    }
  }

  return true;
}
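
Then gate executeQuery on it in the tool handler, a small addition to the handler from earlier:

  if (name === "query_database") {
    const query = args?.query as string;
    if (!validateQuery(query)) {
      return {
        content: [
          { type: "text", text: "Query rejected: write operation or non-whitelisted table." },
        ],
      };
    }
    return await executeQuery(query);
  }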

3. Rate limit requests:

import { RateLimiter } from "limiter";

const limiter = new RateLimiter({
  tokensPerInterval: 20,
  interval: "minute",
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const hasToken = await limiter.tryRemoveTokens(1);
  if (!hasToken) {
    return {
      content: [
        {
          type: "text",
          text: "Rate limit exceeded. Please wait before making more queries.",
        },
      ],
    };
  }

  // ... rest of handler
});

Testing Your Server

Don't just test against Claude. Use the MCP Inspector tool for faster iteration:

npx @modelcontextprotocol/inspector node dist/index.js

This opens a web UI where you can call tools and read resources directly. Way faster than restarting Claude Desktop every time you change something.

What I'd Do Differently

After three production MCP servers, here's what I've learned:

Start with fewer tools. My first server had twelve tools. It was confusing for the AI and harder to maintain. Now I start with two or three and add more only when there's a clear need.

Log everything. MCP communication happens over stdio, which makes debugging painful, and stdout is the protocol channel itself: anything you print there corrupts the JSON-RPC stream. Log to a file (or stderr) with structured logging from day one:

import pino from "pino";

const logger = pino({
  transport: {
    target: "pino-pretty",
    options: { destination: "/tmp/mcp-server.log" },
  },
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  logger.info({ tool: request.params.name, args: request.params.arguments }, "Tool called");

  // ... handler logic

  logger.info({ tool: request.params.name, success: true }, "Tool completed");
});

Version your tools. When you need to change a tool's behavior, add a new version instead of breaking existing prompts. The AI doesn't handle breaking changes gracefully.
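
In practice that just means a second entry in the ListTools response rather than an edit to the old one (names here are illustrative):

// in the ListToolsRequestSchema handler, alongside the existing query_database entry
{
  name: "query_database_v2",
  description:
    "Execute a read-only SQL query; returns rows plus rowCount and truncation metadata",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "SQL SELECT query to execute" },
    },
    required: ["query"],
  },
},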

The Bottom Line

MCP isn't complicated once you understand the pattern: expose tools, handle requests, return structured responses. The complexity comes from all the production concerns—error handling, security, performance—that tutorials gloss over.

The ecosystem is still maturing. Security best practices are evolving. Some enterprise teams are holding back until things stabilize more. That's reasonable.

But if you're building internal tools or prototyping AI integrations, MCP is already good enough. The standardization alone saves hours of integration work.

Start small. Build something useful. Ship it.

Found this helpful? I'm writing more about practical AI integration patterns. The next post covers building MCP servers for REST APIs—including the OAuth dance that makes everyone's life difficult.
