
GDS K S

Build Your First MCP Server in 20 Minutes, Give Any LLM Access to Your App's Data

I Built an MCP Server in 20 Minutes. Now Claude, GPT, and Gemini All Talk to My Database

You've got an app. You've got data. You want Claude (or GPT, or Gemini) to actually use that data — not through copy-pasting context, not through janky API wrappers, but through a clean protocol that any AI client can connect to.

That's exactly what MCP (Model Context Protocol) does.

If you haven't heard of it yet: MCP is an open standard created by Anthropic that lets AI models connect to external tools, databases, and APIs through a universal interface. Think of it like USB-C for AI — one protocol, every model, any tool.

In this tutorial, I'll walk you through building a real MCP server in TypeScript that exposes your PostgreSQL database to any LLM. By the end, you'll have a working server that Claude Desktop, Cursor, or any MCP-compatible client can connect to.

TL;DR: Install @modelcontextprotocol/sdk, define your tools, run the server. Any MCP client connects instantly. Skip to Quick Start


Table of Contents

  • Why MCP? The Problem It Solves
  • How MCP Works (30-Second Version)
  • What We're Building
  • Quick Start
  • Step 1: Project Setup
  • Step 2: Define Your Tools
  • Step 3: Add Resources
  • Step 4: Connect to Claude Desktop
  • Step 5: Test It
  • Going Further: Real-World Patterns
  • The Full Server (Copy-Paste Ready)
  • FAQ
  • Resources

Why MCP? The Problem It Solves

Right now, if you want an LLM to interact with your app, you have three bad options:

| Approach | What Happens | The Problem |
| --- | --- | --- |
| Copy-paste context | Dump data into the prompt manually | Doesn't scale. Manual. Stale data. |
| Custom function calling | Build tool schemas per model | Different for OpenAI vs Anthropic vs Google. Maintain 3 integrations. |
| RAG pipeline | Embed everything, search, inject | Overkill for structured data. Expensive. Lossy. |

MCP fixes all three. You build one server, and every MCP-compatible client connects to it. When Anthropic updates Claude, when OpenAI adds MCP support, when Cursor ships a new version — your server keeps working. Zero changes.

| | Custom Integration | MCP Server |
| --- | --- | --- |
| Models supported | 1 per integration | All MCP-compatible models |
| Maintenance | Per-model updates | Zero — protocol handles it |
| Data freshness | Depends on pipeline | Real-time queries |
| Setup time | Hours per model | 20 minutes, once |
| Client support | Your app only | Claude Desktop, Cursor, Windsurf, Claude Code, any MCP client |

How MCP Works (30-Second Version)

Your App's Data
      ↓
┌─────────────────────┐
│   MCP Server        │  ← You build this (TypeScript/Python)
│                     │
│  Tools:             │  → "query_users" — run SQL queries
│  Resources:         │  → "schema://tables" — expose DB schema
│  Prompts:           │  → "analyze_data" — pre-built analysis prompt
└─────────┬───────────┘
          │ JSON-RPC over stdio/SSE
          ↓
┌─────────────────────┐
│   MCP Client        │  ← Claude Desktop / Cursor / Your App
│   (Any LLM)         │
└─────────────────────┘

Three core concepts:

  • Tools → Functions the LLM can call (like "run this SQL query")
  • Resources → Data the LLM can read (like "here's the database schema")
  • Prompts → Pre-built prompt templates (like "analyze this table")

That's it. Let's build one.


What We're Building

An MCP server that connects to PostgreSQL and exposes:

  • A query_database tool — LLM can run read-only SQL queries
  • A list_tables tool — LLM can discover what tables exist
  • A describe_table tool — LLM can inspect any table's schema
  • A schema://tables resource — auto-exposes full DB schema
  • An analyze_table prompt — pre-built analysis template

By the end, you'll be able to open Claude Desktop and say: "What are the top 10 customers by revenue this quarter?" — and Claude will query your actual database and answer.
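To make that concrete: for a typical customers/orders schema (these table and column names are hypothetical, not part of the server we build), the query Claude ends up writing looks something like:

```sql
-- Hypothetical schema: customers(id, name), orders(customer_id, total, created_at)
SELECT c.name, SUM(o.total) AS revenue
FROM customers c
JOIN orders o ON o.customer_id = c.id
WHERE o.created_at >= date_trunc('quarter', now())
GROUP BY c.name
ORDER BY revenue DESC
LIMIT 10;
```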


Quick Start

If you just want to get running:

# Create project
mkdir my-mcp-server && cd my-mcp-server
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk pg zod
npm install -D typescript @types/node @types/pg tsx

# Init TypeScript
npx tsc --init --target es2022 --module nodenext --moduleResolution nodenext --outDir dist

Then create src/index.ts (full code below) and run:

npx tsx src/index.ts

Want the full walkthrough? Keep reading.


Step 1: Project Setup

mkdir mcp-database-server && cd mcp-database-server
npm init -y

Install what we need:

npm install @modelcontextprotocol/sdk pg zod
npm install -D typescript @types/node @types/pg tsx

Here's what each package does:

| Package | Purpose |
| --- | --- |
| @modelcontextprotocol/sdk | The official MCP SDK — handles protocol, transport, everything |
| pg | PostgreSQL client |
| zod | Schema validation for tool inputs |
| tsx | Run TypeScript directly without compiling |

Create your project structure:

mkdir src
touch src/index.ts

Step 2: Define Your Tools

This is the core of it. Each tool is a function the LLM can call.

// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";

// Database connection
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL || "postgresql://localhost:5432/myapp",
});

// Create MCP server
const server = new McpServer({
  name: "database-server",
  version: "1.0.0",
});

// Tool 1: List all tables
server.tool(
  "list_tables",
  "List all tables in the database with row counts",
  {},
  async () => {
    const result = await pool.query(`
      SELECT schemaname, tablename,
             n_tup_ins - n_tup_del AS estimated_rows
      FROM pg_stat_user_tables
      ORDER BY estimated_rows DESC
    `);
    return {
      content: [{
        type: "text",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

// Tool 2: Describe a table's schema
server.tool(
  "describe_table",
  "Get the schema (columns, types, constraints) for a specific table",
  {
    table_name: z.string().describe("Name of the table to describe"),
  },
  async ({ table_name }) => {
    // Prevent SQL injection — only allow valid identifiers
    if (!/^[a-zA-Z_][a-zA-Z0-9_]*$/.test(table_name)) {
      return {
        content: [{ type: "text", text: "Invalid table name." }],
        isError: true,
      };
    }

    const result = await pool.query(`
      SELECT column_name, data_type, is_nullable,
             column_default, character_maximum_length
      FROM information_schema.columns
      WHERE table_name = $1
      ORDER BY ordinal_position
    `, [table_name]);

    return {
      content: [{
        type: "text",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

// Tool 3: Run a read-only SQL query
server.tool(
  "query_database",
  "Execute a read-only SQL query against the database. Only SELECT queries are allowed.",
  {
    sql: z.string().describe("The SQL SELECT query to execute"),
  },
  async ({ sql }) => {
    // Safety: a basic allow-list. WITH is permitted for read-only CTEs,
    // but this string check alone is bypassable; pair it with a read-only DB role.
    const trimmed = sql.trim().toUpperCase();
    if (!trimmed.startsWith("SELECT") && !trimmed.startsWith("WITH")) {
      return {
        content: [{ type: "text", text: "Only SELECT queries are allowed." }],
        isError: true,
      };
    }

    try {
      const result = await pool.query(sql);
      return {
        content: [{
          type: "text",
          text: JSON.stringify({
            rows: result.rows,
            rowCount: result.rowCount,
            fields: result.fields.map(f => f.name),
          }, null, 2),
        }],
      };
    } catch (error: any) {
      return {
        content: [{ type: "text", text: `Query error: ${error.message}` }],
        isError: true,
      };
    }
  }
);

Three things to notice:

  1. Zod schemas define what inputs each tool accepts — the MCP client shows these to the LLM
  2. Safety checks reject non-SELECT statements, so the LLM can only read (a string check is a first line of defense; back it with a read-only database role)
  3. Error handling returns structured errors the LLM can understand
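That startsWith check is easy to bypass: stacked statements ("SELECT 1; DROP TABLE users") slip through, and so does a data-modifying CTE ("WITH d AS (DELETE FROM users RETURNING *) SELECT ..."). Here's a sketch of a stricter guard you could swap in (the isReadOnlyQuery name is mine; treat it as a heuristic, with a SELECT-only Postgres role as the real backstop):

```typescript
// A stricter read-only guard than a bare startsWith("SELECT") check.
// Heuristic, not bulletproof: the real protection is a restricted DB role.
function isReadOnlyQuery(sql: string): boolean {
  const stripped = sql
    .replace(/--[^\n]*/g, "")          // drop line comments
    .replace(/\/\*[\s\S]*?\*\//g, "")  // drop block comments
    .trim();

  // Reject stacked statements: a semicolon followed by more SQL
  if (/;\s*\S/.test(stripped)) return false;

  const upper = stripped.toUpperCase();
  if (!upper.startsWith("SELECT") && !upper.startsWith("WITH")) return false;

  // Reject data-modifying keywords anywhere, which covers WITH ... DELETE CTEs.
  // (May false-positive on a column literally named "update"; a guard that
  // fails closed is acceptable here.)
  return !/\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT|REVOKE)\b/.test(upper);
}
```

Drop it into the query_database handler in place of the trimmed/startsWith lines.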

Step 3: Add Resources

Resources are data the LLM can read before deciding what to do. Think of them as context the model gets automatically.

// Resource: Expose the full database schema
server.resource(
  "db-schema",
  "schema://tables",
  async (uri) => {
    const result = await pool.query(`
      SELECT table_name, column_name, data_type, is_nullable
      FROM information_schema.columns
      WHERE table_schema = 'public'
      ORDER BY table_name, ordinal_position
    `);

    // Group by table
    const schema: Record<string, any[]> = {};
    for (const row of result.rows) {
      if (!schema[row.table_name]) schema[row.table_name] = [];
      schema[row.table_name].push({
        column: row.column_name,
        type: row.data_type,
        nullable: row.is_nullable === "YES",
      });
    }

    return {
      contents: [{
        uri: uri.href,
        mimeType: "application/json",
        text: JSON.stringify(schema, null, 2),
      }],
    };
  }
);

Now add a prompt template — pre-built instructions the LLM can use:

// Prompt: Data analysis template
server.prompt(
  "analyze_table",
  { table_name: z.string().describe("Table to analyze") },
  ({ table_name }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Analyze the "${table_name}" table. First, describe its schema using the describe_table tool. Then run a few queries to understand the data distribution: row count, null percentages for each column, and the top 5 most common values for text/enum columns. Summarize your findings.`,
      },
    }],
  })
);

Step 4: Connect to Claude Desktop

Add the transport and start the server:

// Start the server
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr; stdout is reserved for the JSON-RPC protocol messages
  console.error("MCP Database Server running on stdio");
}

main().catch(console.error);

Now tell Claude Desktop about your server. Open your Claude Desktop config:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "my-database": {
      "command": "npx",
      "args": ["tsx", "/full/path/to/mcp-database-server/src/index.ts"],
      "env": {
        "DATABASE_URL": "postgresql://localhost:5432/myapp"
      }
    }
  }
}

Restart Claude Desktop. You should see a hammer icon in the chat input — that means your tools are connected.
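If the icon doesn't appear, two checks usually find the problem (the log path assumes macOS; the Inspector is the protocol's official debugging UI):

```shell
# Tail Claude Desktop's MCP logs to see why a server failed to start (macOS):
tail -f ~/Library/Logs/Claude/mcp*.log

# Or exercise the server outside Claude entirely with the MCP Inspector:
npx @modelcontextprotocol/inspector npx tsx src/index.ts
```

The most common failures are a relative path in the config (it must be absolute) and a DATABASE_URL the server can't reach.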


Step 5: Test It

Open Claude Desktop and try these:

"What tables are in my database?"

Claude will call list_tables and show you every table with row counts.

"Show me the schema for the users table"

Claude calls describe_table with table_name: "users" and displays the columns.

"What are the top 10 customers by total order value this quarter?"

Claude will:

  1. Read the schema://tables resource to understand your data model
  2. Call describe_table on relevant tables
  3. Write and execute a SQL query using query_database
  4. Format the results into a clean answer

No prompt engineering. It just works.


Going Further: Real-World Patterns

Pattern 1: Add Authentication

Don't expose your production database without auth. Use environment variables and connection pooling:

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 5,              // Limit connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});

// Add a query timeout to prevent long-running queries
async function safeQuery(sql: string) {
  const client = await pool.connect();
  try {
    await client.query("SET statement_timeout = '10s'");
    return await client.query(sql);
  } finally {
    client.release();
  }
}

Pattern 2: Multiple Data Sources

One MCP server can expose tools for multiple services:

// Tool: Search your API
server.tool("search_products", "Search products by name or category", {
  query: z.string(),
  category: z.string().optional(),
}, async ({ query, category }) => {
  // Encode user input instead of interpolating it into the URL
  const params = new URLSearchParams({ q: query, ...(category ? { cat: category } : {}) });
  const response = await fetch(`${API_URL}/products/search?${params}`);
  const data = await response.json();
  return { content: [{ type: "text", text: JSON.stringify(data) }] };
});

// Tool: Get analytics
server.tool("get_metrics", "Get app metrics for a date range", {
  start_date: z.string().describe("ISO date string"),
  end_date: z.string().describe("ISO date string"),
  metric: z.enum(["revenue", "signups", "churn", "active_users"]),
}, async ({ start_date, end_date, metric }) => {
  // Query your analytics service
  const data = await analyticsClient.query(metric, start_date, end_date);
  return { content: [{ type: "text", text: JSON.stringify(data) }] };
});

Pattern 3: SSE Transport for Remote Servers

The stdio transport works for local development. For production, use Server-Sent Events so remote clients can connect. (Newer SDK versions also ship a Streamable HTTP transport, which supersedes SSE in the latest spec revision, but SSE remains widely supported.)

import express from "express";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();

// Track live transports by session ID so POSTs route back to the right client
const transports: Record<string, SSEServerTransport> = {};

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports[transport.sessionId] = transport;
  res.on("close", () => delete transports[transport.sessionId]);
  await server.connect(transport);
});

app.post("/messages", async (req, res) => {
  // The client echoes its session ID as a query parameter
  const transport = transports[req.query.sessionId as string];
  if (!transport) {
    res.status(400).send("Unknown session");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001, () => {
  console.log("MCP server running on http://localhost:3001");
});

The Full Server (Copy-Paste Ready)

Here's the complete src/index.ts — one file, fully working:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL || "postgresql://localhost:5432/myapp",
  max: 5,
  idleTimeoutMillis: 30000,
});

const server = new McpServer({
  name: "database-server",
  version: "1.0.0",
});

// --- Tools ---

server.tool("list_tables", "List all tables with row counts", {}, async () => {
  const result = await pool.query(`
    SELECT schemaname, tablename, n_tup_ins - n_tup_del AS estimated_rows
    FROM pg_stat_user_tables ORDER BY estimated_rows DESC
  `);
  return { content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }] };
});

server.tool("describe_table", "Get table schema", {
  table_name: z.string().describe("Table name"),
}, async ({ table_name }) => {
  if (!/^[a-zA-Z_][a-zA-Z0-9_]*$/.test(table_name)) {
    return { content: [{ type: "text", text: "Invalid table name." }], isError: true };
  }
  const result = await pool.query(`
    SELECT column_name, data_type, is_nullable, column_default
    FROM information_schema.columns WHERE table_name = $1
    ORDER BY ordinal_position
  `, [table_name]);
  return { content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }] };
});

server.tool("query_database", "Run a read-only SQL query (SELECT only)", {
  sql: z.string().describe("SQL SELECT query"),
}, async ({ sql }) => {
  const trimmed = sql.trim().toUpperCase();
  if (!trimmed.startsWith("SELECT") && !trimmed.startsWith("WITH")) {
    return { content: [{ type: "text", text: "Only SELECT queries allowed." }], isError: true };
  }
  try {
    const client = await pool.connect();
    try {
      await client.query("SET statement_timeout = '10s'");
      const result = await client.query(sql);
      return {
        content: [{
          type: "text",
          text: JSON.stringify({ rows: result.rows, rowCount: result.rowCount }, null, 2),
        }],
      };
    } finally {
      client.release();
    }
  } catch (error: any) {
    return { content: [{ type: "text", text: `Error: ${error.message}` }], isError: true };
  }
});

// --- Resources ---

server.resource("db-schema", "schema://tables", async (uri) => {
  const result = await pool.query(`
    SELECT table_name, column_name, data_type, is_nullable
    FROM information_schema.columns WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position
  `);
  const schema: Record<string, any[]> = {};
  for (const row of result.rows) {
    if (!schema[row.table_name]) schema[row.table_name] = [];
    schema[row.table_name].push({
      column: row.column_name, type: row.data_type, nullable: row.is_nullable === "YES",
    });
  }
  return {
    contents: [{ uri: uri.href, mimeType: "application/json", text: JSON.stringify(schema, null, 2) }],
  };
});

// --- Prompts ---

server.prompt("analyze_table", { table_name: z.string() }, ({ table_name }) => ({
  messages: [{
    role: "user",
    content: {
      type: "text",
      text: `Analyze the "${table_name}" table. Describe its schema, run queries to understand data distribution (row count, null percentages, top values for text columns), and summarize findings.`,
    },
  }],
}));

// --- Start ---

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("MCP Database Server running");
}

main().catch(console.error);

package.json additions:

{
  "type": "module",
  "scripts": {
    "start": "tsx src/index.ts",
    "build": "tsc",
    "start:prod": "node dist/index.js"
  }
}
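You can also sanity-check the server from the client side of the same SDK, without Claude Desktop in the loop. A sketch (assumes the server file is at src/index.ts and DATABASE_URL points at a live Postgres, so treat it as a manual smoke test rather than an automated one):

```typescript
// smoke-test.ts — spawn the server over stdio and list its tools.
// Run with: npx tsx smoke-test.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["tsx", "src/index.ts"],
});

const client = new Client({ name: "smoke-test", version: "1.0.0" });
await client.connect(transport);

// Should include list_tables, describe_table, query_database
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();
```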

FAQ

Q: Does this work with GPT / OpenAI?
MCP was created by Anthropic, but the protocol is open. OpenAI has announced MCP support for ChatGPT and their API. It also works with Cursor, Windsurf, Cline, and any client that implements the protocol. You build once.

Q: Is it safe to let an LLM query my database?
The example above only allows SELECT queries and has a 10-second timeout. For production: use a read-only database replica, create a restricted Postgres role with SELECT-only permissions, and add row-level security if needed.
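That restricted role is only a few lines of SQL. A sketch (the role name, password, and database name are illustrative):

```sql
-- Read-only role for the MCP server; point DATABASE_URL at this user
CREATE ROLE mcp_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE myapp TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;
-- Cover tables created in the future, too
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_readonly;
```

With this in place, even a query that slips past the string check has no write permissions to abuse.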

Q: How is this different from function calling?
Function calling is model-specific — you define schemas differently for OpenAI vs Anthropic. MCP is universal. Build the server once, connect any client. It also supports resources (context) and prompts (templates), not just function calls.

Q: Can I use Python instead of TypeScript?
Yes. There's an official Python SDK (the mcp package on PyPI). The patterns are identical — just different syntax:

import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("database-server")

@mcp.tool()
def list_tables() -> str:
    """List all tables in the database"""
    # Your query logic here
    return json.dumps(tables)

Q: Can I deploy this to production?
Absolutely. Use the SSE transport (shown above) for remote access and deploy to any Node.js host that supports long-lived connections: Railway, Render, a VPS. (Serverless platforms like AWS Lambda are a poor fit, since SSE holds a connection open per session.) The tools themselves are stateless, but SSE sessions are sticky, so use session affinity if you run multiple instances.


Resources

  • Model Context Protocol documentation: https://modelcontextprotocol.io
  • TypeScript SDK: https://github.com/modelcontextprotocol/typescript-sdk
  • Python SDK: https://github.com/modelcontextprotocol/python-sdk
  • Reference servers: https://github.com/modelcontextprotocol/servers


Building MCP servers for production? We've been working on similar patterns at Glincker — our orchestration layer uses MCP-compatible tooling to connect AI models with app data in real time. If you're building in this space, I'd love to hear what you're connecting.


Published by GDSKS • Lead Architect, Founder at GLINCKER • Building GLINR STUDIOS | AskVerdict.ai

Found this helpful? Give it a ❤️ and follow for more Dev + AI tutorials!
