Juan Torchia

Posted on • Originally published at juanchi.dev

Local MCP Server in 15 Minutes (And What to Do With It After)

87% of developers who mention MCP on Twitter have never written their own server. I read that in an informal poll inside a Hacker News thread and had to read it twice. Because until three weeks ago, I was part of that 87%.

MCP — Model Context Protocol — has been in every AI conversation for months. Anthropic published it, editors adopted it, Claude Desktop uses it by default. Everyone talks about it. Almost nobody has actually touched it. I decided to be one of the people who touches it.

The result was weird: it worked too fast. And that left me in an uncomfortable place that's worth exploring.

What a Local MCP Server Is and Why It Matters Right Now

MCP is a protocol that lets a language model communicate with external tools in a standardized way. The core idea is simple: instead of every AI integration inventing its own way to call functions, there's a common contract. An MCP server exposes tools, the client (Claude, Cursor, any compatible LLM) discovers them and uses them.

Thinking of it as a REST API for AI context isn't far off. But there's an important difference: MCP is designed to be bidirectional and stateful. The server can maintain state between calls. The client can negotiate capabilities. It's closer to a language protocol (like LSP for editors) than a simple HTTP endpoint.
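Under the hood the contract is JSON-RPC 2.0. A simplified sketch of what a tool-discovery exchange looks like on the wire — the method name comes from the spec, but the payloads here are illustrative, not captured traffic:

```typescript
// Illustrative JSON-RPC 2.0 messages as exchanged over an MCP transport.
// The client asks which tools exist...
const listRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list',
};

// ...and the server answers with its tool catalog.
const listResponse = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    tools: [
      { name: 'read_directory', description: 'Lists files in a local directory' },
    ],
  },
};

console.log(listResponse.result.tools[0].name); // "read_directory"
```

The SDK hides this envelope from you entirely; you only ever see the `result` side.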

Locally, this means I can have a process running on my machine that gives Claude access to my files, my databases, my internal APIs — without sending anything to any external service. For certain use cases, that's huge.

The spec lives at modelcontextprotocol.io. The official TypeScript SDK is on npm. The documentation is surprisingly good for something this new.

Spinning Up the Server: The Real 12 Minutes

I used the official TypeScript SDK. Node 20, a fresh project, a handful of dependencies.

# Initialize project
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx

The simplest possible server — a tool that reads a directory:

// src/server.ts
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
import { readdir, readFile } from 'fs/promises';
import { z } from 'zod';

// Create server instance with metadata
const server = new Server(
  {
    name: 'juanchi-local-tools',
    version: '0.1.0',
  },
  {
    capabilities: {
      tools: {}, // This server exposes tools
    },
  }
);

// Define which tools are available
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: 'read_directory',
        description: 'Lists files in a local directory',
        inputSchema: {
          type: 'object',
          properties: {
            path: {
              type: 'string',
              description: 'Absolute path of the directory to read',
            },
          },
          required: ['path'],
        },
      },
      {
        name: 'read_file',
        description: 'Reads the contents of a text file',
        inputSchema: {
          type: 'object',
          properties: {
            path: {
              type: 'string',
              description: 'Absolute path of the file',
            },
          },
          required: ['path'],
        },
      },
    ],
  };
});

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === 'read_directory') {
    // Validate input with zod
    const { path } = z.object({ path: z.string() }).parse(args);

    try {
      const entries = await readdir(path, { withFileTypes: true });
      const list = entries.map((f) =>
        `${f.isDirectory() ? '[DIR]' : '[FILE]'} ${f.name}`
      );

      return {
        content: [
          {
            type: 'text',
            text: list.join('\n'),
          },
        ],
      };
    } catch (error) {
      return {
        content: [{ type: 'text', text: `Error: ${error}` }],
        isError: true,
      };
    }
  }

  if (name === 'read_file') {
    const { path } = z.object({ path: z.string() }).parse(args);

    try {
      const contents = await readFile(path, 'utf-8');
      return {
        content: [{ type: 'text', text: contents }],
      };
    } catch (error) {
      return {
        content: [{ type: 'text', text: `Error: ${error}` }],
        isError: true,
      };
    }
  }

  // Tool not found
  throw new Error(`Unknown tool: ${name}`);
});

// Connect using stdio transport (standard for local MCP)
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error('MCP Server running on stdio');
}

main().catch(console.error);
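Before wiring it into Claude Desktop, you can poke at the server interactively with the official MCP Inspector — a separate npm package that spawns your server over stdio and gives you a web UI for listing and calling tools:

```shell
# Spawns the server as a child process and opens an inspection UI in the browser
npx @modelcontextprotocol/inspector npx tsx src/server.ts
```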

Minimal tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "strict": true
  },
  "include": ["src"]
}

To connect it to Claude Desktop, edit ~/Library/Application Support/Claude/claude_desktop_config.json on Mac:

{
  "mcpServers": {
    "juanchi-local": {
      "command": "npx",
      "args": ["tsx", "/absolute/path/to/your/project/src/server.ts"]
    }
  }
}

Restart Claude Desktop. A tools icon appears. The tools show up correctly. It worked.

Minute 12.

Minute 13: The Real Problem With a Local MCP Server

There I was. Claude Desktop with my server connected. Tools showing up perfectly. All green.

And I had no idea what to ask it.

This is what nobody tells you in MCP tutorials: the protocol itself is not the hard part. The use case is the hard part.

Sure, you can read directories. But why? Claude already knows how to read files if you paste them into the context. Sure, you can connect a database. But when do you actually need an LLM to run queries autonomously on your local machine?

I started to realize that MCP isn't a solution looking for a problem — it's infrastructure for when you already have the problem figured out. And most tutorials teach it backwards: protocol first, context never.

It reminded me of what happened when I dug into multi-agent systems and their race condition problems: the architecture was elegant, but the real complexity only showed up when you tried to apply it to something concrete. The "15 minutes" of the tutorial is real. What comes after requires actual thinking.

The Real Gotchas I Hit

The stdio transport is not obvious. Local MCP uses stdin/stdout to communicate. That means if you use console.log in your server, you break the protocol because you're writing to stdout. All logging has to go to console.error (stderr). I lost 20 minutes to this one.
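A minimal pattern that avoids the trap: build the log line, then write it only to stderr. (`formatLog` is a hypothetical helper, not part of the SDK.)

```typescript
// Anything written to stdout corrupts the JSON-RPC stream on a stdio transport,
// so all diagnostics must go to stderr.
function formatLog(...parts: unknown[]): string {
  return ['[mcp]', ...parts.map(String)].join(' ');
}

// OK: stderr leaves the protocol channel untouched.
process.stderr.write(formatLog('server ready') + '\n');

// NOT OK in a real server: console.log('server ready') would hit stdout
// and break the framing between client and server.
```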

Absolute paths are mandatory in the Claude Desktop config. Relative paths don't work. The process starts from a different directory than you expect.

The server restarts with every conversation. You have no persistent state between chats unless you implement it explicitly (database, files, etc.). That changes how you design your tools.
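One way around it is to treat a file on disk as the server's memory. A rough sketch — the state file location and shape are arbitrary assumptions, not an SDK feature:

```typescript
import { existsSync, readFileSync, writeFileSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';

// Hypothetical: persist a counter across server restarts in a JSON file.
const STATE_FILE = join(tmpdir(), 'mcp-demo-state.json');

function loadState(): Record<string, unknown> {
  if (!existsSync(STATE_FILE)) return {};
  return JSON.parse(readFileSync(STATE_FILE, 'utf-8'));
}

function saveState(state: Record<string, unknown>): void {
  writeFileSync(STATE_FILE, JSON.stringify(state));
}

// Each server start (i.e. each new conversation) bumps the counter.
const state = loadState();
state.startCount = (Number(state.startCount) || 0) + 1;
saveState(state);
```

Anything your tools should remember between chats — caches, cursors, indexes — needs this kind of explicit persistence.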

Errors are not verbose by default. If something fails in the connection, Claude Desktop just shows that tools aren't available. To debug, you need to check the logs in ~/Library/Logs/Claude/ on Mac.

Zod is practically mandatory. The inputSchema is pure JSON Schema, but validating argument input manually is a nightmare. Zod makes that elegant. Don't skip it.
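To see what zod saves you, here is the hand-rolled type guard you would otherwise write for a single `{ path: string }` argument — a hypothetical manual equivalent of `z.object({ path: z.string() }).parse(args)`:

```typescript
// Manual validation of what zod expresses in one line.
function parsePathArgs(args: unknown): { path: string } {
  if (typeof args !== 'object' || args === null) {
    throw new Error('arguments must be an object');
  }
  const path = (args as Record<string, unknown>).path;
  if (typeof path !== 'string') {
    throw new Error('path must be a string');
  }
  return { path };
}
```

Multiply that by every tool and every field and the nightmare is obvious.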

This reminded me of the moment I understood that LLMs can find real vulnerabilities: the technical capability is impressive, but the context in which you apply it changes everything.

FAQ: Local MCP Server and AI Tools

What's the difference between MCP and a normal Function Calling API?
Function Calling (OpenAI, Anthropic) is provider-specific and generally stateless per request. MCP is an open, standardized protocol that can maintain state and works with any compatible client. The most accurate analogy: Function Calling is like a REST endpoint, MCP is like a complete transport protocol. If you're using Claude today and migrate to another compatible model tomorrow, your MCP servers keep working exactly the same.

Is it safe to give an LLM access to my local filesystem?
Depends on how you implement it. The MCP server runs with your system permissions. If you give it access to /, it could read (or write, if you implement it) anything. The recommended practice is to explicitly limit paths in the server, not trust that the model won't go exploring, and never expose write or execution tools without a human in the loop for confirmation. This applies especially if you're wiring up tools that run shell commands.
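A sketch of the path-limiting idea, assuming a single allow-listed root directory — `ALLOWED_ROOT` is whatever you decide is safe, and this guard is something you write yourself, not part of the MCP SDK:

```typescript
import { resolve, sep } from 'path';

// Hypothetical guard: refuse any path that resolves outside the allowed root.
const ALLOWED_ROOT = resolve('/Users/me/notes'); // assumption: your safe directory

function assertInsideRoot(requested: string): string {
  // Resolving first defeats `../` traversal tricks before the comparison.
  const full = resolve(requested);
  if (full !== ALLOWED_ROOT && !full.startsWith(ALLOWED_ROOT + sep)) {
    throw new Error(`Path outside allowed root: ${requested}`);
  }
  return full;
}
```

Call it at the top of every tool handler that touches the filesystem, before any `readdir` or `readFile`.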

Does it work with clients other than Claude Desktop?
Yes. Cursor has native MCP support. Continue.dev too. Any client that implements the spec can connect. That's precisely the value of the protocol — write the server once, it works across multiple clients. The ecosystem is growing fast; worth checking mcp.so for servers that are already built.

Can I use MCP to connect a local PostgreSQL database?
Yes, and it's one of the most powerful use cases. There's an official @modelcontextprotocol/server-postgres server you can configure in minutes. Give it access to your local instance and the model can run queries, explore the schema, analyze data. Where this really shines is in exploratory analysis tasks where you don't know upfront what queries you need — the model builds them dynamically.
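The Claude Desktop config for it looks roughly like this — the connection string is a placeholder, and you should check the server's README for the exact invocation:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}
```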

How stable is the spec? Is it worth investing time now?
The spec is at version 2025-03-26 as I write this. It changed significantly between 2024 and early 2025. My take: if you're building something production-critical, wait a bit longer. If you're exploring and learning, now is the time — the ecosystem is at that sweet spot where there's enough documentation and examples, but you can still understand the full spec in an afternoon.

Does MCP make sense for a personal project or is it overkill?
Depends on the project. If you have repetitive workflows where an LLM needs access to your local data — notes, code, personal databases, logs — local MCP is a clean solution. If your use case is "I want Claude to help me write code," Cursor or Claude Projects with files are simpler. MCP shines when you need the model to access data sources that can't live in the chat context.

The Protocol Everyone Uses Without Understanding: Where I Landed

Thirty years watching technologies come and go taught me to tell hype from real infrastructure. MCP has the shape of real infrastructure. It's not a product, it doesn't have a marketing page — it's a protocol with a public spec, open source SDKs, and genuine adoption across multiple ecosystems.

What stayed with me from this experiment isn't the code — that was simple. What stayed with me is the minute-13 question: what do you actually use it for?

I have some ideas starting to take shape. A server that gives Claude access to my Railway projects. A server connected to the metrics database from the Buenos Aires bus sonification experiment so I can ask exploratory questions about the data in real time. A server that indexes my Obsidian vault and enables semantic search from the chat.

None of those use cases existed in my head before I spun up the server. That's also part of the process: sometimes you have to build the infrastructure to discover what it's for.

It happened to me with Docker when I first learned it. It happened again with the Rust-based runtimes for TypeScript: first you understand the mechanics, then the natural use case surfaces on its own. The technology that lasts is the kind that doesn't impose the problem on you; it gives you the tools to solve it when you find it.

That's what MCP feels like to me. I don't know yet if I'm right. But minute 13 doesn't feel like a failure anymore — it feels like the beginning of the interesting part.

If you already have a clear use case and want to go deeper into the spec, start at modelcontextprotocol.io. If you're still in minute 13 like I was, that's fine. Build the server, let it run, and wait for the problem to find you.
