DEV Community

ShatilKhan


Building my first Local MCP server using Swagger & OpenAPITools

Let's be real: enterprise software can feel like a labyrinth. Our app at AnchorBlock, AnchorBooks.ai, is a beast with hundreds of features for finance and bookkeeping. It's powerful, but finding what you need can mean a dozen clicks through a maze of menus.

So, I got this crazy idea: what if you could just talk to the app? What if you could say, "Hey, create an invoice for client X with these items," and it would just... happen?

That's the dream, right? To give our app a brain. This is the story of how I took the first step, moving from a complex API to a simple chat prompt. In this first part of our two-part series, we're going full mad-scientist-in-the-lab. We'll build a local tool server using the Model Context Protocol (MCP) and hook it up to GitHub Copilot, proving we can control our entire backend with plain English.

Let's get our hands dirty.

Step 1: The Rosetta Stone for AI - Turning an API into Tools

An AI model, as smart as it is, has no idea what a POST /api/item request is. You can't just give it your backend URL and hope for the best. It needs a manual, an instruction book that clearly explains what's possible.

For us, that manual was our Swagger (OpenAPI) documentation. With 249 endpoints, our NestJS backend was thoroughly documented, which turned out to be the secret ingredient. Each endpoint definition, with its inputs and outputs, was a potential "skill" we could teach our AI.

But was I going to manually write a wrapper script for all 249 endpoints? Absolutely not. My laziness (I call it efficiency) led me to a game-changing tool: OpenAPITools.

Think of OpenAPITools as a magic translator. You feed it your swagger.json file, and it spits out a ready-to-use MCP server structure, complete with Python scripts for every single API endpoint. It's like giving the AI a universal remote for our app, and OpenAPITools just programmed all the buttons for us.

Here’s a quick look at that workflow:

(Diagram: swagger.json feeding into OpenAPITools, which generates the MCP server and its per-endpoint Python tool scripts)

Step 2: A Peek Under the Hood: The Anatomy of a Single Tool

Let's zoom in on one example to see how this actually works. Say we want to create a new item in our inventory.

In our NestJS backend, we have a CreateItemDto that defines exactly what data is needed. It's decorated with @ApiProperty for Swagger, which is key.

The Source of Truth: dto/create-item.dto.ts

(Screenshot: the CreateItemDto class, with an @ApiProperty decorator on each field)

When OpenAPITools reads this, it generates two crucial files:

  1. The AI's Cheat Sheet: tools.json (snippet)
    This file is the manifest. It tells the AI, "Hey, there's a tool called ItemCreator. It needs a name, a type, a price, and so on." The schema here is a direct translation of our DTO.

(Snippet: the ItemCreator entry in tools.json)
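Since the real generated file can't be shared, here is a rough sketch of what a manifest entry like that could look like. The exact field names, schema shape, and which fields are required are my assumptions for illustration, not the generator's actual output:

```json
{
  "tools": [
    {
      "name": "ItemCreator",
      "description": "Create a new inventory item via POST /api/item",
      "inputSchema": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "type": { "type": "string", "enum": ["goods", "service"] },
          "sellingPrice": { "type": "number" },
          "purchasePrice": { "type": "number" },
          "unit": { "type": "string" },
          "organizationId": { "type": "integer" }
        },
        "required": ["name", "type", "sellingPrice"]
      }
    }
  ]
}
```

The key idea is that every property here traces back to a decorated field on the DTO, so the AI sees exactly the contract the backend enforces.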

  2. The Worker Bee: ItemCreator-main.py
    This Python script is the muscle. Its job is dead simple: get the JSON input from the AI, build a real HTTP request with the right headers (including our JWT token for auth!), and fire it off to our actual NestJS backend.

The script walks through five steps:

  1. Extract parameters from the AI's request
  2. Build the payload for our NestJS API
  3. Get our secret JWT token from the environment
  4. Make the actual API call
  5. Send the result back to the AI
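The generated script itself stays inside the company repo, so here is a minimal Python sketch of what such a worker does. The endpoint path (/api/item), the JWT_TOKEN and API_BASE environment variables, the stdin/stdout framing, and the payload field names are all my assumptions for illustration, not the generator's actual output:

```python
import json
import os
import sys
import urllib.request

# Assumption: the backend base URL comes from the environment.
API_BASE = os.environ.get("API_BASE", "http://localhost:3000")

def build_payload(params: dict) -> dict:
    """Map the AI's arguments onto the fields CreateItemDto expects.
    (Field names here are illustrative assumptions.)"""
    return {
        "name": params["name"],
        "type": params["type"],
        "sellingPrice": params["sellingPrice"],
        "purchasePrice": params["purchasePrice"],
        "unit": params.get("unit"),
    }

def run_tool() -> None:
    # 1. Extract parameters from the AI's request (JSON on stdin).
    params = json.load(sys.stdin)
    # 2. Build the payload for our NestJS API.
    payload = build_payload(params)
    # 3. Get our secret JWT token from the environment.
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['JWT_TOKEN']}",
    }
    # 4. Make the actual API call.
    req = urllib.request.Request(
        f"{API_BASE}/api/item",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # 5. Send the result back to the AI (JSON on stdout).
        print(resp.read().decode("utf-8"))

# run_tool() would be invoked by the MCP server when the agent calls the tool.
```

Because every worker follows the same five-step shape, generating 249 of them is purely mechanical.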

And just like that, every single one of our 249 endpoints had its own little Python worker, ready and waiting for orders.

Step 3: Firing Up the Local Brain

With our tools ready, we needed a "switchboard operator" to direct traffic from the AI agent to the correct Python script. The generated project includes a lightweight Node.js server that does exactly this. For local testing, it runs in stdio mode, communicating silently in the background.

The stdio-server.js file orchestrates everything, but the gist is simple:

It starts up and reads our tools.json to know what tools are available.

It listens for two main commands from an agent: ListTools (what can you do?) and CallTool (do this thing!).

When CallTool comes in, it finds the right Python script, passes along the inputs, and pipes the result back to the agent.

I just pop open my terminal, run node index.js, and my local AI "brain" is online.
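The generated switchboard is Node.js, but to show the shape of the logic, here is the same dispatch loop sketched in Python. The ListTools/CallTool method names come from the description above; the registry shape and the stdin argument-passing are my assumptions for illustration:

```python
import json
import subprocess
import sys

# Stand-in for what the server learns by reading tools.json at startup:
# tool name -> the Python worker script that implements it.
TOOL_REGISTRY = {"ItemCreator": "ItemCreator-main.py"}

def handle_request(request: dict) -> dict:
    """Route one agent request to the right handler."""
    if request["method"] == "ListTools":
        # "What can you do?" -> advertise every known tool.
        return {"tools": sorted(TOOL_REGISTRY)}
    if request["method"] == "CallTool":
        # "Do this thing!" -> run the matching worker script,
        # feeding it the arguments on stdin and piping stdout back.
        script = TOOL_REGISTRY[request["params"]["name"]]
        proc = subprocess.run(
            [sys.executable, script],
            input=json.dumps(request["params"]["arguments"]),
            capture_output=True,
            text=True,
        )
        return {"result": proc.stdout.strip()}
    return {"error": f"unknown method: {request['method']}"}
```

The real server adds protocol framing on top, but the routing decision is exactly this small.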

Step 4: The "It's Alive!" Moment: Hooking up GitHub Copilot

This is where it gets really cool. We have a server running on our machine that knows how to use our app. Now, let's connect an AI to it. GitHub Copilot's agent mode is perfect for this.

Here’s how you can do it too:

  1. Light Up the Server
    In your terminal, navigate to your MCP project folder and run the server.
```
node index.js
```

It will just sit there, waiting. That's what it's supposed to do!

  2. Tweak Your VS Code Settings
    Open your VS Code User settings.json file. The easiest way is to press Ctrl+Shift+P (or Cmd+Shift+P) and type Preferences: Open User Settings (JSON).

  3. Tell Copilot About Your Server
    Add this snippet to your settings.json. This tells Copilot, "Hey, there's a tool server available. If I say @anchorbooks, you should run this command to talk to it."

```json
{
  // ...your other settings...
  "github.copilot.mcp.enabled": true,
  "github.copilot.mcp.servers": {
    "anchorbooks": { // Call it whatever you want
      "command": "node",
      "args": [
        // IMPORTANT: Use the FULL, absolute path to your index.js file
        "G:\\Office\\anchorbooks\\item-mcp-tool\\index.js"
      ]
    }
  }
}
```

Pro Tip: Make sure the path is correct! This trips up a lot of people.

  4. Let the Magic Happen
    Restart VS Code to make sure the settings take effect. Now, open the GitHub Copilot chat panel and type @—you should see your server's name (@anchorbooks) pop up!

I sent my first command:

@anchorbooks create a new item of type goods named "Premium Arabica Beans" with a selling price of 25 and purchase price of 15. Use unit "kg", sales account 1, purchase account 2, for organization 1.

I held my breath. A few seconds later... success!

Copilot parsed my sentence, correctly identified the ItemCreator tool, mapped all the parameters, and sent the request to my local Node.js server. The server executed the Python script, which made a real, authenticated API call to my backend. A new item was created in my database, and the JSON response appeared right in my chat window.
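Under the hood, the agent's side of that exchange reduces to a structured call roughly like this. The message shape and field names are assumptions for illustration, matched to the parameters in my prompt:

```json
{
  "method": "CallTool",
  "params": {
    "name": "ItemCreator",
    "arguments": {
      "name": "Premium Arabica Beans",
      "type": "goods",
      "sellingPrice": 25,
      "purchasePrice": 15,
      "unit": "kg",
      "salesAccountId": 1,
      "purchaseAccountId": 2,
      "organizationId": 1
    }
  }
}
```

All the natural-language understanding happens on Copilot's side; by the time the request reaches my server, it's just structured JSON.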

(Placeholder: Screenshot of the Copilot chat with the prompt and the successful tool execution and JSON response)

It felt like magic. I could now manage invoices, update bills, transfer stock—perform over 33 complex backend operations—just by chatting with Copilot.

What's Next?

We've built a powerful local playground. We proved that we can bridge the gap between human language and a complex API. But let's be honest, this is a lab experiment. The "brain" is tethered to my machine and depends on an external tool like VS Code.

This setup is awesome, but it's not a product.

In Part 2 of this series, we're cutting the cord. I’ll show you how we took this successful experiment and turned it into a real, integrated feature by:

Ditching the local server and building the MCP logic directly into our NestJS backend.

Creating our very own chat interface right inside the AnchorBooks.ai app.

Plugging into powerful models like Claude 3.5 for smarter, faster tool use.

Stay tuned, because that's when we truly give our application a mind of its own.

Author's Note: No code repos are shared here since this was a company project. I wanted to share my experience building it, and I'll continue the story in the second post with more code examples for better understanding. This workflow also handles complex operations like creating invoices, adding new customers, and multi-stage workflows, which we'll discuss in future posts.
