Give Superpowers to Your AI Inside Your Cursor IDE
We use AI every day. We ask it to generate code, fix bugs, refactor logic, and explain documentation. But what if your AI could actually call your APIs, fetch real data, update backend records, and test endpoints — all without leaving your IDE?
That's where MCP comes in.
What is MCP?
In simple words:
MCP stands for Model Context Protocol, an open standard for connecting AI assistants to external tools and data. Building an MCP server is nothing but giving extra superpowers to an LLM-based chat agent.
Instead of just generating code, your AI can now execute real API calls, read Swagger documentation, perform structured operations, and return real backend data. It becomes more than a chatbot. It becomes a system operator.
The Everyday Developer Problem
Let's be honest. If you want to test an API manually, what do you do?
Open Postman → Search for the correct endpoint → Add authentication → Add headers → Add query parameters → Modify the payload → Click Send → Debug errors → Repeat.
Now imagine you're in the middle of development. You're focused. Suddenly you need to fetch 5 users, or update one record, or verify an API response, or test a mutation.
You leave your IDE. You open another tool. You lose focus. You get distracted.
Sound familiar?
What If…
What if the entire process could be automated with just one prompt inside your IDE?
No searching endpoints. No opening Postman. No writing temporary scripts.
You just type:
"Get the latest 10 users"
And your AI knows which endpoint to call, sends the request, and returns structured, real data.
Exciting, right? Now let's build this.
What We're Building
We're going to build a custom MCP server using Node.js and TypeScript that connects to the Fake REST API — no auth required — and works inside Cursor IDE and Claude Desktop.
This will expose one powerful tool called query_api which allows AI to:
- Send GET/POST/PUT/PATCH/DELETE requests
- Pass query params
- Send a request body
- Return structured output
It also exposes Swagger docs as a readable resource so the AI understands the API structure automatically.
Available resources: Activities, Authors, Books, CoverPhotos, Users
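To preview where we're headed: a prompt like "Create a new activity" boils down to the AI constructing an argument object for query_api. The shapes below are illustrative sketches (the actual schema is defined in Step 2, and the params/body fields are example values, not required ones):

```typescript
// Illustrative query_api argument objects. Paths follow the
// Fake REST API's /api/v1/<Resource> convention.
const listBooks = {
  method: "GET",
  path: "/api/v1/Books",
};

const createActivity = {
  method: "POST",
  path: "/api/v1/Activities",
  body: {
    id: 0,
    title: "Team Standup",
    dueDate: new Date().toISOString(),
    completed: false,
  },
};

console.log(listBooks.path, createActivity.method);
```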
Requirements
Before starting, make sure you have:
- Node.js installed
- Cursor IDE and/or Claude Desktop
Step 1 — Scaffold the Project
Instead of setting everything up manually, the MCP SDK gives you a ready-made command to scaffold a starter project instantly.
npx @modelcontextprotocol/create-server my-mcp-server
cd my-mcp-server
It will ask for a name and description, then generate the full project structure — package.json, tsconfig.json, and a wired-up src/index.ts — all in one shot.
Then install dependencies:
npm install
npm install axios
Your project structure will look like this:
my-mcp-server/
├── src/
│ └── index.ts
├── package.json
├── tsconfig.json
└── node_modules/
Step 2 — Write the MCP Server
Now open src/index.ts and replace its contents with the following:
#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListResourcesRequestSchema,
  ListToolsRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import axios from "axios";
/**
 * Fake REST API (https://fakerestapi.azurewebsites.net)
 * No auth required. Resources: Activities, Authors, Books, CoverPhotos, Users.
 */
const API_BASE_URL = "https://fakerestapi.azurewebsites.net";

/**
 * Axios client
 */
const apiClient = axios.create({
  baseURL: API_BASE_URL,
  headers: {
    Accept: "application/json",
    "Content-Type": "application/json",
  },
});
/**
 * Build axios request config
 */
function buildRequestConfig(
  method: string,
  path: string,
  headers: Record<string, string>,
  params?: Record<string, unknown>,
  body?: Record<string, unknown>
) {
  const lowerMethod = method.toLowerCase();
  // Ensure the path always starts with "/"
  const normalizedPath = path.startsWith("/") ? path : `/${path}`;
  const config: any = {
    method: lowerMethod,
    url: normalizedPath,
    headers: { ...headers },
  };
  if (params) config.params = params;
  // Only attach a body for methods that support one
  if (body && ["post", "put", "patch"].includes(lowerMethod)) {
    config.data = body;
  }
  return config;
}
/**
 * Format success response
 */
function formatSuccessResponse(response: any) {
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(
          {
            status: response.status,
            statusText: response.statusText,
            data: response.data,
          },
          null,
          2
        ),
      },
    ],
  };
}
/**
 * Format error response
 */
function formatErrorResponse(error: any) {
  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(
          {
            error: error.message,
            response: error.response
              ? {
                  status: error.response.status,
                  data: error.response.data,
                }
              : null,
          },
          null,
          2
        ),
      },
    ],
    isError: true,
  };
}
/**
 * Create MCP Server
 */
const server = new Server(
  { name: "fakerestapi-mcp", version: "1.0.0" },
  { capabilities: { resources: {}, tools: {} } }
);
/**
 * 1️⃣ Expose Swagger as a resource
 */
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [
      {
        uri: "swagger://fakerestapi",
        mimeType: "application/json",
        name: "Fake REST API Documentation",
        description:
          "OpenAPI spec for Fake REST API. Read this resource to see all endpoints, request/response schemas, and examples. Resources: Activities, Authors, Books, CoverPhotos, Users.",
      },
    ],
  };
});
/**
 * 2️⃣ Allow AI to read Swagger
 */
server.setRequestHandler(ReadResourceRequestSchema, async () => {
  try {
    const response = await axios.get(
      "https://fakerestapi.azurewebsites.net/swagger/v1/swagger.json"
    );
    return {
      contents: [
        {
          uri: "swagger://fakerestapi",
          mimeType: "application/json",
          text: JSON.stringify(response.data, null, 2),
        },
      ],
    };
  } catch (error: any) {
    throw new Error(`Failed to load Swagger: ${error.message}`);
  }
});
/**
 * 3️⃣ Register query_api tool
 */
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "query_api",
      description:
        "Interact with the Fake REST API (no auth). Examples: list activities → GET /api/v1/Activities; get book by id → GET /api/v1/Books/1; create activity → POST /api/v1/Activities with body; update user → PUT /api/v1/Users/1; delete → DELETE /api/v1/Books/1.",
      inputSchema: {
        type: "object",
        properties: {
          method: {
            type: "string",
            enum: ["GET", "POST", "PUT", "PATCH", "DELETE"],
            default: "GET",
            description: "HTTP method.",
          },
          path: {
            type: "string",
            description:
              "Endpoint path. Example: /api/v1/Activities, /api/v1/Books/1",
          },
          params: {
            type: "object",
            additionalProperties: true,
            description: "Optional query parameters.",
          },
          body: {
            type: "object",
            additionalProperties: true,
            description: "Request body for POST, PUT, PATCH.",
          },
          headers: {
            type: "object",
            additionalProperties: { type: "string" },
            description: "Optional extra headers.",
          },
        },
        required: ["path"],
      },
    },
  ],
}));
/**
 * 4️⃣ Handle tool execution
 */
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const args = request.params.arguments as {
    method?: string;
    path: string;
    params?: Record<string, unknown>;
    body?: Record<string, unknown>;
    headers?: Record<string, string>;
  };

  try {
    const requestConfig = buildRequestConfig(
      args.method || "GET",
      args.path,
      args.headers || {},
      args.params,
      args.body
    );
    const response = await apiClient.request(requestConfig);
    return formatSuccessResponse(response);
  } catch (error: any) {
    return formatErrorResponse(error);
  }
});
/**
 * Start server
 */
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for the MCP stdio transport
  console.error("Fake REST API MCP Server running...");
}

main().catch((error) => {
  console.error("Server error:", error);
  process.exit(1);
});
What's happening here, section by section:
- apiClient — a single axios instance created once at the top, reused for every request.
- buildRequestConfig() — assembles a clean axios config from tool arguments and normalizes the path to always start with /.
- formatSuccessResponse() / formatErrorResponse() — MCP expects responses in a specific shape; these helpers keep that consistent.
- ListResourcesRequestSchema — exposes the Fake REST API's Swagger/OpenAPI docs as a readable resource.
- ReadResourceRequestSchema — fetches the Swagger JSON when the AI requests it.
- ListToolsRequestSchema — registers the query_api tool with rich descriptions so the AI knows exactly what to use.
- CallToolRequestSchema — the main handler: builds the request, calls the API, and returns the result.
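The config builder has two subtle rules worth seeing in isolation: paths are normalized to always start with a slash, and a body is only attached for POST/PUT/PATCH. Here is the same logic in a standalone, runnable sketch:

```typescript
// Standalone mirror of buildRequestConfig's two subtle rules:
// (1) paths are normalized to start with "/",
// (2) a request body is only attached for POST/PUT/PATCH.
function buildRequestConfig(
  method: string,
  path: string,
  headers: Record<string, string>,
  params?: Record<string, unknown>,
  body?: Record<string, unknown>
) {
  const lowerMethod = method.toLowerCase();
  const normalizedPath = path.startsWith("/") ? path : `/${path}`;
  const config: any = { method: lowerMethod, url: normalizedPath, headers: { ...headers } };
  if (params) config.params = params;
  if (body && ["post", "put", "patch"].includes(lowerMethod)) config.data = body;
  return config;
}

const getConfig = buildRequestConfig("GET", "api/v1/Books", {});
const deleteConfig = buildRequestConfig("DELETE", "/api/v1/Books/1", {}, undefined, { ignored: true });

console.log(getConfig.url);          // "/api/v1/Books" (leading slash added)
console.log("data" in deleteConfig); // false (DELETE never carries a body)
```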
Step 3 — Build the Project
npm run build
This compiles TypeScript from src/ into the dist/ folder. Once this succeeds, you're ready to connect your AI client — no need to manually start the server. Cursor and Claude Desktop will launch it automatically from the config you provide next.
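The build relies on the scripts the scaffold generated for you. As a rough sketch only (your generated package.json may differ in details), the relevant parts typically look something like this:

```json
{
  "type": "module",
  "bin": {
    "my-mcp-server": "./dist/index.js"
  },
  "scripts": {
    "build": "tsc",
    "watch": "tsc --watch"
  }
}
```

The bin entry explains why the clients point at dist/index.js: that compiled file is the server's executable entry point.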
Step 4 — Connect to Your AI
The same MCP server works with both Cursor and Claude Desktop — and the config is exactly the same for both.
Cursor: Go to Settings → MCP Servers and add a new entry.
Claude Desktop: Open your config file:
- Mac: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Config (same for both)
{
"mcpServers": {
"my-mcp-server": {
"command": "node",
"args": ["/absolute/path/to/my-mcp-server/dist/index.js"]
}
}
}
Three things to keep in mind:
- Use absolute paths. Neither Cursor nor Claude Desktop inherits your shell's working directory, so relative paths will fail.
- Restart after saving. Both tools read this config only on startup — fully quit and reopen after making changes.
- Verify Node is in PATH. If the tool can't find node, replace "command": "node" with the full path. Find it by running which node (Mac/Linux) or where node (Windows).
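For example, if which node prints /usr/local/bin/node, the adjusted config would look like this (both paths below are placeholders for your own):

```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "/usr/local/bin/node",
      "args": ["/absolute/path/to/my-mcp-server/dist/index.js"]
    }
  }
}
```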
Try It Out
Whether you're in Cursor or Claude Desktop, try prompts like:
- Get 10 activities
- Get book with ID 1
- Create a new activity called "Team Standup"
- Delete user with ID 3
Your AI will call query_api directly — no Postman, no curl, no temporary scripts. Just execution.
Why This Changes Everything
| Without MCP | With MCP |
|---|---|
| AI writes temporary scripts that break | All operations go through a structured tool |
| API calls are inconsistent | Responses are always predictable |
| You keep jumping between tools | AI reads Swagger docs automatically |
| Your flow gets interrupted constantly | You stay focused inside your IDE |
What You Can Build With This
Once your backend is MCP-enabled, the possibilities open up fast: conversational admin panels, AI-powered dashboards, natural-language order management, automated reporting agents, and internal operations bots. The frontend becomes optional. The capability becomes primary.
Final Thoughts
MCP is not complicated. It's simply a structured bridge between AI and your systems. One Node project, one tool, one execution layer.
Build it once. Connect it to Cursor or Claude Desktop. The same server works everywhere.
Once you build your first MCP server, you'll wonder why you were manually testing APIs all this time. And this is just the beginning — you can extend further with environment switching, logging, authentication middleware, and multiple tools for different parts of your system.
AI is evolving. And now your system is ready for it.
