TL;DR
Build an MCP server with TypeScript that exposes three actionable tools: run_test, validate_schema, and list_environments. Configure it in ~/.claude/settings.json for Claude Code or .cursor/mcp.json for Cursor. This lets your AI agents run Apidog tests, validate OpenAPI schemas, and fetch environments directly from the chat interface. The full source is about 150 lines and leverages the @modelcontextprotocol/sdk package.
MCP lets Claude Code, Cursor, and other AI agents run Apidog API tests, validate schemas, and compare responses without leaving their chat UI.
💡 Scenario: Your AI agent just finished building an API endpoint. Instead of copying code and manually running tests in Apidog, you want to issue a single command from your chat and get results immediately.
That's what the Model Context Protocol (MCP) enables: AI agents accessing external tools through a standard interface. Build an MCP server for Apidog and your AI agent can test, validate, and fetch environments without a context switch.
What Is MCP?
MCP (Model Context Protocol) is a protocol for AI agents to access external tools and data. Think of it as a plugin system that works across Claude Code, Cursor, and any MCP-compatible client.
An MCP server exposes tools (functions the agent can call) and resources (data the agent can read). Here, your Apidog MCP server will expose tools for API testing.
┌───────────────┐          ┌─────────────┐          ┌──────────┐
│   AI Agent    │          │ MCP Server  │          │  Apidog  │
│ (Claude Code) │─────────►│ (Your Code) │─────────►│   API    │
└───────────────┘   JSON   └─────────────┘   HTTP   └──────────┘
Step 1: Set Up the Project
Create a new TypeScript project:
mkdir apidog-mcp-server
cd apidog-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
Create tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src/**/*"],
"exclude": ["node_modules"]
}
Add build scripts to package.json, and set "type": "module" so Node runs the compiled output as ES modules (the server below uses top-level await):
{
"scripts": {
"build": "tsc",
"start": "node dist/index.js"
},
"type": "module"
}
Step 2: Create the MCP Server Skeleton
Create src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "apidog",
version: "1.0.0",
description: "Apidog API testing tools for AI agents"
});
// Tools will be added here
const transport = new StdioServerTransport();
await server.connect(transport);
This sets up the MCP server and stdio transport. The transport manages communication with the AI agent over stdin/stdout.
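Under the hood, the transport exchanges newline-delimited JSON-RPC 2.0 messages over stdin/stdout. A sketch of the request shape an agent sends to invoke a tool and the response shape the server writes back (field names follow the MCP spec; the tool name and argument values are hypothetical):

```typescript
// Shape of a tools/call request as it arrives on the server's stdin.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "run_test",
    arguments: { projectId: "proj_12345" } // hypothetical project ID
  }
};

// Shape of the server's reply on stdout; the id echoes the request id.
const response = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: {
    content: [{ type: "text", text: "...test results..." }]
  }
};

// Each message travels as a single line of JSON.
const wire = JSON.stringify(request) + "\n";
```

The SDK handles all of this framing for you; the sketch only shows what crosses the pipe.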
Step 3: Define the run_test Tool
Add the primary tool to src/index.ts:
// Tool: run_test
server.tool(
"run_test",
{
projectId: z.string().describe("Apidog project ID (from project URL)"),
environmentId: z.string().optional().describe("Environment ID for test execution"),
testSuiteId: z.string().optional().describe("Test suite ID to run a specific suite")
},
async ({ projectId, environmentId, testSuiteId }) => {
const apiKey = process.env.APIDOG_API_KEY;
if (!apiKey) {
return {
content: [{ type: "text", text: "Error: APIDOG_API_KEY environment variable not set" }]
};
}
// Build API URL
let url = `https://api.apidog.com/v1/projects/${projectId}/tests/run`;
const params = new URLSearchParams();
if (environmentId) params.append("environmentId", environmentId);
if (testSuiteId) params.append("testSuiteId", testSuiteId);
if (params.toString()) url += `?${params.toString()}`;
try {
const response = await fetch(url, {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json"
}
});
if (!response.ok) {
const error = await response.text();
return {
content: [{ type: "text", text: `API Error: ${response.status} ${error}` }]
};
}
const results = await response.json();
return {
content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
};
} catch (error) {
return {
content: [{ type: "text", text: `Request failed: ${error instanceof Error ? error.message : String(error)}` }]
};
}
}
);
Key points:
- Name: `run_test` (AI agents select tools by name)
- Schema: Zod validation for the input parameters
- Handler: an async function that calls the Apidog API
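The query-string assembly inside the handler can be isolated for illustration. A small sketch of that same logic as a standalone function (the base URL mirrors the one used in the handler):

```typescript
// Builds the run-tests URL with optional query parameters,
// mirroring the logic inside the run_test handler above.
function buildRunTestUrl(
  projectId: string,
  environmentId?: string,
  testSuiteId?: string
): string {
  let url = `https://api.apidog.com/v1/projects/${projectId}/tests/run`;
  const params = new URLSearchParams();
  if (environmentId) params.append("environmentId", environmentId);
  if (testSuiteId) params.append("testSuiteId", testSuiteId);
  // Only append "?" when at least one parameter was provided.
  if (params.toString()) url += `?${params.toString()}`;
  return url;
}
```

`URLSearchParams` also handles percent-encoding, so IDs with special characters stay safe in the query string.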
Step 4: Add the validate_schema Tool
Add a schema validation tool for OpenAPI:
// Tool: validate_schema
server.tool(
"validate_schema",
{
schema: z.object({}).passthrough().describe("OpenAPI 3.x schema object to validate"), // passthrough() keeps unknown keys; plain z.object({}) would strip the schema to {}
strict: z.boolean().optional().default(false).describe("Enable strict mode for additional checks")
},
async ({ schema, strict }) => {
const apiKey = process.env.APIDOG_API_KEY;
if (!apiKey) {
return {
content: [{ type: "text", text: "Error: APIDOG_API_KEY environment variable not set" }]
};
}
try {
const response = await fetch("https://api.apidog.com/v1/schemas/validate", {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json"
},
body: JSON.stringify({ schema, strict })
});
const result = await response.json();
if (!response.ok) {
return {
content: [{ type: "text", text: `Validation failed: ${JSON.stringify(result.errors, null, 2)}` }]
};
}
return {
content: [{
type: "text",
text: result.valid
? "Schema is valid OpenAPI 3.x"
: `Warnings: ${JSON.stringify(result.warnings, null, 2)}`
}]
};
} catch (error) {
return {
content: [{ type: "text", text: `Validation failed: ${error instanceof Error ? error.message : String(error)}` }]
};
}
}
);
Step 5: Add the list_environments Tool
Add a tool to fetch available environments:
// Tool: list_environments
server.tool(
"list_environments",
{
projectId: z.string().describe("Apidog project ID")
},
async ({ projectId }) => {
const apiKey = process.env.APIDOG_API_KEY;
if (!apiKey) {
return {
content: [{ type: "text", text: "Error: APIDOG_API_KEY environment variable not set" }]
};
}
try {
const response = await fetch(
`https://api.apidog.com/v1/projects/${projectId}/environments`,
{ headers: { "Authorization": `Bearer ${apiKey}` } }
);
if (!response.ok) {
const error = await response.text();
return {
content: [{ type: "text", text: `API Error: ${response.status} ${error}` }]
};
}
const environments = await response.json();
return {
content: [{
type: "text",
text: environments.length === 0
? "No environments found for this project"
: environments.map((e: any) =>
`- ${e.name} (ID: ${e.id})${e.isDefault ? " [default]" : ""}`
).join("\n")
}]
};
} catch (error) {
return {
content: [{ type: "text", text: `Request failed: ${error instanceof Error ? error.message : String(error)}` }]
};
}
}
);
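The formatting step at the end of the handler can be pulled out for clarity. A sketch of that mapping, with a minimal `ApidogEnvironment` interface that is an assumption about the response shape (the real Apidog payload may carry more fields):

```typescript
// Assumed minimal shape of an environment record returned by the API.
interface ApidogEnvironment {
  id: string;
  name: string;
  isDefault?: boolean;
}

// Formats environments into the bullet list the tool returns to the agent.
function formatEnvironments(environments: ApidogEnvironment[]): string {
  if (environments.length === 0) return "No environments found for this project";
  return environments
    .map((e) => `- ${e.name} (ID: ${e.id})${e.isDefault ? " [default]" : ""}`)
    .join("\n");
}
```

Returning plain text rather than raw JSON keeps the agent's context window small and the output readable in chat.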
Step 6: Build and Test
Build the server:
npm run build
To test locally, create a simple MCP client (test-client.js):
import { spawn } from "child_process";
const server = spawn("node", ["dist/index.js"], {
env: { ...process.env, APIDOG_API_KEY: "your-api-key" }
});
server.stdout.on("data", (data) => {
console.log(`Server output: ${data}`);
});
server.stderr.on("data", (data) => {
console.error(`Server error: ${data}`);
});
const message = {
jsonrpc: "2.0",
id: 1,
method: "initialize",
params: {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-client", version: "1.0.0" }
}
};
server.stdin.write(JSON.stringify(message) + "\n");
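Per the MCP spec, after the initialize response arrives the client sends a `notifications/initialized` notification and can then enumerate tools. Sketches of those two follow-up messages (the id value is arbitrary):

```typescript
// Notification sent after initialize completes; notifications carry no id.
const initialized = {
  jsonrpc: "2.0" as const,
  method: "notifications/initialized"
};

// Request to enumerate the server's registered tools.
const listTools = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "tools/list",
  params: {}
};

// Both are written to the server's stdin, one JSON object per line.
const payload = JSON.stringify(initialized) + "\n" + JSON.stringify(listTools) + "\n";
```

Writing `payload` to `server.stdin` after the initialize response should produce a `tools/list` result listing all three tools.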
Step 7: Configure for Claude Code
Add your MCP server to Claude Code:
Create or edit ~/.claude/settings.json:
{
"mcpServers": {
"apidog": {
"command": "node",
"args": ["/absolute/path/to/apidog-mcp-server/dist/index.js"],
"env": {
"APIDOG_API_KEY": "your-api-key-here"
}
}
}
}
Restart Claude Code. The Apidog tools will be available in the chat.
Example usage in Claude Code:
Use the run_test tool to run tests on my Apidog project.
Project ID: proj_12345
Environment: staging
Validate this OpenAPI schema against Apidog rules:
[paste schema]
List all environments for project proj_12345
Step 8: Configure for Cursor
Cursor uses a similar MCP config. Create .cursor/mcp.json:
{
"mcpServers": {
"apidog": {
"command": "node",
"args": ["/absolute/path/to/apidog-mcp-server/dist/index.js"],
"env": {
"APIDOG_API_KEY": "your-api-key-here"
}
}
}
}
Example usage in Cursor:
@apidog run_test projectId="proj_12345" environmentId="staging"
Complete Source Code
Full src/index.ts:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "apidog",
version: "1.0.0",
description: "Apidog API testing tools for AI agents"
});
// run_test tool
server.tool(
"run_test",
{
projectId: z.string().describe("Apidog project ID"),
environmentId: z.string().optional().describe("Environment ID"),
testSuiteId: z.string().optional().describe("Test suite ID")
},
async ({ projectId, environmentId, testSuiteId }) => {
const apiKey = process.env.APIDOG_API_KEY;
if (!apiKey) {
return { content: [{ type: "text", text: "Error: APIDOG_API_KEY not set" }] };
}
let url = `https://api.apidog.com/v1/projects/${projectId}/tests/run`;
const params = new URLSearchParams();
if (environmentId) params.append("environmentId", environmentId);
if (testSuiteId) params.append("testSuiteId", testSuiteId);
if (params.toString()) url += `?${params.toString()}`;
try {
const response = await fetch(url, {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json"
}
});
const results = await response.json();
return {
content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
};
} catch (error) {
return {
content: [{ type: "text", text: `Request failed: ${error instanceof Error ? error.message : String(error)}` }]
};
}
}
);
// validate_schema tool
server.tool(
"validate_schema",
{
schema: z.object({}).passthrough().describe("OpenAPI schema"), // passthrough() keeps unknown keys
strict: z.boolean().optional().default(false)
},
async ({ schema, strict }) => {
const apiKey = process.env.APIDOG_API_KEY;
if (!apiKey) {
return { content: [{ type: "text", text: "Error: APIDOG_API_KEY not set" }] };
}
const response = await fetch("https://api.apidog.com/v1/schemas/validate", {
method: "POST",
headers: {
"Authorization": `Bearer ${apiKey}`,
"Content-Type": "application/json"
},
body: JSON.stringify({ schema, strict })
});
const result = await response.json();
return {
content: [{
type: "text",
text: result.valid
? "Schema is valid"
: `Issues: ${JSON.stringify(result.errors || result.warnings, null, 2)}`
}]
};
}
);
// list_environments tool
server.tool(
"list_environments",
{
projectId: z.string().describe("Apidog project ID")
},
async ({ projectId }) => {
const apiKey = process.env.APIDOG_API_KEY;
if (!apiKey) {
return { content: [{ type: "text", text: "Error: APIDOG_API_KEY not set" }] };
}
const response = await fetch(
`https://api.apidog.com/v1/projects/${projectId}/environments`,
{ headers: { "Authorization": `Bearer ${apiKey}` } }
);
const environments = await response.json();
return {
content: [{
type: "text",
text: environments.map((e: any) =>
`- ${e.name} (${e.id})${e.isDefault ? " [default]" : ""}`
).join("\n")
}]
};
}
);
const transport = new StdioServerTransport();
await server.connect(transport);
What You Built
| Component | Purpose |
|---|---|
| MCP Server | Bridges AI agents to Apidog API |
| `run_test` | Execute test collections programmatically |
| `validate_schema` | Catch OpenAPI errors before deployment |
| `list_environments` | Discover available test environments |
| Zod validation | Type-safe parameter handling |
| Stdio transport | Works with Claude Code, Cursor, any MCP client |
Next Steps
Extend the server:
- Add a `compare_responses` tool to diff test results across environments
- Implement `get_test_history` for fetching past test runs
- Add `trigger_mock_server` to start/stop mock endpoints
Production considerations:
- Add retry logic for unreliable network requests
- Implement rate limiting to avoid API throttling
- Add logging for debugging
- Store API keys in a secure vault, not just environment variables
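As one production consideration from the list above, retry logic can be added with a small wrapper around the fetch calls. A minimal sketch with exponential backoff; the attempt count and delays are arbitrary illustrative choices, not Apidog requirements:

```typescript
// Retries an async operation with exponential backoff.
// attempts and baseDelayMs are illustrative defaults.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Wait 250ms, 500ms, 1000ms, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Wrapping the existing calls is then a one-line change, e.g. `const response = await withRetry(() => fetch(url, options));`. A fuller version might retry only on network errors and 5xx statuses.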
Share with your team:
- Publish to npm as `@your-org/apidog-mcp-server`
- Document required environment variables
- Include example MCP configs for common clients
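To make a published server runnable via npx, a `bin` entry can be added to package.json. A sketch, where the package name is a placeholder; note that `dist/index.js` would also need a `#!/usr/bin/env node` shebang line for the `bin` entry to work:

```json
{
  "name": "@your-org/apidog-mcp-server",
  "version": "1.0.0",
  "type": "module",
  "bin": {
    "apidog-mcp-server": "dist/index.js"
  },
  "files": ["dist"]
}
```

Teammates could then point their MCP config's `command` at `npx` with `["-y", "@your-org/apidog-mcp-server"]` as args instead of an absolute path.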
Troubleshooting Common Issues
MCP server not loading in Claude Code:
- Use absolute paths in `~/.claude/settings.json`
- Verify `node` is on your `PATH`: `which node`
- Confirm `dist/index.js` exists: `ls -la dist/`
- Check Claude Code MCP logs for errors
Tools not appearing:
- Restart Claude Code completely
- Run `npm run build` to compile TypeScript
- Make sure all three tools are defined before `server.connect()`
- Verify the server starts: `node dist/index.js`
API requests failing with 401:
- Check that `APIDOG_API_KEY` is set in the config
- Make sure there are no extra spaces or quotes in the key value
- Make sure your Apidog account has API access enabled
- Test manually: `curl -H "Authorization: Bearer $APIDOG_API_KEY" https://api.apidog.com/v1/user`
Zod validation errors:
- Parameter names must match the schema
- Required fields must be present (check for typos)
- Use `.optional()` for optional fields
- Review error messages for details
TypeScript compilation errors:
- Run `npm install` for dependencies
- Check the TypeScript version: `npx tsc --version` (should be 5.x)
- Clean build: `rm -rf dist && npm run build`
- Add `as` type assertions if needed
Testing Your MCP Server Locally
Manual testing with stdio:
# Start the server
node dist/index.js
# In another terminal, pipe a request to a fresh server instance.
# Note: a spec-compliant MCP server expects the initialize handshake
# first, so a bare tools/list may be rejected; use a real client for full tests.
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | node dist/index.js
Expected output:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"tools": [
{ "name": "run_test", "description": "...", "inputSchema": {...} },
{ "name": "validate_schema", "description": "...", "inputSchema": {...} },
{ "name": "list_environments", "description": "...", "inputSchema": {...} }
]
}
}
Test a tool call:
echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"list_environments","arguments":{"projectId":"your-project-id"}}}' | node dist/index.js
Your AI agents now have direct access to Apidog's testing capabilities: no more context switching or manual test runs.
MCP enables you to extend AI agents with domain-specific tools and automate your development workflow.
Key Takeaways
- MCP servers bridge AI agents to external APIs: Build once, use across Claude Code, Cursor, and any MCP-compatible client.
- Three tools cover most API testing needs: `run_test` for execution, `validate_schema` for OpenAPI validation, `list_environments` for discovery.
- Zod validation enforces type-safe parameters: it prevents bad requests before they hit the API.
- Configuration is client-specific: Claude Code uses `~/.claude/settings.json`, Cursor uses `.cursor/mcp.json`.
- Production needs robust error handling: add retries, rate limits, and secure key storage before deploying.
FAQ
What is MCP in AI?
MCP (Model Context Protocol) is a standardized protocol that lets AI agents access external tools and data sources; think of it as a plugin system for agents.
How do I create an MCP server for Apidog?
Install @modelcontextprotocol/sdk, define tools with Zod validation, implement handlers to call the Apidog API, and use StdioServerTransport for communication.
Can I use this with Cursor?
Yes. Add the MCP server config to .cursor/mcp.json in your project. The same server works for Claude Code, Cursor, and other MCP clients.
What tools should I expose?
Start with run_test (run test collections), validate_schema (OpenAPI validation), and list_environments (fetch environments).
Is the Apidog MCP server production-ready?
This tutorial provides a starter. For production, add retry logic, rate limiting, robust error handling, and secure API key management.
Do I need an Apidog API key?
Yes. Set APIDOG_API_KEY as an environment variable. The server uses this key for API authentication.
Can I share this MCP server with my team?
Yes. Publish as a private npm package, document required environment variables, and provide example MCP configurations.