DEV Community

Wanda

Originally published at apidog.com

How to Build an MCP Server That Gives AI Agents API Testing Powers

TL;DR

Build an MCP server with TypeScript that exposes three actionable tools: run_test, validate_schema, and list_environments. Configure it in ~/.claude/settings.json for Claude Code or .cursor/mcp.json for Cursor, and your AI agents can run Apidog tests, validate OpenAPI schemas, and fetch environments directly from the chat interface. The full source is about 150 lines and builds on the @modelcontextprotocol/sdk package.

MCP lets Claude Code, Cursor, and other AI agents run Apidog API tests, validate schemas, and compare responses without leaving their chat UI.


💡 Scenario: Your AI agent just finished building an API endpoint. Instead of copying code and manually running tests in Apidog, you want to issue a single command from your chat and get results immediately.

That’s what the Model Context Protocol (MCP) enables: AI agents accessing external tools through a standard interface. Build an MCP server for Apidog and your AI agent can test, validate, and fetch environments without switching context.

What Is MCP?

MCP (Model Context Protocol) is a protocol for AI agents to access external tools and data. Think of it as a plugin system that works across Claude Code, Cursor, and any MCP-compatible client.

An MCP server exposes tools (functions the agent can call) and resources (data the agent can read). Here, your Apidog MCP server will expose tools for API testing.

┌─────────────────┐         ┌──────────────────┐         ┌─────────────┐
│  AI Agent       │         │  MCP Server      │         │  Apidog     │
│  (Claude Code)  │◄───────►│  (Your Code)     │◄───────►│  API        │
└─────────────────┘   JSON  └──────────────────┘  HTTP   └─────────────┘

Step 1: Set Up the Project

Create a new TypeScript project:

mkdir apidog-mcp-server
cd apidog-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node

Create tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

Add build scripts to package.json:

{
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}

Step 2: Create the MCP Server Skeleton

Create src/index.ts:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "apidog",
  version: "1.0.0",
  description: "Apidog API testing tools for AI agents"
});

// Tools will be added here

const transport = new StdioServerTransport();
await server.connect(transport);

This sets up the MCP server and stdio transport. The transport manages communication with the AI agent over stdin/stdout.
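To make the wire format concrete, here is a sketch of the kind of newline-delimited JSON-RPC 2.0 message the stdio transport carries. The tool name and arguments are the ones from this tutorial; the framing (one JSON object per line) is what MCP stdio transports use.

```typescript
// Sketch: one JSON-RPC 2.0 request as the stdio transport carries it.
// Field names follow the MCP spec; projectId value is illustrative.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "run_test",
    arguments: { projectId: "proj_12345" }
  }
};

// Each message is serialized as a single line of JSON terminated by "\n".
const wire = JSON.stringify(request) + "\n";
console.log(wire.trim());
```

The agent writes lines like this to the server's stdin and reads responses from its stdout; you never handle this framing yourself because the transport does it.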

Step 3: Define the run_test Tool

Add the primary tool to src/index.ts:

// Tool: run_test
server.tool(
  "run_test",
  {
    projectId: z.string().describe("Apidog project ID (from project URL)"),
    environmentId: z.string().optional().describe("Environment ID for test execution"),
    testSuiteId: z.string().optional().describe("Test suite ID to run a specific suite")
  },
  async ({ projectId, environmentId, testSuiteId }) => {
    const apiKey = process.env.APIDOG_API_KEY;
    if (!apiKey) {
      return {
        content: [{ type: "text", text: "Error: APIDOG_API_KEY environment variable not set" }]
      };
    }

    // Build API URL
    let url = `https://api.apidog.com/v1/projects/${projectId}/tests/run`;
    const params = new URLSearchParams();
    if (environmentId) params.append("environmentId", environmentId);
    if (testSuiteId) params.append("testSuiteId", testSuiteId);
    if (params.toString()) url += `?${params.toString()}`;

    try {
      const response = await fetch(url, {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${apiKey}`,
          "Content-Type": "application/json"
        }
      });

      if (!response.ok) {
        const error = await response.text();
        return {
          content: [{ type: "text", text: `API Error: ${response.status} ${error}` }]
        };
      }

      const results = await response.json();
      return {
        content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Request failed: ${error instanceof Error ? error.message : String(error)}` }]
      };
    }
  }
);

Key points:

  • Name: run_test (AI agents select tools by name)
  • Schema: Zod validation for input
  • Handler: Async function calls the Apidog API

Step 4: Add the validate_schema Tool

Add a schema validation tool for OpenAPI:

// Tool: validate_schema
server.tool(
  "validate_schema",
  {
    schema: z.record(z.any()).describe("OpenAPI 3.x schema object to validate"), // z.object({}) would strip every key from the input
    strict: z.boolean().optional().default(false).describe("Enable strict mode for additional checks")
  },
  async ({ schema, strict }) => {
    const apiKey = process.env.APIDOG_API_KEY;
    if (!apiKey) {
      return {
        content: [{ type: "text", text: "Error: APIDOG_API_KEY environment variable not set" }]
      };
    }

    try {
      const response = await fetch("https://api.apidog.com/v1/schemas/validate", {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${apiKey}`,
          "Content-Type": "application/json"
        },
        body: JSON.stringify({ schema, strict })
      });

      const result = await response.json();

      if (!response.ok) {
        return {
          content: [{ type: "text", text: `Validation failed: ${JSON.stringify(result.errors, null, 2)}` }]
        };
      }

      return {
        content: [{
          type: "text",
          text: result.valid
            ? "Schema is valid OpenAPI 3.x"
            : `Warnings: ${JSON.stringify(result.warnings, null, 2)}`
        }]
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Validation failed: ${error instanceof Error ? error.message : String(error)}` }]
      };
    }
  }
);

Step 5: Add the list_environments Tool

Add a tool to fetch available environments:

// Tool: list_environments
server.tool(
  "list_environments",
  {
    projectId: z.string().describe("Apidog project ID")
  },
  async ({ projectId }) => {
    const apiKey = process.env.APIDOG_API_KEY;
    if (!apiKey) {
      return {
        content: [{ type: "text", text: "Error: APIDOG_API_KEY environment variable not set" }]
      };
    }

    try {
      const response = await fetch(
        `https://api.apidog.com/v1/projects/${projectId}/environments`,
        { headers: { "Authorization": `Bearer ${apiKey}` } }
      );

      if (!response.ok) {
        const error = await response.text();
        return {
          content: [{ type: "text", text: `API Error: ${response.status} ${error}` }]
        };
      }

      const environments = await response.json();
      return {
        content: [{
          type: "text",
          text: environments.length === 0
            ? "No environments found for this project"
            : environments.map((e: any) =>
                `- ${e.name} (ID: ${e.id})${e.isDefault ? " [default]" : ""}`
              ).join("\n")
        }]
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Request failed: ${error instanceof Error ? error.message : String(error)}` }]
      };
    }
  }
);

Step 6: Build and Test

Build the server:

npm run build

To test locally, create a simple MCP client (test-client.js):

import { spawn } from "child_process";

const server = spawn("node", ["dist/index.js"], {
  env: { ...process.env, APIDOG_API_KEY: "your-api-key" }
});

server.stdout.on("data", (data) => {
  console.log(`Server output: ${data}`);
});

server.stderr.on("data", (data) => {
  console.error(`Server error: ${data}`);
});

const message = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "test-client", version: "1.0.0" }
  }
};

server.stdin.write(JSON.stringify(message) + "\n");

Step 7: Configure for Claude Code

Add your MCP server to Claude Code:

Create or edit ~/.claude/settings.json:

{
  "mcpServers": {
    "apidog": {
      "command": "node",
      "args": ["/absolute/path/to/apidog-mcp-server/dist/index.js"],
      "env": {
        "APIDOG_API_KEY": "your-api-key-here"
      }
    }
  }
}

Restart Claude Code. The Apidog tools will be available in the chat.

Example usage in Claude Code:

Use the run_test tool to run tests on my Apidog project.
Project ID: proj_12345
Environment: staging

Validate this OpenAPI schema against Apidog rules:
[paste schema]

List all environments for project proj_12345

Step 8: Configure for Cursor

Cursor uses a similar MCP config. Create .cursor/mcp.json:

{
  "mcpServers": {
    "apidog": {
      "command": "node",
      "args": ["/absolute/path/to/apidog-mcp-server/dist/index.js"],
      "env": {
        "APIDOG_API_KEY": "your-api-key-here"
      }
    }
  }
}

Example usage in Cursor:

@apidog run_test projectId="proj_12345" environmentId="staging"

Complete Source Code

Full src/index.ts (condensed; the step-by-step versions above include fuller error handling):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "apidog",
  version: "1.0.0",
  description: "Apidog API testing tools for AI agents"
});

// run_test tool
server.tool(
  "run_test",
  {
    projectId: z.string().describe("Apidog project ID"),
    environmentId: z.string().optional().describe("Environment ID"),
    testSuiteId: z.string().optional().describe("Test suite ID")
  },
  async ({ projectId, environmentId, testSuiteId }) => {
    const apiKey = process.env.APIDOG_API_KEY;
    if (!apiKey) {
      return { content: [{ type: "text", text: "Error: APIDOG_API_KEY not set" }] };
    }

    let url = `https://api.apidog.com/v1/projects/${projectId}/tests/run`;
    const params = new URLSearchParams();
    if (environmentId) params.append("environmentId", environmentId);
    if (testSuiteId) params.append("testSuiteId", testSuiteId);
    if (params.toString()) url += `?${params.toString()}`;

    try {
      const response = await fetch(url, {
        method: "POST",
        headers: {
          "Authorization": `Bearer ${apiKey}`,
          "Content-Type": "application/json"
        }
      });

      const results = await response.json();
      return {
        content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
      };
    } catch (error) {
      return {
        content: [{ type: "text", text: `Request failed: ${error instanceof Error ? error.message : String(error)}` }]
      };
    }
  }
);

// validate_schema tool
server.tool(
  "validate_schema",
  {
    schema: z.record(z.any()).describe("OpenAPI schema"), // z.object({}) would strip every key from the input
    strict: z.boolean().optional().default(false)
  },
  async ({ schema, strict }) => {
    const apiKey = process.env.APIDOG_API_KEY;
    if (!apiKey) {
      return { content: [{ type: "text", text: "Error: APIDOG_API_KEY not set" }] };
    }

    const response = await fetch("https://api.apidog.com/v1/schemas/validate", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({ schema, strict })
    });

    const result = await response.json();
    return {
      content: [{
        type: "text",
        text: result.valid
          ? "Schema is valid"
          : `Issues: ${JSON.stringify(result.errors || result.warnings, null, 2)}`
      }]
    };
  }
);

// list_environments tool
server.tool(
  "list_environments",
  {
    projectId: z.string().describe("Apidog project ID")
  },
  async ({ projectId }) => {
    const apiKey = process.env.APIDOG_API_KEY;
    if (!apiKey) {
      return { content: [{ type: "text", text: "Error: APIDOG_API_KEY not set" }] };
    }

    const response = await fetch(
      `https://api.apidog.com/v1/projects/${projectId}/environments`,
      { headers: { "Authorization": `Bearer ${apiKey}` } }
    );

    const environments = await response.json();
    return {
      content: [{
        type: "text",
        text: environments.map((e: any) =>
          `- ${e.name} (${e.id})${e.isDefault ? " [default]" : ""}`
        ).join("\n")
      }]
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

What You Built

| Component         | Purpose                                            |
| ----------------- | -------------------------------------------------- |
| MCP Server        | Bridges AI agents to the Apidog API                |
| run_test          | Execute test collections programmatically          |
| validate_schema   | Catch OpenAPI errors before deployment             |
| list_environments | Discover available test environments               |
| Zod validation    | Type-safe parameter handling                       |
| Stdio transport   | Works with Claude Code, Cursor, any MCP client     |

Next Steps

Extend the server:

  • Add compare_responses tool to diff test results across environments
  • Implement get_test_history for fetching past test runs
  • Add trigger_mock_server to start/stop mock endpoints
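As a starting point for the first idea, here is a sketch of the diff logic a hypothetical compare_responses tool could wrap. The function name, the flat key-by-key comparison, and the staging/production labels are all illustrative assumptions, not part of the Apidog API.

```typescript
// Sketch: compare two JSON response bodies key by key and report
// fields that differ between environments. A compare_responses MCP
// tool could register this logic with server.tool(), following the
// same pattern as run_test above. (Hypothetical; illustration only.)
type Json = Record<string, unknown>;

function diffResponses(a: Json, b: Json): string[] {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  const diffs: string[] = [];
  for (const key of keys) {
    const left = JSON.stringify(a[key]);
    const right = JSON.stringify(b[key]);
    if (left !== right) {
      diffs.push(`${key}: staging=${left} production=${right}`);
    }
  }
  return diffs;
}

// Example: only the differing field is reported
const result = diffResponses(
  { status: "ok", version: "1.2.0" },
  { status: "ok", version: "1.3.0" }
);
```

A real implementation would recurse into nested objects and arrays; this flat version keeps the tool-handler shape visible.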

Production considerations:

  • Add retry logic for unreliable network requests
  • Implement rate limiting to avoid API throttling
  • Add logging for debugging
  • Store API keys in a secure vault, not just environment variables
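For the retry point, a minimal sketch of exponential backoff that could wrap the fetch calls in the tool handlers. The attempt count and delays are illustrative defaults, not Apidog guidance.

```typescript
// Sketch: retry an async operation with exponential backoff.
// Wrap the fetch inside run_test, e.g. withRetry(() => fetch(url, opts)).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Delays grow as 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

In production you would likely also distinguish retryable failures (network errors, 429, 5xx) from non-retryable ones (4xx) before retrying.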

Share with your team:

  • Publish to npm as @your-org/apidog-mcp-server
  • Document required environment variables
  • Include example MCP configs for common clients

Troubleshooting Common Issues

MCP server not loading in Claude Code:

  • Use absolute paths in ~/.claude/settings.json
  • Verify node is in your PATH: which node
  • Confirm dist/index.js exists: ls -la dist/
  • Check Claude Code MCP logs for errors

Tools not appearing:

  • Restart Claude Code completely
  • Run npm run build to compile TypeScript
  • Make sure all three tools are defined before server.connect()
  • Verify the server starts: node dist/index.js

API requests failing with 401:

  • Check APIDOG_API_KEY is set in config
  • No extra spaces/quotes in the key value
  • Make sure your Apidog account has API access enabled
  • Test manually:
  curl -H "Authorization: Bearer $APIDOG_API_KEY" https://api.apidog.com/v1/user

Zod validation errors:

  • Parameter names must match the schema
  • Required fields must be present (check for typos)
  • Use .optional() for optional fields
  • Review error messages for details

TypeScript compilation errors:

  • Run npm install for dependencies
  • Check TypeScript version: npx tsc --version (should be 5.x)
  • Clean build: rm -rf dist && npm run build
  • Add explicit type assertions (as) if needed

Testing Your MCP Server Locally

Manual testing with stdio:

# Start the server
node dist/index.js

# Send a test message (this pipes into a fresh server instance;
# some SDK versions expect an initialize request before tools/list)
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | node dist/index.js

Expected output:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "run_test", "description": "...", "inputSchema": {...} },
      { "name": "validate_schema", "description": "...", "inputSchema": {...} },
      { "name": "list_environments", "description": "...", "inputSchema": {...} }
    ]
  }
}

Test a tool call:

echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"list_environments","arguments":{"projectId":"your-project-id"}}}' | node dist/index.js

Your AI agents now have direct access to Apidog’s testing capabilities: no more context switching or manual test runs.

MCP enables you to extend AI agents with domain-specific tools and automate your development workflow.

Key Takeaways

  • MCP servers bridge AI agents to external APIs: Build once, use across Claude Code, Cursor, and any MCP-compatible client.
  • Three tools cover most API testing needs: run_test for execution, validate_schema for OpenAPI validation, list_environments for discovery.
  • Zod validation enforces type-safe parameters: Prevents bad requests before they hit the API.
  • Configuration is tool-specific: Claude Code uses ~/.claude/settings.json, Cursor uses .cursor/mcp.json.
  • Production needs robust error handling: Add retries, rate limits, and secure key storage before deploying.

FAQ

What is MCP in AI?

MCP (Model Context Protocol) is a standardized protocol that lets AI agents access external tools and data sources. Think of it as a plugin system for agents.

How do I create an MCP server for Apidog?

Install @modelcontextprotocol/sdk, define tools with Zod validation, implement handlers to call the Apidog API, and use StdioServerTransport for communication.

Can I use this with Cursor?

Yes. Add the MCP server config to .cursor/mcp.json in your project. The same server works for Claude Code, Cursor, and other MCP clients.

What tools should I expose?

Start with run_test (run test collections), validate_schema (OpenAPI validation), and list_environments (fetch environments).

Is the Apidog MCP server production-ready?

This tutorial provides a starter. For production, add retry logic, rate limiting, robust error handling, and secure API key management.

Do I need an Apidog API key?

Yes. Set APIDOG_API_KEY as an environment variable. The server uses this key for API authentication.

Can I share this MCP server with my team?

Yes. Publish as a private npm package, document required environment variables, and provide example MCP configurations.
