DEV Community

AbdlrahmanSaber

Debug MCP Like Network Tab: Seeing Every Tool Call in Real Time

You triggered a tool call from Cursor, something went wrong, and the only feedback was silence—or a vague error. The server is running, the tool is registered, but you have no idea what actually crossed the wire. This article walks through getting a browser-based trace of every JSON-RPC message without touching your server code.


Disclosure

I maintain mcpkit. This article explains how I use it to inspect Model Context Protocol traffic—I am biased, but the problem is real whether you use this tool or another.


What is MCP?

Model Context Protocol (MCP) is an open protocol—introduced by Anthropic—that standardises how AI models connect to external tools and data. Think of it as a USB standard for AI capabilities: instead of every model and every tool inventing their own integration, they speak a shared language.

The two main actors:

  • MCP client — the AI host: Cursor, Claude Desktop, VS Code with Copilot, or any app that wants to call tools on behalf of an LLM.
  • MCP server — a small process (yours, or a third party's) that exposes tools, resources, and prompts over the protocol.

When an AI decides it needs to query a database or fetch documentation, it sends a tools/call request to the relevant MCP server. The server runs the function and returns a result. All of this is JSON-RPC—typically over stdio pipes (stdin/stdout), though HTTP+SSE transports exist too.
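Concretely, a tools/call round trip is just two newline-delimited JSON objects passing over those pipes. Here's a sketch with a hypothetical query_database tool (the tool name and arguments are made up; the envelope fields are standard JSON-RPC 2.0):

```javascript
// A hypothetical tools/call round trip. The envelope (jsonrpc, id, method)
// is standard JSON-RPC 2.0; the tool name and arguments are invented.
const request = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: {
    name: "query_database",           // hypothetical tool
    arguments: { sql: "SELECT 1" },
  },
};

// The reply reuses the same id; that's how a response is paired to its request.
const response = {
  jsonrpc: "2.0",
  id: 42,
  result: {
    content: [{ type: "text", text: "1" }],
    isError: false,
  },
};
```

Note that failures can surface two ways: as a JSON-RPC error object at the protocol level, or as isError: true inside an otherwise successful result. Keeping those two apart matters when you're debugging.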

That's the handshake. The rest of this article is about what happens when that handshake goes wrong.


The black box

MCP connects AI clients to tools: databases, APIs, filesystems. Under the hood the client and server exchange JSON-RPC messages—often over stdio (stdin/stdout pipes). Everything works until it doesn't: slow calls, wrong arguments, silent failures.

Terminal logs help, but they rarely show:

  • Which JSON-RPC method ran (tools/call, resources/list, …).
  • Latency end-to-end.
  • Full request and response payloads side by side.
  • Whether the failure was protocol-level or inside the tool result.

You need something closer to a network tab: a live list of interactions with drill-down—not a grep war.


Transparent proxy: no server surgery

mcpkit sits between the MCP client and your server process. You don't patch your MCP server code. You wrap the spawn command:

npx @abdlrahmansaber/mcpkit inspect -- node ./dist/server.js

(or just mcpkit if installed globally).

The inspector:

  1. Starts your server as a child process.
  2. Copies traffic on stdin/stdout through a transparent proxy.
  3. Opens a browser dashboard with a trace table.

What you see on the dashboard

Typical workflow:

npm install -g @abdlrahmansaber/mcpkit
mcpkit inspect -- node path/to/your-mcp-server.js

Stderr prints a line like:

[mcpkit] inspector dashboard: http://localhost:3200

Open that URL. For each message you'll see:

| Signal | Why it matters |
| --- | --- |
| Direction | Client → server vs server → client |
| Method | e.g. tools/call, tools/list |
| Tool | Which tool was invoked (for tools/call) |
| Latency | Time between a paired request and its response |
| Status | OK, error, timeout, pending |

Click a row to open the request and response JSON side by side. That's how I confirmed the exact Flux queries my server was sending, caught API keys riding along in params (and moved them to environment variables), and spotted which buckets were slow.

  1. Trace table — several rows, a mix of successes and errors. (Alt text: "mcpkit inspector dashboard showing method, tool name, latency, and status columns.")
  2. Detail panel — an expanded row with the request and response JSON for a tools/call. (Alt text: "Expanded MCP trace showing JSON request and response for a tools/call.")


CLI options worth knowing

  • --port 3201 — if 3200 is busy.
  • --log-only — stderr logging only, no browser (CI or SSH).
  • --export traces.json — dump collected traces on exit.
  • --server-name my-influx — label traces when you run multiple proxies later.
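Once you have an exported file, ad-hoc analysis is a few lines away. A sketch, assuming each exported record carries method, tool, latencyMs, and status fields — check your own traces.json for the actual field names:

```javascript
// Rank exported traces by latency. The record shape used here is an
// assumption about the export format; inspect your traces.json to confirm.
function slowest(traces, n = 5) {
  return traces
    .filter((t) => typeof t.latencyMs === "number") // drop unpaired/pending entries
    .sort((a, b) => b.latencyMs - a.latencyMs)
    .slice(0, n);
}

// Usage:
// const traces = JSON.parse(require("node:fs").readFileSync("traces.json", "utf8"));
// console.table(slowest(traces));

module.exports = { slowest };
```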

What's next

Inspecting one server covers a lot of debugging. When you run several MCP servers at once—Influx, Postgres, docs fetchers, whatever—you usually configure each spawn command in whatever MCP host you use: Cursor (.cursor/mcp.json or project MCP config), Claude Desktop (claude_desktop_config.json), VS Code and other editors with MCP support, or any launcher that starts servers over stdio. Same idea everywhere: wrap the command so traffic can be traced.

For multi-server setups you want one shared dashboard and a Server column; that's the subject of the next article: mcpkit serve plus mcpkit proxy with --inspector in each configured command (adapt the snippet to your tool's JSON shape).


Top comments (1)

Ahmed Alasser

Great breakdown 👏

The “black box” problem with MCP is real—especially when debugging silent failures or mismatched tool params. Using a transparent proxy instead of modifying the server is a clean approach.

What I like most is the network-tab analogy. Having visibility into:

  • full JSON-RPC payloads
  • latency per call
  • clear request/response pairing

…is exactly what’s missing when working with MCP integrations.

Quick question: have you tested this with high-throughput scenarios or multiple concurrent tool calls? Curious how the inspector handles performance and trace volume at scale.

Nice work 🔥