An MCP tool call is a tiny line of agent code that fans out to syscalls, library calls, and kernel paths the agent has no view of.
TL;DR
Through April and early May, vendors shipped MCP servers in batches: Datadog, BlueCat, Command Zero, DBmaestro, the public CVE MCP, Grafana Cloud Remote MCP, SAS Viya MCP. The agent-side abstraction is small (a tool name and a JSON schema). The kernel-side surface that runs when the agent calls the tool is large and unstated. eBPF fills in what the tool actually touches.
What an MCP tool call looks like to the agent
An MCP tool, from the agent’s perspective, is a function with a name and an input schema. The agent calls it; a JSON-RPC payload goes to the tool server; a result returns. The MCP specification covers transport and discovery, not what the tool does between request and response.
That gap is fine when the tool wraps a pure HTTP API. It widens fast for tools that wrap a database client, a cloud SDK, a filesystem helper, or a GPU runtime. A “run query” tool can spawn a subprocess, open a unix socket, hit an SDK that maintains a connection pool, and trigger a kernel scheduling event the agent will never see.
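For concreteness, here is roughly what the agent-side abstraction amounts to on the wire for a stdio MCP server. The run_query tool name and its sql argument are made up for illustration, and a real client would send an initialize handshake before any tools/call, so a bare pipe like this mostly serves to show the payload shape.
# hypothetical tools/call request for a stdio MCP server (newline-delimited JSON);
# the run_query tool and its sql argument are illustrative, and a real session
# performs an initialize handshake before any tools/call
echo '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"run_query","arguments":{"sql":"SELECT count(*) FROM orders"}}}' | your-mcp-server
Everything below that one line of JSON is invisible to the agent.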
What a syscall trace shows for the same call
Point an eBPF tracer at the tool-server process while the agent issues the call. The trace records the syscalls the tool made, the libraries it pulled in (resolved via /proc/[pid]/maps), the network endpoints it opened, and the on-CPU time spent in user vs kernel mode. Now the call is no longer an opaque box. The agent’s “run analysis” maps to a concrete path through the host.
# capture an MCP tool's real footprint while the agent calls it
ingero trace --pid $(pgrep -f mcp-server) --duration 30s \
  --out /tmp/mcp-tool-trace.db
# then ask the trace what the tool touched
ingero query /tmp/mcp-tool-trace.db \
  "SELECT comm, syscall, COUNT(*) AS n
   FROM host_events
   GROUP BY comm, syscall
   ORDER BY n DESC LIMIT 20"
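As a cross-check on the library side of that picture, the same information can be read straight from /proc without any tracer. This assumes the pgrep pattern from the snippet above matches a single tool-server process.
# list the shared objects the tool server currently has mapped
# (same pgrep pattern as above; head -1 guards against multiple matches)
awk '$6 ~ /\.so/ {print $6}' /proc/$(pgrep -f mcp-server | head -1)/maps | sort -u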
Why this matters more for GPU MCP tools
On a GPU host the unstated kernel-side surface is wider. A tool that “reads GPU utilization” might call nvidia-smi (a fork+exec), might open /dev/nvidia*, might link libnvidia-ml.so. DCGM exporters running alongside add their own surface. The agent still sees one tool name; the kernel sees many distinct paths.
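To sanity-check that fan-out without a full eBPF capture, attaching strace to the tool server while the agent issues the GPU call shows the fork+exec and the device opens directly. The pgrep pattern is the same assumption as in the earlier snippets, and strace here stands in for the trace, at higher overhead.
# watch the tool server (and anything it forks) exec nvidia-smi and open /dev/nvidia*
# while the agent issues the "read GPU utilization" call
strace -f -e trace=execve,openat -p $(pgrep -f mcp-server | head -1) 2>&1 \
  | grep -E 'execve|/dev/nvidia'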
When an MCP-driven workflow is slow or wrong, the question “which tool call is responsible” stops at the JSON layer. eBPF on the tool-server process pushes that question through to a syscall and a library, and often to a CUDA driver call.
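One way to chase that from the capture, using only the comm and syscall columns shown in the first query: group the syscalls recorded under the nvidia-smi command name. Whether forked children show up under their own comm depends on how the capture was taken, so treat this as a sketch to verify against your own trace.
# which syscalls the forked nvidia-smi made during the capture
# (comm/syscall columns as in the first query; child visibility is an assumption)
ingero query /tmp/mcp-tool-trace.db \
  "SELECT comm, syscall, COUNT(*) AS n
   FROM host_events
   WHERE comm LIKE 'nvidia%'
   GROUP BY comm, syscall
   ORDER BY n DESC LIMIT 10"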
Try it locally
Pick any MCP server you already run (Filesystem, Postgres, the Anthropic reference servers). Start the server. Run an agent against it. In another shell:
# 1. install
curl -fsSL https://github.com/ingero-io/ingero/releases/latest/download/install.sh | sh
# 2. capture the tool server's footprint for one minute
ingero trace --pid $(pgrep -f your-mcp-server) --duration 60s \
  --out /tmp/mcp.db
# 3. inspect what the tool actually did
ingero query /tmp/mcp.db "SELECT * FROM cuda_events LIMIT 20"
ingero query /tmp/mcp.db "SELECT * FROM net_events LIMIT 20"
ingero query /tmp/mcp.db "SELECT * FROM io_events LIMIT 20"
Three queries against the same DB cover the three surfaces an MCP tool most often hides: GPU runtime calls, network calls, and disk I/O.
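Before reading individual rows, a quick size-up tells you which of the three surfaces dominates. This uses only the table names and the ingero query form shown above.
# row counts per surface; table names are the ones queried above
for t in cuda_events net_events io_events; do
  echo -n "$t: "
  ingero query /tmp/mcp.db "SELECT COUNT(*) FROM $t"
done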
Smaller surface, same investigation
MCP narrows the agent-facing API. It does not narrow the host-side path a tool runs through. Treating an MCP call as a syscall pattern, not a JSON message, is what keeps a multi-MCP agent investigable when one of the tools is the slow or broken one.
Ingero – open-source eBPF agent for GPU debugging. One binary, zero deps, <2% overhead. Apache 2.0 + GPL-2.0. GitHub ⭐ · Open an issue if you are shipping or operating MCP servers and want a kernel-level view of what your tools actually touch.
Related reading
- read-only kernel telemetry as MCP tools – design notes for the MCP server that ships in ingero mcp.
- MCP shows what the agent did, eBPF shows why the GPU stalled – one layer down: what an MCP call returns vs. what the kernel saw.
- connecting AI agents to kernel tracepoints – the original framing for MCP-driven kernel observability.
