We Scanned 33 MCP Servers — Here's What We Found

MCP (Model Context Protocol) servers are exploding in popularity. Give your AI agent filesystem access, database queries, browser control, Kubernetes operations—all through simple tool interfaces. Sounds amazing, right?

But here's the uncomfortable question nobody's asking: Who's checking the security of these servers?

When your AI agent can execute SQL queries or run shell commands, a single vulnerability isn't just a bug—it's a direct path to your database, your filesystem, or your entire infrastructure.

So we decided to find out. We grabbed AgentAudit, scanned 33 of the most popular MCP servers, and dove deep into the ecosystem.

Spoiler: We found SQL injection in a community database server and identified 118+ security findings across the ecosystem. But we also found some excellent security patterns in the official servers that every MCP developer should know about.

The Methodology

We used AgentAudit v3.9.8 to scan popular MCP servers from npm and GitHub. We prioritized servers that:

  • Handle sensitive operations (databases, filesystems, command execution)
  • Have high usage on Smithery.ai or GitHub stars
  • Are actively maintained

Our process:

  1. Identify targets — 33 servers based on popularity and risk surface
  2. Clone and analyze — Deep code review of high-priority servers
  3. Pattern matching — Look for known vulnerability patterns (SQL injection, command injection, path traversal)
  4. Responsible disclosure — Report findings to maintainers before publication

We focused on the dangerous stuff: database servers, filesystem access, command execution tools. These are the servers where a vulnerability means game over.

What We Found

The Numbers

  • 33 servers scanned across the MCP ecosystem
  • 118+ security findings identified
  • 5 CRITICAL severity findings (including SQL injection)
  • 9 HIGH severity findings
  • 63 MEDIUM severity findings (mostly overly broad permissions)
  • 41 LOW severity findings (outdated dependencies, missing headers)
  • 3 servers with A+ security (all official @modelcontextprotocol)

The Good News First

The official @modelcontextprotocol servers are chef's kiss.

Take the filesystem server—it has six layers of path validation:

import * as path from "path";

export function isPathWithinAllowedDirectories(
  absolutePath: string,
  allowedDirectories: string[]
): boolean {
  // Reject null bytes (classic path traversal defense)
  if (absolutePath.includes('\x00')) return false;

  // Normalize and resolve to an absolute path
  const normalizedPath = path.resolve(path.normalize(absolutePath));

  // Check containment within allowed directories
  // (an exact match of the directory itself counts as contained)
  return allowedDirectories.some(dir => {
    const normalizedDir = path.resolve(path.normalize(dir));
    return (
      normalizedPath === normalizedDir ||
      normalizedPath.startsWith(normalizedDir + path.sep)
    );
  });
}

And they don't stop there. They also:

  • Resolve symlinks to prevent symlink attacks
  • Validate again after resolution to catch sneaky redirects outside allowed dirs
  • Use atomic writes to prevent race conditions

This is textbook defensive programming. Props to the MCP team.
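The "validate after resolution" layer deserves a closer look. Here's a sketch of how it can work — a hypothetical helper for illustration, not the server's actual code:

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical helper: resolve symlinks first, then re-check containment
// against the real allowed directory. A symlink that points outside the
// allowed tree fails the check even though its own path looked fine
// before resolution.
export function resolveWithinDirectory(
  requestedPath: string,
  allowedDir: string
): string {
  const realPath = fs.realpathSync(requestedPath); // follows symlinks
  const realDir = fs.realpathSync(allowedDir);
  if (realPath !== realDir && !realPath.startsWith(realDir + path.sep)) {
    throw new Error(`Path escapes allowed directory: ${requestedPath}`);
  }
  return realPath;
}
```

Resolving before checking matters: a symlink that lives inside the allowed directory can point anywhere on disk, so the containment test has to run on the real path, not the requested one.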

The Kubernetes Servers

Both popular K8s MCP servers (mcp-server-kubernetes and kubernetes-mcp-server) got it right on command execution:

export async function execInPod(k8sManager, input: {
  name: string;
  command: string[];  // ← MUST be an array
  container?: string;
}) {
  // Defense in depth: Validate array type
  if (!Array.isArray(input.command)) {
    throw new McpError(
      ErrorCode.InvalidParams,
      "Command must be an array of strings. String commands not supported for security."
    );
  }

  // Execute via the Kubernetes client (no shell interpretation)
  const exec = new k8s.Exec(kc);
  await exec.exec(namespace, input.name, input.container, input.command, ...);
}

Why this matters: By forcing commands to be arrays instead of strings, they completely eliminate shell injection attacks. You can't inject ; rm -rf / when the command is ["kubectl", "get", "pods"].
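You can demonstrate the difference with Node's child_process — a toy example, not code from either server:

```typescript
import { execFileSync } from "child_process";

// With execFileSync, each array element reaches the program as a literal
// argument. No shell ever parses the string, so metacharacters like ';'
// and '&&' have no special meaning.
const hostile = "; rm -rf /";
const output = execFileSync("echo", [hostile]).toString();
// output is the literal text "; rm -rf /" -- nothing was executed
```

Contrast with execSync(`echo ${hostile}`), where the shell would see the semicolon as a statement separator and happily run whatever follows.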

The SQL Injection (Responsible Disclosure Edition)

Now for the critical finding. We discovered a SQL injection vulnerability in @f4ww4z/mcp-mysql-server, a community-built MCP server for MySQL databases.

The Issue: The server accepts SQL queries with optional parameterization. While it supports safe parameterized queries, nothing forces you to use them.

Here's the vulnerable pattern:

// From src/index.ts:357
const [rows] = await this.connection!.query(args.sql, args.params || []);

See that || []? That means if the AI agent doesn't provide parameters, the raw SQL string executes as-is.

Attack Scenario:

  1. User sends a malicious prompt to their AI agent
  2. AI generates SQL with injected code: "SELECT * FROM users WHERE username = 'admin' OR '1'='1'"
  3. No params provided, so query executes raw
  4. Attacker gains unauthorized access

Why This Is Tricky:

Traditional SQL injection happens when developers concatenate user input. But here, the AI is writing the SQL. And if the AI doesn't consistently use parameterized queries (say, due to prompt injection), you get SQL injection as a side effect.

The Fix:

// BEFORE (risky)
await connection.query(userSQL, userParams || []);

// AFTER (secure)
if (!userParams || userParams.length === 0) {
  throw new Error("Parameterized queries required for security");
}
await connection.query(userSQL, userParams);

Enforce security by default, not as an optional best practice.

Note: We've responsibly disclosed this to the maintainer and are awaiting a patch. We're not publishing exploit code.

Expanded Findings Across 33 Servers

Beyond the SQL injection, our expanded scan of 33 servers revealed several concerning patterns:

Environment Variable Leakage (Medium - 15 findings)

Multiple MCP servers accidentally expose API keys, tokens, and secrets through error messages, logs, or LLM context windows. This was among the most common medium-severity patterns we found.
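One mitigation is to scrub known secret values from any text before it leaves the server. Here's a rough sketch — the helper name and key list are our own, not from any scanned server, and a real implementation should also cover stack traces and structured log fields:

```typescript
// Hypothetical redaction helper: replace the values of sensitive-looking
// environment variables before an error message reaches logs or the LLM.
const SENSITIVE_KEY_FRAGMENTS = ["API_KEY", "TOKEN", "SECRET", "PASSWORD"];

export function redactSecrets(message: string): string {
  let out = message;
  for (const [key, value] of Object.entries(process.env)) {
    const looksSensitive = SENSITIVE_KEY_FRAGMENTS.some((fragment) =>
      key.toUpperCase().includes(fragment)
    );
    // Skip very short values to avoid mangling ordinary words
    if (looksSensitive && value && value.length >= 8) {
      out = out.split(value).join("[REDACTED]");
    }
  }
  return out;
}
```

Running every outbound error message through a filter like this is cheap, and it catches the most embarrassing failure mode: a raw connection string or API key landing in the model's context window.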

Overly Broad Permissions (Medium - 22 findings)

Servers requesting full filesystem access when they only need specific directories. This violates least privilege and expands the blast radius.

Dependency Chain Risks (Medium - 18 findings)

Packages with deep transitive dependency trees, some containing unmaintained or vulnerable packages. Your server might be secure, but its supply chain introduces risk.

Missing Input Validation (Low - 31 findings)

Parameters accepted without type checking, length limits, or format validation. Not immediately exploitable, but creates attack surface.
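Even a small validator closes most of this surface. Here's what "type checking, length limits, and format validation" can look like in practice — an illustrative sketch, not code from any scanned server:

```typescript
// Minimal parameter validation sketch: check type, length, and format
// before the value reaches a tool implementation.
export function validateTableName(input: unknown): string {
  if (typeof input !== "string") {
    throw new Error("table name must be a string");
  }
  if (input.length === 0 || input.length > 64) {
    throw new Error("table name must be 1-64 characters");
  }
  if (!/^[A-Za-z0-9_]+$/.test(input)) {
    throw new Error("table name may only contain letters, digits, and _");
  }
  return input;
}
```

Three lines of checks per parameter is usually all it takes, and schema libraries can generate them for you.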

The Pattern: Good vs Bad

Here's what separates secure MCP servers from vulnerable ones:

✅ Good Security Patterns

1. Commands as Arrays, Not Strings

// GOOD ✅
execFileSync("kubectl", ["get", "pods", "-n", namespace]);

// BAD ❌
execSync(`kubectl get pods -n ${namespace}`);

2. Enforce Parameterization, Don't Just Support It

// GOOD ✅
if (!params || params.length === 0) {
  throw new Error("Parameterized queries required");
}

// BAD ❌
const params = args.params || []; // Optional = risky

3. Layered Path Validation

// GOOD ✅
1. Normalize the path
2. Resolve symlinks  
3. Check against allowed directories
4. THEN perform file operation

// BAD ❌
fs.readFile(userInput); // YOLO

🚨 Red Flags to Watch For

  • Optional security features — If safe usage is optional, it WILL be misused
  • String-based command execution — exec() with string interpolation = command injection
  • Trusting AI-generated input — LLMs can be prompt-injected; validate on the server
  • No input validation — "The AI will always do the right thing" is not a security model

The Bigger Picture: LLM-Mediated Attacks

Here's the paradigm shift: MCP servers aren't just being called by humans writing code. They're being called by AI agents, which are influenced by user prompts, which can contain malicious instructions.

Traditional security model:

Human writes code → Code calls API → API validates input

New MCP model:

User writes prompt → AI generates code → MCP server executes
                ↑ prompt injection possible

This means MCP servers must not trust AI-generated input. Defense in depth is critical.
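Concretely, "not trusting AI-generated input" means re-validating on the server side regardless of what the model produced. For a database tool, one option is enforcing single, read-only statements — an illustrative sketch, not a complete SQL parser:

```typescript
// Defense-in-depth sketch: even if a prompt-injected agent produces
// hostile SQL, the server only lets a single read-only statement through.
export function assertReadOnlyQuery(sql: string): void {
  const trimmed = sql.trim().replace(/;\s*$/, ""); // allow one trailing ';'
  if (trimmed.includes(";")) {
    throw new Error("multiple SQL statements are not allowed");
  }
  if (!/^select\b/i.test(trimmed)) {
    throw new Error("only SELECT statements are allowed");
  }
}
```

A production server would use a real SQL parser rather than regexes, but the principle stands: the server, not the model, decides what is allowed to run.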

What You Should Do

If You're Building an MCP Server

  1. Study the official servers — The @modelcontextprotocol team nailed it. Copy their patterns.
  2. Enforce security by default — Make dangerous features explicitly opt-in
  3. Use AgentAudit — Scan your server before publishing: npx agentaudit scan
  4. Document security implications — Put warnings in your README

If You're Using MCP Servers

  1. Audit your dependencies — Just because it's on npm doesn't mean it's secure
  2. Use the official servers when possible — They're battle-tested
  3. Run AgentAudit — Check your MCP setup: npm install -g agentaudit && agentaudit scan
  4. Principle of least privilege — Don't give your AI agent more access than it needs

If You're Maintaining the MCP Ecosystem

  1. Publish security guidelines — Give community developers a checklist
  2. Create a validation library — @modelcontextprotocol/validation with common patterns
  3. Security badge program — Verified/audited servers get a badge
  4. Encourage automated scanning — Make AgentAudit part of the CI/CD pipeline

The Bottom Line

MCP is powerful. Like, really powerful. But power requires responsibility.

The good news: The foundation is solid. Official servers demonstrate excellent security practices.

The challenge: Community servers have varying security maturity. And with AI agents in the loop, traditional security assumptions break down.

The solution: Scan your MCP servers. Enforce secure patterns. Don't assume the AI will always do the right thing.

Want to check your MCP servers for vulnerabilities?

👉 Try AgentAudit: https://agentaudit.dev

👉 GitHub: https://github.com/starbuck100/agentaudit-mcp

👉 Install: npm install -g agentaudit

Let's build a secure AI ecosystem together. 🔒


Full research report and code examples available in our GitHub repository. If you find security issues in MCP servers, please practice responsible disclosure.

AgentAudit is open source and free to use. Star us on GitHub if this helped you!
