DEV Community

ecap0

Posted on • Originally published at agentaudit.dev

We Scanned 8 Popular MCP Servers — Here's What We Found

If you're building AI agents with the Model Context Protocol (MCP), security probably isn't the first thing on your mind. You're focused on connecting your LLM to databases, filesystems, and APIs. But here's the thing: MCP servers execute code on behalf of AI agents — and that creates a unique attack surface.

We spent a night analyzing popular MCP servers from the npm registry and GitHub. We looked at official implementations from Anthropic, community favorites with thousands of stars, and niche tools with specialized use cases. We found one real SQL injection vulnerability, several excellent security patterns worth copying, and a few concerning gaps.

Here's what we learned.


What We Scanned

We identified 17 popular MCP servers based on GitHub stars and Smithery.ai usage data, then conducted in-depth code review on 8 of them:

  • @playwright/mcp — Browser automation (27K stars)
  • @modelcontextprotocol/server-filesystem — File operations (official)
  • @f4ww4z/mcp-mysql-server — MySQL database access (~130 weekly downloads)
  • mcp-server-kubernetes (Flux159) — Kubernetes operations (1.3K stars)
  • kubernetes-mcp-server (containers) — Alternative K8s server (1.1K stars)
  • executeautomation/mcp-playwright — Community Playwright wrapper (5.2K stars)
  • mcp-framework — Community framework
  • Plus several official servers for scoping

Our methodology: manual source code review focused on high-risk areas (database queries, file operations, command execution). We looked for common vulnerability patterns like SQL injection, command injection, and path traversal.


The SQL Injection Vulnerability

Package: @f4ww4z/mcp-mysql-server

Severity: Medium-High (CWE-89)

Status: Reported to maintainer

This MySQL MCP server allows AI agents to execute database queries. It supports parameterized queries (the safe way), but doesn't require them. Here's the vulnerable code:

// handleQuery() at line 357
const [rows] = await this.connection!.query(args.sql, args.params || []);

The problem? args.params is optional. If the LLM doesn't provide parameters (or provides an empty array), the raw SQL string executes directly against the database.

Attack scenario:

  1. User sends prompt: "Show me all users'; DROP TABLE users--"
  2. LLM generates: {"sql": "SELECT * FROM users'; DROP TABLE users--", "params": []}
  3. Server executes the malicious SQL
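The hand-off above can be sketched with a mock connection. This is an illustrative reconstruction, not the package's actual internals: `handleQuery` and the fake driver are stand-ins, but the pattern — params defaulting to an empty array, raw SQL passed straight through — mirrors the vulnerable code.

```typescript
type QueryArgs = { sql: string; params?: unknown[] };

// Stand-in for a mysql2-style connection that records what it would run.
const executed: string[] = [];
const fakeConnection = {
  query: async (sql: string, _params: unknown[]) => {
    executed.push(sql);
    return [[], []];
  },
};

async function handleQuery(args: QueryArgs) {
  // Mirrors the vulnerable pattern: params defaults to [], so whatever
  // SQL string the LLM produced reaches the driver untouched.
  return fakeConnection.query(args.sql, args.params ?? []);
}

// An LLM tricked by prompt injection emits stacked statements:
void handleQuery({ sql: "SELECT * FROM users'; DROP TABLE users--" });
console.log(executed[0]); // the malicious string reaches the driver verbatim
```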

Why this matters: Unlike traditional apps where a human writes the SQL, here an AI generates it dynamically. Prompt injection attacks can trick the LLM into generating malicious queries — and without enforced parameterization, there's no safety net.

The Fix

The server does implement parameterized queries correctly in some places:

// Line 436 - using ?? for identifiers
const [rows] = await this.connection!.query('DESCRIBE ??', [args.table]);

The fix is simple: enforce parameterization. Reject queries that don't use the params array:

if (!args.params || args.params.length === 0) {
  throw new Error("Security: Parameterized queries required");
}
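A slightly stronger guard is also possible: besides requiring params, check that the number of `?` placeholders matches the number of values supplied, so the LLM can't smuggle literals alongside an unrelated params array. This is a sketch of the idea, not the maintainer's actual fix, and the naive `?` count assumes placeholders never appear inside string literals — which is exactly what enforced parameterization buys you.

```typescript
// Reject any query that isn't fully parameterized (sketch; the naive
// regex count assumes no literal `?` characters inside the SQL text).
function assertParameterized(sql: string, params: unknown[] | undefined): void {
  if (!params || params.length === 0) {
    throw new Error("Security: Parameterized queries required");
  }
  const placeholders = (sql.match(/\?/g) ?? []).length;
  if (placeholders !== params.length) {
    throw new Error(
      `Security: ${placeholders} placeholders but ${params.length} params`
    );
  }
}

assertParameterized("SELECT * FROM users WHERE id = ?", [42]); // ok
// assertParameterized("SELECT * FROM users", []);             // throws
```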

We've disclosed this responsibly to the maintainer.


The Good: Security Patterns Worth Copying

Not everything we found was concerning. Official MCP servers from Anthropic demonstrate excellent security practices that community developers should study.

Pattern 1: Path Traversal Protection (server-filesystem)

The official filesystem server has six layers of path validation:

import path from 'node:path';

export function isPathWithinAllowedDirectories(
  absolutePath: string,
  allowedDirectories: string[]
): boolean {
  // 1. Null byte rejection
  if (absolutePath.includes('\x00')) return false;

  // 2. Normalization
  const normalizedPath = path.resolve(path.normalize(absolutePath));

  // 3. Check containment (exact match, or strictly inside the directory)
  return allowedDirectories.some(dir => {
    const normalizedDir = path.resolve(path.normalize(dir));
    return normalizedPath === normalizedDir ||
      normalizedPath.startsWith(normalizedDir + path.sep);
  });
}

Plus symlink resolution, atomic writes with race condition prevention, and proper error handling. This is how you do filesystem security.
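To see why the normalize-then-prefix approach matters, here's a compact, self-contained restatement of the containment test (POSIX paths assumed; the real server adds the symlink and atomicity layers mentioned above):

```typescript
import path from "node:path";

// Simplified containment check: normalization collapses `..` segments
// before the prefix test, and appending path.sep blocks the
// "/allowed-evil" sibling-directory trick.
function isWithin(candidate: string, allowedDir: string): boolean {
  if (candidate.includes("\x00")) return false;
  const normalized = path.resolve(path.normalize(candidate));
  const dir = path.resolve(path.normalize(allowedDir));
  return normalized === dir || normalized.startsWith(dir + path.sep);
}

console.log(isWithin("/data/reports/q3.csv", "/data")); // true
console.log(isWithin("/data/../etc/passwd", "/data"));  // false: `..` collapsed first
console.log(isWithin("/data-evil/secret", "/data"));    // false: sibling prefix blocked
```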

Pattern 2: Command Execution via Arrays (Kubernetes servers)

Both Kubernetes servers we analyzed execute commands safely using array-based arguments:

import { execFileSync, execSync } from "node:child_process";

// SECURE ✅ -- arguments go to the binary directly, no shell parsing
const command = "kubectl";
const args = ["delete", resourceType, name];
execFileSync(command, args);

// INSECURE ❌ (not found in any server) -- the string is parsed by a shell
execSync(`kubectl delete ${resourceType} ${name}`);

One server even explicitly validates array types:

if (!Array.isArray(input.command)) {
  throw new McpError(
    ErrorCode.InvalidParams,
    "Command must be an array. String commands not supported for security."
  );
}

Why this matters: String concatenation + shell execution = command injection vulnerability. Arrays bypass the shell entirely.
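You can verify the shell-bypass property directly (this sketch assumes a POSIX system with `echo` on the PATH):

```typescript
import { execFileSync } from "node:child_process";

// With execFileSync, metacharacters in an argument arrive as literal
// bytes, because no shell ever parses the command line.
const hostile = "pod-1; rm -rf /";
const out = execFileSync("echo", [hostile]).toString().trim();
console.log(out); // the `;` is just text -- nothing after it executed
```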


The Concerning: Security Gaps in Community Servers

We noticed a pattern: official servers follow mature security practices, while community servers vary widely.

Anti-Pattern: Optional Security Features

Security features that are opt-in rather than enforced create risk. The SQL injection we found is an example — parameterization exists but isn't mandatory.

Knowledge Gap: LLM-Specific Threats

Traditional security assumes humans write code. But in MCP, the LLM writes code (SQL queries, file paths, shell commands). That creates new attack vectors:

  • Prompt injection leading to malicious MCP calls
  • LLMs inconsistently using security features
  • Edge cases where the model "forgets" to parameterize

MCP server developers need to think defensively: don't trust LLM-generated input, even from your own system.
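One concrete way to apply that: allow-list the identifiers the model may touch, and validate their shape before they get anywhere near a query or command. This pattern is illustrative, not taken from any of the audited servers; the table names and limits are assumptions.

```typescript
// Treat LLM-generated identifiers like untrusted user input: check the
// shape first, then check membership in an explicit allow-list.
const ALLOWED_TABLES = new Set(["users", "orders", "sessions"]);
const IDENTIFIER = /^[A-Za-z_][A-Za-z0-9_]{0,63}$/;

function validateTable(name: string): string {
  if (!IDENTIFIER.test(name)) {
    throw new Error(`Invalid identifier shape: ${JSON.stringify(name)}`);
  }
  if (!ALLOWED_TABLES.has(name)) {
    throw new Error(`Table not in allow-list: ${name}`);
  }
  return name;
}

validateTable("orders");                      // ok
// validateTable("users; DROP TABLE users");  // throws: bad shape
```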


By The Numbers

  • 8 servers analyzed in-depth
  • 1 vulnerability found (SQL injection)
  • 100% of servers implement input validation
  • 100% of servers handling commands use array-based arguments
  • 37.5% rated A+ security (official servers)
  • 12.5% need security improvements

The MCP ecosystem is young but maturing fast. Most servers are reasonably secure — but there's room for improvement.


Recommendations

For MCP Server Developers:

  1. Study official servers — The filesystem server is a masterclass in defensive programming
  2. Enforce security by default — Don't make parameterization optional
  3. Validate LLM-generated input — Treat it like untrusted user input
  4. Use array-based command arguments — Never concatenate strings for shell execution
  5. Document security implications — Warn users about SQL injection, command injection risks

For the MCP Community:

  1. Security guidelines — The protocol needs official security best practices
  2. Audit popular servers — Many have thousands of downloads but no security review
  3. Standard validation library — A shared @modelcontextprotocol/validation package could help

For AI Agent Builders:

  1. Review your MCP servers — What are you exposing to your agents?
  2. Principle of least privilege — Only grant necessary permissions
  3. Monitor MCP calls — Log what your agents are doing

What's Next

We plan to expand this research to more servers and develop automated scanning for MCP-specific vulnerability patterns. The intersection of LLMs and traditional security is fascinating — and underexplored.

If you're building MCP servers, we'd love to hear from you. What security challenges are you facing? What patterns have worked well?


About This Research

This scan was conducted using a combination of manual code review and experimental automated analysis. We focused on publicly available MCP servers with significant usage or stars.

Responsible disclosure: We reported the SQL injection vulnerability to the maintainer before publication.

Tools: The scanning approach was developed as part of AgentAudit, an open-source security toolkit for AI agents. If you're interested in MCP security research, check out the project or reach out.


The Bottom Line

The MCP ecosystem is building the pipes that connect LLMs to the real world. That infrastructure needs to be secure.

Official servers set a strong precedent. Community servers are catching up. And there's a growing awareness that LLM-mediated code execution creates unique security challenges.

If you're building with MCP: validate your inputs, enforce security by default, and study the patterns that work. The ecosystem is young enough that we can bake security in from the start.


Want to learn more about MCP security? Follow this series or contribute to the discussion on GitHub.

Top comments (2)

Mykola Kondratiuk

this is really solid work. been building with MCP servers for a few months and the input validation gaps are the ones that scare me most - especially the filesystem server accepting any path without sanitization. we ended up wrapping every MCP call in a permission layer that checks allowed paths/operations before the tool even executes. curious if you found any servers that actually implement proper sandboxing natively?

Vic Chen

Great research. The point about LLM-generated input being fundamentally different from human-written code is something a lot of MCP developers overlook. In traditional apps, you trust the developer to write parameterized queries — but when an LLM is generating the SQL dynamically, you need the server itself to enforce safe patterns.

I've been building AI tools that interact with financial databases (SEC filings, 13F data), and this exact issue came up — we had to make parameterization mandatory at the server layer, not optional. The "trust the caller" model just doesn't work when the caller is a probabilistic model.

The array-based command execution pattern from the K8s servers is a great example of defense-in-depth. Would love to see something like a @modelcontextprotocol/validation package become a community standard.