Arnaud
I Built MCP Servers in Rust. Here Are the 5 Security Mistakes I See in Every Deployment.

Thirty CVEs in sixty days. That's the count for MCP server vulnerabilities filed between January and March 2026. One of them scored CVSS 9.6 — a remote code execution that affected 437,000+ installations.

I've been building MCP servers in Rust for the past year. I designed the security layer for a spec-driven development CLI that uses MCP to orchestrate AI agents. I've also watched the ecosystem grow from a few experimental servers to something enterprises are deploying in production. The security posture of most deployments terrifies me.

Here are the five mistakes I see everywhere.

1. Static API Keys in Environment Variables

The Astrix Security report found that 53% of MCP servers authenticate with static, long-lived secrets. API keys in .env files, personal access tokens passed as environment variables. Only 8.5% use OAuth.

I get why. The MCP quickstart guides show you how to set API_KEY=mytoken123 and move on. It works. It's fast. And it means that anyone who gains read access to your environment — through a leaked Docker image, a misconfigured CI pipeline, a compromised dependency — has permanent access to your MCP server.

Static keys don't expire. They don't rotate. They can't be scoped to specific tools or operations. When one leaks, you don't know until something breaks.

The fix isn't complicated. OAuth 2.1 with PKCE works for MCP. I implemented it with jose for JWT validation and zod for request parsing — about 80 lines of middleware. Short-lived tokens (15-minute TTL), automatic rotation, proper scope management. The migration path from static keys to OAuth takes about a day if your server is reasonably structured.
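To make the claims side of that middleware concrete, here is a minimal sketch of the checks that run after signature verification (which jose's `jwtVerify` handles). The `TokenClaims` shape, `authorizeToolCall` name, and scope strings are illustrative, not part of any MCP spec:

```typescript
interface TokenClaims {
  exp: number;   // expiry, seconds since epoch
  scope: string; // space-separated OAuth scopes
  sub: string;   // subject (the agent or user identity)
}

// Runs after the JWT signature has been verified. Enforces a short TTL
// and per-tool scopes, so a leaked token is both short-lived and narrow.
function authorizeToolCall(
  claims: TokenClaims,
  requiredScope: string,
  nowSec: number = Math.floor(Date.now() / 1000)
): void {
  if (claims.exp <= nowSec) {
    throw new Error("token expired");
  }
  if (claims.exp - nowSec > 15 * 60) {
    // Reject tokens minted with a longer lifetime than policy allows.
    throw new Error("token TTL exceeds 15-minute policy");
  }
  if (!claims.scope.split(" ").includes(requiredScope)) {
    throw new Error(`missing scope: ${requiredScope}`);
  }
}
```

Each tool handler calls this with its own scope (e.g. `files:read` for a read tool), so a token scoped for reading can never authorize a write or delete.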

2. No Input Validation on File Paths

Eighty-two percent. That's the share of MCP implementations vulnerable to path traversal, according to a survey of 2,614 servers. The same study found 67% exposed to code injection.

Most MCP servers implement a read_file or write_file tool. The implementation looks something like this: take the path from the tool arguments, pass it to fs.readFile(), return the content. No validation. No sandboxing. No allowlist.

An attacker who can influence the tool arguments — through prompt injection, tool poisoning, or a compromised upstream agent — can read /etc/passwd, your .env file, your SSH keys. On cloud instances without IMDSv2 enforced, they can hit http://169.254.169.254/latest/meta-data/iam/security-credentials/ and steal your AWS credentials.

The fix: resolve every path to its canonical absolute form, verify it's within your allowed base directory, check for symlink escapes, and maintain a blocked file list (.env, .git/, credentials.*, service account files). I use a ValidatedPath type in Rust that makes invalid paths unrepresentable at the type level. In TypeScript, a validation function that runs before every file operation.

3. Binding to 0.0.0.0

The Clawdbot incident in January 2026 was a masterclass in what happens when default configurations meet the internet. Admin panels bound to 0.0.0.0:8080 — publicly accessible from the first deployment. Security researchers found 8,000+ MCP servers exposed on the public internet, 492 of them vulnerable to abuse without any authentication.

I've seen this in production deployments at companies that should know better. The quickstart says "run the server," the default config binds to all interfaces, and nobody changes it because it works on the developer's laptop.

Bind to 127.0.0.1. If you need remote access, put it behind a reverse proxy with TLS. If you need cross-machine communication, use mTLS. There's no scenario where an MCP server should be directly accessible on a public interface.

4. Trusting the instructions Field

This one is subtle. The MCP spec has an instructions field where the server can tell the client how to use its tools. "Only call the delete tool after user confirmation." "Never pass user input directly to the execute tool."

I tested this across Claude Desktop, Cursor, Continue, and Windsurf. Most clients ignore it. The instructions are advisory. They're not enforced.

If your security model depends on the client reading and following your instructions, you don't have a security model. I learned this the hard way when building the MCP layer for minter. The solution is what I call belt-and-suspenders: the instructions field for clients that respect it, an unlock-tool pattern that requires explicit user action before destructive operations, smart refusals in every tool handler that validate independently, and system-prompt reinforcement for clients that support it. Three independent layers, each failing safely if the others are bypassed.
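The unlock-tool layer is the easiest to show in isolation. A minimal sketch of the gate (the `DestructiveOpGate` name and single-use TTL semantics are my design, not an MCP primitive):

```typescript
// Gate that destructive tool handlers consult. The unlock tool is a
// separate MCP tool the user must invoke explicitly; the gate expires
// on its own and is consumed by a single destructive operation.
class DestructiveOpGate {
  private unlockedUntil = 0;

  unlock(ttlMs = 60_000): void {
    this.unlockedUntil = Date.now() + ttlMs;
  }

  // Called at the top of every destructive tool handler.
  assertUnlocked(): void {
    if (Date.now() > this.unlockedUntil) {
      throw new Error("destructive operation requires explicit unlock");
    }
    this.unlockedUntil = 0; // single-use: one unlock, one operation
  }
}
```

Because the check lives in the server's own handler, it holds even when the client ignores the instructions field entirely.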

5. No Monitoring for Tool Invocation Patterns

Most MCP deployments have zero observability into how tools are being called. No logging of which tools are invoked, by which agent, with what arguments. No anomaly detection. No alerting.

When the WhatsApp exfiltration attack was demonstrated — a malicious MCP server silently extracting a user's entire message history through tool poisoning — the victim had no way to know it was happening. No logs. No alerts. Nothing.

At minimum, log every tool invocation with a correlation ID, the tool name, argument sizes (not contents — you don't want PII in logs), the requesting agent identity, and a timestamp. Then watch for patterns: cross-server data flows (tool A reads data, tool B sends it externally), unusual argument sizes, tools being called outside business hours, rapid sequential calls to different tools (tool spraying).
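A minimal sketch of that log entry in TypeScript (the shape and field names are illustrative; the key property is that argument sizes are recorded, never contents):

```typescript
import { randomUUID } from "node:crypto";

interface ToolInvocationLog {
  correlationId: string;
  tool: string;
  agent: string;
  argBytes: number; // size only; contents never touch the log
  timestamp: string;
}

function logInvocation(
  tool: string,
  agent: string,
  args: unknown
): ToolInvocationLog {
  const entry: ToolInvocationLog = {
    correlationId: randomUUID(),
    tool,
    agent,
    argBytes: Buffer.byteLength(JSON.stringify(args), "utf8"),
    timestamp: new Date().toISOString(),
  };
  // Write to stderr: for stdio-transport servers, stdout carries MCP traffic.
  console.error(JSON.stringify(entry));
  return entry;
}
```

Structured JSON entries like this feed directly into whatever anomaly detection you run downstream, and the correlation ID lets you stitch a cross-server data flow back together after the fact.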

The Gap Between Checklists and Code

Free resources exist. OWASP published MCP security guides. SlowMist has a checklist on GitHub. There's a community security site with basic recommendations. They're useful for understanding the categories of risk. But they tell you to "implement proper authentication" without showing the middleware. They say "validate inputs" without a path traversal function you can copy.

I put together an 80-page guide that covers all of this with production-ready TypeScript and Rust code, architecture diagrams, real incident case studies, and a 47-point security checklist. Every pattern comes from building and operating MCP servers in production: MCP Security Hardening Guide.

Top comments (2)

klement Gunndu

The 82% path traversal stat is alarming. For mistake #1 though — is OAuth 2.1 with PKCE realistic for local dev MCP servers where the client is a CLI tool on localhost? Seems like the threat model is different there vs cloud-deployed servers, and the migration cost might not justify it for single-user setups.

Arnaud

Yeah, fair point. OAuth for a localhost single-user setup is overkill. Should've been clearer about that.
The 53% stat is about servers exposed on a network or shared across teams; that's where static keys become a problem. For local dev with one user, a static key is fine; I can't argue with that.
I'd focus on #2 and #3 first regardless of your env. Even on localhost, losing creds through an unvalidated read_file will hurt.