When you install an MCP server, you're not just adding a feature to your AI assistant. You're handing a piece of software near-unrestricted access to your machine — your files, your credentials, your network connections, sometimes the ability to execute arbitrary code. Most people do this by copying a config snippet from a README without a second thought. That's a problem.
The MCP ecosystem is moving fast, and the infrastructure for evaluating security hasn't kept up. This article is for developers who want to actually understand what they're installing before they install it.
What MCP Servers Can Actually Do
The Model Context Protocol is powerful precisely because of how much access it grants. An MCP server can:
- Read and write files anywhere your user has filesystem permissions
- Make arbitrary network requests
- Execute shell commands
- Access environment variables (which is where API keys, database credentials, and tokens live)
- Persist data locally and transmit it elsewhere
None of this is a bug. It's the feature. MCP servers need this access to be useful — a filesystem MCP server needs to read your files, a database MCP server needs your connection credentials. But "needs this access" and "can be trusted with this access" are two different questions, and most users conflate them.
The threat model isn't primarily "the developer is malicious" (though that does happen). It's more mundane: abandoned repositories with unpatched dependencies, credentials inadvertently logged, overly broad permission scopes that expose more than intended, and supply chain attacks through transitive dependencies.
What Can Go Wrong
Data exfiltration is the obvious one. An MCP server that has file system access and makes outbound HTTP calls can, technically, read your private files and send them somewhere. This doesn't require the original developer to be adversarial — a supply chain compromise in a dependency can introduce this after the fact, and if the repository isn't actively maintained, nobody's reviewing the packages.
Credential theft is more targeted. If an MCP server reads your environment variables (legitimately, to find an API key it needs), a compromised version of that server can harvest every other key in your environment at the same time. AWS keys, OpenAI API keys, GitHub tokens — all of it is there.
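To see how little code this takes, here's a sketch of the harvesting step a compromised server could perform with its otherwise-legitimate environment access. The name patterns are illustrative, not an exhaustive list of what attackers look for:

```python
import os
import re

# Variable names that commonly hold secrets (illustrative, not exhaustive).
SECRET_PATTERN = re.compile(r"KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL", re.IGNORECASE)

def harvest_env_secrets(env=None):
    """Return every environment variable whose name looks like a secret.

    A server that legitimately reads one API key has exactly this much
    access to everything else in the environment.
    """
    env = os.environ if env is None else env
    return {name: value for name, value in env.items() if SECRET_PATTERN.search(name)}
```

From here, a single outbound HTTP request is all it takes to exfiltrate the result, which is why what a server can see matters more than what it says it reads.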
Abandoned servers are the risk most people ignore entirely. The MCP ecosystem has hundreds of community-built servers, many of which were written quickly, published, and then not touched again. An unmaintained server running on your machine means unpatched CVEs in its dependencies, no response if you report a vulnerability, and no one updating its dependencies as new vulnerabilities are disclosed in them.
Supply chain attacks are the hardest to defend against. Your MCP server might be written by a developer you trust. But that server has dependencies, and those dependencies have dependencies. One compromised package three levels deep can give an attacker whatever access your server has.
What Most Directories Currently Offer
Not much. Smithery, Glama, and most other MCP directories aggregate community submissions. They're useful for discovering servers, but they list them with little or no security auditing. There's no indication of whether a server has been reviewed for overly broad permissions, whether its dependencies have known CVEs, or when it was last updated. Some directories surface star counts and GitHub metadata, which is marginally useful, but a server with 1,200 stars can still be running a dependency with a published CVE from eight months ago.
This isn't a criticism of directory maintainers specifically — auditing every server in a fast-moving ecosystem is a significant undertaking. It's just the current reality, and developers should know it going in.
What a Proper Security Audit Looks Like
If you're evaluating an MCP server seriously, here's what that should actually cover:
Dependency scanning. Run the dependency tree through a tool like npm audit, pip-audit, or osv-scanner and look for known vulnerabilities. Pay attention to severity — a critical CVE in a package that handles network requests is more concerning than a low-severity issue in a dev dependency.
Authentication review. How does the server handle credentials? Does it require them to be passed as environment variables (acceptable), does it read from a config file (check what permissions that file has), or does it accept them inline in ways that might get logged (bad)?
Permission scope analysis. Does the server request access to everything, or does it scope its permissions to what it actually needs? A markdown-rendering MCP server probably doesn't need filesystem write access. If it asks for it anyway, that's a red flag.
Maintenance status check. When was the last commit? When were dependencies last updated? Are there open security issues that haven't been addressed? A repository with the last commit two years ago and ten open issues mentioning potential vulnerabilities is not the same as an actively maintained one.
Known CVE scanning. Check NVD and OSV for any CVEs that reference the package or its direct dependencies. This takes five minutes and can save you significant headaches.
The Three-Tier Rating Approach
One framework that makes sense is a three-tier classification: Verified, Caution, and Untested. Coda One's MCP security methodology uses an 8-point audit checklist to assign these ratings across the MCP servers it reviews.
The logic is straightforward: Verified means the server passed all checks — clean dependencies, appropriate permission scope, active maintenance, no known CVEs. Caution means it passed with caveats that don't necessarily disqualify it but warrant knowing about — maybe it's well-written but hasn't been updated in six months, or it requests slightly broader permissions than strictly necessary. Untested means it hasn't been reviewed yet.
"Untested" is the honest answer for most of the ecosystem. It's not a cop-out — it's accurate. The alternative is false confidence, and false confidence in this context is dangerous. If a directory tells you every server it lists is "safe" without explaining what that assessment is based on, that's a marketing claim, not a security guarantee.
What You Should Check Before Installing Anything
Star count tells you about popularity, not security. Here's what actually matters:
Look at open issues and PRs. Are there open security reports? Has the maintainer responded to them? An unacknowledged security issue report from three months ago is a meaningful signal.
Check the dependency list. For Node-based servers, look at package.json. For Python, look at requirements.txt or pyproject.toml. Run an audit against it. This takes two minutes and is the single highest-return action you can take.
Review what environment variables it reads. The README usually documents this. If it reads AWS_ACCESS_KEY_ID but you can't explain why a Notion integration needs AWS credentials, ask that question before installing.
Check the last commit date and release cadence. Not because old code is automatically bad, but because a server with dependencies it hasn't updated in 18 months almost certainly has known vulnerabilities by now.
Look at what permissions the server actually requests. The MCP spec gives servers the ability to declare their capabilities. If a server declares filesystem write access for functionality that doesn't obviously require it, that's worth understanding before proceeding.
The MCP ecosystem is one of the more interesting developments in how we use AI tooling. It's also, right now, a place where the security culture hasn't fully caught up to the adoption curve. That gap will close — but it probably won't close before you need to make decisions. Don't outsource those decisions to a star rating.