Michael Kayode Onyekwere

Originally published at agentscores.xyz

Continuous monitoring caught a credential leak in a published MCP package. Six republishes later, it is still there.

This is a disclosure writeup. It describes the case at the class level only. No credential values are quoted anywhere in this post.

What was found

The package is fa-mcp-sdk on npm. It is distributed as a Model Context Protocol SDK, which means it is installed by agent-framework tooling (Claude, Cursor, OpenAI agents, custom MCP clients) typically via npm install fa-mcp-sdk or npx -y fa-mcp-sdk. Because that install path runs without manual review in most agent setups, anything inside the published tarball reaches consumers immediately on first install.
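To make that concrete, here is the shape of a typical client entry, following the standard Claude Desktop mcpServers config layout. The server label is illustrative, not taken from any affected consumer:

```json
{
  "mcpServers": {
    "fa-mcp": {
      "command": "npx",
      "args": ["-y", "fa-mcp-sdk"]
    }
  }
}
```

With no version suffix on fa-mcp-sdk, every agent restart resolves whatever the registry currently serves as latest.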

On 2026-04-19 a continuous scanner I run flagged the package on a fresh publish. The score dropped sharply, and the finding type was hardcoded_secret at critical severity. On manual review I found a file at package/config/local.yaml containing real production credentials. The classes:

  • An OpenAI / LiteLLM API key tied to a named user at an internal LLM gateway
  • An Active Directory service-account username and password covering four LDAP controllers across two production domains in a financial-services estate
  • Consul ACL tokens for both dev and prod data centres
  • A Postgres superuser password
  • A JWT encryption key
  • A basic-auth password

The combination is what makes this severe, not any single credential. The LDAP service account alone gives an attacker bind access to enumerate the entire Active Directory of the affected organisation. The Consul prod token likely exposes further secrets in their KV store. The Postgres superuser password is direct database access on a financial-services platform. Anyone who installed the package and read that one file had everything needed to map the network, escalate privileges, and exfiltrate data.

Why MCP supply chain is structurally different

The regular npm supply chain has a long-established body of guidance, lockfiles, audit tooling, and review culture around it. MCP is new, and four properties of how MCP packages are typically used make the threat model materially worse:

  1. No lockfiles in most MCP client configs. Claude Desktop, Cursor, and similar agent frontends launch MCP servers from a config file that uses npx -y with no version pin. Every restart pulls latest. The npm ecosystem solved this with package-lock.json years ago; MCP has not yet picked it up.
  2. Install paths bypass review. When an agent framework starts an MCP server, no human looks at what was installed. The pattern is closer to running a remote shell command than installing a dependency. Any code in the published tarball runs on the consumer's machine the first time the agent boots.
  3. The capability surface is opaque. A typical npm dependency exposes functions a developer chooses to import. An MCP package declares a set of tools that the agent can autonomously decide to call. Adding email_messaging or shell_exec between minor versions is a meaningful scope change that consumers rarely notice.
  4. Publisher posture is patchy. Many MCP packages have no provenance attestations, no repository link, no published security contact, no release pipeline that excludes config files. Across 880 currently-monitored MCP packages, missing-provenance is by far the most common finding type, accounting for 64% of all flagged issues.

Put together, an unpinned npx -y install of an MCP package gives the publisher more authority over the consumer's runtime than a typical npm dependency does, with less review and less visibility.

The disclosure timeline

I held the public disclosure for six days while attempting private remediation through several channels. None worked.

| Date (UTC) | Action |
| --- | --- |
| 2026-04-19 | First private email to the maintainer's published npm contact address with the full credential list and recommended remediation (unpublish, rotate, add .npmignore). No reply. |
| 2026-04-19/20 | Maintainer published 0.4.58 and 0.4.59. local.yaml unchanged. |
| 2026-04-20 | Second private email to the maintainer's GitHub-listed address, citing the file path explicitly. No reply. |
| 2026-04-20 | Third (urgent) private email listing four of the six credential values verbatim, in case the previous emails had not been read carefully. No reply. |
| 2026-04-22 | Maintainer published 0.4.69. A sanitised template was added at _local.yaml, but the original local.yaml was left in place: the maintainer touched the same directory and still left the file. |
| 2026-04-22 | Escalation email to security@npmjs.com requesting a force-unpublish of the affected versions or a registry-level advisory. No (visible) action. |
| 2026-04-23 | Maintainer published 0.4.70 and 0.4.71. File still present. |
| 2026-04-25 | Public class-level GitHub issue filed on the affected repository. AgentScore advisory AGENTSCORE-2026-0012 published. CVE request submitted to GitHub Security Lab. |

I extracted the latest published tarball (0.4.71) on 2026-04-25 to verify before going public. Ten of the original credential pattern matches were still present in package/config/local.yaml. Same file, same values, six republishes since the first private disclosure.
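The check needs nothing beyond stock npm and tar, and anyone can repeat it. This is the sequence, assuming npm's default tarball naming:

```sh
# Download the published tarball without installing or executing anything
npm pack fa-mcp-sdk@0.4.71

# List the tarball contents; npm prefixes every entry with "package/"
tar -tzf fa-mcp-sdk-0.4.71.tgz | grep 'config/'

# Extract only the flagged file for local inspection
tar -xzf fa-mcp-sdk-0.4.71.tgz package/config/local.yaml
```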

What I think happened internally

This is speculation, but it fits the visible pattern. Someone at the maintainer organisation added _local.yaml (the underscore-prefixed sanitised template) to the config/ directory, probably in response to one of my emails or an internal review. That same person, or a different person, did not notice or did not act on the original local.yaml sitting next to it. The publish pipeline ships the entire config/ directory unconditionally, so every subsequent release continues to leak the credentials regardless of any other code changes shipped alongside.

If that read is right, the actual fix is one line in package.json:

"files": [
  "dist",
  "README.md"
]
Enter fullscreen mode Exit fullscreen mode

Or one entry in .npmignore:

```
config/local.yaml
```

That one-line change has not landed across six versions and six days. Whatever process the maintainer organisation has for receiving and acting on security reports has not produced a remediation in this window.

Where this sits in the broader MCP ecosystem

Numbers are from the live AgentScore monitoring dataset on 2026-04-25, covering 880 MCP packages on npm and 9,129 scans on record:

  • Across the most recent 500 scans, 87% of MCP packages produce at least one finding. Most of those findings are not vulnerabilities; they are verifiability gaps (no_provenance, no_repository).
  • no_provenance accounts for 64% of all findings. The MCP ecosystem has not yet adopted npm provenance attestations at scale.
  • Source-code findings are rarer but present: command_injection patterns appear in 10% of findings, unsafe_eval in 2%, excessive_dependencies in 1%.
  • 8.6% of 720 sampled packages publish at least one install-time script. Most are benign banners; the pattern is a known supply-chain vector either way (a pre-install check is shown after this list).
  • hardcoded_secret is the rarest finding type the scanner produces, and after the April 22 v2.1 update added context-aware downgrade for test fixtures, it surfaces only on packages with genuine credential exposure. In a recent 500-scan sample with that downgrade applied, zero packages flagged for it. That is what makes the fa-mcp-sdk case unusual: real credentials in a published tarball, not a test fixture.
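On the install-script point above: what a package declares can be inspected before anything touches your machine. Both commands are stock npm; fa-mcp-sdk is just the running example, and some-mcp-server is a placeholder name:

```sh
# Show a package's lifecycle scripts (preinstall/install/postinstall) without installing it
npm view fa-mcp-sdk scripts --json

# Or refuse to run lifecycle scripts at install time entirely
npm install some-mcp-server --ignore-scripts
```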

The risk distribution across the monitored set is healthy overall: 85% of packages score LOW, 12% MODERATE, 2% ELEVATED, 0.5% HIGH. The supply-chain attack surface for MCP is not concentrated in the average package; it is concentrated in the long tail. Continuous monitoring exists to surface that tail.

Lessons for MCP consumers

If you install MCP packages without pinning, this is the kind of thing that ends up on disk in your build environment. Four concrete moves:

  1. Pin your MCP dependencies to exact versions. Do not use npx -y or open ranges; a pinned config sketch follows this list. The Redis team did this for RedisInsight in April 2026 after a separate scan report flagged five unpinned MCP packages there. Two days from report to every MCP version pinned in their config. The full case is at agentscores.xyz/case-study/redis.
  2. Treat install scripts on MCP packages as a manual-review gate. Most are benign banners, but the pattern is a classic supply-chain vector. The Policy Gate flags their presence so a human decides.
  3. Re-evaluate capability changes between version bumps. A package that adds email_messaging, filesystem_write, or shell_exec between minor versions is a scope change, not a routine update. The Agions case at agentscores.xyz/case-study/agions shows what a four-day arc looks like when a maintainer engages: scan report, targeted patch in 48h, then a major-version structural cleanup that removed seven capabilities from the tool surface.
  4. Watch a public advisory feed for the packages you have in your inventory. Score drops on packages already installed are the early-warning signal. RSS at agentscores.xyz/security/advisories/rss.xml.
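For the pinning step, the config change is the same Claude Desktop shape sketched earlier, with an exact version in the args. The package name and version here are placeholders, not a recommendation of any specific release:

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "some-mcp-server@1.4.2"]
    }
  }
}
```

With an exact version pinned, npx resolves the same tarball on every restart instead of whatever latest happens to be.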

The two case studies above bracket the full range of maintainer responses we have seen. Most maintainers, when their package is flagged, behave like Redis or Agions. The fa-mcp-sdk case is what continuous monitoring catches in the rare instance where they do not.

How this got caught

The scanner that flagged fa-mcp-sdk on its first publish runs continuously over the npm registry feed. It is not a manual research project, and it does not require the maintainer to opt in. When a new version of any monitored MCP package is published, the scanner has a result within minutes. That detection mechanism is what made the six-republishes-in-six-days timeline visible. Without continuous monitoring, the only way to find a credential leak in a tarball is for an individual consumer to manually inspect what they installed, which essentially never happens. Full advisory at agentscores.xyz/security/advisories.

What happens next

If fa-mcp-sdk@0.4.72 ships with local.yaml removed and the credentials rotated, the AgentScore monitor will detect the change and the advisory will move to a resolved state. If not, the GitHub Security Lab CVE request and the resulting GitHub Advisory Database entry will route the finding into Dependabot, Snyk, Socket, and the rest of the dependency-scanning ecosystem automatically. Either path closes the consumer exposure.

If you maintain an MCP package: please add config/local*.yaml, .env, and any local development config files to .npmignore or to the files array in package.json today. The pattern in this case is preventable with a one-line change, and the cost of getting it wrong scales with how many people install the package.
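Maintainers can also verify what the next tarball will contain before shipping anything; this is stock npm:

```sh
# Print exactly what the tarball would contain, without publishing;
# config/local.yaml should not appear in the file list
npm publish --dry-run
```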

