# I audited 25 top npm packages with a zero-install CLI. Here's who passes.
```shell
npx proof-of-commitment react zod chalk lodash axios typescript
```
That's it. No install, no API key, no account. Run it against any package — or drop your package.json at getcommit.dev/audit.
I ran it against 25 of the most downloaded npm packages. Here's what the data shows — and the results are worse than I expected.
## The scoring model
Five behavioral dimensions, all from public registry data:
| Dimension | Max | What it measures |
|---|---|---|
| Longevity | 25 | Package age — time in production is signal |
| Download Momentum | 25 | Weekly downloads + trend direction |
| Release Consistency | 20 | Cadence, recency, gaps |
| Maintainer Depth | 15 | Number of active maintainers |
| GitHub Backing | 15 | Star traction, repo activity |
CRITICAL = 1 maintainer + >10M weekly downloads. Same profile as the LiteLLM attack (March 2026) and the axios compromise (April 1st, 2026).
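To make the model concrete, here is a hypothetical sketch of how such a score could be computed. The dimension caps (25/25/20/15/15) come from the table above and the CRITICAL rule is stated in the post, but the individual formulas are my guesses, not proof-of-commitment's real implementation:

```javascript
// Hypothetical scoring sketch: five dimensions with the caps from the
// table above (25/25/20/15/15). The formulas are illustrative guesses,
// not proof-of-commitment's actual weighting.
function scorePackage(pkg) {
  const longevity = Math.min(25, pkg.ageYears * 2.5);                // caps at 10 years
  const momentum  = Math.min(25, Math.log10(pkg.weeklyDownloads + 1) * 3);
  const releases  = Math.min(20, pkg.releasesLastYear * 2);          // caps at 10 releases/yr
  const depth     = Math.min(15, pkg.maintainers * 3);               // caps at 5 maintainers
  const github    = Math.min(15, Math.log10(pkg.stars + 1) * 3);
  const total = Math.round(longevity + momentum + releases + depth + github);

  // The one rule the post states explicitly:
  // 1 maintainer + >10M weekly downloads = CRITICAL, regardless of total.
  const critical = pkg.maintainers === 1 && pkg.weeklyDownloads > 10_000_000;
  return { total, risk: critical ? "CRITICAL" : total >= 80 ? "SAFE" : "WARN" };
}
```

Note that a high total and a CRITICAL flag are not mutually exclusive: an esbuild-shaped package scores well on longevity and momentum and is still flagged, because maintainer depth is treated as an override, not just one weighted input.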
## The data: 25 packages scored (live, April 17, 2026)
| Package | Score | Risk | Maintainers | Downloads/wk |
|---|---|---|---|---|
| webpack | 100 | ✅ SAFE | 8 | 44M |
| prettier | 100 | ✅ SAFE | 11 | 87M |
| rollup | 99 | ✅ SAFE | 5 | 102M |
| typescript | 98 | ✅ SAFE | 6 | 178M |
| express | 97 | ✅ SAFE | 5 | 93M |
| jest | 95 | ✅ SAFE | 5 | 44M |
| tailwindcss | 95 | ✅ SAFE | 3 | 89M |
| fastify | 95 | ✅ SAFE | 5 | 6M |
| dotenv | 93 | ✅ SAFE | 3 | 120M |
| react | 91 | ✅ SAFE | 2 | 122M |
| eslint | 91 | ✅ SAFE | 2 | 125M |
| vite | 91 | ✅ SAFE | 4 | 105M |
| next | 91 | ✅ SAFE | 2 | 36M |
| prisma | 91 | ✅ SAFE | 2 | 10M |
| drizzle-orm | 87 | ✅ SAFE | 4 | 7M |
| uuid | 82 | ✅ SAFE | 2 | 239M |
| axios | 89 | 🔴 CRITICAL | 1 | 101M |
| esbuild | 88 | 🔴 CRITICAL | 1 | 190M |
| lodash | 87 | 🔴 CRITICAL | 1 | 145M |
| nodemon | 86 | 🔴 CRITICAL | 1 | 12M |
| sharp | 84 | 🔴 CRITICAL | 1 | 51M |
| zod | 83 | 🔴 CRITICAL | 1 | 158M |
| hono | 82 | 🔴 CRITICAL | 1 | 34M |
| chalk | 75 | 🔴 CRITICAL | 1 | 413M |
| ts-node | 59 | ⚠️ WARN | 2 | — |
## What stands out
esbuild has 190M weekly downloads. One maintainer. Evan Wallace built one of the most important tools in the JavaScript ecosystem — the bundler that powers Vite, Next.js, and dozens of other frameworks. It's exceptional engineering. It's also a single point of failure for roughly half the JavaScript build toolchain. If something happens to Evan's npm token, the blast radius is enormous.
That's more downloads than TypeScript (178M/wk). TypeScript has 6 maintainers. esbuild has 1.
sharp is downloaded ~51M times a week. One maintainer. It handles server-side image processing for most Node.js production deployments, and it ships native bindings, so a malicious version would be both hard to detect and devastating.
Chalk (413M downloads/week) is still the biggest exposure. The most downloaded package on npm that's sole-maintained. It colors your terminal output. Every project that has a CLI, every build script, every logging framework — chalk is in there. One token compromise.
The "safe" packages earn it. webpack (score=100) has 8 maintainers, 44M weekly downloads, and 15 years of shipping. prettier has 11 maintainers. typescript is Microsoft-backed. These packages would survive a maintainer leaving. The CRITICAL packages wouldn't.
The axios attack on April 1st proved the model. A compromised npm token published a malicious version of axios in minutes. npm audit showed zero issues beforehand. The behavioral score had flagged it CRITICAL for months (1 maintainer, 100M downloads/week = prime target).
## Why this matters now
Three patterns converged in early 2026:
AI-assisted supply chain attacks are getting faster. Identifying a high-value target (1 maintainer + massive downloads), generating a plausible malicious payload, and timing the publish to a token compromise — all of this can be automated.
npm audit waits for CVEs. Its advisory database catches known vulnerabilities but has nothing to say about structural risk. The two tools answer different questions; you need both.
Transitive dependencies hide the risk. I audited `@anthropic-ai/sdk`: score=86, 14 maintainers, looks solid. But two levels deep: `json-schema-to-ts` (CRITICAL, sole maintainer, 12M downloads/week). You'd never find that in a direct audit.
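One way to surface that transitive risk yourself is to walk the tree that `npm ls --all --json` prints and cross-reference a maintainer-count lookup. A minimal sketch, assuming the nested `dependencies` shape of `npm ls --json` output and taking the counts as a plain map (in practice they'd come from the registry):

```javascript
// Recursively walk an `npm ls --all --json` style tree and report the
// dependency chain to every package the lookup says is sole-maintained.
function flagSoleMaintained(tree, maintainerCounts, path = [], out = []) {
  for (const [name, info] of Object.entries(tree.dependencies ?? {})) {
    const chain = [...path, name];
    if (maintainerCounts[name] === 1) out.push(chain.join(" > "));
    flagSoleMaintained(info, maintainerCounts, chain, out); // descend into transitive deps
  }
  return out;
}
```

Feeding it a real tree means one registry lookup per unique package (e.g. `npm view <pkg> maintainers`), which is exactly the legwork the CLI automates.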
## How to use it
Zero install (try it now):
```shell
npx proof-of-commitment axios zod chalk hono esbuild

# Against your own project:
npx proof-of-commitment --file package.json

# PyPI too:
npx proof-of-commitment --pypi litellm langchain requests
```
GitHub Action (posts table directly on your PR):
```yaml
- uses: piiiico/proof-of-commitment@main
  with:
    fail-on-critical: false
    comment-on-pr: true
```
MCP server (zero install, works with Claude Desktop/Cursor/Windsurf):
```json
{
  "mcpServers": {
    "proof-of-commitment": {
      "type": "streamable-http",
      "url": "https://poc-backend.amdal-dev.workers.dev/mcp"
    }
  }
}
```
Then: "Audit the dependencies in vercel/ai" — it fetches the package.json, scores everything, returns a risk table.
Web demo: getcommit.dev/audit — paste packages or drop your package.json.
What surprises you most? esbuild? The dotenv result? And what signals matter most to you — maintainer count, release recency, something else?