npm audit isn't enough: I simulated a supply chain attack on my Node dependencies and found what the scanner can't see
The right answer for protecting a Node project's dependencies is: don't trust npm audit. I know that sounds wrong. It's the official tool, it's in every doc, the green CI badge tells you you're good. But after running the same vector used in the PyTorch Lightning compromise against my own stack, I have to be straight with you: the green badge is the most dangerous part of the entire chain.
Let me tell you what I found.
Supply chain attacks on Node production dependencies: the problem audit doesn't model
When the post about PyTorch Lightning malware dropped I couldn't let it go. I covered it from the ML angle (you can read that here), but the question that kept nagging at me was different: what happens if I run the same vector against my Node dependencies?
Not against a toy project. Against my real stack: Next.js, Railway, PostgreSQL, TypeScript, and a dozen third-party libraries I installed without thinking too hard. The kind of project where you run npm install at 11pm because there's an urgent deploy and you don't bother looking at the postinstall.
Here's the thesis, blunt and clean: npm audit detects known vulnerabilities. A well-executed supply chain attack doesn't use known vulnerabilities — it uses trust. Those are two completely different threat models and the Node ecosystem does a terrible job distinguishing between them.
How I structured the simulation — real methodology, no fake lab
First, let me be clear about what I did and what I didn't do. I didn't publish malicious packages. I didn't infect anything real. I worked in an isolated staging environment, cloned my production package.json, and ran the simulation against that copy. The findings are about documented attack vectors applied to dependencies that exist in my stack right now.
I started with an honest inventory:
# List direct dependencies with their pinned versions
npm list --depth=0 --json | jq '.dependencies | keys'
# Count the full tree — this was the first gut punch
npm list --all 2>/dev/null | wc -l
# Output: 1,847 lines
# Direct dependencies: 23
# Full transitive tree: 847 packages
847 packages for a project with 23 direct dependencies. Every single one of those 847 has a maintainer, has a publish history, and has full filesystem access at install time via postinstall. npm audit reported 0 critical vulnerabilities. Zero.
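You don't even need node_modules on disk to see the install-time exposure. Lockfile v2+ flags packages with install scripts directly via "hasInstallScript", so counting them is a few lines of Node (a sketch over a hypothetical minimal lock shape):

```javascript
// Count packages the lockfile itself flags as having install scripts.
// Lockfile v2+ sets "hasInstallScript": true on those entries, so you
// get this number without reading a single file in node_modules.
function countInstallScriptPackages(lock) {
  return Object.values(lock.packages || {})
    .filter(p => p.hasInstallScript === true).length;
}

// Hypothetical minimal lock shape for illustration
const lock = {
  packages: {
    '': { name: 'my-app' },
    'node_modules/left-pad': {},
    'node_modules/bcrypt': { hasInstallScript: true },
  },
};
console.log(countInstallScriptPackages(lock)); // → 1
```

In a real project you'd pass `require('./package-lock.json')` instead of the inline object.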
Vector 1 — Typosquatting in the transitive tree
Classic typosquatting (publishing lodahs instead of lodash) is well-known. What doesn't get talked about nearly enough is typosquatting inside the transitive tree — a package you trust installs a dependency you've never heard of, and that dependency has a name visually close to something legitimate.
I ran this script against my package-lock.json:
#!/bin/bash
# Extract all packages from the lock and look for suspicious names
# Criterion: name similarity >= 0.85 against top-1000 npm packages
# (scoped packages ignored for now)
jq -r '.packages | keys[]' package-lock.json | \
grep -v "^node_modules/@" | \
sed 's|node_modules/||' | \
sort -u > my_packages.txt
# Compare against list of popular packages
# (downloaded top-1000 from npm registry stats)
while read pkg; do
python3 -c "
import sys
from difflib import SequenceMatcher
name = '$pkg'
with open('npm_top1000.txt') as f:
for line in f:
legit = line.strip()
ratio = SequenceMatcher(None, name, legit).ratio()
# Alert if similar but not identical
if 0.85 < ratio < 1.0:
print(f'SUSPICIOUS: {name} similar to {legit} ({ratio:.2f})')
"
done < my_packages.txt
Result: 3 packages with similarity > 0.88 to popular names. All three turned out to be legitimate — prefixed variants from the same author. But the point is I had never manually audited them and npm audit never flagged a single one.
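If you'd rather keep the whole check inside the Node toolchain, the same idea fits in a few lines with a true edit distance instead of a similarity ratio. A sketch; the 2-edit threshold and the example list are my own choices, and in practice top1000 would come from registry download stats:

```javascript
// Plain Levenshtein edit distance between two package names.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Example popular names; flag anything within 2 edits but not identical
const top1000 = ['lodash', 'express', 'react'];
function suspicious(name, popular = top1000, maxEdits = 2) {
  return popular.filter(p => p !== name && levenshtein(name, p) <= maxEdits);
}

console.log(suspicious('lodahs')); // → [ 'lodash' ]
console.log(suspicious('lodash')); // → [] (exact match is not flagged)
```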
Vector 2 — Lifecycle scripts with unrestricted access
This is the one that made me most uncomfortable. I ran an analysis of every preinstall, install, and postinstall script across my entire dependency tree:
# Find lifecycle scripts across the entire tree
find node_modules -name "package.json" -not -path "node_modules/*/node_modules/*" | \
xargs jq -r 'select(.scripts) |
{
name: .name,
version: .version,
preinstall: .scripts.preinstall,
install: .scripts.install,
postinstall: .scripts.postinstall
} |
select(.preinstall != null or .install != null or .postinstall != null)' \
2>/dev/null | jq -s '.'
47 packages in my tree have lifecycle scripts. Forty-seven. I manually reviewed the first 20 and found:
- 12 legitimate (native binary compilation, type generation)
- 6 that make network calls during installation — fetching configs, opt-out telemetry, license verification
- 2 that write to directories outside node_modules
The 2 that write outside: one is a fonts package that copies files to /usr/local/share/fonts if it has permissions. The other is a CLI tool that creates a config file at ~/.config/. Nothing malicious. But both have the exact mechanism an attacker would use. And npm audit: total silence.
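A first-pass way to catch that mechanism automatically is to scan the lifecycle script strings themselves for path tokens that point outside the tree. A sketch; the token list is my own guess at what's worth flagging, not an exhaustive ruleset, and string matching is a heuristic, not proof of malice:

```javascript
// Flag lifecycle scripts whose command text references locations
// outside node_modules. Heuristic only: it can't see what a script
// does at runtime, just what its command string mentions.
const OUTSIDE_PATH_TOKENS = ['/usr/', '/etc/', '~/', '$HOME', '/var/', '../'];

function flagsOutsideWrites(scripts = {}) {
  const hits = [];
  for (const hook of ['preinstall', 'install', 'postinstall']) {
    const cmd = scripts[hook];
    if (!cmd) continue;
    for (const token of OUTSIDE_PATH_TOKENS) {
      if (cmd.includes(token)) hits.push({ hook, token, cmd });
    }
  }
  return hits;
}

// Example: the fonts-package pattern described above (hypothetical command)
const pkg = { postinstall: 'cp -r fonts/* /usr/local/share/fonts || true' };
console.log(flagsOutsideWrites(pkg));
```

Feed it the `scripts` object of each package.json found by the tree scan above and review whatever it prints.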
Vector 3 — Silent maintainer takeover
This was the most interesting experiment. The maintainer takeover vector — where someone seizes control of an npm account and publishes a new version with a malicious payload — is the hardest to detect because the package signature is legitimate.
I simulated the scenario like this: I picked 5 packages from my tree with fewer than 50 dependents on npm (niche packages, not a lot of scrutiny), then checked their maintainers' activity and historical publish frequency:
# For each package, look at version history and dates
for pkg in "package-a" "package-b" "package-c" "package-d" "package-e"; do
echo "=== $pkg ==="
# Get publish history from registry
curl -s "https://registry.npmjs.org/$pkg" | \
jq -r '.time | to_entries | .[-10:] | .[] | "\(.key) → \(.value)"'
done
I found one package — I won't name it, but I did email the maintainer — with 15 months of inactivity and a new version published three weeks ago. The changelog said "dependency update." The dependencies it added are legitimate. But the pattern (long inactivity + new publish + vague changelog) is exactly the fingerprint of an account takeover.
npm audit on that package: 0 vulnerabilities. Technically correct, because there's no registered CVE. But the risk is real.
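The inactivity-then-sudden-publish pattern is easy to score mechanically once you have the registry's time map, which is the same object the curl loop above prints. A sketch; the 12-month cutoff is my own choice, and the time map below is hypothetical, mimicking the package I found:

```javascript
// Given the registry's `time` object (version -> ISO date), return the
// gap in days between the two most recent publishes. A long gap followed
// by a fresh release is the takeover fingerprint worth reviewing by hand.
function lastPublishGapDays(time) {
  const dates = Object.entries(time)
    .filter(([k]) => k !== 'created' && k !== 'modified')
    .map(([, v]) => new Date(v))
    .sort((a, b) => a - b);
  if (dates.length < 2) return null;
  const [prev, last] = dates.slice(-2);
  return (last - prev) / 86_400_000; // ms per day
}

// Hypothetical time map for illustration
const time = {
  created: '2020-01-10T00:00:00.000Z',
  '1.4.0': '2022-06-01T00:00:00.000Z',
  '1.4.1': '2023-09-01T00:00:00.000Z', // then 15 months of silence...
  '1.5.0': '2024-12-05T00:00:00.000Z', // ...and a sudden release
};
const gap = lastPublishGapDays(time);
if (gap > 365) console.log(`review by hand: ${Math.round(gap)} days of silence`);
```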
Mistakes we all make — and that I made until three months ago
Mistake 1: Confusing "no known vulnerabilities" with "secure"
npm audit looks for CVEs. A well-executed supply chain attack doesn't generate a CVE until after the damage is done. They're different time windows — the attack exists weeks before the advisory.
Mistake 2: Treating lockfiles as security guarantees, not reproducibility guarantees
package-lock.json guarantees you install the same versions. It does not guarantee those versions weren't compromised after you generated the lock. If the npm registry serves a different file for the same version number (which shouldn't happen but has happened before), your lock doesn't save you.
I started using explicit verifiable checksums. The integrity field in the lockfile helps, but you have to actively validate it:
# Verify the lock's integrity pins for all installed packages
# (npm ci already validates every tarball against these SRI hashes;
# this lists them so you can snapshot and diff the pins over time)
npm ci --ignore-scripts # first, without executing lifecycle scripts
node -e "
const lock = require('./package-lock.json');
// Iterate over packages and print each pinned SRI hash
Object.entries(lock.packages || {}).forEach(([pkgPath, pkgData]) => {
  if (!pkgPath || !pkgData.integrity) return;
  // The integrity field uses SRI hashing (sha512)
  console.log(\`✓ \${pkgPath}: \${pkgData.integrity.slice(0, 20)}...\`);
});
"
Mistake 3: npm install in CI with access to secrets
This one got hammered home when I was working on my autonomous agents on Railway — when a process has access to environment variables with credentials, any code running inside that process can exfiltrate them. Running npm install (with lifecycle scripts) in the same step where you inject DATABASE_URL or RAILWAY_TOKEN gives every postinstall access to your secrets.
The separation I implemented:
# .github/workflows/deploy.yml — separate install from deploy
jobs:
  install-deps:
    runs-on: ubuntu-latest
    # No access to production secrets
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies WITHOUT lifecycle scripts
        run: npm ci --ignore-scripts
      - name: Run only known build scripts
        run: npm run build # only what I defined
      # Job workspaces are isolated: if the deploy step needs the build
      # output, hand it over with actions/upload-artifact
  deploy:
    needs: install-deps
    runs-on: ubuntu-latest
    # Secrets live here — but npm install is already done
    environment: production
    steps:
      - name: Deploy to Railway
        env:
          RAILWAY_TOKEN: ${{ secrets.RAILWAY_TOKEN }}
        run: railway up
What I changed in my stack after this
Three concrete changes I shipped to production:
1. --ignore-scripts by default in CI
# Instead of npm ci
npm ci --ignore-scripts
# And an explicit allowlist for legitimate scripts
npm run build # only my own scripts
2. Socket.dev in the pipeline
Socket.dev does exactly what npm audit doesn't: it analyzes behavior, not just CVEs. It has a GitHub Actions integration. Since I added it, it's blocked 2 packages I installed carelessly — one with an undocumented network call in postinstall, another with process.env access at runtime that had nothing to do with what the package was supposed to do.
3. Manual audit of lifecycle scripts before merge
I automated detection in the PR:
#!/bin/bash
# scripts/audit-lifecycle.sh — runs in pre-commit
# Detect packages newly flagged with install scripts in the lock.
# Note: the scripts themselves aren't stored in package-lock.json;
# lockfile v2+ marks them with "hasInstallScript": true
if git diff --cached -- package-lock.json | grep -E '^\+.*"hasInstallScript": true'; then
  echo "⚠️ Lifecycle script detected in dependency change — manual review required"
  exit 1
fi
echo "✓ No new lifecycle scripts"
Not perfect. But it's a first filter.
FAQ — Supply chain attacks in npm and Node.js
Is npm audit completely useless?
No, but its scope is much narrower than it looks. npm audit is good for known vulnerabilities with an assigned CVE. It works for that. The problem is that most active supply chain attacks don't have a CVE at the time of the attack — the CVE shows up later, once someone discovers the problem. For proactive protection you need additional tools like Socket.dev or Snyk with behavioral analysis.
How easy is it to pull off a typosquatting attack on npm?
Technically trivial — creating an npm account and publishing a package with a name similar to a popular one takes minutes. npm has automated controls for names that are nearly identical to heavily downloaded packages, but the space of variants is enormous and the controls have gaps. The most effective vector today isn't direct typosquatting — it's injection into the transitive tree: compromise a third- or fourth-level package that nobody audits.
Does --ignore-scripts break anything in production?
Depends on the project. The cases where it breaks: packages with native binaries that need to compile (node-sass, bcrypt, canvas), packages that generate types in postinstall, and some CLI tools. The fix is to maintain an explicit allowlist of scripts you know are legitimate and run them manually afterward. For most web projects, --ignore-scripts + manual build covers 95% of cases without friction.
Does package-lock.json protect against maintainer takeover?
Partially. The lockfile pins the version and the integrity hash (the integrity field with SHA-512). If the registry serves a different file for the same version, the hash won't match and installation fails. But if the attacker published a new version (e.g., malicious 2.1.4 instead of compromising 2.1.3), and you run npm update or accept the change in the lock, you're already exposed. The lock doesn't protect you from updates you approved yourself.
How many real projects have lifecycle scripts in their dependencies?
In my experience across four Node production projects: between 5% and 8% of packages in the transitive tree have some lifecycle script. Most are legitimate (binary compilation). But in an 800-package tree that's between 40 and 65 packages with the ability to execute arbitrary code on the developer's machine or in CI with access to secrets.
Does running CI in Docker help mitigate this?
Quite a bit. Running the build in a container without access to production secrets significantly reduces the exfiltration surface. I covered this in detail when I documented my Docker Compose stack in production over 30 days — context separation is one of those benefits that never shows up in performance benchmarks but is worth gold in security terms. It doesn't eliminate supply chain attack risk, but it contains it: if a malicious postinstall steals environment variables, in a properly configured CI those variables shouldn't be there yet.
What I learned and what still keeps me up at night
The experiment confirmed what I suspected: npm audit is a compliance tool, not a security tool. It checks a box. You tell the auditor "we run npm audit in CI" and technically that's true. But the threat model it solves is the easiest one that exists.
What does make sense to me: the combination of --ignore-scripts in CI, Socket.dev for behavioral analysis, and manual review of the lifecycle script diff in PRs covers the three vectors I found. It's not perfect — nothing is — but the attack surface shrinks in a concrete and measurable way.
What still doesn't sit right with me: the abandoned package maintenance problem is structural. There's no clear signal in the npm ecosystem to distinguish "this package is stable and doesn't need updates" from "this package is dead and nobody will notice if it gets compromised." Commit activity isn't enough. Download counts aren't either. It's an unsolved problem and it scares me more than any CVE.
Every time I run npm install without --ignore-scripts, I get the same uneasy feeling I had when I inspected what Chrome was installing without asking. The difference is that with Chrome I couldn't do much about it. Here I have levers. And that's exactly what I'm going to keep pulling.
If you're running Node in production and you've never audited the lifecycle scripts in your transitive dependencies, do it this week. Not because they're likely to be compromised — they probably aren't. But because you don't know if they are, and that uncertainty is the real problem.
Found anything weird in your own project's dependency tree? Tell me — I'm genuinely interested in building a map of common patterns across real Node stacks.
This article was originally published on juanchi.dev