Okay, story time. Last Tuesday I'm scrolling Twitter (sorry, "X", whatever) and I see the fifth take of the week along the lines of:
"AI is destroying software security. The Shai-Hulud worm proves AI is dangerous."
And I'm sitting there like… my brother in Christ, the worm is literally called Shai-Hulud. It's named after the giant sandworm in Dune. A worm. That eats things. Through a desert. That is exactly the level of subtlety we're operating at, and you're telling me ChatGPT did this?
Look. I've spent the last few weeks reading every Socket, Aikido, Wiz, Snyk, Unit 42, and Microsoft writeup on Shai-Hulud 1.0, Shai-Hulud 2.0, Mini Shai-Hulud, Sha1-Hulud: The Second Coming (yes that's a real name), SANDWORM_MODE, PhantomRaven, and the s1ngularity/Nx mess that started it all. I'm a full-stack dev. I ship Flutter apps, I run my own VPS, I publish to npm occasionally, and I use AI tools every single day. So let me say this with my whole chest:
The AI did not do this. We did this. We have been doing this. We will keep doing this. The AI just made it slightly easier to do this faster.
Let me actually walk through what happened so we can stop being weird about it.
Part 1: A quick recap of the dumpster fire so far
The npm ecosystem has been getting absolutely cooked since August 2025. Here's the speedrun:
August 26, 2025 — The s1ngularity / Nx attack
Attackers exploited a GitHub Actions injection vulnerability in the Nx repo, stole their npm publishing token, and pushed eight malicious versions of nx and related packages to npm over four hours. The malware ran a postinstall script called telemetry.js (cute) that scanned your filesystem for .env files, SSH keys, crypto wallets, and npm tokens.
But here's the genuinely interesting part — and the part that everyone should have been screaming about: it was the first attack to weaponize local AI CLIs. The malware checked if you had Claude Code, Gemini CLI, or Amazon Q installed, and if you did, it ran them with the safety pins pulled out:
const cliChecks = {
  claude: { cmd: 'claude', args: ['--dangerously-skip-permissions', '-p', PROMPT] },
  gemini: { cmd: 'gemini', args: ['--yolo', '-p', PROMPT] },
  q: { cmd: 'q', args: ['chat', '--trust-all-tools', '--no-interactive', PROMPT] }
};
--dangerously-skip-permissions. --yolo. --trust-all-tools. These are flags that exist for a reason — and that reason is "you, the developer, are taking responsibility for whatever happens next." The attackers used those flags because we normalized using those flags. The malware then exfiltrated everything to a public GitHub repo called s1ngularity-repository containing a results.b64 file with double-base64-encoded secrets, appended shutdown commands to your .bashrc and .zshrc (so your terminal would shut down your machine on launch, lol), and called it a day.
Final scoreboard: 2,349 secrets stolen from 1,079 systems, including GitHub tokens, AWS keys, OpenAI keys, Anthropic keys, the works. 85% of victims were on macOS. About half had at least one AI CLI installed.
September 8, 2025 — Qix gets phished, chalk and debug fall
Josh Junon, known on npm as qix, maintains chalk, debug, strip-ansi, ansi-regex, ansi-styles, and like 15 other packages you've definitely installed without knowing it. Combined weekly downloads of his stuff: over 2.6 billion. With a "b."
He got a phishing email from support@npmjs.help (note: not npmjs.com), a domain registered on Porkbun three days earlier. The email said "update your 2FA." He clicked. He typed his password. He typed his TOTP code. Attackers took over his account within minutes and pushed malicious versions of 18 packages containing a crypto-wallet drainer that hijacks Ethereum and Solana transactions in browsers.
Live for about two hours. Two billion+ weekly downloads. Do the math on how many CI builds, Vercel deploys, and npm installs probably grabbed those versions.
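Let's actually do that math. A back-of-envelope sketch, assuming downloads spread evenly across the week (they don't, so treat this as a rough estimate, not a measurement):

```javascript
// Rough exposure estimate for a two-hour compromise window.
const weeklyDownloads = 2.6e9;                      // combined weekly downloads, per the incident reports
const perMinute = weeklyDownloads / (7 * 24 * 60);  // ~258k installs per minute
const exposedInstalls = perMinute * 120;            // two hours of exposure
console.log(Math.round(exposedInstalls));           // on the order of 31 million installs
```

Tens of millions of installs in a two-hour window, before anyone even noticed.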
Josh, to his credit, was incredibly transparent about it. He wrote a long postmortem essentially saying "I clicked the link, I typed my code, this is on me." That's the integrity move. Compare and contrast with everyone tweeting "AI BAD."
September 15, 2025 — Shai-Hulud Mark I
This was the big one. The first self-replicating worm in npm history. Started with @ctrl/tinycolor and spread to over 500 packages, including a bunch owned by CrowdStrike. Yes. The security company.
The mechanism was beautiful in a horrible way:
- You install a compromised package.
- Its postinstall script downloads TruffleHog (a legit secret scanner, repurposed for evil).
- TruffleHog finds your GitHub tokens, npm tokens, AWS keys, GCP keys.
- The worm uses your npm token to enumerate other packages you maintain.
- It republishes those packages with the same malicious code.
- The worm uses your GitHub token to dump all your secrets into a new public repo called Shai-Hulud on your account.
- It also flips your private org repos to public, renaming them with a -migration suffix.
It's a worm. It propagates. Without a command-and-control server. Just by reading its own code and shoving itself into the next package down the line. Pure mechanical horror.
November 24, 2025 — Shai-Hulud 2.0: The Second Coming
(Misspelled as "Sha1-Hulud" in the GitHub repo descriptions because of course it was.)
This one was worse. Compromised 796 packages, ~20 million weekly downloads, including stuff from Zapier, PostHog, Postman, and AsyncAPI. Key changes from v1:
- Preinstall instead of postinstall — runs even earlier, before any tests or security checks.
- Installs Bun specifically to evade Node.js monitoring tools. Yes, the JavaScript runtime is now an attack vehicle.
- Cross-victim exfiltration — if it can't dump your secrets to your own GitHub, it'll dump them to a different victim's GitHub. Wild.
- Destructive fallback — if exfiltration fails, it tries to wipe your home directory. Just nukes ~. Goodbye dotfiles, goodbye SSH keys, goodbye that side project you forgot to push.
February 20, 2026 — SANDWORM_MODE
Socket's research team found a Shai-Hulud-style worm that, in addition to all the previous greatest hits, injects prompt-injection payloads into AI coding assistants. It poisons your .claude/ and .cursor/ config so your AI assistant starts working for the attackers while still appearing to work for you. So now the worm doesn't just steal your secrets — it makes your AI pair programmer subtly leak future secrets too.
May 11, 2026 — Mini Shai-Hulud
Microsoft Security Research caught a fresh wave a few days ago: 170+ npm packages, 2 PyPI packages, 404 malicious versions, spanning both ecosystems in one coordinated campaign for the first time. Same playbook, expanded reach. Bun runtime, preinstall, GitHub exfil, the works.
We're up to like a half-dozen Shai-Hulud variants in eight months and there's no sign of it stopping.
Part 2: "But surely the AI helped do all this?"
I know what you're thinking. "Pranta, didn't you just describe the malware using Claude and Gemini to steal stuff? Isn't that the AI's fault?"
Let me unpack this carefully, because there are actually three different "AI is to blame" arguments floating around, and each of them is wrong in a slightly different way.
Argument 1: "The malware was AI-generated, therefore AI is the problem."
Palo Alto's Unit 42 said they were "moderately confident" the Shai-Hulud bash script was AI-generated because it had comments and emojis in it. Cool. You know what else has comments and emojis? Every codebase I have ever worked on. This argument basically says "if your malware is well-organized, AI did it." This is a vibes-based threat model.
Malware authors have been writing malware for forty years. AI didn't invent the post-install script. AI didn't invent worms. The 1988 Morris worm was self-replicating, written in C, and predates LLMs by approximately forever. The only thing AI changed is that the README for the malware is slightly better formatted.
Argument 2: "The malware abused AI CLIs, therefore AI CLIs are the problem."
This is the s1ngularity / SANDWORM_MODE argument. And it has more meat to it than the first one, but the conclusion is still wrong.
Yes, the s1ngularity malware spawned claude --dangerously-skip-permissions and gemini --yolo on victim machines. But ask yourself: why did that work?
It worked because the victim already had those CLIs installed and authenticated. The malware didn't pull a Claude API key out of thin air. It used yours. The malware didn't bypass Claude's permission system — it used the flag that you, the developer, agreed exists for cases when you're taking full responsibility for what happens next.
This isn't "AI is dangerous." This is "you installed a tool that can run arbitrary commands on your machine, then you let it run as --yolo, and then you also ran arbitrary code from npm during postinstall." Two loaded guns in a small room. The fact that one of them was branded with an Anthropic logo doesn't make it the more dangerous gun.
Fun fact from Wiz's analysis: when the s1ngularity malware actually tried to use the AI tools on real victims, Claude rejected ~25% of the malicious prompts thanks to safety guardrails. Gemini was foiled about 25% of the time by its default workspace directory restrictions. The AI tools were the least cooperative link in the chain. The most cooperative link was npm running random shell scripts on install with zero sandboxing.
Argument 3: "Slopsquatting proves AI is creating new attack surfaces."
Okay this one is real. Let me explain it because it's actually interesting.
Slopsquatting is when an LLM hallucinates a package name that doesn't exist (because LLMs hallucinate, this happens constantly), an attacker registers that package name on npm or PyPI with malware in it, and then the next person who asks the same LLM the same question gets pointed to the malicious package.
A USENIX 2025 paper tested 16 models on 576,000 code samples and found ~20% of AI-recommended packages don't exist. Worse, 58% of hallucinated names repeat across multiple prompts. Which means attackers can prompt-engineer their way to a list of high-value fake names to register and squat.
Real cases:
- huggingface-cli on PyPI was a hallucination. The real install is pip install -U "huggingface_hub[cli]". A researcher registered the hallucinated name and got 30,000+ real downloads in three months from people whose AI told them to install it.
- react-codeshift on npm — also a hallucination, a mashup of jscodeshift and react-codemod. Aikido's Charlie Eriksen registered it in January 2026 to study the attack, and it ended up referenced in 237 GitHub repositories through forked AI agent skills before anyone noticed.
- unused-imports on npm (the real one is eslint-plugin-unused-imports) — this one was actually malicious, and as of February it was still pulling 233 downloads a week.
- And then PhantomRaven: 126 npm packages, 86,000 installs, using slopsquatted names with invisible HTTP URL dependencies that npm's scanners don't even follow. (Yes, npm lets you declare a dependency as a remote URL. Yes, this is as bad as it sounds.)
So slopsquatting is real. Is it the AI's fault?
Half-yes. The hallucination is the AI's fault. But the chain that lets a hallucination become a compromise is:
- AI hallucinates a name.
- Attacker registers it (this is not the AI's fault, this is a human criminal).
- npm allows anyone to register any name with zero verification (not the AI's fault, this is a registry design choice).
- Developer runs npm install <name> without reading anything (this is — uh — the developer).
- The package's postinstall script runs arbitrary code on install (not the AI's fault; this is npm letting maintainers shell out to your machine the moment you install).
- The developer's machine has no isolation, no sandbox, no nothing.
If you remove step 1, you still have typosquatting (which has existed since 2017). If you remove steps 3–6, slopsquatting is harmless even with full AI hallucinations. The AI is one input into a broken pipeline.
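One cheap defense at the "developer runs npm install" step: flag candidate names that sit within an edit or two of a well-known package, which is exactly the pattern behind react-codeshift and unused-imports. Here's a minimal sketch in plain JavaScript — the POPULAR list is illustrative only; a real tool would pull the registry's top few thousand names:

```javascript
// Classic dynamic-programming edit distance (Levenshtein).
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Illustrative allowlist; a real check would use registry popularity data.
const POPULAR = ['chalk', 'debug', 'strip-ansi', 'jscodeshift', 'eslint-plugin-unused-imports'];

// Return the popular packages a candidate name is suspiciously close to.
function suspicious(candidate) {
  return POPULAR.filter(
    (p) => p !== candidate && (editDistance(candidate, p) <= 2 || p.includes(candidate))
  );
}

console.log(suspicious('chalkk'));         // one edit away from 'chalk'
console.log(suspicious('unused-imports')); // substring of the real plugin name
```

It's a heuristic, not a scanner — it won't catch every mashup — but it costs nothing and catches the laziest squats.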
Part 3: What actually went wrong
Let me list the real causes of every single one of these attacks, in order of how much I want to scream about them:
1. postinstall scripts are an unhinged feature that should not exist
When you run npm install, npm will gleefully execute arbitrary shell scripts from random strangers on the internet. This is the default. This has been the default since 2010. We have known it was a problem since 2018 when event-stream got compromised. We still have it.
Every. Single. One. Of these attacks works because of preinstall or postinstall scripts. Shai-Hulud? Postinstall (and later preinstall). Nx? Postinstall. PhantomRaven? Postinstall. Chalk/debug? Didn't even need postinstall because the malicious code was in the actual library code, which is somehow worse.
You can disable these with npm install --ignore-scripts. You can put ignore-scripts=true in your .npmrc. Almost nobody does. I didn't, until I started writing this post.
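For reference, flipping that default takes one line each. These are real npm config keys; the package name in the last command is obviously a placeholder:

```shell
# Per project: refuse lifecycle scripts on install
echo "ignore-scripts=true" >> .npmrc

# Or globally, for your whole user account
npm config set ignore-scripts true

# When a package genuinely needs its install script (e.g. a native build),
# opt back in explicitly for that one install:
npm install --ignore-scripts=false some-trusted-package
```

If a dependency ships a native addon and breaks under this, npm rebuild for that one package will run its build scripts on demand.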
2. We don't actually read our dependencies
Quick poll: when was the last time you looked at the source code of a transitive dependency before running npm install? Yeah, me neither.
The average modern React app has like 2,000 transitive dependencies. The chalk attack hit because chalk is a dependency of a dependency of a dependency of basically everything. When qix got phished, the blast radius wasn't 18 packages — it was every project in the world that ever pulled in something that pulled in something that pulled in ansi-styles.
You can't read 2,000 dependencies. Nobody can. So we just… don't. We trust that someone else is doing it. Spoiler: nobody is doing it.
3. Maintainers don't have phishing-resistant 2FA
Josh Junon had 2FA. TOTP-based. The attacker phished his TOTP code in real time. WebAuthn / hardware keys would have stopped this cold because there's no code to steal — the key is bound to the actual domain. npm now supports WebAuthn. Most maintainers haven't switched. You should switch.
4. We use floating versions and never pin
"chalk": "^5.6.0" says "give me whatever 5.x.y you've got, hot off the registry, I trust the universe." This is the default behavior when you npm install chalk. So when chalk 5.6.1 got published with a wallet drainer two hours after the legit 5.6.0, anyone running npm install in those two hours got the drainer.
The fix is npm ci in CI/CD (uses your lockfile, exactly) and cooldown periods (don't auto-pull packages newer than ~14 days, which is what Elastic now does). Almost nobody does this either.
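Concretely, the pin-and-lockfile half is three commands — chalk here is just the example from above, and double-check the Renovate option name against their docs:

```shell
npm install --save-exact chalk    # writes "chalk": "5.6.0" instead of "^5.6.0"
echo "save-exact=true" >> .npmrc  # make exact pins the default for this project
npm ci                            # CI installs exactly what package-lock.json says, or fails
```

For the cooldown half, Renovate's minimumReleaseAge setting (e.g. "14 days") holds back update PRs until a release has been public long enough for a compromise to be noticed and yanked.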
5. The npm registry has no concept of trust
Anyone can register any package name. Anyone can publish anything. There's no review, no signing requirement (until very recently and it's opt-in), no provenance check. The only signal you have that a package is legit is "it's popular" and "the name looks right." When the AI hallucinates a name, both of those checks fail silently.
6. We give CI/CD environments god-mode tokens
Why does your GitHub Actions runner have a npm publish token with write access to all your org's packages? Why does it have an AWS key with s3:* and not just s3:PutObject on one bucket? Why does your .env in development have your production database password?
Because we're lazy. The Shai-Hulud worm did so much damage so fast because every machine it landed on had a Pandora's box of credentials sitting in environment variables and config files. Least-privilege isn't a buzzword, it's the only thing that limits blast radius.
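To make least-privilege concrete, here's the shape of the fix for the S3 example above — an illustrative IAM policy (the bucket name is hypothetical) allowing exactly one action on exactly one resource instead of s3:*:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteToOneBucketOnly",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-app-uploads/*"
    }
  ]
}
```

If the worm lands on a runner holding this credential, the blast radius is one bucket's uploads, not your entire cloud account.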
7. Nobody is using npm provenance, trusted publishers, or signed commits
These all exist. They've existed for over a year. Adoption is single-digit percent in most ecosystems. We collectively shrugged.
Part 4: So where does AI actually fit in?
Let me be fair, because I'm not in the "AI can do no wrong" camp either. Here's what AI is genuinely making worse:
- Hallucinated package names are a real, novel attack surface. Slopsquatting wouldn't exist without LLMs.
- AI agents that auto-install dependencies are dangerous because they shrink the verification window from "human reads it" to "literally nothing reads it." cursor run, autonomous Claude Code workflows, agentic dev tools — they're a lethal trifecta when combined with postinstall.
- AI coding assistants are valuable targets for prompt injection (see SANDWORM_MODE poisoning .claude/ configs). That's a real new risk.
- --yolo flags exist, and that's a vibe. We collectively decided convenience was worth more than safety boundaries. That's on the tooling builders and on us for using them.
But here's the thing — every single one of those AI-specific risks rides on top of an existing broken substrate. Slopsquatting doesn't matter if postinstall can't run. Prompt injection of AI configs doesn't matter if you don't autorun untrusted code. --dangerously-skip-permissions doesn't matter if your AI CLI doesn't have credentials to all your services.
The AI is the new attack delivery. The vulnerabilities being delivered are the same ones we've been ignoring for fifteen years.
Part 5: What I'm actually going to do about it
I publish to npm. I've shipped Flutter apps that pull from pub.dev (same problems, different ecosystem). I run my own VPS. So this stuff is personal. Here's what I'm changing this weekend, and you should too:
- Turn on phishing-resistant 2FA everywhere that supports it. WebAuthn / hardware keys for GitHub, npm, AWS, GCP. TOTP is not enough anymore.
- Put ignore-scripts=true in my .npmrc for any project where I don't desperately need install scripts. When I do need them, I enable them per-install.
- Use npm ci in every CI pipeline. No more npm install on the server. The lockfile is law.
- Set a cooldown period — don't auto-pull packages newer than 14 days. Tools like Renovate support this natively.
- Audit my AI agent permissions. No more --dangerously-skip-permissions unless I'm in a sandboxed VM I'm willing to nuke.
- Stop using floating version ranges for anything I actually care about. Pin, don't caret.
- Verify package names before installing them, especially when an AI suggests one. If I haven't heard of it, I check the registry, the GitHub repo, the maintainer history, the download count. It takes 30 seconds.
- Rotate credentials I've had sitting around for "too long". If I haven't rotated my GitHub PAT in a year, it's probably already in someone's results.b64.
- Scope my tokens. npm tokens scoped to specific packages. AWS keys scoped to specific actions on specific resources. Stop using god-tokens.
- Use Trusted Publishing if I'm publishing to npm. No more long-lived tokens. (Elastic and Nx both moved to this after getting burned; learn from their pain.)
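For that last item, the GitHub Actions flow looks roughly like this. It assumes npm CLI 9.5+ and that you've already configured the trusted publisher on npmjs.com, so treat it as a sketch, not copy-paste:

```shell
# The workflow job needs OIDC permissions:
#   permissions:
#     id-token: write
#
# Then publish with a signed provenance attestation that ties the package
# to the exact commit and workflow run that built it:
npm publish --provenance --access public
```

With a trusted publisher configured, there's no long-lived npm token sitting around to phish or exfiltrate in the first place.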
TL;DR
The Shai-Hulud worms, the chalk hijack, the Nx attack — these all happened because:
- npm executes arbitrary code on install
- Maintainers got phished
- Tokens were over-privileged
- Nobody pins versions
- Nobody audits dependencies
- Nobody uses hardware 2FA
- The registry has no trust model
The AI's contribution to all of this was: it generated some malware comments with emojis, it occasionally hallucinated a package name that an attacker pre-registered, and it offered --yolo flags that developers eagerly enabled.
If you removed AI entirely from this story, we'd still be cooked. The npm ecosystem has been a security disaster since 2018. The AI just turned the dial from "disaster" to "disaster with better grammar."
So please, the next time you see "AI ruined supply chain security" trending — close the tab, open your package.json, and look at how many packages with postinstall scripts you have. That's where the call is coming from.
If you got value out of this, follow me on Dev.to and pranta.dev. I write about full-stack stuff, Flutter, server ops, and apparently now angry security posts. Stay safe out there. Pin your dependencies.



