Your AI coding assistant just suggested an npm package that doesn't exist. An attacker has already registered that name and is serving malware from it.
## The problem
When AI models generate code, they sometimes reference packages that were never published — names that sound plausible but aren't in any registry. This isn't a hypothetical edge case. Aruneesh Salhotra documented at OWASP AppSec 2024 that this pattern is now a recognized attack vector: adversaries monitor AI-generated code samples, identify hallucinated package names, register those names in npm or PyPI, and publish malicious packages under those exact identifiers.
## How it works
A developer asks their AI assistant to implement a feature. The assistant suggests running `npm install ai-crypto-helper` (or any similarly plausible name). The developer runs it without checking. The package resolves, because an attacker already registered it, and executes malicious code during installation. Standard SCA tools report it clean: the package genuinely exists in the registry, and they have no way of knowing the developer was tricked into choosing it. Salhotra's framing is direct: the developer who commits AI-generated code owns its security. The AI won't warn you that it invented a package name.
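Install-time execution is what makes this dangerous: npm runs a package's lifecycle scripts automatically during install. A hypothetical malicious package needs nothing more than a `postinstall` hook in its manifest (the package name and script file below are invented for illustration):

```json
{
  "name": "ai-crypto-helper",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node steal-env.js"
  }
}
```

With default settings, `npm install` executes that script the moment the package lands on disk. Passing `--ignore-scripts` disables lifecycle hooks and is a reasonable default when pulling in an unvetted dependency.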
## The fix
Before installing any package an AI assistant recommends:
- Search the registry manually and verify the publisher, download count, and repository URL are consistent with a real, maintained project.
- Add a mandatory annotation policy to your AI-assisted development workflow: every AI-suggested dependency requires a source citation before it gets committed. Two sentences in a PR description ("AI suggested this, I verified it at X with Y stars and Z downloads") forces the review step.
- Run `npm audit` (or your ecosystem's equivalent) after every install, but understand that this catches known-bad packages, not freshly registered attacker-controlled ones. Verification before install is the primary control.
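The manual checks in the first bullet can be partially automated. Below is a minimal sketch in Python, assuming you have already fetched the package's metadata document from the npm registry (for example from `https://registry.npmjs.org/<name>`) into a dict. The field names follow npm's registry document format; the thresholds are illustrative, not official guidance:

```python
from datetime import datetime, timezone

def suspicious_signals(meta: dict, weekly_downloads: int) -> list[str]:
    """Return human-readable red flags for an AI-suggested package,
    based on its npm registry metadata document."""
    flags = []
    # Freshly registered names are the signature of this attack.
    created = meta.get("time", {}).get("created")
    if created:
        created_at = datetime.fromisoformat(created.replace("Z", "+00:00"))
        age_days = (datetime.now(timezone.utc) - created_at).days
        if age_days < 30:
            flags.append(f"registered only {age_days} days ago")
    # A real, maintained project links a source repository.
    if not meta.get("repository"):
        flags.append("no linked source repository")
    # One published version suggests a drive-by registration.
    if len(meta.get("versions", {})) <= 1:
        flags.append("single published version")
    # Near-zero adoption is another warning sign (threshold is arbitrary).
    if weekly_downloads < 100:
        flags.append(f"only {weekly_downloads} weekly downloads")
    return flags
```

An empty result is not proof of safety, only the absence of the cheapest red flags; treat any non-empty result as a reason to stop and verify the package by hand before installing.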
McKinsey projects $1 trillion in economic impact from AI-assisted coding. The attack surface scales with adoption.
Full deep-dive on AI code generation risks → https://thecyberarchive.com/talks/ai-code-generation-risks-mitigation-controls/