Your AI coding assistant just suggested a package that doesn't exist. An attacker is about to register it with malware.
Researchers analyzed 576,000 AI-generated code samples and found something terrifying:
→ 205,474 unique "phantom packages" that don't exist on PyPI or npm
→ 43% of hallucinated names repeat verbatim across identical prompts
→ Commercial AI (GPT-4, Claude, Copilot): 5.2% hallucination rate
→ Open-source LLMs: 21.7% hallucination rate
This isn't typosquatting. It's slopsquatting: exploiting systematic, repeatable AI behavior.
The attack is trivial:
1. Query an AI assistant with common coding prompts
2. Collect the hallucinated package names
3. Register them on PyPI/npm with malicious code
4. Wait for developers to pip install your malware
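Nothing in that chain requires breaking anything: registries serve whoever claims a name first. The cheapest counter-move is to check a suggested name against the registry before you install it. Here's a minimal sketch of that check (standard library only, using PyPI's public JSON endpoint; the package name in the demo is hypothetical):

```python
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the name is registered on PyPI (HTTP 200 on its JSON endpoint)."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            # Note: a 200 only means the name is registered,
            # not that whoever registered it is trustworthy.
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # never registered -- prime slopsquatting territory
            return False
        raise  # anything else (rate limit, outage): surface it, don't guess

# "fastjsonparser" is a hypothetical AI-suggested name, not a known package
if not exists_on_pypi("fastjsonparser"):
    print("Not on PyPI -- treat the AI's suggestion as a likely hallucination.")
```

That catches pure phantoms; it says nothing about names an attacker has already claimed.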
Here's what makes this surreal:
Despite six months of security research, 205K identified targets, and trivial exploitation... zero confirmed attacks have been observed in the wild.
The window for defense is open. But it's closing fast.
I wrote a deep-dive on:
→ Why AI reliably hallucinates the same phantom packages
→ Which security tools detect this (spoiler: almost none)
→ A 15-minute scanner you can deploy today (a flavor of that check is sketched below)
→ Why "zero attacks" won't last much longer
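To give a flavor of the scanner idea without spoiling the write-up: this is not the tool from the article, just a minimal sketch assuming a plain requirements.txt and PyPI's public JSON endpoint. It flags any dependency name the registry has never heard of.

```python
import re
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """True if PyPI serves metadata for this name (HTTP 200), False on 404."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{package}/json", timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

def scan_requirements(path: str = "requirements.txt") -> list[str]:
    """Return requirement names that are not registered on PyPI at all."""
    missing = []
    with open(path) as fh:
        for line in fh:
            line = line.split("#")[0].strip()         # drop comments and whitespace
            if not line or line.startswith("-"):      # skip blanks and pip options (-r, -e, --hash, ...)
                continue
            # strip extras, version specifiers, URLs, and markers to get the bare name
            name = re.split(r"[<>=!~\[;@ ]", line, maxsplit=1)[0]
            if name and not exists_on_pypi(name):
                missing.append(name)
    return missing

if __name__ == "__main__":
    for name in scan_requirements():
        print(f"[!] {name} is not on PyPI -- likely hallucinated, and free for an attacker to claim")
```

The same trick works against npm's registry endpoint for package.json dependencies.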
If you're using AI coding assistants (and you probably are), this affects you.
Read it before the first confirmed attack makes headlines.
Disclaimer: Personal analysis based on my cybersecurity background. Not legal advice. Views are my own.
#CyberSecurity #AppSec #AI #DevSecOps #SupplyChainSecurity #SoftwareDevelopment #InfoSec #DevOps