This article was originally published on LucidShark Blog.
Your AI coding agent just invented a package that doesn't exist. It happens dozens of times a day in codebases everywhere. The agent confidently writes `import { parseJWT } from 'jwt-lite-parser'`, you run `npm install`, and one of two things happens: the install fails with a module-not-found error, or it succeeds because someone registered that exact package name yesterday.
The second outcome is the dangerous one.
AI model hallucinations in dependency names are not a minor inconvenience. They are an active attack surface. Threat actors monitor AI-generated code repositories and developer forums, extract hallucinated package names, and register them on npm, PyPI, and RubyGems before you notice. They fill those packages with credential stealers, backdoors, or supply chain worms. By the time your developer runs `npm install`, they are already compromised.
This is not a theoretical risk. Socket Security and Checkmarx have documented dozens of cases in 2025 and 2026 where attackers specifically targeted AI model hallucination patterns, registering the exact phantom names generated by popular coding assistants. The Bitwarden CLI worm this week used a related vector: a preinstall hook in a legitimate package. The hallucinated-dependency attack skips that step entirely. There is no supply chain to poison when you can simply register the name the model made up.
**Active threat in 2026:** Security researchers have confirmed that attackers actively monitor GitHub Copilot, Claude Code, and Cursor output for hallucinated package names and register them within hours. The attack is called "AI package hallucination hijacking" and it requires no exploitation skill: just an npm account and fast monitoring.
## How AI Models Hallucinate Package Names
Language models are trained on code, documentation, and Stack Overflow posts. They absorb naming conventions, API patterns, and package ecosystems. When generating code, they predict plausible package names based on patterns, not registry lookups. A model trained on thousands of repositories that use JWT parsing will confidently generate import statements for packages like jwt-parser, jwt-lite, fast-jwt-parse, or express-jwt-middleware. Some of these exist. Some do not. The model has no way to know the difference at generation time.
The problem compounds with niche domains. If you ask an AI agent to add Kubernetes operator support, database migration utilities, or cloud provider SDKs, the hallucination rate increases sharply. The model's training data is thinner, naming conventions are less standardized, and the space of plausible-sounding names is larger.
Here is a real pattern researchers have documented: an AI agent generates a utility function that imports from `@aws-utils/s3-presign-helper`. The package doesn't exist. The developer commits the code, the lockfile doesn't include it yet, and the CI pipeline fails on install. The developer types the package name into Google, finds nothing, and manually substitutes the correct AWS SDK call. Problem solved, they think.
What they don't see: three days earlier, a different developer in a different company hit the same hallucination. They opened a GitHub issue about it. An attacker read the issue, registered `@aws-utils/s3-presign-helper` on npm with a readme that looks plausible, and added a postinstall hook that exfiltrates environment variables. Now when your CI pipeline installs it, your AWS credentials leave your environment silently.
## The Detection Gap: Why Your Current Tooling Misses This
Standard dependency auditing tools like `npm audit`, `pip-audit`, and Dependabot are built around a different threat model: known vulnerabilities in existing, legitimate packages. They compare your dependency tree against vulnerability databases. A freshly registered malicious package has no CVEs yet. It's too new. These tools will not flag it.
SAST tools don't help here either. They analyze code patterns, not registry state. A hallucinated import looks identical to a legitimate one at the AST level.
The detection gap sits specifically between code generation and package installation. The hallucinated name exists as a string literal in your source code. Until someone runs `npm install`, no tool in the standard pipeline has a reason to validate whether the name is legitimate.
```javascript
// This looks perfectly fine to SAST, linters, and code review
import { parseJWT } from 'jwt-lite-parser'; // Does this package exist?
import { hashPassword } from 'bcrypt-fast'; // Is it what it claims to be?
import { encrypt } from '@crypto-utils/aes'; // Who published it?
```
By the time `npm install` resolves these names, you've already accepted the package into your environment. The postinstall hook runs with the same permissions as your build process.
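The attacker's side of this requires almost nothing. A hijacked package needs only a `scripts` entry in its `package.json`. Everything below is invented for illustration, with the exfiltration defanged to a local print:

```json
{
  "name": "@aws-utils/s3-presign-helper",
  "version": "1.0.2",
  "description": "Helpers for presigning S3 URLs",
  "scripts": {
    "postinstall": "node -e \"console.log(JSON.stringify(process.env))\""
  }
}
```

In a real attack, the `node -e` one-liner sends `process.env` to an attacker-controlled host instead of printing it. npm runs the hook automatically at install time unless lifecycle scripts are disabled.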
## Concrete Checks to Close the Gap
The remediation lives at the intersection of SCA tooling and pre-install validation. Here is what each layer needs to do.
### 1. Validate dependency names before they enter your lockfile
Before running `npm install` on new imports in AI-generated code, verify the package exists and has a credible history:
```bash
#!/bin/bash
# validate-deps.sh - run before npm install on AI-generated code

check_package() {
  local pkg=$1
  # Scoped names like @scope/pkg need the slash URL-encoded for the registry API
  local result
  result=$(curl -s "https://registry.npmjs.org/${pkg//\//%2F}" 2>/dev/null)

  local info
  info=$(echo "$result" | python3 -c "
import json, sys, datetime
try:
    d = json.load(sys.stdin)
    times = d.get('time', {})
    if 'created' in times:
        created = times['created'][:10]
        age = (datetime.date.today() - datetime.date.fromisoformat(created)).days
        print(f'created={created},age={age}')
    else:
        print('not-found')
except Exception:
    print('not-found')
" 2>/dev/null)

  if [[ -z "$info" || "$info" == "not-found" ]]; then
    echo "FAIL: $pkg - not found on npm registry"
    return 1
  fi

  local age
  age=$(echo "$info" | grep -oP 'age=\K[0-9]+')
  if [[ -n "$age" && "$age" -lt 30 ]]; then
    echo "WARN: $pkg - registered less than 30 days ago (age: ${age}d)"
  else
    echo "OK: $pkg - $info"
  fi
}

# Extract bare package specifiers from imports in staged files,
# then reduce deep imports like 'lodash/merge' to the package name
git diff --cached --name-only | grep -E '\.(ts|js|tsx|jsx)$' | while read -r file; do
  grep -hoP "from ['\"]\K@?[a-zA-Z][a-zA-Z0-9./_-]*(?=['\"])" "$file" \
    | sed -E 's#^(@[^/]+/[^/]+|[^/]+).*#\1#' \
    | sort -u | while read -r pkg; do
      # Skip relative imports and node built-ins
      if [[ "$pkg" != .* && "$pkg" != node:* ]]; then
        check_package "$pkg"
      fi
    done
done
```

Note that the registry document does not include download counts; those come from a separate API, covered in the next check.
### 2. Flag packages with no download history
Legitimate packages accumulate download counts over time. A package with zero downloads in the last month on a name that sounds like a common utility is a strong signal of hallucination hijacking:
```bash
# Check npm download count for the last week
check_downloads() {
  local pkg=$1
  local weekly
  weekly=$(curl -s "https://api.npmjs.org/downloads/point/last-week/${pkg}" | \
    python3 -c "import json,sys; d=json.load(sys.stdin); print(d.get('downloads',0))" 2>/dev/null)
  weekly=${weekly:-0}   # treat a failed lookup as zero downloads
  if [[ "$weekly" -lt 100 ]]; then
    echo "SUSPICIOUS: $pkg has only $weekly downloads last week"
    return 1
  fi
  echo "OK: $pkg ($weekly downloads/week)"
}
```
### 3. Verify publisher trust before first install
For any new package entering your dependency tree, check whether the publisher has a history of trusted packages. A publisher account created last week with one package is a strong red flag:
```python
import datetime

import requests


def check_publisher_trust(package_name: str) -> dict:
    """Check if a package's publisher has an established track record."""
    r = requests.get(f"https://registry.npmjs.org/{package_name}")
    if r.status_code != 200:
        return {"trusted": False, "reason": "package not found"}
    data = r.json()
    maintainers = data.get("maintainers", [])
    created = data.get("time", {}).get("created", "")
    if not maintainers:
        return {"trusted": False, "reason": "no maintainers listed"}
    first_maintainer = maintainers[0].get("name", "")
    # The registry exposes package age, not account age; a package this
    # young on a generic-sounding name fits the hijack profile
    if created:
        pkg_age = (datetime.date.today() -
                   datetime.date.fromisoformat(created[:10])).days
        if pkg_age < 30:
            return {"trusted": False,
                    "reason": f"package is only {pkg_age} days old",
                    "maintainer": first_maintainer}
    return {"trusted": True, "maintainer": first_maintainer}
```
### 4. Run the validation as a pre-commit hook
**Why pre-commit?** Running at commit time catches the problem before the dependency ever enters the lockfile or gets installed. The developer sees the warning while the context is fresh, before the code is reviewed or merged. Post-install hooks are too late: by the time CI runs `npm install`, the malicious code has already executed in the CI environment.
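Wiring this up is a one-file affair. A minimal sketch, assuming the `validate-deps.sh` script from check 1 is committed at `scripts/validate-deps.sh` (the path and message text are illustrative; a husky or pre-commit-framework hook works the same way):

```bash
#!/bin/sh
# .git/hooks/pre-commit
# Block the commit if any newly imported package fails registry validation
if ! ./scripts/validate-deps.sh; then
  echo "Commit blocked: a new dependency failed registry validation." >&2
  exit 1
fi
```

The hook exits nonzero on failure, which is all git needs to abort the commit; a developer who has verified the package manually can still bypass it with `git commit --no-verify`.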
## The Broader Pattern: AI-Amplified Supply Chain Risk
Hallucinated dependency hijacking is one instance of a larger pattern: AI coding tools dramatically expand the attack surface of your software supply chain. Before AI agents, a developer who needed a new package would search npm, read the readme, check the download count, and make a deliberate choice. AI agents skip every step of that evaluation. They emit package names as confidently as they emit function bodies, and the developer's attention is on the logic, not the package metadata.
The supply chain tooling the industry built over the last decade assumes human-paced, human-evaluated dependency management. That assumption is now wrong for any team using AI coding tools at scale. The tooling needs to move earlier in the pipeline, closer to where the AI output enters the codebase, and it needs to be automated rather than relying on developer attention.
This is the same argument that applies to SAST, secret scanning, and code coverage gates in AI-assisted workflows. The AI generates fast. The checks need to be faster, automated, and positioned at the commit boundary so they don't slow the developer down but do catch the problems before they propagate.
**Key stat:** Socket Security found that npm package hallucination hijacking attempts increased 340% in Q1 2026 compared to Q1 2025, directly correlated with the adoption curve of agentic coding tools. The attack is cheap to execute and growing.
## What a Complete Defense Looks Like
Defending against AI hallucination hijacking requires three layers working together:
- **Pre-commit:** Validate all new dependency imports against the npm/PyPI/RubyGems registry, check package age and download history, cross-reference against your approved dependency list. Block commits that introduce suspicious packages.
- **CI/CD:** Run `npm install --ignore-scripts` as a default, validate lockfile integrity on every run, run a full SCA scan including new packages not yet in vulnerability databases (check for age, publisher reputation, and file content anomalies).
- **Lockfile hygiene:** Commit your lockfile, treat lockfile changes as security-relevant, require explicit review for any package addition or version change. The lockfile is the audit trail.
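The CI layer above can be sketched in a few pipeline steps. GitHub Actions syntax is shown here for concreteness; the step names are illustrative:

```yaml
# Disable lifecycle scripts and treat lockfile drift as a failure
- name: Install dependencies without running install scripts
  run: npm ci --ignore-scripts
- name: Verify registry signatures for installed packages
  run: npm audit signatures
```

`npm ci` already fails the build if `package-lock.json` is out of sync with `package.json`, which makes the lockfile the enforced source of truth, and `--ignore-scripts` ensures that even a malicious package that slips through cannot execute a postinstall hook in CI.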
None of these checks are complex in isolation. The problem is that most development environments have none of them applied to the AI-generated code path specifically. Developers trust the agent's output more than they should, and the tooling doesn't compensate for that trust.
**LucidShark automates all three layers.** The pre-commit hook validates new dependency names against the npm registry and your approved package list. The CI integration runs SCA with publisher reputation checks. The lockfile monitor flags drift between commits. All of it runs locally, with no code leaving your machine. Install in under a minute: `npx lucidshark init`. Full setup at [lucidshark.com](https://lucidshark.com).
The attack is straightforward: find what the model made up, register it, wait for developers to install it. The defense is equally straightforward: validate before you install, automate the validation, and treat every AI-generated import as unverified until proven otherwise. The gap between those two positions is a pre-commit hook and a registry lookup. Close it before someone else exploits it.