On March 24, 2026, the AI developer community received a stark reminder of how fragile software supply chains have become. Two versions of litellm — a widely used Python library that serves as a unified proxy for over 100 LLM providers (OpenAI, Anthropic, AWS Bedrock, Google Vertex, and many more) — were compromised on PyPI.
Versions 1.82.7 and 1.82.8 contained malicious code that turned the package into an aggressive credential stealer and Kubernetes lateral-movement tool. The attack was short-lived (the malicious releases were available for roughly 2–5 hours), but given litellm’s massive adoption — millions of daily downloads and heavy use in AI agent frameworks, MCP servers, orchestration tools, and production LLM pipelines — the potential impact is enormous.
This wasn’t typo-squatting or a fake package. It was a direct compromise of the legitimate litellm project on PyPI, attributed to the threat actor TeamPCP (the same group behind recent attacks on Trivy, Checkmarx/KICS, and other security tooling).
What Is LiteLLM and Why Does This Matter?
LiteLLM acts as the “universal API gateway” for LLMs. Developers import it once and can call any model provider with the same OpenAI-compatible interface. Because it often centralizes API keys and authentication tokens, it frequently runs with broad access to secrets — making it a high-value target.
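To illustrate the unified-interface idea, here is a hypothetical sketch of the kind of provider routing such a gateway performs. The `provider_for` helper and its prefix table are illustrative only, not litellm's actual implementation:

```python
def provider_for(model: str) -> str:
    """Map an OpenAI-style model string to a backend provider (illustrative only)."""
    # litellm-style model strings often use a "provider/model" prefix convention;
    # a bare model name defaults to OpenAI.
    prefix = model.split("/", 1)[0] if "/" in model else "openai"
    known = {
        "openai": "OpenAI",
        "anthropic": "Anthropic",
        "bedrock": "AWS Bedrock",
        "vertex_ai": "Google Vertex",
    }
    return known.get(prefix, "unknown provider")
```

One call signature fanning out to many providers is exactly the property that makes such a package a natural chokepoint for API keys: `provider_for("anthropic/claude-3-opus")` yields `"Anthropic"`, `provider_for("gpt-4o")` yields `"OpenAI"`.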
When the malicious versions were installed (via pip install litellm or as a transitive dependency), the backdoor activated silently. Version 1.82.8 was especially dangerous: it shipped with a file called litellm_init.pth.
How the Attack Worked: Technical Breakdown
Python automatically executes any .pth file found in site-packages/ at interpreter startup — no import litellm required. The attackers abused this built-in mechanism perfectly.
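The mechanism is easy to reproduce harmlessly with the standard library alone: the `site` module exec()s any line of a `.pth` file that begins with `import`. A minimal demonstration (using a throwaway directory instead of a real site-packages):

```python
import os
import site
import tempfile

# Create a throwaway directory containing a .pth file.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    # The site module executes any .pth line that starts with "import".
    f.write("import os; os.environ['PTH_RAN'] = '1'\n")

# For real site-packages directories this processing happens automatically
# at interpreter startup; here we trigger it explicitly.
site.addsitedir(d)

print(os.environ.get("PTH_RAN"))  # prints: 1
```

No `import` of any package was needed: merely starting an interpreter with such a file on a site directory is enough, which is why the attackers chose this vector.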
- Execution trigger: litellm_init.pth (approximately 34 KB, heavily obfuscated with double base64 encoding) launched a child Python process containing the real payload.
- Credential harvesting: The malware scanned for and exfiltrated:
  - SSH keys and configs (~/.ssh/)
  - Cloud credentials (AWS, GCP, Azure)
  - Kubernetes configs and service-account tokens (~/.kube/)
  - LLM API keys (the very keys litellm was proxying)
  - .env files, Git credentials, shell histories, database passwords, and crypto wallets (Bitcoin, Ethereum, Solana, etc.)
  - Environment variables and cloud metadata endpoints (IMDS)
- Exfiltration: Data was bundled into a tar archive, encrypted with AES-256-CBC (the session key wrapped with a hardcoded RSA public key), and POSTed to https://models.litellm.cloud/ (an attacker-controlled domain).
- Kubernetes worm behavior (if a service-account token was present):
  - Read all secrets across all namespaces
  - Deployed privileged alpine:latest pods (named node-setup-*) on every node
  - Installed a persistent backdoor (~/.config/sysmon/sysmon.py) via a systemd user service
- Version 1.82.7 injected its payload into litellm/proxy/proxy_server.py but delivered the same stealer functionality.
A bug in the malware sometimes caused an exponential fork bomb (the child process re-triggered the .pth file), which ironically helped some researchers notice the issue faster when their systems became unresponsive.
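A quick way to check a machine for the loader file is to scan every site-packages directory for it. The filename is the published indicator; the helper itself is my own sketch:

```python
import pathlib
import site

def find_malicious_pth(name: str = "litellm_init.pth") -> list:
    """Return paths of any site-packages copies of the backdoor loader file."""
    dirs = list(site.getsitepackages())
    dirs.append(site.getusersitepackages())
    hits = []
    for d in dirs:
        candidate = pathlib.Path(d) / name
        if candidate.exists():
            hits.append(candidate)
    return hits
```

An empty list from `find_malicious_pth()` means no copy of the loader was found in this interpreter's site directories (other virtual environments and caches still need separate checking).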
Discovery and Response Timeline
- ~10:52 UTC — Version 1.82.8 published to PyPI (no corresponding GitHub tag or release).
- Shortly after — Version 1.82.7 also confirmed malicious.
- ~12:30 UTC — FutureSearch researchers (via a transitive dependency in Cursor’s MCP plugin) raised the alarm publicly.
- PyPI admins quarantined the project and removed the malicious wheels.
- BerriAI (maintainers) confirmed the breach, linking it to stolen PyPI credentials likely obtained through their use of the previously compromised Trivy scanner in CI/CD.
By the afternoon, the malicious versions were gone and quarantine was lifted.
Who Was Affected?
Anyone who installed or upgraded to 1.82.7 or 1.82.8 on March 24, 2026 — including:
- Developer laptops
- CI/CD runners
- Docker containers and production servers
- Any project using litellm as a transitive dependency
If you run litellm in Kubernetes or handle LLM API keys, treat the environment as potentially fully compromised.
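When auditing a cluster, the attacker-deployed pods can be spotted by their naming pattern. A small helper (hypothetical; it operates on a list of pod names you would obtain from `kubectl get pods -A`):

```python
import fnmatch

def suspicious_pods(pod_names):
    """Return pod names matching the node-setup-* pattern used by the worm."""
    return [p for p in pod_names if fnmatch.fnmatch(p, "node-setup-*")]
```

For example, `suspicious_pods(["coredns-x7f", "node-setup-abc12"])` returns `["node-setup-abc12"]`. A match is only a starting point; confirm against deployment history before deleting anything.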
Immediate Remediation Steps (Do This Now)
- Check your installations:

```bash
pip show litellm
# or, if using uv:
find ~/.cache/uv -name "litellm_init.pth"
```

- Pin to a safe version:

```txt
litellm<=1.82.6
```

- Rotate EVERY credential that existed on any affected system:
  - All LLM API keys (OpenAI, Anthropic, Bedrock, etc.)
  - Cloud provider keys/tokens
  - SSH keys
  - Kubernetes secrets
  - Database passwords
  - Anything stored in .env files or shell history
- Hunt for persistence:
  - Check for ~/.config/sysmon/sysmon.py and the corresponding systemd service
  - In Kubernetes, audit the kube-system namespace for node-setup-* pods and review secret access logs
- Purge caches:

```bash
pip cache purge
# or
rm -rf ~/.cache/pip ~/.cache/uv
```
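The version check can also be folded into a fleet-wide audit script. A sketch using only the standard library's `importlib.metadata` (the `classify` helper and its labels are mine):

```python
from importlib import metadata

# The two releases confirmed malicious in this incident.
COMPROMISED = {"1.82.7", "1.82.8"}

def classify(version: str) -> str:
    """Label a version string against the known-malicious releases."""
    return "COMPROMISED" if version in COMPROMISED else "ok"

def litellm_status() -> str:
    """Check the locally installed litellm, if any."""
    try:
        return classify(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return "not installed"
```

Run `litellm_status()` in each environment (container, CI runner, laptop); anything other than `"ok"` or `"not installed"` means credential rotation is mandatory, not optional.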
Broader Lessons for the AI Supply Chain
This incident is the latest in TeamPCP’s campaign targeting security tooling and AI infrastructure. It highlights three uncomfortable truths:
1. CI/CD dependencies are attack surfaces — LiteLLM’s reliance on Trivy gave attackers the publishing credentials they needed.
2. .pth files are stealthy — they execute before your code even imports the package. Many static analysis tools miss them.
3. AI tooling moves fast — with millions of downloads and heavy transitive usage, a single compromised release can reach thousands of production environments in minutes.
Supply-chain attacks on AI libraries are no longer theoretical — they target the exact layer where your most sensitive secrets live.
Final Advice
Pin your dependencies aggressively. Review your CI/CD pipelines for external scanners and tools. Assume any popular AI package could be next. And if you installed litellm 1.82.7 or 1.82.8 yesterday — rotate those keys immediately.
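As a concrete example of aggressive pinning, a requirements file can specify an exact, known-good version rather than a range (the version below is the last safe release named in this post; adding hash-checking with `pip install --require-hashes` provides a further layer, hashes omitted here):

```txt
# requirements.txt: pin exact versions, never open-ended ranges
litellm==1.82.6
```

Exact pins would not have blocked this attack for anyone already on 1.82.7, but they do stop a malicious release from flowing silently into the next build.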
The packages have been removed, but any secrets they stole may already be in the attackers’ hands.
Stay vigilant. The AI supply chain just became significantly more dangerous.
Visit pypistats.com for PyPI package analytics and AI insights.
This post is based on public disclosures from BerriAI, FutureSearch, Sonatype, Endor Labs, Snyk, ARMO, and other security researchers.