DEV Community

Claudio Basckeira

Posted on • Originally published at edge-briefing-ai.beehiiv.com

LiteLLM Was Backdoored: What the TeamPCP Supply Chain Attack Means for Python AI Projects

On March 24, 2026, threat actor TeamPCP published two compromised versions of LiteLLM to PyPI. If you work with Python AI tooling, this one is worth understanding in detail, because the attack technique will be reused.

What Happened

Versions 1.82.7 and 1.82.8 of LiteLLM contained malicious payloads, published after attackers obtained the maintainer's PyPI credentials. LiteLLM itself wasn't attacked directly; the malicious upload was the third step in a cascade:

  1. March 19: TeamPCP compromised Trivy, an open-source security scanner
  2. March 21: Used the compromised Trivy action to steal credentials from Checkmarx's CI pipeline
  3. March 24: Used stolen credentials from LiteLLM's CI/CD pipeline (which ran Trivy) to publish malicious packages

The malicious versions executed in two different ways. Version 1.82.7 embedded a base64-encoded payload in litellm/proxy/proxy_server.py; it fires when anything imports litellm.proxy. Version 1.82.8 was more aggressive: it added a litellm_init.pth file to site-packages, which runs on every Python interpreter startup regardless of whether LiteLLM is imported. That includes pip install, your IDE's language server, and python -c "anything".

Once triggered, the payload harvested SSH keys, cloud credentials, Kubernetes secrets, database configs, and .env files. On machines running Kubernetes, it attempted lateral movement by deploying privileged pods to every node and installed a persistent systemd backdoor that polls an attacker-controlled endpoint for additional binaries.

Why This Is Harder to Catch Than It Looks

Standard supply chain defenses focus on hash verification and suspicious package names. This attack bypassed both because the malicious content was published using the maintainer's actual credentials. The hash is correct. The package name is correct. There's nothing to flag.

The .pth mechanism in version 1.82.8 is particularly worth understanding. It's a legitimate Python feature: files ending in .pth in site-packages are processed on every interpreter startup, and any line beginning with import (followed by a space or tab) is executed. This isn't a vulnerability; it's how Python's site module works. Existing supply chain scanning tools mostly look at setup.py and __init__.py; they don't catch malicious .pth files.
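The check itself is simple, even if mainstream scanners skip it. Here's a minimal sketch, using only the standard library, that flags .pth files containing executable import lines (the function name and the review-candidate heuristic are mine, not from any existing tool):

```python
import site
from pathlib import Path

def find_executable_pth(site_dirs=None):
    """Find .pth files containing lines Python will execute at startup.

    Per the site module's rules, a .pth line beginning with
    'import ' (or 'import' + tab) is exec'd at interpreter start;
    every other line is treated as a path to append to sys.path.
    """
    if site_dirs is None:
        site_dirs = site.getsitepackages()
    suspicious = []
    for d in site_dirs:
        for pth in Path(d).glob("*.pth"):
            try:
                text = pth.read_text(errors="replace")
            except OSError:
                continue  # unreadable file; skip rather than crash
            for line in text.splitlines():
                if line.startswith(("import ", "import\t")):
                    suspicious.append((pth, line.strip()))
                    break
    return suspicious
```

Note that legitimate packages (editable installs, coverage hooks, some import shims) also ship import-style .pth files, so treat hits as review candidates, not verdicts.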

Who Was Affected

LiteLLM is downloaded 3.4 million times per day and is present in 36% of cloud environments as a transitive dependency. You might not have installed LiteLLM directly and still be affected. Downstream packages that pull in LiteLLM transitively include DSPy, MLflow, OpenHands, CrewAI, and Arize Phoenix.
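To see whether anything in your own environment declares LiteLLM as a dependency, you can walk installed package metadata. A rough standard-library sketch (packages_requiring is an illustrative name; a resolver like pipdeptree gives a fuller, recursive picture):

```python
import re
from importlib import metadata

def packages_requiring(name):
    """Return installed distributions that declare `name` as a dependency."""
    target = name.lower().replace("_", "-")
    dependents = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # Requirement strings look like "litellm>=1.0; extra == 'proxy'",
            # so grab only the leading distribution name for comparison.
            m = re.match(r"[A-Za-z0-9._-]+", req)
            if m and m.group(0).lower().replace("_", "-") == target:
                dependents.add(dist.metadata["Name"])
    return sorted(dependents)
```

Run it as `packages_requiring("litellm")`; an empty list means nothing installed declares it directly, though deeper transitive chains still need a real dependency resolver.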

The malicious versions were live for approximately three hours before PyPI quarantined them. Detection was accidental; no automated tooling flagged the releases.

What to Do

Check first: pip show litellm | grep Version

If you see 1.82.7 or 1.82.8:

  • Uninstall immediately and run pip cache purge (or rm -rf ~/.cache/uv if using uv) so a cached malicious wheel can't be reinstalled
  • Rotate every credential accessible from that environment: API keys, SSH keys, cloud credentials, database passwords
  • Check for persistence artifacts: ~/.config/sysmon/sysmon.py, a sysmon.service systemd unit, files in /tmp/pglog or /tmp/.pg_state
  • If Kubernetes was present: inspect kube-system namespace for unauthorized pods, review cluster audit logs

The clean version is 1.82.6.
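The version check above can be automated for fleets of machines. A small sketch (the function names are mine; the known-bad set comes from this post):

```python
from importlib import metadata

KNOWN_BAD = {"1.82.7", "1.82.8"}  # compromised releases per this post

def installed_litellm_version():
    """Return the installed litellm version string, or None if absent."""
    try:
        return metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None

def classify(version):
    """Map a litellm version string onto the triage outcomes above."""
    if version is None:
        return "not installed"
    if version in KNOWN_BAD:
        return "compromised: uninstall, purge caches, rotate credentials"
    return "not a known-bad version"
```

Calling `classify(installed_litellm_version())` on each host gives a quick triage signal; it doesn't replace the artifact and Kubernetes checks, since a cleaned-up install can leave persistence behind.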

The Broader Signal

This is part of a coordinated campaign. Three days later, the Telnyx package was hit with the same technique. TeamPCP is running systematic attacks across Python packages in the AI/ML tooling space.

There's also one detail buried in the security post-mortems that deserves separate attention: the attackers used an AI agent called "openclaw" as part of their operational pipeline. It's the first confirmed case of an AI agent used operationally in a software supply chain attack. The full scope of what it automated isn't publicly documented, but its presence in the campaign means some coordination steps that previously required manual effort are now automated.

For teams running Python AI tooling in production: pin your dependencies, monitor transitive package updates, and add .pth file detection to your supply chain scanning. The gap between what automated tooling catches and what's actually exploitable just got a bit wider.


This story is from Edge Briefing: AI, a weekly newsletter curating the signal from AI noise. Subscribe for free to get it every Tuesday.
