DEV Community

Alessandro Pignati

Your AI Gateway Was a Backdoor: Inside the LiteLLM Supply Chain Breach

If you're building with LLMs, there's a good chance you've used LiteLLM. It’s a fantastic tool that simplifies interacting with dozens of providers through a single OpenAI-compatible interface. But on March 24, 2026, that convenience became a liability.

A sophisticated threat actor group known as TeamPCP successfully compromised LiteLLM as part of a broader campaign targeting developer infrastructure. This wasn't just a simple bug; it was a calculated multi-stage supply chain attack designed to siphon credentials from the heart of AI development environments.

The TeamPCP Campaign: More Than Just LiteLLM

The breach of LiteLLM was one piece of a larger puzzle. Throughout March 2026, TeamPCP systematically targeted developer tools like Trivy, KICS, and Telnyx. By compromising these foundational components, the attackers gained a foothold in the software supply chain, allowing them to move laterally and reuse stolen credentials across different ecosystems.

This shift in tactics is a wake-up call for the developer community. Adversaries are no longer just looking for vulnerabilities in your code; they are targeting the very tools you use to build and secure it.

How the Attack Worked: A Tale of Two Versions

The attackers injected malicious payloads into two specific versions of LiteLLM released on PyPI: 1.82.7 and 1.82.8. While both were dangerous, they used different execution methods to ensure maximum impact.

  • Version 1.82.7: payload embedded in litellm/proxy/proxy_server.py, triggered when the proxy module was imported.
  • Version 1.82.8: a malicious litellm_init.pth file, executed automatically on Python interpreter startup.

The use of a .pth file in version 1.82.8 was particularly insidious. Python's site module executes any line in these files that begins with import, automatically, every time the interpreter starts. This meant that simply having the package installed was enough to trigger the malware: no import litellm required.
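To see the mechanism for yourself, here is a benign sketch (the file name demo_init.pth and the PTH_RAN marker variable are made up for illustration). A .pth line beginning with import is executed by the site module the moment its directory is processed as a site directory, before any user code has imported anything:

```python
import os
import subprocess
import sys
import tempfile

# Write a .pth file whose single line starts with "import" - the site
# module exec()s such lines instead of treating them as path entries.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    # Benign stand-in for a payload: just set a marker variable.
    f.write('import os; os.environ["PTH_RAN"] = "1"\n')

# A fresh interpreter that registers demo_dir as a site directory runs
# the .pth line as a side effect, with no explicit import of anything.
child = subprocess.run(
    [sys.executable, "-c",
     f"import site; site.addsitedir({demo_dir!r}); "
     "import os; print(os.environ.get('PTH_RAN'))"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # prints "1": the .pth code already ran
```

In a real install the file sits inside site-packages itself, so the interpreter processes it at startup without the addsitedir call shown here.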

What Was Stolen? (Spoiler: Everything)

The payload was a comprehensive "infostealer" designed to harvest every sensitive secret it could find. Once executed, it collected and encrypted data before exfiltrating it to attacker-controlled domains like models.litellm[.]cloud.

The list of targeted data included:

  • Cloud Credentials: AWS, GCP, and Azure keys.
  • CI/CD Secrets: GitHub Actions tokens and environment variables.
  • Infrastructure Data: Kubernetes configurations and Docker credentials.
  • Developer Artifacts: SSH keys, shell history, and even cryptocurrency wallets.

To stay hidden, the malware established persistence by installing a systemd service named sysmon.service and writing a script to ~/.config/sysmon/sysmon.py. It even attempted to spread within Kubernetes clusters by creating privileged "node-setup" pods.

Are You Affected? Indicators of Compromise (IOCs)

If you were using LiteLLM around late March 2026, you need to check your environments immediately. Here are the key signs of a compromise:

  • Files to look for:
    • litellm_init.pth in your site-packages/ directory.
    • ~/.config/sysmon/sysmon.py and sysmon.service.
    • Temporary files like /tmp/pglog or /tmp/.pg_state.
  • Network activity: Outbound HTTPS connections to models.litellm[.]cloud or checkmarx[.]zone.
  • Kubernetes anomalies: Any pods named node-setup-* or unusual access to secrets in your audit logs.
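The file-based checks above can be triaged with a short script. This is a sketch, not an authoritative scanner: the systemd unit locations are assumptions (the article names the unit but not its directory, so both the system and user unit paths are checked), and network and Kubernetes IOCs still need separate review.

```python
import os
import site
from pathlib import Path

# File-based IOCs from the list above. The sysmon.service paths are
# assumptions covering the two usual systemd unit directories.
IOC_PATHS = [
    Path.home() / ".config" / "sysmon" / "sysmon.py",
    Path("/etc/systemd/system/sysmon.service"),
    Path.home() / ".config" / "systemd" / "user" / "sysmon.service",
    Path("/tmp/pglog"),
    Path("/tmp/.pg_state"),
]

def find_iocs():
    """Return the paths of any IOC files present on this host."""
    hits = [str(p) for p in IOC_PATHS if p.exists()]
    # litellm_init.pth can sit in any site-packages directory.
    site_dirs = list(site.getsitepackages()) + [site.getusersitepackages()]
    for d in site_dirs:
        pth = os.path.join(d, "litellm_init.pth")
        if os.path.exists(pth):
            hits.append(pth)
    return hits

if __name__ == "__main__":
    hits = find_iocs()
    print("\n".join(hits) if hits else "no file-based IOCs found")
```

Run it on every host and CI runner that installed LiteLLM in the affected window; an empty result does not prove you are clean, only that these specific artifacts are absent.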

How to Fix It and Stay Safe

If you find evidence of compromise, do not just upgrade the package. You must treat the entire environment as breached.

  1. Isolate and Rebuild: Isolate affected hosts or CI runners and rebuild them from known-good images.
  2. Rotate Everything: Every secret that was accessible to the compromised environment (API keys, SSH keys, cloud tokens) must be rotated immediately.
  3. Pin Your Dependencies: Use lockfiles (poetry.lock, requirements.txt with hashes) to ensure you only install verified versions of your dependencies.
  4. Scan for Malicious Code: Use tools that monitor for suspicious package behavior, not just known CVEs.
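For step 3, a hash-pinned requirements file looks like the fragment below. The version and digest are placeholders, not a verified release; generate the real digest from a wheel you trust with pip hash:

```
# requirements.txt: illustrative only. Replace the placeholder with the
# real sha256 of a wheel you have verified ("pip hash <wheel>" prints it).
litellm==<known-good-version> --hash=sha256:<sha256-of-verified-wheel>
```

Installing with pip install --require-hashes -r requirements.txt then refuses any artifact whose digest does not match, so a tampered upload fails loudly instead of installing silently.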

Conclusion

The LiteLLM breach is a stark reminder that our AI stacks are only as secure as their weakest dependency. As we rush to integrate LLMs into everything, we can't afford to overlook the basics of supply chain security.

Have you audited your AI dependencies lately? Let's discuss in the comments how you're securing your LLM workflows!
