
Ritvik Dayal

Your Python Environment Might Be Compromised by litellm (And Here's How to Check)

What Happened to LiteLLM

On March 24, 2026, someone published two malicious versions of the popular litellm Python package to PyPI. Versions 1.82.7 and 1.82.8 contained a full-blown backdoor that harvested credentials, established persistence, and phoned home to a command-and-control (C2) server.

The kicker? The attacker didn't hack PyPI directly. They poisoned a security scanner (Trivy) that LiteLLM's own CI/CD pipeline trusted. The scanner stole the PyPI publish token, and the attacker used it to push compromised packages that looked completely legitimate.

The malicious versions were live for about three hours before PyPI pulled them. Three hours is a long time when you have automated deployments.

The Attack Chain

Here's how the whole thing unfolded, step by step:

[Diagram: the attack chain, step by step]

Two things make this especially nasty:

Version 1.82.7 embedded the payload in litellm/proxy/proxy_server.py. It triggered when you imported the module. Standard stuff for malicious packages.
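
Import-time execution is easy to demonstrate harmlessly. A minimal sketch (the module name demo_mod is made up for illustration): merely importing a module runs all of its top-level code.

```shell
# Benign demonstration of the v1.82.7 trigger: top-level code in a module
# runs the moment the module is imported.
mkdir -p /tmp/import-demo
printf 'print("top-level code executed on import")\n' > /tmp/import-demo/demo_mod.py
PYTHONPATH=/tmp/import-demo python3 -c 'import demo_mod'
```

The payload in proxy_server.py worked the same way: no function call needed, the import alone is enough.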

Version 1.82.8 went further. It dropped a litellm_init.pth file into your site-packages directory. Python processes .pth files in site-packages at interpreter startup, and lines beginning with import are executed, so this code runs every time a Python interpreter that uses that site-packages directory launches, even if you never import litellm. Just running python3 -c "print('hello')" would trigger the backdoor.
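
You can reproduce the mechanism safely in a throwaway venv (all paths and names below are made up for the demo):

```shell
# Harmless reproduction of the v1.82.8 technique in a disposable venv.
python3 -m venv --without-pip /tmp/pth-demo
SITE=$(/tmp/pth-demo/bin/python3 -c 'import site; print(site.getsitepackages()[0])')
# A .pth line that starts with "import" is executed at every interpreter startup:
printf 'import sys; sys.stderr.write("pth hook ran\\n")\n' > "$SITE/demo_init.pth"
/tmp/pth-demo/bin/python3 -c 'print("hello")'
```

Even a one-liner that never touches litellm fires the hook, which is exactly why the real artifact is so dangerous.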

What the Backdoor Actually Does

This wasn't a crypto miner or a simple credential stealer. The payload operates in three stages:

[Diagram: backdoor exploit, stage 1]

[Diagram: backdoor exploit, stage 2]

[Diagram: backdoor exploit, stage 3]

The Kubernetes bit is particularly clever. If the compromised package runs inside a pod with sufficient permissions, it deploys a privileged pod named node-setup-{node_name} into the kube-system namespace for every node in the cluster. That's full host-level access on every node.

Why "Just Uninstall It" Isn't Enough

If you had either compromised version installed at any point, even briefly, uninstalling the package doesn't undo the damage. The backdoor:

  1. Already exfiltrated your credentials
  2. Installed a persistent systemd service that survives package removal
  3. Dropped .pth files that survive pip uninstall
  4. In Kubernetes environments, deployed pods that live outside your application entirely

You need to check for all of these things. Across every Python environment on your system. That includes PyEnv versions, virtualenvs, conda environments, system Python, and the pip cache.
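
To get a feel for the tedium, here's roughly what the manual sweep looks like for stdlib virtualenvs alone. This is a hedged sketch, not the script itself (finding pyvenv.cfg locates stdlib venvs; pyenv, conda, and system Python each need their own pass):

```shell
# Find every stdlib venv under a root and report its litellm version, if any.
scan_venvs() {  # usage: scan_venvs <root-dir>
  find "$1" -name pyvenv.cfg 2>/dev/null | while read -r cfg; do
    venv=$(dirname "$cfg")
    ver=$({ "$venv/bin/pip" show litellm 2>/dev/null || true; } \
          | awk '/^Version:/{print $2}')
    [ -z "$ver" ] && continue
    case "$ver" in
      1.82.7|1.82.8) echo "CRITICAL  $venv  litellm $ver (COMPROMISED)" ;;
      *)             echo "FOUND     $venv  litellm $ver" ;;
    esac
  done
}
scan_venvs "$HOME"
```

Multiply that by every environment type on your machine and the case for automation makes itself.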

Doing this manually is tedious and error-prone. So I wrote a script.

The Script: litellm-sweep

litellm-sweep.sh is a single bash script that scans your entire system across 10 phases:

[Diagram: how litellm-sweep works]

What It Catches

Package installations across pyenv, virtualenvs, system Python, conda, Homebrew, and the pip cache. Every found version is checked against the known compromised versions (1.82.7, 1.82.8) and flagged accordingly.

Source code references in .py files, requirements.txt, pyproject.toml, Pipfile, Dockerfile, and more. These are reported but never auto-edited because modifying source code automatically is asking for trouble.
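
At its core the source scan is a recursive grep over dependency files and Python sources. A minimal sketch (the globs here are a subset of what the script covers):

```shell
# Report every litellm reference as file:line:text, without editing anything.
scan_sources() {  # usage: scan_sources <dir>
  grep -rn \
    --include='*.py' --include='requirements*.txt' --include='pyproject.toml' \
    --include='Pipfile' --include='Dockerfile' \
    -e 'litellm' "$1" 2>/dev/null || true
}
```

The file:line:match output shape is the same one that shows up in the Phase 7 sections of the example runs.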

Persistence artifacts from the actual backdoor payload:

  • ~/.config/sysmon/sysmon.py (the backdoor itself)
  • ~/.config/systemd/user/sysmon.service (persistence via systemd)
  • litellm_init.pth in any site-packages (the v1.82.8 startup hook)
  • /tmp/tpcp.tar.gz, /tmp/session.key, /tmp/payload.enc (staging files)
  • On macOS, it also checks LaunchAgents for suspicious plists
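
If you just want a quick manual spot-check, the file-existence part of Phase 8 boils down to this sketch:

```shell
# Flag known TeamPCP artifact paths; CLEAN means the path does not exist.
check_artifact() { [ -e "$1" ] && echo "CRITICAL  $1" || echo "CLEAN     $1"; }

for f in \
  "$HOME/.config/sysmon/sysmon.py" \
  "$HOME/.config/systemd/user/sysmon.service" \
  /tmp/tpcp.tar.gz /tmp/session.key /tmp/payload.enc
do
  check_artifact "$f"
done

# The v1.82.8 startup hook can sit in any site-packages directory:
find "$HOME" -name 'litellm_init.pth' 2>/dev/null || true
```

The script does the same checks across more locations (and LaunchAgents on macOS), but any CRITICAL line from the loop above is already reason to assume compromise.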

Network indicators including DNS resolution of the C2 domains (models.litellm.cloud, checkmarx.zone), active connections via lsof, and references in your shell history.
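
The shell-history part of that check is simple enough to run by hand; DNS resolution and lsof need a live host, so they're left as comments. A sketch, not the script's exact logic:

```shell
# Grep history files for the two C2 domains, printing file:line:match.
scan_history() {
  grep -Hn -e 'models\.litellm\.cloud' -e 'checkmarx\.zone' "$@" 2>/dev/null || true
}
scan_history "$HOME/.bash_history" "$HOME/.zsh_history"

# DNS resolution (any output means the domain still resolves):
#   getent hosts models.litellm.cloud
# Active connections to either domain:
#   lsof -i | grep -E 'litellm\.cloud|checkmarx\.zone'
```

A hit in shell history doesn't prove compromise by itself, but combined with a compromised package version it's strong evidence the payload actually ran.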

Kubernetes indicators if kubectl is available: node-setup-* pods in kube-system and privileged container detection.
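
The pod-name check reduces to a filter you can sanity-check without a cluster; when kubectl is available, the script pipes `kubectl get pods -n kube-system -o name` through this kind of match:

```shell
# Pass pod names on stdin; print only suspicious node-setup-* pods.
flag_node_setup_pods() { grep '^pod/node-setup-' || true; }

# With a live cluster:
#   kubectl get pods -n kube-system -o name | flag_node_setup_pods
```

Privileged-container detection requires inspecting each pod's securityContext, which is why the script only does it when kubectl is present.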

Usage

# Full scan: home directory + common locations + all environments
./litellm-sweep.sh

# Add extra paths to scan
./litellm-sweep.sh --include /opt/ml /srv/apps

# Only scan specific paths, skip all environment scans
./litellm-sweep.sh --only /opt/ml /srv/apps

Example 1: Clean System (Nothing Found)

$ ./litellm-sweep.sh

litellm-sweep — scanning for all traces of litellm
Report will be saved to: litellm-sweep-report-2026-03-27.txt
Mode: full scan (home + common locations + environments)

── Phase 1: pyenv ──
  CLEAN  Python 3.10.14
  CLEAN  Python 3.11.9
  CLEAN  Python 3.12.4

── Phase 2: virtualenvs ──
  CLEAN  /home/dev/projects/api-service/.venv
  CLEAN  /home/dev/projects/ml-pipeline/.venv

── Phase 3: System Python ──
  CLEAN  /usr/bin/python3

── Phase 4: pip cache ──
  CLEAN  No litellm in pip cache

── Phase 5: Conda ──
  SKIP   conda not installed

── Phase 6: Homebrew ──
  CLEAN  Not in Homebrew formulae

── Phase 7: Source code references ──
  Scanning /home/dev ...
  CLEAN  No source code references found

── Phase 8: Persistence artifacts (TeamPCP backdoor) ──
  CLEAN  /home/dev/.config/sysmon/sysmon.py
  CLEAN  /home/dev/.config/systemd/user/sysmon.service
  CLEAN  /tmp/tpcp.tar.gz
  CLEAN  /tmp/session.key
  CLEAN  /tmp/payload.enc
  CLEAN  sysmon.service not registered
  Scanning for litellm_init.pth files (v1.82.8 startup hook)...
  CLEAN  No persistence artifacts found

── Phase 9: Network IOCs (C2 domains) ──
  CLEAN  No network IOCs found

── Phase 10: Kubernetes IOCs ──
  CLEAN  No node-setup-* pods in kube-system

══════════════════════════════════════════
  SCAN SUMMARY
══════════════════════════════════════════
  pyenv:                    0 finding(s)
  virtualenvs:              0 finding(s)
  System Python:            0 finding(s)
  pip cache:                0 finding(s)
  Conda:                    0 finding(s)
  Homebrew:                 0 finding(s)
  Source references:        0 finding(s)
  Persistence artifacts:    0 finding(s)
  Malicious .pth files:     0 finding(s)
  Network IOCs:             0 finding(s)
  Kubernetes IOCs:          0 finding(s)
  ──────────────────────────────────────
  TOTAL: 0 findings — system is clean

Nothing to remove. Done.

You want to see this output. This means you're good.

Example 2: Safe Version Installed (Not Compromised, But You Probably Want It Gone)

$ ./litellm-sweep.sh

litellm-sweep — scanning for all traces of litellm
Report will be saved to: litellm-sweep-report-2026-03-27.txt
Mode: full scan (home + common locations + environments)

── Phase 1: pyenv ──
  CLEAN  Python 3.11.9
  FOUND  Python 3.12.4 — litellm 1.61.2
  CLEAN  Python 3.10.14

── Phase 2: virtualenvs ──
  FOUND  /home/dev/projects/llm-gateway/.venv — litellm 1.61.2

── Phase 7: Source code references ──
  Scanning /home/dev ...
  FOUND  /home/dev/projects/llm-gateway/requirements.txt:14:litellm==1.61.2
  FOUND  /home/dev/projects/llm-gateway/src/router.py:3:import litellm

── Phase 8: Persistence artifacts (TeamPCP backdoor) ──
  CLEAN  No persistence artifacts found

── Phase 9: Network IOCs (C2 domains) ──
  CLEAN  No network IOCs found

══════════════════════════════════════════
  SCAN SUMMARY
══════════════════════════════════════════
  pyenv:                    1 finding(s)
  virtualenvs:              1 finding(s)
  Source references:        2 finding(s)
  Persistence artifacts:    0 finding(s)
  Network IOCs:             0 finding(s)
  Kubernetes IOCs:          0 finding(s)
  ──────────────────────────────────────
  TOTAL: 4 finding(s)

── Removal ──

Note: 2 source code reference(s) found.
  These are reported only — you must edit/remove them manually.
  Files with references:
    /home/dev/projects/llm-gateway/requirements.txt
    /home/dev/projects/llm-gateway/src/router.py

Removable packages:
  [1] [pyenv] Python 3.12.4 — litellm 1.61.2
  [2] [venv] /home/dev/projects/llm-gateway/.venv — litellm 1.61.2

Options:
  a — Remove all packages
  1,3,5 — Remove specific items (comma-separated)
  s — Skip (do nothing)

Choice: a

Removing: [pyenv] Python 3.12.4 — litellm 1.61.2
  Action: PYENV_VERSION=3.12.4 pyenv exec pip uninstall -y litellm
  Confirm? (y/n) y
  Successfully uninstalled litellm-1.61.2
  Removed

Removing: [venv] /home/dev/projects/llm-gateway/.venv — litellm 1.61.2
  Action: /home/dev/projects/llm-gateway/.venv/bin/pip uninstall -y litellm
  Confirm? (y/n) y
  Successfully uninstalled litellm-1.61.2
  Removed

── Done ──
Review source code references manually and remove litellm from your dependency files.
Report: litellm-sweep-report-2026-03-27.txt

Version 1.61.2 isn't compromised, so no critical alert. But given the library's supply chain was breached, you might want to remove it anyway and evaluate alternatives.

Example 3: Compromised Version Detected (The Bad Scenario)

This is the output you do not want to see:

$ ./litellm-sweep.sh

litellm-sweep — scanning for all traces of litellm
Report will be saved to: litellm-sweep-report-2026-03-27.txt
Mode: full scan (home + common locations + environments)

── Phase 1: pyenv ──
  CLEAN  Python 3.11.9

── Phase 2: virtualenvs ──
  !!CRITICAL!!  /srv/apps/inference/.venv — litellm 1.82.8 *** COMPROMISED VERSION ***

── Phase 7: Source code references ──
  Scanning /home/deploy ...
  FOUND  /srv/apps/inference/requirements.txt:8:litellm==1.82.8

── Phase 8: Persistence artifacts (TeamPCP backdoor) ──
  !!CRITICAL!!  BACKDOOR ARTIFACT: /home/deploy/.config/sysmon/sysmon.py
  !!CRITICAL!!  BACKDOOR ARTIFACT: /home/deploy/.config/systemd/user/sysmon.service
  !!CRITICAL!!  sysmon.service is ACTIVELY RUNNING
  !!CRITICAL!!  MALICIOUS .pth FILE: /srv/apps/inference/.venv/lib/python3.12/site-packages/litellm_init.pth
  CLEAN  /tmp/tpcp.tar.gz

── Phase 9: Network IOCs (C2 domains) ──
  FOUND  C2 domain resolves: models.litellm.cloud (verify no active connections)
  !!CRITICAL!!  ACTIVE CONNECTION to C2 domain: models.litellm.cloud
  FOUND  C2 domain found in shell history: /home/deploy/.bash_history → checkmarx.zone

── Phase 10: Kubernetes IOCs ──
  !!CRITICAL!!  MALICIOUS POD in kube-system: pod/node-setup-gke-prod-01
  !!CRITICAL!!  MALICIOUS POD in kube-system: pod/node-setup-gke-prod-02
  !!CRITICAL!!  PRIVILEGED malicious pod: node-setup-gke-prod-01 (privileged, image: alpine:latest)

══════════════════════════════════════════
  SCAN SUMMARY
══════════════════════════════════════════
  virtualenvs:              1 finding(s)
  Source references:        1 finding(s)
  Persistence artifacts:    4 finding(s)
  Malicious .pth files:     1 finding(s)
  Network IOCs:             3 finding(s)
  Kubernetes IOCs:          3 finding(s)
  ──────────────────────────────────────
  TOTAL: 13 finding(s)

┌─────────────────────────────────────────────────────────────────┐
│  !! COMPROMISED VERSION (1.82.7 or 1.82.8) DETECTED !!         │
│                                                                 │
│  This system may have been backdoored by TeamPCP.               │
│  IMMEDIATE ACTIONS REQUIRED:                                    │
│                                                                 │
│  1. Rotate ALL credentials on this machine:                     │
│     - SSH keys (~/.ssh/*)                                       │
│     - AWS/GCP/Azure credentials                                 │
│     - Kubernetes tokens & configs                               │
│     - Docker registry credentials                               │
│     - API keys, database passwords                              │
│     - Cryptocurrency wallet keys                                │
│                                                                 │
│  2. Check for persistence:                                      │
│     - ~/.config/sysmon/sysmon.py                                │
│     - ~/.config/systemd/user/sysmon.service                     │
│     - litellm_init.pth in site-packages                         │
│                                                                 │
│  3. Audit cloud services for unauthorized access                │
│  4. Check k8s for node-setup-* pods in kube-system              │
│  5. Consider rebuilding from a clean environment                │
│                                                                 │
│  Ref: snyk.io/articles/poisoned-security-scanner-backdooring-litellm/  │
└─────────────────────────────────────────────────────────────────┘

── Removal ──

IOC artifacts found (8):
  [1] [ioc-persist] BACKDOOR ARTIFACT: /home/deploy/.config/sysmon/sysmon.py
  [2] [ioc-persist] BACKDOOR ARTIFACT: /home/deploy/.config/systemd/user/sysmon.service
  [3] [ioc-persist] sysmon.service is ACTIVELY RUNNING
  [4] [ioc-pth] MALICIOUS .pth FILE: /srv/apps/inference/.venv/.../litellm_init.pth
  [5] [ioc-network] ACTIVE CONNECTION to C2 domain: models.litellm.cloud
  [6] [ioc-k8s] MALICIOUS POD in kube-system: pod/node-setup-gke-prod-01
  [7] [ioc-k8s] MALICIOUS POD in kube-system: pod/node-setup-gke-prod-02
  [8] [ioc-k8s] PRIVILEGED malicious pod: node-setup-gke-prod-01

Removable IOC artifacts:
  /home/deploy/.config/sysmon/sysmon.py
  /home/deploy/.config/systemd/user/sysmon.service
  sysmon.service (systemd user service)
  /srv/apps/inference/.venv/.../litellm_init.pth

Remove IOC artifacts? (y/n) y
  Removing: /home/deploy/.config/sysmon/sysmon.py
  Removed
  Stopping and disabling sysmon.service...
  Stopped + disabled + removed
  Removing: /srv/apps/inference/.venv/.../litellm_init.pth
  Removed
  [ioc-network] ACTIVE CONNECTION to C2 domain — requires manual remediation
  [ioc-k8s] MALICIOUS POD in kube-system: pod/node-setup-gke-prod-01 — requires manual remediation
  [ioc-k8s] MALICIOUS POD in kube-system: pod/node-setup-gke-prod-02 — requires manual remediation

Removable packages:
  [1] [venv] /srv/apps/inference/.venv — litellm 1.82.8

Options:
  a — Remove all packages
  1,3,5 — Remove specific items (comma-separated)
  s — Skip (do nothing)

Choice: a

Removing: [venv] /srv/apps/inference/.venv — litellm 1.82.8
  Action: /srv/apps/inference/.venv/bin/pip uninstall -y litellm
  Confirm? (y/n) y
  Successfully uninstalled litellm-1.82.8
  Removed

── Done ──
Review source code references manually and remove litellm from your dependency files.

REMINDER: Compromised version was detected. Credential rotation is CRITICAL.
Do NOT just uninstall — assume all secrets on this machine are compromised.
Rebuild from a clean environment after rotating all credentials.
Report: litellm-sweep-report-2026-03-27.txt

Notice how the Kubernetes IOCs and network connections are flagged but marked as "requires manual remediation." The script won't auto-delete pods or kill network connections because those actions need human judgment. You might be running the scan on a jump box, and blindly killing connections could make the situation worse.

Example 4: Scanning Only Specific Paths

If you manage multiple services and just want to check one deployment directory:

$ ./litellm-sweep.sh --only /srv/apps/ml-service /srv/apps/inference

litellm-sweep — scanning for all traces of litellm
Report will be saved to: litellm-sweep-report-2026-03-27.txt
Mode: --only (scanning specified paths only, skipping env scans)
  Target: /srv/apps/ml-service
  Target: /srv/apps/inference

── Phase 7: Source code references ──
  Scanning /srv/apps/ml-service ...
  CLEAN  No source code references found
  Scanning /srv/apps/inference ...
  FOUND  /srv/apps/inference/requirements.txt:8:litellm==1.82.6
  FOUND  /srv/apps/inference/src/llm_router.py:1:from litellm import completion

── Phase 8: Persistence artifacts (TeamPCP backdoor) ──
  CLEAN  No persistence artifacts found

── Phase 9: Network IOCs (C2 domains) ──
  CLEAN  No network IOCs found

── Phase 10: Kubernetes IOCs ──
  SKIP   kubectl not available (skip k8s IOC check)

══════════════════════════════════════════
  SCAN SUMMARY
══════════════════════════════════════════
  Source references:        2 finding(s)
  Persistence artifacts:    0 finding(s)
  Network IOCs:             0 finding(s)
  Kubernetes IOCs:          0 finding(s)
  ──────────────────────────────────────
  TOTAL: 2 finding(s)

With --only, environment scans (pyenv, venvs, conda, etc.) are completely skipped. Only the source code scan runs against your specified paths, plus the IOC phases, which always run. This is useful when you're sweeping a shared server and only care about specific deployment directories.

The Bigger Picture

This attack worked because of a chain of trust that most teams never audit:

[Diagram: the chain of trust]

Git tags are mutable. Anyone with write access to a repo can point a tag at a different commit. If your CI/CD references an action by tag (like @v1), you're trusting that the tag still points to the code you reviewed. In this case, it didn't.
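
You can watch a tag move in a throwaway repo (all names below are made up for the demo):

```shell
# Create a repo, tag a commit, then silently re-point the tag at new code.
git init -q /tmp/tag-demo
git -C /tmp/tag-demo -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "reviewed release"
git -C /tmp/tag-demo tag v1
git -C /tmp/tag-demo -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "something else entirely"
git -C /tmp/tag-demo tag -f v1   # v1 now points at the new commit
git -C /tmp/tag-demo rev-parse 'v1^{commit}' HEAD   # both hashes match
```

To pin instead, resolve the tag once with `git ls-remote <repo-url> refs/tags/<tag>` and reference that full 40-character SHA in your workflow's `uses:` line.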

Pin your GitHub Actions to full commit SHAs, not tags. It's one line of config that would have prevented this entire attack chain.

Get the Script

The full script is on GitHub. Single file, no dependencies beyond bash and the standard tools already on your system:

curl -O https://gist.githubusercontent.com/RitvikDayal/18d35fe1d51b49ecf5b90c6f262a8c9d/raw/litellm-sweep.sh
chmod +x litellm-sweep.sh
./litellm-sweep.sh

Full usage docs and IOC reference: litellm-sweep on GitHub Gist

If you're running Kubernetes workloads that might have pulled litellm, run it on those nodes too. The script checks for the k8s IOCs when kubectl is available.

And if it finds a compromised version: do not just uninstall and move on. Rotate every credential on that machine. Every single one. The exfiltration happens fast, and the payload encrypted the data before sending it, which suggests it was built to be collected now and processed later. Your credentials may not have been used yet, but assume they will be.

Stay safe out there.
