DEV Community

Not Elon


A Single pip install Just Compromised Thousands of AI Developers. Vibe Coding Made It Worse.

Yesterday, litellm versions 1.82.7 and 1.82.8 were published on PyPI with a three-stage backdoor.

If you ran pip install litellm or had it as a dependency anywhere in your stack, a threat actor called TeamPCP may now have your SSH keys, AWS credentials, GCP tokens, Azure secrets, Kubernetes configs, crypto wallets, and database passwords.

litellm has 97 million downloads per month. The malicious code executed at import time. No user interaction needed.

Here's what happened. And here's why vibe coding makes this kind of attack exponentially more dangerous.

What TeamPCP Actually Did

TeamPCP compromised litellm through its own CI/CD pipeline. litellm used Trivy (a security scanner) in its build process, and TeamPCP had already compromised Trivy's GitHub Action. Through that compromised action, they obtained litellm's PyPI credentials and pushed backdoored versions.

The payload was three stages:

  1. Credential harvester: swept SSH keys, cloud credentials (AWS, GCP, Azure), Kubernetes secrets, cryptocurrency wallets, .env files, and database passwords
  2. Kubernetes lateral movement toolkit: deployed privileged pods to every node in your cluster
  3. Persistent backdoor: installed a systemd service (sysmon.service) polling checkmarx[.]zone/raw for additional binaries

All of this triggered at module import. One line in your code: from litellm.proxy.proxy_server import ... and you're compromised.
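To see why "triggered at module import" is so dangerous, here's a benign, self-contained sketch: Python runs every top-level statement in a module the moment it is imported, so a payload needs no function call and no user interaction. The evil_demo module name and its "payload" are invented for illustration; this is not the actual litellm backdoor code.

```python
# Benign demo: top-level statements execute at import time.
import os
import sys
import tempfile
import textwrap

# Write a throwaway module whose top level "phones home"
# (here it just appends to a list -- a stand-in for a harvester).
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "evil_demo.py"), "w") as f:
    f.write(textwrap.dedent("""
        executed = []
        # Top-level code: runs at import, no call required.
        executed.append("payload ran at import time")
    """))

sys.path.insert(0, tmpdir)
import evil_demo  # the "payload" fires right here, on this line

print(evil_demo.executed[0])  # -> payload ran at import time
```

The same mechanism is why a single `from litellm.proxy.proxy_server import ...` line was enough: importing the package is executing it.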

How It Was Discovered

A developer at futuresearch.ai noticed the problem when an MCP plugin inside Cursor pulled litellm as a transitive dependency and crashed their machine. They used Claude Code to help root-cause it.

Read that again. An MCP plugin inside a vibe coding tool pulled a compromised package as a transitive dependency. The developer didn't choose litellm. Didn't know it was there. An AI agent's plugin chain installed it automatically.

Karpathy posted about it. 11 million views in 8 hours.

Why Vibe Coding Makes This Worse

When a human developer runs pip install litellm, they make a conscious choice. They evaluate the package. They know it's in their dependency tree.

When an AI agent installs packages, the human often doesn't know what got installed. The LLM decides. The MCP plugin chain decides. The vibe coder says "build me an API gateway that works with multiple LLM providers" and the AI pulls in litellm because it's the obvious choice.

The data backs this up:

  • Escape.tech scanned 5,600 vibe-coded apps: 2,000+ vulnerabilities, 400 exposed secrets
  • McAfee Labs found 443 malicious files using vibe-coded malware in January 2026
  • Tenzai tested 15 apps across 5 AI coding tools: 69 vulnerabilities, zero had security headers
  • ShipSafe scanned 100 AI repos: 67% had critical vulnerabilities, 45% hardcoded secrets

Two real-world vibe coding data breaches are already documented:

  1. Baudr: Social network built with AI for 40 euros, hacked within hours. Admin panel at /admin wide open.
  2. Moltbook: 1.5 million auth tokens and 35,000 emails exposed. Builder said they "didn't write one line of code."

The Supply Chain Is the New Attack Surface

Traditional supply chain attacks target developers who choose dependencies. Vibe coding supply chain attacks target AI agents that choose dependencies for you.

The UK's National Cyber Security Centre (NCSC) CEO warned about exactly this at RSA Conference yesterday. The NCSC CTO published a blog literally titled "Vibe Check" arguing that while AI-generated code "currently poses intolerable risks for many organizations," the business benefits will drive adoption anyway.

Their recommendation: integrate "secure by default" coding into AI tools and adopt a "trust but verify" approach.

But here's the gap: who verifies? The whole point of vibe coding is that non-technical users build software without understanding the code. They can't verify supply chain integrity. They can't audit transitive dependencies. They don't know what litellm.proxy.proxy_server does.

What To Do Right Now

If you used litellm recently:

  1. Check your version: pip show litellm. If 1.82.7 or 1.82.8, you're compromised
  2. Rotate ALL credentials: SSH keys, cloud provider tokens, database passwords, API keys
  3. Check for sysmon.service: systemctl status sysmon.service. If it exists, you have a persistent backdoor
  4. Audit Kubernetes: look for unexpected privileged pods
  5. Pin your dependencies: never use >= or ^ for production dependencies
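Step 1 can be automated. Here's a minimal sketch that checks the installed litellm version against the two backdoored releases named above, using only the standard library (the function names are mine, for illustration):

```python
# Sketch: flag the two backdoored litellm releases from this incident.
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"1.82.7", "1.82.8"}

def is_compromised(installed: str) -> bool:
    """True if the version string is a known-backdoored release."""
    return installed in COMPROMISED

def check_litellm() -> str:
    try:
        v = version("litellm")
    except PackageNotFoundError:
        return "litellm not installed"
    if is_compromised(v):
        return f"litellm {v}: COMPROMISED -- rotate all credentials now"
    return f"litellm {v}: not a known-bad version"

print(check_litellm())
```

A clean result here only means this one check passed; it says nothing about the sysmon.service backdoor or Kubernetes pods, so run steps 2 through 4 regardless.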

If you're vibe coding:

  1. Scan your repos before deploying. 15+ free security scanners now exist for vibe-coded apps
  2. Review your dependency tree: pip list or npm ls. Know what's installed
  3. Don't trust AI-generated requirements.txt blindly
  4. Use lockfiles: pip freeze > requirements.txt with exact versions
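Steps 3 and 4 can be partially automated too. This sketch flags unpinned entries in an AI-generated requirements list so you can review them before installing. The parsing is deliberately simplified (real requirement syntax also allows extras, markers, and URLs), and the unpinned function name is mine:

```python
# Sketch: flag requirements that are not pinned to an exact version.
import re

def unpinned(lines):
    """Return entries that lack a single exact '==' pin."""
    flagged = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments/blanks
        if not line:
            continue
        if not re.fullmatch(r"[A-Za-z0-9._-]+==[A-Za-z0-9._+!]+", line):
            flagged.append(line)
    return flagged

reqs = ["litellm>=1.80", "requests==2.32.3", "numpy", "# comment"]
print(unpinned(reqs))  # -> ['litellm>=1.80', 'numpy']
```

Anything this flags is a place where a future pip install could silently resolve to a different (possibly backdoored) version than the one you tested.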

The Bigger Picture

Two months ago, zero vibe coding security scanners existed. Now there are 15+. The NCSC is publishing guidance. Major publications (Forbes, Bloomberg Law, Economic Times) are covering the risk.

The litellm incident isn't an outlier. It's a preview. As AI agents increasingly manage dependency installation, supply chain attacks become more attractive and harder to detect.

We built VibeCheck specifically for this. Free. No signup. Scans both source code and live sites for exposed secrets, missing auth, and security misconfigurations. It won't catch every supply chain attack, but it'll catch the 60% of issues we found when scanning random vibe-coded repos on GitHub.

The full data is in our State of Vibe Coding Security 2026 report. Every stat sourced. Every breach documented.


Data sources: Escape.tech, Tenzai, CodeRabbit, Kaspersky, Veracode, McAfee Labs, Wiz, Sonar, ShipSafe, UK NCSC, Snyk, Endor Labs, JFrog, The Hacker News, futuresearch.ai
