LiteLLM Got Supply-Chain Attacked — Here's Why I Use a Single-Provider Claude API Instead
You've probably seen the news: LiteLLM, one of the most popular Python packages for routing AI requests across multiple providers, was compromised in a supply-chain attack.
The community is alarmed. 574 upvotes on Hacker News. 234 comments. Developers who depend on LiteLLM for production workloads are scrambling.
This is the moment to think critically about how you architect your AI API layer.
What happened with LiteLLM
LiteLLM is a brilliant piece of software — it lets you call 100+ LLM APIs using the OpenAI format. But that breadth comes with risk: a single compromised package can exfiltrate your API keys for every provider you've connected.
OpenAI key. Anthropic key. Gemini key. Cohere key. All of them, sitting in one package.
A supply-chain attack doesn't need to break your code. It just needs to read your environment variables.
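To make that concrete, here is an illustrative sketch of why environment variables are the weak point — any code running inside your process can enumerate them. This is hypothetical code, not the actual payload from the attack, and the marker strings are assumptions:

```python
# Illustrative only: how trivially a compromised dependency could
# harvest API keys from the environment. Not the real attack code.
import os

def find_api_keys(environ=os.environ):
    """Return environment variables whose names look like API credentials."""
    markers = ("API_KEY", "_TOKEN", "_SECRET")  # assumed naming patterns
    return {
        name: value
        for name, value in environ.items()
        if any(m in name.upper() for m in markers)
    }
```

Three lines of logic, no exploit required — which is why the number of keys in that environment matters so much.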
The complexity-security tradeoff
Here's the architectural truth that this incident surfaces:
More providers = more attack surface.
When you route through a multi-provider proxy:
- You have API keys for 5-10 different services
- Each key has different revocation complexity
- A single dependency compromise exposes all of them
- Rotating keys means updating configs in multiple places
When you use a single-provider proxy:
- One API key, one provider
- One thing to rotate if compromised
- Minimal attack surface
- You know exactly what your proxy does
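The contrast is easiest to see in the environment itself. Variable names below are illustrative, not taken from any particular deployment:

```shell
# Multi-provider proxy: one compromised dependency exposes all of these.
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GEMINI_API_KEY=...
export COHERE_API_KEY=...

# Single-provider proxy: one key, one thing to rotate.
export SIMPLYLOUIE_API_KEY=...
```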
What I built instead
I built SimplyLouie — a single-provider Claude API proxy. That's it. It does one thing: give you Claude access at $2/month instead of the full Anthropic API pricing.
It's not trying to be LiteLLM. It doesn't route to 100 providers. It doesn't have a 50,000-line dependency tree.
Here's the entire API surface:
curl https://simplylouie.com/api/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum entanglement simply"}'
Response:
{
  "response": "Quantum entanglement is when two particles become linked...",
  "model": "claude-3-5-sonnet"
}
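Calling it from Python takes only the standard library. This is a sketch that assumes exactly the request and response shapes shown above — the `build_request` and `ask_claude` names are mine, not part of the API:

```python
# Minimal stdlib client for the endpoint above (assumes the curl
# example's request/response shapes; function names are illustrative).
import json
import urllib.request

API_URL = "https://simplylouie.com/api/chat"

def build_request(message: str, api_key: str) -> urllib.request.Request:
    """Build the POST request; separate from sending so it can be inspected."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def ask_claude(message: str, api_key: str) -> str:
    """Send the request and return the 'response' field of the JSON reply."""
    with urllib.request.urlopen(build_request(message, api_key)) as resp:
        return json.load(resp)["response"]
```

Note the dependency count: zero third-party packages, which is the whole point of the article.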
The cost argument
LiteLLM's value proposition is partly cost arbitrage — route to the cheapest model for each task. But if you're primarily using Claude (and many developers are, especially post-Claude 3.5), you don't need multi-provider routing.
And if you're spending $20/month on Claude API access, consider: SimplyLouie is $2/month for the same Claude 3.5 Sonnet access.
For solo developers and small teams, the math is simple:
- LiteLLM (self-hosted) + Anthropic API: $20-50+/month depending on usage
- SimplyLouie: $2/month flat
The boring architecture wins
The LiteLLM attack is a reminder that the boring, simple architecture often wins on security.
- One provider
- One key
- One dependency to audit
- One thing that can go wrong
This isn't anti-LiteLLM. For teams that genuinely need multi-provider routing, LiteLLM is excellent software. But if you're a solo developer who just needs Claude access, you don't need the complexity.
Stay safe out there. Rotate your API keys. And maybe simplify your AI stack.
SimplyLouie is a $2/month Claude API proxy. 50% of revenue goes to animal rescue. Try it free for 7 days.