# Telnyx got supply-chain attacked on PyPI — here's why I use HTTP instead of SDKs
Another week, another PyPI supply-chain attack.
This time it's Telnyx — a popular telecom/messaging SDK that developers trust for production SMS, voice, and communication infrastructure. The compromised package appeared on PyPI, meaning any `pip install telnyx` could have pulled malicious code.
This follows the LiteLLM attack two weeks ago. The pattern is clear: complex Python SDKs are high-value targets because they run in privileged environments, have many dependencies, and developers rarely audit them.
## The SDK trust problem
Every time you pip install an AI or telecom SDK, you're trusting:
- The package maintainer's security practices
- Every dependency they pull in
- The PyPI infrastructure itself
- The CI/CD pipeline that built the package
That's a lot of trust for a pip install.
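To make that trust concrete, here's a minimal sketch of counting what a single install actually pulls in. The package names and the dependency graph below are made up for illustration — in a real audit you'd build the graph from `importlib.metadata.requires()` or `pip show`, not hard-code it.

```python
# Toy dependency graph: package -> direct dependencies.
# These names and edges are illustrative, not real metadata.
DEPS = {
    "some-sdk": ["httpx", "pydantic", "tenacity"],
    "httpx": ["certifi", "httpcore"],
    "httpcore": ["h11"],
    "pydantic": ["typing-extensions"],
    "tenacity": [],
    "certifi": [],
    "h11": [],
    "typing-extensions": [],
}

def transitive_deps(pkg, graph):
    """Return every package pulled in, directly or indirectly, by pkg."""
    seen = set()
    stack = [pkg]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(transitive_deps("some-sdk", DEPS)))
# One "simple" install, seven packages you now trust.
```

Every package in that set is a party you're trusting — and this toy graph is tiny compared to a real SDK's.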
## My solution: HTTP calls, not SDKs
For my AI project SimplyLouie, I made a deliberate choice: no AI SDKs. Instead, I use plain HTTP.
Here's the entire integration:
```shell
curl https://simplylouie.com/api/chat \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```
That's it. No pip install. No SDK. No dependency chain to audit.
## Why this matters for security
Attack surface comparison:
| Approach | Dependencies | Attack surface |
|---|---|---|
| `pip install litellm` | 47+ packages | Massive |
| `pip install anthropic` | 12+ packages | Medium |
| Plain HTTP | 0 packages | Minimal |
When you use HTTP directly:
- Zero transitive dependencies
- Zero PyPI trust required
- Zero SDK update maintenance
- Full visibility into every byte sent
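If you want literally zero third-party packages, even `requests` is optional — Python's standard library can make the same call. A sketch using only the stdlib (the URL and `YOUR_KEY` placeholder come from the example above; nothing else is assumed):

```python
import json
import urllib.request

# Build the exact request by hand -- every byte that will be sent is visible here.
payload = json.dumps(
    {"messages": [{"role": "user", "content": "Hello!"}]}
).encode("utf-8")

req = urllib.request.Request(
    "https://simplylouie.com/api/chat",
    data=payload,
    headers={
        "Authorization": "Bearer YOUR_KEY",
        "Content-Type": "application/json",
    },
)

# Sending it is one line (commented out so this sketch makes no network call):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["content"])
```

Zero `pip install`s, and the full request — URL, headers, body — is spelled out in your own code.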
## The hidden cost of SDK complexity
The LiteLLM attack and the Telnyx attack share a common root cause: complexity creates attack surface.
LiteLLM supports 100+ LLM providers. That's 100+ code paths, 100+ authentication flows, 100+ potential vulnerabilities. A malicious actor who compromises one SDK compromises every project using it.
## What I built instead
SimplyLouie is a single-endpoint Claude API proxy:
```python
import requests

response = requests.post(
    'https://simplylouie.com/api/chat',
    headers={'Authorization': 'Bearer YOUR_KEY'},
    json={'messages': [{'role': 'user', 'content': 'Hello!'}]}
)
print(response.json()['content'])
```
One endpoint. One dependency (`requests`, which you already have). $2/month.
No SDK to get supply-chain attacked. No PyPI package to monitor. No complex dependency graph.
## The price comparison
- ChatGPT Plus: $20/month
- Anthropic API (direct): Pay per token, $15-75/million tokens
- SimplyLouie: $2/month, unlimited messages
For developers who just want to ship features without worrying about SDK supply-chain attacks, this is the simplest path.
## Try it
7-day free trial, no commitment: simplylouie.com
```shell
# Test it right now:
curl https://simplylouie.com/api/chat \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is a supply-chain attack?"}]}'
```
At every check-in I ask myself: is the security model simple enough that I'd trust it with production traffic? HTTP + single endpoint = yes. Complex SDK dependency tree = no.
50% of SimplyLouie revenue goes to animal rescue. Simple mission, simple tech, simple price.