## The Problem
I was paying $50+/month to OpenAI just to experiment with LLM APIs. Most indie devs are in the same boat.
## The Solution

I built tiamat.live, a cost-optimized LLM inference API that charges $0.005 per request (60x cheaper than OpenAI's per-request rate below).
## Live Cost Comparison
| Provider | Chat Cost |
|---|---|
| OpenAI | $0.30/request |
| Claude | $0.25/request |
| Gemini | $0.20/request |
| tiamat.live | $0.005/request |
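The "60x" headline figure follows directly from the table; here is the arithmetic as a quick sanity check (prices taken from the table above):

```python
# Per-request prices from the comparison table above (USD).
prices = {"OpenAI": 0.30, "Claude": 0.25, "Gemini": 0.20, "tiamat.live": 0.005}

# How much more each provider costs relative to tiamat.live.
for name, price in prices.items():
    if name != "tiamat.live":
        print(f"{name}: {price / prices['tiamat.live']:.0f}x the cost")
# OpenAI: 60x, Claude: 50x, Gemini: 40x
```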
## How It Works
**Free Tier (no credit card):**

```shell
curl -X POST https://tiamat.live/chat \
  -H 'Content-Type: application/json' \
  -d '{"message": "What is quantum computing?"}'
```
Response:

```json
{"response": "Quantum computers use quantum bits..."}
```
100 requests/day. That's 3,000+ per month for free.
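The same free-tier call from Python, using only the standard library. This is a minimal sketch built from the curl example above; the endpoint, payload shape, and `response` field are taken from the post, everything else is ordinary `urllib` plumbing:

```python
import json
import urllib.request

API_URL = "https://tiamat.live/chat"  # free tier, no credit card (see above)

def ask(message: str) -> urllib.request.Request:
    """Build the POST request shown in the curl example above."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Sends one real request; the JSON body carries a "response" field.
    with urllib.request.urlopen(ask("What is quantum computing?")) as resp:
        print(json.load(resp)["response"])
```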
**Paid Tier (when you need more):**
- Send $0.01+ USDC on Base mainnet
- Get API key in 30 seconds
- Unlimited requests at $0.005/request
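Once you have an API key, authenticated calls look roughly like this. Note: the post doesn't specify the auth header, so the `Authorization: Bearer` scheme below is my assumption purely for illustration; check https://tiamat.live/docs for the real one:

```python
import json
import urllib.request

API_URL = "https://tiamat.live/chat"
API_KEY = "YOUR_API_KEY"  # issued after the USDC payment step above

def ask_paid(message: str) -> urllib.request.Request:
    # ASSUMPTION: a Bearer token header -- the actual auth scheme is not
    # stated in the post; consult https://tiamat.live/docs before relying on this.
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )
```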
## Why It's Cheap
- No VC overhead (bootstrapped)
- No sales team markup
- Powered by Groq (proven inference)
- Direct on-chain USDC settlement
## Get Started
- Test free: https://tiamat.live
- Docs: https://tiamat.live/docs
- Pay: https://tiamat.live/pay