I've been working on RiftAI — an OpenAI-compatible API gateway that gives developers access to 48+ AI models through a single endpoint.
The problem
If you work with multiple AI providers (OpenAI, Anthropic, Google, etc.), you end up managing multiple API keys, different SDKs, and separate billing. It gets messy fast.
The solution
RiftAI provides one API key that works with all major models: Claude, GPT-5, Gemini, DeepSeek, Qwen, GLM, Llama and more. The endpoint is fully OpenAI-compatible, so any existing code that works with OpenAI works with RiftAI — just change the base URL.
Credit-based pricing
Instead of per-token billing, RiftAI uses a daily credit pool. Each model has a cost multiplier based on how resource-heavy it is:
- DeepSeek V3: ×1 (very cheap)
- Gemini 2.5 Flash: ×2
- GPT-5: ×8
- Claude Sonnet 4.6: ×25
- Claude Opus 4.7: ×60 (premium)
Formula: credits = (input_tokens × multiplier) + (output_tokens × multiplier × 3). In other words, output tokens are weighted three times as heavily as input tokens.
This means you can use cheap models for simple tasks and save credits for premium models when you actually need them.
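As a sketch, the formula above is a few lines of Python. The multipliers come from the list above; the model ID strings are my own guesses except gemini-2.5-flash, which appears in the quick-start example:

```python
# Cost multipliers from the pricing list above.
# Model IDs other than "gemini-2.5-flash" are assumed, not confirmed.
MULTIPLIERS = {
    "deepseek-v3": 1,
    "gemini-2.5-flash": 2,
    "gpt-5": 8,
    "claude-sonnet-4.6": 25,
    "claude-opus-4.7": 60,
}

def credits_used(model: str, input_tokens: int, output_tokens: int) -> int:
    """credits = (input_tokens × multiplier) + (output_tokens × multiplier × 3)."""
    m = MULTIPLIERS[model]
    return input_tokens * m + output_tokens * m * 3
```

For example, a request with 1,000 input and 500 output tokens costs 2,500 credits on DeepSeek V3 but 150,000 on Claude Opus 4.7, which is exactly why routing simple tasks to cheap models pays off.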
Quick start
curl https://riftai.su/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.5-flash",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
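The same request in Python, using only the standard library. This is a minimal sketch: the base URL and model name come from the curl example above, and the API key is a placeholder:

```python
import json
import urllib.request

BASE_URL = "https://riftai.su/v1"  # from the curl example above

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build the same POST request as the curl quick-start."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is one call: urllib.request.urlopen(req) returns the JSON response.
req = build_chat_request("YOUR_API_KEY", "gemini-2.5-flash",
                         [{"role": "user", "content": "Hello!"}])
```

Because the endpoint is OpenAI-compatible, the official openai Python client should also work by pointing its base_url at https://riftai.su/v1 and passing a RiftAI key as api_key.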
Plans
- Free: 100K credits/day, no card required
- Paid: from $4.99 to $149.99/mo with up to 100M credits/day
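To make the free tier concrete, here is a rough back-of-envelope calculation. The 1,000-input / 500-output request shape is an arbitrary assumption; the multipliers come from the pricing list above:

```python
# How many requests of a fixed shape fit into the 100K daily free credits?
DAILY_CREDITS = 100_000
INPUT_TOKENS, OUTPUT_TOKENS = 1_000, 500  # hypothetical request shape

def requests_per_day(multiplier: int) -> int:
    cost = INPUT_TOKENS * multiplier + OUTPUT_TOKENS * multiplier * 3
    return DAILY_CREDITS // cost

for name, m in [("DeepSeek V3", 1), ("Gemini 2.5 Flash", 2),
                ("GPT-5", 8), ("Claude Sonnet 4.6", 25)]:
    # DeepSeek V3: 40, Gemini 2.5 Flash: 20, GPT-5: 5, Claude Sonnet 4.6: 1
    print(f"{name}: {requests_per_day(m)} requests/day")
```

Under these assumptions the free tier covers about 40 such requests a day on DeepSeek V3 but only one on Claude Sonnet 4.6.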
There's a credit calculator on the pricing page so you can estimate costs: riftai.su/pricing
Would love to hear feedback from the dev community.