Devansh

Posted on • Originally published at devanshtiwari.com

I built an OpenAI-compatible gateway that routes across 5 free LLM providers

Every LLM provider has a free tier.

Groq gives you 30 requests per minute. Gemini gives you 15. Cerebras gives you 30. Mistral gives you 5.

Combined, that's about 80 requests per minute. Enough for prototyping, internal tools, and side projects where you don't want to pay for API access yet.

The problem: each provider has its own SDK, its own rate limits, its own auth, and its own downtime. You end up writing provider-switching logic, catching 429 errors, and managing API keys across five different dashboards.

I got tired of this while building Metis, an AI stock analysis tool. I kept hitting Groq's limits while Gemini had capacity sitting idle. So I built FreeLLM.

What FreeLLM does

One endpoint. Five providers. Twenty models. All free.

curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "free-fast", "messages": [{"role": "user", "content": "Hello!"}]}'

Your existing OpenAI SDK code works. Just change the base URL. That's the whole migration.

How the routing works

When a request comes in, FreeLLM:

  1. Checks which providers are healthy (circuit breakers track this automatically)
  2. Picks the best available provider based on your model choice
  3. If that provider returns a 429 or fails, it tries the next one
  4. You get a response

Three meta-models handle routing:

free-fast   → lowest latency (usually Groq or Cerebras)
free-smart  → most capable model (usually Gemini 2.5)
free        → maximum availability across all providers
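The failover loop in the steps above can be sketched in a few lines. This is an illustration, not FreeLLM's actual code: the `Provider` shape and function names here are mine.

```typescript
// Minimal failover sketch (illustrative; not FreeLLM's real routing code).
// A "provider" here is just a name, a health flag maintained elsewhere
// (by the circuit breaker), and an async call that may throw on 429/5xx.
type Provider = {
  name: string;
  healthy: boolean;
  call: (body: unknown) => Promise<string>;
};

async function routeWithFailover(
  providers: Provider[],
  body: unknown
): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    if (!p.healthy) continue; // skip providers pulled from rotation
    try {
      return await p.call(body); // first success wins
    } catch (err) {
      errors.push(`${p.name}: ${err}`); // rate-limited or down: try the next one
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}
```

The `providers` array is assumed to be pre-sorted by the meta-model's preference (latency for `free-fast`, capability for `free-smart`), so "try the next one" naturally walks down the preference order.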

Providers and their free tiers

Provider   Models                                     Free Tier
Groq       Llama 3.3 70B, Llama 4 Scout, Qwen3 32B    ~30 req/min
Gemini     2.5 Flash, 2.5 Pro, 2.0 Flash              ~15 req/min
Cerebras   Llama 3.1 8B, Qwen3 235B, GPT-OSS 120B     ~30 req/min
Mistral    Small, Medium, Nemo                        ~5 req/min
Ollama     Any local model                            Unlimited

What's under the hood

This isn't a simple round-robin proxy. The routing layer handles real production concerns:

Sliding-window rate limiter. Each provider's limits are tracked independently. FreeLLM knows how many requests you've sent to Groq in the last 60 seconds and won't send another if you're near the cap.
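The core of a sliding-window limiter fits in a small class. A sketch, assuming per-provider tracking of raw timestamps (the class and method names are mine, not FreeLLM's):

```typescript
// Sliding-window rate limiter sketch (illustrative; FreeLLM's internals differ).
// Keeps the timestamps of recent requests per provider and refuses a new
// request once the window is full.
class SlidingWindowLimiter {
  private timestamps = new Map<string, number[]>();

  constructor(
    private limit: number,    // max requests per window, e.g. 30 for Groq
    private windowMs: number  // window length, e.g. 60_000 for 60 seconds
  ) {}

  tryAcquire(provider: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.timestamps.get(provider) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.timestamps.set(provider, recent);
      return false; // at the cap: the router should pick another provider
    }
    recent.push(now);
    this.timestamps.set(provider, recent);
    return true;
  }
}
```

Each provider gets its own timestamp list, so Groq hitting its cap never blocks a request from going to Gemini.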

Circuit breakers. If Gemini starts returning 500s, FreeLLM pulls it from rotation. Every 30 seconds, it sends a test request. When the provider recovers, it goes back in.
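The state machine behind that is small. A sketch under simplifying assumptions: this passive variant allows a probe on the next incoming request after the cooldown, instead of actively sending a background test request every 30 seconds as the post describes, and all names are mine:

```typescript
// Circuit-breaker sketch (illustrative; not FreeLLM's actual implementation).
// "open" = provider pulled from rotation; after the cooldown, requests are
// allowed through as probes until one succeeds and closes the breaker.
class CircuitBreaker {
  private state: "closed" | "open" = "closed";
  private openedAt = 0;
  private failures = 0;

  constructor(
    private failureThreshold: number, // consecutive failures before opening
    private cooldownMs: number        // e.g. 30_000 for a 30-second cooldown
  ) {}

  allowRequest(now: number = Date.now()): boolean {
    if (this.state === "closed") return true;
    // Open: only let a request through once the cooldown has elapsed.
    return now - this.openedAt >= this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.state = "closed"; // provider recovered: back into rotation
  }

  recordFailure(now: number = Date.now()): void {
    this.failures++;
    if (this.failures >= this.failureThreshold) {
      this.state = "open";
      this.openedAt = now; // a failed probe restarts the cooldown
    }
  }
}
```

The router checks `allowRequest()` before trying a provider and feeds every outcome back via `recordSuccess()` or `recordFailure()`.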

Per-client rate limiting. If you expose this to a team, each client gets their own limit. Admin auth protects the config endpoints.

Zod validation. Every request is validated before it hits any provider. Bad payloads fail fast with clear error messages.

Real-time dashboard. React frontend showing provider health, request logs, and latency. You can see which providers are healthy at a glance.

Get it running in 30 seconds

git clone https://github.com/devansh-365/freellm.git
cd freellm
cp .env.example .env   # add your free API keys
docker compose up

API on localhost:3000. Dashboard on localhost:3000/dashboard. Done.

Using it with the OpenAI SDK

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000/v1",
  apiKey: "not-needed",
});

const response = await client.chat.completions.create({
  model: "free-fast",
  messages: [{ role: "user", content: "Explain circuit breakers in 2 sentences" }],
});

No new SDK to learn. No migration effort.

Why I built this

I was building Metis and kept running into the same pattern: burn through Groq's free tier in 20 minutes of testing, switch to Gemini manually, hit their limit, switch to Mistral. Repeat.

Wrote a quick proxy to automate the switching. Added failover because providers go down randomly. Added circuit breakers because I didn't want to wait for timeouts. Added a dashboard because I wanted to see what was happening.

It grew into a proper tool. Open-sourced it because every developer prototyping with LLMs has this exact problem.

Stack

TypeScript, Express 5, React 19, Zod, Docker. MIT licensed.

GitHub: github.com/devansh-365/freellm

Top comments (15)

KamalMostafa

Thanks for sharing. How do you create an API key for your proxy? Idea for improvement: add a feature to use multiple API keys per provider. This would dramatically increase the limits per user, at least for development.

Devansh

Thanks, Kamal, for your suggestion. I have now added support for multiple API keys per provider as well.

KamalMostafa

Thanks. How do I create the API key?

Devansh

The API key is optional. Also, you need to self-host this to use it.

KamalMostafa

Yes, I know I need to self-host it, and that it's optional. But I couldn't find any reference in your docs, which is why I'm asking.

Devansh

There's no generator for it. You just invent a random string yourself and set it as FREELLM_API_KEY in your env.

Quickest ways to make one:

openssl rand -hex 32
or
uuidgen
or in Node:
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

Take the output, drop it into your .env as FREELLM_API_KEY=... (or into Railway/Render's env settings), and send it as an Authorization: Bearer header on your requests.

No signup, no account needed. It's just a password you pick so strangers on the internet can't hit your gateway and burn through your Groq/Gemini quota.

KamalMostafa

Okay, I thought there was an API for the generation/storage flow. Thanks.

Devansh

Thanks! If you found this helpful, can you please also give it a star?

github.com/Devansh-365/freellm

KamalMostafa

yes sure Devansh

Nicolas Fränkel

It's good as a learning exercise. Otherwise, you are reinventing the wheel:

github.com/maximhq/bifrost

Devansh

what?

Nicolas Fränkel

Existing products do the same.

Devansh

It has some additional useful features. Please explore the project docs.

Nicolas Fränkel

You can add useful features, it won't beat an existing team project with a high velocity. Hence, it's great as a learning project, but unless you rally others around you, it will stay a learning project.

Devansh

Yes, I agree.