You are building with GPT-4, but sometimes Claude is better. Sometimes Gemini is cheaper. Switching between providers means rewriting code, managing multiple API keys, and handling different response formats. Namespace solves this with an AI gateway that routes your LLM calls intelligently.
## What Is Namespace?
Namespace is an AI gateway and model router. It provides a single API endpoint that can route requests to OpenAI, Anthropic, Google, and other LLM providers. You define routing rules based on cost, latency, quality, or custom logic — and Namespace handles the rest.
## The Free API
Namespace offers a free tier:
- Free plan: Generous request limits for development
- Unified API: One endpoint for all LLM providers
- Smart routing: Route by cost, latency, or model capability
- Fallback chains: Automatic failover if a provider is down
- Caching: Deduplicate identical requests to save costs
- Analytics: Track usage, cost, and latency per model
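To make the caching bullet concrete: deduplicating identical requests amounts to keying responses on the request payload and returning the stored answer on an exact match. Here is a minimal in-memory sketch of that technique — an illustration only, not Namespace's actual implementation; the `CompletionCache` class and its JSON-based cache key are assumptions:

```typescript
// Minimal sketch of request deduplication for an LLM gateway cache.
// Identical (model, messages) pairs hit the cache instead of the provider.
type ChatRequest = {
  model: string;
  messages: { role: string; content: string }[];
};

class CompletionCache {
  private store = new Map<string, string>();

  private key(req: ChatRequest): string {
    // Deterministic key: JSON of model + messages is enough for exact-match dedup.
    return JSON.stringify([req.model, req.messages]);
  }

  async complete(
    req: ChatRequest,
    callProvider: (r: ChatRequest) => Promise<string>
  ): Promise<string> {
    const k = this.key(req);
    const cached = this.store.get(k);
    if (cached !== undefined) return cached; // cache hit: no provider call, no cost
    const result = await callProvider(req);
    this.store.set(k, result);
    return result;
  }
}
```

A real gateway would add TTLs and eviction, but the cost-saving mechanism is exactly this: the second identical request never reaches the provider.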
## Quick Start
Install the SDK:
```shell
npm install @namespacelabs/sdk
```
Route your LLM calls:
```typescript
import { Namespace } from "@namespacelabs/sdk";

const ns = new Namespace({ apiKey: "your-api-key" });

// Simple completion — Namespace picks the best model
const response = await ns.chat.completions.create({
  messages: [{ role: "user", content: "Explain quantum computing" }],
  model: "auto", // Namespace routes to the optimal model
});

// Or specify routing preferences
const cheapResponse = await ns.chat.completions.create({
  messages: [{ role: "user", content: "Summarize this text" }],
  model: "auto",
  routing: {
    strategy: "cheapest",
    providers: ["openai", "anthropic", "google"],
  },
});
```
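Under the hood, a `cheapest` strategy is essentially a lookup over per-provider pricing: filter to the allowed providers, then pick the lowest price. A rough sketch of that selection logic — the `pricing` table and its numbers are illustrative placeholders, not Namespace's real routing data:

```typescript
// Sketch of cost-based routing: pick the allowed provider with the
// lowest price per million input tokens. Prices are placeholders.
const pricing: Record<string, number> = {
  openai: 2.5,
  anthropic: 3.0,
  google: 1.25,
};

function pickCheapest(providers: string[]): string {
  const known = providers.filter((p) => p in pricing);
  if (known.length === 0) throw new Error("no known providers");
  // Keep the provider with the lowest listed price.
  return known.reduce((best, p) => (pricing[p] < pricing[best] ? p : best));
}
```

With the placeholder table above, `pickCheapest(["openai", "anthropic", "google"])` selects `"google"`. A production router would also factor in rate limits, latency, and model capability.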
Define routing rules:
```typescript
// Fallback chain: try GPT-4 first, then fall back to Claude, then Gemini
const response = await ns.chat.completions.create({
  messages: [{ role: "user", content: "Complex reasoning task" }],
  model: "auto",
  routing: {
    strategy: "fallback",
    chain: ["gpt-4", "claude-3-opus", "gemini-pro"],
  },
});
```
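Conceptually, a fallback chain is just a loop that tries each model in order and moves on when a call throws. A simplified, SDK-independent sketch of that behavior — `withFallback` is a hypothetical helper for illustration, not part of the Namespace SDK:

```typescript
// Sketch of a fallback chain: try each model in order; return the first
// successful response, or rethrow the last error if the whole chain fails.
async function withFallback<T>(
  chain: string[],
  call: (model: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return await call(model);
    } catch (err) {
      lastError = err; // provider down or rate-limited: try the next model
    }
  }
  throw lastError;
}
```

The gateway version of this loop also decides which errors are retryable (timeouts, 429s, 5xx) versus which should surface immediately (invalid request).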
## Why Developers Choose Namespace
A SaaS company was spending $15K/month on GPT-4 for all requests. After adding Namespace, they routed simple tasks (summarization, classification) to cheaper models and reserved GPT-4 for complex reasoning. Their LLM costs dropped 60% with no measurable quality loss in any task category.
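To sanity-check those numbers: a 60% drop on a $15K/month bill leaves $6K/month, for $9K/month in savings.

```typescript
// Back-of-envelope check of the case study's figures.
const monthlyBill = 15_000;   // $/month with everything routed to GPT-4
const reductionPct = 60;      // reported cost drop after smart routing

const saved = (monthlyBill * reductionPct) / 100; // $9,000/month saved
const newBill = monthlyBill - saved;              // $6,000/month remaining
```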
## Who Is This For?
- AI application developers using multiple LLM providers
- Teams wanting to optimize LLM costs without maintaining per-provider integration code
- Startups that need provider redundancy and failover
- Enterprises tracking AI spend across departments
## Namespace vs. Alternatives
| Feature | Namespace | LiteLLM | OpenRouter |
|---|---|---|---|
| Smart routing | Yes | Manual | Limited |
| Automatic fallback | Yes | Yes | No |
| Cost optimization | Yes | No | No |
| Caching | Built-in | No | No |
| Analytics | Yes | Basic | Basic |
## Optimize Your AI Stack
Namespace lets you use the right model for each task without managing multiple integrations. One API, all providers, smart routing.
Need help building AI applications or optimizing LLM costs? I build custom AI solutions — reach out to discuss your project.
Found this useful? I publish daily deep-dives into developer tools and APIs. Follow for more.