# Running ChatGPT, Claude, and Gemini in One TypeScript App
What if you could use GPT-4o for creative writing, Claude for code review, and Gemini for document analysis — all in the same TypeScript app, with the same API?
Most developers pick one AI provider and build their entire app around it. Then six months later, when prices change or a better model drops, they're stuck rewriting everything. I've been there.
Here's how to build a multi-provider AI app from day one using a single TypeScript SDK — and why it matters more than you think.
## The Problem: Three SDKs, Three Headaches
Let's say you want to use OpenAI, Anthropic, and Google AI in the same project. Here's what you're signing up for:
```typescript
// OpenAI
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });
const chat = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});

// Anthropic
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_KEY });
const msg = await anthropic.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});

// Google AI (constructor requires a string, so assert the env var is set)
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });
const result = await model.generateContent("Hello");
```
Three different SDKs. Three different response formats. Three different streaming interfaces. Three different error types. Three sets of documentation.
Now multiply that by error handling, retries, logging, and testing.
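To make the fragmentation concrete, here is a sketch of the adapter you end up writing by hand just to get the response text out. The response shapes below are simplified versions of each SDK's documented format; the union type and helper are my own illustration, not part of any SDK.

```typescript
// Each SDK puts the generated text in a different place.
// These types are simplified illustrations of the real response shapes.
type OpenAIResponse = { choices: { message: { content: string | null } }[] };
type AnthropicResponse = { content: { type: "text"; text: string }[] };
type GoogleResponse = { response: { text: () => string } };

type AnyResponse =
  | { provider: "openai"; data: OpenAIResponse }
  | { provider: "anthropic"; data: AnthropicResponse }
  | { provider: "google"; data: GoogleResponse };

// One helper like this per call site, three branches to maintain forever.
function extractText(res: AnyResponse): string {
  switch (res.provider) {
    case "openai":
      return res.data.choices[0]?.message.content ?? "";
    case "anthropic":
      return res.data.content.find((b) => b.type === "text")?.text ?? "";
    case "google":
      return res.data.response.text();
  }
}
```

And this only covers successful responses; errors, retries, and streaming each need the same three-way treatment.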
## The Solution: One SDK, Three Providers
NeuroLink unifies 13 major AI providers behind one TypeScript API. Here are the same three calls:
```typescript
import { NeuroLink } from "@juspay/neurolink";

const ai = new NeuroLink({
  providers: [
    { name: "openai", apiKey: process.env.OPENAI_KEY },
    { name: "anthropic", apiKey: process.env.ANTHROPIC_KEY },
    { name: "google-ai", apiKey: process.env.GOOGLE_KEY },
  ],
});

// Use GPT-4o
const story = await ai.generate({
  input: { text: "Write a short story about a TypeScript developer" },
  provider: "openai",
  model: "gpt-4o",
});

// Use Claude
const review = await ai.generate({
  input: { text: "Review this code for security issues: ..." },
  provider: "anthropic",
  model: "claude-sonnet-4-6",
});

// Use Gemini
const analysis = await ai.generate({
  input: { text: "Summarize the key trends in this data" },
  provider: "google-ai",
  model: "gemini-2.5-pro",
});
```
Same `generate()` call. Same response format. Same error handling. Just swap the `provider` and `model` strings.
## Streaming: Same Interface, Every Provider
Streaming is where SDK fragmentation hurts the most. Each provider has a completely different streaming API. NeuroLink normalizes all of them:
```typescript
// This works identically for OpenAI, Anthropic, and Google
const stream = await ai.stream({
  input: { text: "Explain quantum computing in simple terms" },
  provider: "anthropic",
  model: "claude-sonnet-4-6",
});

for await (const chunk of stream.stream) {
  if ("content" in chunk) {
    process.stdout.write(chunk.content);
  }
}
```
Switch `provider: "anthropic"` to `provider: "openai"` and the code doesn't change. The streaming interface is identical.
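If you want the full text at the end as well as live output, a small helper can drain any chunk stream into a string. The chunk shape matches the `{ content }` objects in the loop above; the helper itself is my own sketch, not a NeuroLink API.

```typescript
// Drain an async iterable of chunks into the complete response text,
// while still writing each chunk to stdout as it arrives.
async function collectStream(
  chunks: AsyncIterable<{ content?: string }>
): Promise<string> {
  let full = "";
  for await (const chunk of chunks) {
    if (chunk.content !== undefined) {
      process.stdout.write(chunk.content); // live output
      full += chunk.content;               // accumulated copy
    }
  }
  return full;
}
```

Because every provider yields the same chunk shape, one helper covers all of them.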
## Smart Routing: Use the Right Model for Each Job
Different models have different strengths. Here's a practical pattern — route tasks to the best provider automatically:
```typescript
import { NeuroLink } from "@juspay/neurolink";

const ai = new NeuroLink({
  providers: [
    { name: "openai", apiKey: process.env.OPENAI_KEY },
    { name: "anthropic", apiKey: process.env.ANTHROPIC_KEY },
    { name: "google-ai", apiKey: process.env.GOOGLE_KEY },
  ],
});

// Creative tasks → GPT-4o (excellent at creative writing)
async function generateContent(prompt: string) {
  return ai.generate({
    input: { text: prompt },
    provider: "openai",
    model: "gpt-4o",
  });
}

// Code tasks → Claude (excellent at code understanding)
async function reviewCode(code: string) {
  return ai.generate({
    input: { text: `Review this code:\n\`\`\`\n${code}\n\`\`\`` },
    provider: "anthropic",
    model: "claude-sonnet-4-6",
  });
}

// Cheap tasks → Gemini Flash (free tier, fast)
async function quickSummary(text: string) {
  return ai.generate({
    input: { text: `Summarize: ${text}` },
    provider: "google-ai",
    model: "gemini-2.5-flash",
  });
}
```
This isn't premature optimization; it's practical cost management. Gemini Flash has a generous free tier that covers many workloads. Why pay GPT-4o rates for a simple summary?
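The three wrapper functions above can be folded into a single routing table. The task-to-model mapping below is my own example policy, not anything NeuroLink prescribes; tune it to your workload.

```typescript
// Example routing policy: map a task category to a provider/model pair.
type Task = "creative" | "code" | "summary";

const ROUTES: Record<Task, { provider: string; model: string }> = {
  creative: { provider: "openai", model: "gpt-4o" },
  code: { provider: "anthropic", model: "claude-sonnet-4-6" },
  summary: { provider: "google-ai", model: "gemini-2.5-flash" },
};

function pickRoute(task: Task): { provider: string; model: string } {
  return ROUTES[task];
}

// Hypothetical wiring with the client from above:
// const result = await ai.generate({ input: { text: prompt }, ...pickRoute("summary") });
```

Centralizing the policy in one table means a pricing change is a one-line edit, not a hunt through every call site.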
## Structured Output: Same Schema, Every Provider
Type-safe JSON output works across all providers with Zod schemas:
```typescript
import { z } from "zod";

const SentimentSchema = z.object({
  sentiment: z.enum(["positive", "negative", "neutral"]),
  confidence: z.number().min(0).max(1),
  reasoning: z.string(),
});

// Works with OpenAI
const openaiResult = await ai.generate({
  input: { text: "Analyze: 'This product is amazing!'" },
  provider: "openai",
  model: "gpt-4o",
  schema: SentimentSchema,
  output: { format: "json" },
});

// Same schema works with Claude
const claudeResult = await ai.generate({
  input: { text: "Analyze: 'This product is amazing!'" },
  provider: "anthropic",
  model: "claude-sonnet-4-6",
  schema: SentimentSchema,
  output: { format: "json" },
});

// Both return typed objects
console.log(openaiResult.object); // { sentiment: "positive", confidence: 0.95, ... }
console.log(claudeResult.object); // { sentiment: "positive", confidence: 0.92, ... }
```
Write your schema once. Validate across any provider. No per-provider JSON parsing gymnastics.
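As a bonus, `z.infer<typeof SentimentSchema>` gives you the matching static type for free. To show exactly what the schema enforces at runtime, here is a dependency-free type guard mirroring its checks; this is an illustration of the validated shape, not a replacement for Zod.

```typescript
// Static shape matching SentimentSchema
// (what z.infer<typeof SentimentSchema> would yield).
interface Sentiment {
  sentiment: "positive" | "negative" | "neutral";
  confidence: number; // constrained to 0..1 by the schema
  reasoning: string;
}

// Hand-written guard mirroring the schema's runtime validation.
function isSentiment(value: unknown): value is Sentiment {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    ["positive", "negative", "neutral"].includes(v.sentiment as string) &&
    typeof v.confidence === "number" &&
    v.confidence >= 0 &&
    v.confidence <= 1 &&
    typeof v.reasoning === "string"
  );
}
```

The point of the schema approach is that you never write guards like this by hand: the schema is both the runtime validator and the source of the static type.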
## Automatic Failover
What happens when OpenAI goes down? (It happens more than you'd think.) NeuroLink's fallback chain handles this automatically:
```typescript
const ai = new NeuroLink({
  providers: [
    { name: "openai", apiKey: process.env.OPENAI_KEY, priority: 1 },
    { name: "anthropic", apiKey: process.env.ANTHROPIC_KEY, priority: 2 },
    { name: "google-ai", apiKey: process.env.GOOGLE_KEY, priority: 3 },
  ],
  fallback: true,
});

// Tries OpenAI first. If it fails, tries Claude. If that fails, tries Gemini.
const result = await ai.generate({
  input: { text: "Summarize this report" },
});

console.log(`Handled by: ${result.provider}`);
Your app stays up regardless of which provider is having a bad day.
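Conceptually, a fallback chain is just "try each provider in priority order until one succeeds." Here is a minimal, dependency-free sketch of that logic; it is my own illustration of the pattern, not NeuroLink's actual implementation.

```typescript
// Try each async attempt in order; return the first success.
// If every attempt fails, rethrow the last error.
async function firstSuccess<T>(
  attempts: Array<() => Promise<T>>
): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastError = err; // this provider failed; fall through to the next
    }
  }
  throw lastError;
}
```

A production version would add per-attempt timeouts and retry budgets, but the ordering logic is this simple.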
## Real Cost Comparison
Here's what it actually costs to process 100,000 requests (avg 1K tokens each):
| Provider | Model | Cost per 1K tokens | Total Cost |
|---|---|---|---|
| OpenAI | GPT-4o | ~$0.005 in + $0.015 out | ~$2,000 |
| OpenAI | GPT-4o-mini | ~$0.00015 in + $0.0006 out | ~$75 |
| Anthropic | Claude Sonnet 4.6 | ~$0.003 in + $0.015 out | ~$1,800 |
| Anthropic | Claude Haiku 4.5 | ~$0.0008 in + $0.004 out | ~$480 |
| Google | Gemini 2.5 Flash | Free (under limits) | ~$0 |
With multi-provider routing, you can keep costs under $100/month for most workloads by routing simple tasks to Gemini Flash and only using premium models when needed.
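To sanity-check figures like these against your own traffic, a small estimator helps. The rates in the usage comment are the illustrative numbers from the table above; plug in current prices from each provider's pricing page before relying on the output.

```typescript
// Estimate total spend from per-1K-token prices.
interface Rate {
  inPer1K: number;  // $ per 1K input tokens
  outPer1K: number; // $ per 1K output tokens
}

function estimateCost(
  rate: Rate,
  requests: number,
  inTokensPerReq: number,
  outTokensPerReq: number
): number {
  const inCost = ((requests * inTokensPerReq) / 1000) * rate.inPer1K;
  const outCost = ((requests * outTokensPerReq) / 1000) * rate.outPer1K;
  return inCost + outCost;
}

// 100k requests at ~500 tokens in / ~500 tokens out each,
// using the table's illustrative GPT-4o rates:
// estimateCost({ inPer1K: 0.005, outPer1K: 0.015 }, 100_000, 500, 500)
```

Run it for each row of the table with your real input/output split; the in/out ratio moves the totals more than most people expect.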
## Getting Started
```bash
# Install
npm install @juspay/neurolink

# Or use the CLI setup wizard
npx @juspay/neurolink setup
```
The interactive setup wizard walks you through configuring each provider. It validates your API keys and lets you test each one before writing any code.
## Why This Matters
The AI landscape changes fast. A year ago, GPT-4 was untouchable. Now Claude and Gemini compete on quality while undercutting on price.
Building provider-agnostic from day one means:
- No vendor lock-in: Switch providers with one line, not a rewrite
- Cost optimization: Route to the cheapest provider that meets quality needs
- Resilience: Automatic failover when providers go down
- Future-proof: When the next frontier model drops, plug it in
The cost of abstraction? Zero runtime overhead. NeuroLink calls the provider SDKs directly — there's no proxy, no middleware, no added latency.
Try it:
- GitHub: github.com/juspay/neurolink
- npm: `npm install @juspay/neurolink`
- Docs: docs.neurolink.ink
Which AI providers are you using together? Share your multi-provider setup in the comments.