ZNY
Building a Multi-Provider AI Setup (OpenAI + Claude + Gemini in One Project)

Relying on a single AI API provider is risky. Outages happen, pricing changes, and different models excel at different tasks. This guide shows you how to architect a multi-provider AI setup that gives you flexibility and reliability.

Why Multi-Provider?

- **Reliability:** If one provider goes down, you switch seamlessly
- **Cost optimization:** Use cheaper models for simple tasks, premium models for complex ones
- **Model diversity:** Each provider has unique strengths
- **No vendor lock-in:** You control your AI infrastructure

The Architecture


```
┌─────────────────────────────────────┐
│          Application Layer          │
├─────────────────────────────────────┤
│         AI Gateway / Router         │
│  (Unified interface for all APIs)   │
├─────────┬──────────┬────────────────┤
│ OpenAI  │  Claude  │     Gemini     │
│ ofox.ai │ ofox.ai  │   Google AI    │
└─────────┴──────────┴────────────────┘
```

Implementing a Simple AI Gateway

```javascript
// ai-gateway.js
class AIGateway {
  constructor() {
    this.providers = {
      openai: {
        baseURL: 'https://api.openai.com/v1',
        apiKey: process.env.OPENAI_API_KEY,
      },
      claude: {
        baseURL: 'https://api.ofox.ai/v1', // OpenAI-compatible
        apiKey: process.env.OFOX_API_KEY,
      },
      gemini: {
        baseURL: 'https://generativelanguage.googleapis.com/v1beta',
        apiKey: process.env.GEMINI_API_KEY,
      },
    };
    this.defaultProvider = 'claude';
  }

  async complete(prompt, options = {}) {
    const provider = options.provider || this.defaultProvider;
    const config = this.providers[provider];
    if (!config) {
      throw new Error(`Unknown provider: ${provider}`);
    }

    // Gemini uses a different request format; everything else is OpenAI-compatible
    if (provider === 'gemini') {
      return this.callGemini(prompt, config, options);
    }
    return this.callOpenAICompatible(prompt, config, options);
  }

  async callOpenAICompatible(prompt, config, options) {
    const response = await fetch(`${config.baseURL}/chat/completions`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${config.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: options.model || 'gpt-4o',
        messages: [{ role: 'user', content: prompt }],
        max_tokens: options.maxTokens || 1000,
        temperature: options.temperature || 0.7,
      }),
    });

    if (!response.ok) {
      const error = await response.json();
      throw new Error(`AI API error: ${error.error?.message || response.statusText}`);
    }

    return response.json();
  }

  async callGemini(prompt, config, options) {
    const model = options.model || 'gemini-1.5-flash';
    const url = `${config.baseURL}/models/${model}:generateContent?key=${config.apiKey}`;

    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        contents: [{ parts: [{ text: prompt }] }],
        generationConfig: {
          maxOutputTokens: options.maxTokens || 1000,
          temperature: options.temperature || 0.7,
        },
      }),
    });

    if (!response.ok) {
      throw new Error(`Gemini API error: ${response.statusText}`);
    }

    return response.json();
  }

  // Smart routing based on task type
  async completeSmart(prompt, taskType) {
    const routes = {
      code: { provider: 'claude', model: 'claude-3-5-sonnet-20241022' },
      creative: { provider: 'openai', model: 'gpt-4o' },
      fast: { provider: 'gemini', model: 'gemini-1.5-flash' },
      analysis: { provider: 'claude', model: 'claude-3-opus' },
    };

    const route = routes[taskType] || routes.fast;
    return this.complete(prompt, route);
  }
}

module.exports = { AIGateway };
```

Usage Examples

```javascript
const { AIGateway } = require('./ai-gateway');
const ai = new AIGateway();

// Direct provider call
const codeResponse = await ai.complete(
  'Write a REST API endpoint in Express.js',
  { provider: 'claude', model: 'claude-3-5-sonnet-20241022' }
);

// Smart routing
const fastResponse = await ai.completeSmart(
  'Summarize this article',
  'fast' // Routes to Gemini Flash for speed
);

const analysisResponse = await ai.completeSmart(
  'Analyze the security implications of this code',
  'analysis' // Routes to Claude Opus for depth
);
```

Fallback Logic

```javascript
// Try each provider in order until one succeeds
async function completeWithFallback(prompt, providers = ['claude', 'openai', 'gemini']) {
  for (const provider of providers) {
    try {
      const result = await ai.complete(prompt, { provider });
      return { result, provider };
    } catch (error) {
      console.warn(`Provider ${provider} failed:`, error.message);
    }
  }
  throw new Error('All AI providers failed');
}
```
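Sequential fallback handles hard outages, but transient failures (rate limits, 5xx responses) are often worth retrying on the same provider before switching. Here is a minimal sketch of exponential backoff layered under the fallback loop; `backoffDelay` and `completeWithRetry` are hypothetical helper names, not part of the gateway above.

```javascript
// backoff-helpers.js -- hypothetical helpers, not part of the gateway above

// Exponential backoff schedule: 500ms, 1000ms, 2000ms, ... capped at maxMs
function backoffDelay(attempt, baseMs = 500, maxMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Retry the same provider a few times before giving up on it entirely
async function completeWithRetry(ai, prompt, provider, retries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await ai.complete(prompt, { provider });
    } catch (error) {
      if (attempt >= retries) throw error;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
}

module.exports = { backoffDelay, completeWithRetry };
```

Swapping `ai.complete` for `completeWithRetry` inside `completeWithFallback` gives you both behaviors: retry transient errors locally, then fall through to the next provider.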

Cost Tracking

```javascript
// Rough cost estimate from an OpenAI-style usage object
async function trackCost(result, provider) {
  const usage = result.usage;
  const costPerToken = {
    claude: { input: 0.000003, output: 0.000015 }, // $/token
    openai: { input: 0.000005, output: 0.000015 },
    gemini: { input: 0.000000125, output: 0.0000005 },
  };

  const rates = costPerToken[provider];
  const inputCost = usage.prompt_tokens * rates.input;
  const outputCost = usage.completion_tokens * rates.output;

  console.log(`Cost: $${(inputCost + outputCost).toFixed(6)} (${provider})`);
  return inputCost + outputCost;
}
```

Note that these rates are illustrative; check each provider's current pricing page before relying on them.
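One caveat: `trackCost` assumes an OpenAI-style `usage` object, but Gemini's raw REST response reports token counts under `usageMetadata` with different field names (`promptTokenCount` / `candidatesTokenCount`). A small normalizer lets the same cost logic work for all three providers; `normalizeUsage` is a hypothetical helper name, a sketch rather than a complete mapping.

```javascript
// Normalize provider-specific token reporting into the OpenAI usage shape
function normalizeUsage(result, provider) {
  if (provider === 'gemini') {
    // Gemini reports counts under usageMetadata with camelCase field names
    const meta = result.usageMetadata || {};
    return {
      prompt_tokens: meta.promptTokenCount || 0,
      completion_tokens: meta.candidatesTokenCount || 0,
    };
  }
  // OpenAI and OpenAI-compatible providers already use this shape
  return result.usage || { prompt_tokens: 0, completion_tokens: 0 };
}

module.exports = { normalizeUsage };
```

With this in place, `trackCost({ usage: normalizeUsage(result, provider) }, provider)` works regardless of which provider handled the request.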

Provider Recommendations

| Provider  | Best For              | OpenAI-Compatible | Reliability |
|-----------|-----------------------|-------------------|-------------|
| ofox.ai   | Claude access, coding | ✅ Yes            | High        |
| OpenAI    | GPT models            | ✅ Native         | High        |
| Google AI | Gemini, multimodal    | ⚠️ Different API  | High        |

Environment Setup

```bash
# .env
OPENAI_API_KEY=sk-...
OFOX_API_KEY=sk-ant-...
GEMINI_API_KEY=AIza...
```
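With three keys in play, it's easy to deploy with one of them missing and only find out when that provider's requests start failing. A fail-fast check at startup surfaces this immediately; this is a sketch, and `missingKeys` is a hypothetical helper name.

```javascript
// validate-env.js -- warn at startup about providers that cannot work

// Return the names of required env vars that are unset or blank
function missingKeys(env, required = ['OPENAI_API_KEY', 'OFOX_API_KEY', 'GEMINI_API_KEY']) {
  return required.filter((name) => !env[name] || env[name].trim() === '');
}

const missing = missingKeys(process.env);
if (missing.length > 0) {
  console.warn(`Missing API keys: ${missing.join(', ')} - these providers will fail`);
}

module.exports = { missingKeys };
```

Run it once when the process boots (or call it from the `AIGateway` constructor) so a misconfigured provider is a log line, not a production surprise.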

Conclusion

A multi-provider AI setup gives you reliability, cost flexibility, and access to the best models for each task. Start with two providers (primary + fallback) and expand as needed.

Get started with Claude via OpenAI-compatible API: ofox.ai — offers reliable Claude API access with pay-as-you-go pricing.

Tags: openai-api, claude-api, gemini-api, multi-provider, architecture, developers
Canonical URL: https://dev.to/zny10289
