# 🎯 Can You Get All OpenAI Services with Gemini?
Short answer: yes. The Vercel AI SDK provides a unified API that works across multiple AI providers, including Google Gemini. When you switch from OpenAI to Gemini, the same core features and services remain available.
## ✨ What Features Are Available?
The Vercel AI SDK provides a consistent interface across providers. Here's what you can use with Gemini:
### Core Features

- ✅ **Streaming Text Generation** (`streamText`) - Real-time response streaming
- ✅ **Function Calling / Tools** - Define and execute custom functions
- ✅ **System Messages** - Set assistant behavior and context
- ✅ **Message History** - Maintain conversation context
- ✅ **Text Generation** (`generateText`) - Non-streaming text generation
- ✅ **Multi-modal Support** - Text, images, and more
### Advanced Features

- ✅ **Tool Execution** - Your custom tools work exactly the same
- ✅ **UI Message Streaming** - `toUIMessageStreamResponse()` works identically
- ✅ **Message Conversion** - `convertToModelMessages()` handles message formatting
- ✅ **Error Handling** - Same error handling patterns
## 🚀 Quick Migration Guide
### Step 1: Install the Google AI SDK Package

```bash
npm install @ai-sdk/google
```
### Step 2: Update Your Imports

**Before (OpenAI):**

```typescript
import { openai } from "@ai-sdk/openai";
```

**After (Gemini):**

```typescript
import { google } from "@ai-sdk/google";
```
### Step 3: Update Your Model Configuration

**Before:**

```typescript
const result = streamText({
  model: openai('gpt-4o'),
  // ... rest of config
});
```

**After:**

```typescript
const result = streamText({
  model: google('gemini-2.0-flash-exp'), // or other Gemini models
  // ... rest of config (everything else stays the same!)
});
```
### Step 4: Set the Environment Variable

Add to your `.env.local` or environment variables:

```bash
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key_here
```
Get your API key from: Google AI Studio
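Before the route ever constructs a model, it can be useful to fail fast with a clear message when the key is missing. The helper below is a hypothetical sketch (not part of the AI SDK, which reads `GOOGLE_GENERATIVE_AI_API_KEY` on its own); it only makes a missing or blank key obvious at startup:

```typescript
// Hypothetical startup guard: the @ai-sdk/google provider reads
// GOOGLE_GENERATIVE_AI_API_KEY itself; this check just surfaces a
// clearer error than a failed API call would.
function hasGeminiKey(env: Record<string, string | undefined>): boolean {
  const key = env["GOOGLE_GENERATIVE_AI_API_KEY"];
  return typeof key === "string" && key.trim().length > 0;
}

// Example usage with a stubbed environment object:
if (!hasGeminiKey({ GOOGLE_GENERATIVE_AI_API_KEY: "AIza-example" })) {
  throw new Error("GOOGLE_GENERATIVE_AI_API_KEY is not set");
}
```

In a real route you would pass `process.env` instead of the stub object.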
## 📊 Available Gemini Models

| Model | Best For | Speed | Capability |
|---|---|---|---|
| `gemini-2.0-flash-exp` | Latest features, fast responses | ⚡ Fast | High |
| `gemini-1.5-pro` | Complex tasks, reasoning | 🐢 Slower | Very High |
| `gemini-1.5-flash` | Cost-effective, quick responses | ⚡ Fast | Good |
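The table above can be encoded as a small helper so the model choice lives in one place. This is a hypothetical convenience, not SDK functionality; the returned ids are just the strings you would pass to `google(...)`:

```typescript
// Hypothetical mapping from a rough task profile to a Gemini model id
// from the table above. The returned string is passed to google(...).
type TaskProfile = "latest" | "reasoning" | "budget";

function pickGeminiModel(profile: TaskProfile): string {
  switch (profile) {
    case "latest":
      return "gemini-2.0-flash-exp"; // newest features, fast responses
    case "reasoning":
      return "gemini-1.5-pro"; // slower, strongest on complex tasks
    case "budget":
      return "gemini-1.5-flash"; // cost-effective, quick responses
  }
}

// e.g. model: google(pickGeminiModel("reasoning"))
```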
## 💡 Complete Example: Migrating Your Chat Route

Here's a complete example showing how your existing code works with Gemini:
```typescript
import { google } from "@ai-sdk/google";
import { streamText, UIMessage, convertToModelMessages, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: google('gemini-2.0-flash-exp'), // Changed from openai('gpt-4o')
    system: 'You are a helpful assistant at a Post office...',
    messages: convertToModelMessages(messages),
    tools: {
      getPassbookDetails: tool({
        description: 'Get passbook details from user',
        inputSchema: z.object({
          passbookNumber: z.string().describe('The passbook number of the user'),
        }),
        execute: async ({ passbookNumber }) => {
          return { passbookDetails: `Passbook number: ${passbookNumber}` };
        },
      }),
      // ... other tools work exactly the same
    },
  });

  return result.toUIMessageStreamResponse(); // Same as before!
}
```
**Notice:** Only the import and the model name changed. Everything else remains identical!
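On the client side, the route above expects a JSON body containing a `messages` array. A minimal sketch of building that body follows; the `SimpleMessage` shape here is a simplification for illustration, since the real `UIMessage` type from the `ai` package carries additional fields:

```typescript
// Simplified message shape for illustration; the real UIMessage type
// in the `ai` package has more fields (id, parts, etc.).
interface SimpleMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Build the JSON body that the POST handler above destructures
// via `const { messages } = await req.json()`.
function buildChatRequestBody(messages: SimpleMessage[]): string {
  return JSON.stringify({ messages });
}

// Example: the payload a fetch("/api/chat", { method: "POST", ... })
// call would send.
const body = buildChatRequestBody([
  { role: "user", content: "What is my passbook balance?" },
]);
```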
## 🔍 Side-by-Side Comparison
| Feature | OpenAI | Gemini | Notes |
|---|---|---|---|
| Streaming | ✅ | ✅ | Same API |
| Tools/Functions | ✅ | ✅ | Same implementation |
| System Messages | ✅ | ✅ | Same syntax |
| Message History | ✅ | ✅ | Same format |
| Multi-modal | ✅ | ✅ | Both support images |
| Cost | Pay per token | Pay per token | Pricing varies |
| Response Speed | Fast | Fast | Depends on model |
## 🎨 Why Use Gemini?

- **Cost Efficiency** - Often more cost-effective for high-volume applications
- **Performance** - Excellent performance on specific tasks
- **Multi-modal** - Strong support for images and other media
- **Flexibility** - Easy to switch between providers
- **Unified API** - Same code works with multiple providers
## 🔧 Environment Setup

### Required Environment Variables

```bash
# For Gemini
GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_api_key

# For OpenAI (if you want to keep both)
OPENAI_API_KEY=your_openai_api_key
```
### Optional: Support Multiple Providers

You can even support both providers and let users choose:

```typescript
import { openai } from "@ai-sdk/openai";
import { google } from "@ai-sdk/google";
import { streamText, UIMessage, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages, provider = 'gemini' }: {
    messages: UIMessage[],
    provider?: 'openai' | 'gemini'
  } = await req.json();

  // Pick the model based on the caller's choice; defaults to Gemini.
  const model = provider === 'gemini'
    ? google('gemini-2.0-flash-exp')
    : openai('gpt-4o');

  const result = streamText({
    model,
    messages: convertToModelMessages(messages),
    // ... rest of config
  });

  return result.toUIMessageStreamResponse();
}
```
## ⚠️ Important Considerations

### What's the Same?

- ✅ API structure and method signatures
- ✅ Tool/function calling implementation
- ✅ Streaming response format
- ✅ Message handling
- ✅ Error patterns
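Because failures surface as ordinary exceptions regardless of provider, one error-handling pattern covers both. The `toErrorPayload` helper below is a hypothetical sketch, not part of the AI SDK:

```typescript
// Hypothetical error-to-response mapper: the same try/catch pattern
// works for OpenAI and Gemini because both providers raise ordinary
// exceptions from streamText.
function toErrorPayload(err: unknown): { status: number; message: string } {
  const message = err instanceof Error ? err.message : "Unknown error";
  // Everything is treated as a server-side failure here; a real app
  // might inspect the error to distinguish rate limits or bad auth.
  return { status: 500, message };
}

// Usage inside a route handler (sketch):
// try {
//   const result = streamText({ model: google('gemini-2.0-flash-exp'), /* ... */ });
//   return result.toUIMessageStreamResponse();
// } catch (err) {
//   const { status, message } = toErrorPayload(err);
//   return new Response(JSON.stringify({ error: message }), { status });
// }
```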
### What Might Differ?

- 🔄 **Model-specific capabilities** (some advanced features may vary)
- 🔄 **Response quality and style** (each model has its strengths)
- 🔄 **Rate limits and quotas** (check provider documentation)
- 🔄 **Pricing structure** (compare costs for your use case)
## 🎯 Conclusion

Yes, you get all the same services! The Vercel AI SDK's unified API means you can switch between OpenAI and Gemini (or any other supported provider) with minimal code changes. Your existing tools, streaming, and message handling will work exactly the same way.

The only changes needed are:

- Install `@ai-sdk/google`
- Change the import
- Update the model name
- Set the environment variable

Everything else in your codebase remains unchanged. Happy coding! 🚀