Ramandeep Singh
Using Vercel AI SDK with Google Gemini: Complete Guide

🎯 Can You Get All OpenAI Services with Gemini?

Short answer: Yes! The Vercel AI SDK provides a unified API that works seamlessly across multiple AI providers, including Google Gemini. When you switch from OpenAI to Gemini, you get access to the same core features and services.

✨ What Features Are Available?

The Vercel AI SDK ensures feature parity across providers. Here's what you can use with Gemini:

Core Features

  • ✅ Streaming Text Generation (streamText) - Real-time response streaming
  • ✅ Function Calling / Tools - Define and execute custom functions
  • ✅ System Messages - Set assistant behavior and context
  • ✅ Message History - Maintain conversation context
  • ✅ Text Generation (generateText) - Non-streaming text generation
  • ✅ Multi-modal Support - Text, images, and more

Advanced Features

  • ✅ Tool Execution - Your custom tools work exactly the same
  • ✅ UI Message Streaming - toUIMessageStreamResponse() works identically
  • ✅ Message Conversion - convertToModelMessages() handles message formatting
  • ✅ Error Handling - Same error handling patterns

🚀 Quick Migration Guide

Step 1: Install the Google AI SDK Package

```bash
npm install @ai-sdk/google
```

Step 2: Update Your Imports

Before (OpenAI):

```typescript
import { openai } from "@ai-sdk/openai";
```

After (Gemini):

```typescript
import { google } from "@ai-sdk/google";
```

Step 3: Update Your Model Configuration

Before:

```typescript
const result = streamText({
    model: openai('gpt-4o'),
    // ... rest of config
});
```

After:

```typescript
const result = streamText({
    model: google('gemini-2.0-flash-exp'), // or other Gemini models
    // ... rest of config (everything else stays the same!)
});
```

Step 4: Set Environment Variable

Add to your .env.local or environment variables:

```
GOOGLE_GENERATIVE_AI_API_KEY=your_api_key_here
```

Get your API key from: Google AI Studio
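The `@ai-sdk/google` provider reads `GOOGLE_GENERATIVE_AI_API_KEY` from the environment automatically (and `createGoogleGenerativeAI({ apiKey })` lets you pass it explicitly). If you prefer to fail fast when the key is missing, rather than hit an opaque auth error on the first request, a small guard helps. This is an illustrative helper, not part of the SDK; the dummy fallback below just keeps the sketch self-contained:

```typescript
// Hypothetical helper: fail fast at startup if a required variable is unset.
function requireEnv(name: string): string {
    const value = process.env[name];
    if (!value) {
        throw new Error(`Missing required environment variable: ${name}`);
    }
    return value;
}

// Example usage (dummy value so the sketch runs without a real key):
process.env.GOOGLE_GENERATIVE_AI_API_KEY ??= "dummy-key-for-local-testing";
const apiKey = requireEnv("GOOGLE_GENERATIVE_AI_API_KEY");
```

Call `requireEnv` once at startup so a misconfigured deployment fails immediately instead of on the first chat request.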

📋 Available Gemini Models

| Model | Best For | Speed | Capability |
| --- | --- | --- | --- |
| gemini-2.0-flash-exp | Latest features, fast responses | ⚡ Fast | High |
| gemini-1.5-pro | Complex tasks, reasoning | 🐢 Slower | Very High |
| gemini-1.5-flash | Cost-effective, quick responses | ⚡ Fast | Good |
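The table above can be encoded as a tiny lookup so the rest of your code never hard-codes a model name. This is a hypothetical helper (the priority names are mine, not the SDK's):

```typescript
// Hypothetical helper: map a rough priority to a Gemini model id from the table above.
type Priority = "latest" | "reasoning" | "cost";

function pickGeminiModel(priority: Priority): string {
    switch (priority) {
        case "latest":
            return "gemini-2.0-flash-exp"; // newest features, fast responses
        case "reasoning":
            return "gemini-1.5-pro";       // slower, strongest on complex tasks
        case "cost":
            return "gemini-1.5-flash";     // cheap and quick
    }
}
```

You would then write `model: google(pickGeminiModel("latest"))`, and swapping models becomes a one-line change.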

💡 Complete Example: Migrating Your Chat Route

Here's a complete example showing how your existing code works with Gemini:

```typescript
import { google } from "@ai-sdk/google";
import { streamText, UIMessage, convertToModelMessages, tool } from 'ai';
import { z } from 'zod';

export async function POST(req: Request) {
    const { messages }: { messages: UIMessage[] } = await req.json();

    const result = streamText({
        model: google('gemini-2.0-flash-exp'), // Changed from openai('gpt-4o')
        system: 'You are a helpful assistant at a Post office...',
        messages: convertToModelMessages(messages),
        tools: {
            getPassbookDetails: tool({
                description: 'Get passbook details from user',
                inputSchema: z.object({
                    passbookNumber: z.string().describe('The passbook number of the user'),
                }),
                execute: async ({ passbookNumber }) => {
                    return { passbookDetails: `Passbook number: ${passbookNumber}` };
                },
            }),
            // ... other tools work exactly the same
        },
    });

    return result.toUIMessageStreamResponse(); // Same as before!
}
```

Notice: Only the import and model name changed. Everything else remains identical!

🔄 Side-by-Side Comparison

| Feature | OpenAI | Gemini | Notes |
| --- | --- | --- | --- |
| Streaming | ✅ | ✅ | Same API |
| Tools/Functions | ✅ | ✅ | Same implementation |
| System Messages | ✅ | ✅ | Same syntax |
| Message History | ✅ | ✅ | Same format |
| Multi-modal | ✅ | ✅ | Both support images |
| Cost | Pay per token | Pay per token | Pricing varies |
| Response Speed | Fast | Fast | Depends on model |
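Multi-modal input works the same way with both providers: a user message's content can be an array of parts instead of a plain string. Here's a hedged sketch of that shape as plain data (the URL is a placeholder; image parts can also accept raw bytes):

```typescript
// Sketch of a multi-modal user message in the AI SDK's model-message format.
// The image URL below is a placeholder, not a real resource.
const multiModalMessage = {
    role: "user" as const,
    content: [
        { type: "text" as const, text: "What is shown in this picture?" },
        { type: "image" as const, image: new URL("https://example.com/receipt.png") },
    ],
};
```

A message like this can be passed in the `messages` array to `streamText` or `generateText`, unchanged, whichever provider you pick.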

🎨 Why Use Gemini?

  1. Cost Efficiency - Often more cost-effective for high-volume applications
  2. Performance - Excellent performance on specific tasks
  3. Multi-modal - Strong support for images and other media
  4. Flexibility - Easy to switch between providers
  5. Unified API - Same code works with multiple providers

🔧 Environment Setup

Required Environment Variables

```
# For Gemini
GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_api_key

# For OpenAI (if you want to keep both)
OPENAI_API_KEY=your_openai_api_key
```

Optional: Support Multiple Providers

You can even support both providers and let users choose:

```typescript
import { openai } from "@ai-sdk/openai";
import { google } from "@ai-sdk/google";
import { streamText, UIMessage, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
    const { messages, provider = 'gemini' }: {
        messages: UIMessage[],
        provider?: 'openai' | 'gemini'
    } = await req.json();

    const model = provider === 'gemini'
        ? google('gemini-2.0-flash-exp')
        : openai('gpt-4o');

    const result = streamText({
        model,
        messages: convertToModelMessages(messages),
        // ... rest of config
    });

    return result.toUIMessageStreamResponse();
}
```

⚠️ Important Considerations

What's the Same?

  • ✅ API structure and method signatures
  • ✅ Tool/function calling implementation
  • ✅ Streaming response format
  • ✅ Message handling
  • ✅ Error patterns

What Might Differ?

  • 🔄 Model-specific capabilities (some advanced features may vary)
  • 🔄 Response quality and style (each model has its strengths)
  • 🔄 Rate limits and quotas (check provider documentation)
  • 🔄 Pricing structure (compare costs for your use case)
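Rate limits in particular are worth handling defensively when you switch providers, since quotas differ. A minimal retry-with-backoff wrapper, illustrative only and not part of the SDK, might look like this:

```typescript
// Hypothetical wrapper: retry a failing async call with exponential backoff.
// Useful around generateText/streamText calls when a provider rate-limits you.
async function withRetries<T>(
    fn: () => Promise<T>,
    maxAttempts = 3,
    baseDelayMs = 500,
): Promise<T> {
    let lastError: unknown;
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
            return await fn();
        } catch (error) {
            lastError = error;
            if (attempt < maxAttempts - 1) {
                // Exponential backoff: 500ms, 1000ms, 2000ms, ...
                await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
            }
        }
    }
    throw lastError;
}
```

In production you would likely also inspect the error to retry only on rate-limit or transient failures, rather than on every exception.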

🎯 Conclusion

Yes, you get all the same services! The Vercel AI SDK's unified API means you can switch between OpenAI and Gemini (or any other supported provider) with minimal code changes. Your existing tools, streaming, and message handling will work exactly the same way.

The only changes needed are:

  1. Install @ai-sdk/google
  2. Change the import
  3. Update the model name
  4. Set the environment variable

Everything else in your codebase remains unchanged. Happy coding! 🚀
