M Sea Bass

Using Gemini with the OpenAI Library

Based on this article, we can now use Gemini with the OpenAI library, so I decided to give it a try.

Currently, only the Chat Completions API and Embedding API are available (a short Embedding API example appears at the end of the Python section below).

In this article, I tried using both Python and JavaScript.

Python

First, let’s set up the environment.

pip install openai python-dotenv
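
Also create a .env file in the project root; the code below reads the API key from it. The variable name matches the code, and the value here is just a placeholder for your own key.

GOOGLE_API_KEY=your-gemini-api-key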

Next, let's run the following code.

import os

from dotenv import load_dotenv
from openai import OpenAI


load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)


response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Explain briefly(less than 30 words) to me how AI works."
        }
    ]
)

print(response.choices[0].message.content)

The following response was returned.

AI mimics human intelligence by learning patterns from data, using algorithms to solve problems and make decisions. 

In the content field, you can specify either a plain string or a list of parts with 'type': 'text'.

import os

from dotenv import load_dotenv
from openai import OpenAI


load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Explain briefly(less than 30 words) to me how AI works.",
                },
            ]
        }
    ]
)

print(response.choices[0].message.content)

However, errors occurred with image and audio inputs.

Sample code for image input

import os

from dotenv import load_dotenv
from openai import OpenAI


load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Encode the PNG image as a base64 string
import base64
with open("test.png", "rb") as image:
    b64str = base64.b64encode(image.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o",
    n=1,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Describe the image in the image below.",
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{b64str}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)

Sample code for audio input

import os

from dotenv import load_dotenv
from openai import OpenAI


load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Encode the WAV audio as a base64 string
import base64
with open("test.wav", "rb") as audio:
    b64str = base64.b64encode(audio.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    # model="gpt-4o-audio-preview", 
    n=1,
    modalities=["text"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does he say?",
                },
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": b64str,
                        "format": "wav",
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)

The following error response was returned.

openai.BadRequestError: Error code: 400 - [{'error': {'code': 400, 'message': 'Request contains an invalid argument.', 'status': 'INVALID_ARGUMENT'}}]

Currently, only text input is supported, but it seems that image and audio inputs will be available in the future.
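
As noted above, the Embedding API is also available through the same compatibility endpoint. Here is a minimal sketch, assuming Gemini's text-embedding-004 model is exposed there (the input string is made up for illustration):

import os

from dotenv import load_dotenv
from openai import OpenAI


load_dotenv()
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")

client = OpenAI(
    api_key=GOOGLE_API_KEY,
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# Request an embedding vector for a piece of text
response = client.embeddings.create(
    model="text-embedding-004",
    input="How does AI work?"
)

# Print the dimensionality of the returned vector
print(len(response.data[0].embedding))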

JavaScript

Let's take a look at the JavaScript sample code.

First, let’s set up the environment.

npm init -y
npm install openai
npm pkg set type=module

Next, let’s run the following code.

import OpenAI from "openai";

const GOOGLE_API_KEY = process.env.GOOGLE_API_KEY;
const openai = new OpenAI({
    apiKey: GOOGLE_API_KEY,
    baseURL: "https://generativelanguage.googleapis.com/v1beta/"
});

const response = await openai.chat.completions.create({
    model: "gemini-1.5-flash",
    messages: [
        { role: "system", content: "You are a helpful assistant." },
        {
            role: "user",
            content: "Explain briefly(less than 30 words) to me how AI works",
        },
    ],
});

console.log(response.choices[0].message.content);

When running the code, make sure the API key is in the .env file created earlier; Node loads it at runtime via the --env-file flag.

node --env-file=.env run.js

The following response was returned.

AI systems learn from data, identify patterns, and make predictions or decisions based on those patterns.

It's great that we can use other models within the same library.

Personally, I'm happy about this because the OpenAI library makes it easy to edit the conversation history: messages are just a list that you can append to or modify between calls.
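
For example, a multi-turn exchange is just list manipulation. A minimal sketch, reusing the setup from the earlier examples (the prompts are made up for illustration):

import os

from dotenv import load_dotenv
from openai import OpenAI


load_dotenv()
client = OpenAI(
    api_key=os.getenv("GOOGLE_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/"
)

# The history is a plain list that we can append to (or edit) between calls
history = [{"role": "system", "content": "You are a helpful assistant."}]

history.append({"role": "user", "content": "Give me a one-line definition of AI."})
reply = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up question relies on the assistant's previous answer
history.append({"role": "user", "content": "Now shorten that to five words."})
reply = client.chat.completions.create(model="gemini-1.5-flash", messages=history)
print(reply.choices[0].message.content)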
