TL;DR:
This "missing manual" shows you how to upgrade code from the "old" Gemini 1.0/1.5 days. It's also for those new to the API because it collates various "Hello World!" samples together into one place, regardless of what platform you use. That's right, Google makes the Gemini API available from two completely different places! This post is both a beginners' guide and a migration guide you won't find in Google's documentation. Now Google did the right thing recently by unifying under a single client library for both platforms, and while it's better than the original two platforms and two libraries, old samples live forever online, and worse, your vibecoder LLMs were trained on them! This post gives you a solid understanding of how the old libraries worked plus how to use the current ones. Most importantly, you'll have the knowledge to modernize old code, whether written by humans or LLM-generated.
UPDATE (Aug 2025): This post does not cover Nano Banana, the Gemini 2.5 Flash Image model, which has advanced features not available in previous or other models. It can effectively "make changes" to existing images, blend artifacts from multiple images, and allow for continuing edits. Code samples and features coming in a future blog post. Developers can access the Gemini 2.5 Flash Image preview model today via `gemini-2.5-flash-image-preview`.
Introduction
Welcome to the blog covering Google APIs for Python (and sometimes Node.js) developers. You'll learn how to code a variety of APIs from different product families (see below), for new or existing applications (including MCP servers & agents):
- Google Cloud/GCP (AI/ML, serverless)
- Google Workspace/GWS (Drive, Docs, Sheets, Gmail, etc.)
- YouTube
- Maps
- Generative AI with Gemini (this series)
- Credentials (API keys, OAuth client IDs, service accounts)
We return to Gemini again in this post, covering the updated client libraries coinciding with the 2.0 release.
Background
Besides model updates, Google has tweaked the way developers access the Gemini API since last year, all for the better I think. For long-time API users, we'll "modernize" old library code samples for 2.0/2.5, and for everyone else, we'll get you started using the API with the current library today!
In the original post, I griped that Google makes the API accessible from two different platforms:
While that hasn't changed, it's clear each has a different purpose:
- Google AI: Experimenting, free tier/lower cost, lower barrier-to-entry, hobbyists/students
- Vertex AI: Production AI workloads, existing GCP customers adding/considering AI capabilities
What's changed?
So why this post? A couple of things have changed since then:
- Updated Gemini models
- Updated (old) and new API client libraries
Let's start with the updated models, recognizing the elephant in the room is the 2nd one. I'll discuss both briefly then dive into a full migration guide featuring code for both GAI and GCP using the old/discrete client libraries and also the current/unified client libraries.
Updated Gemini models
All models undergo periodic updates, and Gemini is no exception: 1.0 was released at the end of 2023. A few months later, we got 1.5; 2.0 came out just before the end of 2024, and earlier this year, we got the reasoning 2.5 models. Things change quickly in the world of AI, and the API needs to keep up.
I updated the original post (simple text queries) samples from 1.0 to 1.5 without drama, then covered other use cases like multimodal support, chat (multi-turn conversations), etc. Basic use cases like this only require a change of model name in the samples, so you're good-to-go even when models are updated.
That let me focus on important new features of the 2.0 models, like generating audio and images. The 2.0 release coincided with the new consolidated client library. While that was great, I knew I had to discuss the old libraries and migration at some point, so I "kicked the can down the road," and here we are.
Updated (old) and current/new API client libraries
The good news: unified client library
It's confusing when an API is accessible from two different platforms (from the same company, no less), but that's where we find ourselves today: a pair of (different) client libraries, twice the number of code samples, documentation, open source repos, the whole lot. This duality results in a less-than-optimal UX (user experience), especially for those who aren't aware it's available from two platforms and wonder why a web search or vibecoding request returns code samples or documentation for one, then the other, unpredictably, if the web query or LLM prompt isn't specific enough.
Over time, Google realized maintaining separate libraries wasn't a good idea, requiring near-duplicate engineering effort. The solution is a single, unified client library supporting both GAI & GCP so all users get a consistent experience. In the end, this is a good thing, for code reuse & maintenance as well as allowing users to transition between platforms as needed, say when going from GAI for dev to GCP for production. There are still minor differences using the unified library on both platforms as we'll soon see, but still, it's a great improvement. Thanks Google!
The bad news: users uninformed, old library samples linger, etc.
The new library went public towards the end of 2024, with the initial docs coming a month later. No public announcement was made for either, however. Some words about the new library, deprecation of the old ones, migration guides, timeline, etc., would have been helpful. Users are redirected to the new library docs, but no notice about deprecating the old ones appeared until a mid-2025 `README` update. At least there's some closure now, and those are some of the reasons why this post exists.
More bad news: although Google removes old samples from their docs, that has no effect on what people have written and posted online. Those will live forever. Even worse, those samples were and are used to train the LLMs your favorite vibe-coding tool uses. Old sample code and poorly-trained LLMs worsen the UX and simply cause user confusion. My other reason for this "public service announcement:" to future-train LLMs so they know how to produce modern code samples, and fix old samples written by humans or incorrectly-generated code by (other) LLMs.
If this story of old vs. new libraries sounds familiar, it's essentially the exact same topic as my post from last month. The difference? This is another Google product.
Migration guides? Inconsistent. Incomplete. Where?
For that (auth libraries) deprecation, users have no recourse as there's no migration guide at all (and why that post exists). For this Gemini API library update, Google didn't announce one but produced two migration guides, and neither are easy to find, so here are some convenience links:
Both guides have "before" and "after" code samples, which is very useful. Unfortunately, they're inconsistent and incomplete. Examples (bugs/feedback already filed):
- GAI 1: in the "API access" section at the top, the "before" sample shows how to request a specific model (`'gemini-1.5-flash'`), but where is that in the "after"?
- GAI 2: the Python "Authentication" section states: "The old SDK handled the API client object implicitly." It should add that you had to pass in the API key explicitly, whereas in the current library, it's optional, and you can opt to use `GEMINI_API_KEY` instead. The Python "Generate content" section states: "Previously, there were no client objects, you accessed APIs directly through `GenerativeModel` objects." So yes, you do get a `genai.Client` object with the new library and make calls with it, but the `GenerativeModel` objects in the old library served the same purpose: letting you make API calls, so this shouldn't have been called out. To better "match" both samples, they should've passed in the API key using the new library and mentioned the environment variable option. Otherwise, it's not a true before & after.
- GCP 1: only the Grounding section shows users how to create a client properly using the new library. I understand not repeating it throughout, but if you only want to put it in one place, perhaps somewhere near the top after "Installation" and not randomly most of the way down the page?
- GCP 2: in the Context caching section, Create and Get in the "before" feature calls to `vertexai.init()`, but none of the others do. Perhaps list it once in the Imports section or in every section? (Why be so inconsistent?) That's one thing. Another: where are the equivalent "init" calls in the "after" sections? Create has `client = genai.Client(http_options=HttpOptions(api_version="v1"))`, which is not the same as `client = Client(vertexai=True, project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_LOCATION)`, but that's not in any of the other sections either. I think what's currently there (with `http_options`) is a mistake and should be the latter; the parameters are completely different.
Those are just the few I found. Having "before" and "after" examples is great, but these are inconsistent. They're also incomplete because some have more code than others; they're not giving everyone the whole picture. I'm not asking that they be repetitive, only that they provide enough context.
📝 NOTE: All new human-generated code
Human-generated: All the code below is new and written personally by me. I have a need to be precise and consistent, so vibe-coding isn't a good use case here. In fact, this post is meant to future-train coding LLMs to not use old libraries and/or self-correct what they output if trained on old code.
Even the "old" code is new: The "old" code below is also newly-written in case any samples changed from the original post. After all, the old client libraries themselves could have been updated since that post. What you'll find in the "old" samples here is what your code would look like if you coded it today using the final release of the older libraries.
Prerequisites
I don't know where all you readers are coming from nor where you run (or want to run) your apps (GAI and/or GCP), so I'm going to provide all the code and all the instructions. Regardless of which you use (or both), there are a pair of basic requisites:
- Set auth credentials
- Install required packages
⚠️ Required credentials

- GAI: An API key is required. Follow the instructions below. The GAI scripts will not run without an API key, which should be either assigned to the `GEMINI_API_KEY` environment variable or saved to a local file, `.env` (Node.js) or `settings.py` (Python).
- GCP: User auth via ADC for local dev environments is required to run the old client library code and recommended for current library code. The current library also supports API keys, but user auth/ADC is still recommended (more secure). (For production, use service accounts.)

💰 Cost considerations

- GAI: Google AI has a free tier; see its pricing page for more information.
- GCP: Vertex AI does not have a free tier, and an active billing account is required, so definitely check out its pricing page to find out how much it'll cost to run the GCP scripts.
Code samples table of contents
Code is available in Python 3 & Node.js, the latter as both ECMAScript modules as well as CommonJS scripts, plus corresponding configuration files.
Script applications
To compare like-scripts, the old client library version sits above its current client library equivalent.
Client library | Python | ECMAScript | CommonJS
---|---|---|---
GAI | | |
GenerativeAI (old) | `gem25txt-simple-gai-old.py` | `gem25txt-simple-gai-old.mjs` | `gem25txt-simple-gai-old.js`
GenAI (current) | `gem25txt-simple-gai-cur.py` | `gem25txt-simple-gai-cur.mjs` | `gem25txt-simple-gai-cur.js`
GCP | | |
VertexAI (old) | `gem25txt-simple-gcp-old.py` | `gem25txt-simple-gcp-old.mjs` | `gem25txt-simple-gcp-old.js`
GenAI (current) | `gem25txt-simple-gcp-cur.py` | `gem25txt-simple-gcp-cur.mjs` | `gem25txt-simple-gcp-cur.js`
Configuration files
The config files include old & current client library packages and apply to both platforms.
Description | Python | Node.js
---|---|---
Packages | `requirements.txt` | `package.json`
Settings | `settings_TMPL.py` | `.env_TMPL`
General instructions (GAI & GCP)
Node.js
- Ensure your Node (and NPM) installation is up-to-date (recommend 18+)
- Install all packages (old & new client libraries):
npm i
Python
- Ensure your Python (and `pip`) installation is up-to-date (recommend 3.9+)
- (optional) Create & activate a virtual environment ("virtualenv") for isolation: `python3 -m venv .myenv; source .myenv/bin/activate`
  - For the commands below, depending on your system configuration, you will use one of (`pip`, `pip3`, `python3 -m pip`), but the instructions are generalized to `pip`.
- (optional) Update `pip` and install `uv`: `pip install -U pip uv`
- Install all packages (old & new client libraries): `uv pip install -Ur requirements.txt` (drop `uv` if you didn't install it)
GAI-specific instructions
- Set credentials: Create API key (or reuse existing one).
- Either save the API key to the `GEMINI_API_KEY` environment variable or copy the template for your language, `.env_TMPL` (Node) or `settings_TMPL.py` (Python), to `.env` (Node) or `settings.py` (Python), and assign the key to `API_KEY`.
  - If using the `GEMINI_API_KEY` environment variable, simplify the code to not look in `.env` or `settings.py` before running (use the commented-out line). (This only affects scripts named `gem25txt-simple-gai-cur.*`.)
- Run any of the scripts, e.g., `node gem25txt-simple-gai-old.mjs`, `python3 gem25txt-simple-gai-cur.py`, etc.
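If you want one script to work whether you created `settings.py` or only set the environment variable, a small helper can resolve the key from either source. This is a hypothetical convenience wrapper of my own, not part of the samples; the `resolve_api_key()` name and lookup order are my choices:

```python
import os

def resolve_api_key():
    """Return the Gemini API key from settings.py if present,
    else from the GEMINI_API_KEY environment variable."""
    try:
        from settings import API_KEY  # local config file, if you created one
        return API_KEY
    except ImportError:
        pass
    key = os.environ.get('GEMINI_API_KEY')
    if key:
        return key
    raise RuntimeError('No API key: create settings.py or set GEMINI_API_KEY')
```

You'd then call `genai.Client(api_key=resolve_api_key())`, though note the current library already checks `GEMINI_API_KEY` itself when you instantiate `genai.Client()` with no arguments.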
GCP-specific instructions
- Set credentials: Login with user auth & set ADC for local dev environment.
  - The new client library supports API keys, so if you prefer this, then follow the GAI instructions above.
- Copy the template for your language, `.env_TMPL` (Node) or `settings_TMPL.py` (Python), to `.env` (Node) or `settings.py` (Python), respectively, and set the values for `YOUR_GCP_PROJECT` and `YOUR_GCP_REGION` (more on both later).
  - If using an API key instead of ADC, assign it to `API_KEY` in `.env` (Node) or `settings.py` (Python), and update the code to use it before running (use the commented-out line). (This is only supported by current library samples, e.g., scripts named `gem25txt-simple-gcp-cur.*`.)
  - As a GCP service, you should use service accounts in production. (I don't believe Google has documentation on how to do this, so I may have to do so in an upcoming post. Add a comment below linking to a page in the docs if you know of or find one.)
- Run any of the scripts, e.g., `node gem25txt-simple-gcp-cur.js`, `python3 gem25txt-simple-gcp-old.py`, etc.
Configuration files
Python
google-cloud-aiplatform # GCP: old
google-generativeai # GAI: old
google-genai # GAI & GCP: current
requirements.txt
API_KEY = 'YOUR_API_KEY'
GCP_METADATA = {
'project': 'YOUR_GCP_PROJECT',
'location': 'YOUR_GCP_REGION',
}
settings_TMPL.py
Node.js
{
"dependencies": {
"@google-cloud/vertexai": "^1.10.0",
"@google/genai": "^1.8.0",
"@google/generative-ai": "^0.24.1",
"dotenv": "^17.0.1"
}
}
package.json
API_KEY="YOUR_API_KEY"
GCP_METADATA='{
"project": "YOUR_GCP_PROJECT",
"location": "YOUR_GCP_REGION"
}'
.env_TMPL
For the 3rd-party package files `requirements.txt` (Python) and `package.json` (Node):

- `google-generativeai` -- old GAI client library
- `google-cloud-aiplatform` -- old GCP client library
- `google-genai` -- current client library for both GAI & GCP

The versions listed above for `package.json` are the latest at the time of publication.

For the metadata files, `settings.py` (Python) and `.env` (Node) (appended with `_TMPL` for "template"):

- `API_KEY` -- the API key string you created; required for GAI and optional for GCP; remove it if not applicable
- `GCP_METADATA` -- only for GCP, so delete the entire value if only using GAI
- `YOUR_GCP_PROJECT` -- can be either the project ID or project number but not the project name. See this page in the docs if you're unfamiliar with these project identifiers. All three values are available on this Cloud console page or via this command: `gcloud projects describe PROJECT_ID_OR_NUMBER`.
- `YOUR_GCP_REGION` -- choose either a specific region or the global endpoint, `global`.
- For all variables, you obviously must have valid values required by each platform or those corresponding samples won't run.
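Since placeholder values cause confusing API errors, you could validate `GCP_METADATA` up front before constructing a client. This check (and the `check_gcp_metadata()` name) is my own addition, not part of the samples:

```python
def check_gcp_metadata(md):
    """Sanity-check a GCP_METADATA dict: both keys present and the
    template placeholders actually replaced with real values."""
    required = ('project', 'location')
    missing = [k for k in required if not md.get(k)]
    if missing:
        raise ValueError(f'GCP_METADATA missing values for: {missing}')
    if any(str(md[k]).startswith('YOUR_') for k in required):
        raise ValueError('GCP_METADATA still has template placeholder values')
    return md
```

Calling it as `check_gcp_metadata(GCP_METADATA)` right after the import fails fast with a readable message instead of an auth or endpoint error from the API.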
Comparing code samples
Okay, now that you have a lay of the land, let's compare code samples when migrating from the old client libraries to the current one.
Python
GAI
Old client library (google-generativeai
)
import google.generativeai as genai
from settings import API_KEY
PROMPT = 'Describe a cat in a few sentences'
MODEL = 'gemini-2.5-flash'
print('** GenAI text: %r model & prompt %r\n' % (MODEL, PROMPT))
genai.configure(api_key=API_KEY)
GENAI = genai.GenerativeModel(MODEL)
response = GENAI.generate_content(PROMPT)
print(response.text)
gem25txt-simple-gai-old.py
Current client library (google-genai
)
from google import genai
from settings import API_KEY
PROMPT = 'Describe a cat in a few sentences'
MODEL = 'gemini-2.5-flash'
print('** GenAI text: %r model & prompt %r\n' % (MODEL, PROMPT))
GENAI = genai.Client(api_key=API_KEY)
#GENAI = genai.Client() # if env var GEMINI_API_KEY set
response = GENAI.models.generate_content(model=MODEL, contents=PROMPT)
print(response.text)
gem25txt-simple-gai-cur.py
In the old version, `gem25txt-simple-gai-old.py` imports the `google-generativeai` client library, which does setup with `configure()` then creates a client by instantiating the `GenerativeModel` class with the desired model (name). The current library is more flexible. As you can see in `gem25txt-simple-gai-cur.py`, a general `Client` object is instantiated from the replacement GenAI client library. The model isn't passed in until an actual API call. While it sounds like a lot, a side-by-side peek shows the differences are minor:
GCP
Old client library (google-cloud-aiplatform
)
from vertexai import init, generative_models
from settings import GCP_METADATA
PROMPT = 'Describe a cat in a few sentences'
MODEL = 'gemini-2.5-flash'
print('** GenAI text: %r model & prompt %r\n' % (MODEL, PROMPT))
init(**GCP_METADATA)
GENAI = generative_models.GenerativeModel(MODEL)
response = GENAI.generate_content(PROMPT)
print(response.candidates[0].content.parts[0].text)
gem25txt-simple-gcp-old.py
Current client library (google-genai
)
from google import genai
from settings import GCP_METADATA # API_KEY if using API key
PROMPT = 'Describe a cat in a few sentences'
MODEL = 'gemini-2.5-flash'
print('** GenAI text: %r model & prompt %r\n' % (MODEL, PROMPT))
GENAI = genai.Client(vertexai=True, **GCP_METADATA)
# GENAI = genai.Client(api_key=API_KEY) # use API key from settings.py
# GENAI = genai.Client() # use API key in GEMINI_API_KEY env var
response = GENAI.models.generate_content(model=MODEL, contents=PROMPT)
print(response.text)
gem25txt-simple-gcp-cur.py
The differences for the GCP versions are more stark: the old VertexAI client library version `gem25txt-simple-gcp-old.py` has to be initialized with the GCP project & region along with a chosen model. The current/GenAI version `gem25txt-simple-gcp-cur.py`, via the unified library, is identical to the GAI version if using API key auth. Otherwise, its client requires the GCP project & region, just like the old version, plus a `vertexai` flag set to `True`. The GenAI client library only needs the model when calling the API.
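One nice side effect of the unified library is that platform selection reduces to constructor arguments, so a single script can target either platform. The `client_kwargs()` helper below is illustrative only (my own name and design, not from the samples), sketching how the keyword arguments for `genai.Client()` differ:

```python
def client_kwargs(use_vertex=False, api_key=None, gcp_metadata=None):
    """Build keyword args for genai.Client(): GCP/Vertex AI needs the
    vertexai flag plus project & location; GAI just needs an API key
    (or nothing at all, if GEMINI_API_KEY is set in the environment)."""
    if use_vertex:
        return dict(vertexai=True, **(gcp_metadata or {}))
    return {'api_key': api_key} if api_key else {}
```

With this, `genai.Client(**client_kwargs(use_vertex=True, gcp_metadata=GCP_METADATA))` targets Vertex AI while `genai.Client(**client_kwargs(api_key=API_KEY))` targets Google AI, with the rest of the script unchanged.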
Node.js
GAI
Old client library (`@google/generative-ai`)
import 'dotenv/config';
import { GoogleGenerativeAI } from '@google/generative-ai';
const PROMPT = 'Describe a cat in a few sentences';
const MODEL = 'gemini-2.5-flash';
console.log(`** GenAI text: '${MODEL}' model & prompt '${PROMPT}'\n`);
const GENAI = new GoogleGenerativeAI(process.env.API_KEY);
const model = GENAI.getGenerativeModel({ model: MODEL });
async function main() {
const result = await model.generateContent(PROMPT);
console.log(await result.response.text());
}
main();
gem25txt-simple-gai-old.mjs
Replace the `import`s with these `require()` calls to convert it to CommonJS (all other lines remain identical to the ECMAScript module):
require('dotenv').config();
const { GoogleGenerativeAI } = require("@google/generative-ai");
Current client library (`@google/genai`)
import 'dotenv/config';
import { GoogleGenAI } from "@google/genai";
const PROMPT = 'Describe a cat in a few sentences';
const MODEL = 'gemini-2.5-flash';
console.log(`** GenAI text: '${MODEL}' model & prompt '${PROMPT}'\n`);
const GENAI = new GoogleGenAI({ apiKey: process.env.API_KEY });
// const GENAI = new GoogleGenAI({}); // use API key in GEMINI_API_KEY env var
async function main() {
const response = await GENAI.models.generateContent({
model: MODEL,
contents: PROMPT
});
console.log(response.text);
}
main();
gem25txt-simple-gai-cur.mjs
Replace the `import`s with these `require()` calls to convert it to CommonJS (all other lines remain identical to the ECMAScript module):
require('dotenv').config();
const { GoogleGenAI } = require('@google/genai');
These diffs include a change in the client library used/imported as well as the option of setting the API key in the `GEMINI_API_KEY` environment variable instead of in `.env` for the current version `gem25txt-simple-gai-cur.mjs` vs. the original `gem25txt-simple-gai-old.mjs`. (The diffs are identical for the CommonJS versions save for the syntactic differences between the `import`s and the `require()`s.)
GCP
Old client library (`@google-cloud/vertexai`)
import 'dotenv/config';
import { VertexAI } from "@google-cloud/vertexai";
const PROMPT = 'Describe a cat in a few sentences';
const MODEL = 'gemini-2.5-flash';
console.log(`** GenAI text: '${MODEL}' model & prompt '${PROMPT}'\n`);
const CONFIG = JSON.parse(process.env.GCP_METADATA);
const VERTEXAI = new VertexAI(CONFIG);
const GENAI = VERTEXAI.getGenerativeModel({ model: MODEL });
async function main() {
const result = await GENAI.generateContent(PROMPT);
console.log(result.response.candidates[0].content.parts[0].text);
}
main();
gem25txt-simple-gcp-old.mjs
Replace the `import`s with these `require()` calls to convert it to CommonJS (all other lines remain identical to the ECMAScript module):
require('dotenv').config();
const { VertexAI } = require('@google-cloud/vertexai');
Current client library (`@google/genai`)
import 'dotenv/config';
import { GoogleGenAI } from '@google/genai';
const PROMPT = 'Describe a cat in a few sentences';
const MODEL = 'gemini-2.5-flash';
console.log(`** GenAI text: '${MODEL}' model & prompt '${PROMPT}'\n`);
const CONFIG = JSON.parse(process.env.GCP_METADATA);
const GENAI = new GoogleGenAI({vertexai: true, ...CONFIG});
// const GENAI = new GoogleGenAI({ apiKey: process.env.API_KEY }); // use API key from .env
// const GENAI = new GoogleGenAI({}); // use API key in GEMINI_API_KEY env var
async function main() {
const response = await GENAI.models.generateContent({
model: MODEL,
contents: PROMPT
});
console.log(response.text);
}
main();
gem25txt-simple-gcp-cur.mjs
Replace the `import`s with these `require()` calls to convert it to CommonJS (all other lines remain identical to the ECMAScript module):
require('dotenv').config();
const { GoogleGenAI } = require('@google/genai');
Similar diffs here: an updated client library from the old version `gem25txt-simple-gcp-old.mjs`, and the option of using `GEMINI_API_KEY` in the current version `gem25txt-simple-gcp-cur.mjs`. Both require the GCP project & region, available via `GCP_METADATA` in `.env` (then `CONFIG`), when not using an API key.
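The same consolidation applies on the Node.js side: the choice between platforms comes down to the object passed to the `GoogleGenAI` constructor. A hypothetical helper (my own `clientConfig()`, not from the samples) could centralize that choice:

```javascript
// Build the config object for `new GoogleGenAI(...)`: Vertex AI (GCP)
// needs the vertexai flag plus project & location; GAI needs an API key
// (or an empty object, if GEMINI_API_KEY is set in the environment).
function clientConfig({ useVertex = false, apiKey, gcpMetadata } = {}) {
  if (useVertex) return { vertexai: true, ...(gcpMetadata || {}) };
  return apiKey ? { apiKey } : {};
}
```

For example, `new GoogleGenAI(clientConfig({ useVertex: true, gcpMetadata: CONFIG }))` targets Vertex AI, while `new GoogleGenAI(clientConfig({ apiKey: process.env.API_KEY }))` targets Google AI.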
Wrap-up
This is a beefy amount of content. Here's some of the good and bad news, and why this post exists.
- Good: Google makes Gemini available via API.
- Bad: Google makes the Gemini API available from 2 different platforms, Google AI (GAI) and Vertex AI (GCP). As such, there are 2 different client libraries, docs, and code samples, confusing users.
- Good: Google decides to unify to a single client library, giving users consistency to move across both platforms more easily.
- Bad: Google removes all traces of the old library from their docs and doesn't announce anything to anyone, other than pointing to the new library's docs.
- Good: Google produces a migration guide to move users from the old libraries to current/new combined client library.
- Bad: There are two migration guides, Google doesn't announce either, both are hard to find, and Google originally didn't announce any deprecation plans/dates.
- Good: Google eventually announces a deprecation via a `README` update in the repo.
- Bad: While old code is removed from Google's docs, other old samples live online forever, and all old samples were used to train LLMs. Searching for Gemini API code may lead you to old code. Similarly, vibecoding LLMs may generate old code they were trained on.
- Good: This post has full before-and-after code samples for both GAI & GCP platforms, available in both Python and Node.js (ECMAScript modules and CommonJS scripts). Here's hoping future-trained LLMs produce more current code or can help automate migrating code using old libraries to the current one.
References
Below are various links relevant to this post:
Code samples
- Sample in this post (Python & Node.js)
- Code samples for Gemini posts
- Code samples for all posts
Gemini API (Google AI) general
- General GenAI docs
- API reference
- API SDKs/supported languages
- API quickstart
- Jupyter Notebook QuickStart
- Gemini API pricing (free tier available)
- Google AI migration guide
- GCP Vertex AI for Google AI users
API specific feature guides
Google Cloud/GCP Vertex AI
- Vertex AI home page
- Gemini 2.5 Flash on Vertex AI
- Model versions & lifecycle
- All available Google models
- Vertex AI migration guide
- QuickStart page
- QuickStart code
- Google AI for GCP Vertex AI users
Gemini 2.0 & 2.5 models
Other Generative AI and Gemini resources
- Gemini home page
- Gemini accessible from OpenAI Library (post)
- Confused about @google/generative-ai, @google/genai, and all hosted repos forum post
Other relevant content by the author
WESLEY CHUN, MSCS, is a Google Developer Expert (GDE) in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling "Core Python" series, co-author of "Python Web Development with Django", and has written for Linux Journal & CNET. He's currently an AI Technical Program Manager at Red Hat focused on upstream open source projects that make their way into Red Hat AI products. In his spare time, Wesley helps clients with their GCP & GWS API needs, App Engine migrations, and Python training & engineering. He was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for serverless migration and GWS developers. Wesley holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide. Follow he/him @wescpy on Tw/X, BS, and his technical blog. Find this content useful? Contact CyberWeb for professional services or buy him a coffee (or tea)!