Google API Keys Weren't Secrets—Until Gemini Broke Everything
Google spent fifteen years telling developers that API keys aren't secrets. Their documentation literally instructs you to paste them into HTML. Firebase's security checklist explicitly states it. Maps JavaScript tutorials show it as best practice. Then Gemini dropped and retroactively turned fifteen years of following instructions into a security disaster.
Here's the thing: Google Cloud uses a single key format (the AIza... prefix) for two completely different purposes: public project identification and sensitive API authentication. When the Gemini API gets enabled on a project, every API key in that project—including the ones you embedded in client-side code years ago—silently gains access to private Gemini endpoints. No warning. No email. No opt-in.
I've been saying "keys are not credentials" for years. That's the whole point of Google's design: API keys are for billing and routing, not secrets. But Gemini fundamentally broke that model without bothering to tell anyone who'd followed the rules.
The Silent Privilege Escalation
Here's how it happens in practice:
Three years ago: Your team creates a Google Cloud project for Maps. You generate an API key, paste it into your website's JavaScript, and ship it. Google told you this was fine. It was fine.
Last month: Someone on your team enables the Gemini API for an internal AI prototype. They don't touch any existing keys. They don't think about keys. Why would they?
Right now: The attack surface just expanded. Anyone visiting your website can view source, grab that Maps key, and use it against Gemini.
The key never changed. The code never changed. But the security posture did—silently.
This isn't a theoretical attack. The exploitation path is straightforward, and the payoff for attackers is substantial. They don't need sophisticated tooling or knowledge of your internal architecture. They just need to notice the key pattern and know where to point it.
The attack is trivial:
```shell
# Grab the key from your website's source code
API_KEY="AIzaSy..._from_your_maps_embed"

# Check if it works against Gemini
curl -s "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY"
```
If you get a JSON response instead of a 403, you've got access. No authentication challenges. No MFA prompts. Just the key that's been sitting in your HTML since 2023.
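A 200 and a 403 mean very different things here, so it helps to branch on the HTTP status code rather than eyeball the response body. Below is a minimal sketch: `classify_key_status` is a helper name I'm inventing for illustration, and the actual curl invocation is left as a comment so the sketch runs without network access.

```shell
# Illustrative helper: map an HTTP status code from the Gemini endpoint
# to a human-readable verdict.
classify_key_status() {
  case "$1" in
    200) echo "LIVE: key has Gemini access" ;;
    403) echo "restricted: key is blocked from Gemini" ;;
    400) echo "invalid: key was rejected outright" ;;
    *)   echo "unclear: got HTTP $1, inspect the response body" ;;
  esac
}

# In a real check you would capture the status with:
#   status=$(curl -s -o /dev/null -w "%{http_code}" \
#     "https://generativelanguage.googleapis.com/v1beta/files?key=$API_KEY")
# Here we demonstrate the classification on sample codes instead:
classify_key_status 200
classify_key_status 403
```

The `-o /dev/null -w "%{http_code}"` pattern discards the body and prints only the status, which makes the check easy to wire into a loop over many keys.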
What This Actually Exposes
The stakes here aren't just API calls you didn't authorize. When you have a valid Gemini API key, several sensitive endpoints become accessible:
- /files: Lists and retrieves uploaded datasets and documents. This means PDFs loaded for analysis, CSVs of customer data, internal meeting notes—whatever your organization fed the model.
- /cachedContents: Retrieves cached conversation history. Often includes actual user queries and the model's responses, which can contain internal knowledge or sensitive business logic.
- /tunedModels: If your organization fine-tuned models, this endpoint reveals them—potentially exposing proprietary techniques or training data.
- /models: Returns the list of available models, confirming billing status and API access levels.
The attacker never touches your infrastructure. They never bypass a firewall. They just scraped your public-facing code.
And the billing impact isn't theoretical. Depending on the model and context window, a motivated actor can burn through thousands of dollars in a single day. They can also exhaust your quotas, taking down legitimate services. Denial of service as a service.
But the data exposure is worse. Imagine your legal team uploaded contract PDFs for analysis. Or your product team uploaded customer feedback spreadsheets. Or your HR team loaded policy documents. All of that is now accessible to anyone who can copy-paste your Maps API key from your website's source code.
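The four endpoints above can be checked in a single loop. This sketch only composes the URLs and prints what it would probe (the curl call is commented out so the sketch runs offline, and $API_KEY is a placeholder, not a real key):

```shell
# Sketch: enumerate the sensitive endpoints an exposed key could reach.
# The endpoint list comes from the section above.
BASE="https://generativelanguage.googleapis.com/v1beta"
ENDPOINTS="files cachedContents tunedModels models"

for ep in $ENDPOINTS; do
  echo "would probe: $BASE/$ep?key=\$API_KEY"
  # curl -s "$BASE/$ep?key=$API_KEY" | head -c 200
done
```

This is exactly why the exposure is so cheap for an attacker: four GET requests, no authentication flow, no tooling beyond curl.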
The Scale is Absurd
Truffle Security scanned the November 2025 Common Crawl dataset—that's about 700 terabytes of publicly scraped webpages. They found 2,863 live Google API keys vulnerable to this exact privilege escalation.
The victims list reads like a who's who of "should know better": major banks, security vendors, global recruitment platforms, and most ironically, Google itself.
Google had a key embedded in a public product page that's been live since at least February 2023. The Internet Archive confirmed this. That key was deployed for Maps—public use case, zero sensitivity. When Gemini hit, that same key silently gained full API access. Truffle researchers demonstrated this by hitting the /models endpoint and getting back a 200 OK.
This wasn't a one-off. The pattern repeated across industries. E-commerce sites with Maps keys suddenly exposing customer data processing pipelines. Healthcare providers with public keys gaining access to document analysis. Financial services with billing identifiers turned into data leak endpoints.
If the vendor's own engineers fell into this trap, expecting every developer to navigate it correctly is setting people up to fail.
Why This Breaks Everything
This violates two fundamental security principles, and understanding both is crucial for grasping the scope of the problem:
CWE-1188: Insecure Defaults
When you create a new API key, it defaults to "Unrestricted." This means if any sensitive API is enabled on the project—Gemini, Vision AI, whatever—the key can access it. The UI shows a warning, but the architectural default is wide open. A warning label is not a security control.
The problem isn't that the restriction mechanism doesn't exist. Google actually allows you to limit keys to specific APIs and domains. The problem is the default assumption: if a key exists, it should have access to everything in the project. This made sense when all accessible APIs were public-facing. It stopped making sense the moment Google introduced APIs that handle private data.
CWE-269: Incorrect Privilege Assignment
This is retroactive privilege expansion. A key designed for public use (Maps) gains private capabilities (Gemini) without the owner's knowledge. The key didn't change, but its permissions did. This is privilege escalation by definition, just on an architectural timeline.
The core architectural failure is obvious in hindsight: Google conflated two fundamentally different security models. Stripe uses publishable keys for client-side code and secret keys for backend auth. The design intentionally separates "safe to leak" from "must protect." Google threw all of that onto a single key type and shipped it.
What's dangerous is how this design decision snowballed. Once Google committed to "keys aren't secrets," they had to maintain that consistency across products. Every new API had to work with the existing key infrastructure. So when they built Gemini—which fundamentally requires secrets—they forced the square peg into the round hole rather than acknowledging that the model had changed.
How This Compares to Other Cloud Providers
It's worth comparing this to how other major cloud providers handle the same problem, because the difference is instructive.
AWS doesn't make this mistake. Their API keys (access keys) are explicitly secrets. You don't embed them in client-side code. Period. When you need public-facing services like S3 or CloudFront, they provide entirely separate mechanisms—presigned URLs, CloudFront signed cookies, or identity-based access through Cognito. The separation is enforced by design, not just encouraged by documentation.
Azure takes a similar approach. Azure Storage uses shared access signatures with explicit expiration scopes. Azure AD handles authentication, not raw credentials. If you want to embed something in client-side code, you get a token with specific permissions and a limited lifetime.
Google is unique in this "keys aren't secrets" philosophy, and Gemini just showed why that approach doesn't scale with sensitive workloads.
The irony runs deep. Google's design made sense for the original use case. Maps keys are for billing and rate limiting. If someone steals them, they hit rate limits. They don't access your data. That's a feature, not a bug. But extending that model to AI endpoints—which store actual data—is like using your WiFi password for your bank account. They look similar (both are authentication strings), but they have wildly different threat models and security requirements.
What You Should Do Right Now
I'm going to be specific here because vague advice doesn't help anyone.
Check if you're affected
```shell
# Find API keys in your repos (this is not a thorough scan, just a quick win)
grep -r "AIza" . --include="*.js" --include="*.html" --include="*.json"

# Also check for base64-encoded versions attackers might use
grep -r "QUl6Y" . --include="*.js" --include="*.html"
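The quick grep above also matches harmless strings that happen to contain "AIza". Google API keys are "AIza" followed by 35 URL-safe characters (39 characters total), so a tighter regex cuts the noise. A sketch, with `scan_for_keys` as an illustrative helper name and a syntactically valid but obviously fake key for the demo:

```shell
# Tighter scan: match the full Google API key shape, not just the prefix.
scan_for_keys() {
  grep -rEn --include="*.js" --include="*.html" --include="*.json" \
    'AIza[0-9A-Za-z_-]{35}' "$1"
}

# Demo against a throwaway directory containing a DUMMY key:
demo=$(mktemp -d)
printf 'var k = "AIzaDUMMYDUMMYDUMMYDUMMYDUMMYDUMMY12345";\n' > "$demo/app.js"
scan_for_keys "$demo"
rm -rf "$demo"
```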
If you find keys in client-side code, check if your project has Gemini enabled:
- Go to Google Cloud Console → APIs & Services → Library
- Search for "Generative Language API" (or any Vertex AI endpoints)
- If it's enabled, your public keys are live
But don't stop there. Check for any other sensitive APIs enabled on the same project: Vision AI, Speech-to-Text, Translation, Custom Models. If any of these are enabled, your public keys have access.
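The console steps can also be scripted: `gcloud services list --enabled` prints every API turned on for a project. The sketch below filters a *sample* of that output (the two sample lines are illustrative, not real project data, and the service-name regex is my guess at a reasonable sensitive-API list):

```shell
# Regex of service names worth flagging (assumption: adjust for your org).
SENSITIVE='generativelanguage|aiplatform|vision|speech|translate'

# In a real audit you would pipe in live output:
#   gcloud services list --enabled --project YOUR_PROJECT
# Sample output stands in for it here:
sample_output='maps-backend.googleapis.com        Maps JavaScript API
generativelanguage.googleapis.com  Generative Language API'

echo "$sample_output" | grep -E "$SENSITIVE"
```

Run this across every project in your org, not just the one you think has AI workloads; the whole point of this incident is that the dangerous project is the one nobody thought about.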
Lock down your keys
For every API key in your project, do this:
- Go to Console → APIs & Credentials → Credentials
- Click the key → Edit
- Under "Application restrictions," either:
  - Set HTTP referrers (`*.yoursite.com/*`) — but remember these can be spoofed via referrer header manipulation
  - Set IP addresses (stronger, but doesn't work for web apps with unknown clients)
- Under "API restrictions," uncheck "Don't restrict key" and ONLY check the APIs this key actually needs
  - If it's a Maps key, restrict it to "Maps JavaScript API" ONLY
- Save. Now check that your application still works.
Here's the part most people miss: after you restrict a key, test your application in production. I've seen teams lock down keys in dev, ship to prod, and discover they broke the live site because prod uses a different key or domain they didn't test.
The nuclear option: Rotate everything
If a key was ever public and your project has ANY sensitive API enabled, assume it's compromised. Don't debate this. Just rotate.
```shell
# The process, in order:
# 1. In the console, archive the old key (don't delete immediately - you might need to roll back)
# 2. Create a fresh key
# 3. Apply restrictions BEFORE you generate any code with it
# 4. Update your code
# 5. Deploy to a test environment
# 6. Verify functionality
# 7. Deploy to production
# 8. Monitor for breakage
# 9. Only AFTER everything works for 24-48 hours, delete the old key
```
Yes, it's painful. Yes, you might get billing disruption while everything sorts out. But the alternative—someone draining your account while accessing your data—is worse.
Consider service accounts for anything sensitive
Service account JSON keys are actual secrets. They're meant for backend use. If you need Gemini access from your application, use a service account and keep the key on your server. Never in the browser.
Better yet: use Workload Identity Federation if you're running on GKE or Cloud Run. It removes the managed key entirely and lets your infrastructure authenticate directly using IAM. No credentials in code, no rotation drama, no leaked-key panic.
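The difference between key auth and token auth is visible in the request shape itself. A sketch contrasting the two (the gcloud line is commented out because it needs live credentials; the `ya29.EXAMPLE` token and both helper names are placeholders for illustration):

```shell
# Key-based auth (what leaked): the credential rides in the URL, visible in
# logs, browser history, and client-side source.
key_style() {
  echo "GET https://generativelanguage.googleapis.com/v1beta/models?key=AIza..."
}

# Token-based auth (service account / Workload Identity): a short-lived
# bearer token in a header, minted server-side. In production you'd fetch it:
#   TOKEN=$(gcloud auth application-default print-access-token)
token_style() {
  local token="${1:-ya29.EXAMPLE}"   # placeholder token shape
  echo "GET https://generativelanguage.googleapis.com/v1beta/models"
  echo "Authorization: Bearer $token"
}

key_style
token_style
```

A leaked bearer token expires in about an hour; a leaked API key is valid until someone notices and rotates it. That gap is the whole argument for moving sensitive calls behind a server.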
The Fix Google Should Implement
I don't expect Google to rewrite their entire key infrastructure overnight. But they need to address this systematically, and here's what that looks like:
1. Separate key types
One key format for public identifiers, another for privileged APIs. The AIza prefix can stay for Maps and other public services. Create a new GOOG_SECRET or similar for sensitive APIs. Make it impossible to use the wrong key type with the wrong service. This is what Stripe does, and it's not exactly rocket science.
2. Explicit opt-in for retroactive access
When enabling Gemini (or any sensitive API) on a project with existing public keys, prompt developers explicitly: "This will grant access to sensitive data for ALL existing keys in this project. Do you want to proceed, or would you like to review and possibly revoke existing keys first?"
Force the decision. Don't let it happen silently.
3. Default deny for new keys
New keys should default to no APIs, with explicit opt-in per service. The current "unrestricted" default is dangerous. If you create a key, you should have to intentionally grant each API—no accidental access.
4. Notifications when permissions expand
Email or alert developers when a key's permissions change. "Key X in project Y now has access to Generative Language API. If this wasn't intentional, click here to revoke."
5. Project-level security defaults
Allow organizations to set default policies: "In my org, new keys are restricted to specific APIs unless explicitly overridden." Give security teams a way to enforce standards without reviewing every key creation.
Google has started addressing this—they built an internal pipeline to discover leaked keys and began restricting access. But the fundamental design flaw remains. Until they separate "billing identifier" from "authentication credential," this problem will repeat with each new sensitive service.
The Bigger Lesson
This isn't just about Google. It's about how we handle deprecation and privilege expansion in cloud services.
When you retroactively change what a credential can access, you're not adding features—you're expanding attack surfaces. And when credentials that were explicitly "safe to leak" become secrets, you've violated the implicit contract with every developer who followed your documentation.
I've seen this pattern elsewhere. AWS credentials that used to be permissive getting scoped down unexpectedly. Azure AD tokens with new scopes being granted without opt-in. Auth0 rules changing mid-deployment. The common thread: changing the security contract without resetting expectations.
We treated these keys as billing tokens because Google told us to. Now we're supposed to treat them as secrets because Gemini needs them. That's not a security issue—that's a trust issue.
The lesson here isn't "Google's API keys are dangerous." The lesson is: when a vendor tells you something about security—anything about security—write it down. Because five years from now, when they quietly change the rules, you'll need that documentation to prove you did what you were told.
The next time this happens—and it will happen—you'll want to know exactly what documentation said, when it changed, and who made the decision. Security is about predictability. When that breaks, everything is at risk.
Actionable checklist:
- [ ] Search all repos for `AIza` patterns (including base64 variants like `QUl6Y`)
- [ ] Audit every Google Cloud project for enabled Gemini or other sensitive APIs
- [ ] Restrict all API keys to specific services and referrers/IPs
- [ ] Rotate any key that's ever been public, especially on projects with sensitive APIs
- [ ] Use service accounts for backend-only access, preferably with Workload Identity Federation
- [ ] Monitor Google Security Bulletins—the quiet changes are the dangerous ones
- [ ] Document why each key exists and what it's supposed to access
- [ ] Set up automated scanning for leaked secrets in your repositories
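The last checklist item can be wired into a pre-commit hook so keys never land in history in the first place. A minimal sketch (save as `.git/hooks/pre-commit` and make it executable; the helper names are illustrative, and the demo runs on plain strings instead of a real repo):

```shell
# Reject commits whose staged additions match the Google API key pattern.
block_google_keys() {
  # Scan staged changes only; added lines start with '+'
  if git diff --cached | grep -E '^\+.*AIza[0-9A-Za-z_-]{35}' >/dev/null; then
    echo "ERROR: possible Google API key in staged changes" >&2
    return 1
  fi
  return 0
}

# Demo of the matching logic on plain strings:
check_line() {
  echo "$1" | grep -qE 'AIza[0-9A-Za-z_-]{35}' && echo "BLOCKED" || echo "ok"
}
check_line 'const k = "AIzaDUMMYDUMMYDUMMYDUMMYDUMMYDUMMY12345"'
check_line 'const msg = "hello"'
```

A hook like this catches the cheap mistakes; pair it with a full-history scanner for anything already committed, since deleting a key from the current tree does not remove it from git history.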
The era of "API keys aren't secrets" is over. Treat every AIza... as compromised until proven otherwise. And when the vendor changes the rules, don't assume they'll tell you—assume you need to find out.