DEV Community

Gerus Lab

Your AI Tools Are Now a Security Attack Vector — The Vercel Breach Explained

The Vercel security incident disclosed on April 19, 2026 is a masterclass in modern supply chain attacks. And if you're a developer using SaaS tools with Google OAuth, you need to read this.

At Gerus-lab, we work with Web3 protocols, AI systems, and cloud-native SaaS architectures daily. We've seen supply chain threats evolve rapidly — but what happened to Vercel represents a new category of risk that most development teams are completely unprepared for.

Let's break it down.

What Actually Happened

The attack chain looks like this:

  1. March 2026: Context.ai, an AI developer-tooling startup, suffers an AWS breach
  2. The breach exposes OAuth tokens that Context.ai held for its Google Workspace integrations
  3. The subsequent security investigation (conducted by CrowdStrike) misses the stolen OAuth tokens
  4. A Vercel employee had used Context.ai and granted it Google Workspace OAuth access
  5. Attackers replay the stolen OAuth tokens: no password, no MFA prompt, no friction
  6. From the employee's Google Workspace, attackers pivot laterally into Vercel's internal systems
  7. Attackers read environment variables not marked as sensitive across a subset of customer accounts
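Step 5 is the crux, so it's worth seeing why no MFA challenge ever fires. In OAuth 2.0, a long-lived refresh token is exchanged for fresh access tokens with a single POST to the provider's token endpoint; the user is never in the loop. This is an illustrative sketch of that exchange's request body (all values here are placeholders, and nothing is actually sent):

```typescript
// Illustrative only: why a replayed OAuth token needs no password or MFA.
// A refresh token is exchanged for a new access token via one POST to
// Google's token endpoint; the account owner is never prompted.
function buildRefreshRequest(
  clientId: string,
  clientSecret: string,
  refreshToken: string
): URLSearchParams {
  return new URLSearchParams({
    client_id: clientId,         // the legitimate app's identity
    client_secret: clientSecret, // often stolen alongside tokens in a vendor breach
    refresh_token: refreshToken, // the stolen long-lived credential
    grant_type: "refresh_token", // no user interaction, no MFA challenge
  });
}

// An attacker holding these three values can mint access tokens at will:
const replayBody = buildRefreshRequest("app-id", "app-secret", "stolen-token");
console.log(replayBody.get("grant_type")); // "refresh_token"
```

From the provider's perspective this request is indistinguishable from the legitimate app refreshing a session, which is exactly why token theft at a vendor is so quiet.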

The services stayed operational. Your deployments didn't break. But API keys, database connection strings, OAuth secrets — if they weren't marked "sensitive" in Vercel — may have been read.

The New Shape of Supply Chain Attacks

Here's what's different about this attack compared to classic supply chain breaches:

The old model: Attacker compromises an npm package → malicious code ships to millions of users → chaos ensues (see: the 2024 axios compromise, the Log4Shell era).

The new model: Attacker compromises a SaaS app your employee uses → steals OAuth tokens → impersonates authorized apps → walks into your infrastructure through the front door.

No malicious code. No exploit. Just valid credentials being replayed.

The "supply chain" is no longer just your code dependencies. It's the constellation of SaaS apps your team authenticates to with Google or GitHub SSO. Context.ai was the weak link — and it wasn't even a service your organization was running. It was a productivity tool one engineer happened to have on their personal Google account.

This is now the attack surface.

Why AI Tools Are Especially Dangerous Here

At Gerus-lab, we've been building AI-native products for a few years now — from custom GPT-powered agents to Web3 AI integrations on TON and Solana. We're huge believers in AI tooling. But this incident highlights a risk that's growing fast:

Developer AI tools request unusually broad OAuth scopes.

Context.ai needed access to Google Workspace — mail, files, calendar — to be useful. That's the product. But when a tool holds tokens with that scope, a breach at that vendor doesn't just expose their data. It exposes yours.

Compare this to traditional SaaS integrations. A Stripe integration reads payment data. A GitHub integration reads repos. Their scope is narrow and domain-specific.

AI tools are different. They often need:

  • Email access (to summarize conversations)
  • File access (to analyze documents)
  • Calendar access (to schedule things)
  • Browser history or clipboard (for context-aware suggestions)

That's a broad footprint. And every AI tool vendor that holds those tokens is now a potential pivot point for attackers.
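The counter-measure on the integration side is requesting the narrowest scope that still works. As a sketch (the scope strings are real Google API scopes; the client ID and redirect URI are placeholders), a consent URL built with a read-only metadata scope and no offline access looks like this:

```typescript
// Least-privilege sketch: build a Google OAuth consent URL requesting only
// the narrow scopes a tool actually needs. Placeholders: client ID and
// redirect URI. Scope strings are real Google API scopes.
function buildConsentUrl(
  clientId: string,
  redirectUri: string,
  scopes: string[]
): string {
  const params = new URLSearchParams({
    client_id: clientId,
    redirect_uri: redirectUri,
    response_type: "code",
    access_type: "online", // no refresh token issued -- nothing long-lived to steal
    scope: scopes.join(" "),
  });
  return `https://accounts.google.com/o/oauth2/v2/auth?${params}`;
}

// Read-only Gmail metadata instead of full mail + Drive + Calendar:
const consentUrl = buildConsentUrl(
  "YOUR_CLIENT_ID",
  "https://example.com/callback",
  ["https://www.googleapis.com/auth/gmail.metadata"]
);
```

The `access_type: "online"` choice matters here: with no refresh token issued, a vendor breach exposes at most a short-lived access token instead of a replayable long-term credential.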

What We're Doing at Gerus-lab

After analyzing this incident, we're implementing these practices across all our client projects:

1. OAuth App Auditing as a Routine

Go to myaccount.google.com/security → "Third-party apps with account access". Most developers have 30-50+ apps authorized. Do you know what each one does? Revoke anything you don't actively use.

For organizational Google Workspace: Admin Console → Security → API Controls → App Access Control. You can see exactly what OAuth scopes each app holds.
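For larger workspaces, clicking through the admin console doesn't scale. The Admin SDK Directory API exposes each user's authorized apps (via `tokens.list`), and the records can be triaged programmatically. This is a sketch of the triage step only; the broad-scope list is our own heuristic, not a Google recommendation:

```typescript
// Triage sketch for Admin SDK Directory API tokens.list output
// (one record per user per authorized app). BROAD_SCOPES is our own
// heuristic list of high-risk grants, not an official classification.
interface OAuthGrant {
  displayText: string; // app name as shown in the admin console
  scopes: string[];
}

const BROAD_SCOPES = [
  "https://mail.google.com/",                 // full Gmail access
  "https://www.googleapis.com/auth/drive",    // full Drive access
  "https://www.googleapis.com/auth/calendar", // full Calendar access
];

// Return grants worth a human review: anything holding a broad scope.
function flagBroadGrants(grants: OAuthGrant[]): OAuthGrant[] {
  return grants.filter((g) => g.scopes.some((s) => BROAD_SCOPES.includes(s)));
}

const flagged = flagBroadGrants([
  { displayText: "SomeAITool", scopes: ["https://mail.google.com/"] },
  { displayText: "Sentry", scopes: ["https://www.googleapis.com/auth/userinfo.email"] },
]);
console.log(flagged.map((g) => g.displayText)); // ["SomeAITool"]
```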

2. Vercel-Specific: Mark Everything Sensitive

If you use Vercel, go to your project settings right now:

  • Settings → Environment Variables
  • For every variable that contains a secret (API keys, DB credentials, tokens): check "Sensitive"

Sensitive variables are encrypted in a way that prevents even Vercel employees (and apparently, attackers who breach Vercel's internal systems) from reading them. Non-sensitive variables can be accessed by Vercel staff for debugging — and that's what got exposed.

This is a one-time 10-minute task that eliminates a whole attack class.
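If you have many projects, the Vercel REST API (`GET /v9/projects/{id}/env`) lets you audit this instead of clicking through the dashboard. A sketch of the check we run over the fetched records; the "looks like a secret" regex is our own heuristic and should match your naming conventions:

```typescript
// Audit sketch over env var records fetched from the Vercel REST API.
// SECRET_HINTS is our own naming heuristic, not part of Vercel's API.
interface VercelEnvVar {
  key: string;
  type: string; // "plain" | "encrypted" | "sensitive" | ...
}

const SECRET_HINTS = /(KEY|SECRET|TOKEN|PASSWORD|DSN|DATABASE_URL)/i;

// Flag variables that look like secrets but are not marked "sensitive".
function findUnprotectedSecrets(vars: VercelEnvVar[]): string[] {
  return vars
    .filter((v) => SECRET_HINTS.test(v.key) && v.type !== "sensitive")
    .map((v) => v.key);
}

const exposed = findUnprotectedSecrets([
  { key: "OPENAI_API_KEY", type: "encrypted" },
  { key: "STRIPE_SECRET_KEY", type: "sensitive" },
  { key: "NEXT_PUBLIC_APP_NAME", type: "plain" },
]);
console.log(exposed); // ["OPENAI_API_KEY"]
```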

3. Secret Rotation on a Schedule

If you were a Vercel customer during this incident, rotate now:

  • Every API key in your Vercel environment variables
  • Database credentials
  • OAuth client secrets
  • Any third-party service token (OpenAI, Stripe, Resend, Sentry, PostHog...)

Even if Vercel didn't notify you directly, conservative hygiene means rotating any secret that was stored in a variable not marked "Sensitive" — i.e., anything that was staff-readable.
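When rotating, push the new value back marked "sensitive" in the same step, so a rotation never downgrades protection. A sketch of the request we'd build against Vercel's env var endpoint (the project ID is a placeholder, and the `upsert` query parameter and body shape follow our reading of Vercel's REST API docs — verify against the current reference before relying on them):

```typescript
// Sketch: re-upload a rotated secret to Vercel, marked sensitive.
// Placeholder: YOUR_PROJECT_ID. Endpoint path and body shape are our
// reading of the Vercel REST API; check the docs before use.
function buildEnvUpsert(key: string, value: string) {
  return {
    method: "POST",
    url: "https://api.vercel.com/v10/projects/YOUR_PROJECT_ID/env?upsert=true",
    body: {
      key,
      value,
      type: "sensitive",                 // never recreate a secret as plain text
      target: ["production", "preview"], // rotate everywhere the old value lived
    },
  };
}

const req = buildEnvUpsert("OPENAI_API_KEY", "sk-new-rotated-value");
console.log(req.body.type); // "sensitive"
```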

4. ShinyHunters Claim: Take It Seriously

Security researcher @k1rallik claims the threat actor is ShinyHunters (the group behind the 2024 Ticketmaster breach) and that they have Vercel's internal database — including npm publish tokens. This hasn't been officially confirmed by Vercel.

But: if true, the downstream risk is catastrophic. Vercel publishes Next.js (~6M weekly downloads), @vercel/analytics, and other packages. A malicious version of Next.js shipped via compromised npm tokens would be the largest software supply chain attack in history.

Monitor the Vercel security bulletin. Watch your package-lock.json for unexpected version bumps. Consider pinning critical dependencies by hash, not semver.
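The lockfile check above can run in CI. Since npm's `package-lock.json` records both a version and an `integrity` hash per package, a small diff over two snapshots catches a silent bump of a critical dependency (the interfaces below model just the fields we need, not the full lockfile schema):

```typescript
// CI sketch: fail if a critical package's pinned version or integrity
// hash changed between two package-lock.json snapshots. The types model
// only the lockfile fields this check needs.
interface LockEntry {
  version: string;
  integrity: string;
}
type Lockfile = { packages: Record<string, LockEntry> };

function dependencyChanged(before: Lockfile, after: Lockfile, pkg: string): boolean {
  const a = before.packages[`node_modules/${pkg}`];
  const b = after.packages[`node_modules/${pkg}`];
  if (!a || !b) return true; // added or removed also counts as a change
  return a.version !== b.version || a.integrity !== b.integrity;
}

const committed: Lockfile = {
  packages: { "node_modules/next": { version: "14.2.3", integrity: "sha512-aaa" } },
};
const incoming: Lockfile = {
  packages: { "node_modules/next": { version: "14.2.4", integrity: "sha512-bbb" } },
};
console.log(dependencyChanged(committed, incoming, "next")); // true
```

Pairing this with `npm ci` (which refuses to install when the lockfile and `package.json` disagree) closes most of the gap between "pinned by semver" and "pinned by hash."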

5. Treat Every AI Tool Vendor as a High-Risk Third Party

When evaluating any new AI tool — coding assistants, email summarizers, meeting transcribers, document analyzers — ask:

  • What OAuth scopes does it request? Is that scope necessary?
  • Where are my tokens stored?
  • What's their breach disclosure policy?
  • Have they had prior security incidents?

We've started including this as a checklist item in our client onboarding. If a vendor's security posture is unclear, we recommend browser-based alternatives that don't require persistent OAuth grants.

The Bigger Picture: AI Is Expanding the Attack Surface

We're at an inflection point. AI tools are becoming table stakes for development teams. They're genuinely useful — we use several ourselves for code review, architecture analysis, and client communication. But each tool you authorize via OAuth is a new link in your trust chain.

The Vercel incident is a preview. As AI tools proliferate, as they get broader access to developer workflows, the number of high-value OAuth token stores grows. Attackers know this. ShinyHunters and groups like them are actively targeting exactly this kind of infrastructure.

The defense isn't to avoid AI tools. It's to:

  • Audit what you've authorized aggressively
  • Apply least-privilege principles to OAuth scopes
  • Rotate secrets on a schedule, not just after incidents
  • Mark secrets sensitive everywhere the platform allows

At Gerus-lab, we build AI-native and Web3 systems for clients who can't afford breaches — DeFi protocols, healthcare SaaS, B2B platforms. Security architecture isn't an afterthought in our process. It's a first-class requirement from day one.

If you want a security audit of your OAuth exposure, your secret management practices, or your cloud-native deployment pipeline — we've done this for a dozen companies and we know what to look for.

Immediate Action List

Do this today:

  • [ ] Go to myaccount.google.com/security → revoke unused third-party app access
  • [ ] In Vercel: mark all secret env vars as "Sensitive"
  • [ ] Rotate all API keys currently stored in Vercel env vars
  • [ ] Review Vercel Audit Log (Settings → Security → Audit Log) for April activity
  • [ ] Audit your Google Workspace admin console for OAuth app permissions
  • [ ] Pin or audit your Next.js version until the ShinyHunters claim is resolved

The breach happened to Vercel. The lesson is for all of us.


We at Gerus-lab specialize in AI system architecture, Web3 infrastructure, and secure SaaS development. If this incident raised questions about your own security posture, let's talk. We've helped teams across 14+ projects build systems that don't become the next cautionary tale.

Follow our engineering blog for breakdowns of incidents like this, plus tutorials on building secure, scalable systems — gerus-lab.com.
