Originally published on arkensec.com
Most post-mortems this week will frame the Vercel breach as a PaaS story. That's the wrong frame. Next.js wasn't compromised. Turbopack wasn't compromised. Vercel's build pipeline and edge infrastructure did their jobs. What got compromised was a single OAuth grant in a corporate Google Workspace, and the blast radius reached customers because "encrypted at rest" doesn't help when the attacker is holding a live admin session.
The breach sat undetected for roughly 22 months — mid-2024 to April 19, 2026 — because an OAuth supply-chain attack through an AI SaaS routed around Vercel's hosting defenses entirely.
What actually happened, in order
The chain, stitched together from Vercel's bulletin, CEO Guillermo Rauch's disclosure thread, Context.ai's own advisory, and early independent write-ups:
Step 1 — The infostealer.
A Context.ai employee was infected with Lumma Stealer in early 2026. The infostealer harvested Google Workspace credentials plus keys for Supabase, Datadog, and Authkit. Nothing in that initial pile was Vercel-specific. The attacker pivoted from there into Context.ai's consumer product — the "AI Office Suite" — and got their hands on OAuth tokens that consumer users had granted to the suite.
Step 2 — The pivot.
One of those consumer users was a Vercel employee who had signed up with their corporate Vercel Google Workspace identity and clicked "Allow All" on the OAuth consent screen. The attacker used that token to move into Vercel's Workspace, took over the employee's account, and from there reached Vercel's internal environments. They read customer environment variables that weren't flagged "sensitive."
Step 3 — The exposure window.
Public timelines put the initial OAuth compromise at roughly mid-2024. That's a ~22-month detection gap. Stolen data was listed on BreachForums for $2M by an account claiming the ShinyHunters name; the real ShinyHunters have denied involvement. Attribution is still shaking out.
The MITRE ATT&CK mapping here is pretty clean: initial access via T1078.004 (Valid Accounts: Cloud Accounts), credential access via T1555.003 (Credentials from Password Stores: Credentials from Web Browsers, the classic infostealer harvest), and lateral movement through T1550.001 (Use Alternate Authentication Material: Application Access Token).
Why "not marked sensitive" is doing so much work
Vercel's environment variable model has two tiers. Variables flagged sensitive are encrypted at rest and unreadable even from internal admin sessions — not even Vercel support can read them back. Everything else is readable by anyone with sufficient control-plane access.
That second bucket is where most teams leave their secrets, because nobody reads the onboarding docs that closely.
Database URLs. Stripe secret keys. OpenAI and Anthropic tokens. Webhook signing secrets. S3 access keys. On the seed-stage SaaS teams I've looked at, the split skews heavily toward the non-sensitive tier. Most founders I've asked didn't know the sensitive flag existed until this week.
I'll say the thing other write-ups won't: Vercel's non-sensitive tier is a UX footgun. The default should be encrypted-at-rest, opt-out to read. Anything else assumes every operator understands the threat model before they paste a key into a form field, and most don't.
If a credential lived in a non-sensitive Vercel env var any time before April 19, 2026, it was readable from within a compromised admin session. Whether yours specifically was read is what Vercel is still investigating. Whether it could have been is already settled.
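To see how your own variables split across the two tiers, pull the project's env list from Vercel's API and filter on the `type` field. Here's the filtering step sketched against a sample payload, because the exact response shape (an `envs` array with `key`, `type`, `target`) is my assumption about the projects env endpoint, not something to take on faith:

```shell
# Sample payload standing in for a GET /v9/projects/:id/env response
# (field names and type values are assumptions, verify against your account)
cat > envs.json <<'EOF'
{"envs":[
  {"key":"DATABASE_URL","type":"encrypted","target":["production"]},
  {"key":"STRIPE_SECRET_KEY","type":"sensitive","target":["production"]},
  {"key":"NEXT_PUBLIC_API_URL","type":"plain","target":["production","preview"]}
]}
EOF

# Everything NOT in the sensitive tier was readable from an admin session
jq -r '.envs[] | select(.type != "sensitive") | "\(.key)\t\(.type)"' envs.json
```

On a real project, point the same jq filter at the API response and count how many of the survivors are secrets. That number is your rotation list.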
How to check if you were exposed
Vercel has been contacting affected customers directly. If you haven't heard from them, you're probably in the unaffected majority. "Probably" is not "confirmed" on a breach with a 22-month exposure window.
Here's what to run tonight.
1. Pull the Vercel audit log
Team → Settings → Audit Log. Filter for env.read, env.list, and env.getSensitive events from IPs or user agents you don't recognize, especially across March and April 2026.
Vercel's audit log exports as JSON. If you want to grep it locally:
```shell
# Download from Vercel dashboard as JSON, then:
jq '.[] | select(
      .type == "env.read" or
      .type == "env.list" or
      .type == "env.getSensitive"
    ) | {time: .createdAt, user: .user.email, ip: .meta.ipAddress, type: .type}' vercel-audit-log.json
```
Anything from an IP that isn't your office range or a known CI/CD provider is worth investigating.
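A quick triage follow-up: count events per source IP, so a single unfamiliar address with hundreds of reads jumps out instead of hiding in the noise. Sketched against a sample log, since the field layout (`meta.ipAddress`) is the same assumption as the query above:

```shell
# Sample export standing in for the real audit log (field names assumed)
cat > audit-sample.json <<'EOF'
[{"type":"env.read","createdAt":1,"user":{"email":"a@example.com"},"meta":{"ipAddress":"203.0.113.7"}},
 {"type":"env.list","createdAt":2,"user":{"email":"a@example.com"},"meta":{"ipAddress":"203.0.113.7"}},
 {"type":"env.read","createdAt":3,"user":{"email":"b@example.com"},"meta":{"ipAddress":"198.51.100.2"}}]
EOF

# Events per source IP, busiest first
jq -r '.[].meta.ipAddress' audit-sample.json | sort | uniq -c | sort -rn
```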
2. Check provider-side logs for every credential that ever lived in Vercel
For AWS — look up any access key that was ever in a Vercel env var:
```shell
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE \
  --start-time 2024-06-01 \
  --end-time 2026-04-20 \
  --query 'Events[].{Time:EventTime,EventName:EventName,Source:EventSource}' \
  --output table

# The source IP isn't a top-level field in the lookup-events output; it lives
# inside the CloudTrailEvent JSON blob. Extract it with jq:
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE \
  --start-time 2024-06-01 --end-time 2026-04-20 --output json |
  jq -r '.Events[].CloudTrailEvent | fromjson | .sourceIPAddress' | sort -u
```
For Stripe, pull recent API key usage from the dashboard under Developers → Logs, and filter for requests from unexpected IPs or unusual user agents. You can also list recent account events over the API and scan for activity you don't recognize (Stripe doesn't emit a dedicated event for key usage, so this is a coarse check):

```shell
curl -G https://api.stripe.com/v1/events \
  -u "sk_live_YOUR_KEY:" \
  -d "created[gte]=1717200000" \
  -d "limit=100" | jq -r '.data[] | "\(.created)\t\(.type)"'
```
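The `created[gte]` value is a Unix timestamp. Rather than hardcoding one, compute it for the start of the exposure window:

```shell
# GNU date, in UTC. On macOS: date -j -u -f "%Y-%m-%d" 2024-06-01 +%s
date -u -d 2024-06-01 +%s
# prints 1717200000
```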
3. Grep your repo history for leaked artifacts
.vercel/ directories sometimes slip past .gitignore. So do env files that got committed once and then deleted — deletion doesn't remove them from git history.
```shell
# Check for .vercel/ artifacts in history
git log --all --full-history --oneline -- ".vercel/*"

# Search for Vercel tokens committed anywhere in history
git log -p --all -S "VERCEL_TOKEN" -- .

# Broader search for env-style names: -G matches diff content
# (--grep only searches commit messages, which isn't what you want here)
git log -p --all -G "NEXT_PUBLIC_" -- .

# Check if .env was ever committed
git log --all --full-history --oneline -- ".env" ".env.local" ".env.production"
```
If any of those return results, the secret was public the moment that commit was pushed, regardless of what happened with Vercel.
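Path-based searches miss secrets that were committed under an unexpected filename. As a complement, you can grep every commit's tree for secret-shaped strings; a sketch using two illustrative patterns (Stripe live keys, AWS access key IDs) — a dedicated scanner like gitleaks covers far more:

```shell
# Grep every commit's full tree, not just current paths.
# Output lines look like <commit>:<path>.
git rev-list --all | while read -r commit; do
  git grep -I -l -E 'sk_live_[A-Za-z0-9]+|AKIA[0-9A-Z]{16}' "$commit" -- 2>/dev/null
done | sort -u
```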
4. Check for NEXT_PUBLIC_ variables that shipped to the browser
This one catches people off guard. Any variable prefixed NEXT_PUBLIC_ gets bundled into your client-side JavaScript and served to every visitor. It doesn't matter whether it was marked sensitive in Vercel — it was already public.
```shell
# In your built output
grep -r "NEXT_PUBLIC_" .next/static/ | grep -v ".map" | head -20

# Or check what actually shipped: pull chunk URLs out of the served HTML
# (curl doesn't expand shell-style * in URLs), then scan each chunk
curl -s https://yourdomain.com/ |
  grep -oE '/_next/static/chunks/[^"]+\.js' | sort -u |
  while read -r path; do curl -s "https://yourdomain.com$path"; done |
  grep -oE 'NEXT_PUBLIC_[A-Za-z0-9_]+' | sort -u
```
If you find API keys in there, rotate them immediately. That's a separate problem from the Vercel breach, but it's the same class of exposure.
The actual fix: lock down your Google Workspace OAuth grants
The check that would have caught this breach upstream isn't a Vercel setting. It's a Google Workspace admin review.
Open your Workspace admin console: Security → API Controls → App access control.
You'll see every third-party app that has OAuth access across your organization. For each one, ask:
- Do I have a written justification for why this app has these scopes?
- Is this app still actively used?
- Did an admin approve this, or did an employee click through a consent screen?
Revoke anything you can't justify. Then set new OAuth grants to require admin approval:
Admin Console → Security → API Controls → App access control → Settings → Require admin approval for all third-party apps
That single config change closes the vector that caught Vercel. It's annoying — employees will file tickets when their new AI tool doesn't connect — and that friction is the point.
For apps you do approve, enforce least-privilege scopes. "Allow All" is never a reasonable choice for a corporate identity. An AI writing assistant doesn't need https://mail.google.com/ scope. If it's asking for it, that's a red flag about the vendor's architecture, not just their consent screen copy.
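If you want to sanity-check one specific grant, Google's tokeninfo endpoint (`https://oauth2.googleapis.com/tokeninfo?access_token=...`) returns the token's space-separated scope string. The filtering step looks like this; the blocklist is my illustrative policy for "over-broad," not any official Google list:

```shell
# Scope string as returned by tokeninfo (value here is illustrative)
SCOPES="openid email https://mail.google.com/ https://www.googleapis.com/auth/drive"

# Flag scopes that grant full-mailbox, full-Drive, or admin access
for s in $SCOPES; do
  case "$s" in
    https://mail.google.com/|*/auth/drive|*/auth/admin.*) echo "OVER-BROAD: $s" ;;
  esac
done
```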
You can also enumerate current OAuth grants programmatically with the Admin SDK:
```shell
# Requires domain-wide delegation and admin credentials
# List all third-party apps with OAuth access
gam all users show tokens
```
GAM is the open-source Google Workspace admin CLI — if you're managing Workspace at any scale and you're not using it, you're doing it the hard way.
What to change this week, in order
1. Rotate and re-flag credentials.
Every deploy token, API key, and database credential that ever lived as a non-sensitive Vercel env var needs rotation. When replacements go back in, mark them sensitive. The flag exists for a reason.
```shell
# Vercel CLI — set a sensitive env var
vercel env add DATABASE_URL production --sensitive

# Or via API
curl -X POST "https://api.vercel.com/v10/projects/YOUR_PROJECT_ID/env" \
  -H "Authorization: Bearer $VERCEL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "DATABASE_URL",
    "value": "postgres://...",
    "type": "sensitive",
    "target": ["production"]
  }'
```
2. Lock down Google Workspace OAuth.
Admin Console → Security → API Controls → App access control. Revoke apps without written justifications. Require admin approval for future grants. Do this before you do anything else.
3. Turn on deploy notifications.
Vercel → Project Settings → Git → Deploy Notifications. Connect Slack or email — whichever you actually read. If an attacker pushes to production in your name, you want to know in minutes, not when an upstream provider flags a leaked key a week later.
4. Scan your external perimeter.
The Vercel breach exposed secrets stored internally. A separate class of problem is secrets that are already public from your own application — NEXT_PUBLIC_ variables in your JS bundle, unauthenticated webhook endpoints, forgotten staging subdomains. Run a free external scan against your production domain at arkensec.com/scan — 17 checks, about two minutes, no signup. It won't tell you whether Context.ai's attacker touched your specific keys, but it will tell you what's reachable from your perimeter right now.
The shape of the next one
The same attack class — OAuth grant to a third-party SaaS with over-broad scopes, infostealer hits one of their employees, attacker pivots through the token, customer blast radius widens — will keep landing until platform defaults stop treating "Allow All" as a reasonable user choice.
The next breach in this shape won't be Vercel. It'll be whatever agentic AI tool half your engineering team connected to Workspace last quarter. Scattered Spider ran a version of this playbook against Okta in 2023. The tooling got cheaper and the AI SaaS attack surface got much larger. The math isn't complicated.
Some Vercel customers got upstream leaked-credential alerts from GitHub secret scanning and Stripe days before Vercel's official disclosure. The hosting platform was the last to know. That's not a knock on Vercel specifically — it's structural. Your credential providers see anomalous usage before your hosting provider sees anything. Set up those alerts if you haven't.
I got one thing wrong in my initial read of this incident: I assumed the 22-month gap meant the attacker was being careful about access patterns to avoid detection. The more likely explanation is simpler — nobody was looking at the audit log. env.read events from an unfamiliar IP in a control plane most teams never open. That's not sophisticated evasion. That's just a blind spot.
Run the audit log query above. Check your Workspace OAuth grants. Rotate the credentials. Two hours of work is cheaper than finding out about your exposure through someone else's incident report.