Vercel disclosed a security incident today. The short version: a third-party AI tool (Context.ai) used by a Vercel employee was compromised, the attacker took over the employee's Google Workspace account, and used that access to read some Vercel environments and environment variables not flagged as "sensitive."
I checked my dashboard expecting the all-clear.
One of my env vars had a "Need to Rotate" tag. I was in the affected subset.
Here's the full cleanup, in the order I did it, with what I'd do differently.
What the tag looks like
Vercel surfaced this in two ways:
- The affected subset gets an email from Vercel security
- The dashboard shows a yellow "Need to Rotate" badge on the specific env var
I didn't get the email (or it went to a spam folder I didn't check). The dashboard tag is what told me.
Lesson already learned: the dashboard is the authoritative source during an incident. Don't wait for the email.
The 10-minute recovery
Step 1: Do NOT revoke the old key first
I almost did. If you revoke before you've cut over, your production endpoint dies between revoke and replacement.
Order matters: new key → update env → verify → revoke old.
Step 2: Generate a new key
Go to the provider (in my case, the email API vendor whose key was exposed). Create a new key with the same permissions as the old one. Name it with the date — vendor-prod-20260420 — so future-me knows what it was for.
Copy the key immediately. Most providers show it exactly once.
Step 3: Update the Vercel env var
In Vercel dashboard: Project → Settings → Environment Variables. Find the compromised one. Edit.
Paste the new value.
And here's the critical part — toggle "Sensitive" on. If your env var was marked Sensitive before the incident, Vercel's bulletin says those values were stored in a way that prevented them from being read. The attacker only had access to non-sensitive ones.
Small gotcha: Sensitive env vars can't exist in the Development environment. Just Production + Preview. That's fine for most setups — you use .env.local for dev anyway.
Step 4: Redeploy
An env var change does not automatically trigger a deployment. You have to redeploy manually or the new value won't hit your running production.
Deployments tab → latest production → ⋯ → Redeploy. Don't reuse the build cache.
Two minutes later, production is running the new key.
Step 5: Verify it works
Hit your endpoint with a test request. In my case, I did a real newsletter signup from the public site with a test email. Got a 200 back, saw the test contact in the email vendor's dashboard. New key live.
If you skip this step and the new key has a typo or permission gap, you won't know until the next real user hits the broken endpoint.
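The verification step can be scripted instead of clicked. A minimal sketch, assuming a hypothetical `/api/subscribe` endpoint that returns 200 on success; your route and payload will differ:

```python
import json
import urllib.error
import urllib.request

def verify_endpoint(url: str, payload: dict, timeout: float = 10.0) -> int:
    """POST a JSON test payload and return the HTTP status code, so a
    bad key or permission gap surfaces now, not on the next real user."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # a non-2xx answer still tells you what broke

# Hypothetical usage against your own production endpoint:
# status = verify_endpoint("https://your-site.example/api/subscribe",
#                          {"email": "test+rotation@example.com"})
# assert status == 200, f"new key not live yet, got {status}"
```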
Step 6: Now revoke the old key
Back at the provider. Revoke. Confirm.
This is the step where the potential leak actually closes. Everything before was setup.
Step 7: Audit your other env vars
While you're in there:
- Mark everything sensitive that should be sensitive (retroactive fix)
- Delete any env vars you're no longer using
- Enable Deployment Protection if you haven't already
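The audit above can be partly automated against Vercel's REST API. Here's a sketch of the filtering half; the env-var dicts are shaped the way I believe the projects env endpoint returns them, but the `type` and `target` fields are an assumption, so check the response shape against the current API docs before relying on this:

```python
def needs_attention(env_vars: list[dict]) -> list[str]:
    """Flag env vars that target Production but are not stored as
    type 'sensitive' (the storage mode the bulletin says was unreadable)."""
    return [
        v["key"]
        for v in env_vars
        if "production" in v.get("target", []) and v.get("type") != "sensitive"
    ]

# Sample shaped like the (assumed) API response:
sample = [
    {"key": "EMAIL_API_KEY", "target": ["production", "preview"], "type": "encrypted"},
    {"key": "STRIPE_KEY", "target": ["production"], "type": "sensitive"},
    {"key": "DEV_PROXY_URL", "target": ["development"], "type": "plain"},
]
print(needs_attention(sample))  # ['EMAIL_API_KEY']
```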
The relief
My service hasn't really reached anyone yet: a few subscribers, no paying users. If the leaked key had been exploited, the blast radius would have been… honestly, close to nothing.
That's luck, not good practice.
In a universe where this project was six months older and had real traffic, a compromised email API key could have meant someone sending spam from my verified domain, burning my sender reputation, and me finding out three days later when delivery rates cratered.
The time to mark env vars as sensitive is when you create them. Not after an incident.
What I'm changing going forward
Three things:
1. Sensitive flag is the default, not the exception
Every new env var starts as Sensitive unless there's a specific reason it can't be (like needing it in the Development environment for local proxy testing).
2. Secret rotation is a quarterly habit, not an emergency
The value of rotating keys every 90 days isn't that you expect a breach. It's that if one happens, the window of exposure is bounded: a leaked key stays valid only until the next rotation, so it can do at most 90 days of damage.
I'm adding a calendar reminder. 4 reminders a year, 10 minutes each. Trivial cost.
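If calendar reminders feel too easy to dismiss, the schedule itself is trivial to generate. A sketch:

```python
from datetime import date, timedelta

def rotation_dates(start: date, count: int = 4, every_days: int = 90) -> list[date]:
    """The next `count` rotation dates at a fixed 90-day cadence."""
    return [start + timedelta(days=every_days * i) for i in range(1, count + 1)]

for d in rotation_dates(date(2026, 4, 20)):
    print(d.isoformat())
```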
3. Third-party AI tools are supply chain dependencies now
This breach didn't come from Vercel's security posture. It came from an AI tool one employee used. The attacker didn't need to crack Vercel — they just needed to compromise a vendor that had access to a Vercel employee's account.
Every AI integration in your dev workflow has some access scope. It reads your repo, or your terminal, or your cloud console, or your credentials. That access is attack surface.
I'm going to start writing down, in a flat text file, every AI tool I've authorized and the scope it has. Not to get paranoid — just so I know what to rotate if one of them makes the news.
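The flat file can stay greppable and still be machine-readable. A sketch, assuming one `tool: scope1, scope2` line per entry (a format invented here, not a standard):

```python
def parse_access_log(text: str) -> dict[str, list[str]]:
    """Parse lines like 'cursor: repo read, terminal' into {tool: [scopes]}."""
    tools: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        tool, _, scopes = line.partition(":")
        tools[tool.strip()] = [s.strip() for s in scopes.split(",") if s.strip()]
    return tools

log = """\
# tools I've authorized, and what they can touch
copilot: repo read
cursor: repo read, terminal
vendor-ai: google workspace
"""
print(parse_access_log(log))
```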
TL;DR
- I was in the affected subset. Found out via the dashboard tag, not email.
- 10 minutes of cleanup: new key → update env → redeploy → verify → revoke old → mark sensitive.
- Lucky the project was small. In a larger context, this could have been bad.
- Going forward: Sensitive flag by default, quarterly rotation habit, AI tool access log.
Three cups of tea. No coffee. Still tired. Good day to log a lesson.