I see "VERCEL hacked" going around the feed. Then this morning, an email from Vercel in my inbox. Alright, I hadn't planned any maintenance today. Apparently there's an emergency.
I open the Vercel dashboard of a project running in prod. Six orange badges on my environment variables. Need To Rotate. JWT keys, OAuth secrets, MCP tokens. Arghh.
TLDR: a breach exposes one secret. Your response to the breach exposes the rest. The default audit commands you run in post-incident panic display values in plaintext by design, and your secrets live in more places than you think. How many, exactly? That's the real question.
Before the badges, there was a Roblox cheat.
Twenty-two months ago, an employee at Context.ai, the company behind an AI tool nobody outside its own niche paid attention to, pulled a binary off a sketchy link on their work laptop. Searching for a way to break a Roblox level. The binary was Lumma Stealer. It took their browser sessions, their saved OAuth tokens, their Google Workspace grant. Then it went quiet.
Twenty-two months of nothing. No alerts. No anomalies. No reason to look.
Then earlier this month, the attacker walks through that OAuth grant into the Google Workspace accounts of Context.ai customers. Including a Vercel employee. Including, through that employee's access, Vercel's internal systems. They find the environment variables customers had marked non-sensitive, which meant Vercel stored them unencrypted at rest, which was the default. They copy. They list the dump on BreachForums. Asking price: two million dollars.
Sunday morning my phone vibrates with the bulletin email.
Now the orange badges.
Sunday Morning, 6 Orange Badges

The badges aren't a guess. They're based on Vercel's access logs during the incident window. Each one means a specific variable was read by the compromised OAuth app during the breach. Which means on the other side, in that BreachForums dump, those values sit in plaintext.
Six badges on one project: JWT_PRIVATE_KEY_JWK, OAUTH_SECRET, OAUTH_CLIENT_ID, DASHBOARD_PASSWORD, MCP_AUTH_TOKEN, CAROUSEL_RENDER_SECRET. I have nine projects. Realistic count sits between 25 and 40 variables once you factor in the downstream apps that cache those tokens.
And my secrets don't live only on Vercel.
Thirty minutes later I finish the first project. Methodical, pleased with myself. Instead of stopping, I decide to audit everything else.
That's where it gets interesting.
The Command That Leaked More Than The Breach
To map the secrets on my Convex self-hosted instance, I type the most natural command in the world.
bunx convex env list
I expect names. Like aws iam list-users, gh secret list, vercel env ls. Those all return names. Just names. Values stay hidden behind a second explicit command.
My terminal fills up. GitHub PAT with prod write scope. Beehiiv admin JWT. Fal.ai key. OpenRouter key. YouTube Data API key. RapidAPI key. Twelve more below. All values. All in plaintext. No flag to pass. No warning banner. Just the dump.
Four seconds of staring. Then I close my eyes.
Triple persistence. Bash history. Terminal scrollback. The Claude Code transcript I had open while auditing, because of course I had one open, I was working.
Eighteen secrets, unrelated to Vercel, now sitting in three places I don't fully control. My machine isn't compromised. Probability of malicious use is low. But low probability, high blast radius is the math that got us into this mess.
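For what it's worth, two of those three copies can be scrubbed from the shell itself. A bash-specific sketch (zsh stores history differently, and the AI transcript can only be deleted inside the tool that wrote it):

```shell
# Scrub the current session after an accidental plaintext dump (bash only).
history -c; history -w           # clear the in-memory list, then overwrite $HISTFILE with it
printf '\033[3J\033[H\033[2J'    # wipe scrollback too, not just the visible screen
```

Keeping the two history commands on one line matters: typed separately, the second one lands in history after the first one cleared it.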
Four hours.
Your Secrets Live In More Places Than You Think

Count them. Really count them.
A single secret in a modern stack sprawls across four stores, not one. You only realize it the day you have to rotate, and by then it's too late.
The hosting runtime comes first. Vercel, Netlify, Railway, Render, Fly.io. That's the store the breach hit.
If you've built anything non-trivial, you also have a self-hosted backend. Convex self-hosted, a custom VPS, a Fly machine running the app layer. Second copy, deployed last week, probably slightly out of sync with the first.
Then the external vault. Infisical, 1Password, Doppler, HashiCorp Vault. If you're disciplined enough to use one. Supposedly the source of truth. It never really is.
And .env.local on the dev machine. The digital equivalent of a Post-It under your keyboard. Always there. Even when you swore you cleaned it up last Friday.
You tell yourself the vault is the source of truth. Realistically, Vercel has its own copy synced last month, your Convex instance has another copy from last week, and your .env.local is a snapshot from three months ago with two rotated values still in plaintext at the bottom.
A breach at one provider exposes one store. Fair enough. That's the blast radius you signed up for.
The audit commands you run in the next thirty minutes, panicking and caffeinated, expose the three others. That's the blast radius nobody warned you about.
Rule: treat the audit with the same care as the rotation. Every listing command is a release candidate. Every listing command has a failure mode. Every listing command deserves the flag check you skip.
The audit is the incident.
Commands That Dump Everything (And What To Run Instead)
bunx convex env list isn't a one-off. It's a family.
Any CLI that ships an env list or secrets list command should be assumed guilty until the man page proves otherwise. Read the flag behavior before you type. Once the values hit your screen, they're in your bash history, your terminal scrollback, and whatever AI tool happens to be reading your shell right now.
The guilty parties I know about. gh variable list prints every GitHub Actions variable in plaintext (Actions secrets, at least, are write-only, so the CLI can't dump those back). aws ssm get-parameter --with-decryption does the AWS equivalent. vercel env pull dumps every Vercel variable into a local .env file that sits on disk until you remember to delete it. And of course bunx convex env list and npx convex env list, both equivalent.
What to run instead.
For Convex self-hosted, since env list is a trap:
bunx convex env get STRIPE_SECRET_KEY 2>/dev/null | wc -c
Returns the byte length, not the value. Exists if > 0, missing if 0.
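Wrapped in a loop, that length check scales to a whole project. A sketch — bunx convex env get is the real CLI call; the audit wrapper and the GET_CMD override are my own additions:

```shell
# Presence audit: report each variable's existence and byte length, never its value.
# GET_CMD is whatever prints one secret to stdout; defaults to the Convex CLI.
audit() {
  for name in "$@"; do
    len=$(${GET_CMD:-bunx convex env get} "$name" 2>/dev/null | tr -d '\n' | wc -c | tr -d ' ')
    if [ "${len:-0}" -gt 0 ]; then
      echo "$name present (${len} bytes)"
    else
      echo "$name MISSING"
    fi
  done
}
# usage: audit JWT_PRIVATE_KEY_JWK OAUTH_SECRET MCP_AUTH_TOKEN
```

Same guarantee as the one-liner: the value flows through the pipe, the screen only ever sees a number.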
For Infisical, don't use the UI past twenty secrets, use the API:
curl -s "https://app.infisical.com/api/v3/secrets/raw/STRIPE_SECRET_KEY" \
-H "Authorization: Bearer $INFISICAL_TOKEN" \
| jq '.secret.updatedAt'
Returns a timestamp. The value still crosses the wire, but jq drops it before it can reach your screen, your scrollback, or your history.
For Vercel, the UI has the new overview dashboard. For CLI, vercel env ls prints names, timestamps, environments. No values. Stay away from vercel env pull until the audit phase is over.
For .env.local, grep the name, don't cat the file:
grep -l "^STRIPE_SECRET_KEY=" .env.local
For GitHub, gh secret list returns names and update timestamps only; Actions secrets are write-only, so there is no flag that can leak them. Enough for an audit.
Build a little text matrix as you go. For each secret, write down which stores hold it. Something like STRIPE_SECRET_KEY = Vercel + Convex + Infisical + .env.local. That's your rotation plan. You now know exactly how many places need to be re-synchronized, in what order, and which downstream apps need fresh tokens.
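That matrix is scriptable too. A sketch where the per-store checks are stand-ins built from the presence commands above — swap in the stores you actually run:

```shell
# Rotation matrix: one line per secret, listing the stores that report it present.
# check_store NAME STORE exits 0 when STORE holds NAME; these checks are stand-ins.
check_store() {
  case "$2" in
    vercel)   vercel env ls 2>/dev/null | grep -qw "$1" ;;
    convex)   [ "$(bunx convex env get "$1" 2>/dev/null | wc -c)" -gt 0 ] ;;
    envlocal) grep -q "^$1=" .env.local 2>/dev/null ;;
  esac
}
matrix() {
  for name in "$@"; do
    stores=""
    for store in vercel convex envlocal; do
      check_store "$name" "$store" && stores="$stores + $store"
    done
    [ -n "$stores" ] || stores=" + nowhere"
    echo "$name =${stores# +}"
  done
}
# usage: matrix STRIPE_SECRET_KEY OAUTH_SECRET MCP_AUTH_TOKEN
```

The output is exactly the text matrix above, one line per secret, ready to paste into the rotation plan.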
The man page is cheaper than the rotation.
The Playbook
The incident produced three rules, not three sets of rules. They're stack-agnostic. They work whether you're on Vercel, Netlify, Railway, or a Fly machine nobody else in the company remembers exists.
Map. Reduce. Rotate.
Map
The first rule is cartography. Every secret in every project has one source of truth and N caches, and the map of where those caches live belongs in the repo, not in your head.
A single source of truth per secret. Infisical, 1Password, Doppler, whatever you picked. Every other store (Vercel, Convex, .env.local) is a cache of that source. Sync one-way where your plan supports it, document the sync path where it doesn't.
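The one-way sync itself can be sketched store-agnostically. PULL_CMD and PUSH_CMD below are placeholders of mine, not real flags; check your vault's export command and your runtime's import command before wiring anything in:

```shell
# One-way sync: the vault is the source of truth; every cache gets overwritten.
# PULL_CMD must print KEY=VALUE lines; PUSH_CMD must take a key name as its
# argument and read the value on stdin. Both are placeholders.
sync_store() {
  ${PULL_CMD:?set PULL_CMD first} | while IFS='=' read -r key value; do
    [ -n "$key" ] || continue
    printf '%s' "$value" | ${PUSH_CMD:?set PUSH_CMD first} "$key"
  done
}
# usage: PULL_CMD="<vault export>" PUSH_CMD="<runtime import>" sync_store
```

The direction is the whole point: values only ever flow vault-outward, never back.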
A CLAUDE.md or SECURITY.md in every repo, with four things: the secrets the repo uses, where each one lives across the four stores, the provider dashboards to rotate them, and the commands nobody is allowed to run against this repo. Written down. So the version of you at 11 pm panicking about a new breach doesn't reinvent the map in the dark.
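A hypothetical skeleton of that file — every name, store, and dashboard below is an example to be replaced with your own map:

```markdown
# SECURITY.md — secrets map (example skeleton)

## Secrets this repo uses
| Secret            | Source of truth | Caches                      | Rotate at        |
|-------------------|-----------------|-----------------------------|------------------|
| STRIPE_SECRET_KEY | Infisical       | Vercel, Convex, .env.local  | Stripe dashboard |
| OAUTH_SECRET      | Infisical       | Vercel                      | provider console |

## Commands nobody runs against this repo
- bunx convex env list   (dumps all values)
- vercel env pull        (writes a plaintext .env to disk)
```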
The Sensitive flag at creation time, never after. Vercel shipped this in a panic response and I watched half the internet misunderstand it. Flipping Sensitive on a variable that's already compromised does not rewind anything. The value is out. It only works prospectively, which means the discipline is at creation, not after the fact.
Zero .env committed. .gitignore systematic. .env.example with empty placeholders only. Old rule, still violated weekly somewhere in your org, count on it.
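That old rule is also the cheapest one to enforce mechanically. A sketch of a pre-commit style check; the filename pattern is an assumption of mine, widen it to your naming:

```shell
# Refuse tracked .env-style files; .env.example is the one allowed exception.
tracked_env_files() {
  git ls-files 2>/dev/null | grep -E '(^|/)\.env(\..+)?$' | grep -v '\.env\.example$'
}
if tracked_env_files; then
  echo "refusing: env files are tracked by git" >&2
  # exit 1  # uncomment when wiring this into an actual pre-commit hook
fi
```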
Reduce
The second rule is scope reduction. The key that can do the least is the key least worth stealing, and the key that can do the most is the key the attacker actually wants.
Scoped keys wherever the provider supports them. Stripe restricted keys scoped per feature, not the generic secret key that can refund, delete customers, and issue payouts. GitHub fine-grained Personal Access Tokens, one per use case, one repo, one scope. Clerk with distinct keys per environment, so a leaked dev key stays in dev. Supabase with row-level-security policies and the anon key for everything client-side, service_role restricted to one narrow server-side job. Convex with a read-only deploy key for builds, admin key for explicit admin scripts only.
Spending hard caps on every consumption-metered provider. OpenAI, Anthropic, OpenRouter, Fal, Replicate. A compromised key generating at $500 an hour is the difference between a bad afternoon and a bad quarter. Or between a bad quarter and a Hacker News post with your name in the title.
Hardware MFA on the pivot accounts. Google Workspace, domain registrar, the hosting runtime, the code host. YubiKey or passkey. Not SMS. These four accounts are the ones that would unlock everything else if compromised, and they're exactly the ones Context.ai-style tools ask for broad OAuth grants on.
Rotate
The third rule is rotation discipline. If you only rotate after breaches, you only rotate during panic. And panic is when the audit commands leak things.
A cadence. Every 90 days for critical keys: auth, payments, high-volume LLM. Annual for the rest. Proactive beats reactive every time.
A procedure. Generate the new value at the provider. Keep the old active if dual-key is supported, otherwise keep the swap window short. Update the four stores in order, from source of truth outward. Smoke test end-to-end, not just the build. Revoke the old value only after validation. Flag Sensitive at rotation time, same screen, same workflow, never in a separate pass.
A hygiene rule. One shell session per project during rotation. Fresh terminal, fresh Claude Code conversation, no cross-project pollution. If a session's history gets compromised later, the blast radius stops at one project.
A quarterly OAuth sweep. GitHub, Workspace, the hosting runtime, Linear, Notion. Anything unused in 90 days gets revoked. No sentimentality. I've rebuilt my stack from scratch before, when a platform pulled the rug on me. Takes a weekend, not a career. The cost of keeping a stale OAuth grant around is always higher than the cost of rebuilding the integration on the day you actually need it again.
Map, reduce, rotate. Everything else is optimism.
Then The Second Email Arrived
Monday morning. Twenty-four hours after the Vercel mail. Gmail notification: "The Claude application on your GitHub account is requesting additional permissions."
I blink. Same family of vector. An AI-adjacent tool already installed on my account, asking for expanded scopes, with an Allow All button and a reassuring product screenshot underneath. I click Review, not Allow. The new scopes include write access to all my repositories, private ones included. I close the tab and leave it for later.
Two emails, 24 hours apart, both from the same threat surface.
Vercel's CEO Guillermo Rauch went on record saying the attack was significantly accelerated by AI. Which tracks. When your upstream threat is an AI tool holding grants into Workspace, Supabase, Datadog, and Authkit at the same time, one popped credential becomes lateral movement into a dozen downstream apps.
According to AppOmni's 2026 SaaS Security Report, cited by Before The Curve's April 21 Medium piece, 76% of employees use unapproved SaaS apps, averaging 25 apps per person, and 31% of SaaS breaches now exploit OAuth or API connections. That's the main attack surface of 2026.
One-click Allow All is the single largest attack surface in enterprise software today, and AI tools are the single largest generators of those one-click prompts. Every AI agent that integrates with your stack asks for broad scopes because asking for narrow ones means more setup friction, less product stickiness, worse onboarding metrics. So they ask for everything. You click yes. A year and a half later the tool gets popped and takes your Workspace with it.
Somewhere out there, Karen from Accounting saw the same email I saw and clicked Allow All without scrolling. She's not the problem. The button is. No amount of security training will fix a UX that makes review take four clicks and allow take one.
This isn't a ban AI tools argument. I use Claude Code eight hours a day. It's an OAuth hygiene is now table stakes argument. One specific tactic that helps: prefer narrow-scope CLI tools over broad-OAuth MCP servers whenever you have the choice. A CLI with a short-lived token on your machine has a much smaller blast radius than an MCP server with a persistent OAuth grant at the provider level. Not always possible. When it is, take it.
The bill comes due eighteen months later.
What The Breach Really Taught Me
The breach cost me thirty minutes on those six badges. The audit cost me four hours and eighteen more rotations. The write-up you just read cost another two.
The durable lesson isn't the procedure. It's the pattern.
Twenty-two months ago someone clicked a Roblox cheat link. That one click, routed through one OAuth grant with expansive scopes, eventually turned into my Sunday morning bulletin email and a weekend of rotations. Not because the attack was sophisticated (it wasn't). Because the stack between that laptop and my environment variables was a long chain of Allow All clicks, each one a delayed trap, each one waiting for a patient attacker with nothing better to do for twenty-two months.
Every default listing command in your stack is an amplifier for those traps. Every unscoped key, every shared secret, every committed .env, every SMS-based MFA is another day the next attacker spends inside before anyone notices.
A breach exposes one secret. Your response exposes the rest. Unless you mapped, scoped, and rotated before it became the only option.
The next Vercel is coming. Whether you'll be hit isn't the question.
It's how many more secrets your audit will leak on top 😬
Sources
Vercel's April 2026 security incident bulletin: https://vercel.com/kb/bulletin/vercel-april-2026-security-incident
BleepingComputer on the BreachForums listing and Rauch quote: https://www.bleepingcomputer.com/news/security/vercel-confirms-breach-as-hackers-claim-to-be-selling-stolen-data/
Trend Micro research on the OAuth supply chain angle and 22-month dwell time: https://www.trendmicro.com/en_us/research/26/d/vercel-breach-oauth-supply-chain.html
CyberScoop on the Lumma Stealer entry vector: https://www.cyberscoop.com/vercel-security-breach-third-party-attack-context-ai-lumma-stealer/
AppOmni 2026 SaaS Security Report stats, cited by Before The Curve's April 21 Medium article: https://medium.com/@beforethecurve/how-a-roblox-cheat-script-led-to-a-2m-ransom-against-vercel-079707c21f0b
Drafted with Claude Code open in the next tab (yes, the same tab that just leaked 18 secrets to its transcript, which is exactly what this article is about). Rewritten, fact-checked, and edited by me.