A Vercel employee signed up for an AI tool. They clicked Allow All on the OAuth consent screen. Three weeks later, customer environment variables were sitting on a hacker's drive with a $2 million asking price.
The tool was Context.ai, an enterprise AI platform that builds agents trained on company knowledge. The breach did not start at Vercel. It started two layers underneath, at Context.ai, where one of their employees downloaded a Roblox cheat script infected with Lumma Stealer. The attacker pulled Context.ai's stored OAuth credentials from that infection, rode them into Vercel's Google Workspace, pivoted into a Vercel admin account, and walked out with non-sensitive environment variables for what Vercel describes as a limited subset of customer projects.
This is the supply chain attack version of the AI hype stack I wrote about last week. The bill arrives from layers you did not authorize. The breach arrives from layers you did not audit.
The four layers of this breach
Most coverage of this story stops at layer two. The full chain is four deep, and three of those layers exist inside ordinary developer workflows that nobody flags as risky.
| Layer | What happened | Who owns the risk |
|---|---|---|
| 1: Endpoint at the AI vendor | Context.ai employee runs malware-infected Roblox cheat | Context.ai, opaque to you |
| 2: OAuth grant scope at the customer | Vercel employee clicks Allow All on Context.ai's consent screen | You, every time you onboard a new tool |
| 3: Lateral move via Workspace | Attacker reads the employee's email, finds Vercel admin access | Your IDP and OAuth admin policy |
| 4: Decryption at the platform | Attacker enumerates and decrypts environment variables marked non-sensitive | Your platform's secret-handling defaults |
Each layer compounds the previous. Layer 1 alone gets you a stolen cookie. Layer 1 plus 2 gets you into one Workspace. Add layers 3 and 4 and you reach customer data at a billion-dollar PaaS.
Layer 1: a Roblox cheat script
The infection vector at Context.ai was a Lumma Stealer infostealer hidden inside a Roblox exploit script. Lumma sells access to itself for around $250 per month. It harvests browser-stored OAuth tokens, session cookies, password manager exports, crypto wallet files, and any cached auth artifacts the operating system has not locked down.
This is not exotic malware. It is the most common infostealer of 2026. The Roblox cheat angle is also not exotic. Lumma operators target consumer software because the install base is enormous and the targets are not running EDR.
The bridge from Context.ai's gaming-on-the-side employee to a Vercel customer's database URL is two browser tabs and a corporate single sign-on. That is the part that should make every team rethink BYOD.
Layer 2: the Allow All click
The Vercel employee signed up for Context.ai's AI Office Suite using their corporate Google Workspace account. The OAuth consent screen offered scopes. They clicked Allow All.
Allow All on a Google OAuth grant for an AI agent product can mean, depending on the requested scopes, any of the following (non-exhaustive):
- Read and send mail on the user's behalf
- Read and write Drive files
- Access Calendar with full edit privileges
- Read directory metadata for the entire Workspace domain
- Stay authorized through token refresh until manually revoked
That last one is what nobody thinks about. OAuth refresh tokens do not expire on their own. Once granted, the AI tool keeps a usable session even if the user logs out, changes their password, or leaves the company. Revocation requires going to myaccount.google.com/permissions and explicitly removing the app.
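Revocation can also be done server-side, which matters during incident response when you cannot rely on the user visiting their permissions page. A minimal sketch, assuming you hold the stored token string; the endpoint URL is Google's documented OAuth revocation endpoint, and per its docs revoking either an access token or a refresh token kills the entire grant:

```python
# Build the POST request that revokes a Google OAuth grant.
# Passing either the access token or the refresh token revokes both.
from urllib.parse import urlencode
from urllib.request import Request

REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(token: str) -> Request:
    """Return the urllib Request that revokes the given token."""
    body = urlencode({"token": token}).encode()
    return Request(
        REVOKE_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# urlopen(build_revoke_request(stored_refresh_token)) returns HTTP 200 on
# success, after which the app's refresh token stops working immediately.
```

Until someone makes that call (or clicks revoke manually), the grant survives password changes and logouts.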
In its public response, Context.ai pointed at Vercel's OAuth admin policy: "Vercel's internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel's enterprise Google Workspace." Translation: Vercel's Workspace admin had not restricted which apps employees could authorize. Context.ai is correct that this is a Workspace admin failure. They are also still the company that got popped first.
Layer 3: the lateral move
Once inside the employee's Workspace account, the attacker did not need to break anything. Email is the lateral movement.
Most PaaS account recovery flows go through email. Most internal docs, including sensitive ones, get linked in email. Most invitations to admin consoles get sent as email. A compromised Workspace account is an effective master key for any service that uses email-as-identity, which is most of them.
Vercel's bulletin says the attacker took over the employee's Vercel account from this Workspace position, then "pivoted into a Vercel environment." The bulletin also describes the attacker as "highly sophisticated based on their operational velocity and in-depth understanding of Vercel's product API surface." Sophistication notwithstanding, the API knowledge is documented publicly. The sophistication was operational speed, not code.
Layer 4: non-sensitive environment variables
Here is the developer-specific lesson, and it is harsh.
Vercel splits environment variables into two categories. Sensitive variables are encrypted at rest with a separate key and never exposed in plaintext to authenticated dashboard users. Non-sensitive variables are encrypted at rest, but any user with project access can read them in plaintext.
The default when you create a new variable in the Vercel dashboard is non-sensitive. You have to opt into the sensitive flag. Most teams do not, because the flag adds friction (you cannot view the value again later) and the protection is invisible until something like this happens.
Vercel's customer remediation guidance includes: "Review and rotate environment variables that were not marked as sensitive, treating them as potentially exposed." That is a long way of saying: assume your DATABASE_URL leaked.
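Auditing this is scriptable. A sketch of the filtering step, assuming the JSON shape returned by Vercel's project environment variables endpoint (an `envs` array whose entries carry `key`, `type`, and `target` fields, with `"sensitive"` as the type for the opt-in flag); verify the field names against the current API reference before relying on this:

```python
# Flag production-targeted environment variables that any user with
# project access can read in plaintext (i.e., not marked sensitive).

def non_sensitive_production_vars(envs: list[dict]) -> list[str]:
    """Return keys of production vars not protected by the sensitive flag."""
    flagged = []
    for env in envs:
        if "production" in env.get("target", []) and env.get("type") != "sensitive":
            flagged.append(env["key"])
    return flagged
```

Run it against each project's env list and treat every key it returns the way Vercel's guidance says: as potentially exposed.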
The OAuth audit your team has probably never run
Five things to do today, in this order:
- Open myaccount.google.com/permissions for every account that has touched corporate data. Revoke any third-party app you do not actively use. Pay particular attention to apps with names you do not recognize and apps you signed up for during a free trial.
- If you are a Workspace admin, open the Apps with access to Google data report and audit which third-party apps your domain has authorized. Restrict the unverified-app and broad-scope categories.
- If you ship on Vercel, audit your environment variables. Anything that grants production access (database URLs, API keys, signing secrets, deployment tokens) should be marked sensitive. The audit takes ten minutes per project.
- Search your secret manager for any value also stored as a Vercel environment variable. Duplicate copies in less-protected locations are how breaches escalate.
- If your team uses any AI agent platform that requested OAuth scopes broader than profile and email, run the same exercise on that vendor's OAuth grant. AI agent products are the most likely category to request expansive scopes because their pitch is doing things on your behalf.
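The "broader than profile and email" test from the last step can be mechanical. A sketch; the baseline set is the standard OpenID Connect sign-in trio plus Google's legacy userinfo equivalents, and anything it returns deserves an explicit justification from the vendor:

```python
# Compare a vendor's requested OAuth scopes against plain sign-in.
SIGN_IN_BASELINE = {
    "openid",
    "email",
    "profile",
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile",
}

def excess_scopes(requested: list[str]) -> list[str]:
    """Return requested scopes beyond plain sign-in, sorted for review."""
    return sorted(set(requested) - SIGN_IN_BASELINE)
```

An AI agent product that requests nothing beyond the baseline returns an empty list; one that wants your whole mailbox and Drive does not.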
What "AI Office Suite" actually requests
I checked the OAuth scope requests for three popular AI agent products that bill themselves as office suites or knowledge platforms. None named here, because the point is the pattern, not the vendors:
- Product A: Gmail full access, Drive read/write, Calendar full access, Contacts read
- Product B: Gmail full access, Drive read/write, Chat read, Meet, Sheets full access
- Product C: Gmail send, Drive scoped to specific folders, Calendar read
Two of three request enough scope to read every email in the Workspace, including ones with shared admin credentials. One scopes appropriately. The product names do not predict the scope ask. The free tier signup flow does.
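The "can it read every email" question reduces to checking for two specific scope strings. These are Google's published Gmail scope URLs; the risk framing is my own shorthand, not an official taxonomy:

```python
# Scopes that expose the entire mailbox to the granted app.
MAIL_READ_SCOPES = {
    "https://mail.google.com/",                        # full Gmail access
    "https://www.googleapis.com/auth/gmail.readonly",  # read-only, but all mail
}

def can_read_all_mail(scopes: list[str]) -> bool:
    """True if any granted scope lets the app read the full mailbox."""
    return not MAIL_READ_SCOPES.isdisjoint(scopes)
```

By this test, Product C's send-only Gmail scope passes while Products A and B fail, which is exactly the gap between "does things on my behalf" and "reads everything I have."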
The asymmetry: a developer who would never paste their database password into a Discord bot will happily click Allow All on an OAuth screen for an AI tool that asks for the same access by proxy.
Two signals to watch in the next 30 days
ShinyHunters, the threat actor named in the Vercel coverage, has run this play before. Last time it ended with a public dump after the ransom window expired. If a Vercel customer dataset appears on a leak forum in May, the affected blast radius will become public, and any company that did not rotate gets to discover that publicly.
The second signal is OAuth policy changes. Google Workspace already supports admin-level allowlisting of which third-party apps can request which scopes. Most orgs have not enabled it because the friction is real. Enterprises that get burned in this incident will turn it on. SaaS vendors that do not support narrow-scope OAuth flows will start losing enterprise deals. That is the secondary effect to track over the next quarter.
Closer
Last week I argued the AI hype stack hides costs in layers most developers do not audit. This week's incident says the supply chain hides risks in the same layers, except the bill is a customer notification email, not a credit card overage.
The Allow All click takes one second. The audit takes ten minutes. How many AI tools currently have OAuth tokens to your work email? Open myaccount.google.com/permissions and tell me the count in the comments. Mine was eleven before I started writing this. It is three now.