<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 김형운</title>
    <description>The latest articles on DEV Community by 김형운 (@silask).</description>
    <link>https://dev.to/silask</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871305%2Fc7a578c2-b278-4e27-b55a-6c2860c6ea6d.png</url>
      <title>DEV Community: 김형운</title>
      <link>https://dev.to/silask</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/silask"/>
    <language>en</language>
    <item>
      <title>The Vercel/Context.ai Breach Wasn't a Vulnerability. It Was a Delegation Path.</title>
      <dc:creator>김형운</dc:creator>
      <pubDate>Wed, 22 Apr 2026 07:33:37 +0000</pubDate>
      <link>https://dev.to/silask/the-vercelcontextai-breach-wasnt-a-vulnerability-it-was-a-delegation-path-3o3b</link>
      <guid>https://dev.to/silask/the-vercelcontextai-breach-wasnt-a-vulnerability-it-was-a-delegation-path-3o3b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftls0kt2sjtf7356xxm85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftls0kt2sjtf7356xxm85.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaxp4vj9tzggd3snja4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaxp4vj9tzggd3snja4c.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51u1zkw7ibs9ru05eaba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51u1zkw7ibs9ru05eaba.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;On April 19, 2026, Vercel disclosed an incident involving one of its employee accounts. The confirmed chain was not a zero-day and not a cloud misconfiguration. It was a chain of delegated trust. A Lumma stealer log harvested from a Context.ai contractor's laptop yielded Context.ai's own Google Workspace OAuth credentials. Those credentials gave the attacker a working access token for a Vercel employee's Google account — the employee had previously authorized Context.ai on it. That Google account, in turn, held Vercel dashboard notifications in its inbox, which the attacker used to reach internal project environment variables.&lt;/p&gt;

&lt;p&gt;No CVE was exploited. No MFA was broken. No conditional access policy was bypassed in the traditional sense. Every step rode on a permission the user had already granted, months or years earlier, to a third-party AI tool.&lt;/p&gt;

&lt;p&gt;That is the pattern worth studying. This post walks through it slowly.&lt;/p&gt;

&lt;h2&gt;What is Confirmed, and What is Not&lt;/h2&gt;

&lt;p&gt;Confirmed by Vercel's own security advisory: one employee account was accessed; the initial vector was an OAuth application owned by Context.ai, an AI meeting-notes tool the employee had connected to their Google Workspace account; certain internal project metadata was visible to the attacker.&lt;/p&gt;

&lt;p&gt;Reported by multiple security outlets (among them BleepingComputer and The Record): the upstream credential leak originated in a Lumma stealer infection on a Context.ai contractor's laptop two weeks prior; Context.ai's OAuth client secret was among the harvested material; and the same OAuth app was used to pivot into downstream customers.&lt;/p&gt;

&lt;p&gt;Claimed by the attacker on a leak forum: possession of environment variables from "hundreds" of Vercel projects. Vercel has not confirmed this number. Treat it as claimed until an independent source verifies it.&lt;/p&gt;

&lt;p&gt;Everything that follows reasons from the confirmed portion only.&lt;/p&gt;

&lt;h2&gt;The Chain, One Hop at a Time&lt;/h2&gt;

&lt;p&gt;Step 1. The contractor laptop. A Context.ai contractor's Windows machine was infected with Lumma, a commodity infostealer that scrapes browser credential stores, session cookies, and developer tokens. In the harvested dump were Context.ai's own OAuth application credentials — the client ID and client secret that Context.ai uses to talk to Google on behalf of its customers.&lt;/p&gt;

&lt;p&gt;Step 2. The OAuth app itself. Context.ai is a Google Workspace Marketplace app. When a customer installs it, they grant it a durable set of scopes — typically Calendar read, Gmail read, Drive read. Those grants live on Google's side as refresh tokens issued to Context.ai's client ID. An attacker holding Context.ai's client credentials could, under certain configurations, ride those refresh tokens against any customer who had installed the app.&lt;/p&gt;
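&lt;p&gt;To make the mechanics of that hop concrete, here is a minimal sketch of the &lt;code&gt;refresh_token&lt;/code&gt; grant an attacker holding the vendor's client credentials could replay. The token endpoint is Google's standard OAuth 2.0 endpoint; every credential value below is a hypothetical placeholder, and the request is built but never sent.&lt;/p&gt;

```python
# Sketch: turning a vendor's stolen client credentials plus a customer's
# refresh token into a live access token. The endpoint is Google's standard
# OAuth 2.0 token endpoint; all credential values are hypothetical.
from urllib.parse import urlencode

GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id: str, client_secret: str, refresh_token: str) -> dict:
    """Form body for a refresh_token grant. No user password, no MFA
    prompt, no interactive sign-in appears anywhere in this exchange."""
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,          # harvested from the vendor side
        "client_secret": client_secret,  # harvested alongside it
        "refresh_token": refresh_token,  # one exists per consenting customer
    }

body = build_refresh_request("ctx-client-id", "ctx-client-secret", "victim-refresh-token")
# POSTing urlencode(body) to GOOGLE_TOKEN_ENDPOINT returns JSON containing an
# access_token scoped to whatever the customer consented to at install time.
print(urlencode(body))
```

&lt;p&gt;Note what is absent from the request: no username, no password, no MFA assertion. The only secrets involved are the vendor's.&lt;/p&gt;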

&lt;p&gt;Step 3. The Vercel employee's Google account. This employee had Context.ai authorized on their Google Workspace account. The attacker used the stolen OAuth credentials to obtain a live access token for this account without ever touching the user's password, without triggering a new MFA prompt, and without hitting any conditional access policy that guards interactive sign-in. In Google's logs this looked like Context.ai doing what Context.ai always does: reading calendar, reading mail.&lt;/p&gt;

&lt;p&gt;Step 4. The pivot into Vercel. The attacker read the employee's inbox. In it were Vercel dashboard notification emails, a password-reset link issued during an unrelated earlier event, and email-based 2FA codes for a separate internal tool. The attacker used a subset of this material to authenticate to the Vercel dashboard as the employee and browse project settings, including environment variables.&lt;/p&gt;

&lt;p&gt;Every hop was a legitimate use of a previously granted permission. Nothing in the chain required a new vulnerability.&lt;/p&gt;

&lt;h2&gt;The Assumption That Broke&lt;/h2&gt;

&lt;p&gt;The control model most teams run today assumes that the identity provider is the chokepoint. Sign-in goes through Okta or Entra. MFA is enforced there. Conditional access policies check device posture there. Audit logs flow from there. If an attacker wants to reach an internal resource, they have to get through the IDP.&lt;/p&gt;

&lt;p&gt;OAuth delegation bypasses this chokepoint by design. Once a user has clicked "Allow" on an OAuth consent screen, the third-party app holds a durable credential that does not pass through the IDP on subsequent use. The app can call the API directly with its refresh token. The IDP sees nothing, because the IDP is not in the call path.&lt;/p&gt;

&lt;p&gt;That is the assumption that broke here. The organization's IDP-centered control plane does not cover permissions the user delegated to AI SaaS vendors. Those permissions live on Google's side, or Microsoft's side, as grants the user can make without any security team ever reviewing them.&lt;/p&gt;

&lt;p&gt;Put concretely: your Okta admin console will not show you that an employee connected Context.ai to their Google Workspace account last month. Your SIEM will not alert when a stolen Context.ai token reads that employee's inbox, because from Google's perspective the token is being used by the application it was issued to. The activity is only unusual if you are watching the right signal — and most teams are not watching the OAuth signal at all.&lt;/p&gt;

&lt;h2&gt;Why Detection Gets Harder&lt;/h2&gt;

&lt;p&gt;Detecting this kind of abuse is harder than detecting a credential-stuffing attack for three reasons.&lt;/p&gt;

&lt;p&gt;First, the traffic looks legitimate at the API level. The OAuth token is valid. The client ID matches a known, approved application. The scopes match what the user originally granted. No anomaly detector keyed on authentication events will fire, because no authentication event happened — the app used a refresh token it already held.&lt;/p&gt;

&lt;p&gt;Second, the source IP will often look fine. Attackers using stolen OAuth credentials can route calls through residential proxies or even through the vendor's own infrastructure, depending on how they extracted the credentials. "Unusual location" signals that work for human sign-in do not transfer cleanly to machine-to-machine API calls.&lt;/p&gt;

&lt;p&gt;Third, the volume is low. The attacker does not need to read 10,000 inboxes. They need to read one, find the right thread, and pivot. A single extra API call buried in a day of normal Context.ai activity will not stand out in a log line count.&lt;/p&gt;

&lt;p&gt;The detection has to shift from "Did someone sign in weirdly?" to "Is this OAuth grant still justified, and did its usage pattern change?"&lt;/p&gt;

&lt;h2&gt;What to Check This Week&lt;/h2&gt;

&lt;p&gt;Four checks are worth running regardless of whether you use any of the specific products involved.&lt;/p&gt;

&lt;p&gt;First, enumerate your OAuth grant inventory. In Google Workspace: Admin console → Security → Access and data control → API controls → Manage third-party app access. In Entra ID: Enterprise applications → filter by "Microsoft Graph" and "Application permissions." You are looking for every third-party app that can read mail, read calendar, read files, or write anywhere. For each one, answer: who approved this, when, and for which scopes?&lt;/p&gt;

&lt;p&gt;Second, find the AI tools specifically. Meeting-note apps, email assistants, "summary" integrations, code-review bots connected to GitHub, and anything marketed as an "AI agent" that talks to your data. Any of these that holds a broad scope (&lt;code&gt;gmail.readonly&lt;/code&gt;, &lt;code&gt;calendar.events.readonly&lt;/code&gt;, &lt;code&gt;drive.readonly&lt;/code&gt;) is a pivot candidate if the vendor is breached.&lt;/p&gt;
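&lt;p&gt;Triage of that list is easy to script. The sketch below filters a grant inventory for apps holding any broad read scope; the scope URIs are Google's real scope strings, but the inventory rows are made-up example data standing in for whatever your admin export actually produces.&lt;/p&gt;

```python
# Sketch: flag OAuth grants that become pivot candidates if the vendor is
# breached. Scope URIs are real Google scopes; the inventory is example data.
HIGH_RISK_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/calendar.events.readonly",
    "https://www.googleapis.com/auth/drive.readonly",
}

def pivot_candidates(grants: list[dict]) -> list[str]:
    """Return the apps holding at least one broad read scope."""
    return [g["app"] for g in grants if HIGH_RISK_SCOPES & set(g["scopes"])]

inventory = [
    {"app": "ai-meeting-notes", "scopes": [
        "https://www.googleapis.com/auth/gmail.readonly",
        "https://www.googleapis.com/auth/calendar.events.readonly"]},
    {"app": "todo-sync", "scopes": [
        "https://www.googleapis.com/auth/tasks.readonly"]},
]

print(pivot_candidates(inventory))  # ['ai-meeting-notes']
```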

&lt;p&gt;Third, look at the usage side. In Google Workspace logs, filter for OAuth token use by third-party client IDs over the last 90 days. In Entra, use sign-in logs filtered by "service principal sign-ins." Baselines matter here — if an app that normally reads 50 calendar events a day suddenly reads 5,000, that is your signal.&lt;/p&gt;
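&lt;p&gt;The 50-versus-5,000 comparison is easy to mechanize once you have per-app daily counts. A crude ratio-against-baseline check, with hypothetical counts standing in for real log output:&lt;/p&gt;

```python
# Sketch: flag a day whose API call count dwarfs the app's recent baseline.
# Counts are hypothetical; in practice they would come from Workspace
# token-activity logs or Entra service-principal sign-in logs.
from statistics import mean

def is_spike(history: list[int], today: int, min_ratio: float = 10.0) -> bool:
    """True if today's count is at least min_ratio times the recent mean."""
    baseline = mean(history)
    return baseline > 0 and today >= min_ratio * baseline

calendar_reads = [48, 52, 50, 47, 55]     # a normal week for the app
print(is_spike(calendar_reads, 60))       # False: an ordinary day
print(is_spike(calendar_reads, 5000))     # True: the signal worth paging on
```

&lt;p&gt;A fixed ratio is deliberately crude; the point is that the check runs against delegated-token usage rather than sign-in events.&lt;/p&gt;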

&lt;p&gt;Fourth, if you are on Vercel specifically, mark your secret-class environment variables as "Sensitive" in project settings. This is an opt-in flag that prevents the dashboard from displaying the value in the UI after save. It does not change what the running deployment can see, but it does mean that an attacker browsing the dashboard with a stolen session cannot just read the values. As of today, this flag is off by default. That default is the right thing to argue with your platform team about.&lt;/p&gt;

&lt;h2&gt;What to Change in Policy&lt;/h2&gt;

&lt;p&gt;Two policy shifts follow from this case. Neither is novel, but both are easier to argue for now than they were last week.&lt;/p&gt;

&lt;p&gt;First, stop treating OAuth consent as a user decision. Most organizations let end users install Workspace Marketplace apps or authorize Entra OAuth applications without any review. That worked when third-party apps were calendars and to-do lists. It does not work when "third-party app" means "an AI product that reads your entire inbox and calendar and summarizes them to an LLM provider." Move OAuth consent behind an admin allowlist. Google and Microsoft both support this. The cost is friction. The benefit is that your security team sees every new delegation path before it becomes a pivot route.&lt;/p&gt;

&lt;p&gt;Second, treat the IDP as necessary but insufficient. Your IDP enforces sign-in. It does not enforce what the user has delegated. Build a second layer of control for delegated scopes: regular review of the OAuth inventory, alerting on new high-scope grants, and rotation / revocation procedures that assume a vendor breach is a recurring event, not a rare one.&lt;/p&gt;
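&lt;p&gt;The revocation half of that procedure can be scripted against Google's standard OAuth revocation endpoint. The endpoint URL is real; the token value is a placeholder, and the network call is left commented out so this sketch stays side-effect free.&lt;/p&gt;

```python
# Sketch: build a revocation request for a suspect grant. Revoking either
# the access token or the refresh token kills the whole grant; the user
# must re-consent before the app works again. Token value is hypothetical.
from urllib.parse import urlencode

REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(token: str) -> tuple[str, str]:
    """Return the endpoint and form-encoded body for a revocation POST."""
    return REVOKE_ENDPOINT, urlencode({"token": token})

url, payload = build_revoke_request("suspect-refresh-token")
# requests.post(url, data=payload,
#               headers={"Content-Type": "application/x-www-form-urlencoded"})
print(url, payload)
```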

&lt;p&gt;The larger shift is uncomfortable. An employee connecting an AI meeting-notes tool to their calendar is not, from the user's perspective, a security event. It is a productivity decision made in 30 seconds on a vendor's marketing page. The organization now has to insert itself into that 30-second window. There is no clean way to do that without slowing some of those decisions down.&lt;/p&gt;

&lt;h2&gt;What This Does Not Say&lt;/h2&gt;

&lt;p&gt;This post is not a takedown of Context.ai. The same pattern could have run through any AI vendor with similar OAuth scopes. If you remove Context.ai from the chain, the structural problem — broad delegation, no IDP visibility, the vendor as a credential concentrator — is still there.&lt;/p&gt;

&lt;p&gt;This post is also not a claim that OAuth is broken. OAuth is doing exactly what it was designed to do. The design assumes the delegated party is trustworthy. When the delegated party is a rapidly growing AI SaaS vendor running on commodity contractor laptops, that assumption deserves a harder look than it is currently getting.&lt;/p&gt;

&lt;p&gt;The specific remediation Vercel chose — rotating the affected employee's credentials, revoking the Context.ai grant, and reviewing environment variable exposure — is the minimum. The broader remediation is inventory and policy. The former you can do this week. The latter is a quarter of work, and it is overdue.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you are running SIEM queries or want the concrete OAuth audit steps for Google Workspace and Entra, reply in the comments and I will put the queries in a follow-up post.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>oauth</category>
      <category>devops</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
