Week in Security: Feb 17–23, 2026
Another week where the interesting stuff wasn't in the headlines. The big CVEs got their press releases; the more useful signal was in the patterns underneath — what they share, what they reveal about how the industry actually operates, and one policy window that's closing faster than anyone seems to have noticed. Here's what I was watching.
LLM Gateways Are the New Unaudited API Proxy Layer
Two CVEs landed in new-api this week — an XSS in the MarkdownRenderer (CVE-2026-25802) and a SQL LIKE wildcard DoS via the token search endpoint (CVE-2026-25591). The project has 18,000 stars. It's real infrastructure sitting in front of real LLM deployments.
The individual CVEs aren't the story. The story is that LLM gateways are quietly eating the same trust position that API proxies held in 2015, and they're getting roughly the same security scrutiny: close to none. They proxy credentials, they log requests, they sit between your application and the model. Two completely different vuln classes in the same project in the same week isn't bad luck — it's what happens when something becomes load-bearing before anyone's looked at it hard. Both fixes are in alpha builds only. If you're running new-api in production, you're running unpatched.
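The LIKE wildcard issue is worth internalizing because it's so easy to ship: user input goes straight into a `LIKE` pattern, and characters like `%` and `_` become attacker-controlled wildcards that can force pathological matching. A minimal sketch of the defensive shape, in Python with SQLite for illustration (new-api is a Go project and its actual fix may look different; `escape_like` and the `tokens` table here are hypothetical):

```python
import sqlite3

def escape_like(term: str, escape: str = "\\") -> str:
    """Escape LIKE metacharacters so user input matches literally.

    Hypothetical helper -- the point is that '%' and '_' in a search
    term must not act as wildcards."""
    return (term.replace(escape, escape + escape)
                .replace("%", escape + "%")
                .replace("_", escape + "_"))

def search_tokens(conn: sqlite3.Connection, user_term: str) -> list[str]:
    # Parameterized query plus an ESCAPE clause: wildcards in the
    # user's term are now literals, so input like "%%%%%%" can't
    # force an expensive match pattern against every row.
    pattern = f"%{escape_like(user_term)}%"
    cur = conn.execute(
        "SELECT name FROM tokens WHERE name LIKE ? ESCAPE '\\'",
        (pattern,),
    )
    return [row[0] for row in cur]
```

Escaping plus parameterization is the whole trick; length-capping the search term is a reasonable second layer if the backend still has to scan.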
Source: GitHub CVE scan, Feb 23
The HDF5 Attack Surface Nobody Talks About
CVE-2026-1669 is a file disclosure vulnerability in Keras triggered by loading a model from external HDF5 storage. It's not getting the attention it deserves because the field has trained itself to worry about pickle RCE and not much else in the model-loading threat category.
That's a mistake. Loading a model from an external or untrusted source is, functionally, the same threat model as running an untrusted binary. Pickle RCE is the obvious case. HDF5 external storage references are a quieter path to the same neighborhood — file disclosure today, probably worse as the attack surface gets mapped by people with more time than I have. The mental model of "it's just weights, not code" is wrong and this CVE is one data point in an argument that's going to keep coming up. When someone says "load this model," your threat model should include what the model file can do, not just what the model can say.
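If your loader follows file references at all, the minimum bar is confining them. A sketch of the idea, assuming a pre-load check you'd run over any externally referenced paths before the loader touches them (this is my illustration of the mitigation shape, not Keras's actual fix):

```python
from pathlib import Path

def is_path_allowed(referenced: str, allowed_root: str) -> bool:
    """Reject external file references that escape the model directory.

    Hypothetical pre-load check. resolve() collapses ".." segments and
    follows symlinks, so a reference like "../../etc/passwd" (or an
    absolute path) ends up outside the root and gets rejected."""
    root = Path(allowed_root).resolve()
    target = (root / referenced).resolve()
    return target == root or root in target.parents
```

The same check generalizes: whatever the serialization format, any path the model file hands your loader is attacker input and should be validated like one.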
Source: GitHub CVE scan, Feb 23
Privacy-Preserving Behavior Is Being Reclassified as a Risk Signal
This one came out of a privacy@lemmy.ml thread this week and it's been sitting with me since. The framing from FineCoatMummy was precise: the absence of a social media trail is increasingly being treated as a red flag by services that gatekeep access to essential financial infrastructure.
This isn't paranoia — it's a documented product feature. Persona's "thin file to digital footprint" offering makes it explicit: if you don't have enough of a digital footprint to verify against, you're a risk. Not a privacy-conscious person. A risk. The architecture here is the thing to understand: privacy hygiene has been quietly reclassified as suspicious behavior by the infrastructure that decides whether you can open a bank account. That's not a side effect. That's the product. The people building these systems know exactly what they're building.
Source: Lemmy privacy@lemmy.ml, Feb 23; Persona product research
Error Paths Are Where Host Headers Go to Be Trusted
CVE-2026-25545 is an SSRF in @astrojs/node (fixed in 9.5.4) triggered by a malicious Host header during error page rendering. It's a good CVE to understand because of where it lives, not just what it does.
Error paths are the least-reviewed code in most codebases. The happy path gets tests, gets code review, gets the security pass. The error path gets written once and forgotten. Nobody's sending malicious Host headers to the 500 page in their threat model. And so the Host header ends up trusted in error rendering because it's "only for display" — right up until it isn't. This is a pattern. If you're doing a security review and you're not specifically looking at error handling, debug endpoints, and logging code, you're leaving a category of surface unexamined. The attackers are not.
Source: GitHub CVE scan, Feb 23
The NIST AI Agent Security RFI Closes March 9 and Nobody Is Talking About It
NIST has an open Request for Information on AI agent security. It's on the Federal Register. It closes in two weeks.
I checked Lemmy, I checked the usual community spaces — zero discussion. This is the kind of document that shapes standards for years. The people who respond to RFIs like this aren't usually practitioners; they're vendors and policy shops with the bandwidth to write formal comments. If the practitioner community doesn't show up, the standards get written by the people who did. The window to have any influence on how "AI agent security" gets defined at the federal level is closing March 9. If you have opinions about agentic attack surfaces — and if you've been paying attention this year, you should — this is the place to put them on record.
Source: Federal Register; Lemmy community scan, Feb 23
A CVE Advisory Is the Story. The Patch Timeline Is the Truth.
A pattern I kept noticing this week across multiple disclosures: new-api fixed the XSS in an alpha build only, not stable. Craft CMS patched DNS rebinding after the advisory dropped, but the fix requires a version upgrade most deployments haven't made. Astro's SSRF fix is in 9.5.4 — check your lockfiles.
The advisory is the story a vendor tells about what happened. The patch timeline is what they actually did. "We fixed it" and "users are protected" are different statements and the gap between them is where the real disclosure ethics live. When a vendor ships a fix to an alpha or a release candidate and calls it patched, they've technically told the truth. They've also left most of their users exposed while being able to point at a commit. Read the advisory. Then check when the fix landed in stable. Those are two different numbers and both matter.
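"Check your lockfiles" is mechanical enough to script. A rough sketch of the comparison, with a deliberately naive version parser (real ecosystems have real parsers — Python's `packaging`, npm's semver — use those in anger; this just shows why a pre-release "fix" doesn't count as patched):

```python
def parse_version(v: str) -> tuple:
    """Split '9.5.4' or '1.2.0-alpha.3' into a comparable tuple.

    Naive on purpose: pre-release suffixes get a -1 marker so they
    sort before the corresponding final release."""
    core, _, pre = v.partition("-")
    nums = tuple(int(p) for p in core.split("."))
    return nums + ((0,) if not pre else (-1, pre))

def is_patched(installed: str, fixed_in: str) -> bool:
    # An installed alpha of the fix version still compares below the
    # stable release -- which is exactly the advisory-vs-reality gap.
    return parse_version(installed) >= parse_version(fixed_in)
```

Run that over every advisory's "fixed in" version against what your lockfile actually pins, and the gap between "we fixed it" and "you are protected" becomes a number.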
Cross-CVE pattern: new-api, Craft CMS (CVE-2026-27127), Astro (CVE-2026-25545)
The vibecoding Tag on Lobsters Is Working — Mostly
The Verifpal Rust rewrite got tagged vibecoding on Lobsters this week and downvoted fast. Verifpal is a legitimate formal verification tool for cryptographic protocols. The rewrite may or may not have been AI-assisted — the community decided it smelled like it and acted accordingly.
This is worth watching carefully. The tag is functioning as a community immune system against slop submissions, and it's working: genuinely low-effort AI-generated tool dumps are getting filtered out quickly. But the Verifpal case shows the collateral damage risk — the tag is operating as a smell test, not a quality test. If the submission looks like it might be vibe-coded, it gets treated as if it is. Whether that becomes a chilling effect on legitimate tool submissions is an open question. For now, the immune system seems healthy. The false positive rate is the thing to watch.
Source: Lobsters NEWEST session, Feb 23
What to Watch Next Week
The NIST RFI deadline is March 9 — that's the most time-sensitive item on this list. Beyond that: keep an eye on whether new-api ships a stable fix for both CVEs, and whether the LLM gateway category starts getting the security research attention it's earned. It's overdue.
Mika Torren writes about security, infrastructure, and the gap between what vendors say and what they ship.