
Three Vulnerabilities That Quietly Rewrote the Threat Model in 2025


Every security vendor on the internet publishes a "top CVEs of the year" listicle. This isn't one of them.

What I want to do is take three vulnerabilities from 2025 that, individually, look like another round of patch-and-move-on — and show why, taken together, they describe a shift that most teams haven't internalized yet.

I've been building cybersecurity tooling in Rust under my company Ai2innovate SRL in Belgium (the product is called CyberXDefend — a forensics and incident response platform for EU law firms). That work keeps forcing me to stare at the gap between how we talk about vulnerabilities and how they actually compose into attack chains. These three CVEs are the ones that changed how I think about defense in 2025.

Here they are:

  1. CVE-2025-53770 — "ToolShell" — an insecure deserialization flaw in on-prem SharePoint that became a full unauthenticated RCE with persistence.
  2. CVE-2025-55182 — "React2Shell" — an unauthenticated RCE in React Server Components and Next.js that turned a server-side rendering optimization into a zero-click remote shell.
  3. CVE-2025-30066 — the tj-actions/changed-files compromise — a single malicious commit in a GitHub Action that briefly exfiltrated secrets from thousands of CI pipelines.

Each one is a different class of failure. That's why they matter together.


1. ToolShell (CVE-2025-53770) — the deserialization disease never went away

On-premises SharePoint servers got hit hard in July 2025. Microsoft issued out-of-band patches, CISA added the CVE to its Known Exploited Vulnerabilities catalog within days, and by the end of the month security researchers were reporting hundreds of confirmed compromises across government, healthcare, and legal sectors.

The root cause? Insecure deserialization. A class of bug we've known about since roughly 2016.

What actually happens

SharePoint accepts certain HTTP requests that contain serialized .NET objects — classically in __VIEWSTATE, AuthorizationCookie, or related fields. When the server deserializes these objects, it reconstructs their type graph by calling constructors and setters based on the serialized metadata. If an attacker can supply a serialized object of a type whose constructor does something useful — like spawning a process, writing a file, or loading an assembly — the attacker gets code execution as the web worker process. On SharePoint, that means NT AUTHORITY\SYSTEM-equivalent privilege on the host.
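The .NET specifics (BinaryFormatter, __VIEWSTATE, type graphs) don't port line-for-line, but the bug class is language-agnostic. Python's pickle makes the mechanism visible in a few lines. This is a sketch of the vulnerability class, not the ToolShell exploit; a harmless callable stands in for the process-spawning gadget a real attacker would use.

```python
import pickle

# Insecure deserialization in miniature. BinaryFormatter and pickle share
# the same flaw: the byte stream names a callable plus arguments, and the
# deserializer invokes it while reconstructing the object.
class Gadget:
    def __reduce__(self):
        # In a real attack this would be something like (os.system, ("...",)).
        # sorted() is a harmless stand-in that proves the mechanism.
        return (sorted, ([3, 1, 2],))

payload = pickle.dumps(Gadget())

# The "victim" only deserializes bytes, yet ends up calling the
# attacker-chosen function. loads() returns whatever that call returned.
result = pickle.loads(payload)
print(result)  # [1, 2, 3]: the embedded callable ran during deserialization
```

Note that the victim code never mentions Gadget; merely loading the bytes is enough, which is exactly why "we only deserialize, we never execute anything" is a false sense of safety.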

The fix for ToolShell boiled down to: don't blindly deserialize attacker-supplied input with types drawn from the full BCL. Validate against a strict type allowlist. Use DataContractSerializer instead of BinaryFormatter-adjacent primitives. Notably, ToolShell was itself a bypass of an earlier, incomplete patch (CVE-2025-49704), which is why 53770 needed a second, harder fix.

Why this one matters

Three reasons:

First, it's the same bug class that hit us in ViewState abuse in 2017, in Log4Shell (yes, JNDI-triggered deserialization is the same family) in 2021, and in countless Java gadget chains in between. We keep shipping deserializers that trust their input. The defense — strict type allowlists, schema-validated formats, and moving serialization to data-only protocols like Protocol Buffers with explicit message definitions — is known. It's just not uniformly applied.
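The allowlist defense is mechanical enough to sketch. Here it is in Python's pickle API, mirroring what a .NET SerializationBinder or DataContractSerializer with known types does: every type the stream names is checked against an explicit list before anything is constructed. The type list and helper names are illustrative, not a library API.

```python
import io
import pickle

# Only these (module, name) pairs may ever be constructed from the wire.
ALLOWED = {("builtins", "dict"), ("builtins", "list"),
           ("builtins", "str"), ("builtins", "int")}

class StrictUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every global the stream references; reject anything
        # outside the allowlist before it can run.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"forbidden type: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return StrictUnpickler(io.BytesIO(data)).load()

# Plain data round-trips fine.
doc = {"user": "alice", "ids": [1, 2]}
assert safe_loads(pickle.dumps(doc)) == doc

# A stream that names an arbitrary callable (eval here, standing in for
# os.system) is refused before anything executes.
class Evil:
    def __reduce__(self):
        return (eval, ("1+1",))

blocked = False
try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError:
    blocked = True
assert blocked
```

The design point is that the allowlist is closed by default: new types must be added deliberately, which is the inverse of BinaryFormatter's "anything in the BCL" posture.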

Second, SharePoint sits at a terrible intersection: it's widely deployed on-prem in exactly the regulated industries (government, law, healthcare) that can't move to cloud quickly, and it's an authentication and document system. A compromise here doesn't just give you a shell — it gives you privileged access to the documents the organization considers most sensitive.

Third, the ToolShell chain demonstrated durable access. The exploit didn't just execute code; it extracted the ASP.NET machine keys, which let attackers mint their own valid ViewState payloads indefinitely. Even after patching, organizations that didn't rotate those keys stayed compromised. This is the pattern I'd watch for in 2026: attackers stealing long-lived cryptographic material so that patching doesn't evict them.
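Why stolen machine keys survive patching is easy to show with a toy model. ViewState is, at its core, attacker-visible data plus an HMAC under the server's machine key; the real ASP.NET MAC derivation is more involved than this sketch, which only demonstrates the trust model.

```python
import hashlib
import hmac

# Toy ViewState: payload plus a 32-byte HMAC-SHA256 tag under the key.
def sign(payload: bytes, machine_key: bytes) -> bytes:
    return payload + hmac.new(machine_key, payload, hashlib.sha256).digest()

def verify(blob: bytes, machine_key: bytes):
    payload, tag = blob[:-32], blob[-32:]
    good = hmac.new(machine_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, good) else None

key = b"machine-key-v1"
stolen = key  # the ToolShell chain exfiltrated exactly this material

# Patching the deserialization bug changes nothing here: with the old key,
# the attacker can still mint blobs the server will accept as authentic.
forged = sign(b"attacker-viewstate", stolen)
assert verify(forged, key) == b"attacker-viewstate"

# Rotating the key is what actually evicts them.
assert verify(forged, b"machine-key-v2") is None
```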

What to actually do

  • If you run on-prem SharePoint: patch, rotate machine keys, hunt for anomalous ASPX files in LAYOUTS.
  • If you write .NET services: audit every deserialization call. Anything touching BinaryFormatter, NetDataContractSerializer, SoapFormatter, or LosFormatter with untrusted input is a latent RCE.
  • More broadly: treat "can this deserializer construct arbitrary types?" as a yes/no security property of every service boundary.

2. React2Shell (CVE-2025-55182) — when a rendering optimization becomes an exploit primitive

This one is for the web developers reading. React2Shell affected React 19's Server Components (RSC) and, by extension, Next.js applications that used them. It was a zero-click, unauthenticated remote code execution triggered by a single crafted HTTP request.

What actually happens

React Server Components were introduced to solve a real problem: server-render part of the component tree, stream it to the client, hydrate selectively. To make this work, the server and client pass around serialized component references — basically, a protocol that says "here is a component, here are its props, here are the server actions bound to it."

The vulnerability lived in how the framework resolved server actions. Under specific conditions, a malicious HTTP request could supply a serialized reference that caused the server to invoke a function with attacker-controlled arguments, in a path that wasn't meant to be directly reachable from the client. The result: arbitrary code execution on the Next.js server process.

Because RSC is part of the default rendering pipeline in modern Next.js apps, exploitation required no authentication, no user interaction, and no unusual configuration. By mid-December 2025, Shadowserver reported tens of thousands of vulnerable IPs on the public internet. AWS publicly named Chinese state-linked groups — Earth Lamia, Jackpot Panda — as already exploiting it.

Why this one matters

SSR and SSG were supposed to be the boring, secure alternative to client-heavy SPAs. React2Shell inverted that assumption. The moment you ship server-side rendering with framework-native RPC (which is what server actions are, underneath the ergonomics), you've created a new attack surface: the set of server functions reachable via the rendering protocol.

The bug itself is a symptom of a broader issue. Modern frameworks have been moving toward what I'd call implicit RPC: you write a function, the framework makes it callable from the network, the wire format is hidden. This is wonderful for DX and terrible for threat modeling. You can't audit an attack surface you can't see.
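Here is the implicit-RPC pattern reduced to its essence, in framework-neutral Python. The names (implicit_dispatch, ACTIONS) are illustrative, not the actual RSC wire protocol; the point is the difference between "reachable because it exists" and "reachable because it was registered."

```python
# Two server functions. Only one was ever meant to face the network.
def charge_customer(amount: int) -> str:
    return f"charged {amount}"

def internal_rotate_keys() -> str:
    return "keys rotated"

# The dangerous shape: the framework resolves any function named in the
# request. Every function's existence silently becomes attack surface.
def implicit_dispatch(request: dict) -> str:
    return globals()[request["action"]](*request["args"])

# The explicit shape: a registry is the only door in, and each entry
# declares whether the caller must be authenticated.
ACTIONS = {"charge_customer": (charge_customer, True)}

def explicit_dispatch(request: dict, authenticated: bool) -> str:
    entry = ACTIONS.get(request["action"])
    if entry is None:
        raise PermissionError("unknown action")
    fn, needs_auth = entry
    if needs_auth and not authenticated:
        raise PermissionError("auth required")
    return fn(*request["args"])
```

With the implicit shape, a request naming internal_rotate_keys just works; with the explicit registry, it fails closed, and the registry itself is an auditable artifact, which is exactly what the hidden RSC protocol denies you.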

This is also why static typing alone doesn't save you here. TypeScript told you the function signatures. It did not tell you which of those functions the framework would expose to unauthenticated HTTP requests, or under what routing conditions.

What to actually do

  • Update React 19 and Next.js to the patched versions (and keep updating — there have been follow-on advisories).
  • Audit every server action and RSC boundary. Ask: what invariants does this function assume about its caller? If the answer is "it's only called from trusted server code," that's no longer true.
  • Put authentication and authorization on server actions explicitly. Don't rely on "this function isn't in the route table" as a security boundary.
  • Consider WAF rules that detect RSC protocol anomalies — the signatures for exploitation are narrow and detectable.

If I had to summarize the React2Shell lesson in one sentence: any framework that auto-exposes your functions to the network is a framework you have to audit as a network service, not as a library.


3. The tj-actions/changed-files compromise (CVE-2025-30066) — trust at the CI boundary

In March 2025, someone compromised the maintainer account for tj-actions/changed-files, a GitHub Action used in tens of thousands of CI workflows. They pushed a malicious version that dumped environment variables — including secrets — to the workflow log, where an attacker monitoring the repo could scrape them.

It was caught relatively quickly. But for a window of hours, any CI pipeline using the action with @v35 (or similar unpinned references) was leaking AWS keys, GitHub tokens, Docker registry credentials, cloud provider secrets — everything.

What actually happens

The technical mechanism is almost banal. A GitHub Action is just code that runs inside your CI runner with access to the secrets you've given the workflow. If you reference an action by a mutable tag (@v35, @main) instead of an immutable commit SHA, you're trusting whoever controls that tag to not be malicious. When the maintainer's account is compromised, that trust is violated.

The malicious code did something clever: it read process memory for the runner's internal secret store and printed obfuscated versions to stdout, which ends up in workflow logs. If the repo had public logs, the secrets were public. If the repo had private logs, any collaborator could still read them.

Why this one matters

This is the supply chain attack everyone predicted and most organizations are still not defended against. And it matters disproportionately because of what CI pipelines have access to:

  • Production deployment credentials
  • Cloud infrastructure tokens
  • Package registry publish keys (so the attacker can compromise your downstream users)
  • Source code signing keys
  • Database migration credentials

A CI compromise is a production compromise, and often a customer compromise. The blast radius is enormous.

From an EU regulatory angle, this is also where NIS2 and the Cyber Resilience Act start to bite. Under NIS2, "supply chain security" is an explicit management obligation for in-scope entities. A tj-actions-style incident that leaks credentials and cascades into customer impact is no longer just an engineering problem — it's a board-reportable incident in many EU jurisdictions.

What to actually do

Concrete checklist — these are ordered by cost-to-implement:

  1. Pin GitHub Actions to commit SHAs, not tags. uses: tj-actions/changed-files@a5b3c7d... not @v35. This alone would have prevented most of the damage.
  2. Run Dependabot or Renovate to keep the pinned SHAs current while still being explicit about what you trust.
  3. Limit secret scope per workflow. Don't give the linter job your production AWS keys.
  4. Use OIDC federation where your cloud provider supports it — no long-lived secrets stored in GitHub at all.
  5. Monitor your workflow logs for unexpected environment variable access patterns. There are open-source tools for this; I've been thinking about what a Rust-native version would look like.
  6. Keep a software bill of materials (SBOM) for your build pipeline itself, not just your application dependencies. Your CI config is part of your software.
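Item 1 on the list is also the easiest to automate. A quick audit sketch (not a production tool): scan workflow YAML for uses: references whose ref is anything other than a full 40-character commit SHA. The sample workflow and the SHA in it are made up for illustration.

```python
import re

# Match "uses: owner/repo[@/subpath]@ref" lines in workflow YAML.
USES_RE = re.compile(r'uses:\s*([\w.-]+/[\w.-]+(?:/[\w.-]+)?)@(\S+)')
# A pinned ref is exactly 40 lowercase hex characters.
SHA_RE = re.compile(r'^[0-9a-f]{40}$')

def unpinned_actions(workflow_yaml: str) -> list[str]:
    """Return every action reference that is not pinned to a commit SHA."""
    return [f"{action}@{ref}"
            for action, ref in USES_RE.findall(workflow_yaml)
            if not SHA_RE.match(ref)]

workflow = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: tj-actions/changed-files@d6e91a2266cdb9d62096cebf1e8546899c6aa18f
      - uses: some-org/deploy@main
"""

print(unpinned_actions(workflow))
# ['actions/checkout@v4', 'some-org/deploy@main']
```

Dropping this into a pre-merge check turns "we should pin our actions" from a policy document into a failing build.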

What these three have in common

Look at them together:

  • ToolShell is a decades-old input-trust bug class (insecure deserialization) that persists because we keep ignoring it in the legacy code nobody wants to rewrite.
  • React2Shell is a modern architecture bug: we built frameworks that make the network invisible to developers, and attackers noticed.
  • tj-actions is a trust bug: the CI supply chain is the most privileged, least audited part of most organizations.

The common thread is invisible trust boundaries. ToolShell exploits a trust boundary that developers forgot existed (the deserializer). React2Shell exploits a trust boundary the framework hid from them (the RSC protocol). tj-actions exploits a trust boundary that was never made explicit (the third-party Action).

If I had to predict where 2026 goes, it's this: the next round of high-impact CVEs will also be invisible-trust-boundary bugs. The attack surface is no longer "your server on port 443." It's every implicit contract your code has with a library, a framework, a build step, or a runtime.

The defensive move is to make those boundaries explicit — in code, in threat models, in operational tooling.

That's what I'm building toward with CyberXDefend, and it's what I think the industry has to converge on.


Further reading

If you want to go deeper on any of these:

  • Microsoft's Security Response Center advisories for ToolShell and the follow-on CVE chain
  • The Next.js security advisories for CVE-2025-55182 and related issues
  • StepSecurity's and Wiz's write-ups on the tj-actions incident — both have solid timelines
  • OWASP's 2025 Top 10 update, which added "Software Supply Chain Failures" as the third-most critical AppSec risk
  • CISA's Known Exploited Vulnerabilities catalog (check your own stack against it monthly — this is free and most teams don't do it)

If any of this is relevant to a project you're working on — EU law firm forensics, NIS2 readiness, or hardening a Rust/Next.js stack against the classes of bugs above — I'm reachable at the links below. I also do free 30-minute architecture reviews for teams under NIS2 scope; it's a good way for me to learn what's actually breaking out there, and for you to get a second pair of eyes.

Darshan Kumar
Founder, CyberXDefend
GitHub: DarshanKumar89 | X: @darshan_aqua | Website: CyberXDefend
