
React2Shell in the Wild: Tracking and Disrupting Scanner Pipelines

What is React2Shell and why are scanners so noisy?

React2Shell (CVE-2025-55182) is described as a pre-authentication remote code execution vulnerability in React Server Components. The issue affects specific React Server Components-related packages and versions where server function endpoints may unsafely deserialize attacker-controlled data from HTTP requests.

Affected versions (as published):

  • React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0

Fixed versions (as published):

  • 19.0.1, 19.1.2, and 19.2.1

Because React Server Components are widely used downstream (notably via frameworks like Next.js), downstream advisories and patched releases rapidly followed.

Why the Internet got loud: scanners do not need credentials; they only need an HTTP surface that exposes a server function endpoint path. Many deployments also expose non-standard ports (dev, preview, internal, and misconfigured edge paths), so "enumerate everything and spray a probe" works.


Telemetry source: a React2Shell-oriented honeypot with rule-based classification

The honeypot used rule matching to classify inbound probes. React2Shell-specific probes dominated the sample window, with the honeypot rule react2shell_probe_root at 2222 hits and react2shell_probe_next also significant. A background baseline of common recon (.env, .git/config, robots.txt) was present, but secondary in volume.

Figure: Matched rule frequency on the honeypot, showing React2Shell probes (react2shell_probe_root at 2222 hits and a large react2shell_probe_next component) outweighing generic recon (.env, .git/config, robots.txt, etc.).
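
As a rough illustration of how that kind of rule matching can work, here is a minimal sketch. The rule names react2shell_probe_root and react2shell_probe_next come from the honeypot data above; the matching criteria and the classify helper are my own assumptions, not the honeypot's actual implementation.

```python
# Hypothetical rule set: the rule names are real, the predicates are assumptions.
RULES = [
    ("react2shell_probe_root", lambda r: r["method"] == "POST"
        and r["path"] == "/"
        and r["content_type"].startswith("multipart/form-data")),
    ("react2shell_probe_next", lambda r: r["method"] == "POST"
        and r["path"].startswith("/_next")
        and r["content_type"].startswith("multipart/form-data")),
    ("env_recon",    lambda r: r["path"].endswith("/.env")),
    ("git_recon",    lambda r: r["path"].endswith("/.git/config")),
    ("robots_recon", lambda r: r["path"] == "/robots.txt"),
]

def classify(request: dict) -> str:
    """Return the first matching rule name, or 'unmatched'."""
    for name, predicate in RULES:
        if predicate(request):
            return name
    return "unmatched"

# Example: a React2Shell-style probe against /_next
probe = {"method": "POST", "path": "/_next", "content_type": "multipart/form-data; boundary=x"}
print(classify(probe))  # react2shell_probe_next
```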


Initial scanning waves and endpoint selection

Two early views illustrate bulk scanning behavior. The notable signals are:

  • Consistent use of HTTP POST
  • Targeting both / and /_next
  • Payloads delivered as multipart/form-data
  • Fast repetition, indicative of automated tooling rather than manual testing

Figure: High-volume request list showing repeated React2Shell-style POST probes, consistent with automated scanning against the honeypot.

Figure: Additional high-volume request list showing repeated POST probes against /_next, indicating the scanner targets multiple framework-specific paths.

From a defensive logging perspective, the /_next targeting matters because it immediately suggests a Next.js-aware scan strategy rather than generic "POST /" spraying.
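
If you want that signal in your own logs, a minimal filter might look like the sketch below. The field names (src_ip, method, path, content_type) and the CSV input are assumptions about your logging format, not anything specific to this honeypot.

```python
import csv
from collections import Counter

def nextjs_aware(entry: dict) -> bool:
    """Flag requests that look framework-aware rather than generic spraying."""
    framework_path = entry["path"].startswith("/_next")
    action_shaped = (entry["method"] == "POST"
                     and entry["content_type"].startswith("multipart/form-data"))
    return framework_path or (action_shaped and entry["path"] == "/")

def summarize(log_path: str) -> Counter:
    """Count framework-aware probes per source IP from a CSV access log."""
    hits = Counter()
    with open(log_path, newline="") as fh:
        for entry in csv.DictReader(fh):
            if nextjs_aware(entry):
                hits[entry["src_ip"]] += 1
    return hits

# Usage: print the noisiest sources first
# for ip, count in summarize("access_log.csv").most_common(10):
#     print(ip, count)
```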


Honeypot filtering via deterministic command execution

The clearest "are you real" validation probe used a small deterministic computation executed server-side. The payload attempted to run a shell expression that evaluates 41*271, which equals 11111. The use of a highly recognizable result (11111) is typical: it is easy to check automatically and avoids issues with whitespace or localization.

Observed attributes:

  • The request body included React2Shell-style structures like resolved_model and __proto__ references, consistent with known exploit patterns.
  • The payload executed a command via Node.js facilities (process.mainModule.require('child_process')...).
  • The probe also used a redirect-style construct (NEXT_REDIRECT) to force a realistic-looking response flow, which helps filter naive traps that return static error pages.

Figure: Pre-exploitation validation probe from 45.156.87.99 using an RCE check that echoes a math result (41*271 = 11111) and blends into expected framework behavior by triggering a redirect-style response.

Figure: ASN and hosting provider identified for 45.156.87.99, which ran the honeypot filtering probe.

How scanners confirm execution before escalation

Many decoys respond with static templates or obvious canned content. A deterministic arithmetic check is a cheap way to confirm "my payload executed and I got output that depends on my input." If your honeypot returns a plausible redirect chain and a correct computed value, scanners that use a two-stage pipeline will often escalate immediately, which is exactly what happened in other samples.
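
To make that concrete, here is a sketch of what that proof logic looks like from the scanner's side, under the assumption that it simply checks whether the product of two factors appears somewhere in the response. The function names are hypothetical; this is a reconstruction of the pattern, not the scanner's code.

```python
import random
import re

def build_proof() -> tuple[str, int]:
    """Pick two factors and return the expression to inject plus the expected product."""
    a, b = random.randint(10, 99), random.randint(100, 999)
    return f"{a}*{b}", a * b

def looks_vulnerable(response_body: str, expected: int) -> bool:
    """A target 'proves' execution if the computed product shows up in its response."""
    return re.search(rf"\b{expected}\b", response_body) is not None

expr, expected = build_proof()  # e.g. "41*271", 11111
# ...embed expr in the exploit payload, send it, then:
print(looks_vulnerable("NEXT_REDIRECT ... 11111 ...", 11111))  # True for an echoing target
```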


Second stage payload delivery (dropper behavior)

After the validation step, the next behavior observed was a staged payload retrieval:

  • A follow-up exploit payload attempted to fetch a resource from an attacker-controlled HTTP endpoint.
  • The response body was piped into a local file on disk.
  • The file permissions were modified to make it executable.

This is classic "RCE to installer" behavior. The attacker is not interested in the application itself; they want a persistent or monetizable agent.

Figure: Second stage payload from 143.20.64.84 that attempts to download a file from a remote host and write it to disk (/dev/lrt), then chmod it executable.

Figure: ASN and hosting provider identified for 143.20.64.84, the host used in the second stage payload chain.

Observed dropper IOCs worth extracting

Even without reproducing full exploit strings, the operational intent is clear and yields useful defender indicators:

  • Suspicious file targets: /dev/lrt, /etc/lrt, and execution references to /lrt
  • A recurring remote path pattern: /nuts/poop
  • Repeated use of process.mainModule.require(...) to access http, fs, and child_process from within the injected context
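
Those indicators are trivial to turn into a log or artifact sweep. A minimal sketch follows; the indicator strings mirror the observations above, while the file handling and output format are my own.

```python
import sys

# Indicators pulled from the observed dropper traffic.
IOC_SUBSTRINGS = [
    "/dev/lrt", "/etc/lrt",
    "/nuts/poop",
    "process.mainModule.require",
]

def scan_lines(lines):
    """Yield (line_number, indicator, line) for every IOC hit."""
    for n, line in enumerate(lines, start=1):
        for ioc in IOC_SUBSTRINGS:
            if ioc in line:
                yield n, ioc, line.rstrip()

if __name__ == "__main__":
    # Usage: python ioc_sweep.py access.log
    with open(sys.argv[1], errors="replace") as fh:
        for n, ioc, line in scan_lines(fh):
            print(f"{sys.argv[1]}:{n}: {ioc}: {line[:120]}")
```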

Command execution beyond payload retrieval (rudimentary defense evasion)

Another request in the same general pattern attempted to kill a process named watcher via pkill -9 watcher.

That suggests one of:

  • A real-world expectation that some environments run a process with that name (monitoring agent, sandbox tool, a competitor miner, or a honeypot component).
  • A simple "kill obvious monitoring" step copied from prior campaigns.

Figure: React2Shell payload attempt from 195.3.222.78 issuing a command to terminate a process named watcher, indicating post-exploitation command execution and possible defense evasion.

Figure: ASN and hosting provider identified for 195.3.222.78, the host that attempted the pkill -9 watcher step.


Same toolkit, different node, different staging server

A near-identical second stage download attempt came from 87.121.84.24, but it retrieved the payload from a different host (77.110.115.3) and wrote to a different location (/etc/lrt) before chmod. This can indicate:

  • Multiple staging servers for redundancy
  • A/B testing to see what file paths survive
  • A split between "scanner" infrastructure and "payload CDN" infrastructure

Figure: React2Shell dropper variant from 87.121.84.24 fetching a payload from 77.110.115.3 and writing it to /etc/lrt before chmod.

Figure: ASN and hosting provider identified for 87.121.84.24, one of the core scanner nodes in the observed cluster.

Infrastructure inference from the payload hosts

At this point, the data supports an operator that maintains at least:

  • One or more scanner sources (hitting targets)
  • One or more payload servers (hosting the retrieved binary)
  • A separate reporting endpoint (covered later)

Grouping activity by request header fingerprinting

The strongest clustering feature in this dataset is a pair of custom request identifiers:

  • x-nextjs-request-id: poop1234
  • x-nextjs-html-request-id: ilovepoop_<number>

These appear repeatedly across multiple requests and multiple source IPs, even as user agents vary widely (macOS Chrome 134, Android Chrome 127, ChromeOS, Linux Chrome 134).

That combination is not something real browsers emit by default. It looks like the actor's scanner is stamping a constant "tool id" and a per-request numeric id.

Two later screenshots show the same header scheme reappearing from additional IPs, reinforcing that the operator is rotating sources while keeping the same tooling fingerprint.

Figure: Follow-on attempt from 82.23.183.131 that matches the same tool family based on the reused x-nextjs-request-id value and consistent dropper behavior.

Figure: Follow-on attempt from 45.194.92.20 using the same tool headers and a payload that attempts to execute /lrt in the background, further confirming toolkit reuse across source IPs.

Fingerprinting the operator from the dataset

In this scenario, to determine whether a single operator is responsible, ignore the geolocation data and focus on these persistent artifacts:

  1. Header fingerprints: presence of next-action: x plus the two x-nextjs-* ids with distinctive constant values.
  2. Multipart body shapes: similar multipart/form-data structure, with small fixed content lengths for command-only payloads and larger ones for download actions.
  3. Payload semantic patterns: the same file paths and remote resource path segments (/nuts/poop, lrt).
  4. Response handling strategy: reuse of redirect behavior or other framework-like artifacts to keep responses consistent.

Here the IP address alone is a weak pivot because these samples show deliberate source rotation.
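
A sketch of how those four artifacts collapse into a single clustering key; the field names (headers, body, content_type, src_ip) reflect how I happened to record requests, not any standard schema.

```python
from collections import defaultdict

def fingerprint(req: dict) -> tuple:
    """Build a tool fingerprint that survives source-IP rotation."""
    headers = {k.lower(): v for k, v in req["headers"].items()}
    return (
        "next-action" in headers,                              # 1. header fingerprints
        headers.get("x-nextjs-request-id"),                    #    constant tool id (poop1234)
        headers.get("x-nextjs-html-request-id", "").split("_")[0],
        req["content_type"].split(";")[0],                     # 2. multipart body shape
        "/nuts/poop" in req["body"] or "lrt" in req["body"],   # 3. payload path semantics
        "NEXT_REDIRECT" in req["body"],                        # 4. response handling strategy
    )

def cluster(requests: list[dict]) -> dict[tuple, set[str]]:
    """Group source IPs by fingerprint; one dominant cluster suggests one operator."""
    groups = defaultdict(set)
    for req in requests:
        groups[fingerprint(req)].add(req["src_ip"])
    return dict(groups)
```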


A separate actor identified: direct reverse shell attempt

Not all probes followed the staged dropper path. One attempt tried to establish an interactive shell channel back to the source via a TCP socket, by spawning /bin/sh and wiring stdin/stdout/stderr through a network connection.

This is typical of operators who want immediate control rather than a deployed agent.
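
For triage it helps that this payload style uses different substrings than the dropper family. A rough body classifier follows; the marker lists are assumptions based on typical Node.js reverse-shell constructs and the dropper traits described above, not transcriptions of the observed payloads.

```python
# Markers for the two payload families seen against this honeypot.
DROPPER_MARKERS = ["/nuts/poop", "lrt", "chmod"]
REVERSE_SHELL_MARKERS = ["/bin/sh", "spawn", ".connect(", "stdin", "stdout"]

def classify_payload(body: str) -> str:
    """Rough split between the staged dropper family and direct reverse-shell attempts."""
    if sum(marker in body for marker in REVERSE_SHELL_MARKERS) >= 3:
        return "reverse_shell"
    if any(marker in body for marker in DROPPER_MARKERS):
        return "dropper"
    return "unknown"
```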

Figure: Separate React2Shell attempt from 193.142.147.209 using a reverse shell-style payload (spawn shell, connect back on TCP port 12323).

Figure: ASN and hosting provider identified for 193.142.147.209, associated with the reverse shell-style attempt.

This attempt did not display the same custom request id headers seen in the poop1234 cluster, so it is a reasonable candidate for a different operator or toolkit.


IP and ASN analysis: roles, providers, and relationships

The collected IPs fall into two broad categories:

  • Scanner and execution sources: the hosts that directly sent exploit-shaped requests to the honeypot.
  • Support infrastructure: payload servers and result collection endpoints.

Core tool family (the request id cluster)

Observed sources with the reused x-nextjs-request-id fingerprint include:

  • 87.121.84.24 and 45.194.92.20, both tied to AS215925 (VPSVAULT.HOST LTD) with originated /24 ranges that include 87.121.84.0/24 and 45.194.92.0/24.
  • 82.23.183.131, within 82.23.183.0/24 announced by AS214062 (ITITAN HOSTING SOLUTIONS SRL), labeled "Private Customer" for that netblock on IPinfo and BGP sources.
  • 195.3.222.78 (shown in screenshot as a Polish hosting org, MEVSPACE)
  • 143.20.64.84 (shown in screenshot as Poland-based, also used as a payload host in a dropper chain)

Payload and logging infrastructure

Two key support nodes appear:

  • Payload host: 77.110.115.3 (referenced in the download attempt from 87.121.84.24)
  • Logging endpoint: 217.144.184.100:8080, receiving bulk scan results via POST /log (which we will see later)

The logging endpoint's /24 is associated with AS216246 (Aeza Group LLC) in IPinfo range data.
The payload infrastructure used in the observed sample is consistent with Aeza-related hosting as well (Aeza International is AS210644).

A separate validation probe cluster

The math-based probe from 45.156.87.99 used a different user agent string (Scanner/24.10) and did not show the "poop1234" header pattern. That is a strong indicator of a second tooling family.


The misconfigured VPS network that let me see inside the scanner's outbound activity

One of the scanning nodes sat on a VPS provider network where isolation was not properly enforced. That let me observe network traffic that should have been isolated, including the scanner's outbound probes to other targets.

Figure: Intercepted outbound scan traffic originating from 87.121.84.24 targeting external hosts on non-standard ports. The payload shape matched my inbound observations, including the same request id markers and "math expression" proof logic.

Figure: Additional intercepted outbound scans showing the scanner sweeping multiple IPs and ports while reusing the same header fingerprint and payload structure.

A few things jumped out immediately once I could observe outbound behavior:

  • The scanner did not just hit obvious web ports.
  • It reused the same "solve this arithmetic and reflect it back" validation approach I had already seen inbound.
  • It treated targets as vulnerable or not based on response semantics, not just status code.

That combination directly set up what happened next.

Poisoning the scanner's results feed

This phase only became possible because the VPS network the scanner ran on was misconfigured. The lack of proper tenant isolation let me observe how the scanner validated targets and how it reported findings, which gave me a clear view of what it would accept as "proof."

Once I understood the validation logic, I shifted from passively collecting probes into actively shaping responses. I started answering probes in a way that satisfied the scanner's proof checks, which made it record false positives at scale.

Figure: My interception layer responding to outbound probes in a way that satisfied the scanner's proof logic. The logs showed it embedding arithmetic like 146197+true+7467704 and 343175+true+6374154, which I computed and reflected back so the scanner recorded them as vulnerable.

Those arithmetic expressions were the scanner's verification mechanism. It tested whether it could get the target to evaluate something and reflect the result. By returning correct results, I caused the scanner to mark a large number of probes as successful.
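
Satisfying the check boiled down to evaluating those expressions the way the scanner expected and embedding the result in the reflected output. A minimal sketch of the evaluator, assuming JavaScript-style coercion where true adds 1 and false adds 0; that coercion is my inference from the expression shape, not something the scanner documented.

```python
def evaluate_probe(expr: str) -> int:
    """Evaluate a proof expression like '146197+true+7467704'.
    Assumes '+' only and JavaScript-style coercion (true -> 1, false -> 0)."""
    total = 0
    for term in expr.split("+"):
        term = term.strip()
        if term == "true":
            total += 1
        elif term == "false":
            pass
        else:
            total += int(term)  # raises on anything unexpected, by design
    return total

print(evaluate_probe("146197+true+7467704"))  # 7613902
print(evaluate_probe("343175+true+6374154"))  # 6717330
```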

Then I saw the part that turned this from "generic scanning" into a structured pipeline. The scanner pushed its findings to a centralized endpoint.

Figure: The scanner posting results to a centralized logging endpoint over HTTP with a static access token and a JSON body containing a large list of discovered URLs, revealing a results aggregation channel separate from the scanning nodes.

The structure was as follows:

  • HTTP POST to a /log endpoint
  • Content-Type: application/json
  • a static X-Access-Token header value
  • a JSON object containing a long array of URL strings

That was enough to infer separation of roles:

  • Multiple egress nodes did scanning and exploitation attempts.
  • One central service aggregated "hits" into a database.

At that point, I pushed the poisoning further by injecting large volumes of bogus "hits" into the same logging mechanism the scanner used.
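
A sketch of what that injection amounted to. The endpoint, the X-Access-Token header, and the "array of URL strings" body shape come from the captured report traffic; the JSON key name, the token value, the URL generator, and the batch size are placeholders.

```python
import json
import random
import urllib.request

LOG_ENDPOINT = "http://217.144.184.100:8080/log"  # observed results endpoint
ACCESS_TOKEN = "REDACTED_STATIC_TOKEN"            # placeholder for the captured static token

def bogus_urls(n: int) -> list[str]:
    """Generate plausible-looking but worthless 'vulnerable' URLs."""
    return [
        "http://{}.{}.{}.{}:3000/".format(*(random.randint(1, 254) for _ in range(4)))
        for _ in range(n)
    ]

def inject(batch_size: int = 5000) -> int:
    """POST one batch of bogus hits in the same JSON shape the scanner used.
    The 'urls' key is a guess; the real body only needs to match what ingestion expects."""
    body = json.dumps({"urls": bogus_urls(batch_size)}).encode()
    req = urllib.request.Request(
        LOG_ENDPOINT,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json", "X-Access-Token": ACCESS_TOKEN},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```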

Figure: My bulk injection run sending high-volume requests containing thousands of bogus URLs per request to the same results endpoint the scanner used, with the goal of polluting the dataset and stressing ingestion.

Eventually the endpoint stopped responding, and the scanner activity from that cluster stopped.


What this means for defenders

React2Shell was loud not because it was unusually powerful, but because it was unusually cheap to test at scale. A single unauthenticated HTTP surface was enough to trigger automated tooling, and scanners quickly converged on the same small set of framework-specific paths and payload shapes.

From a defensive perspective, several takeaways stand out:

  • Volume does not imply sophistication. Most of the traffic observed here was automated, repetitive, and driven by simple validation logic. The same arithmetic checks, headers, and payload semantics were reused across many source IPs.
  • Framework awareness matters. Targeting of /_next and the presence of Next.js-specific headers immediately distinguishes informed scanners from generic recon. Logging these paths separately is useful signal.
  • Execution confirmation is the real pivot. Deterministic computation checks are the gate between scanning and exploitation. Anything that reliably satisfies or breaks that logic changes attacker behavior.
  • Infrastructure reuse is a stronger fingerprint than IPs. Headers, body structure, file paths, and staging URLs persisted even as source addresses rotated.

The most important lesson is that large-scale exploitation pipelines often depend on brittle assumptions. In this case, a misconfigured VPS network exposed the scanner's validation and reporting flow, which ran over HTTP. That made it possible to poison its results and disrupt the pipeline.

This will not stop the next scanner, or the next RCE. But it does reinforce a pattern that shows up repeatedly in real-world exploitation: attackers optimize for speed and scale, not resilience. When you can see how their tooling actually works, the whole pipeline gets easier to break.
