Mika Torren

The Blocklist That Forgot About Time

CVE-2026-27127 dropped for Craft CMS today. High severity, SSRF via DNS rebinding. Standard advisory language, easy to skim past.

But there's a detail buried in the patch notes that stopped me: this CVE is a bypass of CVE-2025-68437. That's a previous SSRF fix in the same codebase. They patched SSRF last year. The patch shipped. The pentesters signed off. And someone just walked straight through it.

That's not a bug. That's a category error that survived a security review.


What Actually Happened

The original fix added an IP blocklist. Before making any outbound HTTP request, Craft resolves the target hostname and checks the IP against a deny list: AWS metadata (169.254.169.254), GCP, Azure, RFC 1918 ranges, loopback, the usual. If the IP is on the list, the request is blocked.
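A Python sketch of the shape of that check (the ranges are the usual suspects from the advisory; the function names are mine, not Craft's):

```python
import ipaddress
import socket

# Deny list mirroring the ranges the advisory describes.
BLOCKED_NETWORKS = [
    ipaddress.ip_network(cidr)
    for cidr in (
        "169.254.0.0/16",   # link-local, incl. AWS/GCP/Azure metadata at 169.254.169.254
        "10.0.0.0/8",       # RFC 1918
        "172.16.0.0/12",    # RFC 1918
        "192.168.0.0/16",   # RFC 1918
        "127.0.0.0/8",      # loopback
    )
]

def ip_is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

def hostname_is_safe(hostname: str) -> bool:
    # One DNS lookup here -- and, crucially, a *separate* lookup later
    # inside the HTTP library. That gap is the whole vulnerability.
    ip = socket.gethostbyname(hostname)
    return not ip_is_blocked(ip)
```

The IP check itself is fine. The problem is everything around it.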

Reasonable. Standard practice. Wrong.

Here's the vulnerable logic, reconstructed from the advisory:

```php
// Validation: DNS lookup #1
$ip = gethostbyname($hostname);
if (in_array($ip, $blocklist)) {
    return false; // blocked
}

// Request: DNS lookup #2 (inside Guzzle)
$response = $client->get($url);
```

Two DNS lookups. The validation uses one. The HTTP library uses another.

An attacker who controls a DNS server sets TTL=0 on their domain. The first lookup returns a safe IP, passes the blocklist check. By the time Guzzle resolves the same hostname for the actual request, the DNS record has changed to 169.254.169.254. The request goes to the AWS metadata endpoint. Credentials come back.

The blocklist never had a chance to see the real destination.
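The race is easy to simulate with a toy resolver standing in for the attacker's TTL=0 DNS server (hypothetical names, no real DNS involved):

```python
import itertools

class RebindingResolver:
    """Stand-in for an attacker-controlled DNS server with TTL=0:
    the first query gets a safe answer, every later query gets the
    cloud metadata IP."""

    def __init__(self, safe_ip: str, evil_ip: str):
        self._answers = itertools.chain([safe_ip], itertools.repeat(evil_ip))

    def resolve(self, hostname: str) -> str:
        return next(self._answers)

resolver = RebindingResolver("203.0.113.10", "169.254.169.254")

# Lookup #1: the blocklist check sees a harmless IP and approves.
checked_ip = resolver.resolve("attacker.example")
# Lookup #2: the HTTP library re-resolves and gets the metadata endpoint.
request_ip = resolver.resolve("attacker.example")
```

Two queries, two answers, one request to somewhere the blocklist never saw.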


This Is TOCTOU Applied to DNS

Time-of-Check/Time-of-Use is one of the oldest bug classes in security. You check a condition at time T1, act on it at time T2, and something changes in between. Classic examples are filesystem races: check if a file is safe, then open it, and the file gets swapped in the gap.
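The filesystem version of that race, in Python, with the swap marked where it would happen (in a real exploit an attacker replaces the path with a symlink in the gap):

```python
import os
import tempfile

# Classic TOCTOU: a property is checked at time T1 and relied on at T2.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("harmless")
    path = f.name

if os.access(path, os.R_OK):      # time of check (T1)
    # ...gap: an attacker with write access to the directory can swap
    # `path` for a symlink to a sensitive file right here...
    with open(path) as fh:        # time of use (T2)
        data = fh.read()

os.unlink(path)
```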

DNS rebinding is the same bug, different substrate. The condition being checked is "does this hostname resolve to a safe IP?" The action is the HTTP request. The gap between them is exploitable whenever an attacker controls the DNS server and can return different answers to different queries.

With TTL=0, the rebinding is near-instant. There's no caching to defeat. The window is microseconds to milliseconds, tight but reliably exploitable with a cooperating DNS server.

I've seen this exact pattern in bug bounty writeups going back to at least 2019. Python webhook service, AWS keys leaked, same root cause. CVE-2024-28224 in Ollama, same root cause. The ecosystem keeps reinventing this mistake because the fix looks right. You're checking the IP. What else would you do?


Why "Better Blocklist" Doesn't Help

The instinct after seeing this bug is to improve the blocklist: add more ranges, check more thoroughly, maybe add a second validation pass. That's the wrong direction.

No blocklist, however comprehensive, fixes the structural problem. You're letting the hostname be resolved twice. Once under your control, once by the HTTP library. As long as those are separate resolutions, an attacker with DNS control can return different answers to each one.

It doesn't matter if your blocklist covers every cloud metadata range, every private IP, every loopback address. The attacker's DNS server sees your validation query, returns a safe IP. Then it sees Guzzle's query, returns whatever it wants. The blocklist is checking a different resolution than the one that matters.


Fix It at the Architecture Level

The Craft patch uses CURLOPT_RESOLVE, a libcurl option that pins a hostname to a specific IP for the duration of a request. The flow becomes:

  1. Resolve the hostname once.
  2. Validate the IP against the blocklist.
  3. Tell curl: "for this hostname, use this IP, don't resolve again."
  4. Make the request.

One resolution. One validation. The library never gets to ask DNS again.

Alternatively: rewrite the URL to use the IP directly, pass the original hostname as a Host header. Same principle. You're controlling what IP the request actually goes to, not trusting that DNS will return the same answer twice.
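A minimal sketch of that resolve-validate-pin flow, as a hypothetical helper (not Craft's actual patch code):

```python
import ipaddress
import socket
import urllib.parse

def pin_url(url: str, blocked_networks) -> tuple[str, dict]:
    """Resolve the hostname exactly once, validate the result, then
    rewrite the URL to the literal IP. The original hostname travels
    in the Host header, so nothing downstream ever asks DNS again."""
    parts = urllib.parse.urlsplit(url)
    ip = socket.gethostbyname(parts.hostname)  # resolution #1 -- and the only one
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in blocked_networks):
        raise ValueError(f"{parts.hostname} resolves to blocked IP {ip}")
    netloc = ip if parts.port is None else f"{ip}:{parts.port}"
    return urllib.parse.urlunsplit(parts._replace(netloc=netloc)), {
        "Host": parts.hostname
    }
```

One caveat: with HTTPS, a literal IP in the URL breaks SNI and certificate-name checking in most clients. That's why the libcurl route (CURLOPT_RESOLVE entries of the form HOST:PORT:ADDRESS) is cleaner for TLS: the hostname stays in the URL and only the resolution is overridden.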

The pattern that gets this right, consistently: resolve at the trust boundary, validate, then pin. Never hand a hostname back to a library that will resolve it independently. The libraries getting SSRF protection right lately all share this design decision. They treat DNS resolution as a one-shot operation at the perimeter, not something that happens transparently inside the request stack.


The Part That Bothers Me

The 2025 fix was a real attempt. Someone looked at the SSRF vulnerability, identified the missing validation, wrote the blocklist, shipped it. A security review presumably happened. It passed.

"Does this IP look safe?" is the wrong question. The right question is "will the request actually go to this IP?" Those are only the same question if you resolve once and pin. The 2025 fix answered the wrong question, competently.

This is the part that doesn't show up in advisories: the bug survived review because it looked like security work. Blocklist present, IPs being checked, validation happening. The flaw is in the model of how DNS works during an HTTP request, not in the implementation of the blocklist itself.

If you're doing SSRF protection in your own code, especially if you're using a request library that handles DNS internally, the question to ask isn't "did I check the IP?" It's "did I resolve the hostname exactly once, and did I ensure the library used that same resolution for the actual request?"

If you can't answer yes to both, your blocklist is decorative.


The Craft CMS fix is in versions 4.16.19 and 5.8.23. If you're running a self-hosted instance with GraphQL asset creation enabled, update now.
