Russia Built a National Firewall — And We Mapped Every Hole In It
How researchers scanned 46 million IP addresses and found the architecture of modern internet censorship
At Gerus-lab, we build systems that need to work globally. Whether it's a Web3 protocol running across jurisdictions or a SaaS platform serving users in 20+ countries, we think deeply about network architecture and infrastructure resilience. So when a fascinating technical research paper dropped on the Russian developer community this week — mapping the complete architecture of Russia's internet whitelist system — we couldn't stop reading.
This is a technical deep-dive. And honestly, it's one of the most detailed analyses of stateful internet censorship we've seen published publicly.
From Blacklists to Whitelists: The Architecture Shift Nobody Talks About
Most people understand internet censorship as a blacklist: "block X, Y, Z, allow everything else." Russia's system has evolved far beyond this.
In 2024-2025, Russia's TSPU (Technical Means of Countering Threats — their DPI infrastructure) switched to whitelist mode in certain regions. The logic flips completely:
Block everything. Allow only what's explicitly approved.
If you can open VK but not Google, that's whitelist filtering in action. You have physical signal. Bytes flow. The TSPU just decides which bytes to let through.
This isn't theoretical. Researchers from the OpenLibre community ran masscan across all 80,600 Russian IP subnets, probing from a mobile Megafon connection (which operates in drop-all mode). Their findings are public. The code is on GitHub. And the architecture they uncovered is a masterclass in layered censorship — and its inherent limitations.
The Two-Layer Architecture: L3 + L7
The Russian whitelist operates simultaneously on two network layers. Understanding this is critical for any developer building globally resilient systems.
Layer 3: IP-level filtering
The first filter is pure routing. Packets to non-whitelisted IPs (Google, Telegram, Twitter) experience 100% packet loss — ICMP, TCP:80, TCP:443, arbitrary UDP. The packets don't leave the operator's network. The router at hop 2 simply discards anything destined for a non-approved IP.
For whitelisted IPs, the picture is more nuanced:
| Protocol/Port | Status |
|---|---|
| ICMP | ✅ OK |
| TCP:80 | ✅ OK |
| TCP:443 | ✅ OK |
| UDP:443 (QUIC) | ❌ NO_RESP |
| UDP:53 | ❌ NO_RESP |
| UDP:51820 (WireGuard) | ❌ NO_RESP |
TCP 80/443/22 work. Everything else is dropped. UDP is nearly entirely blocked — including QUIC, DNS, and WireGuard on standard ports. For developers: any UDP-based protocol is effectively dead in whitelist zones.
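The per-protocol table above can be reproduced with nothing more than stdlib sockets. A minimal sketch, ours rather than the researchers' actual tooling; the function names are our own, and it must be run from inside the network under test, since from an unfiltered connection everything simply answers:

```python
import socket

def probe_tcp(host: str, port: int, timeout: float = 3.0) -> str:
    """'OK' if a plain TCP handshake completes, 'NO_RESP' otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK"
    except OSError:  # covers refused, reset, and timed-out connects
        return "NO_RESP"

def probe_udp(host: str, port: int, timeout: float = 3.0) -> str:
    """Send a small datagram and wait for any reply. Silence is
    indistinguishable from a drop, which is exactly the behaviour
    the table above describes for UDP in whitelist zones."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.sendto(b"\x00", (host, port))
            s.recvfrom(1024)
            return "OK"
        except OSError:  # timeout, or ICMP port-unreachable bounced back
            return "NO_RESP"
```

From a whitelist-zone connection, `probe_tcp("ya.ru", 443)` should return `OK` while `probe_udp("ya.ru", 443)` hangs for the full timeout and comes back `NO_RESP`.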
Layer 7: SNI inspection
The second filter operates deeper. If the destination IP is whitelisted, the TSPU inspects the SNI field in TLS ClientHello. This is where it gets fascinating:
Researchers tested various SNIs against whitelisted Yandex IPs:
| SNI | Result |
|---|---|
| ya.ru | ✅ TLS handshake OK |
| google.com | ✅ TLS handshake OK (!) |
| twitter.com | ❌ Connection reset |
| telegram.com | ✅ TLS handshake OK |
Why is google.com passing through at the SNI level when it's blocked by IP? The researchers' theory: adding google.com to the SNI blacklist would break thousands of legitimate services that use Google APIs through their own IPs. The SNI blacklist targets only clearly "dangerous" domains — Twitter, specific blocked resources. The inconsistency reveals the pragmatic limits of operating a national firewall.
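This kind of test is easy to reproduce with Python's `ssl` module: open a TCP connection to a fixed IP, then present whatever SNI you like in the ClientHello. A sketch under our own naming (`probe_sni` and the result labels are ours, not from the research code); certificate validation is deliberately disabled because the question is whether the handshake is *allowed*, not whether the certificate matches:

```python
import socket
import ssl

def probe_sni(ip: str, sni: str, port: int = 443, timeout: float = 5.0) -> str:
    """Open TCP to a fixed IP, then send a TLS ClientHello carrying an
    SNI of our choosing. A reset here, on an IP known to be whitelisted,
    points at L7 filtering keyed on the server name, not the address."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # probing reachability, not identity
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((ip, port), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=sni):
                return "TLS_OK"
    except ConnectionResetError:
        return "RESET"           # the synthetic-RST signature
    except socket.timeout:
        return "TIMEOUT"
    except (ssl.SSLError, OSError):
        return "ERROR"
```

From inside a whitelist zone, `probe_sni(yandex_ip, "twitter.com")` should reproduce the reset from the table, while the same call with `"ya.ru"` should complete.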
The inconsistency goes deeper. The same SNI behaves differently depending on which whitelisted IP you route through:
| SNI | via Yandex | via VK | via MAX |
|---|---|---|---|
| telegram.org | ✅ OK | ❌ FAIL | ✅ OK |
| twitter.com | ✅ OK | ❌ FAIL | ❌ FAIL |
Different rules on different ASNs. Either TSPU configurations diverge per ASN, or some nodes haven't been updated. There is no single unified SNI blacklist. The system is inconsistent by design — or by neglect.
Where the TSPU Physically Sits (TTL Analysis)
This is the clever bit. Researchers used TTL values to locate the TSPU hardware:
- Response from ya.ru: TTL 53 — 11 hops, a real server far away
- "Response" from Telegram: TTL 62 — 2 hops
Two hops means the response didn't come from Telegram's servers. Something sitting immediately after the operator's first router generated it. The physical topology:
[You] → [Operator router] → [TSPU] → [Internet]
The TSPU is a transparent in-line device that intercepts traffic and generates synthetic RST/DROP responses. This is standard DPI architecture, but seeing it confirmed through TTL analysis is elegant.
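The arithmetic behind the TTL trick is simple enough to sketch. Operating systems send packets with a standard initial TTL (usually 64, 128, or 255) and every router on the path decrements it by one, so the nearest standard value at or above the observed TTL gives an estimate of hop distance:

```python
# Standard initial TTLs: 64 (Linux/macOS), 128 (Windows), 255 (network gear).
INITIAL_TTLS = (64, 128, 255)

def estimate_hops(observed_ttl: int) -> int:
    """Hop distance, assuming the sender started from the nearest
    standard initial TTL at or above the observed value."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(estimate_hops(53))  # 11: ya.ru, a genuinely distant server
print(estimate_hops(62))  # 2: the "Telegram" reply, forged nearby
```

The assumption of a standard initial TTL can be wrong for exotic stacks, but for distinguishing 2 hops from 11 it is more than enough.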
The Scan: 46 Million IPs, 63,000 Survivors
The key methodology: if TSPU drops packets to blocked IPs and passes whitelisted ones, you can scan all Russian subnets from a whitelisted mobile connection and build the complete whitelist by seeing what responds.
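At toy scale, the idea looks like the sketch below. The real scan used masscan at far higher packet rates; a thread pool of plain TCP connects is the same logic, just slow, and the function names here are ours:

```python
import ipaddress
import socket
from concurrent.futures import ThreadPoolExecutor

def responds_on_443(ip: str, timeout: float = 1.0) -> bool:
    """From inside a whitelist network, a completed TCP handshake on
    443 implies the IP is approved; a silent drop implies it is not."""
    try:
        with socket.create_connection((ip, 443), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(cidr: str, workers: int = 64) -> list[str]:
    """Probe every host in a subnet and return the IPs that answered."""
    hosts = [str(h) for h in ipaddress.ip_network(cidr).hosts()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        alive = pool.map(responds_on_443, hosts)
    return [ip for ip, ok in zip(hosts, alive) if ok]
```

Run over every Russian subnet from a whitelisted vantage point, the set of responders *is* the whitelist.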
Results from scanning all 46 million Russian IPs:
- 63,126 IPs in the whitelist
- 2,557 unique ASNs represented
- 0.14% of Russian address space is accessible through mobile DPI
Top organizations by whitelisted IPs:
| # | ASN | Organization | IPs |
|---|---|---|---|
| 1 | AS200350 | Yandex.Cloud | 12,906 |
| 2 | AS9123 | Timeweb | 2,904 |
| 3 | AS12389 | Rostelecom | 2,312 |
| 4 | AS47764 | VK | 1,958 |
Yandex Cloud holds 1 in 5 whitelisted IPs. Why? Because half the state-dependent services run there. Removing Yandex from the whitelist would collapse too much. This is the principal bypass vector — and it's not a bug, it's a consequence of centralizing critical infrastructure.
What This Means for Global Infrastructure Design
At Gerus-lab, we've worked on projects where geographic resilience was a hard requirement — Web3 protocols, AI services, distributed SaaS. Here's what we take from this research:
1. UDP is not reliable in censored networks
If you're building for global reach, your primary transport cannot depend on QUIC, WireGuard on standard ports, or custom UDP protocols. TCP/443 is the only reliable path through deep-packet filtering.
2. IP reputation matters, ASN matters more
Being hosted on AS200350 (Yandex.Cloud) gives you whitelist coverage that a random VPS doesn't. For applications needing to reach users in restricted regions, infrastructure geography and ASN selection are product decisions, not just ops decisions.
3. SNI inconsistency creates exploitable gaps
The fact that the same SNI passes on one network path and fails on another reveals that complex L7 filtering is operationally difficult to maintain consistently. This creates windows — but also means you can't rely on them.
4. Transparent proxies leave TTL fingerprints
Any network device that generates synthetic responses (TSPU, transparent proxies, corporate firewalls) will reveal itself through TTL anomalies. Build monitoring that tracks TTL alongside latency — it tells you whether you're talking to the real server.
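Assuming you can read the TTL of replies (via `IP_RECVTTL`, a raw socket, or whatever probing tool you already run), the check itself is a few lines. `ttl_anomaly`, its threshold, and the baseline value are our illustration, not an established API:

```python
STANDARD_INITIAL_TTLS = (64, 128, 255)

def ttl_anomaly(baseline_hops: int, observed_ttl: int, slack: int = 3) -> bool:
    """Flag a reply that arrived from far fewer hops than the real
    server's known baseline: the fingerprint of an in-line device
    answering on the server's behalf."""
    initial = min(t for t in STANDARD_INITIAL_TTLS if t >= observed_ttl)
    observed_hops = initial - observed_ttl
    return observed_hops < baseline_hops - slack

# Baseline learned on a known-clean path: 11 hops to the real server.
print(ttl_anomaly(11, 53))  # False: consistent with the baseline
print(ttl_anomaly(11, 62))  # True: 2 hops, something is answering in-line
```

Learn the baseline when you trust the path, then alert on the delta.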
5. Censorship infrastructure is fragile at scale
0.14% of IPs accessible. Maximum subnet density of 37.5%. The whitelist is full of holes not because the architects are incompetent, but because modern internet services are too interconnected to cleanly separate. Google's CDN IPs serve legitimate Russian government sites. Blocking them breaks too much. The firewall is structurally brittle.
The Broader Pattern: When Infrastructure Becomes Policy
What makes this research remarkable isn't just the technical methodology — it's what it reveals about the economics of control. Building a complete internet whitelist for a country of 145 million people means:
- Constant maintenance as IP ranges change
- Operational inconsistency across equipment and ASNs
- Unintended collateral blocking of legitimate services
- An arms race with bypass methods (serverless functions, WebRTC tunnels, white-IP VPS)
The researchers mention current bypass vectors: serverless functions hosted on whitelisted cloud providers, WebRTC-based tunnels (which operate over TCP:443), and purchasing VPS instances with approved IPs. Each method works because the whitelist has structural dependencies it can't remove.
We're watching the internet bifurcate in real time. For developers building globally, that means treating network resilience as a first-class architectural concern — not an afterthought.
What We're Doing About It at Gerus-lab
Network-aware architecture is increasingly part of what we build for clients at Gerus-lab. When we design distributed systems, we think explicitly about:
- Transport layer choices: TCP/443 as the universal fallback
- CDN and hosting geography: where your ASN sits matters
- Health monitoring: TTL-aware checks that detect transparent interception
- Compliance architecture: separating data planes for different regulatory jurisdictions
If you're building infrastructure that needs to work globally — or you're thinking through what network resilience looks like for your product — we'd like to talk.
The full research data and scanning code are available at openlibrecommunity/twl and are updated weekly. It's some of the most rigorous public work on censorship infrastructure we've seen. Read it.
At Gerus-lab, we build Web3, AI, and distributed systems for teams that need to work globally. We've shipped 14+ production systems across DeFi, GameFi, SaaS, and automation. See our work →
Have thoughts on network censorship architecture or global infrastructure design? Drop them in the comments. This is the kind of problem that gets more interesting the deeper you go.