If you've spent any time working on networking tools or dealing with restrictive network environments, you know that Deep Packet Inspection (DPI) is one of those things that sounds reasonable in theory but creates real headaches in practice. Recently, a Rust project called MasterHttpRelayVPN-RUST showed up on GitHub trending, and it caught my eye for a few reasons.
Let's dig into what it does, how the architecture works, and what makes it interesting from a Rust development perspective.
## What Is This Thing?
MasterHttpRelayVPN-RUST is a Rust port of @masterking32's original Python implementation. The core idea is straightforward: it uses a Google Apps Script deployment as a relay to tunnel your traffic, while applying TLS SNI concealment to bypass DPI filtering.
The project supports both HTTP and SOCKS5 proxy protocols, ships with a CLI and a cross-platform desktop UI, and claims zero runtime dependencies. That last part is a big deal — no Python interpreter, no Node.js, no JVM. Just a single binary.
Full credit for the original concept goes to @masterking32. The Rust port by @therealaleph focuses on performance and portability.
## How the DPI Bypass Works
For those unfamiliar, DPI is a technique where network equipment inspects packet contents beyond just headers. It's used by ISPs and network administrators to identify and filter specific types of traffic.
The approach here is clever. At a high level:

1. **Google Apps Script as a relay** — Your traffic gets routed through a Google Apps Script deployment, which acts as an intermediary. Since Google's infrastructure is whitelisted on most networks, the traffic appears as normal HTTPS requests to Google services.
2. **TLS SNI concealment** — The Server Name Indication (SNI) field in TLS handshakes is one of the main things DPI looks at to determine what you're connecting to. By concealing or manipulating this field, the tool makes it harder for DPI to categorize the traffic.
3. **Local proxy** — The tool runs a local HTTP or SOCKS5 proxy on your machine. You point your browser or application at this proxy, and it handles the rest.
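To make the local-proxy step concrete, here's a minimal std-only Rust sketch (not taken from the project) that parses the first message a SOCKS5 client sends when it connects, per RFC 1928:

```rust
// Parse a SOCKS5 client greeting (RFC 1928): VER | NMETHODS | METHODS...
// Returns the list of authentication methods the client offers, or None
// if the bytes aren't a valid SOCKS5 greeting.
fn parse_socks5_greeting(buf: &[u8]) -> Option<Vec<u8>> {
    if buf.len() < 2 || buf[0] != 0x05 {
        return None; // not SOCKS version 5
    }
    let n = buf[1] as usize;
    if buf.len() < 2 + n {
        return None; // truncated method list
    }
    Some(buf[2..2 + n].to_vec())
}

fn main() {
    // A typical greeting: version 5, one method, "no authentication" (0x00)
    let methods = parse_socks5_greeting(&[0x05, 0x01, 0x00]).unwrap();
    println!("client offers {} auth method(s)", methods.len());
}
```

A real proxy would read these bytes off an accepted `TcpStream`, reply with the chosen method, and then handle the CONNECT request; the parsing shape stays the same.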
Here's roughly what the flow looks like in pseudocode:
```rust
// Simplified concept of the relay flow; `Request`, `Response`, and the
// helper functions are illustrative stand-ins, not real APIs
async fn handle_client_request(request: Request) -> Result<Response> {
    // Client connects to the local SOCKS5/HTTP proxy
    let target_url = extract_destination(&request);

    // Instead of connecting directly (which DPI would inspect),
    // route through the Google Apps Script relay
    let relay_payload = encode_relay_request(target_url, request.body());

    // The outgoing request looks like a normal Google API call
    let relay_response = https_client
        .post("https://script.google.com/macros/s/YOUR_DEPLOY_ID/exec")
        .body(relay_payload)
        .send()
        .await?;

    decode_relay_response(relay_response)
}
```
This is a simplified illustration — the actual implementation handles connection pooling, error recovery, and the TLS SNI manipulation at a lower level.
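For illustration, here's what an `encode_relay_request` helper could look like if the wire format were a simple JSON envelope. The field names (`url`, `method`, `body`) are assumptions for this sketch, not the project's actual protocol:

```rust
// Hypothetical relay payload encoding: wraps the destination and body in
// a JSON envelope. The field names are assumed, not the project's format.
fn encode_relay_request(url: &str, method: &str, body: &str) -> String {
    // Naive JSON string escaping, sufficient for this illustration only
    fn esc(s: &str) -> String {
        s.replace('\\', "\\\\").replace('"', "\\\"")
    }
    format!(
        "{{\"url\":\"{}\",\"method\":\"{}\",\"body\":\"{}\"}}",
        esc(url),
        esc(method),
        esc(body)
    )
}

fn main() {
    let payload = encode_relay_request("https://example.com/api", "GET", "");
    // prints {"url":"https://example.com/api","method":"GET","body":""}
    println!("{payload}");
}
```

In practice you'd use `serde_json` rather than hand-rolled escaping; the point is only that the relay request is an opaque blob that looks like any other POST to a Google endpoint.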
## Why Rust Makes Sense Here
The original implementation was in Python, which is great for prototyping but comes with tradeoffs for a networking tool like this:
- **No runtime dependency** — A compiled Rust binary just works. No `pip install`, no virtualenv, no "which Python version do I need?" For a tool that people in restrictive network environments need to get running quickly, this matters a lot.
- **Performance** — Async I/O in Rust (via tokio or similar) handles concurrent proxy connections efficiently, without Python's GIL bottleneck.
- **Cross-compilation** — Rust's cross-compilation story is solid. Building for Windows, macOS, and Linux from a single codebase is relatively painless.
- **Memory safety** — For a networking tool handling potentially untrusted data, Rust's ownership model provides guarantees that you'd have to enforce manually in C/C++.
If you're building networking tools and haven't tried Rust yet, projects like this are a great case study.
## Setting It Up
Based on the project description, the setup involves two parts: deploying the Google Apps Script relay and running the local client.
The Google Apps Script side would look something like:
```javascript
// Google Apps Script relay (deployed as a web app)
function doPost(e) {
  // Receive the encoded request from the Rust client
  var payload = JSON.parse(e.postData.contents);

  // Forward the actual request to the target server
  var response = UrlFetchApp.fetch(payload.url, {
    method: payload.method,
    headers: payload.headers,
    payload: payload.body,
    muteHttpExceptions: true
  });

  // Return the response back through Google's infrastructure
  return ContentService
    .createTextOutput(JSON.stringify({
      status: response.getResponseCode(),
      body: Utilities.base64Encode(response.getContent())
    }))
    .setMimeType(ContentService.MimeType.JSON);
}
```
Then on the client side, you'd configure the Rust binary with your Apps Script deployment URL and start the local proxy. The project offers both a CLI for headless/server usage and a desktop GUI for everyday use.
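As a sketch of what that client startup might involve — note that the env var name and listen address here are hypothetical, so check the project's README for the real interface:

```rust
use std::net::TcpListener;

// Hypothetical startup: read the Apps Script deployment URL from an env
// var (name assumed) and bind the local proxy listener. Port 0 lets the
// OS pick a free port; a real client would default to something like
// 1080, the conventional SOCKS port.
fn start_local_proxy() -> std::io::Result<TcpListener> {
    let relay_url = std::env::var("RELAY_URL").unwrap_or_else(|_| {
        "https://script.google.com/macros/s/YOUR_DEPLOY_ID/exec".into()
    });
    let listener = TcpListener::bind("127.0.0.1:0")?;
    println!("relaying via {relay_url}");
    println!("listening on {}", listener.local_addr()?);
    Ok(listener)
}

fn main() -> std::io::Result<()> {
    start_local_proxy().map(|_| ())
}
```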
## Practical Considerations
Before you go deploying this everywhere, some things to keep in mind:
- **Google Apps Script quotas** — Google imposes daily quotas on Apps Script executions. For light browsing this is probably fine, but heavy usage could hit limits. Last I checked, free accounts get around 20,000 URL fetch calls per day.
- **Latency** — Adding a relay hop through Google's infrastructure adds latency. This isn't a tool for gaming or video streaming; it's for getting access to resources when direct connections are blocked.
- **Trust model** — You're routing traffic through Google's servers. Depending on your threat model, that may or may not be acceptable. Know your requirements.
- **Terms of Service** — Using Google Apps Script as a proxy relay is... creative. I haven't dug into whether it violates Google's ToS, but it's worth investigating before relying on this for anything critical.
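On the quota point, 20,000 fetches per day sounds generous, but a quick back-of-envelope check shows the budget is tighter than it looks:

```rust
// Back-of-envelope check on the quota figure mentioned above:
// how often can you make a relayed request if usage is spread evenly?
fn sustainable_interval_secs(daily_quota: u32) -> f64 {
    86_400.0 / daily_quota as f64
}

fn main() {
    // 20,000 calls/day works out to one request every 4.32 seconds
    println!("{:.2}", sustainable_interval_secs(20_000)); // prints 4.32
}
```

A modern web page can easily fire dozens of requests on load, so sustained browsing chews through the quota faster than the raw number suggests.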
## The Bigger Picture
What I find interesting about projects like this isn't just the technical implementation — it's the cat-and-mouse dynamic between DPI systems and circumvention tools. Each side keeps evolving.
From a developer perspective, the Rust networking ecosystem has matured significantly. Between tokio, hyper, rustls, and reqwest, you can build production-quality networking tools without reinventing the wheel. This project is a good example of leveraging that ecosystem.
If you're interested in network privacy more broadly, the same mindset applies to other areas of your stack. Privacy-focused options like Umami or Plausible give you full data ownership for analytics without feeding user data to third parties. It's the same principle — keeping control of your traffic and data.
## Should You Use This?
Honestly, it depends on your situation. If you're a developer in a network environment where legitimate resources (documentation, package registries, GitHub) are blocked by overzealous DPI, a tool like this could be genuinely useful.
If you're just curious about Rust networking patterns, the source code is worth reading regardless. Proxy implementations, async I/O patterns, and TLS handling are all transferable knowledge.
```toml
# If you want to explore the project
[dependencies]
# The project claims no runtime deps, but the Rust crate
# dependencies would typically include:
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.11", features = ["rustls-tls"] }
clap = { version = "4", features = ["derive"] } # for CLI arg parsing
```
Note: I haven't verified the actual Cargo.toml — these are educated guesses based on the described functionality. Check the actual repository for accurate dependency information.
The project is open source, so go poke around. Whether you use it as a tool or just study it as a Rust networking reference, there's something to learn here.