A common problem with a familiar shape: a process can dial outbound to the internet, but nothing on the internet can dial it back. Your dev server on a laptop. A service in a private VPC. A homelab app behind your router. A container in a pod with no ingress. Same shape every time — outbound works, inbound doesn't.
rift is a small Go binary I built to solve that. Run it as a server on a VPS you own, run it as a client wherever the private service lives, and the service becomes reachable from the public internet over HTTPS. Same shape as ngrok, frp, or bore — different transport underneath.
venkatkrishna07/rift — self-hosted ngrok alternative built on QUIC.

A self-hosted tunnel for local development. One binary, one VPS, no accounts. Expose localhost to the internet over a single QUIC connection — on infrastructure you fully own. Built for sharing dev servers, testing webhooks, and demoing work in progress.

localhost:3000 ──── QUIC ────▶ https://myapp.tunnel.example.com
localhost:5432 ──── QUIC ────▶ tunnel.example.com:10247
localhost:9090 ──── QUIC ────▶ mcp.example.com (MCP server)
This post walks through how it's put together and the design decisions that shaped it.
What it does
One binary, two roles. Run it as a server on a VPS, run it as a client wherever the private service lives:
rift client --server tunnel.example.com --expose 3000:http:myapp
# → tunnel ready https://myapp.tunnel.example.com
That's the entire user-facing surface for a single HTTP tunnel. Multiple tunnels work the same way — pass --expose more than once. TCP tunnels swap http for tcp and get a public port instead of a subdomain.
As long as the rift client can dial the rift server outbound, the server can route traffic back. No inbound firewall rules, no port forwarding, no public IP on the client side. No accounts, no hosted control plane, no telemetry. The server is yours, the data path is yours, the tokens are yours.
Architecture
visitor              rift server                 rift client         private app
───────              ───────────                 ───────────         ───────────
   │  HTTPS request       │                           │                   │
   ├─────────────────────►│ :443 (HTTPS)              │                   │
   │                      │  · subdomain routing      │                   │
   │                      │  · TLS termination        │                   │
   │                      │◄═══ QUIC connection ═════►│ (one per client)  │
   │                      │                           │                   │
   │                      │── open new QUIC stream ──►│── TCP dial ──────►│
   │                      │                           │      :3000        │
   │                      │◄──────── stream ──────────│◄──── response ────│
   │◄── HTTPS response ───│                           │                   │
The client opens one QUIC connection to the server and authenticates with a bearer token. Each --expose flag registers a tunnel — the server assigns either a subdomain (HTTP) or a port from a configurable range (TCP) and stores the mapping.
When a visitor hits the public URL, the server figures out which tunnel they're trying to reach (subdomain for HTTP, port for TCP), opens a fresh QUIC stream to the right client, and forwards the request along it. The client receives the stream, dials the local port, and pipes bytes both ways. The response flows back along the same stream. When the request is done, the stream closes. The QUIC connection itself stays up.
So the server is doing three things at once: terminating public TLS, multiplexing per-request streams onto a long-lived QUIC connection, and tracking which tunnel belongs to which client.
Why QUIC
Most self-hosted tunnels run on TCP. QUIC gave rift three properties that matter for this kind of tool:
Stream isolation. QUIC carries multiple independent streams inside one connection, each with its own ordering and reliability state. A lost packet on one tunnel doesn't stall the others. On TCP this is impossible at the application layer — the kernel guarantees in-order delivery for the whole connection, so a hiccup on the Postgres tunnel will freeze the HTTP tunnel until retransmission catches up.
Connection migration. A QUIC connection is identified by a connection ID inside each packet, not by the four-tuple of IP and ports. Switch from Wi-Fi to a hotspot, toggle a VPN, change networks mid-session — the connection survives. The client doesn't reconnect, doesn't re-authenticate, doesn't drop tunnels. This was the property I personally cared about most while developing on the move.
TLS in the handshake. QUIC's handshake is TLS 1.3. There's no separate "now negotiate TLS" round trip after the transport comes up. Encrypted from the first byte, fewer round trips to first useful data.
Rift uses quic-go for the QUIC implementation. The server listens on UDP/443 for QUIC and TCP/443 for HTTPS on the same address — useful because public visitors arrive over plain HTTPS while clients connect over QUIC.
Auth: tokens and the admin API
Authentication is a single bearer token per client. Tokens default to a 1-hour TTL — when a token expires, the connected client is disconnected. You can set --token-ttl 0 for tokens that never expire if you'd rather manage rotation yourself.
There are two ways to provision a token. Offline, against a stopped server with direct access to the BadgerDB data directory:
rift server --db /var/lib/rift/db --add-token alice
# → rift_4a7f...
Online, through a loopback-only admin endpoint:
curl -X POST \
-H "Authorization: Bearer $RIFT_ADMIN_SECRET" \
"http://localhost/_admin/tokens?name=alice&ttl=168h"
The /_admin/tokens endpoint deliberately binds only to 127.0.0.1 and ::1, and rate-limits at 5 req/min/IP. To provision a token from elsewhere, you SSH in first. The motivation is plain: the admin endpoint is a token factory, and a token factory exposed on the public internet is a bad idea no matter how good the auth is. Loopback-only removes that whole class of risk.
One QUIC-specific detail in the auth path: tokens are never accepted in 0-RTT data. QUIC allows clients to send application data inside the very first handshake packet using cached crypto state from a previous session, which is great for latency but also means an on-path attacker can replay that packet. Auth happens after the full 1-RTT handshake completes, so a captured handshake can't be replayed to log in as someone else.
The token store itself is BadgerDB — chosen because it's an embedded key-value store with no separate process to run, no network port to secure, and good enough performance that token lookups are not the bottleneck.
HTTP routing and automatic TLS
HTTP tunnels are routed by subdomain. A tunnel registered as myapp becomes myapp.tunnel.example.com. This requires a wildcard DNS record (*.tunnel.example.com → <server-ip>) and a wildcard TLS certificate.
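Subdomain routing reduces to matching the Host header against the base domain. A sketch of that lookup, assuming one label per tunnel — rift's actual matcher may differ:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// tunnelName resolves an incoming Host header against the base domain,
// e.g. "myapp.tunnel.example.com" with base "tunnel.example.com" -> "myapp".
func tunnelName(host, base string) (string, bool) {
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h // strip ":443" etc. if the Host header carried a port
	}
	suffix := "." + base
	if !strings.HasSuffix(host, suffix) {
		return "", false
	}
	name := strings.TrimSuffix(host, suffix)
	// Reject the bare base domain and nested subdomains.
	if name == "" || strings.Contains(name, ".") {
		return "", false
	}
	return name, true
}

func main() {
	name, ok := tunnelName("myapp.tunnel.example.com:443", "tunnel.example.com")
	fmt.Println(name, ok) // → myapp true
}
```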
Rift handles the certificate piece automatically via Let's Encrypt. If you already have a wildcard cert from somewhere else, --cert and --key skip the ACME flow entirely.
WebSockets work over the same HTTP tunnels with no extra configuration — the server detects the upgrade and lets the connection through transparently. This was important to get right because a lot of the things people actually tunnel for (real-time previews, hot-reload, dev servers with HMR, self-hosted apps with live UIs) depend on WebSockets working without ceremony.
TCP tunnels
TCP tunnels skip the HTTP layer entirely. The server allocates a port from a configurable range (--tcp-port-min to --tcp-port-max, defaults 10000–65535), and incoming TCP connections on that port are forwarded over a QUIC stream to the client.
Two design choices worth calling out:
No tunnel-layer auth on TCP. Once a TCP tunnel is open, anyone who can reach the public port can reach your service. The tunnel is a dumb pipe. You're expected to use the application's own auth (database passwords, mTLS, whatever) or restrict access at the firewall. Adding auth at the tunnel layer for arbitrary TCP would mean either terminating and re-encrypting (breaks everything) or some kind of port-knock scheme (security theatre). Neither felt right.
A blocked-ports list on the local side. The client refuses to expose 25, 53, 135, 139, 445, 465, 587, 3389 — the ports an open relay or accidental SMB exposure would live on. This protects against a footgun, not a determined attacker, but the footgun is real.
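As a sketch, the blocked-ports check is just a set lookup — the list below copies the ports quoted above, and the function name is illustrative:

```go
package main

import "fmt"

// blockedPorts mirrors the client-side refusal list: the ports where an
// accidental open relay, SMB share, or RDP exposure would live.
var blockedPorts = map[int]bool{
	25: true, 53: true, 135: true, 139: true,
	445: true, 465: true, 587: true, 3389: true,
}

// exposeAllowed is the check a client could run before registering a tunnel.
func exposeAllowed(port int) bool { return !blockedPorts[port] }

func main() {
	fmt.Println(exposeAllowed(3000), exposeAllowed(445)) // → true false
}
```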
Reconnection
The client reconnects on transient network failures with exponential backoff from 1s to 30s. Permanent errors — invalid token, expired token, IP blocked by the server — exit the client immediately rather than spinning in a reconnect loop. The distinction matters because exponential backoff against an auth failure just produces noise in your logs and load on the server without ever recovering.
Tokens are cached in ~/.local/share/rift after first use, so subsequent connections to the same server pick them up automatically without --token on every invocation.
What's solid, what's not
HTTP and TCP tunneling, automatic TLS, token auth, reconnection, WebSockets, and connection migration are all working and stable for personal and small-team use. UDP tunnels are work-in-progress.
The honest caveat for QUIC-based tunnels in general: some networks (corporate firewalls, certain hotel and café Wi-Fi) block or rate-limit UDP/443. If your client environment lives behind one of those, a TCP-based tunnel like frp or chisel will be more reliable. On a normal connection, rift's behavior under multi-tunnel load and across network changes is where the QUIC choice pays off.
Try it
# Terminal 1 — server, dev mode, self-signed cert
rift server --dev --listen :4443
# Terminal 2 — client
rift client --server localhost:4443 --insecure --expose 3000:http:myapp
# → https://myapp.tunnel.localhost
Swap --dev for a real domain and a VPS to go public. Setup details, the systemd unit, the full CLI reference, the comparison table against other self-hosted tunnels, and the /_admin/tokens API are all in the README.
Issues, PRs, and arguments in the comments all welcome.