InstaTunnel Team
Published by our engineering team
Real-Time Pair Programming: Shared HMR via Collaborative Tunnels
Google Docs for your localhost. Imagine a world where “it works on my machine” isn’t a defensive excuse, but a shared reality. Remote pair programming has moved well beyond the laggy screen-shares of the early 2020s. We’ve entered an era where your CSS changes can reflect on your partner’s screen in milliseconds — even if they’re on another continent and the server is only running on your laptop.
From Screen Sharing to Port Sharing
For years, remote pair programming was a compromise. We used tools like Zoom or Slack Huddles to watch a video stream of someone else’s IDE. While tools like VS Code Live Share improved things by sharing text buffers, they often struggled with the most critical part of the feedback loop: the browser itself.
Traditional workflows forced the “follower” to either watch a blurry video of the “leader’s” browser, or attempt to pull the branch and run the environment locally — a process that’s frequently derailed by missing .env files and mismatched node_modules.
Collaborative localhost tunneling solves this by treating your dev port as a shared, live resource. By proxying the Hot Module Replacement (HMR) WebSocket through a tunnel, developers can achieve a synchronized state where every save triggers a DOM update on every connected client simultaneously.
How HMR Actually Works
Before you can share it, you need to understand it. Modern dev tools like Vite, Webpack, and Turbopack use a persistent WebSocket connection between the dev server and the browser. When you save a file:
1. The server recompiles the specific module that changed.
2. A message is sent over the WebSocket to the client.
3. The client fetches the updated code and hot-swaps it — no full page reload required.
Vite’s HMR system dispatches a defined set of lifecycle events: vite:beforeUpdate, vite:afterUpdate, vite:beforeFullReload, vite:invalidate, and vite:error, among others. The @vite/client runtime runs in the browser, manages the WebSocket connection, and applies updates via the import.meta.hot API, which application code can use to register callbacks and handle module replacement.
CSS updates are handled by swapping the injected style tags in place, which prevents flashes of unstyled content. JavaScript updates trigger a dynamic import() of the updated module with a cache-busting timestamp. The whole system is carefully designed to avoid full-page reloads wherever possible.
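The cache-busting part is simple enough to sketch. This is an illustrative helper, not Vite's actual source: appending a timestamp query string gives the updated module a URL the loader has never seen, so it re-fetches instead of serving its cached copy.

```javascript
// Sketch (illustrative, not Vite internals): force a module re-fetch by
// giving the URL a unique timestamp query parameter.
function bustCache(url, timestamp = Date.now()) {
  const sep = url.includes('?') ? '&' : '?';
  return `${url}${sep}t=${timestamp}`;
}

// A hot update for /src/App.jsx would then be imported as something like:
console.log(bustCache('/src/App.jsx', 1700000000000));
// → /src/App.jsx?t=1700000000000
```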
The critical implication for remote sharing: by default, this WebSocket binds to 127.0.0.1. Nothing outside your machine can receive those signals. This is where tunneling comes in.
The TCP-over-TCP Problem (and Why WireGuard Solves It)
The performance bottleneck for tunneled HMR isn't bandwidth; it's protocol overhead. Traditional SSH-based tunnels suffer from a well-known pathology: TCP-over-TCP head-of-line blocking. When you wrap TCP inside TCP, packet loss at the outer layer stalls the inner stream, and the two layers' retransmission and slow-start behavior compound, collapsing throughput in high-latency or lossy environments.
The tunneling ecosystem has responded by moving to WireGuard, which operates over UDP and avoids this problem entirely. WireGuard is an open-source VPN protocol integrated directly into the Linux kernel, designed from the ground up to be simpler, faster, and more auditable than IPsec or OpenVPN. Its cryptographic stack — Curve25519 for key exchange, ChaCha20-Poly1305 for encryption, BLAKE2s for hashing — is minimal and modern. Because WireGuard processes packets in kernel space rather than user space, it avoids the context-switching overhead that plagues older VPN implementations.
In real-world comparisons, WireGuard's latency advantage is substantial. In one test from the same server location, WireGuard latency dropped to around 40 ms compared to 113 ms on OpenVPN over TCP, with jitter all but eliminated. For HMR, where the signal is a tiny WebSocket message that needs to arrive fast, that difference is the gap between a snappy, delightful dev experience and one where you're constantly wondering whether your save registered.
Technical Setup: Vite Behind a Tunnel
Getting HMR to work across a tunnel requires one non-obvious configuration change: you have to tell Vite's HMR client explicitly where the WebSocket lives. Without this, the follower's browser tries to connect to localhost, which on their machine has no dev server running, and the updates silently fail.
The key insight is that server.hmr.host tells the browser’s HMR client where to open its WebSocket connection. Setting server.host to 0.0.0.0 makes Vite bind to all network interfaces rather than only loopback, and server.allowedHosts permits traffic arriving through the tunnel’s domain.
```js
// vite.config.js
export default {
  server: {
    host: '0.0.0.0',
    allowedHosts: ['.your-tunnel-domain.dev'],
    hmr: {
      protocol: 'wss', // secure WebSockets
      clientPort: 443,
      host: 'your-session.your-tunnel-domain.dev', // your tunnel URL
    },
  },
}
```
If you’re using a reverse proxy (nginx, Caddy) in front of Vite, you also need to forward the WebSocket upgrade headers:
```nginx
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```
Without those two headers, the browser establishes a regular HTTP connection, the WebSocket handshake never completes, and HMR silently breaks.
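The failure mode is easier to reason about once you see what the backend actually checks. This illustrative predicate (not any real server's code) captures the two-header test a WebSocket server performs before replying 101 Switching Protocols:

```javascript
// Sketch: the check a WebSocket server makes before completing the
// handshake. If a proxy strips either header, the request arrives as
// plain HTTP, no 101 response is sent, and the HMR client gets silence
// rather than an obvious error.
function isWebSocketUpgrade(headers) {
  const connection = (headers['connection'] || '').toLowerCase();
  const upgrade = (headers['upgrade'] || '').toLowerCase();
  return connection.includes('upgrade') && upgrade === 'websocket';
}

console.log(isWebSocketUpgrade({ connection: 'Upgrade', upgrade: 'websocket' })); // true
console.log(isWebSocketUpgrade({ connection: 'keep-alive' })); // false: headers stripped
```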
The 2026 Tunneling Landscape
The market for localhost tunneling has matured and fragmented significantly. Here’s where the major players actually stand today:
ngrok
Once the near-universal default, ngrok has pivoted hard toward enterprise “Universal Gateway” features. Its free tier has become genuinely restrictive — 1 GB/month bandwidth — and in February 2026, the DDEV open-source project opened an issue to consider dropping ngrok as its default sharing provider due to these tightened limits. ngrok also has no UDP support as of 2026, which is an architectural limitation, not a configuration issue. For API and webhook debugging with its excellent request inspection and replay tooling, it remains the best in class. For collaborative HMR sharing on a budget, you’ll likely want something else.
Tailscale Funnel
Tailscale builds an encrypted peer-to-peer mesh VPN using WireGuard under the hood, and its Funnel feature lets you expose a specific port from within that private network to the public internet. Traffic flows directly between devices using WireGuard rather than routing through a central relay, which means lower latency and higher throughput. For teams already running Tailscale internally, Funnel is the lowest-friction option — personal use is free, team plans start around $5/month.
An important security property: Funnel ingress nodes don't gain packet-level access to your private tailnet. And if you're sharing only with a specific teammate, you can skip Funnel entirely and just invite them to your tailnet, restricting their ACL to only the specific service they need.
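For the tailnet route, access can be narrowed to a single port in the tailnet policy file. A sketch of what that restriction might look like, with the user and tag names entirely hypothetical (Tailscale policies are HuJSON; consult the ACL docs for the exact syntax your setup needs):

```json
{
  "acls": [
    { "action": "accept", "src": ["teammate@example.com"], "dst": ["tag:dev:5173"] }
  ]
}
```

The idea: the invited teammate can reach the dev server port on machines tagged `tag:dev` and nothing else on your tailnet.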
Cloudflare Tunnel
For anything production-facing, Cloudflare Tunnel is the strongest option: free bandwidth, global CDN, DDoS protection, and a configurable WAF. It works via an outbound-only connection architecture that eliminates the need to open inbound ports. The tradeoff is that setup is more involved and it routes through Cloudflare’s infrastructure rather than peer-to-peer.
Pinggy
Pinggy’s greatest trick is requiring zero installation. You run a standard SSH command, and you get a public tunnel URL, a terminal UI with QR codes, and a built-in request inspector. It also supports UDP tunneling, which ngrok lacks. Paid plans start at $2.50/month billed annually — less than half of ngrok’s personal tier.
Localtunnel
The old open-source default. By 2025–2026, it’s effectively unusable for professional work — no sustainable funding model, slowing maintenance, and public servers with frequent downtime. Fine for a five-minute throwaway demo; not for a pair programming session.
Tool Selection at a Glance
| Use Case | Recommended Tool | Why |
| --- | --- | --- |
| Internal team access | Tailscale Funnel | Secure mesh, no public ports |
| API / webhook debugging | ngrok (paid) | Best request inspection on the market |
| Quick throwaway tunnel | Pinggy | Zero install, one SSH command |
| Public HTTP / production | Cloudflare Tunnel | WAF, DDoS protection, free bandwidth |
| UDP / game servers / IoT | LocalXpose or Playit.gg | Native UDP support |
| Self-hosted / data sovereignty | frp or Inlets | Full control, no vendor dependency |
Practical Use Cases
The Design-to-Dev Live Loop
Instead of recording a Loom of a CSS animation, a developer shares their localhost with a designer. As cubic-bezier values are tweaked in real time, the designer sees the animation update on their own monitor — on their own machine, in their own browser — and gives immediate feedback on the “feel” of the interaction. No screen-share lag, no compression artifacts.
Complex State Debugging
Debugging a multi-step checkout form is much harder to describe than to show. With a shared tunnel, a senior developer can watch the console on their own machine while you drive the application state. You don’t have to narrate each click. They’re in the app with you.
Cross-Device Testing in One Save
Open the tunnel URL on your physical iOS device. Have your partner open it on their Android. One code change, two mobile browsers update simultaneously, zero deployments.
Security Considerations
The main risk of always-on tunnels is what some call the “dangling endpoint” — a forgotten tunnel left open that exposes unauthenticated internal APIs or local database interfaces.
Enforce ephemeral endpoints. Never use a persistent subdomain for a pair programming session. Use sessions that expire automatically when the CLI process terminates. Most modern tunnel tools support this, and some (like Pinggy) make ephemeral URLs the default.
Use wss:// strictly. Browsers block insecure ws:// connections from pages served over HTTPS as mixed content, so an HMR client that tries to downgrade will simply fail. Always configure your Vite setup to use protocol: 'wss' when working across a tunnel.
Limit concurrent followers. Collaborative tunnels can be CPU-intensive on the host machine. A practical cap of 3–5 concurrent “followers” prevents your local dev server from throttling under the load of serving multiple remote clients.
Use ACLs when possible. If you’re on Tailscale, prefer sharing within the tailnet with ACL-restricted access over exposing a public Funnel endpoint. The smaller the blast radius, the better.
Why WireGuard Won
It’s worth being explicit about why nearly every serious tunneling tool has converged on WireGuard as the underlying protocol. The Linux kernel integration is the key architectural advantage: WireGuard operates as a virtual network device inside the kernel’s network stack, processing encrypted packets without the context-switching overhead that user-space VPN implementations incur per-packet. The codebase is around 4,000 lines — deliberately minimalist and auditable — versus OpenVPN’s ~70,000. The cryptographic primitives are pre-selected and modern, with no negotiation surface for downgrade attacks.
For HMR specifically, the UDP-based transport is what matters. WireGuard handles packet loss and reordering within its own design without the retransmission pathologies of TCP-over-TCP. High-frequency WebSocket streams — exactly what HMR generates — flow through WireGuard with consistently low latency rather than bursty, head-of-line-blocked delivery.
Best Practices
Prefer ephemeral URLs. Auto-expiring endpoints that die when the CLI exits prevent dangling access.
Always use wss://. Insecure ws:// connections from HTTPS pages are blocked as mixed content in modern browsers.
Cap concurrent followers at 3–5 to protect your machine’s performance.
Be careful with local databases. If your dev environment connects to a local database with real or realistic data, make sure your tunnel partner can’t accidentally hit endpoints that expose it. Scope their access or use seeded dummy data.
Prefer private mesh over public Funnel when your collaborators can install a client. Peer-to-peer is faster and doesn’t expose a public endpoint.
The Bigger Picture
The tunneling ecosystem in 2026 is richer and more competitive than it has ever been. ngrok remains excellent for enterprise use cases, but its free tier is now a proof-of-concept product rather than a daily driver. For almost every other use case — collaborative HMR, internal team access, UDP services, self-hosted infrastructure — a better-fit and often cheaper option exists.
By treating your localhost port as a shared, secure, collaborative resource rather than a private one, you can close the gap between working locally and working together. The feedback loop that makes frontend development satisfying — save, see, iterate — stops being a solo experience and becomes a shared one.
The distance between two developers, whether they’re across a desk or across twelve time zones, is increasingly just a tunnel command away.