I spent three hours trying to SSH into a legacy industrial gateway from a coffee shop, only to realize I'd forgotten to install the Tailscale agent on that specific piece of hardware. The device was a locked-down firmware image where "installing a binary" isn't an option. That's the moment I stopped trying to put Tailscale on every single node and instead shifted to a dedicated subnet router.
If you have a multi-node Proxmox cluster, a rack of IoT sensors, or a bunch of "dumb" switches and PDUs, you can't possibly install a client on everything. You need a way to tell your Tailnet: "If you're looking for anything in the 10.0.0.x range, just send the traffic to this specific Linux box, and it'll handle the rest."
The Concept: Routing vs. Agent-based Access
Standard Tailscale is a mesh. Every device is a peer. This is great for your laptop and your primary workstation, but it's a nightmare for infrastructure. A subnet router turns a single node into a gateway. It acts as a bridge between the encrypted WireGuard mesh and your local physical network.
The magic here is that the devices on your LAN don't even know Tailscale exists. They just see traffic coming from the subnet router's local IP. You get the security of a private mesh without having to touch the network configuration of your legacy gear or your Kubernetes pods.
Implementation: The "Happy Path" and the Reality
The official docs make this look like a one-line command. While that's technically true, if you're running this on a production-grade homelab or a bare-metal node, there are a few kernel-level requirements that usually get glossed over.
First, you have to enable IP forwarding. If the Linux kernel isn't allowed to pass packets between interfaces, your subnet router is just a fancy wall.
# Enable IPv4 forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
# Make it persist across reboots
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo "net.ipv6.conf.all.forwarding = 1" | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
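Before moving on, it's worth confirming the kernel actually picked up the settings; a quick sanity check:

```shell
# Both files should read 1 once forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv6/conf/all/forwarding
```

If either still reads 0, your subnet router will show as online but silently drop every forwarded packet, which is a miserable thing to debug from the Tailscale side.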
Once the kernel is ready, you bring Tailscale up. One caveat: --tun is a flag on the tailscaled daemon, not on tailscale up, so if you've experimented with userspace networking in the past, make sure the daemon is back on a kernel TUN device (the default, tailscaled --tun=tailscale0) before advertising routes. Userspace networking (--tun=userspace-networking) is a poor fit for a subnet router: every forwarded packet has to squeeze through the daemon process instead of the kernel's routing path, which costs you throughput and breaks the transparent-gateway behavior this whole setup depends on.
# Start Tailscale as a subnet router for a specific range
# Replace 10.0.0.0/24 with your actual local subnet
sudo tailscale up --advertise-routes=10.0.0.0/24
After running this, the route still isn't live. You have to go into the Tailscale Admin Console and manually approve it under the machine's route settings. This is a security feature to prevent a compromised node from suddenly hijacking traffic for your entire network.
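Approval in the console only covers the server side. On the client side, Linux machines ignore advertised subnet routes by default, so each Linux client that should reach the LAN needs to opt in:

```shell
# Linux clients don't accept advertised subnet routes unless told to
sudo tailscale up --accept-routes
```

Windows, macOS, and the mobile clients accept approved routes automatically; this step bites Linux users almost exclusively, and the symptom is the same "node is online but can't ping the LAN" confusion described below.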
The Kubernetes and Gateway Trap
If you're running your subnet router inside a container or as part of a larger orchestration layer, you'll likely run into the gateway.bind issue. I hit this while integrating a gateway with some Kubernetes services.
When using tools like OpenClaw or custom wrappers, the default binding often fails because the application tries to bind to an interface that isn't actually the LAN. If your config looks like a generic default, you'll see the node is "online" in the dashboard, but you can't ping anything on the local subnet.
You need to explicitly tell the gateway to bind to the LAN interface. In the JSON config, it looks like this:
{
  "gateway": {
    "bind": "lan"
  }
}
Without this, the traffic often loops back or hits a dead end in the container network. This is similar to the networking headaches I've dealt with regarding DNS resolution, like the Wildcard DNS and ndots:5 nightmare, where the system thinks it knows where to go but the underlying routing logic is fundamentally flawed.
Turning it into an Exit Node
A subnet router lets you reach your home from the outside. An exit node lets you send all your internet traffic through your home from the outside. It's the difference between "I want to see my Proxmox UI" and "I'm on public WiFi and I want to pretend I'm at home for security."
To do this, add the --advertise-exit-node flag. Keep in mind that tailscale up is not additive: re-running it replaces your settings, so you have to repeat --advertise-routes alongside the new flag (recent versions of the CLI will refuse and tell you which flags you'd be dropping):
sudo tailscale up --advertise-routes=10.0.0.0/24 --advertise-exit-node
Here is where most people get stuck. You'll enable the exit node, select it in your client, and then realize you have zero internet access. The packets are reaching your router, but they aren't being NAT'd back out to the web. You need a MASQUERADE rule in your iptables to handle the translation.
# Replace eth0 with your actual primary network interface
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
If you're using a modern distro with nftables, you'll need the equivalent rule there. If you don't do this, the return packets from the internet don't know how to get back to the Tailscale client because they're coming from a virtual IP range the rest of your network doesn't recognize.
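On an nftables-based distro, the same NAT can be expressed natively rather than through the iptables compatibility shim. A minimal sketch, assuming eth0 is the uplink and using a dedicated table name (ts-nat is arbitrary; adjust to fit any existing ruleset):

```shell
# Create a NAT table and a postrouting chain hooked at source-NAT priority,
# then masquerade anything leaving via eth0 (replace with your real uplink)
sudo nft add table ip ts-nat
sudo nft add chain ip ts-nat postrouting '{ type nat hook postrouting priority srcnat ; policy accept ; }'
sudo nft add rule ip ts-nat postrouting oifname "eth0" masquerade
```

A dedicated table keeps the Tailscale NAT rule easy to find and easy to flush (nft delete table ip ts-nat) without touching the rest of your firewall.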
The Advanced Headache: NAT-PMP and Network Namespaces
In some complex environments, specifically when dealing with certain industrial gateways or strict NAT setups, you'll encounter NAT-PMP misrouting. This happens when Tailscale tries to be too smart about the local network and accidentally routes requests to a remote subnet router instead of the local one.
The fix is ugly, but it works: isolate the Tailscale traffic into its own network namespace (netns). This prevents the daemon from interfering with the host's primary routing table in ways that cause loops.
# Create a dedicated namespace for Tailscale
sudo ip netns add tailscale
sudo ip link add veth0 type veth peer name veth1
sudo ip link set veth0 netns tailscale
# Host side of the veth pair (pick a range that does NOT overlap your LAN)
sudo ip addr add 10.200.0.1/24 dev veth1
sudo ip link set veth1 up
# Namespace side: address, loopback, and a default route back to the host
sudo ip netns exec tailscale ip addr add 10.200.0.2/24 dev veth0
sudo ip netns exec tailscale ip link set veth0 up
sudo ip netns exec tailscale ip link set lo up
sudo ip netns exec tailscale ip route add default via 10.200.0.1
# Run the daemon inside the namespace
sudo ip netns exec tailscale tailscaled
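One follow-up the namespace trick needs: the daemon inside the netns has no path to the internet until the host forwards and NATs the veth traffic, so tailscaled can't even reach the coordination server. A sketch, assuming eth0 is the uplink; match the source range to whatever subnet you assigned the veth pair:

```shell
# Let the host forward for the namespace and NAT its traffic outbound.
# Replace the -s range with your veth pair's subnet and eth0 with your uplink.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.200.0.0/24 -o eth0 -j MASQUERADE
```

Without this, the namespaced daemon sits there retrying its control-plane connection forever, and the node never comes online in the dashboard.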
This is overkill for 90% of homelabbers, but if you're building out automated infrastructure with OpenTofu and deploying these routers across multiple sites, you'll want this kind of isolation to ensure stability.
Comparison: Subnet Routers vs. Traditional VPNs
I've run OpenVPN and WireGuard (manual) for years. Here is the honest breakdown of why I switched to Tailscale subnet routers for my remote access.
| Feature | Traditional VPN (OpenVPN/WireGuard) | Tailscale Subnet Router |
|---|---|---|
| Setup | Manual certs, port forwarding, firewall rules | Zero-config NAT traversal, OAuth |
| Client Mgmt | Distributing .ovpn or .conf files | Log in with SSO/identity provider |
| Routing | Manual static routes on clients | Centralized route management in console |
| Maintenance | Updating keys, managing IP pools | Automatic key rotation, managed IPs |
| Performance | High (if tuned correctly) | High (WireGuard based) |
The tradeoff is the "phone home" aspect. Tailscale's coordination server knows which nodes are online. For most of us, that's a fair price to pay to avoid spending a Saturday morning debugging why a UDP port isn't opening on a residential ISP.
Gotchas and Lessons Learned
If you're setting this up, watch out for these three things:
- The "Double-Hop" Latency: If you use a subnet router and then an exit node on a different machine, your traffic is bouncing across your network multiple times. It's fine for SSH, but terrible for VoIP or gaming. Keep your subnet router and exit node on the same high-performance machine if possible.
- DNS Leaks: Just because you can route to 10.0.0.x doesn't mean your DNS is working. You'll still be typing IPs unless you configure "MagicDNS" or set up a global nameserver in the Tailscale admin panel that points to your internal DNS (like AdGuard Home).
- Fail-Closed Policies: By default, if your subnet router goes down, you lose access to the entire LAN. If this is for a production environment, I highly recommend setting up two subnet routers in different failure domains. Tailscale doesn't do "automatic failover" in the traditional sense, but you can have multiple nodes advertising the same route.
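The failover setup is less exotic than it sounds: it's the same command run on two different boxes, with both routes approved in the admin console. A sketch:

```shell
# On router-a AND router-b (separate machines, separate failure domains):
# advertise the identical subnet, then approve both routes in the console.
# If the active router drops off, clients shift to the survivor, but expect
# an interruption measured in seconds to minutes, not instant failover.
sudo tailscale up --advertise-routes=10.0.0.0/24
```

Remember the kernel prerequisites apply to both machines: each one needs IP forwarding enabled, or the "backup" router will accept the route and then drop every packet.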
Final Thoughts
The subnet router is the only sane way to manage remote access to a complex lab. It separates the "connectivity" layer from the "device" layer. You don't need to care if your old NAS doesn't support WireGuard or if your industrial PLC has a proprietary OS. You just need one stable Linux box with net.ipv4.ip_forward enabled and a couple of iptables rules.
Reach for this technique the moment you find yourself saying, "I wish I could just SSH into this thing without having to install a client on it." If you're looking to scale this into a larger professional setup, feel free to check out my infrastructure consulting services for help with AI agent orchestration or bare-metal networking.