denesbeck

Posted on • Originally published at arcade-lab.io

๐Ÿ—๏ธ Building my home server: Part 8

Remote access with Tailscale

In my previous blog post, I covered setting up Jellyfin for streaming movies on my local network. In this post, I'm solving a problem that's been bugging me since the beginning: accessing my home lab from outside my local network. Until now, all my services (Jellyfin, Grafana, Pi-hole, Filebrowser, and everything else) were only accessible when I was home on my LAN. If I wanted to stream a movie on my phone while commuting or check my Grafana dashboards from a coffee shop, I was out of luck. Time to fix that.

🤔 Why Tailscale?

When it comes to remote access for a home lab, there are three main contenders: OpenVPN, WireGuard, and Tailscale. I also considered Cloudflare Tunnel, which takes a different approach entirely. Here's how I evaluated them:

OpenVPN is the veteran. It's battle-tested and extremely configurable, but that configurability comes at a cost. You need to set up a PKI (Public Key Infrastructure), manage certificates, configure NAT and port forwarding on your router, and maintain iptables rules. For a single-server home lab, that's a lot of overhead for what should be a simple "let me reach my server from my phone" problem.

WireGuard is the modern alternative: fast, lightweight, and built into the Linux kernel. It's a massive improvement over OpenVPN in terms of performance and simplicity. But you still need to open a port on your router, manage key pairs for every device, and handle DNS configuration yourself.

Cloudflare Tunnel was tempting since I'm already using Cloudflare for my DNS (certbot DNS-01 challenge for the wildcard SSL certificate). It lets you expose services without opening any inbound ports. However, it only works for HTTP/HTTPS traffic. I also want to access my Samba shares and SSH into the server remotely, which Cloudflare Tunnel can't do. Plus, all traffic routes through Cloudflare's servers, adding latency and a privacy consideration.

Tailscale is built on top of WireGuard but removes all the operational pain. No port forwarding, no PKI, no key management, no router configuration. It uses NAT traversal to establish direct peer-to-peer WireGuard tunnels between your devices, even behind CGNAT. You install it, authenticate, and it just works. It's free for personal use (up to 100 devices, 3 users), and has excellent mobile apps for iOS and Android.

For a single-server home lab where I want full network-level access (HTTP, SSH, SMB) with minimal maintenance, Tailscale was the clear winner.

📦 Installing Tailscale

Tailscale provides an official apt repository for Ubuntu. The installation follows the same pattern I used for Docker in Part 4: download the GPG key, add the repository, and install the package:

```shell
# Add the GPG key
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/$(. /etc/os-release && echo $UBUNTU_CODENAME).noarmor.gpg \
  | sudo tee /etc/apt/keyrings/tailscale-archive-keyring.gpg > /dev/null

# Add the apt repository
echo "deb [signed-by=/etc/apt/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" \
  | sudo tee /etc/apt/sources.list.d/tailscale.list

# Install
sudo apt update && sudo apt install tailscale
```

After installation, enable and start the Tailscale daemon:

```shell
sudo systemctl enable --now tailscaled
```

Then authenticate the server to your Tailscale network (called a "tailnet"):

```shell
sudo tailscale up --auth-key=tskey-auth-xxxxx --accept-dns=false --advertise-routes=192.168.64.15/32
```

A couple of things to note here:

  • --auth-key: Instead of opening a browser on the server to authenticate (which isn't practical on a headless Ubuntu Server), you generate a one-time auth key in the Tailscale admin console and pass it directly. The key is single-use: once the server is authenticated, it can't be reused.

  • --accept-dns=false: This is important. By default, Tailscale wants to override the machine's DNS settings with its own MagicDNS. But my server runs Pi-hole, which manages DNS for the entire network. I don't want Tailscale messing with that. Setting --accept-dns=false tells Tailscale to leave the server's DNS configuration alone.

  • --advertise-routes=192.168.64.15/32: This makes the server a subnet router, advertising its LAN IP to the tailnet. I'll explain why this is needed in the Subnet Routing section below.

After authentication, the server gets a Tailscale IP in the 100.x.y.z range:

```shell
tailscale ip -4
# 100.104.44.113
```

This IP is how other devices on my tailnet will reach the server.

🔥 Firewall Configuration

Tailscale creates a virtual network interface called tailscale0. Traffic from my remote devices arrives on this interface, so UFW needs to allow it. I added a single rule:

```shell
sudo ufw allow in on tailscale0
```

This allows all incoming traffic on the Tailscale interface. It might seem broad, but the security model here is layered: only devices authenticated to my tailnet can send traffic through tailscale0 in the first place. Tailscale handles the authentication and encryption, and ACLs (covered later in this post) restrict which devices can reach the server. UFW just needs to let the traffic through once it arrives.

๐Ÿณ Making Pi-hole Listen on Tailscale

This is where I hit my first snag. After setting everything up, I configured the Tailscale admin console to use my server as a global DNS nameserver (so my phone would use Pi-hole for DNS even when remote). The result? My phone completely lost DNS resolution; I couldn't reach anything, not even public websites.

The problem was straightforward: Pi-hole's DNS port (53) was bound to 192.168.64.15, the server's LAN IP. When my phone sent DNS queries to the Tailscale IP (100.104.44.113), Pi-hole wasn't listening there. Connection refused.

The fix was to publish Pi-hole's DNS port on both IPs:

```shell
docker run -d \
  --name pihole \
  -p 192.168.64.15:53:53/tcp \
  -p 192.168.64.15:53:53/udp \
  -p 100.104.44.113:53:53/tcp \
  -p 100.104.44.113:53:53/udp \
  -p 127.0.0.1:800:80/tcp \
  # ... rest of the Pi-hole configuration
```

Now Pi-hole listens on both the LAN and Tailscale interfaces. Devices on the local network send DNS queries to 192.168.64.15, and devices on my tailnet send them to 100.104.44.113. Both get the same Pi-hole experience: ad blocking, tracker blocking, and local DNS resolution.

Local DNS Records

The local DNS records in Pi-hole only need to map to the server's LAN IP:

```shell
docker exec pihole pihole-FTL --config dns.hosts \
  '["192.168.64.15 jellyfin.arcade-lab.io", "192.168.64.15 grafana.arcade-lab.io", ...]'
```

You might wonder: how does this work for remote Tailscale clients? If Pi-hole resolves jellyfin.arcade-lab.io to 192.168.64.15 (a LAN IP), how can a remote device reach it? The answer is subnet routing, which I'll cover in the next section.

An important note: I initially tried adding DNS records for both the LAN IP and the Tailscale IP (100.104.44.113) for each domain. This doesn't work. Pi-hole's dns.hosts is essentially a hosts file: one IP per domain. When you add two entries for the same domain, the last one wins. I ended up with all domains resolving to the Tailscale IP, which meant local access broke when Tailscale was off. The fix was to keep only the LAN IP in Pi-hole and solve the remote access problem at the network layer instead.

Global vs Restricted DNS

When configuring Tailscale to use Pi-hole as a DNS server, there are two options in the admin console:

  • Global nameserver: All DNS queries from all Tailscale devices go through Pi-hole. You get ad blocking everywhere, but if the server is down (mine reboots daily at 3 AM), your devices lose all DNS resolution.

  • Restricted nameserver: Only queries for specific domains (e.g., arcade-lab.io) go through Pi-hole. Everything else uses the device's default DNS. Safer, but no ad blocking when remote.

I went with the global nameserver because I wanted ad blocking on my phone when I'm out. The tradeoff is that during the daily 3 AM reboot window, DNS briefly breaks on connected Tailscale devices, but I'm usually asleep at that point, so it's an acceptable compromise.

๐ŸŒ Subnet Routing

With Pi-hole resolving all *.arcade-lab.io subdomains to 192.168.64.15 (the LAN IP), remote Tailscale clients need a way to actually reach that IP. By default, Tailscale only routes traffic between devices using their Tailscale IPs (100.x.y.z). A remote device has no route to 192.168.64.15; it's a private IP on my home network.

Tailscale solves this with subnet routing. You designate one node on your tailnet as a subnet router, and it advertises local network routes to the rest of the tailnet. Remote devices send traffic destined for the LAN to the subnet router, which then forwards it to the local network.

Enabling IP Forwarding

First, the server needs IP forwarding enabled; without it, the kernel won't forward packets between the Tailscale and LAN interfaces:

```shell
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w net.ipv6.conf.all.forwarding=1
```

To make it persistent across reboots, add both to /etc/sysctl.conf (or a file in /etc/sysctl.d/).
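
For the persistent variant, a drop-in file such as /etc/sysctl.d/99-tailscale.conf (the filename is my own choice here) just needs the two keys:

```
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```

Running `sudo sysctl -p /etc/sysctl.d/99-tailscale.conf` applies it without a reboot.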

Advertising the Route

Then, tell Tailscale to advertise the server's IP as a routable destination:

```shell
sudo tailscale set --advertise-routes=192.168.64.15/32
```

I'm using /32 instead of /24 here: there's no reason to expose my entire LAN subnet when all services run on a single host. Principle of least privilege.
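
To put numbers on that (plain shell arithmetic, nothing Tailscale-specific): a /N prefix leaves 32-N host bits, so a /32 covers exactly one address while a /24 would have exposed 256:

```shell
# Host count of a /N prefix is 2^(32-N)
echo $(( 1 << (32 - 32) ))   # /32 -> 1
echo $(( 1 << (32 - 24) ))   # /24 -> 256
```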

Approving the Route

Advertising a route isn't enough; Tailscale requires explicit approval in the admin console as a safety measure:

  1. Go to Tailscale Machines
  2. Find the server node and click "..." → "Edit route settings"
  3. Enable the 192.168.64.15/32 route

Once approved, remote Tailscale clients can reach 192.168.64.15 as if they were on the LAN. The full flow looks like this:

  1. Phone queries Pi-hole at 100.104.44.113 → gets back 192.168.64.15
  2. Phone sends HTTPS request to 192.168.64.15
  3. Tailscale routes the traffic to the subnet router (the server)
  4. Server forwards the packet to 192.168.64.15, which is itself
  5. The service responds, and the reply follows the same path back

The beauty of this approach is that it keeps the DNS setup simple (one IP per domain), works transparently for both local and remote clients, and doesn't interfere with the public arcade-lab.io domain hosted on Vercel: Pi-hole has no record for the root domain, so those queries pass through to upstream DNS.

🔒 Access Controls

By default, Tailscale allows all devices on a tailnet to communicate with each other. That's fine when it's just your own devices, but it's good practice to lock things down, especially with subnet routing enabled. Tailscale provides tag-based ACLs (Access Control Lists) that you configure in the admin console.

Tagging Devices

I created two tags:

  • tag:server: the Ubuntu server running all services
  • tag:trusted: my personal devices (laptop, phone) that should have full access

Tags are assigned to devices in the Machines page.

ACL Policy

The entire policy boils down to a single rule:

```json
{
  "acls": [
    {
      "action": "accept",
      "src": ["tag:trusted"],
      "dst": ["tag:server:*", "192.168.64.15/32:*"]
    }
  ]
}
```

This allows trusted devices to reach the server via both its Tailscale IP (100.104.44.113) and the subnet route (192.168.64.15), on all ports. Everything else is denied by default: untagged devices or devices without the tag:trusted tag can't reach anything on the tailnet.

The 192.168.64.15/32:* part is important. Without it, ACLs would block traffic to the subnet route even though the route itself is approved. Tailscale evaluates ACLs against the actual destination IP, and traffic routed via the subnet uses the LAN IP (192.168.64.15), not the server's Tailscale IP.
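
One prerequisite worth mentioning: Tailscale only accepts tags that are declared in the policy file's tagOwners section, so a policy using these tags also needs a declaration along these lines (the owner shown is a placeholder; yours may differ):

```json
{
  "tagOwners": {
    "tag:server":  ["autogroup:admin"],
    "tag:trusted": ["autogroup:admin"]
  }
}
```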

Auto-Approving Routes

Remember the manual route approval step from the subnet routing section? You can automate that with autoApprovers:

```json
{
  "autoApprovers": {
    "routes": {
      "192.168.64.15/32": ["tag:server"]
    }
  }
}
```

This tells Tailscale to automatically approve the 192.168.64.15/32 route whenever a tag:server node advertises it. One less manual step after running the Ansible playbook.

๐Ÿ“ Making Samba Work Over Tailscale

With Tailscale and Pi-hole sorted, I could access all my web services remotely through their *.arcade-lab.io subdomains. But my Samba shares were still unreachable. Two things were blocking it:

1. Interface Binding

Samba's configuration had bind interfaces only = yes with interfaces limited to lo, wlo1, and 192.168.64.0/24. The tailscale0 interface wasn't included, so Samba was ignoring traffic from it entirely.

Fix:

```ini
interfaces = lo wlo1 192.168.64.0/24 tailscale0
```

2. Host Allow Restrictions

Each share had hosts allow = 192.168.64.0/24, which only permits connections from the LAN subnet. Tailscale IPs live in the 100.64.0.0/10 CGNAT range, which was being rejected.

Fix:

```ini
hosts allow = 192.168.64.0/24 100.64.0.0/10
```

The 100.64.0.0/10 range covers all possible Tailscale IPs (100.64.0.0 through 100.127.255.255). This is safe because only authenticated devices on my tailnet can reach the Samba port through the tailscale0 interface.
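
As a quick sanity check on that range (pure shell arithmetic, no Tailscale involved): a /10 prefix leaves 22 host bits, and adding 2^22 - 1 to 100.64.0.0 lands exactly on 100.127.255.255:

```shell
# Compute the last address of 100.64.0.0/10
base=$(( (100 << 24) | (64 << 16) ))   # 100.64.0.0 as a 32-bit integer
size=$(( 1 << (32 - 10) ))             # 2^22 addresses in a /10
last=$(( base + size - 1 ))
printf '%d.%d.%d.%d\n' \
  $(( (last >> 24) & 255 )) $(( (last >> 16) & 255 )) \
  $(( (last >> 8) & 255 ))  $(( last & 255 ))
# prints 100.127.255.255
```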

After restarting Samba, I could mount my shares from my laptop over Tailscale just like I was on the LAN.
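
For reference, mounting over Tailscale needs no special options; an /etc/fstab entry on the laptop might look like this (the share name, mount point, and credentials file are hypothetical, not from my actual setup):

```
//192.168.64.15/media  /mnt/media  cifs  credentials=/home/user/.smbcred,uid=1000,_netdev  0  0
```

The _netdev option just tells the system to wait for the network before attempting the mount.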

🤖 Automating with Ansible

As with everything else in this series, the entire Tailscale setup is automated with Ansible. The playbook follows the same pattern as my Docker installation: download the GPG key, add the apt repository, install the package, enable the service, and authenticate.

The auth key is stored in a variables file that's excluded from version control (like all my other secrets), and the playbook is idempotent: it checks whether Tailscale is already authenticated before attempting to re-auth, so running it multiple times doesn't cause issues.

The Tailscale playbook also handles subnet routing: it enables IP forwarding via sysctl and passes --advertise-routes during authentication. For already-authenticated nodes, it runs tailscale set to update the advertised routes without re-authenticating.
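
The idempotency check can be sketched roughly like this (task names and the tailscale_auth_key variable are illustrative, not my actual playbook):

```yaml
- name: Check whether Tailscale is already authenticated
  ansible.builtin.command: tailscale status
  register: ts_status
  changed_when: false
  failed_when: false   # a non-zero exit just means "not connected"

- name: Authenticate to the tailnet (only if needed)
  ansible.builtin.command: >
    tailscale up --auth-key={{ tailscale_auth_key }}
    --accept-dns=false --advertise-routes=192.168.64.15/32
  when: ts_status.rc != 0
```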

The Pi-hole playbook was updated to publish DNS on both IPs (so it accepts queries from LAN and Tailscale clients) but registers local DNS records using only the LAN IP. Subnet routing takes care of making that IP reachable from remote Tailscale devices. The Samba template was updated to include tailscale0 in the interface list and 100.64.0.0/10 in the host allow rules. The UFW playbook got a single new rule allowing traffic on the Tailscale interface.

🎉 Outcome

With Tailscale deployed, I now have:

  1. Full remote access: all my web services are accessible from anywhere via their *.arcade-lab.io subdomains, exactly as they are on the LAN.
  2. Remote ad blocking: Pi-hole filters DNS queries from my Tailscale devices, so I get network-wide ad blocking even when I'm away from home.
  3. Remote file access: Samba shares work over Tailscale, so I can access my movies, photos, and files from anywhere.
  4. SSH from anywhere: I can SSH into the server over Tailscale without exposing the SSH port to the public internet.
  5. Zero router configuration: no port forwarding, no dynamic DNS, no exposed ports. Tailscale handles NAT traversal automatically.
  6. End-to-end encryption: all traffic between my devices is encrypted with WireGuard, even on untrusted networks.

The best part: installing Tailscale on a new device and joining my tailnet takes about 30 seconds. No VPN profiles to configure, no certificates to install, no client configuration files to manage. Noice! 🎉

You can also read this post on my portfolio page.
