You just deployed your pre-production environment. It works. You're proud.
Then someone asks: "Can I access it from home?" And you think, sure, I'll just... expose it to the internet?
No. Absolutely not. Your dev and pre-production servers are full of half-finished features, debug endpoints, test data, and admin panels whose password is "admin." Putting them on the public internet is like leaving your house keys under the mat and tweeting the address.
You need a solution. One that works for your whole team — not just the engineers.
The Problems Nobody Warns You About
Here's the situation. You have a server somewhere — a Kubernetes cluster, a VPS, a rack in the basement that everyone pretends is temporary. Inside, you run multiple environments: development, pre-production, monitoring dashboards, error tracking. Internal stuff. The kind of services your team needs daily but the outside world should never touch.
The obvious move: give each service a public URL, slap on authentication, call it a day.
Four problems with that approach.
Attack surface. Every public endpoint adds to it. Security teams hate it. They're right. You can argue with them, but you won't win. You shouldn't.
Authentication overhead. You now need login pages on every internal tool. Your monitoring dashboard, your error tracker, your dev and pre-production environments, your reverse proxy dashboard. SSO exists, but it's complex to set up and maintain. You wanted to ship a feature, not become an identity provider. For tools five people use.
Search engine indexing. Google indexes your pre-production site. Now your half-finished app shows up in search results right next to your real product. You're explaining to a client that no, that's not the real product, and yes, that error message is normal. Great first impression.
Bot traffic. Security scanners and bots crawl the internet constantly. A public endpoint gets found — not if, when. They probe it, hammer it, try default credentials. Your pre-production API — the one with debug mode on and no rate limiting — is now getting brute-forced by strangers. Not because you were targeted. Because you were visible. That's all it takes.
The real answer: make these services invisible. Not hidden behind a login page. Actually unreachable. If you're not authorized, they don't exist.
How? That's where it gets interesting.
What We Actually Need
Before diving into tools, let's be clear about the requirements.
We need a way for authorized devices to reach internal services, and for everyone else to not even know they exist. Not "protected by a login page." Actually invisible. No DNS resolution, no open port, nothing to find.
That points toward a private network — a tunnel between your device and the server that only exists for people who are allowed in. In networking terms, that's called a VPN: Virtual Private Network. Think of it as a secret hallway. Once you're in, you can access everything inside. From the outside, the hallway doesn't exist.
OK, so a VPN. But not all VPNs are created equal. Here's what matters for a real team:
- It has to work on phones. Your tester needs to check the app on their iPhone. Your product owner demos from their iPad. A VPN that only works on Linux is a VPN for one person.
- Non-technical people must be able to connect. No config files, no terminal commands, no certificates to import. An app, a button, done.
- Services need permanent, human-readable names. Not IP addresses. Real URLs that don't change when you redeploy. Your frontend needs to know where the API lives. Your backend sends emails with links pointing to the app. Configuration has to be simple — one URL per environment, stable across deploys, the same for everyone on the team.
- No open ports. The whole point is to hide services from the internet. The VPN itself shouldn't require poking holes in your firewall.
With that list in mind, let's look at the options.
The Options
OpenVPN. The grandparent of VPNs. Battle-tested, runs on everything. Also feels like it was designed when "user-friendly" meant "has a man page." Setting it up means generating certificates, managing a PKI, writing config files. Apps exist for phones, but you need to export a .ovpn profile and import it on each device. Your product owner will stare at that file, then stare at you, then ask if there's another way. There is.
SSH tunnels. If you've ever worked on a remote server, you've used SSH — Secure Shell. It gives you a terminal on a distant machine, encrypted. SSH tunnels are a trick on top of that: you tell SSH "take this port on my laptop and connect it to that port on the server." Suddenly your local browser can reach a service running on the other side of the world, through a hidden encrypted tunnel.
The developer's duct tape. Quick, works in a pinch. But it's one tunnel per service. Need seven services? Seven tunnels. Close your laptop? All seven die. And it doesn't work on phones at all — so for anyone outside your engineering team, it's a non-starter.
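For the curious, a single tunnel is one command. This is a sketch with placeholder names (`deploy`, `dev.internal`, and the ports are illustrative):

```shell
# Forward local port 8080 to port 3000 on the server's loopback,
# without opening a remote shell (-N). While this process runs,
# http://localhost:8080 reaches the remote service. Close the
# laptop, and the tunnel dies with it.
ssh -N -L 8080:localhost:3000 deploy@dev.internal
```

One command per service, one terminal per tunnel. You can see why it doesn't scale past one engineer.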
WireGuard (raw). The modern answer to OpenVPN. It's a protocol — just the tunnel part — and it's excellent. Fast, simple, built into the Linux kernel. But "simple" is relative. You still generate key pairs for every device, copy public keys between machines, edit config files, manage IP assignments. Every new team member means SSH-ing into the server. Every lost phone means re-keying. For two engineers who enjoy networking on a Friday night, it's fine. For a mixed team? Non-starter.
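To make that overhead concrete, here is roughly what one device's WireGuard config looks like (keys, IPs, and the endpoint are placeholders). Multiply it by every device, and keep every public key in sync by hand:

```ini
[Interface]
# This device's private key, generated with `wg genkey`
PrivateKey = <this-device-private-key>
Address = 10.0.0.2/32

[Peer]
# The server's public key, copied over manually
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Which traffic is routed through the tunnel
AllowedIPs = 10.0.0.0/24
```

Every line of that file is something a non-technical teammate should never have to see.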
Here's the thing though: WireGuard as a protocol won. Almost every modern VPN tool uses it under the hood. The question isn't which tunnel — it's what you build around it.
Tailscale. This is where it gets interesting. Tailscale wraps WireGuard in a layer that actually works for humans. You install an app — on Mac, Windows, Linux, iOS, Android — tap "connect," and you're on the network. No config files, no keys to manage, no ports to open. It handles NAT traversal, so it works from hotel Wi-Fi and mobile hotspots without any firewall configuration. It has built-in DNS, so services get real names instead of IP addresses.
Your product owner can do it. Your tester can do it from their iPhone. It checks every box on the list.
The catch: Tailscale has two parts. The tunnel — peer-to-peer, encrypted, WireGuard, great. And the control plane — the thing that decides who connects, hands out keys, manages DNS. That control plane lives on Tailscale's servers. They don't see your traffic, but the coordination goes through them. For a side project, fine. For a client project where someone asks "where does our VPN metadata go?" — enjoy that meeting.
Also, the free tier has limits. Paid plans charge per user. The bill grows with the team.
ZeroTier, Twingate, and others. Similar category. Each has tradeoffs in pricing, features, and how much control you give up. The fundamental question is the same: do you want someone else running your control plane?
Headscale. This is the one I picked.
The One Where You Keep Your Keys
Headscale is an open-source implementation of the Tailscale control server. Read that sentence again, because it's doing a lot of work.
Same Tailscale clients on every device. Same WireGuard tunnels. Same user experience. But the control plane — who's allowed on the network, what names resolve to what — lives on your server. Your rules, your data, your problem. The good kind of problem.
Your product owner downloads Tailscale on their Mac. Changes one URL in the settings. Connected. Your tester opens the Tailscale app on their Android phone. Same URL. Connected. Your colleague on Linux installs a package and runs one command. Connected.
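That "one command" on Linux is just the standard Tailscale client pointed at your own control plane. A sketch, with a placeholder Headscale URL:

```shell
# Use the stock Tailscale client, but register against your
# self-hosted Headscale instance instead of Tailscale's servers.
tailscale up --login-server https://headscale.example.com
```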
Everyone's on the same private network. Phones, laptops, tablets. No config files passed around. No VPN profiles to import. No "which certificate do I use?" questions. And the coordination stays on your infrastructure.
Three things sealed the deal.
The clients already exist. Tailscale has polished, well-maintained apps on every platform. By using Headscale, I get all of those for free. My tester doesn't know or care that the server is self-hosted. They see the Tailscale app. It works. That's all they need. I didn't write a single line of client code. My favorite kind of code.
No firewall configuration. NAT traversal handles everything. Devices find each other through the coordination server, then establish direct encrypted connections. Works from coffee shops, airport Wi-Fi, phone hotspots. Nothing to open, nothing to forward. Your network admin can keep sleeping.
Permanent, human-readable names. This is the one that surprised me.
When you put a service behind a VPN, you need a way to reach it. IP addresses work, but 10.43.98.210 means nothing to anyone. You'll forget it. Your team will definitely forget it. Everyone ends up maintaining a bookmark list that drifts out of sync within a week.
Headscale has MagicDNS. You define DNS records, and they resolve automatically for anyone connected to the VPN. Not through some external DNS provider. Not through hosts file hacks. Through the VPN itself.
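In practice, that's a short list of extra records in Headscale's config file. A sketch, assuming a recent Headscale version (the exact key layout has shifted between releases, and the IP is a placeholder for the reverse proxy's Tailscale address):

```yaml
# Fragment of Headscale's configuration. Each record resolves for
# every device connected to the VPN; both names point at the proxy.
dns:
  magic_dns: true
  extra_records:
    - name: "monitoring.vpn.myproject.dev"
      type: "A"
      value: "100.64.0.3"
    - name: "api-dev.vpn.myproject.dev"
      type: "A"
      value: "100.64.0.3"
```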
So my team types https://monitoring.vpn.myproject.dev in their browser. It resolves. It loads. Green padlock, real TLS certificate. Every time.
The names don't change when I redeploy. They don't change when I move the cluster. They don't change when a service restarts and gets a new internal IP. The DNS record points to the reverse proxy, and the proxy routes to wherever the service lives today.
Stable URLs for unstable environments. That's the whole trick.
The Architecture: Building Blocks
So how does this actually come together? Four pieces. Each one does one job. You probably already have opinions about some of them.
Piece 1: Headscale. The coordination server. It runs as a single lightweight process, manages device registrations, hands out keys, and serves MagicDNS records. It needs a domain name so clients can reach it — this is the only part that's publicly exposed, and it only speaks the Tailscale coordination protocol. No web admin panel, no open dashboard. Just a control plane endpoint. Boring by design.
Piece 2: The subnet router. A small container running the Tailscale client inside your cluster. Its job: tell the VPN "hey, the internal network 10.43.0.0/16 is reachable through me." When someone on the VPN wants to reach an internal service, their traffic goes through an encrypted tunnel to this container, and from there into the cluster's network. One container, one role. It doesn't even know what services exist — it just bridges two networks.
Piece 3: A reverse proxy. This is where the URL routing happens. You need something that listens for incoming requests, looks at the hostname (monitoring.vpn.myproject.dev vs api-dev.vpn.myproject.dev), and sends the request to the right service. Traefik, Caddy, nginx — any of them work. I use Traefik because it integrates well with Kubernetes and handles TLS certificates automatically. Caddy would work just as well and has an even simpler config. Nginx if you're already married to it — no judgment, we all have that one tool.
The proxy also handles TLS certificates. Since these services aren't reachable from the public internet, you can't use the standard HTTP challenge for Let's Encrypt. Instead, you use a DNS challenge — prove you own the domain by creating a DNS record, and Let's Encrypt issues a wildcard certificate for *.vpn.myproject.dev. Your proxy automates this. Real certificates, no browser warnings, no self-signed hacks.
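With Traefik, the DNS challenge is a few lines of static configuration. A sketch, assuming a DNS provider Traefik's ACME integration supports (the provider name and email are placeholders, and the provider's API credentials go in environment variables):

```yaml
# Traefik static configuration fragment. The DNS-01 challenge proves
# domain ownership via the DNS provider's API, so no public HTTP
# endpoint is needed to issue the wildcard certificate.
certificatesResolvers:
  vpn:
    acme:
      email: ops@myproject.dev
      storage: /data/acme.json
      dnsChallenge:
        provider: cloudflare
```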
Piece 4: An IP allowlist. One middleware rule on the reverse proxy: only accept connections from the Tailscale IP range (100.64.0.0/10). Even if someone somehow discovers the internal URLs, they can't reach the services without being on the VPN. Belt and suspenders. Paranoia as a feature.
That's it. Four components. Headscale coordinates, the subnet router bridges networks, the proxy routes and encrypts, the allowlist enforces access.
Less Is More
Now look at the four problems from the beginning again. Attack surface, authentication overhead, search engine indexing, bot traffic. Four problems. Four different symptoms.
Same root cause: your services were visible.
You didn't solve four problems. You removed the one thing that caused all of them. No attack surface because there's no surface. No authentication overhead because there's nothing to authenticate — if you can reach it, you're already authorized. No indexing because Google can't find what doesn't exist on the public internet. No bot traffic because there's no port to scan.
The best security isn't adding layers. It's removing the need for them.
What Your Team Actually Sees
This is the part that matters. Because the architecture is an infra problem. The experience is what they live with.
Your product owner opens the Tailscale app on their Mac. It connects. They type https://app-ppd.vpn.myproject.dev in their browser. The pre-production app loads. They test. They find a bug. They file a ticket with the URL in it. The URL works for everyone on the team because it's the same for everyone.
Your tester opens Tailscale on their Android phone. Same URL, same app, same behavior. They test the mobile experience on a real device, on a real pre-production environment, from their couch. As it should be.
When they disconnect from the VPN, those URLs stop resolving. The services become invisible. Not "protected by a login page" invisible. Actually invisible. DNS doesn't resolve. There's nothing to attack because there's nothing to find.
No one on the team needs to understand WireGuard, port forwarding, SSH, or certificates. They install an app, connect, and use URLs. That's the bar. Everything below it is someone else's problem — mine.
Making It Work on Kubernetes
This setup runs on any Kubernetes cluster. Here's the implementation shape, without the YAML walls.
Headscale deploys as a single-replica Deployment with a small PostgreSQL database (or SQLite if you want to keep it minimal). It needs an Ingress with a public domain for the coordination endpoint. One container, persistent storage, a ConfigMap for its settings. It'll run on the cheapest node you have and still be bored.
The subnet router is another Deployment running the official Tailscale container image. It authenticates to Headscale with a pre-auth key, advertises the cluster's service CIDR as a subnet route, and needs NET_ADMIN capability to manage network interfaces. Once it's registered and the route is approved, traffic flows.
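The interesting part of that Deployment is a handful of environment variables on the official image. A sketch (the pre-auth key, Headscale URL, and CIDR are placeholders for your own values):

```yaml
# Container spec fragment for the subnet router. TS_ROUTES advertises
# the cluster network; TS_EXTRA_ARGS points the client at the
# self-hosted Headscale instead of Tailscale's control plane.
containers:
  - name: subnet-router
    image: tailscale/tailscale:stable
    env:
      - name: TS_AUTHKEY
        value: "<headscale-pre-auth-key>"
      - name: TS_ROUTES
        value: "10.43.0.0/16"
      - name: TS_EXTRA_ARGS
        value: "--login-server=https://headscale.example.com"
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
```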
The reverse proxy — Traefik in my case — gets a DNS-01 certificate resolver configured with your DNS provider's API credentials. It issues wildcard certs for *.vpn.myproject.dev automatically. Each internal service gets an Ingress resource with the VPN hostname.
The IP allowlist is a single middleware definition: allow 100.64.0.0/10, deny everything else. Attach it to every VPN Ingress. Three lines of config, applied everywhere.
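Those three lines, as a Traefik Middleware resource (Traefik v3 syntax; v2 calls the same option `ipWhiteList` and uses the `traefik.containo.us` API group):

```yaml
# Reject any request not sourced from the Tailscale address range,
# before it ever reaches a backend service.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: vpn-only
spec:
  ipAllowList:
    sourceRange:
      - "100.64.0.0/10"
```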
The whole stack adds maybe 200MB of memory to your cluster. Headscale is tiny. The subnet router is tiny. The proxy you probably already have.
New internal service? Add a DNS record in Headscale's config, create an Ingress, redeploy. Five minutes.
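The Ingress half of that five minutes looks something like this, assuming Traefik and the `vpn-only` middleware in the `default` namespace (service name, port, and hostname are placeholders):

```yaml
# Illustrative Ingress for a new internal service. The annotations
# attach the VPN allowlist middleware and the DNS-01 cert resolver.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: monitoring
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-vpn-only@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls.certresolver: vpn
spec:
  rules:
    - host: monitoring.vpn.myproject.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 3000
```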
The Point
The question was never "which VPN protocol is best." WireGuard won that argument years ago. The question is: who manages the control plane, and can non-technical people actually use it?
Tailscale solved the hard part — making WireGuard usable by normal humans. Headscale lets you self-host that solution. Same clients, same experience, your infrastructure.
For my use case — internal dev and pre-production environments that need stable, permanent URLs accessible to a mixed team of developers, testers, and product owners, from laptops and phones, in under a minute — Headscale was the obvious pick. Not because it's the most powerful option. Because it's the one that disappears. You set it up once, your team installs an app, and then everyone just uses https://monitoring.vpn.myproject.dev like it's always been there.
The best infrastructure is the kind nobody notices.
That's the deal.