As the cluster grew, I needed secure access to internal services — PostgreSQL, NATS, Redis, ScyllaDB — without exposing them to the public internet. I went through three solutions: WireGuard, Tailscale, and eventually Netbird. Each one taught me something.
The Problem
Early on I was forwarding TCP ports directly through the ingress controller:
tcp:
  "4222": nats/nats-cluster:4222
  "5432": pgo/astring-ha:5432
  "6379": redis/redis:6379
  "9042": scylla/scylla-client:9042
Each service has authentication, so it's not completely open. But exposing databases directly to the public internet is bad practice regardless. I wanted internal services reachable only through a private network, with only HTTP endpoints publicly accessible.
WireGuard
WireGuard is a modern VPN — lightweight, fast, minimal codebase compared to OpenVPN. The core idea: create an encrypted tunnel between your machine and the cluster. Once connected, internal services look like they're on your local network.
Setup on Kubernetes
I deployed WireGuard using Helm:
helm repo add wireguard https://bryopsida.github.io/wireguard-chart
helm repo update
helm install wireguard wireguard/wireguard \
  --namespace wireguard \
  --create-namespace \
  -f values.yaml
values.yaml:
service:
  enabled: true
  type: ClusterIP
wireguard:
  clients:
    - AllowedIPs: 10.34.0.2/32
      PublicKey: <client_public_key>
Expose UDP port 51820 through the ingress:
udp:
  "51820": wireguard/wireguard-wireguard:51820
Client Setup
Generate keys on your local machine:
wg genkey | tee privatekey | wg pubkey > publickey
Get the server's public key from the Kubernetes secret (the stored value is base64-encoded, so decode it):
kubectl get secret wireguard-wireguard -n wireguard -o jsonpath='{.data.publicKey}' | base64 -d
Client config (wg0.conf):
[Interface]
PrivateKey = <your_private_key>
Address = 172.32.32.2/32
DNS = 10.43.0.10, 8.8.8.8
[Peer]
PublicKey = <server_public_key>
AllowedIPs = 10.0.0.0/16, 10.43.0.0/16, 172.32.32.0/24
Endpoint = <cluster_public_ip>:51820
PersistentKeepalive = 25
Bring the tunnel up: wg-quick up wg0
Done — internal services are now accessible through the tunnel, not exposed publicly.
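The AllowedIPs list in the client config is what decides which destinations get routed through the tunnel; everything else still uses your normal connection. A quick way to sanity-check a CIDR list before bringing the tunnel up is Python's standard ipaddress module (a sketch; the CIDRs mirror the wg0.conf above):

```python
import ipaddress

# Mirrors the AllowedIPs line in wg0.conf
ALLOWED = [ipaddress.ip_network(c) for c in
           ("10.0.0.0/16", "10.43.0.0/16", "172.32.32.0/24")]

def routed_via_tunnel(ip: str) -> bool:
    """Return True if traffic to this destination would enter the tunnel."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ALLOWED)

print(routed_via_tunnel("10.43.0.10"))   # cluster DNS, inside 10.43.0.0/16 -> True
print(routed_via_tunnel("1.1.1.1"))      # public internet -> False
```

If a service you expect to reach returns False here, the fix is to widen AllowedIPs, not to debug the tunnel itself.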
WireGuard worked. But generating keys manually, updating the Helm values for every new client, and maintaining configs by hand gets old quickly. Referencing pod IPs was also fragile, since they change whenever a pod restarts. Then I found Tailscale.
Tailscale
Tailscale is WireGuard underneath, but with everything you'd otherwise build yourself already done — key management, device registration, DNS, access policies, a UI. You don't touch keys manually. You don't write config files. You just install it and devices appear in your network.
Kubernetes Integration
Tailscale has a Kubernetes operator. Install it:
helm repo add tailscale https://pkgs.tailscale.com/helmcharts
helm repo update
helm upgrade \
  --install \
  tailscale-operator \
  tailscale/tailscale-operator \
  --namespace=tailscale \
  --create-namespace \
  --set-string oauth.clientId=<client_id> \
  --set-string oauth.clientSecret=<client_secret> \
  --wait
Then expose a service to your Tailscale network by adding one annotation:
metadata:
  annotations:
    tailscale.com/expose: "true"
That's it. The service gets a Tailscale IP and a DNS name automatically. No UDP port forwarding, no ingress config, no manual key exchange.
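For example, a minimal Service manifest exposing PostgreSQL to the tailnet could look like this (the name, namespace, and selector are illustrative, not from my actual cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: pgo
  annotations:
    tailscale.com/expose: "true"   # the operator picks this up and creates a proxy
spec:
  selector:
    app: postgres        # illustrative selector
  ports:
    - port: 5432
      targetPort: 5432
```

The operator watches for the annotation and spins up a proxy pod that joins the service to your tailnet.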
What Made It Better
MagicDNS — every device and service on your Tailscale network gets a DNS name. Instead of remembering 10.43.x.x, you connect to postgres.tailnet-name.ts.net. Works automatically.
Access controls from the UI — you define which devices can reach which services through a policy file in the Tailscale dashboard. No YAML in the cluster, no firewall rules to manage manually.
Sharing — you can share specific services with other Tailscale accounts from the UI. Useful for giving someone temporary access without adding them to your cluster.
Multi-platform — Tailscale client runs on Linux, macOS, iOS, Android. Once installed, all your devices are on the same private network automatically.
If you're looking for a simple private networking solution for a Kubernetes cluster — accessing databases locally, connecting team members, securing internal services — Tailscale is the easiest path. I'd recommend it for most use cases.
I used Tailscale as my primary private network for a while and fully replaced WireGuard with it. But then my requirements changed.
Netbird: Why I Moved
Two things pushed me toward Netbird:
Self-hosting potential. Tailscale is a managed service. Your network topology goes through their coordination server. For now that's fine, but I want the option to fully self-host the control plane in the future. Netbird is fully open source — the management server, signal server, everything. I'm currently using Netbird cloud, but the option to move is there.
Multi-DC connectivity. The bigger reason. I have two clusters — Mongolia (on-premise) and Germany (cloud). I need them to communicate directly: NATS supercluster, Cassandra/ScyllaDB cross-DC replication, other services that need pod-to-pod reachability.
The k8ssandra-operator is a good example of why this is hard. It needs pods in one DC to directly reach pods in the other DC by their pod IPs. You can't just expose a LoadBalancer service and call it done — the operator needs the actual pod network to be routable between clusters.
Netbird handles this with its routing architecture. You configure network routes that make each cluster's pod CIDR reachable through Netbird peers. A Netbird peer in each cluster acts as a router for that cluster's network. Traffic between clusters flows through Netbird's encrypted tunnels, but pod IPs on both sides remain directly addressable.
This also works through NAT — the Mongolia cluster is behind NAT, the Germany cluster is on a public IP. Netbird's hole-punching handles the NAT traversal automatically. No static IPs required on the Mongolia side.
I tested this setup — pod-to-pod connectivity between Mongolia and Germany works. The full multi-DC architecture (NATS supercluster, cross-DC ScyllaDB replication) is a separate post.
Netbird on Kubernetes
Install the Netbird client as a DaemonSet or deployment in each cluster, register with your Netbird account, configure routes for the pod and service CIDRs. Each cluster becomes a peer on the Netbird network with routing configured for its internal networks.
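A sketch of what that in-cluster client can look like as a DaemonSet (the image tag, namespace, and setup-key secret name are assumptions; check the Netbird docs for the currently recommended manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: netbird
  namespace: netbird
spec:
  selector:
    matchLabels:
      app: netbird
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird
          image: netbirdio/netbird:latest   # pin a real version in practice
          env:
            - name: NB_SETUP_KEY            # one-time key from the Netbird dashboard
              valueFrom:
                secretKeyRef:
                  name: netbird-setup-key
                  key: setup-key
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]            # needed to create the tunnel interface
```

Once the peer registers, the routes for the pod and service CIDRs are configured in the Netbird management UI, pointing at this peer as the routing node for the cluster.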
The management UI lets you define access policies — which peers can reach which networks, which ports are allowed. Similar to Tailscale's access controls but fully open source.
Summary
- WireGuard — solid foundation, full control, manual everything. Good if you want to understand what's happening underneath or have specific requirements that managed solutions don't cover.
- Tailscale — WireGuard with all the operational work done for you. Best choice for most use cases: local development access, team access, securing internal services. I'd recommend this as the default.
- Netbird — open source Tailscale alternative with better routing architecture for multi-cluster setups. Right choice when you need pod-to-pod reachability across clusters or want self-hosting as an option.
Current state: Netbird for everything. WireGuard and Tailscale are gone.