TL;DR: The thing that broke my trust wasn't a breach. It was a court document.
What's in this article
- Why I Stopped Trusting Commercial VPNs and Started Hosting My Own
- How I'm Evaluating These (My Actual Criteria)
- WireGuard (Raw): The One Everything Else Is Built On
- Headscale: WireGuard with a Control Plane You Own
- Netbird: The One I Tried for a Team Setup
- OpenVPN: The Veteran I Keep Around for One Specific Job
- Algo VPN: The 'I Need This in 20 Minutes' Option
- Side-by-Side Comparison: The Honest Version
- My Current Stack and What I'd Pick If I Started Today
Why I Stopped Trusting Commercial VPNs and Started Hosting My Own
The thing that broke my trust wasn't a breach. It was a court document. I was running a threat model review for a side project when I stumbled onto a forum post linking to a published legal filing — my VPN provider, who had plastered "zero logs, ever" across their homepage, had responded to a law enforcement data request with connection timestamps and originating IPs. Not anonymized aggregates. Actual per-session connection records. The provider's defense was that they "don't log browsing activity" — which is technically true and completely dishonest at the same time. That distinction between what a VPN claims not to log and what their infrastructure physically cannot log is the entire ballgame.
Commercial VPNs have a fundamental business model problem: they need you to trust them, but they have no cryptographic or architectural reason for you to do so. A self-hosted WireGuard server on a $5/month VPS you control doesn't ask for your trust — you can inspect the config, audit the firewall rules, and you know exactly one entity has access to that machine: you. The trade-off is real though. You're giving up the multi-hop routing, the obfuscated server pools across 90 countries, and the "one click and you're private" UX. What you get back is a tunnel where the logs situation is something you configured, not something you're hoping a vendor honored.
The cost math is pretty straightforward. A Hetzner CX22 (2 vCPU, 4GB RAM, 20TB traffic) runs about €4.35/month as of 2025 pricing. A DigitalOcean Basic Droplet is $6/month for a smaller box (1 vCPU, 1GB RAM). Neither requires a contract. Your VPN overhead on top of a box you might already use for other things is essentially zero — WireGuard runs as a kernel module and adds maybe 1-2MB of memory overhead. Compare that to $99-$150/year for a commercial VPN where you're funding their marketing budget, their no-logs PR campaign, and their affiliate program. The real cost of self-hosting is time: expect 2-4 hours of setup the first time, and about 30 minutes/month in maintenance if something needs updating.
I'm assuming throughout this article that you have a VPS somewhere (or a spare machine at home with a stable IP), you're comfortable with SSH, and you've touched a firewall rule before. You don't need to be a networking expert — but you do need to know that ufw allow 51820/udp is opening a port, not casting a spell. If you're running Ubuntu 24.04 LTS or Debian 12, every command I show will work without modification. If you're on something else, adapt accordingly.
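On that note, the minimal firewall prep on a fresh box looks like this; a sketch assuming ufw (stock on Ubuntu) and WireGuard's default port:

```shell
# Minimal firewall prep. Order matters: allow SSH before enabling ufw,
# or the enable step will cut off your own session.
sudo ufw allow OpenSSH        # keep your SSH session alive
sudo ufw allow 51820/udp      # WireGuard's default listen port
sudo ufw enable
sudo ufw status verbose       # confirm exactly these rules are active
```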
One thing I want to be upfront about: this setup is about owning your network tunnel and keeping your traffic private from your ISP, the coffee shop WiFi operator, and passive surveillance. It's not a solution for high-volume torrent traffic (your VPS provider has ToS, and 20TB/month sounds like a lot until it isn't), and it's not anonymity software — your VPS provider still knows the server is yours. For actual anonymity, you want Tor. For actual privacy where you control the infrastructure and the logs, self-hosted VPN is the answer.
How I'm Evaluating These (My Actual Criteria)
The thing that surprised me most when I started benchmarking these was how much variance there is between "works on paper" and "works when your coworker with an iPhone 14 on a hotel WiFi needs to connect at 11pm." So that's the bar I'm holding everything to — not lab conditions, not vendor claims.
Setup Time on a Fresh Ubuntu 24.04 LTS Box
I'm being specific: Ubuntu 24.04 LTS, a $6 DigitalOcean Droplet (1 vCPU, 1GB RAM), and a stopwatch. I count from first SSH connection to "first client connected and traffic flowing." If a solution requires me to read three separate docs pages, chase down a dependency conflict, or run a command that silently fails and requires forum archaeology — that time counts. Some of these took under 10 minutes. One took the better part of an afternoon, mostly fighting with a CA setup that the documentation assumed you already understood.
Key Rotation and Certificate Management
This is where things get ugly fast, and it's almost never covered honestly in tutorials. PKI-based solutions (anything using TLS certificates) will eventually expire on you, usually at the worst possible time. My test: rotate the server certificate and re-enroll a client without revoking everyone else's access. For solutions using pre-shared keys, I'm asking: can you rotate one peer's key without taking down the whole tunnel? WireGuard's model here is genuinely different — each peer has its own keypair and you update them independently. OpenVPN with a manual PKI setup, on the other hand, can turn into a shell-script archaeology project if you didn't document your CA setup six months ago.
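To make that rotation test concrete, here's how replacing a single peer's key looks on raw WireGuard, sketched assuming the wg and wg-quick tooling and an interface named wg0; the angle-bracket keys are placeholders in the same style as the configs later in this article:

```shell
# Swap one peer's key on a live interface; every other tunnel stays up.
sudo wg set wg0 peer <old_public_key> remove
sudo wg set wg0 peer <new_public_key> allowed-ips 10.0.0.2/32
sudo wg-quick save wg0    # write the running state back to /etc/wireguard/wg0.conf
```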
Client Support Without Workarounds
My hard rule: iOS via the App Store only, Android via Play Store only. No TestFlight hacks, no APKs from a GitHub releases page. This eliminates some otherwise interesting projects immediately. The clients I care about: the official WireGuard app (iOS/Android), Tailscale (which is WireGuard under the hood), the OpenVPN Connect app, and the Cloudflare WARP client for anything MASQUE/WireGuard-adjacent. If a solution requires a third-party client on mobile that hasn't been updated in 18 months, that's a no from me regardless of how good the server side is.
Audit History
I'm looking for audits published after 2022 from firms I recognize — Cure53, Trail of Bits, Quarkslab, or similar. An audit from 2019 doesn't tell me much about the codebase today if there have been hundreds of commits since. WireGuard has a formal security audit from 2020 (Least Authority) and the protocol is relatively frozen, which matters. Tailscale published a Cure53 audit in 2022. Some others in this list have no published audit at all — I'll say that plainly rather than bury it. "No published audit" isn't automatically disqualifying for a home lab setup, but if you're routing business traffic through it, that's your call to make with full information.
Throughput From My Own iperf3 Tests
I ran these on the same Droplet over a 1Gbps uplink, client on a 500Mbps residential connection, using iperf3 in both directions:
# Server side
iperf3 -s -p 5201
# Client side — 30 second test, 4 parallel streams
iperf3 -c YOUR_SERVER_IP -p 5201 -t 30 -P 4 -R
The -R flag tests download (server-to-client), which is the direction that actually matters for most users. I also ran without -R to get upload. CPU usage on the server during the test matters too — a solution that maxes out a single core at 200 Mbps is going to be a problem on cheaper hardware. I'm not publishing these as "the truth" — your hardware, kernel version, and MTU settings will change things. But the relative differences between solutions on identical hardware tell you something real.
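For repeatable runs I capture iperf3's machine-readable output and pull the summary figure out with jq (jq is my choice here, not anything iperf3 requires; any JSON tool works):

```shell
# --json makes comparisons across solutions scriptable instead of eyeballed.
iperf3 -c YOUR_SERVER_IP -p 5201 -t 30 -P 4 -R --json > result.json
# Summary download rate in Mbps:
jq '.end.sum_received.bits_per_second / 1e6' result.json
```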
WireGuard (Raw): The One Everything Else Is Built On
Every polished VPN tool in this space — wg-easy, Tailscale, the Headscale and Netbird setups I'll cover later — is WireGuard underneath with a management layer on top. I call this "raw" WireGuard because you are staring directly at the kernel module with nothing between you and it. You write INI files. You exchange public keys manually. You manage routing rules yourself. There's no dashboard, no API, no auto-provisioning. That's not a complaint — for a single-person setup where you actually want to understand what's happening, this is the right level of abstraction.
Installing on Ubuntu 24.04 is genuinely simple. After apt install wireguard you get the wg and wg-quick utilities, plus the kernel module if your kernel supports it (Ubuntu 24.04's 6.8 kernel has it built in). What you don't get is any configuration — no service file pre-populated, no wizard, no defaults. You start from zero.
The minimal config examples floating around the internet work fine for one peer and then silently break when you add a second one. Here's a /etc/wireguard/wg0.conf that actually holds up:
# /etc/wireguard/wg0.conf — server side
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server_private_key>
# Required for routing client traffic out to the internet
# Run: sysctl -w net.ipv4.ip_forward=1
# And add net.ipv4.ip_forward=1 to /etc/sysctl.conf for persistence
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
[Peer]
# Phone
PublicKey = <peer1_public_key>
AllowedIPs = 10.0.0.2/32
[Peer]
# Laptop
PublicKey = <peer2_public_key>
AllowedIPs = 10.0.0.3/32
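The <server_private_key> and <peerN_public_key> placeholders come from wg's own key generator; a quick sketch, assuming wireguard-tools is installed:

```shell
# Generate a keypair (run once per machine; the private key never leaves it).
umask 077                                   # keys must not be world-readable
wg genkey | tee server.key | wg pubkey > server.pub
cat server.pub    # the only thing you ever paste into another machine's config
```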
The thing that bit me hardest the first time: the VPS was up, WireGuard was listening on UDP 51820 (you can verify with ss -ulnp | grep 51820), the client connected without error — and nothing worked. No traffic through the tunnel. The port wasn't blocked. The problem was IP forwarding, which is off by default on virtually every fresh Linux install. You can flip it live with sysctl -w net.ipv4.ip_forward=1, but that disappears on reboot. The permanent fix is adding net.ipv4.ip_forward=1 to /etc/sysctl.conf and running sysctl -p to reload. Half the tutorials skip this entirely and just show you the WireGuard config, which is why people spend hours debugging a "working" tunnel that doesn't route.
Also check your outbound interface name. The PostUp rule above uses eth0, but modern Ubuntu on cloud providers often names the interface ens3 or enp1s0. Run ip route | grep default to find yours before you copy-paste anything. Getting NAT wrong means your peers can reach the server but can't hit the internet through the tunnel — another silent failure mode.
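A small sketch for finding that interface name programmatically; the awk field walk is mine, not anything WireGuard ships:

```shell
# Print the interface that holds the default route, e.g. "eth0" or "ens3".
WAN_IF=$(ip route show default | awk '{for (i = 1; i <= NF; i++) if ($i == "dev") { print $(i+1); exit }}')
echo "Outbound interface: ${WAN_IF}"
# Substitute ${WAN_IF} for eth0 in the PostUp/PostDown rules above.
```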
Raw WireGuard is the right call in exactly two situations: you're running this for yourself only, or you're building something custom and need to understand the primitives before you add tooling on top. It's genuinely fast — sub-millisecond overhead on LAN, and on a $6/month VPS you'll see 400-600 Mbps throughput without breaking a sweat. The crypto (ChaCha20-Poly1305, Curve25519) is modern and audited. The honest downside is that peer management doesn't scale past about 5-6 peers before you're maintaining a mental spreadsheet of IPs and keys. There's no built-in key rotation, no revocation mechanism, no logging beyond what you set up yourself. The moment you're managing access for more than one or two devices, you want one of the wrappers. But if you've never run WireGuard raw at least once, you don't really understand what the wrappers are doing — and that knowledge pays off when something breaks at 2am.
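When a tunnel does misbehave, the first diagnostic on raw WireGuard is wg show; two invocations I lean on, assuming the interface is named wg0 as above:

```shell
sudo wg show wg0                      # peers, endpoints, last handshake, transfer
sudo wg show wg0 latest-handshakes    # script-friendly: public key and epoch seconds
# Under active traffic a handshake refreshes at least every couple of minutes,
# so a stale timestamp here usually means that peer's tunnel is down.
```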
Headscale: WireGuard with a Control Plane You Own
The thing that sold me on Headscale wasn't the feature list — it was the first time I added a second laptop to my lab at midnight without touching a single config file on my server. Raw WireGuard is great until your third or fourth device, at which point you're maintaining a mesh of public keys and AllowedIPs by hand, and one typo breaks everything silently. Headscale drops a proper control plane on top of WireGuard's transport layer, and suddenly device management feels like what it should have been from day one.
Headscale reimplements the Tailscale coordination server — the piece that handles key distribution, NAT traversal coordination, and ACL enforcement. Your devices still run the official Tailscale client (yes, the same binary from tailscale.com), but instead of phoning home to Tailscale's infrastructure, they talk to your server. You get the full client ecosystem — iOS, Android, Windows, Linux, macOS — without trusting a third party with your network topology. The project is at v0.23.x as of early 2026 and the API has been stable enough for production use for over a year.
Here's a Docker Compose setup that actually runs cleanly on a $6 VPS:
version: "3.8"
services:
  headscale:
    image: headscale/headscale:0.23.0
    restart: unless-stopped
    ports:
      - "8080:8080"
      - "9090:9090" # metrics endpoint
    volumes:
      - ./config:/etc/headscale
      - ./data:/var/lib/headscale
    command: headscale serve
  headscale-ui:
    image: ghcr.io/gurucomputing/headscale-ui:latest
    restart: unless-stopped
    ports:
      - "8443:443"
    environment:
      - HEADSCALE_URL=https://your-domain.com:8080
And the config.yaml block that trips people up most — the DERP settings. Headscale ships with Tailscale's public DERP relays as fallback, which works but leaks metadata to Tailscale's infra. If you want full independence, point it at your own derper instance:
server_url: https://your-domain.com:8080
listen_addr: 0.0.0.0:8080
derp:
  server:
    enabled: false # set true only if you're running embedded DERP
  urls: [] # remove Tailscale's hosted DERP list
  paths:
    - /etc/headscale/derp.yaml # your custom DERP map pointing at your own derper
dns:
  magic_dns: false # start with this OFF — see explanation below
  base_domain: home.internal
log:
  level: info
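With the compose file and config in place, first-run bootstrap is a few commands through docker compose exec; the user name myuser is a stand-in:

```shell
docker compose up -d
# Create the user (namespace) that devices will register under.
docker compose exec headscale headscale users create myuser
# Mint a short-lived preauth key for enrolling the first device.
docker compose exec headscale headscale preauthkeys create --user myuser --expiration 1h
```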
The rough edge nobody puts in their blog post: the iOS Tailscale client has a background networking quirk with Headscale where the VPN tunnel drops and silently fails to re-authenticate if magic_dns is enabled from the start. The DNS resolution through the WireGuard interface races with iOS's network extension lifecycle on wake-from-sleep. I spent two evenings debugging this before finding a three-year-old GitHub issue. The fix is exactly what the config above shows — disable magic DNS initially, get all devices connected and stable first, then re-enable it once you've confirmed the tunnel stays up through sleep cycles. Android doesn't have this problem.
The commands you'll use constantly once everything is running:
# See every device, its IP, last seen, and whether it's online
headscale nodes list
# Create a one-time auth key that expires in 24 hours (use this for new device onboarding)
headscale preauthkeys create --expiration 24h --user myuser
# After a node advertises a subnet route (e.g., 192.168.1.0/24), approve it
headscale routes enable --route <ROUTE_ID>
# On the client side — register against your own server, not tailscale.com
tailscale up --login-server=https://your-domain.com:8080
My honest take after running Headscale for about fourteen months across a team of eight: it's the right choice if you have under 20 people and want actual control-plane features without managing WireGuard peer configs manually. The ACL system works, subnet routing works, exit nodes work. What you give up compared to commercial Tailscale is support, SSO integrations out of the box, and the DERP relay network (though you can self-host derper on the same VPS for about 5ms added latency in most regions). For the privacy-focused small team use case, there's no better balance right now.
Netbird: The One I Tried for a Team Setup
The thing that caught me off guard with Netbird is how opinionated it is about auth from day one. Most WireGuard orchestration tools treat SSO as an afterthought — you get it eventually, through a reverse proxy hack or a third-party plugin. Netbird bakes OIDC directly into the control plane. You point it at Keycloak, Authentik, or any compliant IdP and your teammates log in with the same credentials they use for everything else. No separate user management, no manually copying public keys. That alone was the deciding factor for me when I needed to onboard a six-person team without writing a wiki article about WireGuard key generation.
The self-hosted stack has three moving parts and all three matter. netbird-management is the control plane — it handles device registration, network topology, and policy. netbird-signal is the signaling server that brokers the initial peer-to-peer handshake — the rendezvous point where peers exchange connection candidates. netbird-relay is what most guides under-explain: it carries traffic when a direct P2P connection fails — symmetric NAT, corporate firewalls, that situation where one peer is on a 4G hotspot. Skip the relay and about 20-30% of your connections will silently hang or fall back to nothing, with no obvious error message in the client logs. Your team will just report "VPN feels slow" or "it drops sometimes" and you'll spend hours debugging what is actually a missing relay.
Here's a trimmed but functional Docker Compose setup. The variables marked with comments are the ones that will actually break things if you skip them:
version: "3.8"
services:
  netbird-management:
    image: netbirdio/management:latest
    restart: unless-stopped
    ports:
      - "443:443"
      - "33073:33073"
    volumes:
      - ./management.json:/etc/netbird/management.json
      - ./data:/var/lib/netbird
    environment:
      - NETBIRD_MGMT_IDP_MANAGER_CLIENT_ID=your-client-id # required — OIDC client
      - NETBIRD_MGMT_IDP_MANAGER_CLIENT_SECRET=your-secret # required
      - NETBIRD_MGMT_IDP_MANAGER_EXTRA_AUDIENCES=api # required if using Keycloak
      - NETBIRD_DOMAIN=vpn.yourdomain.com # required — used in peer config
      - NETBIRD_MGMT_IDP_MANAGER_TOKEN_ENDPOINT=https://your-idp/token
      - NETBIRD_STORE_ENGINE=sqlite # optional, postgres also works
      - NETBIRD_MGMT_TURN_CREDENTIALS_EXPIRATION=86400 # optional, default is fine
  netbird-signal:
    image: netbirdio/signal:latest
    restart: unless-stopped
    ports:
      - "10000:10000"
  netbird-relay:
    image: netbirdio/relay:latest
    restart: unless-stopped
    ports:
      - "33080:33080"
    environment:
      - NB_LOG_LEVEL=info
      - NB_LISTEN_ADDRESS=:33080
      - NB_EXPOSED_ADDRESS=relay.yourdomain.com:33080 # required — clients need this resolvable
The web UI is legitimately one of the better ones I've used in this category. Teammates opened it, saw a button that said "Add this device", clicked it, ran one shell command, and were on the network. No questions in Slack, no "what's a public key." The UI shows you real-time connection status per peer, lets you create access control groups with a few clicks, and doesn't feel like it was designed for sysadmins only. That matters more than most self-hosting guides admit — the best VPN setup is the one your non-technical colleagues will actually use without filing a support ticket every week.
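For the record, the one shell command behind that button points the agent at your own control plane instead of Netbird's cloud. A sketch with placeholder hostnames; the flags reflect my reading of the netbird CLI, so verify against your installed version:

```shell
# Enroll this device against the self-hosted management server.
netbird up --management-url https://vpn.yourdomain.com:443
netbird status    # management connection state and reachable peers
```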
The thing that made me pause during testing: the self-hosted path is noticeably less polished than the Netbird Cloud path. Network policies, in particular — things like restricting which peers can talk to which subnets — were either missing from the self-hosted version's UI or required manual JSON edits while the SaaS dashboard had them as first-class controls. The docs often describe the cloud flow and then have a small note saying "for self-hosted, see the API." The GitHub issues and Discord are active, but you'll hit moments where the answer is "that feature is SaaS-only for now" or "the self-hosted docs are behind, here's the PR." If you need the full feature set on day one, the self-hosted version may frustrate you.
Pick Netbird over Headscale specifically when your org already has an OIDC provider and you don't want to wire up SSO yourself. Headscale is excellent, but adding OIDC to it involves configuring a separate auth proxy, managing token refresh manually, and debugging things that aren't really Headscale's problem. Netbird handles the entire auth handshake natively — device enrollment, re-auth expiry, group membership pulled from IdP claims. If your situation is "ten engineers, existing Okta or Authentik setup, need this working this week," Netbird is the faster path. If you're running a personal homelab with one or two users, the OIDC machinery is overhead you don't need and Headscale will be simpler.
OpenVPN: The Veteran I Keep Around for One Specific Job
I'll be honest: OpenVPN is not what I reach for first anymore. The setup is tedious, the cert management is operational overhead that compounds over time, and the performance gap with WireGuard is real and measurable. But I still run one OpenVPN instance, and I probably will for another couple of years. The reason is boring: I have a Ubiquiti EdgeRouter X running firmware that Ubiquiti stopped updating, plus two embedded Linux boxes in a remote location that I'm not driving four hours to replace. None of them speak WireGuard. OpenVPN is the compat layer I can't get rid of yet.
The Certificate Authority: Powerful, But You Will Curse It at 2am
The CA model is genuinely strong for one thing: granular revocation. If a client cert is compromised, you revoke exactly that cert and nothing else changes. No pre-shared key rotation, no touching every other client. That's the upside. The downside is that you now own a PKI. You need to track expiry dates, store your CA key somewhere safe and offline, and remember the exact EasyRSA incantation every time you need to do anything. I've scripted this so I stop googling it:
# One-time CA setup — do this on an air-gapped machine if you care
git clone https://github.com/OpenVPN/easy-rsa.git
cd easy-rsa/easyrsa3
./easyrsa init-pki
./easyrsa build-ca nopass # omit nopass in prod; requires passphrase on sign ops
# Generate the server cert
./easyrsa gen-req server nopass
./easyrsa sign-req server server
# Generate a client cert (repeat per device)
./easyrsa gen-req client-legacy-router nopass
./easyrsa sign-req client client-legacy-router
# Revoking a compromised client — THIS is the step people forget
./easyrsa revoke client-legacy-router
./easyrsa gen-crl
# Then copy the updated CRL to your server and make sure openvpn.conf points to it:
# crl-verify /etc/openvpn/crl.pem
The part that bites people is that last step. You revoke the cert locally, but if your server config doesn't have crl-verify pointing to an up-to-date CRL file, the revoked cert still works. I've seen "revoked" clients connect for weeks because nobody copied the new CRL over. Set a cron job to check CRL expiry — EasyRSA generates them with a 180-day default validity, and an expired CRL will cause your server to reject all clients, not just the revoked one.
tls-crypt vs tls-auth: Pick the Right One
Most tutorials still show tls-auth and call it a day. The difference matters: tls-auth adds an HMAC signature to the TLS handshake, which lets the server reject unsigned packets early. Good. tls-crypt (available since OpenVPN 2.4) does that and encrypts the TLS control channel, meaning an attacker can't even confirm they're talking to an OpenVPN server. Your server becomes invisible to fingerprinting tools. If you're on OpenVPN 2.4 or later — and you should be, 2.4 dropped in 2017 — use tls-crypt. Generate the key once:
# OpenVPN 2.5+ syntax; on 2.4, use: openvpn --genkey --secret /etc/openvpn/ta.key
openvpn --genkey secret /etc/openvpn/ta.key
# In your server.conf:
tls-crypt /etc/openvpn/ta.key
# NOT tls-auth — that line should be absent or commented out
# Same key goes in every client config:
# tls-crypt ta.key
The operational implication is that this key is symmetric and shared across all clients. If it leaks, you rotate it everywhere at once. That's less flexible than per-client certs, but the protection it buys — blocking unauthenticated packets before any TLS processing happens — is worth the tradeoff. It also significantly reduces your attack surface for DoS against the TLS stack.
The Throughput Reality
I ran iperf3 between two 1-core/1GB Hetzner CX11 instances (€4.51/month each as of mid-2025) to get a baseline comparison. Same kernel (Debian 12, kernel 6.1), same path, same test duration. OpenVPN in UDP mode with AES-256-GCM gets me around 180–210 Mbps. Switch it to TCP mode — which some networks force you into to get through restrictive firewalls — and that drops to 110–140 Mbps because of TCP-over-TCP meltdown behavior. WireGuard on the same boxes consistently hits 850–950 Mbps. That's not a rounding error, that's a fundamentally different architecture:
# Run this yourself — adjust -c target to your second node
# On the receiver:
iperf3 -s
# On the sender, through the VPN tunnel:
iperf3 -c 10.8.0.1 -t 30 -P 4 # 4 parallel streams, 30 seconds
# OpenVPN UDP result (typical):
# [SUM] 0.00-30.00 sec 743 MBytes 207 Mbits/sec
# WireGuard result (same hardware, same path):
# [SUM] 0.00-30.00 sec 3.18 GBytes 912 Mbits/sec
The gap comes from OpenVPN running in userspace — every packet crosses the kernel/userspace boundary twice. WireGuard is implemented directly in the Linux kernel (since 5.6), so that overhead disappears. For my legacy devices that max out at 50 Mbps WAN anyway, OpenVPN's ceiling doesn't matter. The moment you're routing anything real — a team of users, high-throughput backups, media — that gap is a problem you're constantly managing around.
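If you want to watch that userspace cost directly, run a CPU monitor next to the test; mpstat here is my choice (from the sysstat package), not anything iperf3 requires:

```shell
# In a second terminal during the iperf3 run: per-core utilization every 5s.
mpstat -P ALL 5
# OpenVPN's ceiling typically shows up as one core pinned near 100% in %usr;
# WireGuard's work appears as kernel time (%sys / %soft) spread across cores.
```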
Verdict
Use OpenVPN if and only if you have a device or network constraint that makes WireGuard impossible. Firmware-locked routers, very old Android clients (pre-5.0 era, though that's increasingly rare), or networks that only pass TCP 443 where you need traffic to blend in with HTTPS. In those cases, OpenVPN is the right call and the cert-based auth is a genuine operational advantage. Otherwise you're carrying a heavier tool than the job needs, and you'll feel that weight every time a cert expires at 3am on a holiday weekend.
Algo VPN: The 'I Need This in 20 Minutes' Option
The thing that surprises most people about Algo is that it's not a VPN server — it's a deployment script. You run it once, it configures a hardened WireGuard (or IKEv2) server on a fresh VPS using Ansible under the hood, and hands you back client config files. No dashboard to log into, no service to maintain. That philosophy is what makes it genuinely fast to use and genuinely limited for anything beyond a short-term need.
The install flow is almost absurdly straightforward compared to every other option on this list. On a Mac or Linux machine with Python 3.10+:
# Clone and set up
git clone https://github.com/trailofbits/algo.git
cd algo
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
# Launch the interactive wizard
./algo
The wizard asks you which cloud provider to deploy to (DigitalOcean, AWS, Azure, Hetzner, or a generic SSH target), which region, and which users to create client configs for. You enter alice, bob, work-laptop — whatever. It provisions the droplet, hardens SSH, configures WireGuard on UDP 51820, and drops .conf files and QR codes into a configs/ directory. Total elapsed time from zero to working VPN: usually under 15 minutes if your cloud credentials are already in env vars. I've done it from a hotel lobby before a conference started.
The default iptables setup is one of the underrated parts. Algo configures a strict default-deny outbound policy for traffic not tunneled through the VPN, blocks all inbound ports except the WireGuard UDP port and SSH (rate-limited), and enables ufw with sane rules before you even touch anything. Most people rolling their own WireGuard skip the kill-switch-equivalent rules and end up with a VPN that leaks when the tunnel drops. Algo just handles it. The rules live in /etc/iptables/rules.v4 if you want to inspect them — and you should, once, to understand what you're actually running.
The limitation that will eventually frustrate you: Algo is opinionated to the point of being fragile to modify. The Ansible playbooks expect a clean server in a known state. If you try to add a second user a week later by re-running the playbook, or change the DNS resolver from the default (it uses a locally-deployed dnscrypt-proxy pointing at Cloudflare's 1.1.1.1 with DoH), you're fighting the tool rather than using it. The intended workflow is literally: deploy when you need it, use it, destroy the droplet when you're done. On DigitalOcean, a 1GB droplet in Frankfurt costs $6/month or about $0.009/hour — for a two-week trip, you're spending maybe $3 total.
Algo is the right call in specific situations and the wrong call in others. Use it when:
- You need a clean, audited setup for travel or a short engagement — Trail of Bits (a serious security firm) maintains it, so the defaults have been reviewed by people who do this professionally
- You want WireGuard client configs for a team of 5-10 people without standing up a control plane
- You're a developer who is comfortable with the terminal but doesn't want to become a VPN administrator
Don't reach for Algo if you need to add/remove users regularly, want a web UI for non-technical teammates, or are building something that needs to be online permanently. For permanent self-hosted infrastructure, Headscale or a manually configured WireGuard setup with wg-quick will serve you better. Algo is disposable infrastructure done right — and knowing when to use disposable infrastructure is itself a skill.
Side-by-Side Comparison: The Honest Version
Most comparison tables show you checkmarks and stars. Mine shows you what breaks at 3am. That column is the one that actually matters when you're on-call and your tunnel is down and half your team can't reach the staging environment. I've added it below, and I've tried to be blunt about each entry rather than diplomatic.
┌─────────────┬────────┬──────────────────┬──────────────┬─────────┬───────────────┬────────────┬──────────────────────────────────────┐
│ Solution │ Setup │ Peer Management │ Client OS │Audited? │ Control Plane │ Rec. Scale │ What breaks first at 3am │
│ │ (1-5) │ │ Support │ │ Included │ │ │
├─────────────┼────────┼──────────────────┼──────────────┼─────────┼───────────────┼────────────┼──────────────────────────────────────┤
│ WireGuard │ 2 │ Manual / scripts │ All major │ Yes* │ No │ 1-20 peers │ Key distribution — no built-in PKI │
│ (raw) │ │ │ + iOS/Android│ │ │ │ means you're editing wg0.conf by hand│
├─────────────┼────────┼──────────────────┼──────────────┼─────────┼───────────────┼────────────┼──────────────────────────────────────┤
│ Headscale │ 3 │ CLI + REST API │ All Tailscale│ Partial │ Yes (DERP) │ 5-100 peers│ DERP relay config; losing the DB │
│ │ │ │ clients │ │ │ │ means re-enrolling every node │
├─────────────┼────────┼──────────────────┼──────────────┼─────────┼───────────────┼────────────┼──────────────────────────────────────┤
│ OpenVPN │ 4 │ Easy-RSA / certs │ All major │ Partial │ No │ 10-500 │ Certificate expiry — it will expire │
│ │ │ │ + legacy │ │ │ │ at the worst possible time │
├─────────────┼────────┼──────────────────┼──────────────┼─────────┼───────────────┼────────────┼──────────────────────────────────────┤
│ Netbird │ 2 │ Web UI + API │ All major │ No │ Yes (hosted │ 5-200 peers│ Control plane dependency — if their │
│ │ │ │ + iOS/Android│ │ or self-host) │ │ STUN/TURN goes down, peers go dark │
├─────────────┼────────┼──────────────────┼──────────────┼─────────┼───────────────┼────────────┼──────────────────────────────────────┤
│ Algo VPN │ 1 │ Ansible playbook │ All major │ No │ No │ 1-15 users │ It's Ansible — any drift from the │
│ │ │ │ + iOS/Android│ │ │ │ expected state breaks re-provisioning│
└─────────────┴────────┴──────────────────┴──────────────┴─────────┴───────────────┴────────────┴──────────────────────────────────────┘
The audit column needs a footnote before you use it to make a security decision. The WireGuard kernel module got a professional cryptographic audit in 2020 by Least Authority — that audit covers the protocol and the kernel implementation specifically. OpenVPN's last full audit was 2017, though there have been partial code reviews since. Neither Netbird nor Algo has a published independent audit as of writing. Headscale inherits WireGuard's crypto but the control plane itself hasn't been formally audited. If you're choosing for a team with compliance requirements, verify current status yourself — I'm not going to pretend this table ages well.
The infrastructure cost angle trips people up because they expect the "enterprise-grade" options to cost more. They don't. Every single solution in this list runs fine on a €4.51/month Hetzner CX11 or a $4/month Vultr instance for personal or small-team use. The differences are entirely in your time budget:
- Algo: ~20 minutes to deploy, near-zero ongoing time unless you're adding users constantly
- Raw WireGuard: ~30 minutes to deploy, but every new peer costs you manual config edits and key exchanges
- Headscale: 1-2 hours initial setup including DERP, but peer management becomes nearly zero-touch after that
- OpenVPN: 45 minutes to deploy with easy-rsa, but budget time for cert rotation every 1-2 years or you will get paged
- Netbird: 30 minutes if you use their cloud control plane, 2+ hours if you self-host the management server too
The "peer management" column is what separates these tools more than anything else. Raw WireGuard forces you to touch config files on both sides of every new connection — there's no server-side enrollment flow. That's fine for two machines and painful for twenty. Headscale and Netbird both give you a proper enrollment model where a new node gets a key, calls home to the control plane, and joins the mesh. OpenVPN's cert-based model is mature and auditable but the tooling (easy-rsa especially) feels like it was designed by someone who hates you specifically.
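To make the raw-WireGuard pain concrete: adding one new laptop means touching the server config and the config of every peer that should reach it. A sketch, with placeholder keys and the 10.8.0.0/24 addressing used elsewhere in this article:

```ini
# server /etc/wireguard/wg0.conf — append one [Peer] block per new device
[Peer]
# new laptop (placeholder key)
PublicKey = <LAPTOP_PUBKEY>
AllowedIPs = 10.8.0.4/32

# laptop-side wg0.conf — and you repeat this for every existing
# peer the laptop needs to reach directly
[Peer]
PublicKey = <SERVER_PUBKEY>
Endpoint = your.server.ip:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25
```

Then a reload on each edited machine (`sudo wg syncconf wg0 <(wg-quick strip wg0)` applies changes without dropping live tunnels). Two machines, fine; twenty, punishment.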
One thing I wish I'd paid attention to earlier: the "control plane included" column has compounding implications. Tools without a control plane (raw WireGuard, OpenVPN, Algo) mean your VPS going down takes out only the traffic routing — you can still manually reconnect peers using configs you already distributed. Tools with a control plane (Headscale, Netbird) mean your VPS going down can prevent new peer negotiations entirely, even if existing tunnels stay up via WireGuard's built-in keepalives. That's a meaningful difference in your blast radius if the box reboots unexpectedly.
My Current Stack and What I'd Pick If I Started Today
The thing that surprised me most after running multiple VPN stacks simultaneously: the overhead isn't the software, it's the cognitive load of deciding which tool does what. I spent more time second-guessing my setup than actually maintaining it. So here's exactly what I run, why, and what I'd do differently with fresh eyes.
Personal devices (5 nodes) — Headscale on a $6 Hetzner VPS (CX11). I set this up in March, SSH'd in twice since to run docker pull headscale/headscale:latest and call it done. The control plane sits at roughly 15MB RAM idle. Headscale mirrors the Tailscale coordination API almost perfectly, so every Tailscale client just works — iOS, Android, Linux, macOS. The config file is 80 lines of YAML and I haven't touched it:
# headscale config.yaml — the parts that actually matter
server_url: https://hs.yourdomain.com:8080
listen_addr: 0.0.0.0:8080
ip_prefixes:
  - 100.64.0.0/10
dns_config:
  magic_dns: true
  base_domain: yourmesh.internal
  nameservers:
    - 1.1.1.1
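With that config live, enrolling a new device is a two-command flow. Roughly like this — user and key names are placeholders, and the Headscale CLI has shifted between releases (older versions said `--namespace` where newer ones say `--user`), so check `headscale --help` on your version:

```shell
# On the server: create a user, then mint a short-lived pre-auth key
headscale users create alice
headscale preauthkeys create --user alice --expiration 1h

# On the device: point the stock Tailscale client at your server
tailscale up --login-server https://hs.yourdomain.com:8080 \
  --authkey <PREAUTH_KEY_FROM_ABOVE>
```

That enrollment model is the whole reason to run Headscale over raw WireGuard: the new node calls home, gets its mesh IP, and no other peer's config changes.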
Home lab with contractor access — Netbird self-hosted. I switched to this specifically because the OIDC integration means contractors authenticate through Google Workspace and I never issue or rotate a single SSH key. When someone's contract ends, I remove them from the Google group and they're off the network within seconds. Raw WireGuard would've meant a spreadsheet of public keys and at least one awkward "hey, did you revoke that person?" Slack message. Netbird's management UI runs as a Docker Compose stack — the docker-compose.yml they ship is production-ready on a $10 Hetzner VPS without modification. One gotcha: the self-hosted setup requires you to run your own signal and relay servers, so budget for two VPS instances or accept that hole-punching reliability drops slightly compared to their cloud.
Travel and conferences — Algo, spun up fresh per trip. I keep an Ansible variables file committed to a private repo. Before a trip: ./algo, answer five prompts, 8 minutes later I have a DigitalOcean droplet running IKEv2 and WireGuard. After the trip: destroy the droplet, done. The per-trip cost is under $1. Conference hotel WiFi is genuinely hostile — I've watched mDNS traffic from other guests hit my laptop on "secured" hotel networks. Algo's ephemeral model means there's no persistent target, no stale keys, no VPS sitting idle between trips accumulating attack surface.
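The teardown half of that ritual, assuming DigitalOcean and the `doctl` CLI (the droplet name is whatever you chose during the Algo prompts):

```shell
# Find the droplet Algo created, then destroy it
doctl compute droplet list --format ID,Name,PublicIPv4
doctl compute droplet delete <DROPLET_ID> --force
# The local configs/ directory keeps the .conf files. Delete those
# too once the trip is over: the keys are useless without the server,
# but there's no reason to leave them lying around.
```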
Legacy NAS — OpenVPN, grudgingly. My QNAP box from 2019 ships an OpenVPN package and nothing else. The client connects, files transfer, I pretend WireGuard exists. This is the only place in my stack where I'm running something I'd never choose today — OpenVPN's userspace crypto and TLS handshake overhead is measurably slower on the NAS's Celeron CPU. The moment I replace the hardware, this goes away. If you're in the same position: just live with it until the hardware turns over, the migration pain isn't worth it for a device you're already planning to replace.
The one I'd skip completely if starting fresh: raw WireGuard without a control plane. I ran it for four months managing /etc/wireguard/wg0.conf by hand across 5 peers. Every time I added a device I had to edit config on every other peer, reload the interface, and pray I got the AllowedIPs right. It's genuinely the best way to understand how WireGuard works — the mental model is clean and the kernel module is fast. But as an operational tool for more than two nodes, the manual peer management is just punishment. Headscale or Netbird give you WireGuard underneath with a real control plane on top. That's the stack worth running.
Pick Your Setup: Match Your Situation to the Right Tool
The most common mistake I see is people deploying Headscale for a solo setup or raw WireGuard for a 50-person org. The tool mismatch costs more time than the initial configuration ever would. So here's the honest breakdown by situation, not by feature checklist.
Solo developer, one laptop plus one phone: Raw WireGuard or Algo. Full stop. Don't run a coordination server you have to babysit for two peers. With raw WireGuard, you're looking at maybe 20 lines of config total. With Algo, you run one Ansible playbook and get a hardened WireGuard setup on a $6/month VPS in about 15 minutes. The thing that catches people off guard with Algo is that it's not a daemon — it's a one-shot provisioner. You run it, it builds the server, you're done. No web UI to keep patched, no service to monitor.
# Algo one-shot provisioning — run this locally, not on the server
git clone https://github.com/trailofbits/algo.git
cd algo
python3 -m pip install --user -r requirements.txt
./algo
# Interactive prompts: choose provider, region, users
# Output: WireGuard .conf files in configs/ directory, ready to import
Small team of 2–10 people where you're the admin: Headscale is the right call. You get the full Tailscale mesh topology — meaning every peer talks directly to every other peer after initial coordination, not through a central relay — and you own the control plane. The operational overhead is genuinely low once it's running. One binary, one config file, SQLite or Postgres for state. The thing I'd warn you about: Headscale's ACL syntax is almost identical to Tailscale's but not identical. Test your ACL policies in a throwaway namespace before applying them to production peers, because a misconfigured ACL silently blocks traffic with no helpful error on the client side.
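Here's the shape of a minimal ACL policy in the hujson format Headscale accepts — the group and host names are made up, and the exact user-reference syntax varies between Headscale versions, which is exactly why you test on throwaway nodes first:

```jsonc
{
  // hujson: comments and trailing commas are allowed
  "groups": {
    "group:admins": ["alice"],
    "group:contractors": ["bob"],
  },
  "acls": [
    // admins reach everything on the mesh
    { "action": "accept", "src": ["group:admins"], "dst": ["*:*"] },
    // contractors reach one host, on SSH and HTTPS only
    { "action": "accept", "src": ["group:contractors"], "dst": ["100.64.0.10:22,443"] },
  ],
}
```

Remember the failure mode: a rule that matches nothing doesn't error, it silently drops traffic.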
Team with non-technical members or an SSO requirement: Netbird self-hosted is the answer here, specifically because it integrates with your existing identity provider. You point it at Keycloak, Okta, or Google Workspace via OIDC and your users authenticate the same way they log into everything else. No manual key distribution, no "email this .conf file to Bob" workflow. The self-hosted stack is more moving parts than Headscale — you're running the management API, a signal server, and a TURN relay — but the trade-off is that non-technical users actually adopt it, which matters more than operational simplicity.
If you're in a regulated environment needing formal audit logs, access records for compliance reviews, or something you can show an auditor: none of these five solutions gives you that out of the box. WireGuard logs handshakes at the kernel level but doesn't persist them. Headscale has basic logging but nothing structured enough for SOC 2 evidence. You'll need to layer something like Vector or Fluent Bit to ship logs to a SIEM, configure log retention policies, and probably write custom parsing rules for WireGuard kernel events. That's a real project, not a config flag.
Just want to understand how this works before committing: Spin up an Algo instance on a $6 DigitalOcean droplet, read through what it actually configures — the iptables rules, the WireGuard interface setup, the systemd unit files — then tear it down. You'll understand the primitives well enough to make an informed choice about whether you need something more complex. Budget 90 minutes.
One red flag that should make you stop and reconsider your entire setup: peer public keys committed to a public git repo. I've seen this more than once. Public keys alone don't compromise your VPN — you can't derive the private key from the public key — but the moment you commit public keys, there's pressure to commit the full config "just for convenience," and then you have private keys in git history, sometimes for months before anyone notices. Keep all WireGuard configs in a private store, encrypted at rest, with access logging. If you're using something like Ansible to manage configs, use Ansible Vault. The operational inconvenience is worth it.
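If Ansible is already in the picture, the vault workflow is two commands. The file path here is illustrative, not a convention:

```shell
# Encrypt in place — git only ever sees ciphertext after this
ansible-vault encrypt roles/vpn/files/wg0.conf

# Inspect later without writing plaintext back to disk
ansible-vault view roles/vpn/files/wg0.conf
```

Plays that copy the file decrypt it transparently at runtime, so the plaintext private key never needs to exist in the repo at any point in history.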
The Operational Stuff Nobody Writes About
Most VPN tutorials end right after the "and now you're connected!" moment. That's where your actual job starts. I've been burned by every single one of these operational gaps, and I'm writing this so you don't have to learn them at 2am when a contractor's laptop gets stolen.
Monitoring: Knowing Your VPN Is Actually Working
Ping checks lie. Your WireGuard interface can be up, the port can be open, and your server will still silently drop all peer traffic after a kernel panic + restart if you forgot to set SaveConfig = true or run wg-quick as a service. The only real health signal is last-handshake per peer. If that timestamp is more than 3 minutes old for an active device, something is wrong. Here's the Prometheus exporter setup I run — prometheus-wireguard-exporter by @mindflavor does exactly this:
# /etc/systemd/system/prometheus-wireguard-exporter.service
[Unit]
Description=Prometheus WireGuard Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/prometheus_wireguard_exporter \
  -p 9586 \
  -n /etc/wireguard/wg0.conf
Restart=always

[Install]
WantedBy=multi-user.target
The metric you care about is wireguard_peer_last_handshake_seconds. I alert on it with this Prometheus alerting rule (Alertmanager just handles the routing and paging):
# alerts/wireguard.yml
groups:
  - name: wireguard
    rules:
      - alert: WireGuardPeerStale
        expr: (time() - wireguard_peer_last_handshake_seconds) > 180
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Peer {{ $labels.public_key }} hasn't handshaked in 3+ minutes"
          # This fires for idle devices too — add a traffic threshold
          # filter if you have phones that sleep aggressively
      - alert: WireGuardInterfaceDown
        expr: absent(wireguard_peer_last_handshake_seconds)
        for: 2m
        labels:
          severity: critical
Fair warning: the stale-peer alert will fire for mobile clients that go idle. I suppress it with a second condition — only alert if wireguard_peer_sent_bytes_total had traffic in the last hour. Otherwise your phone sitting in a pocket becomes a PagerDuty incident at midnight.
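If a full Prometheus stack feels heavy for five peers, the same staleness check can be scripted directly against `wg show`. This is a sketch, not the exporter: the heredoc stands in for live `wg show wg0 latest-handshakes` output (whitespace-separated pubkey and unix timestamp of the last handshake), so swap it for the real command on your server.

```shell
#!/usr/bin/env bash
now=$(date +%s)
cutoff=180   # same 3-minute threshold as the alert above
stale_peers=""

# Each line: <peer-pubkey> <unix-timestamp-of-last-handshake>
while read -r pubkey ts; do
  [ -z "$pubkey" ] && continue
  age=$(( now - ts ))
  if [ "$age" -gt "$cutoff" ]; then
    stale_peers="$stale_peers $pubkey"
    echo "STALE: $pubkey (${age}s since last handshake)"
  fi
done <<EOF
phonePeerKey= $(( now - 45 ))
laptopPeerKey= $(( now - 900 ))
EOF
```

Drop it in cron with a mail or webhook on non-empty output and you have 80% of the monitoring value for none of the infrastructure.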
Key Rotation When a Device Is Lost
My policy: if a device is reported lost or stolen, I treat the peer key as compromised within 15 minutes, not "sometime this week." For WireGuard the revocation is instant — remove the [Peer] block from your server config and reload:
# Remove peer by public key without bringing the interface down
sudo wg set wg0 peer <PUBKEY> remove
# Persist it so it survives a restart
sudo wg-quick save wg0
# OR if you manage config manually:
sudo wg syncconf wg0 <(wg-quick strip wg0)
For contractor offboarding I keep a simple inventory file in a private git repo — one line per peer with their name, public key, and provisioning date. When they leave, I grep for their key, remove it, commit the change, and the audit trail is there. For Headscale, it's headscale nodes delete --identifier <id> followed by headscale routes deactivate if they had subnet routes. The thing people miss: also rotate your own server's private key every 6-12 months and redistribute client configs. WireGuard doesn't force this but you should do it anyway.
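The grep-the-inventory step is scriptable. A sketch assuming a whitespace-separated inventory of name, public key, and provisioning date — the format is mine, not a standard, so adjust the awk to whatever your file looks like:

```shell
#!/usr/bin/env bash
# Build a sample inventory; in real life this is the file in your repo
inventory=$(mktemp)
cat > "$inventory" <<'EOF'
alice aL1cEfakePubKey= 2025-01-10
bob b0BfakePubKey= 2025-03-02
EOF

leaver="bob"
pubkey=$(awk -v name="$leaver" '$1 == name { print $2 }' "$inventory")
echo "Revoking peer key: $pubkey"

# On the actual server, this is the teeth of the operation:
#   wg set wg0 peer "$pubkey" remove && wg-quick save wg0
rm -f "$inventory"
```

Fifteen lines of script means the 15-minute revocation policy survives contact with a 2am phone call.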
UFW Rules That Will Burn You
The default ufw setup blocks inbound but — and this is the gotcha that catches almost everyone — it does not enable DEFAULT_FORWARD_POLICY=ACCEPT in /etc/default/ufw. Without that, your VPN clients can reach the server but can't route traffic through it. Your internet kill switch becomes a server-only kill switch. Do this before anything else:
# /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT" # change from DROP
# /etc/ufw/before.rules — add this block BEFORE the *filter section
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
COMMIT
# Then your actual rules:
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh # before you enable ufw or you lock yourself out
ufw allow 51820/udp # WireGuard
ufw allow 443/tcp # if running Headscale with HTTPS
# ufw applies the rules above to IPv6 as well, but only if IPV6=yes
# is set in /etc/default/ufw — verify that before you enable
ufw enable
# Verify forwarding is actually on — and persist it, e.g. via
# net/ipv4/ip_forward=1 in /etc/ufw/sysctl.conf:
sysctl net.ipv4.ip_forward # should be 1
Also: never run ufw enable on a remote server without having ufw allow ssh confirmed first. Yes, this is obvious. I've done it anyway. Most cloud providers have a serial console; some don't. Don't find out which camp yours is in at the worst possible time.
Backup Strategy for Your CA and Control Plane
For OpenVPN your entire trust model lives in /etc/openvpn/easy-rsa/pki/. If that directory is gone, you cannot issue new client certs, you cannot revoke compromised ones, and you need to rebuild the entire CA and re-provision every client. Back it up encrypted, offsite, immediately after setup — not "when I get around to it." I use age for encryption and push to a private S3 bucket:
# Backup the PKI directory after any cert change
tar czf - /etc/openvpn/easy-rsa/pki/ | \
age -r <YOUR_AGE_PUBLIC_KEY> > \
/tmp/pki-backup-$(date +%Y%m%d).tar.gz.age
# Push to S3 (or any object store)
aws s3 cp /tmp/pki-backup-$(date +%Y%m%d).tar.gz.age \
s3://your-backup-bucket/vpn-pki/
For Headscale the database is a single SQLite file at /var/lib/headscale/db.sqlite by default. That file contains all your nodes, users, preauthkeys, and routes. Losing it means every device needs to be re-enrolled. I run a cron job that backs it up every 6 hours using SQLite's online backup mode so I don't get a corrupted snapshot mid-write:
# /etc/cron.d/headscale-backup
0 */6 * * * root sqlite3 /var/lib/headscale/db.sqlite \
".backup '/var/backups/headscale/db-$(date +\%Y\%m\%d-\%H\%M).sqlite'" \
&& find /var/backups/headscale/ -mtime +7 -delete
Netbird uses either an embedded SQLite or an external Postgres 14+ database depending on your setup. If you chose Postgres, you already know how to back it up. If you chose SQLite and forgot: same story as Headscale, same .backup command.
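Whichever backend you're on, schedule an actual restore drill: a backup you've never restored is a hope, not a backup. The drill is short (key path and date-stamped filenames here are illustrative; they should match whatever your backup commands above produce):

```shell
# OpenVPN PKI: decrypt with your age identity, unpack somewhere disposable
age -d -i ~/.age/backup-key.txt pki-backup-20250601.tar.gz.age | \
  tar xzf - -C /tmp/restore-test/

# Headscale: pull a snapshot and let SQLite vouch for it
sqlite3 /var/backups/headscale/db-20250601-0600.sqlite \
  "PRAGMA integrity_check;"   # expect: ok
```

Do it once per quarter. The first drill is when you discover the age key lives only on the laptop you'd be restoring.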
IPv6 — The AllowedIPs Line Everyone Skips
WireGuard handles IPv6 natively at the kernel level. Your VPS almost certainly has a public IPv6 /64 assigned. Most tutorials set AllowedIPs = 0.0.0.0/0 and call it a day, which routes all IPv4 through the tunnel and lets IPv6 leak out directly — completely breaking your privacy model for any client with IPv6 connectivity. The correct AllowedIPs to tunnel all traffic including IPv6:
# Client wg0.conf [Peer] section
[Peer]
PublicKey = <SERVER_PUBKEY>
Endpoint = your.server.ip:51820
AllowedIPs = 0.0.0.0/0, ::/0 # ::/0 is the part everyone forgets
# On the server side, your [Interface] needs an IPv6 address too:
[Interface]
Address = 10.8.0.1/24, fd86:ea04:1115::1/64 # private ULA range
ListenPort = 51820
PrivateKey = <SERVER_PRIVATE_KEY>
# And for each peer, assign an IPv6 address:
[Peer]
PublicKey = <CLIENT_PUBKEY>
AllowedIPs = 10.8.0.2/32, fd86:ea04:1115::2/128
The fd86:ea04:1115::/48 range I'm using is a ULA (Unique Local Address) prefix — private IPv6, not routable on the public internet, analogous to 10.0.0.0/8. Generate your own unique ULA prefix at a site like unique-local-ipv6.com rather than copying mine. Then update your NAT6 rules in ufw's before.rules the same way you did for IPv4, just with ip6tables syntax. Test for leaks with curl -6 ifconfig.io from a client — if you see your VPS's IPv6 address, you're tunneling correctly. If you see your home ISP's IPv6, you've got a leak.
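That leak check is worth wrapping into one script you run from each client after any config change. It needs outbound curl, and ifconfig.io is just one of several services that echo your source address:

```shell
#!/bin/sh
# Compare IPv4 and IPv6 exit addresses — both should belong to the VPS
v4=$(curl -4 -s --max-time 10 ifconfig.io || echo "no IPv4 route")
v6=$(curl -6 -s --max-time 10 ifconfig.io || echo "no IPv6 route")
echo "IPv4 exit: $v4"
echo "IPv6 exit: $v6"
# A home-ISP address on the IPv6 line means ::/0 is missing from
# this client's AllowedIPs — the exact leak described above
```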
Disclaimer: This article is for informational purposes only. The views and opinions expressed are those of the author(s) and do not necessarily reflect the official policy or position of Sonic Rocket or its affiliates. Always consult with a certified professional before making any financial or technical decisions based on this content.
Originally published on techdigestor.com. Follow for more developer-focused tooling reviews and productivity guides.