The Dilemma of Remote Monitoring
If you've ever tried to set up a monitoring stack for a scattered infrastructure—say, a local home server, a cheap VPS in Europe, and a ZimaBoard running at your parents' house—you've probably faced the ultimate dilemma: how do I collect metrics securely?
The classic monitoring stack involves Prometheus pulling data from targets running agents like Node Exporter or cAdvisor. But leaving ports like 9100 or 8080 wide open to the public internet is a disaster waiting to happen. Scanners will find them within hours, leading to garbage traffic, leaked system information, or worse.
Many developers solve this by installing a VPN (like WireGuard) directly on the host OS. But this approach has major flaws: it mixes your personal VPN traffic with technical traffic, requires messy iptables routing, and breaks the clean isolation that Docker is supposed to provide.
In this series, I'll show you an elegant way to connect isolated monitoring components into a single, secure network using the VPN Sidecar pattern and Docker's Network Namespace sharing.
The Architecture Concept
Instead of configuring a VPN at the OS level, we will isolate WireGuard (or AmneziaWG, if you need to bypass DPI) inside a lightweight Docker container on our Hub server.
Here is the magic trick: We won't give our monitoring services (like Prometheus) their own Docker networks. Instead, we will force them to use the VPN container's network stack.
To Prometheus, it will look like it's natively sitting inside the VPN subnet (e.g., 10.10.0.x). It will be able to scrape remote nodes using their internal VPN IPs without routing a single byte through the public internet, and without exposing any host ports.
Step 1: Setting Up the VPN Hub
Let's start by creating the foundation on our local server (the Hub), which will store the metrics.
Create a directory for your project and add a docker-compose.yaml file:
```yaml
services:
  wg-monitoring:
    image: amneziavpn/amnezia-wg # or linuxserver/wireguard
    container_name: wg-monitoring
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    ports:
      - "51820:51820/udp" # The only port we expose to the internet
    volumes:
      - ./wireguard/config:/config
    restart: unless-stopped
    networks:
      - monitoring_net

networks:
  monitoring_net:
    driver: bridge
```
What's happening here?
- We grant the `NET_ADMIN` capability so the container can create the `wg0` network interface.
- We expose UDP port `51820` so our remote servers can connect to this Hub.
- We mount a volume containing our standard `wg0.conf` (your WireGuard/AmneziaWG server configuration).
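For reference, a minimal server-side `wg0.conf` could look like the sketch below. The keys are placeholders (generate real ones with `wg genkey` / `wg pubkey`), and the `10.10.0.0/24` subnet is just the example range used throughout this article:

```ini
; ./wireguard/config/wg0.conf on the Hub (placeholder keys)
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

[Peer]
; Node 1
PublicKey = <node1-public-key>
AllowedIPs = 10.10.0.2/32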
Let's assume our Hub is assigned the IP 10.10.0.1 inside the VPN.
Step 2: The Network Namespace Magic
Now, let's add a service to the same docker-compose.yaml that actually needs to access the remote servers. Let's add Prometheus:
```yaml
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    # WARNING: This is where the magic happens!
    network_mode: "service:wg-monitoring"
    depends_on:
      - wg-monitoring
    volumes:
      - ./prometheus:/etc/prometheus
    restart: unless-stopped
```
Notice the key line: network_mode: "service:wg-monitoring".
Because of this line, we do not define a networks or ports block for Prometheus. From a networking perspective, the prometheus container now shares the exact same network stack as wg-monitoring.
If you were to exec into the Prometheus container and run ip a, you would see the wg0 interface with the IP 10.10.0.1. Prometheus can now directly reach any client connected to this VPN.
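You can run this sanity check yourself once the stack is up (the exact tooling inside the `prom/prometheus` image may vary; if it lacks `ip`, run the same command in `wg-monitoring`, which shares the identical stack). Output abridged:

```console
$ docker exec -it prometheus ip a
...
wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 ...
    inet 10.10.0.1/24 scope global wg0
```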
Step 3: Securing the Remote Nodes
On your remote server (let's call it Node 1), we need to deploy a VPN client container alongside our metrics agents.
Here is the docker-compose.yaml for the remote server:
```yaml
services:
  wg-client:
    image: amneziavpn/amnezia-wg
    container_name: wg-client
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    volumes:
      - ./wireguard/config:/config # Client config with IP 10.10.0.2
    restart: unless-stopped

  node-exporter:
    image: prom/node-exporter
    container_name: node-exporter
    network_mode: "service:wg-client"
    depends_on:
      - wg-client
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    # Point Node Exporter at the mounted host paths, otherwise it
    # would report the container's own (near-empty) /proc and /sys.
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
```
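The matching client `wg0.conf` might look like this sketch (placeholder keys again; `HUB_PUBLIC_IP` stands for whatever public address or DNS name your Hub answers on):

```ini
; ./wireguard/config/wg0.conf on Node 1 (placeholder keys)
[Interface]
Address = 10.10.0.2/32
PrivateKey = <node1-private-key>

[Peer]
PublicKey = <hub-public-key>
Endpoint = HUB_PUBLIC_IP:51820
; Route the whole VPN subnet through the tunnel
AllowedIPs = 10.10.0.0/24
; Keep the NAT mapping alive when the node sits behind a home router
PersistentKeepalive = 25
```

The `PersistentKeepalive` line matters for exactly the "ZimaBoard at your parents' house" scenario: a node behind NAT has no public address, so it must ping the Hub periodically to keep the return path open.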
We apply the exact same pattern here: node-exporter shares the network namespace with wg-client.
Now, Node Exporter is happily collecting host metrics and serving them on port 9100 — but only inside the VPN tunnel at 10.10.0.2. Not a single port scanner on the public internet can reach your metrics, because the port is never published to the host OS.
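Once both sides are up, you can sanity-check the whole path from the Hub. For example, assuming the busybox `wget` shipped in the Prometheus image:

```console
$ docker exec prometheus wget -qO- http://10.10.0.2:9100/metrics
```

If the tunnel is healthy, you'll see the familiar `# HELP` / `# TYPE` lines of Node Exporter's metrics output, fetched entirely over the encrypted link.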
Wrapping Up Part 1
We have successfully created an invisible, encrypted L3 link between containers running on servers that might be thousands of miles apart. We achieved this cleanly, without littering our host systems with complex iptables rules, staying true to the "Infrastructure as Code" spirit.
In Part 2, we will dive into configuring Prometheus to scrape these remote targets through the tunnel, and we'll set up Grafana so you can safely view your dashboards via the web without exposing your database.