Alex Miller
Remote Server Monitoring over VPN: A Docker Approach (Part 2)

Recap: Where We Left Off

In Part 1 of this series, we established our secure foundation: an encrypted L3 tunnel using AmneziaWG and Docker's network-namespace sharing (network_mode). We already have our first agent, node-exporter, silently listening inside the tunnel on the remote node at 10.10.0.2:9100.

The traffic is encrypted, the ports are invisible to the public internet, and the whole thing is defined as code. But metrics sitting on a remote node are useless if nothing collects and visualizes them.


The Plan for Part 2

We still have three pieces missing from the stack. In this part, we will:

  1. Add cAdvisor to remote nodes to monitor individual containers.
  2. Configure Prometheus on the Hub to pull metrics through the tunnel.
  3. Deploy Grafana to visualize everything in one place.

By the end, you will have a fully functional monitoring stack where the entire scrape path runs inside the encrypted tunnel, with no metrics endpoints reachable from the public internet.


Step 1: Monitoring Containers on Remote Nodes

While node-exporter gives us the "big picture" of the hardware (CPU, RAM, Disk), we need cAdvisor to see what's happening inside our containers.

Extend the remote node's docker-compose.yaml with a new cadvisor service. Just like the node-exporter, it must share the network stack of the VPN client — so no ports: block is needed, and the metrics endpoint will only be reachable through the tunnel.

services:
  # ... (wg-client and node-exporter services from Part 1) ...

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: unless-stopped
    network_mode: "service:wg-client" # Shares the VPN tunnel
    depends_on:
      - wg-client
    devices:
      - /dev/kmsg
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    command:
      - '--housekeeping_interval=30s'

Now your remote node is serving hardware metrics on port 9100 and container metrics on port 8080, but only via the private VPN IP (e.g., 10.10.0.2).
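Before wiring up the Hub, it's worth confirming that both endpoints actually answer through the tunnel. Here is a quick smoke test, run from the Hub (a sketch; it assumes the Part 1 tunnel is up and the remote node's tunnel IP is 10.10.0.2, as above):

```shell
# Probe both exporters over the tunnel; each should answer with
# Prometheus-format metrics (lines starting with '# HELP' / '# TYPE').
for target in 10.10.0.2:9100 10.10.0.2:8080; do
  if curl -fsS --max-time 5 "http://$target/metrics" | head -n 1 | grep -q '^#'; then
    echo "$target OK"
  else
    echo "$target UNREACHABLE"
  fi
done
```

If either target reports UNREACHABLE, check the tunnel first (`wg show` inside the VPN containers) before suspecting the exporters.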


Step 2: Configuring the Scraper on the Hub

The core of our monitoring stack is Prometheus. Since we will run it inside the VPN container's network stack on the Hub, it can reach remote targets by their static tunnel IPs as if they were on a local LAN.

Create a file named ./prometheus/prometheus.yml on your Hub:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'remote-nodes-hw'
    static_configs:
      - targets: ['10.10.0.2:9100'] # Remote server: Node Exporter

  - job_name: 'remote-nodes-containers'
    static_configs:
      - targets: ['10.10.0.2:8080'] # Remote server: cAdvisor

  - job_name: 'hub-local'
    static_configs:
      - targets: ['localhost:9090']

Notice the target addresses: from Prometheus's point of view, 10.10.0.2 is just a regular LAN host. It does not know, and does not care, that packets are being encrypted and shipped across the public internet.
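It pays to syntax-check this file before pointing Prometheus at it. One way (a sketch, assuming Docker is available on the Hub) is to borrow promtool from the prom/prometheus image itself, so nothing extra needs to be installed on the host:

```shell
# Validate prometheus.yml with promtool, which ships inside the
# official prom/prometheus image.
docker run --rm \
  -v "$PWD/prometheus/prometheus.yml:/prometheus.yml:ro" \
  --entrypoint promtool prom/prometheus:latest \
  check config /prometheus.yml \
  || echo "promtool check failed (or Docker is unavailable here)"
```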


Step 3: Deploying Prometheus and Grafana on the Hub

Now let's update the Hub's docker-compose.yaml. We need to add Prometheus and Grafana, and extend the wg-monitoring service from Part 1 to publish the Prometheus UI port.

Note the networking logic: Prometheus sits inside the VPN network to "see" the remote targets. Grafana sits outside on a standard bridge network and talks to Prometheus through the VPN container's name.

services:
  wg-monitoring:
    # ... (base config from Part 1) ...
    networks:
      - monitoring          # So Grafana can resolve "wg-monitoring" by name
    ports:
      - "51820:51820/udp"   # VPN listener (from Part 1)
      - "9090:9090"         # Prometheus UI (see note below)

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    network_mode: "service:wg-monitoring" # Hidden inside VPN namespace
    depends_on:
      - wg-monitoring
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_data:/prometheus   # must be writable by the image's non-root user
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"         # Grafana UI (LAN access only)
    networks:
      - monitoring          # Talks to wg-monitoring via bridge
    volumes:
      - ./grafana_data:/var/lib/grafana

networks:
  monitoring:
    driver: bridge

One important subtlety: because Prometheus uses network_mode: "service:wg-monitoring", it inherits the VPN container's network stack and cannot declare its own ports: block. To reach the Prometheus UI from your LAN, you must publish port 9090 on the wg-monitoring service — that's why the ports list there now has two entries instead of one.
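As an optional refinement, the Grafana data source can also be defined as code instead of being clicked together in the UI, using Grafana's file-based provisioning. A sketch (the local path ./grafana/provisioning is an assumption; mount it into the container at /etc/grafana/provisioning via the grafana service's volumes):

```yaml
# ./grafana/provisioning/datasources/prometheus.yml
# Mount it by adding this volume to the grafana service:
#   - ./grafana/provisioning:/etc/grafana/provisioning
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://wg-monitoring:9090   # Prometheus answers inside this container's network stack
    isDefault: true
```

With this file in place, the data source exists the moment Grafana starts, and the whole stack stays reproducible from the repository alone.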


How the Connection Works

When you log into Grafana at http://hub-ip:3000, you simply add a Prometheus data source pointing to http://wg-monitoring:9090.

  1. Grafana sends a query to the wg-monitoring container over the monitoring bridge network.
  2. Prometheus, which shares that network stack, is the process actually listening on port 9090, so it receives the request.
  3. Prometheus then looks at its config, sees the target 10.10.0.2, and routes the scrape request through the wg0 interface.
  4. The encrypted traffic travels across the public internet, hits the remote VPN client, and pulls metrics from the exporters.

Zero metrics ports are exposed on the remote servers, and the Hub exposes nothing to the public internet beyond the VPN listener itself.
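That claim is easy to verify on the remote node itself (a sketch; `ss` ships with iproute2 on modern Linux distributions). Because the exporters live inside the VPN container's network namespace, the host's own socket list should show nothing on the metrics ports:

```shell
# Look for 9100 (node-exporter) or 8080 (cAdvisor) among the host's
# listening TCP sockets; with namespace sharing, neither should appear.
if ss -tln 2>/dev/null | grep -qE ':(9100|8080)[[:space:]]'; then
  echo "WARNING: a metrics port is bound on the host network"
else
  echo "OK: no metrics ports visible on the host"
fi
```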


Wrapping Up Part 2

We now have a complete monitoring pipeline: remote agents collect metrics, Prometheus scrapes them through an encrypted tunnel, and Grafana renders dashboards without ever touching the public IPs of the remote nodes.
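A closing sanity check ties the whole pipeline together: Prometheus's own HTTP API reports the health of every scrape target. A sketch, run from the Hub (adjust localhost if you query from elsewhere on the LAN):

```shell
# Each active target should report "health":"up" once the first
# scrape interval (15s) has passed.
curl -s --max-time 5 'http://localhost:9090/api/v1/targets?state=active' \
  | grep -o '"health":"[a-z]*"' \
  || echo "Prometheus is not answering on localhost:9090 yet"
```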

A note on access: the Grafana UI (3000) and Prometheus UI (9090) are bound to the Hub's host ports, but they are not meant to be reachable from the public internet. In a typical home-lab setup, you access them either from inside your LAN, or by tunneling into the LAN over a separate user-facing VPN. The only port the Hub actually offers to the outside world is the AmneziaWG listener on UDP 51820.

However, this setup still has a major weakness. If your server reboots, Docker might start Prometheus before the VPN interface is actually up, leading to a "blind" stack that needs manual restarts.

In Part 3, we will tackle these race conditions with Docker healthchecks and wire up Alertmanager so failures actually reach you via Telegram, email, or whatever channel you prefer. In Part 4, we will add Loki and Promtail to aggregate logs through the same encrypted tunnel, so metrics and logs finally live in one place.
