Personal DNS and VPN node with Packer, Terraform, Ansible and Docker

Ahsan Nabi Dar

I went and set up my own public DNS node in the cloud: built on top of PiHole using Cloudflared and DNSCrypt, with private VPN support via WireGuard, log monitoring using Grafana/Loki, and container monitoring/management using Weaveworks Scope and Portainer, all automated with Ansible, Packer and Terraform 😎. This is similar to the ad-blocking VPN with DNS over HTTPS I run at home on a Raspberry Pi 4, so why do it again? 😅

  1. This is automated, so the stack can be created from scratch in the cloud in under 5 minutes 😉
  2. It's more complex when all your routing is behind a home router; this is a cloud instance I can kill and provision again.
  3. Running the VPN when away from home, connecting back to my home network to block all those pesky ads, was OK locally. Being abroad meant taking a speed hit due to traffic routing out and coming back in.
  4. Now I am able to use my own public DNS node and block all ads while keeping traffic on the local connection, utilising all available speed 💪, when I don't need to keep data encrypted over a public network.
  5. Also, only my own DNS server sees my IP; the upstream DNS services see the cloud IP, adding another layer of privacy. A poor man's DNS privacy 😬

So, cutting the tirade short: by the end you should have an idea of how to build a setup of your own, beyond those tutorials that teach you how to install PiHole with a VPN you can't even upgrade without destroying your setup. What you end up with should look something like the diagram below, deployed using container images fetched from a container registry.

[Diagram: target architecture of the stack]

My personal node sits in Hetzner on a CX11 cloud instance: 1 vCPU and 2GB RAM, with 20GB SSD and 20TB bandwidth. All for 2.66 Euro (excl. VAT), unbelievable innit?

[Screenshot: Hetzner CX11 instance pricing]

I am going to assume some basic understanding of the tools used for this setup. First off, we need to build a base image snapshot on top of which we will build our application server image, which I call JARVIS. For that we use Packer. I am not going to go into the Ansible part, as it depends on what you want in your snapshot; just point playbook_file at your playbook.

Packer

Base image

To build the base image, make sure you generate a Hetzner API token and set it as an environment variable. This recipe builds an image on Debian 10 and saves it as debian-base-snapshot. Two things to note: use the location and the server_type where you will run the final server. In this step my Ansible playbook copies SSH keys, installs Docker and sets up the other dev tools I need.
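The template reads the token from the HCLOUD_TOKEN environment variable, so export it first (placeholder value, use your own token):

export HCLOUD_TOKEN="xxxxxxxxxxxx"  # Hetzner Cloud API token from the console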

To build, run the following command:
packer build packer.json

{
    "variables": {
      "hcloud_token": "{{env `HCLOUD_TOKEN`}}"
    },
    "builders": [
      {
        "token": "{{ user `hcloud_token` }}",
        "server_name": "base-packer",
        "snapshot_name": "debian-base-snapshot",
        "snapshot_labels": { "name": "debian-base-snapshot" },
        "type": "hcloud",
        "image": "debian-10",
        "location": "nbg1",
        "server_type": "cx11",
        "ssh_username": "root"
      }
    ],
    "provisioners": [
      {   
        "type": "shell",
        "inline": [
            "sleep 30",
            "apt-get update",
            "apt-get -y upgrade",
            "apt-get update && apt-get install -y wget curl gcc make python python-dev python-setuptools python-pip libffi-dev libssl-dev libyaml-dev"
        ]   
      },  
      {   
        "type": "ansible",
        "extra_arguments": ["--vault-password-file=~/.helsing_ansible_vault_pass"],
        "playbook_file": "../../../../ansible/base.yml"
      }   
   ]
  }

Jarvis image

Once you have the base snapshot, we use it in place of the Hetzner-provided image to build the server snapshot we actually run. You can use the same token as in the previous step. The recipe is very similar to the base one; it uses its own playbook to install what this image needs. One of the things I do at this stage is git clone the source repo.
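The image_filter in the recipe below picks the most recent snapshot labelled name==debian-base-snapshot. As a hypothetical sanity check, assuming you have the hcloud CLI installed and HCLOUD_TOKEN exported, you can confirm the snapshot exists first:

hcloud image list --type snapshot --selector name=debian-base-snapshot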

packer build packer.json

{
    "variables": {
      "hcloud_token": "{{env `JARVIS_HCLOUD_TOKEN`}}"
    },
    "builders": [
      {
        "token": "{{ user `hcloud_token` }}",
        "server_name": "jarvis-packer",
        "snapshot_name": "debian-jarvis-snapshot-base",
        "type": "hcloud",
        "image_filter": {
          "with_selector": [
            "name==debian-base-snapshot"
          ],
          "most_recent": true
        },
        "snapshot_labels": { "name": "debian-jarvis-snapshot-base" },
        "location": "nbg1",
        "server_type": "cx11",
        "ssh_username": "root"
      }
    ],
    "provisioners": [
      {   
        "type": "shell",
        "inline": [
            "sleep 30",
            "apt-get update",
            "apt-get -y upgrade",
            "apt-get update && apt-get install -y wget curl gcc make python python-dev python-setuptools python-pip libffi-dev libssl-dev libyaml-dev"
        ]   
      },  
      {   
        "type": "ansible",
        "extra_arguments": ["--vault-password-file=~/.helsing_ansible_vault_pass"],
        "playbook_file": "../../../../ansible/jarvis.yml"
      }   
   ]
  }


Terraform

The last time I worked with Terraform was while setting up my stack, when the latest version was 0.12, so for this project I upgraded to the latest 0.14.8, which requires that you set up a version.tf.

version.tf

terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
      version = "~> 1.25.2"
    }
  }
  required_version = "~> 0.14"
}
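Before planning, initialise the working directory so Terraform downloads the hcloud provider. The token variable can be supplied through Terraform's standard TF_VAR_ environment convention (placeholder value shown):

export TF_VAR_JARVIS_HCLOUD_TOKEN="xxxxxxxxxxxx"  # same Hetzner API token
terraform init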

Here is the recipe that creates the infrastructure; it uses the snapshot created earlier. Hetzner has recently launched their Firewall, simplifying port management. This will create one cx11 server instance and a firewall with the defined rules attached to it for traffic.

provider.tf

terraform plan -out=jarvis.out

terraform apply "jarvis.out"

variable "JARVIS_HCLOUD_TOKEN" {}

provider "hcloud" {
  token = var.JARVIS_HCLOUD_TOKEN
}

data "hcloud_image" "jarvis_image" {
  with_selector = "name=debian-jarvis-snapshot-base"
}

data "hcloud_ssh_keys" "all_keys" {
}

resource "hcloud_firewall" "jarvis_firewall" {
  name = "jarvis-firewall"

  rule { // SSH
    direction = "in"
    protocol  = "tcp"
    port      = "22"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // HTTP
    direction = "in"
    protocol  = "tcp"
    port      = "80"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // DNS
    direction = "in"
    protocol  = "udp"
    port      = "53"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // DNS
    direction = "in"
    protocol  = "tcp"
    port      = "53"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // HTTPS
    direction = "in"
    protocol  = "tcp"
    port      = "443"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }


  rule { // WIREGUARD
    direction = "in"
    protocol  = "udp"
    port      = "52828"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // GRAFANA
    direction = "in"
    protocol  = "tcp"
    port      = "13443"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // SCOPE
    direction = "in"
    protocol  = "tcp"
    port      = "15443"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

  rule { // PORTAINER
    direction = "in"
    protocol  = "tcp"
    port      = "19443"
    source_ips = [
      "0.0.0.0/0",
      "::/0"
    ]
  }

}

resource "hcloud_server" "jarvis_server" {
  name         = "jarvis-hetzner"
  image        = data.hcloud_image.jarvis_image.id
  server_type  = "cx11"
  labels       = { "name" = "jarvis-hetzner" }
  location     = "nbg1"
  ssh_keys     = data.hcloud_ssh_keys.all_keys.ssh_keys.*.name
  firewall_ids = [hcloud_firewall.jarvis_firewall.id]
}

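Once applied, you can look up the server's public IP, for example with the hcloud CLI using the label set on the server (a hypothetical check; adding a Terraform output for hcloud_server.jarvis_server.ipv4_address would work too):

hcloud server list --selector name=jarvis-hetzner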

That sorts out our infrastructure. Now comes the important part: what are we going to run, and how? The node runs 12 services using Docker Compose.

[Diagram: the 12 services on the Docker network]

  1. OpenResty
  2. HAProxy
  3. PiHole
  4. Grafana
  5. Loki
  6. Promtail
  7. Prometheus
  8. Scope
  9. Portainer
  10. Cloudflared
  11. DNSCrypt
  12. WireGuard

All Dockerfiles are kept in per-service folders for convenience, referenced from the docker-compose file; see the layout sketch below.

[Screenshot: Dockerfile folder layout]
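Based on the build contexts in the compose files below, the layout looks roughly like this (a reconstruction, not the exact repo):

jarvis/
├── docker-compose-ci.yml
├── docker-compose.yml
├── openresty/Dockerfile
├── haproxy/Dockerfile
├── pihole/Dockerfile
├── cloudflared/Dockerfile
├── dnscrypt/Dockerfile
├── wireguard/Dockerfile
├── grafana/Dockerfile
├── loki/Dockerfile
├── promtail/Dockerfile
├── scope/Dockerfile
├── prometheus/Dockerfile
└── portainer/Dockerfile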

Docker Compose

First off, we need a docker-compose.yml file that we will use to build the container images, which can then be tagged and pushed to a registry and later pulled for running.

This file creates the images with all the configs and settings each one requires; run it as the first step, before tagging and pushing to the registry. Let's call this file docker-compose-ci.yml and build the images as follows:

docker-compose -f docker-compose-ci.yml build

version: "3.7"

services:
  #OPENRESTY 10.0.3.2
  openresty:
    build:
      context: ./openresty
      dockerfile: ./Dockerfile
    image: jarvis/openresty

  #HAPROXY 10.0.3.6
  haproxy:
    build:
      context: ./haproxy
      dockerfile: ./Dockerfile
      args:
        BASIC_AUTH_USERNAME: ${BASIC_AUTH_USERNAME}
        BASIC_AUTH_PASSWORD: ${BASIC_AUTH_PASSWORD}
        BASIC_AUTH_REALM: ${BASIC_AUTH_REALM}
        HAPROXY_HTTP_SCHEME: ${HAPROXY_HTTP_SCHEME}
        HAPROXY_STATS_URI: ${HAPROXY_STATS_URI}
        HAPROXY_STATS_REFRESH: ${HAPROXY_STATS_REFRESH}
    image: jarvis/haproxy

  #PIHOLE 10.0.3.3
  pihole:
    build:
      context: ./pihole
      dockerfile: ./Dockerfile
      args:
        TZ: ${TZ}
        WEBPASSWORD: ${WEBPASSWORD}
        DNS1: 10.0.3.4#5053 # cloudflared IP Address
        DNS2: 10.0.3.5#5053 # DNSCrypt IP Address
    image: jarvis/pihole

  #CLOUDFLARED 10.0.3.4
  cloudflared:
    build:
      context: ./cloudflared
      dockerfile: ./Dockerfile
      args:
        TZ: ${TZ}
        TUNNEL_DNS_UPSTREAM: ${TUNNEL_DNS_UPSTREAM}
    image: jarvis/cloudflared

  #DNSCRYPT 10.0.3.5
  dnscrypt:
    build:
      context: ./dnscrypt
      dockerfile: ./Dockerfile
    image: jarvis/dnscrypt


  #WIREGUARD 10.0.3.7
  wireguard:
    build:
      context: ./wireguard
      dockerfile: ./Dockerfile
      args:
        PUID: ${PUID}
        PGID: ${PGID}
        TZ: ${TZ}
        PEERS: ${PEERS}
        PEERDNS: 10.0.3.3#53 #pi-hole IP Address
    image: jarvis/wireguard

  #GRAFANA 10.0.3.8
  grafana:
    build:
      context: ./grafana
      dockerfile: ./Dockerfile
      args:
        GF_SECURITY_ADMIN_PASSWORD: ${GF_SECURITY_ADMIN_PASSWORD}
    image: jarvis/grafana

  #LOKI 10.0.3.9
  loki:
    build:
      context: ./loki
      dockerfile: ./Dockerfile
    image: jarvis/loki

  #PROMTAIL 10.0.3.10
  promtail:
    build:
      context: ./promtail
      dockerfile: ./Dockerfile
    image: jarvis/promtail

  #SCOPE 10.0.3.11
  scope:
    build:
      context: ./scope
      dockerfile: ./Dockerfile
      args:
        ENABLE_BASIC_AUTH: ${ENABLE_BASIC_AUTH}
        BASIC_AUTH_USERNAME: ${BASIC_AUTH_USERNAME}
        BASIC_AUTH_PASSWORD: ${BASIC_AUTH_PASSWORD}
    image: jarvis/scope

  #PROMETHEUS 10.0.3.12
  prometheus:
    build:
      context: ./prometheus
      dockerfile: ./Dockerfile
    image: jarvis/prometheus

  #PORTAINER 10.0.3.13
  portainer:
    build:
      context: ./portainer
      dockerfile: ./Dockerfile
    image: jarvis/portainer

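docker-compose fills in the ${...} build args from your shell environment or a .env file sitting next to the compose file (standard compose behaviour). A hypothetical .env with placeholder values to adapt:

TZ=Europe/Berlin
WEBPASSWORD=changeme                # PiHole admin UI password
BASIC_AUTH_USERNAME=admin
BASIC_AUTH_PASSWORD=changeme
BASIC_AUTH_REALM=jarvis
ENABLE_BASIC_AUTH=true
HAPROXY_HTTP_SCHEME=http
HAPROXY_STATS_URI=/stats
HAPROXY_STATS_REFRESH=10s
TUNNEL_DNS_UPSTREAM=https://1.1.1.1/dns-query
PUID=1000
PGID=1000
PEERS=3                             # number of WireGuard peer configs to generate
GF_SECURITY_ADMIN_PASSWORD=changeme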

Once we have all the images built, tagged and pushed (we will get to the tagging and pushing part when we look at deployment; I am using GitLab repos, so they come with a private registry), we use docker-compose with the right set of port, volume and command mappings to bring all the services up. Read through the file to make yourself comfortable:

docker-compose up

version: "3.7"

services:
  #OPENRESTY 10.0.3.2
  openresty:
    image: registry.gitlab.com/jarvis/openresty:amd64
    container_name: jarvis_openresty
    networks:
      network:
        ipv4_address: 10.0.3.2
        aliases:
          - jarvis_openresty
    depends_on:
      - haproxy
    expose:
      - "80"
      - "443"
    ports:
    - "80:80"
    - "443:443"
    - "13443:13443"
    - "15443:15443"
    - "19443:19443"
    volumes:
      - ./openresty/ssl:/usr/local/openresty/nginx/conf/config/ssl

  #HAPROXY 10.0.3.6
  haproxy:
    image: registry.gitlab.com/jarvis/haproxy:amd64
    container_name: jarvis_haproxy
    networks:
      network:
        ipv4_address: 10.0.3.6
        aliases:
          - jarvis_haproxy
    depends_on:
      - pihole
      - grafana
      - prometheus
      - scope
    expose:
      - "80"
      - "18081"
      - "18443"
      - "13000"
      - "15000"
      - "19000"

  #PIHOLE 10.0.3.3
  pihole:
    image: registry.gitlab.com/jarvis/pihole:amd64
    container_name: jarvis_pihole
    volumes:
      - "pihole_data:/etc/pihole"
      - "pihole_dnsmasq_data:/etc/dnsmasq.d"
      - "/dev/null:/var/log/pihole.log:ro"
    depends_on:
      - cloudflared
      - dnscrypt
    expose:
      - "80/tcp"
      - "67/udp"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    environment:
      - DNSMASQ_LISTENING=all
      - IPv6=false
      - PIHOLELOG=/dev/null
    networks:
      network:
        ipv4_address: 10.0.3.3
        aliases:
          - jarvis_pihole
    dns:
      - 127.0.0.1
      - 1.1.1.1
    cap_add:
      - NET_ADMIN

  #CLOUDFLARED 10.0.3.4
  cloudflared:
    image: registry.gitlab.com/jarvis/cloudflared:amd64
    container_name: jarvis_cloudflared
    expose:
    - "49312/tcp"
    - "5053/udp"
    networks:
      network:
        ipv4_address: 10.0.3.4
        aliases:
          - jarvis_cloudflared

  #DNSCRYPT 10.0.3.5
  dnscrypt:
    image: registry.gitlab.com/jarvis/dnscrypt:amd64
    container_name: jarvis_dnscrypt
    expose:
       - "5053/tcp"
       - "5053/udp"
    volumes:
      - "dnscrypt_data:/config"
    networks:
      network:
        ipv4_address: 10.0.3.5
        aliases:
          - jarvis_dnscrypt    


  #WIREGUARD 10.0.3.7
  wireguard:
    image: registry.gitlab.com/jarvis/wireguard:amd64
    container_name: jarvis_wireguard
    volumes:
      - "wireguard_data:/config"
      - "/lib/modules:/lib/modules"
    depends_on:
      - pihole
    ports:
      - "52828:51820/udp"
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    networks:
      network:
        ipv4_address: 10.0.3.7
        aliases:
          - jarvis_wireguard

  #GRAFANA 10.0.3.8
  grafana:
    image: registry.gitlab.com/jarvis/grafana:amd64
    container_name: jarvis_grafana
    volumes:
      - "grafana_data:/var/lib/grafana:rw"
      - ./grafana/config/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
    expose:
      - "3000"
    networks:
      network:
        ipv4_address: 10.0.3.8
        aliases:
          - jarvis_grafana

  #LOKI 10.0.3.9
  loki:
    image: registry.gitlab.com/jarvis/loki:amd64
    container_name: jarvis_loki
    expose:
      - "3100"
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      network:
        ipv4_address: 10.0.3.9
        aliases:
          - jarvis_loki

  #PROMTAIL 10.0.3.10
  promtail:
    image: registry.gitlab.com/jarvis/promtail:amd64
    container_name: jarvis_promtail
    volumes:
      - /var/log:/var/log
    command: -config.file=/etc/promtail/config.yml
    networks:
      network:
        ipv4_address: 10.0.3.10
        aliases:
          - jarvis_promtail

  # SCOPE 10.0.3.11
  scope:
    image: registry.gitlab.com/jarvis/scope:amd64
    container_name: jarvis_scope
    networks:
      network:
        ipv4_address: 10.0.3.11
        aliases:
          - jarvis_scope
    expose:
      - "4040"
    pid: "host"
    privileged: true
    labels:
      - "works.weave.role=system"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:rw"
    command:
      - "--probe.docker=true"
      - "--weave=false"

  #PROMETHEUS 10.0.3.12
  prometheus:
    image: registry.gitlab.com/jarvis/prometheus:amd64
    container_name: jarvis_prometheus
    volumes:
      - "prometheus_data:/var/lib/prometheus:rw"
    expose:
    - "9090"
    networks:
      network:
        ipv4_address: 10.0.3.12
        aliases:
          - jarvis_prometheus

  #PORTAINER 10.0.3.13
  portainer:
    image: registry.gitlab.com/jarvis/portainer:amd64
    container_name: jarvis_portainer
    restart: always
    networks:
      network:
        ipv4_address: 10.0.3.13
        aliases:
          - jarvis_portainer
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - portainer_data:/data
    expose:
      - "9000"


networks:
  network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.3.0/24

volumes:
  wireguard_data: {}
  pihole_data: {}
  pihole_dnsmasq_data: {}
  cloudflared_data: {}
  dnscrypt_data: {}
  grafana_data: {}
  prometheus_data: {}
  portainer_data: {}

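To get a WireGuard client config onto a laptop or phone: the PUID/PGID/PEERS build args suggest a linuxserver.io-style image, which generates peer configs under /config. Assuming that layout (hypothetical peer name):

docker exec jarvis_wireguard cat /config/peer1/peer1.conf      # copy into your client
docker exec -it jarvis_wireguard /app/show-peer 1              # QR code for mobile apps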

For automating the deployment phase, so that you don't need to go into your server to run all the docker commands, I found a very nice, not very widely known OSS tool called Stack Up.

GitHub: pressly/sup

Super simple deployment tool - think of it like 'make' for a network of servers

Stack Up

Stack Up is a simple deployment tool that performs given set of commands on multiple hosts in parallel. It reads Supfile, a YAML configuration file, which defines networks (groups of hosts), commands and targets.


Installation

$ go get -u github.com/pressly/sup/cmd/sup

Usage

$ sup [OPTIONS] NETWORK COMMAND [...]

Options

Option              Description
-f Supfile          Custom path to Supfile
-e, --env=[]        Set environment variables
--only REGEXP       Filter hosts matching regexp
--except REGEXP     Filter out hosts matching regexp
--debug, -D         Enable debug/verbose mode
--disable-prefix    Disable hostname prefix
--help, -h          Show help/usage
--version, -v       Print version

Network

A group of hosts.

# Supfile

networks:
    production:
        hosts:
            - api1.example.com
            - api2.example.com
            - api3.example.com
    staging:
        # fetch dynamic list of hosts
        inventory: curl http://example.com/latest/meta-data/hostname

$ sup production COMMAND will run COMMAND on api1, api2 and…

It makes things quite convenient. It is similar to Ansible in the sense that you just define instructions in YAML and it executes them over SSH. Always remember: no matter what new-kid-on-the-block software there might be, BASH is always the king of the hood. In the file below you can see different stages defined that can be grouped together in a target to create a pipeline. It's a quick and simple CI/CD pipeline that you can control from your command line.

version: 0.5

env:
    ENV: <set var>
    PWD: <set var>
    CR_USER: <set var>
    CR_PAT: <set var>
    CONTAINER_REGISTRY: <set var>
    PROJ_ID: <set var>

networks:
    production:
        hosts:
            - root@server

commands:
    connect: 
        desc: Check host connectivity
        run: uname -a; date; hostname
        once: true
    build:
        desc: Build Docker image
        run: cd $PWD/$PROJ_ID && git pull && source ~/.bashrc && docker-compose -f docker-compose-ci.yml build
        once: true
    docker_login:
        desc: Login to Gitlab container registry
        run: docker login $CONTAINER_REGISTRY -u $CR_USER -p $CR_PAT 
        once: true
    docker_tag:
        desc: Tag images for CI registry
        run: >-
            docker images | grep "^${PROJ_ID}_" | awk '{print $1}' | xargs -I {} echo {} |
            xargs -I {} docker image tag {} $CONTAINER_REGISTRY/{}:latest
        once: true
    docker_push:
        desc: Push images to CI registry
        run: >-
            docker images | grep "^${CONTAINER_REGISTRY}" | awk '{print $1}' | xargs -I {}
            echo {}  | xargs -I {} docker image push {}:latest
        once: true
    restart:
        desc: Restart docker containers
        run: systemctl restart docker-compose@$PROJ_ID
        once: true
    docker_ps:
        desc: List docker process
        run: sleep 30; docker ps
        once: true
    test:
        desc: test
        run: uname -a; date; hostname 
        once: true

targets:
    deploy:
        - connect
        - build
        - docker_login
        - docker_tag
        - docker_push
        - restart
        - docker_ps

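Note that the restart stage assumes a docker-compose@ systemd template unit already exists on the server; it isn't shown in this post, but a minimal sketch (the working directory and binary path are assumptions to adapt) could look like:

cat > /etc/systemd/system/docker-compose@.service <<'EOF'
[Unit]
Description=docker-compose stack %i
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/%i
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload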

sup production deploy
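Once deployed, a few hypothetical sanity checks from any machine, with <server-ip> standing in for the instance's public IP:

dig @<server-ip> example.com         # PiHole should answer on port 53
dig @<server-ip> doubleclick.net     # blocked domains typically return 0.0.0.0
curl -kI https://<server-ip>:13443   # Grafana, per the firewall rules above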

Now a little bit about the logging and monitoring, which you can use from anywhere on the globe. Grafana is pretty amazing; even though it might seem daunting for a personal project, it sure makes your life comfortable. Loki along with Promtail is super efficient for streaming your logs, while using Prometheus to capture metrics from HAProxy and OpenResty gives some nice insights beyond logs. You can also set up dashboards over Prometheus metrics or Loki logs.
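For example, you can query Loki from the command line with its logcli tool; a hypothetical query run on the server, assuming Promtail's stock config labels the /var/log files with job="varlogs":

export LOKI_ADDR=http://10.0.3.9:3100   # Loki's address on the compose network
logcli query '{job="varlogs"}' --limit 20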

haproxy

[Screenshot: HAProxy dashboard in Grafana]

nginx

[Screenshot: OpenResty/NGINX dashboard in Grafana]

loki

[Screenshot: Loki log stream in Grafana]

For container monitoring, Scope and Portainer together are overkill :P but I like Scope's UI and metric presentation, while Portainer's management of the Docker stack is unbeatable. This setup means I never have to log into the server for any reason; both tools even let me exec into my running containers and debug them from the browser.

scope

[Screenshots: Weave Scope topology and container views]

Sometimes we just underestimate how few resources we need to run this many services, and how much we can learn thanks to the amazing OSS community and its contributions.

I hope you find this useful and that it pushes you to increase your privacy, block those pesky ads and trackers, or simply play around and learn to set up your own cloud infrastructure.
