DEV Community

Danylo Mikula

Build a Highly Available Pi-hole Cluster with Ansible (VRRP)

Step-by-step guide to prepare two Linux hosts, then use Ansible to deploy a highly available Pi-hole pair with keepalived (VRRP) and a Virtual IP, plus config sync and validation - powered by my open-source playbook: ansible-pihole-cluster

Download & flash Rocky Linux 10 for Raspberry Pi

1) Get the Raspberry Pi image

  1. Go to the official Rocky Linux Download page: pick ARM (aarch64).
  2. Scroll to the Raspberry Pi Images section and download the image for your Pi.
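
Optionally, verify the download before flashing. Rocky Linux publishes a CHECKSUM file next to each image; the filename pattern below is illustrative, so substitute the file you actually downloaded:

```shell
# Download the CHECKSUM file from the same page as the image, then:
sha256sum -c CHECKSUM --ignore-missing

# Or compare the hash manually (adjust the filename to your download):
sha256sum Rocky-10-RaspberryPi-*.raw.xz
```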

2) Flash the image to a microSD card

You can use balenaEtcher (what I use below), or Raspberry Pi Imager—both work.

Option A — balenaEtcher

  1. Install/open balenaEtcher.
  2. Flash from file → pick the Rocky Linux RPi image.
  3. Select target → choose your microSD card.
  4. Flash! → wait for completion.


Option B — Raspberry Pi Imager

  1. Open Raspberry Pi Imager.
  2. Click Choose OS → Use custom and select the Rocky Linux RPi image.
  3. Choose your microSD card and Next.
  4. When asked “Would you like to apply OS customisation settings?” click No (we’ll configure users/SSH/hostname later).
  5. You’ll get a Warning that all data on the card will be erased — click Yes.


Repeat this flashing process for both microSD cards (one per Raspberry Pi).
Next, we’ll boot each Pi and continue with user/SSH hardening and networking.


Create an admin user, install SSH keys, disable password logins, remove the default user

Do this on both Raspberry Pis. Replace dan with your preferred username.

1) Create the user and grant admin (sudo) rights

Do this on both Raspberry Pis (run on the primary first, then repeat on the secondary).
Default user: rocky, Default password: rockylinux

# pick your username
USER=dan

# create the user and set a password (for local console; we’ll disable SSH passwords next)
sudo adduser "$USER"
sudo passwd "$USER"

# add to the admin group (wheel)
sudo usermod -aG wheel "$USER"

# give passwordless sudo (NOPASSWD)
sudo su -c "echo '$USER ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/$USER"
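
Before logging out, it's worth validating the drop-in you just wrote; a syntax error in a sudoers file can break sudo entirely:

```shell
# Check the drop-in's syntax (visudo exits non-zero and reports the error if invalid)
sudo visudo -cf /etc/sudoers.d/$USER

# Confirm the NOPASSWD rule is actually in effect for the new user
sudo -l -U "$USER" | grep NOPASSWD
```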

2) Generate SSH keys on your configuration device & copy them to both Pis

Do this for both Raspberry Pis: use the matching key for each device (e.g., pihole-master → primary, pihole-backup → secondary).

  1. Generate keys (run on your laptop/desktop)
# On your configuration device
ssh-keygen -t ed25519 -C "pihole-master"  -f ~/.ssh/pihole_master
ssh-keygen -t ed25519 -C "pihole-backup"  -f ~/.ssh/pihole_backup
  2. Copy the public keys to each Pi
# Copy public keys (you’ll be prompted for the admin user’s password this one time)
ssh-copy-id -i ~/.ssh/pihole_master.pub  dan@10.0.20.50
ssh-copy-id -i ~/.ssh/pihole_backup.pub  dan@10.0.20.51

3) Harden SSH: disable root login and password-based login

Do this on both Raspberry Pis, using the new user you created.

1) SSH into each Pi (replace dan with your user and IPs with yours):

ssh dan@10.0.20.50 -i ~/.ssh/pihole_master
# and on the second Pi:
ssh dan@10.0.20.51 -i ~/.ssh/pihole_backup

2) Edit the SSH daemon config:

sudo vi /etc/ssh/sshd_config

Find and set the following:

PasswordAuthentication no
PermitRootLogin no

Save and exit.
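
Before reloading, validate the config and keep your current session open; a typo in sshd_config can lock you out:

```shell
# Validate sshd_config; prints nothing and exits 0 when the syntax is OK
sudo sshd -t

# Tip: confirm a fresh key-based login works from a second terminal
# before closing this session.
```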

3) Reload SSHD:

sudo systemctl reload sshd

4) Log out and test key-only login:

# primary
ssh dan@10.0.20.50 -i ~/.ssh/pihole_master
# secondary
ssh dan@10.0.20.51 -i ~/.ssh/pihole_backup

You should be able to log in without any password prompt.
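
To avoid typing -i and the IP every time, you can add host aliases to ~/.ssh/config on your configuration device (the names, IPs, and key paths below match this guide's examples; adjust to yours):

```
# ~/.ssh/config
Host pihole-master
    HostName 10.0.20.50
    User dan
    IdentityFile ~/.ssh/pihole_master

Host pihole-backup
    HostName 10.0.20.51
    User dan
    IdentityFile ~/.ssh/pihole_backup
```

After that, ssh pihole-master and ssh pihole-backup are all you need to type.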

4) Remove the default rocky user (on both Pis)

Make sure you’re logged in as your new user (e.g., dan), not rocky.

  1. Remove rocky and its home directory:
sudo userdel -r rocky || true

Repeat on the second Raspberry Pi.


Expand the microSD to use all available space

We’ll grow the root partition (/dev/mmcblk0p3) to fill the card, then expand the filesystem. Do this on both Raspberry Pis.

1) View current disk and partition layout

Run:

sudo parted -l

If you see a prompt like this, type Fix:

Warning: Not all of the space available to /dev/mmcblk0 appears to be used, you
can fix the GPT to use all of the space (an extra 115845120 blocks) or continue
with the current setting?
Fix/Ignore? Fix

You should see something similar to:

Model: SD SD64G (sd/mmc)
Disk /dev/mmcblk0: 62.2GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system     Name      Flags
 1      1049kB  525MB   524MB   fat16           p.UEFI    boot, esp
 2      525MB   1062MB  537MB   linux-swap(v1)  p.swap    swap
 3      1062MB  2914MB  1852MB  ext4            p.lxroot


2) Resize the root partition with cfdisk

From the output above, we want to expand /dev/mmcblk0p3.

sudo cfdisk /dev/mmcblk0

In the TUI that opens:

  • Select the partition /dev/mmcblk0p3 (labeled Linux root (ARM-64)).
  • Choose Resize and accept all remaining free space.
  • Choose Write and confirm with yes (you should see “The partition table has been altered.”).
  • Choose Quit.

3) Confirm the partition was resized

lsblk

Check that mmcblk0p3 now spans the expected size (e.g., ~57 GB on a 64 GB card).


4) Grow the filesystem to fill the partition

sudo resize2fs /dev/mmcblk0p3

You should see output confirming the filesystem was resized successfully.


5) Reboot and verify

sudo reboot

After the Pi comes back:

df -h /

You should now see the full capacity available on /.


Set static IP & DNS

Run these on each Pi (primary first, then secondary). If your connection name isn’t "Wired connection 1", run nmcli con show to find it and substitute accordingly.

Primary Pi (e.g., 10.0.20.50/24)

sudo nmcli con mod "Wired connection 1" \
  ipv4.addresses 10.0.20.50/24 \
  ipv4.gateway 10.0.20.1 \
  ipv4.dns "1.1.1.1 1.0.0.1" \
  ipv4.ignore-auto-dns yes \
  ipv4.method manual

Secondary Pi (e.g., 10.0.20.51/24)

sudo nmcli con mod "Wired connection 1" \
  ipv4.addresses 10.0.20.51/24 \
  ipv4.gateway 10.0.20.1 \
  ipv4.dns "1.1.1.1 1.0.0.1" \
  ipv4.ignore-auto-dns yes \
  ipv4.method manual

Verify after reboot:

nmcli dev show | grep -E 'IP4.ADDRESS|IP4.GATEWAY|IP4.DNS'

Deploy Pi-hole cluster with Ansible

Prereqs: Ansible installed on your configuration device and passwordless sudo enabled on both Pis (we did this earlier).

1) Clone the repository

git clone https://github.com/danylomikula/ansible-pihole-cluster.git
cd ansible-pihole-cluster

2) Install the required collections

ansible-galaxy collection install -r ./collections/requirements.yaml

3) Edit inventory and variables

inventory/hosts.ini

Set your IPs, SSH key paths, and the remote user.

Use the exact key filenames you created earlier (note the underscore in the file name ~/.ssh/pihole_master versus the hyphen in the host name pihole-master). Adjust to match your setup.

[master]
pihole-master ansible_host=10.0.20.50 ansible_user=dan ansible_ssh_private_key_file=~/.ssh/pihole_master priority=150

[backup]
pihole-backup ansible_host=10.0.20.51 ansible_user=dan ansible_ssh_private_key_file=~/.ssh/pihole_backup priority=140

Change ansible_user and the key paths accordingly.

inventory/group_vars/all.yml

Open and set the essentials for your environment (at minimum):

# Virtual IP used by keepalived (VRRP). Point your clients/DHCP to THIS address.
pihole_vip_ipv4: "10.0.20.53/24"

# Web interface password.
pihole_webpassword: "SUPER_SECURE_PASSWORD"

# Your local search domain (e.g., "homelab.local", "lan", "home", etc.)
pihole_local_domain: "homelab.local"
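
If you'd rather not keep the web password in plaintext, Ansible Vault can encrypt just that variable. This is generic Ansible behavior, not something the playbook requires; I'm assuming it reads pihole_webpassword as an ordinary variable:

```shell
# Generate an encrypted value for the variable
# (you'll be prompted for a vault password)
ansible-vault encrypt_string 'SUPER_SECURE_PASSWORD' --name 'pihole_webpassword'

# Paste the resulting "pihole_webpassword: !vault |" block into
# inventory/group_vars/all.yml, then supply the vault password at run time:
#   ansible-playbook -i inventory/hosts.ini bootstrap-pihole.yaml --ask-vault-pass
```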

4) (Optional) Quick connectivity test

ansible all -i inventory/hosts.ini -m ping

5) Bootstrap the cluster

ansible-playbook -i inventory/hosts.ini bootstrap-pihole.yaml

What the playbook installs (and why)

  • keepalived — Provides VRRP and the floating Virtual IP so one node is always the active DNS endpoint. If the master goes down, the backup takes over automatically.

  • unbound — A local validating, recursive DNS resolver. When enabled, Pi-hole forwards queries to Unbound on-box instead of public resolvers, improving privacy and reducing external dependency. Pi-hole’s official guide: https://docs.pi-hole.net/guides/dns/unbound/

  • nebula-sync — A lightweight watcher/synchronizer that keeps designated Pi-hole config/state in sync between nodes (e.g., lists, local files). Project: https://github.com/lovelaze/nebula-sync

  • pihole-updatelists — Automates fetching and applying block/allow lists from remote sources on a schedule, so your lists stay current without manual upkeep. Project: https://github.com/jacklul/pihole-updatelists
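
Once the playbook finishes, you can check Unbound directly on either node. I'm assuming it listens on 127.0.0.1:5335 as in Pi-hole's official unbound guide; check the playbook's variables if your port differs:

```shell
# Query Unbound directly, bypassing Pi-hole
dig @127.0.0.1 -p 5335 example.com +short

# DNSSEC sanity checks from the Pi-hole guide: the first should return
# SERVFAIL, the second a normal answer with the 'ad' flag set
dig sigfail.verteiltesysteme.net @127.0.0.1 -p 5335
dig sigok.verteiltesysteme.net @127.0.0.1 -p 5335
```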

6) Point your network to the Virtual IP

Update your DHCP/router (or manual client settings) to use the VIP you set in group_vars/all.yml:

  • IPv4 DNS: pihole_vip_ipv4 (e.g., 10.0.20.53)
  • IPv6 DNS: pihole_vip_ipv6 (if configured)

7) Verify

On whichever node should be master (higher priority), check that the VIP is present:

ip a | grep -A2 "$(yq '.pihole_interface' inventory/group_vars/all.yml)" | grep -E '10\.0\.20\.53|vip'
# or simply:
ip a show dev eth0

Confirm Pi-hole is answering:

dig @10.0.20.53 example.com +short

If that resolves, you’re done — your HA Pi-hole pair is live behind a single Virtual IP.
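
As a final check, you can simulate a failure and watch the VIP move. The commands below assume the VIP and interface used in this guide (10.0.20.53 on eth0); substitute yours:

```shell
# On the master: stop keepalived to simulate a node failure
sudo systemctl stop keepalived

# From a client: DNS via the VIP should keep working (served by the backup now)
dig @10.0.20.53 example.com +short

# On the backup: the VIP should now be attached to its interface
ip a show dev eth0 | grep 10.0.20.53

# Restart the master; with the higher VRRP priority (150 > 140 in the
# inventory above) it should preempt and reclaim the VIP
sudo systemctl start keepalived
```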
