
Danylo Mikula

Originally published at mikula.dev

Building My Personal Website: From Idea to Automated Deployment (Part 2)

In the first part of this series, I covered the high-level architecture and the tools I chose for building my personal website. Now let's dive deeper into the technical implementation, starting with the Terraform modules.

Infrastructure Overview

To deploy a minimal but complete setup on Hetzner Cloud, we need to configure the following components:

  • Network — private network for internal communication
  • Firewall — security rules to restrict traffic
  • SSH Key — authentication for server access
  • Server — the actual compute instance

I created Terraform modules for each of these components. Let's go through them one by one.

Network Module

The first piece of infrastructure we need is a private network. For this, I created the terraform-hcloud-network module.

This module provides comprehensive network management for Hetzner Cloud:

  • Optional creation of a new network or reuse of an existing one
  • Support for multiple subnets across different network zones and types (server, cloud, or vswitch)
  • Optional custom routes for advanced scenarios like VPN gateways
  • Consistent outputs for easy integration with other modules

Here's my network configuration:

module "network" {
  source  = "danylomikula/network/hcloud"
  version = "1.0.0"

  create_network = true
  name           = local.project_slug
  ip_range       = "10.100.0.0/16"

  labels = local.common_labels

  subnets = {
    web = {
      type         = "cloud"
      network_zone = "eu-central"
      ip_range     = "10.100.1.0/24"
    }
  }
}

I chose the eu-central network zone because it offers the best pricing. This configuration creates a network with a /16 CIDR block (10.100.0.0/16) and a single subnet with a /24 block (10.100.1.0/24). For a single server, this is more than enough address space.

Firewall Module

Next, we need to set up a firewall to restrict external traffic. As I mentioned in the first part, I only allow HTTP/HTTPS traffic from Cloudflare IP addresses and SSH access from my home IP.

For this, I created the terraform-hcloud-firewall module. It supports:

  • Creating multiple firewalls with custom rules
  • Both inbound and outbound rules
  • Flexible port and IP restrictions
  • Common labels across all firewalls

Here's my firewall configuration:

module "firewall" {
  source  = "danylomikula/firewall/hcloud"
  version = "1.0.0"

  firewalls = {
    "${local.resource_names.website}" = {
      rules = [
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "22"
          source_ips  = [var.my_homelab_ip]
          description = "allow ssh"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "80"
          source_ips  = local.cloudflare_all_ips
          description = "allow http from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "443"
          source_ips  = local.cloudflare_all_ips
          description = "allow https from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "icmp"
          source_ips  = ["0.0.0.0/0", "::/0"]
          description = "allow ping"
        }
      ]
      labels = {
        service = "firewall"
      }
    }
  }

  common_labels = local.common_labels
}

Dynamic Cloudflare IP Fetching

Cloudflare publishes their IP ranges publicly, so I fetch them dynamically using Terraform's http data source:

data "http" "cloudflare_ips_v4" {
  url = "https://www.cloudflare.com/ips-v4"
}

data "http" "cloudflare_ips_v6" {
  url = "https://www.cloudflare.com/ips-v6"
}

locals {
  cloudflare_ipv4_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v4.response_body))
  cloudflare_ipv6_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v6.response_body))
  cloudflare_all_ips    = concat(local.cloudflare_ipv4_cidrs, local.cloudflare_ipv6_cidrs)
}

This approach ensures that whenever Cloudflare updates their IP ranges, a simple terraform apply will update the firewall rules automatically.
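If you're curious what those data sources actually return, you can fetch the same lists with plain curl:

# The same endpoints Terraform reads; one CIDR per line.
curl -s https://www.cloudflare.com/ips-v4
curl -s https://www.cloudflare.com/ips-v6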

Important: For this setup to work, you need to enable the Proxy toggle on your A and AAAA records in Cloudflare DNS settings.

SSH Key Module

Before creating the server, we need an SSH key for authentication. I created the terraform-hcloud-ssh-key module for this purpose.

This module is quite flexible and supports:

  • Automated key generation (ED25519, RSA, or ECDSA)
  • Automatic local save of generated keys
  • Uploading existing public keys
  • Referencing keys already in Hetzner Cloud by ID or name

Here's my configuration:

module "ssh_key" {
  source  = "danylomikula/ssh-key/hcloud"
  version = "1.0.0"

  create_key = true
  name       = local.project_slug

  save_private_key_locally = true
  local_key_directory      = path.module

  labels = local.common_labels
}

This generates an ED25519 key pair (the default and recommended algorithm) and saves both the private and public keys locally for easy access.
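For a quick sanity check, you can connect with the freshly generated key. The file name below is a placeholder, since the actual name depends on what the module writes into path.module:

# Key file name and server IP are placeholders for illustration.
chmod 600 ./mikula-dev-ssh-key
ssh -i ./mikula-dev-ssh-key root@<server-public-ip>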

Server Module

Finally, let's create the server itself using the terraform-hcloud-server module. Like the others, it's designed to be flexible and supports:

  • Multi-server management with a single module invocation
  • Private network attachments with static IPs
  • Firewall integration at creation time
  • Placement groups for high availability
  • All hcloud_server resource attributes

Here's my server configuration:

module "servers" {
  source  = "danylomikula/server/hcloud"
  version = "1.0.0"

  servers = {
    "${local.resource_names.website}" = {
      server_type  = "cx23"
      location     = "hel1"
      image        = data.hcloud_image.rocky.name
      user_data    = local.cloud_init_config
      ssh_keys     = [module.ssh_key.ssh_key_id]
      firewall_ids = [module.firewall.firewall_ids[local.resource_names.website]]
      networks = [{
        network_id = module.network.network_id
        ip         = "10.100.1.10"
      }]
      labels = {
        service = "website"
      }
    }
  }

  common_labels = local.common_labels
}

I chose the cx23 server type as it's the cheapest option available and costs me less than $5 per month in the Helsinki (hel1) region. Its specifications are more than enough for a static website.

Notice how I'm passing variables from previous modules dynamically — the SSH key ID, firewall ID, and network ID are all referenced from their respective module outputs. This eliminates manual configuration and reduces the chance of errors.
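If you ever want to verify what those references resolve to after an apply, terraform console can evaluate the module outputs directly:

# Evaluate module outputs from the root module directory (requires existing state).
echo 'module.network.network_id'    | terraform console
echo 'module.ssh_key.ssh_key_id'    | terraform console
echo 'module.firewall.firewall_ids' | terraform console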

Complete Configuration

Here's the full Terraform configuration with all the pieces together:

locals {
  project_slug = "mikula-dev"

  common_labels = {
    environment = "production"
    project     = local.project_slug
    managed_by  = "terraform"
  }

  resource_names = {
    website = "${local.project_slug}-web"
  }

  cloud_init_config = templatefile("${path.module}/cloud-init.tpl", {
    ansible_ssh_public_key = var.ansible_user_ssh_public_key
  })

  cloudflare_ipv4_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v4.response_body))
  cloudflare_ipv6_cidrs = split("\n", trimspace(data.http.cloudflare_ips_v6.response_body))
  cloudflare_all_ips    = concat(local.cloudflare_ipv4_cidrs, local.cloudflare_ipv6_cidrs)
}

# Fetch Cloudflare IP ranges for firewall rules
data "http" "cloudflare_ips_v4" {
  url = "https://www.cloudflare.com/ips-v4"
}

data "http" "cloudflare_ips_v6" {
  url = "https://www.cloudflare.com/ips-v6"
}

module "network" {
  source  = "danylomikula/network/hcloud"
  version = "1.0.0"

  create_network = true
  name           = local.project_slug
  ip_range       = "10.100.0.0/16"

  labels = local.common_labels

  subnets = {
    web = {
      type         = "cloud"
      network_zone = "eu-central"
      ip_range     = "10.100.1.0/24"
    }
  }
}

module "ssh_key" {
  source  = "danylomikula/ssh-key/hcloud"
  version = "1.0.0"

  create_key = true
  name       = local.project_slug

  save_private_key_locally = true
  local_key_directory      = path.module

  labels = local.common_labels
}

module "firewall" {
  source  = "danylomikula/firewall/hcloud"
  version = "1.0.0"

  firewalls = {
    "${local.resource_names.website}" = {
      rules = [
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "22"
          source_ips  = [var.my_homelab_ip]
          description = "allow ssh"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "80"
          source_ips  = local.cloudflare_all_ips
          description = "allow http from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "tcp"
          port        = "443"
          source_ips  = local.cloudflare_all_ips
          description = "allow https from cloudflare"
        },
        {
          direction   = "in"
          protocol    = "icmp"
          source_ips  = ["0.0.0.0/0", "::/0"]
          description = "allow ping"
        }
      ]
      labels = {
        service = "firewall"
      }
    }
  }

  common_labels = local.common_labels
}

module "servers" {
  source  = "danylomikula/server/hcloud"
  version = "1.0.0"

  servers = {
    "${local.resource_names.website}" = {
      server_type  = "cx23"
      location     = "hel1"
      image        = data.hcloud_image.rocky.name
      user_data    = local.cloud_init_config
      ssh_keys     = [module.ssh_key.ssh_key_id]
      firewall_ids = [module.firewall.firewall_ids[local.resource_names.website]]
      networks = [{
        network_id = module.network.network_id
        ip         = "10.100.1.10"
      }]
      labels = {
        service = "website"
      }
    }
  }

  common_labels = local.common_labels
}

With this configuration, running terraform apply provisions the complete infrastructure in just a few minutes.
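The workflow itself is plain Terraform. The only prerequisite is a Hetzner Cloud API token, which the hcloud provider can pick up from the HCLOUD_TOKEN environment variable (or from a provider block, if you prefer):

export HCLOUD_TOKEN="<your-hetzner-api-token>"
terraform init    # download the hcloud provider and the registry modules
terraform plan    # review the planned network, firewall, SSH key, and server
terraform apply   # provision everything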

Server Bootstrapping with Ansible

Now let's look at bootstrapping the actual website. For this, I'm using an Ansible collection that I also created and published publicly: ansible-hugo-deploy.

For the operating system, I chose Rocky Linux 10, and for the web server, Caddy.

The Ansible collection handles the complete deployment pipeline:

  • Hugo Static Site Deployment — automated cloning and building of Hugo websites
  • Custom Caddy Build — compiles Caddy with custom plugins from source
  • SSL/TLS Automation — automatic HTTPS certificates via Let's Encrypt with Cloudflare DNS challenge
  • Built-in Rate Limiting — protection against bots and abuse
  • Cloudflare Integration — DNS-01 ACME challenge support
  • GitHub Deploy Key Generation — automatic SSH key generation for secure repository access
  • Automated Updates — systemd timer for periodic Git pulls and site rebuilds
  • Firewall Configuration — automated firewalld setup with sensible defaults
  • Version Pinning — full control over Hugo, Caddy, and Go versions
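To give a sense of how the collection is driven, here's a hypothetical invocation. The repository URL, playbook name, and inventory layout are placeholders rather than the collection's documented entry points:

# Names below are illustrative only.
ansible-galaxy collection install git+https://github.com/danylomikula/ansible-hugo-deploy.git
ansible-playbook -i inventory/production.yml deploy.yml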

My Ansible Configuration

Here's my complete configuration:

---
# Domain configuration.
domain: "mikula.dev"
admin_email: "admin@{{ domain }}"

# Git repository for website source.
website_repo_url: "git@github.com:danylomikula/mikula.dev.git"
website_repo_branch: "master"

# Web content paths.
website_root: "/var/www/{{ domain }}"
caddy_log_path: "/var/log/caddy"
website_public_dir: "{{ website_root }}/public"

# Deploy SSH key configuration.
deploy_ssh_key_user: "caddy"
deploy_ssh_key_group: "{{ deploy_ssh_key_user }}"
deploy_ssh_key_dir: "/var/lib/{{ deploy_ssh_key_user }}/.ssh"
deploy_ssh_key_path: "{{ deploy_ssh_key_dir }}/deploy_key"
deploy_ssh_key_type: "ed25519"
deploy_ssh_key_comment: "{{ domain }}-deploy-key"

# Website rebuild configuration.
webrebuild_schedule: "*-*-* 04:00:00"
webrebuild_boot_delay: "180"
webrebuild_service_user: "caddy"
webrebuild_service_group: "caddy"
webrebuild_commands:
  - "git pull origin {{ website_repo_branch }}"
  - "hugo --gc --minify"

hugo_version: "0.152.2"

# Caddy configuration.
caddy_version: "2.10.2"
caddy_go_version: "1.25.4"
caddy_modules:
  - github.com/mholt/caddy-ratelimit
  - github.com/caddy-dns/cloudflare

caddy_rate_limit:
  enabled: true
  events: 60
  window: "1m"

caddy_compression_formats:
  - gzip
  - zstd

# DNS / ACME configuration.
cloudflare_api_token: "{{ vault_cloudflare_api_token }}"
caddy_acme_ca: "https://acme-v02.api.letsencrypt.org/directory"

# Firewall configuration.
firewall_zone: "public"
firewall_allowed_services:
  - ssh
  - http
  - https
firewall_allowed_ports: []
firewall_allowed_icmp: true
firewall_allowed_icmp_types:
  - echo-request

Custom Caddy Build with Plugins

Since I'm using Cloudflare with proxy enabled, the standard Caddy build isn't enough for automatic certificate provisioning. I need the caddy-dns/cloudflare module to pass the DNS-01 ACME challenge for certificate verification.

Since I'm already building a custom Caddy binary, I decided to add another useful module — caddy-ratelimit for rate limiting protection against bots and scanners.

The configuration for these modules is available in my Ansible playbook. If you don't want to use one of them or want to add additional modules, you can easily customize the caddy_modules list.
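The collection compiles the binary for you, but for reference, building the same module set by hand with xcaddy looks roughly like this:

# Manual equivalent of the automated build: Caddy 2.10.2 plus the two plugins.
go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest
xcaddy build v2.10.2 \
  --with github.com/caddy-dns/cloudflare \
  --with github.com/mholt/caddy-ratelimit \
  --output ./caddy
./caddy list-modules | grep -E 'cloudflare|rate'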

Automated Content Updates

We can now deploy the website, but one problem remains: how do we update the content automatically without manually logging into the server? I want to simply push to Git and have the website update itself after some time.

To solve this, I'm using GitHub deploy keys. These keys are read-only, meaning all they can do is read the content of the Git repository — nothing more.

The Ansible playbook generates this key automatically, prints the public part to the console, and pauses while you add it to your GitHub repository. After you confirm, it clones the repository, builds the site with Hugo, and lets Caddy serve the result.
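Using the variables from the configuration above, the manual equivalent of that generation step would be roughly:

# Generate the read-only deploy key by hand (matches the deploy_ssh_key_* defaults shown earlier).
sudo -u caddy ssh-keygen -t ed25519 \
  -f /var/lib/caddy/.ssh/deploy_key \
  -C "mikula.dev-deploy-key" -N ""
# Add this public key in the GitHub repo under Settings -> Deploy keys, leaving write access disabled.
sudo cat /var/lib/caddy/.ssh/deploy_key.pub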

systemd Timer for Periodic Updates

For periodic content updates, I use a simple systemd timer that runs every morning and updates the website with new content.

webrebuild.service:

[Unit]
Description=Rebuild website from Git repository
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
User={{ webrebuild_service_user }}
Group={{ webrebuild_service_group }}
WorkingDirectory={{ website_root }}
Environment=PATH={{ caddy_webserver_rebuild_path }}
{% for command in webrebuild_commands %}
ExecStart=/usr/bin/env bash -c "{{ command }}"
{% endfor %}
StandardOutput=journal
StandardError=journal

webrebuild.timer:

[Unit]
Description=Rebuild website daily
RefuseManualStart=no
RefuseManualStop=no

[Timer]
# Run {{ webrebuild_boot_delay }} seconds after boot for the first time.
OnBootSec={{ webrebuild_boot_delay }}
# Run daily at scheduled time.
OnCalendar={{ webrebuild_schedule }}
Unit=webrebuild.service

[Install]
WantedBy=timers.target

With this setup, every morning at 4:00 AM the timer triggers, pulls the latest changes from the repository, and rebuilds the site with Hugo. If I need an immediate update, I can always trigger it manually with sudo systemctl start webrebuild.service.
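Day-to-day operation boils down to a few systemctl commands:

sudo systemctl enable --now webrebuild.timer   # activate the schedule (likely already handled by the playbook)
systemctl list-timers webrebuild.timer         # see when the next rebuild is due
sudo systemctl start webrebuild.service        # trigger an immediate rebuild
journalctl -u webrebuild.service -n 50         # inspect the output of the last run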

Caddyfile Configuration

Caddy is configured through a single file, the Caddyfile. Here's the complete template:

# Caddyfile for {{ domain }}
# Managed by Ansible - do not edit manually.

{% if (cloudflare_api_token | length > 0) or (caddy_acme_ca | length > 0) %}
{
{% if caddy_acme_ca | length > 0 %}
    acme_ca {{ caddy_acme_ca }}
{% endif %}
{% if cloudflare_api_token | length > 0 %}
    acme_dns cloudflare {env.CLOUDFLARE_API_TOKEN}
{% endif %}
}
{% endif %}

www.{{ domain }} {
    # Redirect www to non-www domain.
    redir https://{{ domain }}{uri} permanent
}

{{ domain }} {
    # Root directory for static files.
    root * {{ website_public_dir }}

    # Enable static file server.
    file_server

{% if caddy_rate_limit.enabled | default(false) %}
    # Basic rate limiting per client IP to slow down bots/scanners.
    rate_limit {
        zone per_client {
            key {remote_ip}
            events {{ caddy_rate_limit.events }}
            window {{ caddy_rate_limit.window }}
        }
    }
{% endif %}

    # Enable compression.
    encode {% for format in caddy_compression_formats %}{{ format }} {% endfor %}

    # TLS configuration with admin email.
    tls {{ admin_email }}

    # Access logging.
    log {
        output file {{ caddy_log_path }}/access.log {
            roll_size 100MiB
            roll_local_time
            roll_keep_for 15d
        }
    }

    # Security headers.
    header {
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' https://www.googletagmanager.com https://static.cloudflareinsights.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: https://static.cloudflareinsights.com; font-src 'self' data:; frame-ancestors 'none'; object-src 'none'; base-uri 'self'; form-action 'self'; connect-src 'self' https://www.google-analytics.com https://www.googletagmanager.com https://static.cloudflareinsights.com https://cloudflareinsights.com"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}

This configuration includes:

  • www to non-www redirect — all traffic to www.mikula.dev is permanently redirected to mikula.dev
  • Static file serving — serves files from the Hugo build output directory
  • Rate limiting — limits requests per client IP to protect against abuse
  • Compression — gzip and zstd compression for better performance
  • Automatic TLS — certificates via Let's Encrypt with Cloudflare DNS challenge
  • Access logging — with automatic log rotation
  • Security headers — HSTS, CSP, and other security-related headers
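After any template change it's worth validating the rendered file before reloading. This assumes the conventional /etc/caddy/Caddyfile path, which may differ in this setup:

caddy fmt --overwrite /etc/caddy/Caddyfile    # normalize formatting
caddy validate --config /etc/caddy/Caddyfile  # check syntax and module configuration
sudo systemctl reload caddy                   # apply without dropping connections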

Conclusion

That's it! With this setup, I can deploy a fully functional, secure, and automated website infrastructure in about 15 minutes. The entire workflow is:

  1. Run terraform apply to provision the infrastructure
  2. Push content to the repository
  3. Run the Ansible playbook to configure the server
  4. Add the deploy key to GitHub

From that point on, the website updates itself automatically every day.

I hope this guide helps you set up your own website even faster than I did. Feel free to use my ready-made configurations as a starting point.

All the code is open source: the Terraform modules (terraform-hcloud-network, terraform-hcloud-firewall, terraform-hcloud-ssh-key, terraform-hcloud-server) and the Ansible collection (ansible-hugo-deploy) are published on my GitHub.


Have questions or suggestions? Feel free to reach out or open an issue on GitHub.
