Using Terraform to deploy a web site to a DigitalOcean droplet with Cloudflare


It took me way too long to find this golden combination.

Here are the goals:

  • The web files deploy to an nginx droplet.
  • Cloudflare manages DNS for the domain.
  • The www subdomain redirects to the apex domain.
  • Cloudflare handles SSL.
  • HTTP connections are always upgraded to HTTPS.
  • Redeployments happen with zero downtime.

This isn't everything I want, but it's worth posting what works so far.

main.tf

I'll paste it in sections:

### Setup


```hcl
terraform {
    required_version = ">= 1.0.0"
    required_providers {
        digitalocean = {
            source  = "digitalocean/digitalocean"
            version = ">= 2.0"
        }
        cloudflare = {
            source  = "cloudflare/cloudflare"
            version = "~> 4.0"
        }
    }
}

# DigitalOcean provider
provider "digitalocean" {
    token = var.digitalocean_token
}

# Cloudflare provider
provider "cloudflare" {
    email   = var.cloudflare_email
    api_key = var.cloudflare_api_key
}
```

You probably have all the provider stuff squared away, if you've arrived here from a Google search. But it's here for completeness.
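The variables referenced throughout (`digitalocean_token`, `cloudflare_email`, `cloudflare_api_key`, `cloudflare_zone_id`) need declarations somewhere. A minimal `variables.tf` sketch, if you don't already have one:

```hcl
# variables.tf -- declarations for the variables used in main.tf
variable "digitalocean_token" {
    type      = string
    sensitive = true
}

variable "cloudflare_email" {
    type = string
}

variable "cloudflare_api_key" {
    type      = string
    sensitive = true
}

variable "cloudflare_zone_id" {
    type = string
}
```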

### Cloudflare DNS

resource "cloudflare_record" "main_app" {
    zone_id = var.cloudflare_zone_id
    name = "example.com"
    content = digitalocean_reserved_ip.main_ip.ip_address
    type = "A"
    proxied = true
}

resource "cloudflare_record" "www_cname" {
    zone_id = var.cloudflare_zone_id
    name = "www"
    type = "CNAME"
    content = "example.com"
    proxied = true
}
Enter fullscreen mode Exit fullscreen mode

The first record is the important one. It points the domain at the static IP address of the DigitalOcean droplet.

DigitalOcean calls this a "Reserved IP", but in the past it was called a "Floating IP". FYI.

The CNAME record makes the www subdomain resolve to the same place as the apex domain, so Cloudflare can proxy it. It doesn't do the redirect by itself; that's handled by the ruleset below. But without this record, requests to www wouldn't resolve at all.

`proxied = true` routes the traffic through Cloudflare's edge instead of exposing the origin IP directly. That's what lets Cloudflare terminate SSL and apply the redirect rules below.
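Side note: if you'd rather not pass the zone ID in as a variable, the Cloudflare provider has a `cloudflare_zone` data source that can look it up by domain name. A sketch of that approach (not what I'm using above):

```hcl
# Look up the zone by name instead of hard-coding the zone ID.
data "cloudflare_zone" "main" {
    name = "example.com"
}

# Then use data.cloudflare_zone.main.id wherever var.cloudflare_zone_id appears.
```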

### Cloudflare SSL

resource "cloudflare_zone_settings_override" "https_redirect" {
zone_id = var.cloudflare_zone_id
settings {
always_use_https = "on" # This ensures HTTP is upgraded to HTTPS
}
}

resource "cloudflare_ruleset" "redirect_from_value_example" {
zone_id = var.cloudflare_zone_id
name = "www"
description = "Redirects ruleset"
kind = "zone"
phase = "http_request_dynamic_redirect"

rules {
    action = "redirect"
    action_parameters {
        from_value {
            status_code = 301
            target_url {
                expression = "concat(\"https://example.com\", http.request.uri.path)"
            }
            preserve_query_string = true
        }
    }
    expression  = "(starts_with(http.host, \"www.\"))"
    description = "Redirect www to non-www"
    enabled = true
}
Enter fullscreen mode Exit fullscreen mode

}


[`always_use_https = "on"`](https://developers.cloudflare.com/ssl/edge-certificates/additional-options/always-use-https/):

> Always Use HTTPS redirects all your visitor requests from http to https, for all subdomains and hosts in your application.

Good stuff.

Now, about that `ruleset`. It works, but I'm not confident it's the ideal solution. I feel like the provider should have a dedicated "Redirect Rules" resource, because that's a thing in the dashboard, but apparently you get either Page Rules or raw rulesets here. Please comment if you know what it should be.

Anyway, this does a `301` redirect from `www` to the apex domain. Paths and query strings actually survive the redirect: the `target_url` expression appends `http.request.uri.path`, and `preserve_query_string = true` keeps the query string. So even in the rare case where someone manually edits a path-bearing URL to add a `www` and sends it along without checking it, they'll still land on the right page.

### DigitalOcean project

```hcl
data "digitalocean_project" "project" {
    name = "My Project"
}

resource "digitalocean_project_resources" "main_app_project" {
    project = data.digitalocean_project.project.id
    resources = [
        "do:droplet:${digitalocean_droplet.main_app.id}",
        "do:floatingip:${digitalocean_reserved_ip.main_ip.id}"
    ]
}
```


I have a dedicated project that I'm creating these resources in. Without this, the resources would be created in the default project.
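If you want Terraform to own the project as well, swapping the data source for a resource is straightforward. A sketch, with the description/purpose/environment values being placeholders of my own:

```hcl
# Create the project with Terraform instead of looking up an existing one.
resource "digitalocean_project" "project" {
    name        = "My Project"
    description = "Main web app"
    purpose     = "Web Application"
    environment = "Production"
}
```

You'd then reference `digitalocean_project.project.id` instead of `data.digitalocean_project.project.id`.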

### The Reserved IP Address

```hcl
resource "digitalocean_reserved_ip" "main_ip" {
    droplet_id = digitalocean_droplet.main_app.id
    region     = digitalocean_droplet.main_app.region
}
```


This `digitalocean_reserved_ip` is what worked for me. I tried getting `digitalocean_reserved_ip_assignment` to work, but it always deregistered the reserved IP from the live droplet before the replacement droplet was provisioned, which meant _minutes_ of downtime on every deploy. Completely unacceptable.
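For reference, the assignment-based layout I abandoned looked roughly like this (a standalone reserved IP plus a separate assignment resource, in place of the block above):

```hcl
# The approach that caused downtime for me: the IP is created unassigned,
# and a separate resource attaches it to the droplet.
resource "digitalocean_reserved_ip" "main_ip" {
    region = "nyc1"
}

resource "digitalocean_reserved_ip_assignment" "main_ip_assignment" {
    ip_address = digitalocean_reserved_ip.main_ip.ip_address
    droplet_id = digitalocean_droplet.main_app.id
}
```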

### SSH Key
```hcl
resource "digitalocean_ssh_key" "my_key" {
    name       = "mykey"
    public_key = file("~/.ssh/mykey.pub")
}
```

This SSH key lets me SSH into the droplet; it's also the key the provisioners below use to connect.
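If the key already exists in your DigitalOcean account (uploaded by hand at some point), a data source works too, and Terraform won't try to manage it:

```hcl
# Reference an SSH key that already exists in the DigitalOcean account.
data "digitalocean_ssh_key" "my_key" {
    name = "mykey"
}

# Pass data.digitalocean_ssh_key.my_key.id to the droplet's ssh_keys instead.
```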

### Resource-Based Invalidation
```hcl
data "external" "hashes" {
    program = ["bash", "${path.module}/generate_hash.sh"]
}
```

When I change the html files _or_ the initialization script (see below), I want that to trigger a replacement of the droplet. Terraform won't do this on its own: provisioners only run when a resource is created, so editing the files they copy or execute doesn't change the plan at all. Weird, I know. So this is the hack around it.

We hash all the input files and feed the resulting hash into `user_data`, which _does_ force a replacement when it changes.

Here is `generate_hash.sh`:
```sh
#!/bin/bash

# Compute hash of all files in the HTML directory
HTML_HASH=$(find ../web/dist -type f -exec sha256sum {} \; | sha256sum | awk '{print $1}')

# Compute hash of the main-server-init.sh script
SCRIPT_HASH=$(sha256sum ./setup/main-server-init.sh | awk '{print $1}')

# Combine the two hashes
COMBINED_HASH=$(echo -n "${HTML_HASH}${SCRIPT_HASH}" | sha256sum | awk '{print $1}')

# Output the combined hash in JSON format
echo "{\"inputs_hash\": \"$COMBINED_HASH\"}"
```


### The droplet
```hcl
resource "digitalocean_droplet" "main_app" {
    name     = "main-app"
    region   = "nyc1"
    size     = "s-1vcpu-1gb"
    image    = "ubuntu-24-04-x64"
    ssh_keys = [digitalocean_ssh_key.my_key.id]

    # Trigger recreation on changes to input files.
    user_data = "# inputs_hash: ${data.external.hashes.result.inputs_hash}"

    lifecycle {
        create_before_destroy = true
    }

    connection {
        type        = "ssh"
        user        = "root"
        private_key = file("~/.ssh/mykey")
        host        = self.ipv4_address
        timeout     = "5m"
    }

    provisioner "remote-exec" {
        inline = [
            "mkdir -p /tmp/html",
        ]
    }

    provisioner "file" {
        source      = "${path.module}/../web/dist/"
        destination = "/tmp/html"
    }

    provisioner "remote-exec" {
        script = "setup/main-server-init.sh"
    }
}
```


The `user_data` value now only carries the invalidation hash. There's no cloud-init script in it, because I'm handling provisioning with a custom script (`setup/main-server-init.sh`) in the `remote-exec` provisioner block.

`lifecycle.create_before_destroy` is critical to getting a zero-downtime deploy. This way, when you run `terraform apply`, your current droplet will continue to serve requests while your new droplet is building. And only after it's ready does the Reserved IP switch over to the new droplet. So, an instant cutover.
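A couple of outputs are worth adding (not shown above) so that, after an apply, you can confirm the Reserved IP ended up on the new droplet:

```hcl
# Print both addresses after an apply to confirm the cutover.
output "reserved_ip" {
    value = digitalocean_reserved_ip.main_ip.ip_address
}

output "droplet_ipv4" {
    value = digitalocean_droplet.main_app.ipv4_address
}
```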

Here is `main-server-init.sh`:
```sh
#!/bin/bash

set -x # echo commands
set -e # fail on errors

# Upgrade to get security fixes, then install nginx and the
# DigitalOcean monitoring agent
export DEBIAN_FRONTEND=noninteractive
apt -q update -y
apt -q upgrade -y
apt -q install -y nginx
curl -sSL https://repos.insights.digitalocean.com/install.sh | bash

# Replace the default Nginx web root with the uploaded files
rm -rf /var/www/html
mv /tmp/html /var/www/html

# Enable and start Nginx
systemctl enable nginx
systemctl start nginx

# Configure the firewall
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw allow http
ufw allow https
ufw --force enable

# Restart Nginx to pick up the new files
systemctl restart nginx

echo "Server setup complete."
```




Not perfect, but it's what I have.


# Improvements

I'd like to separate the html files from the nginx init script, so that I can push updated files without recreating the whole droplet. Swapping static files in place is safe enough that it doesn't warrant a full rebuild.
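One way I might do that (untested sketch): move the file upload onto a `null_resource` keyed on the content hash, so changing the site only reruns the copy step, and let the droplet's `user_data` hash cover just the init script.

```hcl
# Re-upload the site files when their hash changes, without replacing the droplet.
resource "null_resource" "deploy_html" {
    triggers = {
        html_hash = data.external.hashes.result.inputs_hash
    }

    connection {
        type        = "ssh"
        user        = "root"
        private_key = file("~/.ssh/mykey")
        host        = digitalocean_droplet.main_app.ipv4_address
    }

    provisioner "file" {
        source      = "${path.module}/../web/dist/"
        destination = "/var/www/html"
    }
}
```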

The DigitalOcean droplet no longer shows its graphs in the console. [This](https://docs.digitalocean.com/products/monitoring/how-to/install-agent/) is how the monitoring agent gets installed (the init script above already runs that installer), but something, probably my `ufw` configuration, is getting in the way.

It could use a health check to make sure nginx started properly.
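A cheap version of that health check would be one more `remote-exec` step at the end of the droplet's provisioners, polling nginx before Terraform considers the droplet created, so a failed deploy shouldn't receive the Reserved IP. Roughly:

```hcl
# Fail the apply if nginx isn't answering locally within ~60 seconds.
provisioner "remote-exec" {
    inline = [
        "for i in $(seq 1 30); do curl -fsS http://localhost/ > /dev/null && exit 0; sleep 2; done; echo 'nginx never came up' >&2; exit 1",
    ]
}
```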

That's it!