
matt from bitLeaf.io

Originally published at bitleaf.io

Creating a DigitalOcean Droplet with Terraform - Part 3 of 3 - Cloud-init

In parts 1 and 2 of our Creating a DigitalOcean Droplet with Terraform series we set up our Terraform configuration and created a DigitalOcean droplet and volume. In this final part we are going to configure that droplet so that, when it gets created, the operating system is already set up the way we want it.

To set up the droplet's operating system as part of our Terraform configuration we are going to use cloud-init. There are different ways to go about this, but cloud-init is the standard way to configure cloud-based instances in an automated fashion. There is a lot you can do with it, and the cloud-init site has plenty of examples.

Our cloud-init setup is just another configuration file that we reference from our Terraform configuration. Cloud-init uses the YAML format, so when working with cloud-init files, make sure to watch your indentation.

So let's take a look at the cloud-init file and then we'll go through what it's doing in our example.

#cloud-config

package_update: true
package_upgrade: true
package_reboot_if_required: true

groups:
    - docker

users:
    - name: leaf
      lock_passwd: true
      shell: /bin/bash
      ssh_authorized_keys:
      - ${init_ssh_public_key}
      groups: docker
      sudo: ALL=(ALL) NOPASSWD:ALL

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - unattended-upgrades

runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
  - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose

We start off with the line...

#cloud-config

This line is critical: it tells cloud-init that we are using a cloud-config style file. Alternatively, you can pass a standard bash shell script, which starts with a shebang line instead, as in the sketch below.

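For comparison, here is a minimal sketch of the shell-script style of user data. It is not part of our setup, just an illustration; the packages chosen here are arbitrary.

#!/bin/bash
# Shell-script user data: runs once as root on first boot, instead of a cloud-config file
apt-get update -y
apt-get install -y curl
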
Our next three lines...

package_update: true
package_upgrade: true
package_reboot_if_required: true

These simply tell the operating system to update all packages to their latest versions on first boot and to reboot if any of those updates require it, which saves you the manual effort.

Now for the user/group information...

groups:
    - docker

users:
    - name: leaf
      lock_passwd: true
      shell: /bin/bash
      ssh_authorized_keys:
      - ${init_ssh_public_key}
      groups: docker
      sudo: ALL=(ALL) NOPASSWD:ALL

Now we are going to tell our operating system to create a new group 'docker', and a new user 'leaf'. For the user we set:

  • name: leaf ... This sets the username to 'leaf'
  • lock_passwd: true ... This disables password logins
  • ssh_authorized_keys ... This adds our DigitalOcean SSH key so you can log in as 'leaf' using your existing SSH key. The ${init_ssh_public_key} variable gets set when we add the cloud-init call to our Terraform configuration.
  • groups: docker ... This assigns the user to the 'docker' group
  • sudo: ALL=(ALL) NOPASSWD:ALL ... This gives the user passwordless sudo access by adding an entry for it to the sudoers configuration

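Once the droplet is up, a quick way to confirm the user and group came out as intended is to SSH in as 'leaf' and run a couple of standard checks (the username is the one from our cloud-config above):

# Run on the droplet after logging in as 'leaf'
id leaf     # should list the 'docker' group
sudo -l     # should show (ALL) NOPASSWD: ALL
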
Now for the packages...

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - unattended-upgrades

Most of these just set up basic package management so that adding repositories and installing updates is easier and more secure. The last one, 'unattended-upgrades', is fantastic for cloud servers: it automatically installs security-related updates so you don't have to keep logging in and patching the server yourself (at least for security patches). You can of course add your own packages to the list.

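If you want to verify that unattended-upgrades is active once the droplet is running, here are a couple of quick checks; these rely on the standard Ubuntu service name and config location, so the exact output may vary by release:

# Run on the droplet
systemctl status unattended-upgrades       # service should be loaded and active
cat /etc/apt/apt.conf.d/20auto-upgrades    # periodic update/upgrade settings
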
The final piece does our docker and docker-compose installs...

runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
  - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose

These commands are simply the standard way to install Docker and docker-compose on Ubuntu. Nothing is different from what you would run manually; you just have each step as a separate YAML entry.

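After cloud-init finishes you can SSH in as 'leaf' and confirm that Docker and docker-compose landed as expected. A quick sketch of the checks (version output will depend on what the Docker repository currently ships):

# Run on the droplet as 'leaf'
docker --version               # Docker Engine from the docker-ce package
docker-compose --version       # should report 1.25.4 from the runcmd above
systemctl is-enabled docker    # should print 'enabled'
docker run --rm hello-world    # works without sudo because 'leaf' is in the docker group
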
So that's it for our cloud-init example. Now let's see how to have it run as part of our Terraform setup.


Back in Terraform land, we need to update our configuration to pull in our 'cloud-init.yaml' file.

Here is our updated 'droplet_volume.tf' file with our cloud-init pieces included. I'll highlight and discuss those pieces below.

# Specify the Terraform provider to use
provider "digitalocean" {
  token = var.do_token
}

data "template_file" "cloud-init-yaml" {
  template = file("${path.module}/files/cloud-init.yaml")
  vars = {
    init_ssh_public_key = file(var.ssh_public_key)
  }
}

# Setup a DO volume
resource "digitalocean_volume" "bitleaf_volume_1" {
  region = "nyc3"
  name = "bitleaf-volume-1"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 1"
}

# Setup a second DO volume
resource "digitalocean_volume" "bitleaf_volume_2" {
  region = "nyc3"
  name = "bitleaf-volume-2"
  size = 5
  initial_filesystem_type = "ext4"
  description = "bitleaf volume 2"
}

# Setup a DO droplet
resource "digitalocean_droplet" "bitleaf_server_1" {
  image = var.droplet_image
  name = "bitleaf-server-1"
  region = var.region
  size = var.droplet_size
  private_networking = var.private_networking
  ssh_keys = [
    var.ssh_key_fingerprint
  ]
  user_data = data.template_file.cloud-init-yaml.rendered
}

# Connect the volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_1" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_1.id
}

# Connect the second volume to the droplet
resource "digitalocean_volume_attachment" "bitleaf_volume_2" {
  droplet_id = digitalocean_droplet.bitleaf_server_1.id
  volume_id = digitalocean_volume.bitleaf_volume_2.id
}

# Output the public IP address of the new droplet
output "public_ip_server" {
  value = digitalocean_droplet.bitleaf_server_1.ipv4_address
}

Most of our Terraform configuration is the same. We did add one new block.

data "template_file" "cloud-init-yaml" {
  template = file("${path.module}/files/cloud-init.yaml")
  vars = {
    init_ssh_public_key = file(var.ssh_public_key)
  }
}

This is a data block. In my setup the 'cloud-init.yaml' file lives in a 'files' directory, so the template argument is just reading that file in. The reason it's called a 'template' is that it allows us to replace variable entries in our cloud-init file, which in our case is how we put in our SSH key. In the vars map we set the init_ssh_public_key variable to our local public key, and the file() function reads the contents of the file at the path we specified for ssh_public_key in our 'variables.tf' file.

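As a side note, template_file comes from the separate 'template' provider. If you are on Terraform 0.12 or newer, the built-in templatefile() function can render the same file with the same variables and no extra provider. A minimal sketch of that alternative, using the same path and variable name as above:

# Alternative: render the cloud-init file with the built-in templatefile() function
locals {
  cloud_init_rendered = templatefile("${path.module}/files/cloud-init.yaml", {
    init_ssh_public_key = file(var.ssh_public_key)
  })
}

You would then reference local.cloud_init_rendered wherever the rendered template is needed.
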
The other thing we added to our Terraform configuration is

user_data = data.template_file.cloud-init-yaml.rendered

under the droplet resource block. This populates the DigitalOcean user_data property with the contents of our rendered 'cloud-init.yaml' file. 'Rendered' just means that the template was loaded and any variable substitutions have been made.

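With those pieces in place, creating the customized droplet is the same plan and apply workflow from the earlier parts. A quick sketch, run from your Terraform directory (the terraform init is needed if you haven't already initialized the template provider used by the data block):

terraform init     # pulls in the providers, including 'template' for the data block
terraform plan     # review the droplet, volumes, and user_data changes
terraform apply    # create everything on DigitalOcean
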
That's it. Just those couple of changes to our Terraform configuration and we'll have a nicely customized Droplet ready to go.

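Once the droplet comes up, it's worth confirming that cloud-init actually ran to completion before relying on the server. A couple of checks using cloud-init's own tooling (the log path is the cloud-init default on Ubuntu):

# Run on the droplet
cloud-init status --wait                         # blocks until cloud-init finishes, then prints the result
sudo tail -n 50 /var/log/cloud-init-output.log   # output from the package installs and runcmd steps
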
I found the best way to learn what you can do with cloud-init is to check out the cloud-init examples page. The great thing about cloud-init is that it is a standard supported by many cloud providers; nothing here is specific to one provider, so your knowledge will be portable.
