Ekin Öcalan

What I've Learned Learning Terraform: Part 4

Terraform Series


Every time you run Terraform, it records information about what infrastructure it created in a Terraform state file. [1]

Managing Terraform State

Terraform state files are the backbone of your infrastructure. Terraform can only operate on the infrastructure it knows about, and the state file is the only way it knows what it has created.

Scoping Terraform to its state file is a good thing in the context of isolation. You may have other resources in the same provider account that you created manually; Terraform doesn't know about them, so you cannot accidentally change or destroy those resources.

On the other hand, it means that state files are the source of truth, and thus critical to your infrastructure-as-code. If you lose or corrupt your state file, you lose the ability to maintain your resources. That's why you need some sort of backup of your state against data loss, as well as versioning against data corruption.

A version control system might be the first solution that comes to mind, and it can indeed be a remedy for both loss and corruption. You should use version control for your Terraform code; however, you should not put your state files into version control, for two reasons:

  1. If multiple people are working on the same codebase, your state can get out of date, because picking up the latest changes requires a manual pull from version control.
  2. State files contain secrets in plain text. It's a bad idea to store them unencrypted.
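Instead, make sure state files never reach your repository while the rest of your Terraform code does. A minimal .gitignore sketch for a Terraform codebase (a common convention, not an official requirement):

# Keep local state and its backups out of version control;
# they may contain plain-text secrets
*.tfstate
*.tfstate.backup

# Local provider/plugin cache
.terraform/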

Using a remote backend instead of a local one

Fortunately, Terraform offers two ways to store state files: locally, which is the default, and remotely.

There are several advantages to storing state files remotely. If you use version control for your Terraform codebase and a remote backend for your state files, you no longer depend on your local environment. Even if you lose your machine, you can still work on your infrastructure with Terraform.

Remote state also enables collaboration. With some remote backends, you can set up a locking mechanism to prevent race conditions when multiple people run Terraform at once.
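For illustration, on AWS itself the s3 backend (which we'll use below for Spaces) can lock state through a DynamoDB table. A minimal sketch with hypothetical bucket and table names; note that DigitalOcean Spaces offers no equivalent locking service:

terraform {
  backend "s3" {
    bucket         = "my-state-bucket"   # hypothetical AWS bucket
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"   # hypothetical lock table; AWS-only feature
  }
}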

Most remote backends support encryption by default, so your secrets are stored encrypted at rest. You can also set access rules so that only your codebase can access the state programmatically, restricting third-party access.

Moving state files to DigitalOcean Spaces

DigitalOcean Spaces is an object storage product similar to AWS S3. In fact, it's S3-compatible [2]. That's why, even though Terraform's standard remote backends don't list Spaces as a backend type, we can use it through the s3 backend.
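That compatibility is easy to see in practice: most S3 tooling works against Spaces if you override the endpoint. A sketch using the AWS CLI to list the Space we'll create below, assuming your Spaces keys are exported as the standard AWS credential variables:

$ export AWS_ACCESS_KEY_ID=11111111111111111111
$ export AWS_SECRET_ACCESS_KEY=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
$ aws s3 ls s3://terraform-sandbox --endpoint-url https://ams3.digitaloceanspaces.com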

Creating a Space

While it would be fine to create a Space manually to store state files and point Terraform at it, let's create it with Terraform for this article's sake. Create a space.tf file and populate it with the code below:

resource "digitalocean_spaces_bucket" "tf_state" {
  name   = "terraform-sandbox"
  region = "ams3"
  acl    = "private"
  versioning {
    enabled = true
  }
}

Here we create a digitalocean_spaces_bucket resource. In AWS lingo, a bucket is a standalone storage unit. "tf_state" is the internal name of this resource within the codebase. Let's also take a look at the configuration parameters:

  • name: The bucket/space name. It should be unique within the region.
  • region: One of the datacenters listed in the Regional Availability Matrix.
  • acl: Access control list. public for public file listing, private for restriction. Since we are going to use this space to store state files, we use private.
  • versioning: Enabling versioning will allow us to restore the state file(s) to an older but working version in case of data corruption or human error (a sketch of browsing old versions follows this list).
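Since versioning goes through the same S3-compatible API, you could later browse a state file's versions with any S3 client. A sketch using the AWS CLI against the Spaces endpoint, assuming Spaces keys exported as AWS credentials, as noted earlier:

$ aws s3api list-object-versions \
    --bucket terraform-sandbox \
    --endpoint-url https://ams3.digitaloceanspaces.com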

Run terraform init and terraform apply. You'll see an execution plan like this:

  # digitalocean_spaces_bucket.tf_state will be created
  + resource "digitalocean_spaces_bucket" "tf_state" {
      + acl                = "private"
      + bucket_domain_name = (known after apply)
      + force_destroy      = false
      + id                 = (known after apply)
      + name               = "terraform-sandbox"
      + region             = "ams3"
      + urn                = (known after apply)

      + versioning {
          + enabled = true
        }
    }

To apply this plan, you'll need an access key and a secret key for Spaces. Go to the API page on DigitalOcean to create a pair. Then, on the command line, export these environment variables so Terraform can pick up the credentials for Spaces:

$ export SPACES_ACCESS_KEY_ID=11111111111111111111
$ export SPACES_SECRET_ACCESS_KEY=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
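If you prefer keeping everything in code, the digitalocean provider also accepts Spaces credentials as provider arguments; a sketch with hypothetical variable names (environment variables remain the safer default, since they keep secrets out of your files):

provider "digitalocean" {
  token             = var.do_token            # API token, as in earlier parts
  spaces_access_id  = var.spaces_access_key   # hypothetical variables you'd define
  spaces_secret_key = var.spaces_secret_key
}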

Configuring Terraform to use remote state on the Space

Now that we have a Space on DigitalOcean to store our state files, let's configure Terraform to use the state file there. During configuration, Terraform will ask whether we want to transfer our local state to the remote backend or start from scratch. Since we've already created some resources, we'll pick the transfer option to avoid creating duplicates.

Open main.tf and find the terraform block:

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "1.22.1"
    }
  }
}

What we're doing now directly affects how Terraform itself behaves. That's why we define our remote backend in this block:

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "1.22.1"
    }
  }

  backend "s3" {
    endpoint                    = "ams3.digitaloceanspaces.com"
    bucket                      = "terraform-sandbox"
    key                         = "terraform.tfstate"
    region                      = "ams3"
    access_key                  = "11111111111111111111"
    secret_key                  = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
  }
}

Notice how we defined our backend as s3. Our Space on DigitalOcean is S3-compatible, which is why we define our remote backend as if it were S3 storage. Let's go over the configuration parameters:

  • endpoint: You can find this at your Spaces page. At the time of this writing, the structure is region.digitaloceanspaces.com.
  • bucket: The name of your Space.
  • key: The full path to the state file on the remote backend.
  • region: Put your Space region here. We'll tell Terraform not to validate this value, since it would check it against AWS regions, not DigitalOcean ones.
  • access_key: Your Spaces access key.
  • secret_key: Your Spaces secret key.
  • skip_credentials_validation: We aren't using AWS credentials, so we skip their validation.
  • skip_metadata_api_check: We aren't running on AWS EC2, so we skip the metadata API check.
  • skip_region_validation: We aren't using AWS regions, so we skip region validation. Since region is a required parameter, Terraform would otherwise complain about our DigitalOcean region.

Now when you run terraform init, you'll see a question:

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend
  to the newly configured "s3" backend. No existing state was found in the
  newly configured "s3" backend. Do you want to copy this state to the new
  "s3" backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value:

As I mentioned before, we'll say yes. Then, Terraform will transfer our local state file to Spaces, and configure itself to use the remote state:

Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes.

From now on, Terraform will read state from the remote backend and write updated values back to it.
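A quick sanity check: running terraform plan again should now read the state from Spaces and report nothing to change (output abbreviated; exact wording varies by Terraform version):

$ terraform plan
No changes. Infrastructure is up-to-date.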

A note on the Terraform configuration values

As you may have noticed, we used plain-text sensitive values (the access key and secret key) when configuring our remote backend. That's because Terraform doesn't allow variables inside the terraform block. For the same reason, we couldn't populate the bucket, key, and region parameters from our digitalocean_spaces_bucket.tf_state resource definition.

One way to improve this is partial configuration. Remove access_key and secret_key from the configuration, then supply them as backend config values during initialization:

$ terraform init -backend-config="access_key=11111111111111111111" -backend-config="secret_key=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
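Partial configuration can also read from a file, which is handy when there are several values to supply. A sketch with a hypothetical file name; keep this file out of version control:

# backend.hcl
access_key = "11111111111111111111"
secret_key = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"

$ terraform init -backend-config=backend.hcl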

(Screenshot: the Terraform state file stored on DigitalOcean Spaces)

[1]: Terraform Up & Running: Writing Infrastructure as Code by Yevgeniy Brikman (2nd edition)
[2]: S3-Compatibility on Spaces Features

Cover photo by Lÿv Jaan


Part 3 · Part 5
