Joseph D. Marhee

DigitalOcean Spaces as a Terraform Backend

Amazon Web Services' S3 API has become a de facto interface for object storage: options like Minio let you run an S3-tooling-compatible object storage service on your own infrastructure, and hosted options like DigitalOcean Spaces expose new endpoints that work with the common AWS S3 tooling, reusing the same components like the authentication scheme, bucket namespacing, and so on.

A common use case for DigitalOcean users (and other IaaS consumers) is storing Infrastructure-as-Code state data consistently, in a reliable and durable manner, either alongside their infrastructure (if, for example, the Terraform-generated resources also run on DigitalOcean) or in-resource (if you run, for example, Minio on a standalone set of instances or inside a Kubernetes cluster), among other options for state backends that aren't coupled to your infrastructure in the same way.

Your use of Terraform in existing configurations doesn't need to change drastically; in this case, you're just adding a terraform block with a backend configuration, alongside your usual provider block. What you'll need is your DigitalOcean Space endpoint URL (for example, nyc3.digitaloceanspaces.com for a Space in the NYC3 region), and your access token and secret key from https://cloud.digitalocean.com/settings/api/tokens under the Spaces section.

Keep in mind that these credentials are separate from your DigitalOcean API token used for resources like compute and block storage, and are used only for access to the object storage buckets themselves.

For our purposes here, we'll first cover managing these credentials within Terraform; since you may be using S3-compatible tooling elsewhere (e.g. awscli or boto) to manage object store data, I'll cover that approach afterwards as well.
For reasons I'll explain in a moment, rather than storing your Spaces credentials in terraform.tfvars as you normally might (i.e. alongside your DigitalOcean provider token), you'll store them elsewhere. In my example, I'll store them as environment variables:

export SPACES_ACCESS_TOKEN=""
export SPACES_SECRET_KEY=""

In your Terraform script, or in a file in your repo called provider.tf, you'll have a block like this to connect to DigitalOcean's API:

variable "do_token" {}
provider "digitalocean" {
  token = "${var.do_token}"
}
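
As an aside, a minimal terraform.tfvars for the provider token above might look like this (the value shown is just a placeholder):

do_token = "your-digitalocean-api-token"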

Great. But when Terraform runs, it will write all of its state data to a terraform.tfstate file in your project root, which remains locally stored (and is a huge pain to check into version control reliably). So, let's set up a block to store our state data remotely using the S3-compatible Spaces bucket we just created.

Because Terraform does not allow for interpolation in the backend block for state storage, you'll need to do a couple more things before you can store your state remotely. In your Terraform, you can provide a block like:

terraform {
  backend "s3" {
    # Your region's Spaces endpoint
    endpoint = "nyc3.digitaloceanspaces.com"
    # Refers to an AWS S3 region; required by the backend, but effectively ignored by Spaces
    region = "us-west-1"
    # Name of the state file stored in the bucket
    key = "terraform.tfstate"
    # Skip AWS-specific checks that don't apply to Spaces
    skip_requesting_account_id = true
    skip_credentials_validation = true
    skip_get_ec2_platforms = true
    skip_metadata_api_check = true
  }
}

where endpoint is your region's Spaces endpoint (region refers to an AWS S3 region; you can leave it set to us-west-1, as it will effectively be ignored). Then, when you initialize Terraform, you can provide your credentials for Spaces (the access token and secret key) to the backend provider:

terraform init \
 -backend-config="access_key=$SPACES_ACCESS_TOKEN" \
 -backend-config="secret_key=$SPACES_SECRET_KEY" \
 -backend-config="bucket=$SPACE_BUCKET_NAME"

Note: the above can be used to set any key/value pair from the backend config you'd like, so you can define any of the keys above (e.g. endpoint) in the init command as well, not just the credentials and the bucket name.
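
For example, a sketch that also passes the endpoint at init time instead of hardcoding it in the backend block:

terraform init \
 -backend-config="endpoint=nyc3.digitaloceanspaces.com" \
 -backend-config="access_key=$SPACES_ACCESS_TOKEN" \
 -backend-config="secret_key=$SPACES_SECRET_KEY" \
 -backend-config="bucket=$SPACE_BUCKET_NAME"
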
Once initialized with the Spaces credentials, you can terraform plan and apply the rest of your changes as usual.
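
As a quick sketch of a typical run, assuming your DigitalOcean API token is exported in your shell as DO_TOKEN (a hypothetical variable name):

# Pass the API token to the do_token variable declared earlier
terraform plan -var "do_token=$DO_TOKEN"
terraform apply -var "do_token=$DO_TOKEN"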

There are also alternate methods for handling credentials.
You can also set these three items as environment variables on your local machine (AWS_S3_ENDPOINT, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY, respectively), and these values will be detected when you reference the S3 backend in Terraform. You may be familiar with this approach if you also use tooling like awscli or the boto client package, which, thanks to S3 compatibility, also work with Spaces. This can be used in place of the -backend-config flags above; the variables only need to be sourced in your shell environment when the Terraform CLI is run.
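
A minimal sketch of that approach, reusing the variables from earlier:

# Detected by the S3 backend at init time
export AWS_S3_ENDPOINT="nyc3.digitaloceanspaces.com"
export AWS_ACCESS_KEY_ID="$SPACES_ACCESS_TOKEN"
export AWS_SECRET_ACCESS_KEY="$SPACES_SECRET_KEY"

terraform init -backend-config="bucket=$SPACE_BUCKET_NAME"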

Alternatively, using a combination of the two approaches, you can reuse a ~/.aws/credentials file that you may already have locally for these S3 tools. Since the backend block doesn't allow interpolation, pass the shared_credentials_file key:

-backend-config="shared_credentials_file=$HOME/.aws/credentials"

to your Terraform init command, as we did above.

I've created a sample Terraform-managed deployment that uses this methodology to store its state in DigitalOcean Spaces:

jmarhee/digitalocean-spaces-backend on Bitbucket

Latest comments (1)

Benoit Coudour:

It only works if you add a "/" to the endpoint.

endpoint = "nyc3.digitaloceanspaces.com/"