Introduction
Recently I’ve started The Big Migration™ from Ansible to Terraform in my homelab. But as soon as I began writing my first Terraform manifests, I started thinking about remote state storage too.
At first, I thought about storing the state file on a self-hosted MinIO instance. The problem is that the MinIO instance would be managed by the very same Terraform manifests - a chicken-and-egg situation.
Later on, I researched what else I could manage with Terraform, and found out that Backblaze B2, where I keep my backups anyway, is manageable with Terraform! That was the moment it clicked in my head - after all, B2 is S3-compatible, so I can use it as remote state storage for Terraform!
Prerequisites
- Backblaze B2 account, with:
  - a bucket in which you want to store the state file
  - an application key with permissions to manage this bucket
- Terraform installed on your machine
Terraform configuration
The basic config for storing the state file in S3 would look like this:
```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}
```
One might think that it’s enough to change the `region` to its B2 substitute, add the Backblaze `endpoint`, and we’re good to go. But it’s not that simple - B2 is only *almost* S3-compatible, so we have to take some extra steps.
What we need to do is skip some checks and validations that Backblaze B2 doesn’t support - and there are actually quite a lot of them.
A fully working example of Terraform configuration for storing the state file in Backblaze B2 would look like this:
```hcl
terraform {
  backend "s3" {
    bucket   = "my-terraform-state-bucket"
    key      = "terraform.tfstate"
    region   = "us-west-004"
    endpoint = "https://s3.us-west-004.backblazeb2.com"

    skip_credentials_validation = true
    skip_region_validation      = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
    skip_s3_checksum            = true
  }
}
```
Of course, you’ll have to replace the `bucket`, `region`, and `endpoint` values with the proper ones from your Backblaze B2 config.
As you can see, there’s no `access_key` or `secret_key` provided. That’s because I provide them through environment variables (and you should too!). The B2 application key goes into the `AWS_SECRET_ACCESS_KEY` env var, and the key ID into `AWS_ACCESS_KEY_ID`.
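Concretely, the setup could look like the sketch below. The key values are placeholders - substitute your real B2 key ID and application key:

```shell
# Placeholder values - substitute your real B2 keyID and applicationKey
export AWS_ACCESS_KEY_ID="004your-b2-key-id"
export AWS_SECRET_ACCESS_KEY="K004your-b2-application-key"

# Terraform's s3 backend picks these up from the environment automatically,
# so the next `terraform init` (run in your project directory) will use them.
```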
Some security considerations
State bucket
Keep it private. It’s not a good idea to make your state file publicly available, as it might contain secrets.
You might also want to enable versioning on this bucket. With versioning you can easily revert to the previous state if something goes wrong. I’ve seen Terraform go bananas a few times, so it’s a good idea to have this feature enabled.
Application key
Don’t use your master key for this. Create a separate application key with permissions to manage only this bucket. It’s a good practice to have separate keys for different tasks.
Don’t put your credentials in the Terraform code (or any code, really) - especially if you’re ever going to publish that code, e.g. on GitHub. One “oops” too many and your keys are leaked. Use environment variables to provide them to Terraform. I personally load them into env vars from 1Password with the `op` CLI tool.
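As a sketch, loading the credentials with the 1Password CLI could look like this. The vault, item, and field names below are hypothetical - adjust them to however you store the key in your own vault:

```shell
# Hypothetical vault/item/field names - adjust to your 1Password layout.
# Requires the `op` CLI to be installed and signed in; the guard makes
# the snippet a no-op otherwise.
if command -v op >/dev/null 2>&1; then
  export AWS_ACCESS_KEY_ID="$(op read 'op://Infra/backblaze-terraform/key-id')"
  export AWS_SECRET_ACCESS_KEY="$(op read 'op://Infra/backblaze-terraform/application-key')"
fi
```

This way the secrets live only in 1Password and in the current shell session, never on disk or in version control.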
Summary
It seems that Backblaze B2 is S3-compatible enough to be used as remote state storage for Terraform. Keeping your state file in remote storage is good practice anyway - it stays versioned and isn’t tied to your local machine. And if you already use B2 for backups, why not use it for the Terraform state file as well?