Managing Infrastructure as Code can be challenging, especially within a team. Terraform is a powerful tool for managing infrastructure resources, and we briefly described it in one of our previous blog posts, but keeping track of the current state of your infrastructure can be tricky when several engineers work on it at once.
This is where Terraform state comes in. Terraform state is a snapshot of your infrastructure that, by default, is stored as a file on your local machine. This file contains information about the resources you've created, their dependencies, and their current configuration.
The Terraform state file is essential for managing your infrastructure, as it allows Terraform to determine which changes need to be applied to your resources. Without a valid state file, Terraform wouldn't be able to manage your infrastructure resources properly.
In essence, it's just a JSON file that acts as a map, telling Terraform what it has already built and what it still needs to build. It's crucial to keep the state file safe and up to date, because if Terraform doesn't know what it has already built, it might accidentally create duplicate resources or overwrite existing ones, which could cause all kinds of problems.
You can store the state file locally, but that approach has a problem: it's not easily shareable. If you're working in a team of engineers, everyone needs access to the same state file, and if it lives on one person's machine, that's difficult. That's why we usually store the state file remotely, in services like AWS S3, HashiCorp Consul or Azure Blob Storage.
In this post we will demonstrate how to set up an Azure Blob Storage backend for your Terraform state file. For that we will need to create a resource group and a storage account. Of course, you will need an Azure subscription; if you don't have one already, you can create a free account. We could create these resources via the Azure portal, but since we are talking about Infrastructure as Code, let's use Terraform to create them as well. In your code editor, create a main.tf file with the following code:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.44.1"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "random_string" "resource_code" {
  length  = 5
  special = false
  upper   = false
}

resource "azurerm_resource_group" "tfstate" {
  name     = "tfstate"
  location = "West Europe"
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "tfstate${random_string.resource_code.result}"
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  tags = {
    environment = "demo"
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}
Make sure you are logged in to your Azure account via the Azure CLI. Now that we have the Terraform configuration, run terraform init and then terraform apply to create these resources. To verify that the resources have been provisioned, go to the Azure portal and navigate to the Resource groups section, where you should see the tfstate resource group containing a storage account named tfstate followed by a 5-character random string.
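Assuming you have the Azure CLI installed and a default subscription selected, the full sequence looks roughly like this:

az login         # log in to your Azure account
terraform init   # initialize the working directory and download the azurerm and random providers
terraform apply  # review the plan and confirm with "yes" to create the resources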
Let's take a closer look at the storage account's containers: the tfstate container should be empty, with no blobs. Why is that? Currently, our state is stored locally in the terraform.tfstate file, which keeps track of our resources, and that leads us to a chicken-and-egg problem when it comes to state management. So what is the solution?
Let's create a backend.tf file with the following code:
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate"
    storage_account_name = "tfstate<RANDOM-STRING>"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
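As a side note, if you prefer not to hardcode the generated storage account name, Terraform also supports partial backend configuration: you can leave most of the backend "azurerm" block empty in backend.tf and pass the values at init time instead, for example:

terraform init \
  -backend-config="resource_group_name=tfstate" \
  -backend-config="storage_account_name=tfstate<RANDOM-STRING>" \
  -backend-config="container_name=tfstate" \
  -backend-config="key=terraform.tfstate"

For this walkthrough, we'll keep the values directly in backend.tf.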
Now run terraform init again, and if you are using the latest Terraform version you should receive a prompt similar to:
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "local" backend to the newly configured "azurerm" backend.
No existing state was found in the newly configured "azurerm" backend.
Do you want to copy this state to the new "azurerm" backend? Enter "yes" to copy and "no" to start with an empty state.
Type yes and check the tfstate container in the Azure portal. Your Terraform state is now successfully stored remotely in Azure Blob Storage.
If you did not receive this prompt, you can push the local state to the new backend manually using the command terraform state push terraform.tfstate.
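Depending on your Terraform version, you may also need to request the migration explicitly. Either of the following is a reasonable fallback here:

terraform init -migrate-state            # explicitly ask Terraform to migrate existing state to the new backend
terraform state push terraform.tfstate   # or push the local state file to the configured remote backend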
To verify that everything is set up correctly, delete the local terraform.tfstate file and run terraform state list. You should receive output like this:
azurerm_resource_group.tfstate
azurerm_storage_account.tfstate
azurerm_storage_container.tfstate
random_string.resource_code
Let's cover one more topic before we conclude. An important aspect of Terraform state management is state locking. State locking prevents multiple Terraform runs from modifying the state file at the same time, which could cause issues and inconsistencies in your infrastructure.
So, how do we implement state locking when using Azure as the backend for our Terraform state file? The good news is that Azure Blob Storage supports state locking for Terraform using its native capabilities: the state blob is automatically locked (via a blob lease) before any operation that writes state, which prevents concurrent state operations from corrupting it.
In other words, we don't need any additional configuration; locking comes out of the box when using Azure Blob Storage as the Terraform backend.
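In practice, if another run already holds the lock, your terraform plan or terraform apply will fail with an error acquiring the state lock. Two commands that are useful in that situation (the <LOCK-ID> placeholder comes from the error message):

terraform apply -lock-timeout=5m   # wait up to 5 minutes for the lock to be released instead of failing immediately
terraform force-unlock <LOCK-ID>   # last resort: release a stale lock, only if you are sure no other run is active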