For this post I wanted to share how I prefer to structure my Terraform code for multiple environments and modules, and how I work with it locally. If you have a different approach then please feel free to leave a comment - I'm always happy to learn and improve.
First off, the folder structure. The infrastructure folder sits in the root alongside src, test, build and whatever else goes in the git repository.
.
├── infrastructure
│   ├── environments
│   │   ├── dev
│   │   ├── live
│   │   └── main
│   └── modules
Within an environment folder I would put local.backend.tfvars and local.secrets.tfvars files for local development purposes. These files should NOT be committed to git, so I always make sure to include them in the .gitignore file of the repository.
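A couple of .gitignore entries like these keep them out of version control (the patterns are just one way to write it):

# local-only Terraform var files - never commit these
**/local.backend.tfvars
**/local.secrets.tfvars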
Then there is the backend.tfvars file, which is used to configure the backend for the Terraform state. Finally, there is the (maybe?) most important file for the environment: variables.tfvars, which contains all the environment-specific variables. A good example of this is using different SKUs of Azure App Service in Development versus Production. It could also be the subscription used for the environment - maybe you want to use an Azure Dev/Test subscription for Development and a CSP/PAYG subscription for Production.
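As a sketch of what that difference can look like (the variable names here are illustrative, not from the actual project), the dev and live variables.tfvars files might differ like this:

# environments/dev/variables.tfvars
app_service_sku_tier = "Basic"
app_service_sku_size = "B1"

# environments/live/variables.tfvars
app_service_sku_tier = "PremiumV2"
app_service_sku_size = "P1v2"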
The main folder contains the main.tf, secrets.tf, outputs.tf and variables.tf files. Depending on the size and content of the main.tf file, I tend to split it up into multiple files, so I don't end up having to scroll through thousands of lines of code. The structure and naming of files can vary depending on what makes sense, but one simple approach is to divide the different types of resources into different files. There will likely be things like KeyVault or Networking that tie into other resources, so it really depends.
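Such a split by resource type could look something like this (the file names are just an example):

main.tf          # terraform block, backend and provider configuration
app-service.tf   # App Service plan and apps
keyvault.tf      # KeyVault and access policies
networking.tf    # virtual network, subnets, DNS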
The modules folder is used for reusable modules. It's not always relevant, but I tend to keep the modules outside of the environments. The structure highlights that they are reusable across environments.
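Calling such a module from the main folder could look like this (the module name and inputs are hypothetical, but the relative source path follows the structure above):

module "storage" {
  # resolves to infrastructure/modules/storage from environments/main
  source               = "../../modules/storage"
  environment          = var.environment
  storage_account_name = var.storage_account_name
}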
Working locally
When you need to run Terraform locally against the Development environment, use the steps listed below.
Ensure that you have both local.backend.tfvars and local.secrets.tfvars in your environment folder with the actual keys in place.
cd infrastructure/environments/main
terraform init -backend-config="../dev/local.backend.tfvars"
terraform validate
terraform plan -var-file="../dev/local.secrets.tfvars" -detailed-exitcode -out=tfplan
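If you then want to apply the plan locally, the saved plan file can be applied directly - no var files are needed at this point, since their values are baked into tfplan:

terraform apply tfplan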
Let's have a look at the content of the local.backend.tfvars when using Azure Blob Storage as the backend for the Terraform state.
resource_group_name = "rg-terraform-state-we-dev"
storage_account_name = "terraformwestatedev"
container_name = "terraform-state-dev"
key = "azure/terraform.tfstate"
access_key = "<actual access key goes here>"
The content of the local.secrets.tfvars should correspond to the variables you declare in secrets.tf within the main folder - whatever is needed to create resources within Azure/AWS/GCP. For Azure it would be Service Principal details.
tenant_id = "<actual tenant id goes here>"
subscription_id = "<actual subscription id goes here>"
client_id = "<actual client id goes here>"
client_secret = "<actual client secret goes here>"
When working locally you could connect to the Development environment or use a different subscription/account setup. If you use the same subscription and Terraform state locally as when deploying to Development, be aware of the interplay between local work and CI/CD: a deployment might revert what you are testing locally, and other deployments might fail because the state expects resources that are not part of what is being deployed.
main.tf
With the backend and secrets set up as mentioned above, there is one important thing to include in the main.tf file - the required providers and something like azurerm as the backend, like so:
terraform {
  backend "azurerm" {}
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "= 2.29.0"
    }
    cloudflare = {
      source  = "terraform-providers/cloudflare"
      version = "~> 2.9.0"
    }
  }
}
The backend part is what will be "filled" with the *backend.tfvars files.
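To wire the secrets from secrets.tf into the provider, a matching provider block could look like this (a sketch for the azurerm 2.x provider, which requires the empty features block):

provider "azurerm" {
  features {}

  # Service Principal credentials, supplied via local.secrets.tfvars locally
  # or via token replacement in the pipeline
  tenant_id       = var.tenant_id
  subscription_id = var.subscription_id
  client_id       = var.client_id
  client_secret   = var.client_secret
}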
Azure DevOps, Terraform and Secrets
When running infrastructure as code (IaC) as part of a pipeline in something like Azure DevOps (the same applies to GitHub Actions and other CI/CD setups), you want to keep your secrets safe and inject them when needed as part of the pipeline.
In order to run the terraform init, plan and apply commands, there needs to be a backend for storing state, and there need to be secrets (like a Service Principal) for creating the actual resources in Azure. We also need to know which environment we are deploying to, so we can select the right one (using the variables.tfvars file that corresponds to the environment).
The secrets can be added as a variable group for the environment under Pipelines/Library in Azure DevOps. You could also store the secrets in KeyVault and connect KeyVault to Azure DevOps, so they are not visible to anyone.
The backend.tfvars for the development environment would typically look like this:
resource_group_name = "rg-terraform-state-we-dev"
storage_account_name = "terraformwestatedev"
container_name = "terraform-state-dev"
key = "azure/terraform.tfstate"
access_key = "#{terraform_access_key}"
As you can see, it's pretty much the same as local.backend.tfvars; the only difference is the access_key property, which will be replaced within the pipeline.
The secrets.tf file within the main folder contains the variable definitions for the secrets needed to run Terraform. These variables will need to be set in the pipeline, so the IaC can be initialized.
variable "tenant_id" {
description = "Azure subscription tenant id."
}
variable "subscription_id" {
description = "Azure subscription id."
}
variable "client_secret" {
description = "Azure provider client secret"
}
variable "client_id" {
description = "Azure provider Azure AD client id."
}
Within the variables.tfvars file for an environment, I keep the secret values as tokens, which can later be replaced:
tenant_id = "#{tenant_id}"
subscription_id = "#{subscription_id}"
client_id = "#{client_id}"
client_secret = "#{client_secret}"
When doing this in Azure DevOps with a YAML pipeline, I use the following approach for replacing the tokens with secrets and generating the plan - here it's the validation stage of the pipeline:
parameters:
  variable_group_name:
  environment_name:
  working_directory:
  source_branch:

jobs:
  - job: Validate
    displayName: Terraform Validate Plan
    continueOnError: false
    variables:
      - name: tf_work_dir
        value: "$(working_directory)/environments/main"
      - group: ${{ format('telemetry-{0}', parameters.environment_name) }}
      - ${{ if ne(parameters.variable_group_name, '') }}:
        - group: ${{ format('{0}-{1}', parameters.variable_group_name, parameters.environment_name) }}
    steps:
      - task: qetza.replacetokens.replacetokens-task.replacetokens@3
        displayName: "Replace tokens in *.tfvars with variables from the desired Variable Group"
        inputs:
          targetFiles: "$(working_directory)/environments/$(environment_name)/*.tfvars => *.env.tfvars"
          encoding: "auto"
          writeBOM: true
          actionOnMissing: "warn"
          keepToken: false
          tokenPrefix: "#{"
          tokenSuffix: "}"
      - bash: |
          terraform init -backend-config="../$(environment_name)/backend.env.tfvars" -input=false
        displayName: Initialize configuration
        workingDirectory: $(tf_work_dir)
      - bash: terraform validate
        displayName: Validate configuration
        workingDirectory: $(tf_work_dir)
      - bash: |
          terraform plan -var-file="../$(environment_name)/variables.env.tfvars" -input=false
        displayName: Create execution plan
        workingDirectory: $(tf_work_dir)
With this setup I can work with Terraform locally, and I can choose to create a throw-away environment for trying out different things locally without interfering with Development. Alternatively, I can run it against the Development environment, which can be handy if resources need to be imported or deleted without having to involve a pipeline.
I can create any number of environments, each of which can have its own subscription and resource configuration.
Here is the file structure with all of the mentioned files in place:
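A sketch of the full layout, with the live folder holding the same set of environment files as dev:

.
├── infrastructure
│   ├── environments
│   │   ├── dev
│   │   │   ├── backend.tfvars
│   │   │   ├── local.backend.tfvars
│   │   │   ├── local.secrets.tfvars
│   │   │   └── variables.tfvars
│   │   ├── live
│   │   │   └── … (same files as dev)
│   │   └── main
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       ├── secrets.tf
│   │       └── variables.tf
│   └── modules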
The structure is from a real-world telemetry service, which is composed of a Cloudflare Worker (point of entry / reverse proxy) and an Azure backend (API and Storage).
Top comments (8)
Thank you for the insight.
I am not a big fan of environments per folder. I think all environments should be exactly the same, to minimize the effect on application behavior when environments begin to differ. This is especially the case when you have multiple environments (>5), as it makes the entire repo smaller and less "scary" :)
That is not to say that all environments should have the same scale or SKU of a resource, or that very expensive or slow-to-create resources must exist everywhere - in these cases you can have an on/off toggle via variables.
Neither do I think that folders necessarily reflect the number of state files. That is something I control via the pipeline, as each environment/stage points to a different state file by simply including the environment name in the file name.
I have documented part of my experience in my github repo - github.com/ArieHein/terraform-train
Specifically, chapters 4 and 5 talk about bigger-scale to enterprise-scale solutions. I use one folder to hold all the environment tfvars, and in the Azure DevOps pipelines I just supply the tfvars file pointers per environment/stage.
If one environment, say prod, requires the app service to have scaling while most of the others don't, I use a variable with the value true in prod.tfvars and false in all the other tfvars, and in the code use the count condition.
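A minimal sketch of that toggle (the variable and resource names, and the surrounding plan reference, are illustrative rather than from the repo):

variable "use_scale" {
  description = "Whether to configure autoscaling in this environment."
  type        = bool
  default     = false
}

# count acts as Terraform's 'if': one autoscale setting when enabled, none otherwise
resource "azurerm_monitor_autoscale_setting" "app" {
  count               = var.use_scale ? 1 : 0
  name                = "autoscale-app"
  resource_group_name = var.resource_group_name
  location            = var.location
  target_resource_id  = azurerm_app_service_plan.main.id

  profile {
    name = "default"
    capacity {
      default = 2
      minimum = 1
      maximum = 4
    }
  }
}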
When it's not directly application related - what I refer to as core infrastructure - I will separate by folders, for example core / projects. Yes, there's some duplication, but it's minimal, as it is good practice to use modules, so it's just having different variable values.
At least I have an idea now of something I want to add to my repo to give further examples :)
Then your environments are not exactly the same, as you mentioned they should be. Not sure I see the difference. I'd be afraid that the code would be cluttered with conditions or the like in order to provide different SKUs etc.
The folder structure is mostly geared towards having a variables.tfvars per environment, so it's clear what resources will be used. Whether or not the same state file or subscription is used is probably a matter of taste and policy.
I'd be curious to see your approach with actual terraform code to get a better idea of the differences. I'll probably push something to github with a practical example to put it into perspective.
From the perspective of resources they are the same - they all have an app service in this example. They differ in configuration. This is not about SKU; those appear as variables with appropriate values in the environment tfvars file. These files will also have a "UseScale" variable with true or false depending on the environment.
My repo shows the structure with examples of SKUs, but I didn't add an example for control, so that's going to be added soon. The examples in there are mostly actual code built up gradually, so they might not have repeated content, which is why there's a good, lengthy description.
The section of the code that provisions the app service, when it reaches the scaling code, will just use count to configure the scaling or not, based on UseScale. I wouldn't say it's cluttered, but it's how TF implements 'if' statements after all.
Alright, I think I misunderstood what you said about folders and environments.
As far as I can tell, the only difference between the two structures is the folder and the name of the environment-specific variables.tfvars files.

For the setup in which I use this structure there is always a different subscription for development vs production. The resources are always the same, but the SKUs differ. So the folders become a convenient way of grouping what is needed for a specific environment in terms of variables and backend (terraform state).
Subscriptions are controlled from the pipeline, thus not needing to expose any values in the terraform files. I usually use 2 subscriptions, one for the prod env and one for non-prod, so I get 2 service connections in AzDO. But even with one subscription per env, it's still a parameter that can be injected from the outside after being read from a separate input/secret/variable group etc. I'll do some refactoring and place it in my repo for more clarity.
This is great structure and guide, Morten. Thank you for sharing 🎉
Thanks for sharing, I got some useful insights!
Just to mention a few things I do differently:
- I set my variables as environment variables named TF_VAR_<variable name>.
- I pass the -backend-config flag to terraform init to configure my backend. This allows me to have a more "dynamic" approach to what I name my state files. If I use the Azure backend and I create my infrastructure using GitHub Actions, I might use the flag and value -backend-config="key=${{ github.ref_name }}/my/path/terraform.tfstate" or whatever else makes sense. This means each branch can have its own state file.

Thanks for the reply and ideas. Interesting points!