
Kevin Mack

Originally published at welldocumentednerd.com

Building CI/CD for Terraform

I’ve made no secret on this blog of how I feel about Terraform, and how I believe infrastructure as code is absolutely essential to managing any cloud-based deployment long term.

There are so many benefits to leveraging these technologies, and for me one of the biggest is that you can manage your infrastructure deployments in exactly the same manner as your code changes.

If you’re curious about the benefits of a CI/CD pipeline, there are plenty of posts out there. For this post, I wanted to talk about how you can take those Terraform templates and build out a CI/CD pipeline to deploy them to your environments.

So for this project, I’ve built a Terraform template that deploys a lot of resources out to three environments. I wanted to do this in a cost-saving manner, so I manage it in the following way:

  • Development: always exists, but in a scaled-down capacity to keep costs low (a sizing sketch follows this list).
  • Test: created only when we are ready to begin testing, and destroyed afterward.
  • Production: where our production application will reside.
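As a minimal sketch of what “scaled down” can look like inside the template itself, the deployment code variable (declared later in this post) can drive sizing decisions. This is purely illustrative; the app service plan, the SKU names, and the “p” code for production are my assumptions, not part of the original template:

# Hypothetical sizing switch: pick a small SKU everywhere except production.
# Assumes "p" is the production deployment_code; resource and SKUs are illustrative.
resource "azurerm_app_service_plan" "main" {
  name                = "asp-${var.project_name}-${var.deployment_code}"
  location            = var.location
  resource_group_name = "rg-${var.group_name}-${var.deployment_code}"

  sku {
    tier = var.deployment_code == "p" ? "PremiumV2" : "Basic"
    size = var.deployment_code == "p" ? "P1v2" : "B1"
  }
}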

Now, for the sake of this exercise, I will only be building a deployment pipeline for the Terraform code; in a later post I’ll examine how to integrate this with code changes.

Now, as with everything, there are lots of ways to make something work. I’m just showing an approach that has worked for me.

Configuring your template

The first part of this is to build out your template so that configuration changes can easily be made via the automated deployment pipeline.

The best way I’ve found to do this is variables, and whether you are doing automated deployment or not, I highly recommend using them. If you ever have more than just yourself working on a Terraform template, or plan to create more than one environment, you will absolutely need variables. So it’s generally a good practice.

For the sake of this example, I declared the following variables in a file called “variables.tf”:

variable "location" { default = "usgovvirginia"}variable "environment\_code" { description = "The environment code required for the solution. "}variable "deployment\_code" { description = "The deployment code of the solution"}variable "location\_code" { description = "The location code of the solution."}variable "subscription\_id" { description = "The subscription being deployed."}variable "client\_id" { description = "The client id of the service prinicpal"}variable "client\_secret" { description = "The client secret for the service prinicpal"}variable "tenant\_id" { description = "The client secret for the service prinicpal"}variable "project\_name" { description = "The name code of the project" default = "cds"}variable "group\_name" { description = "The name put into all resource groups." default = "CDS"}

Also worth noting are the client id, client secret, subscription id, and tenant id above. Using Azure DevOps, you are going to need to deploy using a service principal, so these will be important.
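If you don’t already have a service principal, one way to create one is with the Azure CLI; the principal name and the subscription placeholder below are illustrative:

# Point the CLI at Azure Government first, since that's where this solution deploys.
az cloud set --name AzureUSGovernment
az login

# Create a service principal with Contributor rights on the target subscription.
# The name and the <subscription-id> placeholder are illustrative.
az ad sp create-for-rbac --name "sp-terraform-cds" --role Contributor --scopes "/subscriptions/<subscription-id>"

The command outputs an appId (client id), password (client secret), and tenant, which map to the variables above.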

Then in your main.tf, you will have the following:

provider "azurerm" { subscription\_id = var.subscription\_id version = "=2.0.0" client\_id = var.client\_id client\_secret = var.client\_secret tenant\_id = var.tenant\_id environment = "usgovernment" features {}}

It’s also worth mentioning that when I’m working with my template locally, I use a file called “variables.tfvars”, which looks like the following:

location = "usgovvirginia"environment\_code = "us1"deployment\_code = "d"location\_code = "us1"subscription\_id = "..."group\_name = "CDS"

Configuring the Pipeline

This will be important later as you build out the automation. From here, the next step is to build out your Azure DevOps pipeline; for this sample, I’m using a YAML pipeline:
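Before getting into the individual steps, here’s a minimal sketch of the pipeline skeleton they sit in; the trigger and agent image are my assumptions (the bash script steps below imply a Linux agent):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
  # The script and task steps shown in the rest of this post go here.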

So what I did was generate a “variables.tfvars” file as part of my deployment:

- script: |
    touch variables.tfvars
    echo -e "location = \""$LOCATION"\"" >> variables.tfvars
    echo -e "environment_code = \""$ENVIRONMENT_CODE"\"" >> variables.tfvars
    echo -e "deployment_code = \""$DEPLOYMENT_CODE"\"" >> variables.tfvars
    echo -e "location_code = \""$LOCATION_CODE"\"" >> variables.tfvars
    echo -e "subscription_id = \""$SUBSCRIPTION_ID"\"" >> variables.tfvars
    echo -e "group_name = \""$GROUP_NAME"\"" >> variables.tfvars
    echo -e "client_id = \""$SP_APPLICATIONID"\"" >> variables.tfvars
    echo -e "tenant_id = \""$SP_TENANTID"\"" >> variables.tfvars
  displayName: 'Create variables Tfvars'

The next question is where those values come from. I’ve declared them as variables on the pipeline itself:
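Expressed as a YAML variables block, those pipeline variables would look something like the sketch below; the literal values mirror the earlier tfvars example, and the secrets live in the pipeline’s variable pane rather than in source control:

variables:
  LOCATION: 'usgovvirginia'
  ENVIRONMENT_CODE: 'us1'
  DEPLOYMENT_CODE: 'd'
  LOCATION_CODE: 'us1'
  GROUP_NAME: 'CDS'
  # SUBSCRIPTION_ID, SP_APPLICATIONID, SP_TENANTID, and TFPLANSTORAGE are
  # defined in the pipeline's variable pane and marked as secret.

One caveat worth knowing: Azure DevOps does not automatically expose variables marked secret as environment variables to script steps, so secret values have to be mapped explicitly with an env: section on the steps that use them.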

From there, because I’m deploying to Azure Government, I added an Azure CLI step to make sure my command-line context is pointed at Azure Government, doing the following:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'Kemack - Azure Gov'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az cloud set --name AzureUSGovernment
      az account show

So how do we do the Terraform plan/apply? The answer is pretty straightforward. I installed this extension and used the “Terraform Tool Installer” task, as follows:

- task: TerraformInstaller@0
  inputs:
    terraformVersion: '0.12.3'
  displayName: "Install Terraform"

After that, it becomes pretty straightforward to implement:

- script: |
    terraform init
  displayName: 'Terraform - Run Init'
- script: |
    terraform validate
  displayName: 'Terraform - Validate tf'
- script: |
    terraform plan -var-file variables.tfvars -out=tfPlan.txt
  displayName: 'Terraform - Run Plan'
- script: |
    echo $BUILD_BUILDNUMBER".txt"
    echo $BUILD_BUILDID".txt"
    az storage blob upload --connection-string $TFPLANSTORAGE -f tfPlan.txt -c plans -n $BUILD_BUILDNUMBER"-plan.txt"
  displayName: 'Upload Terraform Plan to Blob'
- script: |
    terraform apply -auto-approve -var-file variables.tfvars
  displayName: 'Terraform - Run Apply'

Now, the cool part about the above is that I took it a step further and created a storage account in Azure, adding its connection string to the pipeline as a secret (the $TFPLANSTORAGE variable above). Then, when you run this pipeline, it runs a plan ahead of the apply, outputs that plan to a text file, and saves the file in the storage account with the build number in the file name.
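If you need to stand up that storage account, a quick Azure CLI sketch might look like the following; the resource names are illustrative, and only the “plans” container name is taken from the upload step above:

# Illustrative names for the plan-archive storage account.
az group create --name rg-tfplans --location usgovvirginia
az storage account create --name cdstfplans --resource-group rg-tfplans --sku Standard_LRS
az storage container create --name plans --account-name cdstfplans

# This connection string is what gets stored as the TFPLANSTORAGE secret.
az storage account show-connection-string --name cdstfplans --resource-group rg-tfplans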

I personally like this, as it creates a log of the activities performed during each automated build going forward.

I do plan on refining this and building more automation around environments, so more to come.
