The GitLab team is doing a great job with its CI/CD pipelines. In this post, I will show you how to use their power to deploy infrastructure as code (IaC). For this, we are going to use Terraform, a tool for building, changing, and managing infrastructure in a safe, repeatable way.
GitLab CI
I imagine that you already have an account on GitLab (if not, just go to gitlab.com and create one) or a GitLab CE/EE installation.
So let's understand how the pipeline is configured.
First, you will need to create a file called `.gitlab-ci.yml` in the root of your repository. You can change the name of this file in the repository configuration, but for now, let's keep it as it is.
Stages
Let's configure the stages of the pipeline. In this example, I will only define the steps for Terraform, or Infrastructure as Code (IaC). You can define as many other stages as you want.
```yaml
stages:
  - iac_validate
  - iac_plan
  - iac_apply
```
The stages will always run in sequence.
iac_validate
We will use this stage to validate the Terraform configuration. It only checks the syntax and the internal references between resources in the configuration.
iac_plan
In this stage, Terraform will run the plan of the configuration. It will check all the resources of the configuration against the state and it will show you what will be created, changed or destroyed.
iac_apply
This stage will, as the name says, apply the previous plan generated in the pipeline.
Default configuration
For this example, we will use the Terraform official Docker image to run the pipeline. So, let's define it as a default image in the Gitlab CI file:
```yaml
default:
  image:
    name: hashicorp/terraform:latest
    entrypoint:
      - /usr/bin/env
      - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
  before_script:
    - terraform init
  cache:
    key: terraform
    paths:
      - .terraform
```
We have overridden the `entrypoint` of the image because it uses the `terraform` binary as its entrypoint. Since we are running this in the GitLab CI pipeline, we need to change it to `/usr/bin/env` for the job to execute; otherwise it will fail. In Docker, the `entrypoint` defines the program that is executed when the container starts. You can find more information here. If you are running GitLab Runner with the shell executor, you can remove the `image` definition.
Another thing we have defined in the default configuration is `before_script`. The `before_script` commands run before every job, unless a job defines its own.

Last but not least, the cache. We defined the `.terraform` directory to be cached. This directory is created automatically by Terraform on every execution.
iac_validate
As I mentioned before, this stage only validates the syntax and the internal references between the resources specified in the configuration.
```yaml
terraform_validate:
  stage: iac_validate
  script:
    - terraform validate
  except:
    refs:
      - master
```
- `stage` specifies the stage in which this job runs. You can define multiple jobs to run in the same stage.
- `script` receives all the commands that will be executed in the job. If any command fails, the job fails.
- `except` tells the job not to run on the listed refs, in this case the ref `master` (the `master` branch).
We will only execute it on feature or fix branches, not on the `master` branch. I am assuming that you don't commit directly to `master`, right? ;-)

So the job will run, and if there is any error, it will show up in the output.
For example, I defined an invalid argument in a Terraform `null_resource`; take a look at the error that `terraform validate` throws:
```shell
$ terraform validate

Error: Unsupported argument

  on example.tf line 2, in resource "null_resource" "example":
   2:   my_argument = "NOK"

An argument named "my_argument" is not expected here.
```
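For reference, the (intentionally broken) configuration that produces this error looks something like this, reconstructed from the error message above:

```hcl
# example.tf — null_resource does not accept an argument named "my_argument"
resource "null_resource" "example" {
  my_argument = "NOK"
}
```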
If everything is OK, it will output the following message:

```shell
$ terraform validate
Success! The configuration is valid.
```
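A valid version of the same resource moves the value into the `triggers` map, which `null_resource` does support (this matches the resource shown in the plan output later in this post):

```hcl
# example.tf — triggers is a supported argument of null_resource
resource "null_resource" "example" {
  triggers = {
    my_variable = "OK"
  }
}
```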
iac_plan
Here Terraform will compare your configuration with the state and will show you all the resources that need to be created, changed or destroyed.
```yaml
terraform_plan:
  stage: iac_plan
  script:
    - terraform plan --out plan
  only:
    refs:
      - master
  artifacts:
    paths:
      - plan
```
Now we have two new keys defined in the `terraform_plan` job: `only` and `artifacts`.
- `only` means that this job will only run on the listed refs, in this case the ref `master` (the `master` branch).
- `artifacts` retains the file created by `terraform plan` with the `--out` parameter, in this case the `plan` file. We will use this file in the next job to apply the infrastructure.
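Plan files can contain sensitive values, so you may not want GitLab to keep them forever. As an optional refinement (not part of the pipeline built here), the `expire_in` keyword limits the artifact's lifetime:

```yaml
artifacts:
  paths:
    - plan
  # remove the plan artifact after a week; adjust to your review cadence
  expire_in: 1 week
```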
Again, using the last example from `terraform validate`, but this time with a valid configuration, let's see what the plan outputs:
```shell
$ terraform plan --out plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.example will be created
  + resource "null_resource" "example" {
      + id       = (known after apply)
      + triggers = {
          + "my_variable" = "OK"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: plan

To perform exactly these actions, run the following command to apply:
    terraform apply "plan"
```
I will not cover the Terraform state in this article; it will be the subject of a future post.
If there is nothing to create, change or destroy, Terraform tells you so:

```shell
$ terraform plan --out plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
```
Let's move to our final job.
iac_apply
And here is the final job that will apply the infrastructure.
```yaml
terraform_apply:
  stage: iac_apply
  script:
    - terraform apply --auto-approve plan
  when: manual
  allow_failure: false
  only:
    refs:
      - master
```
So, new keys here. Let me explain the `when` and `allow_failure` keys defined in this job:
- `when` defines when the job will run. In this case, `manual` means the job must be triggered manually from the pipeline view.
- `allow_failure` specifies whether the pipeline may continue if the job fails. With `false`, the pipeline is blocked until this manual job runs and succeeds.
This job will apply your infrastructure using the plan generated in the previous job (`terraform_plan`). Take a look at what Terraform outputs when you execute this job:
```shell
$ terraform apply --auto-approve plan
null_resource.example: Creating...
null_resource.example: Creation complete after 0s [id=5363055958456141136]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
```
We used the `--auto-approve` parameter so you don't have to type `yes` (well, you can't type it in a CI job anyway) to apply the configuration. Note that when applying a saved plan file, Terraform does not prompt for approval, so the flag is effectively redundant here.
Summary
We reached the end of this post. Let's see what our final `.gitlab-ci.yml` file looks like:
```yaml
stages:
  - iac_validate
  - iac_plan
  - iac_apply

default:
  image:
    name: hashicorp/terraform:latest
    entrypoint:
      - /usr/bin/env
      - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
  before_script:
    - terraform init
  cache:
    key: terraform
    paths:
      - .terraform

terraform_validate:
  stage: iac_validate
  script:
    - terraform validate
  except:
    refs:
      - master

terraform_plan:
  stage: iac_plan
  script:
    - terraform plan --out plan
  only:
    refs:
      - master
  artifacts:
    paths:
      - plan

terraform_apply:
  stage: iac_apply
  script:
    - terraform apply --auto-approve plan
  when: manual
  allow_failure: false
  only:
    refs:
      - master
```
There are a few things I didn't cover in this post, but I will in future ones, so keep visiting! :-)
I hope you have enjoyed this article. Please leave a comment so I can improve future posts. Enjoy!
Top comments (2)

There is one missing step: Terraform keeps the state (like the UUID after creation) in the `terraform.tfstate` file. It's quite important, because running the pipeline a second time may fail due to name conflicts. So, after running the CI pipeline, we should commit this file to the repository.
There is another problem too, but it's not so trivial: concurrent pipelines. Solving that is much more complicated, but the easiest way is to limit the number of concurrent pipelines to 1. The "good" way would be sharing the lock file between runners.

It is not good practice to commit the state to the repository; instead, keep the state on AWS or in GitLab.
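As a sketch of that suggestion, a remote backend on AWS S3 could look like the following (the bucket, key, and region are illustrative placeholders, not values from this post):

```hcl
# Store state remotely instead of committing terraform.tfstate to git.
# All values below are placeholders — replace them with your own.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "example/terraform.tfstate"
    region = "us-east-1"
  }
}
```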