When I'm working on a project on GCP, I try to automate everything. This includes both the deployment of services as well as the provisioning of builds to do so for new services.
If I'm adding a service to a project, I don't want to waste a lot of time remembering how to configure a CI/CD pipeline for it. I want to copy an existing module and get something that works exactly the same as my other services.
To set this up, I have a Terraform repo that provisions the builds and resources needed to run everything in my GCP project. Pushing code to that repo deploys new builds and services. Practically speaking, a push to that repo runs `terraform apply`.
Each service in my project is itself continuously deployed. Once I have this set up and running, I can update existing services and add new ones with minimal configuration work in the GCP console itself - I just push code to the appropriate repo to change a service, add a service, or modify the infrastructure a service runs on.
While this sounds like it's overcomplicating things (and it is), with the help of a project starter repo and some common modules it's easy to get working. Once it's up and running, I can add a new service to my project by:
- Linking the repo for the service to a Cloud Source Repository
- Configuring a `cloudbuild.yaml` file in the repo
- Adding the Cloud Build Trigger for the service in Terraform
- Adding the resources to run the service in Terraform
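The trigger step above can be sketched in Terraform roughly as follows. The service and repo names here are hypothetical, and the starter's actual modules may wrap this differently:

```hcl
# Hypothetical trigger for a new service "my-service" whose code is
# mirrored into a Cloud Source Repository of the same name.
resource "google_cloudbuild_trigger" "my_service" {
  name        = "my-service-deploy"
  description = "Deploy my-service on pushes to main"

  trigger_template {
    repo_name   = "my-service" # the Cloud Source Repository name
    branch_name = "^main$"
  }

  # Use the cloudbuild.yaml checked into the service's repo
  filename = "cloudbuild.yaml"
}
```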
This post will walk through how to set up a new GCP project for continuous infrastructure deployment using Terraform.
Setting up a New GCP Project
To begin, we set up a new project in GCP. I'm assuming we're doing this as a personal project; if you have an organization available in your environment, there is more mature tooling for this.
Then, link your soon-to-be infrastructure repo as a Cloud Source Repository in your project. I mirror a private repo from Bitbucket, but you could work out of GitHub or Cloud Source itself as well.
For this example, I'm linking to this repo that I clone as a starter. It has some Terraform modules I'll use to get started and a `cloudbuild.yaml` file that outputs "hello" to verify that my build trigger is executing.
Once you have this, take note of the "repository name". We'll need it to link up the build pipeline.
Adding an Automated Build
I use this infrastructure starter to kickstart new projects. To do so, I open up a Cloud Shell in the project and clone the repo. Then, I `cd` into the `src` directory and modify the `terraform.tfvars` file to reference the project and source repo that will be managing infrastructure for the project (the one we just set up):
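The exact variable names depend on the starter; as an illustration (these names are assumptions - check the starter's `variables.tf`), the tfvars file might look like:

```hcl
# terraform.tfvars - placeholder values; the variable names are assumptions
project_id = "subtle-app-346916"
region     = "us-central1"

# The "repository name" of the mirrored infrastructure repo noted earlier
infra_repo_name = "bitbucket_myuser_my-infra-repo"
```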
Then, I run:
```shell
terraform init
terraform apply
```
And I've got a continuous deployment pipeline for infrastructure ready to go. You can (and should!) check to see what that starter is initializing, but the highlights are:
- A service account to run the build
- A build trigger that runs on changes to our input repo
- A storage bucket for the logs
- A storage bucket for the Terraform state
- Appropriate permissions to let the service account use the above
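As a sketch (none of this is the starter's literal code, and the resource names are assumptions), the service account, buckets, and permissions boil down to something like:

```hcl
# Service account that Cloud Build uses to run Terraform
resource "google_service_account" "terraform" {
  account_id   = "terraform-builder"
  display_name = "Terraform build service account"
}

# Buckets for build logs and Terraform state
resource "google_storage_bucket" "logs" {
  name     = "${var.project_id}-tf-logs" # var.project_id is assumed
  location = "US"
}

resource "google_storage_bucket" "state" {
  name     = "${var.project_id}-tf-state"
  location = "US"
}

# Let the build's service account read and write both buckets
resource "google_storage_bucket_iam_member" "state_access" {
  bucket = google_storage_bucket.state.name
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.terraform.email}"
}

resource "google_storage_bucket_iam_member" "log_access" {
  bucket = google_storage_bucket.logs.name
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:${google_service_account.terraform.email}"
}
```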
We can see the output of this as a build trigger in Cloud Build:
And we can manually run it or trigger it by pushing code to the underlying repo!
Setting up the Repo to Run Terraform
You should consult the documentation for how to properly structure your infrastructure repo, but if you're looking for a quick-and-dirty setup to get moving, you can set your `cloudbuild.yaml` to:
```yaml
steps:
  - id: 'Terraform Init'
    name: 'hashicorp/terraform:1.0.0'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        cd src
        terraform init
  - id: 'Terraform Plan'
    name: 'hashicorp/terraform:1.0.0'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        cd src
        terraform plan
  - id: 'Terraform Apply'
    name: 'hashicorp/terraform:1.0.0'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        cd src
        terraform apply -auto-approve
logsBucket: '$_LOG_BUCKET_URL'
options:
  logging: GCS_ONLY
```
Where we're pulling the log bucket URL from a variable set by the project starter.
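One plausible way the starter wires that variable up (an assumption about its internals, not confirmed here) is as a substitution on the trigger it creates:

```hcl
resource "google_cloudbuild_trigger" "infra" {
  name     = "infrastructure-apply"
  filename = "cloudbuild.yaml"

  trigger_template {
    repo_name   = var.infra_repo_name # hypothetical variable
    branch_name = ".*"
  }

  # Exposed to the build as $_LOG_BUCKET_URL in cloudbuild.yaml
  substitutions = {
    _LOG_BUCKET_URL = "gs://${google_storage_bucket.logs.name}"
  }
}
```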
Then, point the backend at the particular storage bucket that was created for the backend state:
```hcl
terraform {
  backend "gcs" {
    bucket = "subtle-app-346916-tf-state"
  }
}

terraform {
  required_version = "~> 1.0.0"
}
```
Once you've got this, updating your Terraform code and pushing to the repo should update the infrastructure in your project.