On a recent project, I set up several services to run on GCP Cloud Run. To make managing the infrastructure as painless as possible, I set up a Terraform repo to manage it all. That way, I could add a new service by:
- Linking the new service repo as a Cloud Source Repository
- Adding the build trigger in the terraform repo
- Adding the Cloud Run resources in the terraform repo
## Setting up the Build Trigger
This was made manageable by a few modules I set up. First, a `serviceAgent` module to wrap the IAM permissions needed by the infrastructure build:

```hcl
resource "google_service_account" "agent" {
  account_id   = var.id
  display_name = var.display_name
}

# The SA setting this up needs access rights
resource "google_service_account_iam_binding" "impersonator" {
  service_account_id = google_service_account.agent.name
  role               = "roles/iam.serviceAccountUser"
  members = concat(
    ["serviceAccount:${var.setup_sa_email}"],
    var.impersonators,
  )
}
```
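For completeness, the `serviceAgent` module's interface can be sketched from the resources above. The variable names come directly from the code; the empty default on `impersonators` is an assumption (not every caller passes it), as is the `email` output that later module calls consume:

```hcl
# modules/serviceAgent/variables.tf and outputs.tf (sketch)
variable "id" {
  description = "Account ID for the agent service account"
  type        = string
}

variable "display_name" {
  description = "Human-readable name for the agent"
  type        = string
}

variable "setup_sa_email" {
  description = "Email of the service account applying this Terraform"
  type        = string
}

variable "impersonators" {
  description = "Additional members allowed to act as the agent"
  type        = list(string)
  default     = [] # assumed default, since not every caller passes it
}

# Callers reference this as module.<name>.email
output "email" {
  value = google_service_account.agent.email
}
```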
Then comes a more involved build module encapsulating a Cloud Build trigger that publishes a Docker image to an Artifact Registry repository. This can be thought of in three parts. First, some permissions on the service agent running the build:
```hcl
# Allow the builder to pull source from the repo
resource "google_sourcerepo_repository_iam_binding" "builder" {
  project    = var.project
  repository = var.build_config.repo_name
  role       = "roles/source.reader"
  members    = ["serviceAccount:${var.builder_email}"]
}

# Read and write Docker images in the registry
resource "google_artifact_registry_repository_iam_member" "writer" {
  provider   = google-beta
  project    = var.project
  location   = var.location
  repository = var.docker_config.repo_id
  role       = "roles/artifactregistry.writer"
  member     = "serviceAccount:${var.builder_email}"
}
```
Second, a storage bucket to hold the build logs:
```hcl
resource "google_storage_bucket" "build_logs" {
  name          = "${var.project}-${var.bucket_name}"
  location      = "US"
  force_destroy = true
}

resource "google_storage_bucket_iam_binding" "admin" {
  bucket  = google_storage_bucket.build_logs.name
  role    = "roles/storage.admin"
  members = ["serviceAccount:${var.builder_email}"]
}
```
And finally the trigger itself:
```hcl
data "google_sourcerepo_repository" "infrastructure" {
  name = var.build_config.repo_name
}

resource "google_cloudbuild_trigger" "infra_trigger" {
  name            = var.build_config.trigger_name
  description     = var.build_config.trigger_description
  service_account = "projects/${var.project}/serviceAccounts/${var.builder_email}"
  filename        = "cloudbuild.yaml"

  trigger_template {
    branch_name = "main"
    repo_name   = data.google_sourcerepo_repository.infrastructure.name
  }

  substitutions = merge({
    _LOG_BUCKET_URL = google_storage_bucket.build_logs.url
    _DEV_IMAGE_NAME = "${var.location}-docker.pkg.dev/${var.project}/${var.docker_config.repo_name}/${var.docker_config.image_name}"
  }, var.build_variables)
}
```
Here, I'm setting the log bucket URL and image name as substitutions that the repo itself can reference in its `cloudbuild.yaml` file. This way, the Cloud Run service can reference the same image name.
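Pulling these pieces together, the build module's inputs can be sketched from the resources above. The names match the code; the object type shapes are inferred, and the empty-map default on `build_variables` is an assumption:

```hcl
# modules/build/variables.tf (sketch)
variable "project" { type = string }
variable "location" { type = string }
variable "builder_email" { type = string }
variable "bucket_name" { type = string }

variable "docker_config" {
  type = object({
    repo_id    = string # Artifact Registry repository ID, used for IAM
    repo_name  = string # repository name, used in the image URL
    image_name = string
  })
}

variable "build_config" {
  type = object({
    repo_name           = string
    trigger_name        = string
    trigger_description = string
  })
}

# Extra substitutions merged into the trigger's substitutions
variable "build_variables" {
  type    = map(string)
  default = {}
}
```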
Putting this all together, we can set up a new build with:
```hcl
locals {
  location  = "us-central1"
  repo_name = "game-scorer-project"
  api_image = "scoring-api"
}

resource "google_artifact_registry_repository" "primary" {
  provider      = google-beta
  project       = var.project
  location      = local.location
  repository_id = local.repo_name
  format        = "DOCKER"
}

module "api_builder" {
  source         = "../modules/serviceAgent"
  id             = "scoring-api-builder"
  display_name   = "Scoring API Builder"
  setup_sa_email = var.builder
}

module "api_build" {
  source        = "../modules/build"
  project       = var.project
  location      = local.location
  builder_email = module.api_builder.email
  bucket_name   = "api-build-logs"

  docker_config = {
    repo_id    = google_artifact_registry_repository.primary.id
    repo_name  = local.repo_name
    image_name = local.api_image
  }

  build_config = {
    repo_name           = "bitbucket_brmatola_scoring-api"
    trigger_name        = "scoring-api-trigger"
    trigger_description = "Scoring API Build"
  }
}
```
Here, `var.project` and `var.builder` reference the GCP project and the service account running the infrastructure build, respectively.
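Those root-level inputs would be declared along these lines (a sketch; the descriptions are mine):

```hcl
variable "project" {
  description = "GCP project hosting the services"
  type        = string
}

variable "builder" {
  description = "Email of the service account running the infrastructure build"
  type        = string
}
```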
This configures a Cloud Build trigger on our API repo (`bitbucket_brmatola_scoring-api`). The repo itself must then contain a `cloudbuild.yaml` file that publishes a Docker image:
```yaml
steps:
  - id: 'Build Docker Image With SHA Hash'
    name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [ '-c', 'docker build -f GameScoring/Dockerfile -t ${_DEV_IMAGE_NAME}:$COMMIT_SHA .' ]
  - id: 'Tag Docker Image with dev tag'
    name: 'gcr.io/cloud-builders/docker'
    args: [ 'tag', '${_DEV_IMAGE_NAME}:$COMMIT_SHA', '${_DEV_IMAGE_NAME}:dev' ]
  - id: 'Update dev tag with image to run'
    name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', '${_DEV_IMAGE_NAME}:dev' ]
images: ['${_DEV_IMAGE_NAME}:$COMMIT_SHA']
logsBucket: '$_LOG_BUCKET_URL'
options:
  logging: GCS_ONLY
```
## Running the Service on Cloud Run
Once we have our build continuously publishing Docker images, we want to actually run the image on Cloud Run. This process is documented in Google's Cloud Run docs, but boils down to running a gcloud command after publishing the image. To do so, however, we'll need a Cloud Run instance to deploy to.

We'll set up a service account to run the Cloud Run instance, as well as some IAM permissions to let the build service account deploy the service:
```hcl
module "api_runner" {
  source         = "../modules/serviceAgent"
  id             = "scoring-api-runner"
  display_name   = "Scoring API Runner"
  setup_sa_email = var.builder
  impersonators  = ["serviceAccount:${module.api_builder.email}"]
}

resource "google_cloud_run_service_iam_binding" "runner" {
  location = local.location
  service  = google_cloud_run_service.api.name
  role     = "roles/run.developer"
  members  = ["serviceAccount:${module.api_builder.email}"]
}
```
The key here is that both the service build and infrastructure build service accounts need the `iam.serviceAccountUser` role on the service account the Cloud Run service runs as. Additionally, the service build account needs the `run.developer` role in order to deploy the service.
Then, if we want our service to be available to users, we'll need to give the `run.invoker` role to everyone:

```hcl
resource "google_cloud_run_service_iam_binding" "builder" {
  location = local.location
  service  = google_cloud_run_service.api.name
  role     = "roles/run.invoker"
  members  = ["allUsers"]
}
```
Then, we need to define the actual service:
```hcl
resource "google_cloud_run_service" "api" {
  name     = "scoring-api-service"
  location = local.location

  template {
    spec {
      service_account_name = module.api_runner.email
      containers {
        image = "${local.location}-docker.pkg.dev/${var.project}/${google_artifact_registry_repository.primary.repository_id}/${local.api_image}:dev"
      }
    }
  }

  traffic {
    percent         = 100
    latest_revision = true
  }

  lifecycle {
    ignore_changes = [
      metadata.0.annotations,
    ]
  }
}
```
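One optional addition (nothing in the build depends on it) is an output exposing the URL Cloud Run assigns to the service, which is handy to see after an apply:

```hcl
output "api_url" {
  # URL assigned by Cloud Run once the service is up
  value = google_cloud_run_service.api.status[0].url
}
```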
Finally, in our service build we modify our `cloudbuild.yaml` file to actually deploy the service:
```yaml
steps:
  - id: 'Build Docker Image With SHA Hash'
    name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [ '-c', 'docker build -f GameScoring/Dockerfile -t ${_DEV_IMAGE_NAME}:$COMMIT_SHA .' ]
  - id: 'Tag Docker Image with dev tag'
    name: 'gcr.io/cloud-builders/docker'
    args: [ 'tag', '${_DEV_IMAGE_NAME}:$COMMIT_SHA', '${_DEV_IMAGE_NAME}:dev' ]
  - id: 'Update dev tag with image to run'
    name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', '${_DEV_IMAGE_NAME}:dev' ]
  - id: 'Deploy dev tag to Cloud Run'
    name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', '${_CLOUD_RUN_NAME}', '--image', '${_DEV_IMAGE_NAME}:dev', '--region', 'us-central1']
images: ['${_DEV_IMAGE_NAME}:$COMMIT_SHA']
logsBucket: '$_LOG_BUCKET_URL'
options:
  logging: GCS_ONLY
```
Here, the Cloud Run service name is configured as a substitution (`_CLOUD_RUN_NAME`) in the infrastructure, so the service build can simply reference it.
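That wiring isn't shown above, but one way to do it is through the build module's `build_variables` input, which the trigger merges into its substitutions (a sketch extending the `api_build` module call from earlier):

```hcl
module "api_build" {
  # ... same arguments as before ...
  build_variables = {
    _CLOUD_RUN_NAME = google_cloud_run_service.api.name
  }
}
```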