One of the tools for managing Google Cloud resources is Terraform. Let's use it.
- Install the gcloud command to be able to create Google Cloud credentials
- Create a Cloud Storage bucket for Terraform state management
- Initialize Terraform and deploy resources
We will proceed in this order.
Install gcloud command
Install the gcloud command locally. This command is used to connect the local machine to the Google Cloud project and generate credentials. It is also used to create a Cloud Storage bucket for managing Terraform state.
https://cloud.google.com/sdk/docs/install#mac
My machine runs macOS. Refer to this document and follow the installation instructions for your environment.
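For example, on macOS the SDK can be installed with Homebrew. This is a quick sketch assuming you use Homebrew and are fine with the community cask; otherwise follow the official installer from the page above. You may need to open a new shell (or follow the cask's caveats) before gcloud is on your PATH.
# Install the Google Cloud SDK, then verify that gcloud is available
brew install --cask google-cloud-sdk
gcloud version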
Create a Google Cloud project and generate credentials
If you do not already have a Google Cloud project, create one.
https://cloud.google.com/resource-manager/docs/creating-managing-projects#creating_a_project
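If you prefer the command line, a project can also be created with gcloud once it is installed. This is a sketch; the project ID below is only an example and must be globally unique, so pick your own.
# Create a new project (example ID; choose your own unique ID)
gcloud projects create graphql-training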
Once you have created a Google Cloud project, you are ready to generate credentials. Credentials are necessary to connect to Google Cloud from your local machine, and there are two main ways to obtain them.
- create a service account on Google Cloud and download the credential
- create a credential using your user account (@gmail.com) under which you created your Google Cloud project.
Here we will proceed with the second method.
First, configure the gcloud config. Run the following command.
gcloud config configurations create <any configuration name; the name of the project you created is a safe choice>
# Example
gcloud config configurations create graphql-training
Created [graphql-training].
Activated [graphql-training].
Next, set the account and project in the profile you just created.
gcloud config set core/account <the email address you use to log in to Google Cloud>
gcloud config set core/project <your Google Cloud project ID>
# Example
gcloud config set core/account test@example.com
gcloud config set core/project graphql-training
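You can confirm that the profile holds the expected values with a standard gcloud subcommand; this is just a quick sanity check.
# Show the properties of the active configuration
gcloud config list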
This updates the gcloud configuration, storing the project and account in the active profile. From here, a "login" operation will create credential information on the local machine. Enter the following command.
gcloud auth login
A browser will then open and ask you to authenticate. Select the corresponding Google Cloud account and allow the integration. This generates credentials on your local machine, at ~/.config/gcloud/credentials.db. These credentials are used when running the following command-line tools on the local machine (you can verify the login as shown after this list):
- gcloud
- bq # CLI for BigQuery
- gsutil # CLI for Cloud Storage
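To verify that the login succeeded, you can list the accounts known to gcloud; the account you just authorized should be the active one. A minimal check:
# List credentialed accounts; the active one is marked with "*"
gcloud auth list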
One more authentication step is required.
gcloud auth application-default login
This will also open a browser and ask you to authenticate. If you allow the integration, another credential will be generated on your local machine, at ~/.config/gcloud/application_default_credentials.json. This credential is used for authentication when running programs that use the Google Cloud SDK; Terraform will also use this file.
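As a quick sanity check that the Application Default Credentials are usable, you can ask gcloud to mint an access token from them. The token itself is not needed for the rest of this article.
# Print an OAuth2 access token derived from the application default credentials
gcloud auth application-default print-access-token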
Now you are ready to operate on your Google Project from your local machine.
Create a bucket for Terraform state management
Terraform records the state of the resources it creates on Google Cloud in a file called tfstate and uses it to detect differences. You can of course keep tfstate locally, but if you work with colleagues, managing this file in cloud storage prevents conflicts. Cloud Storage works well as shared storage, so let's keep the state there. We will use the gsutil command, so install it as well.
https://cloud.google.com/storage/docs/gsutil_install
Then, create a bucket for state management.
gsutil mb gs://<any bucket name>
# Example.
gsutil mb gs://graphql-training-artifacts
mb presumably stands for "make bucket". The bucket is now ready.
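Optionally, you can enable object versioning on the bucket so that previous generations of the state file are retained. This is not required for the steps in this article, just a common safeguard.
# Keep older generations of objects (useful for recovering a broken tfstate)
gsutil versioning set on gs://graphql-training-artifacts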
Terraform initialization
You could install Terraform itself, but since a Docker image is also provided, you don't need to. Create a shell script like the following and save it as tf.sh.
#!/bin/bash
# Run terraform inside the official Docker image, mounting the current
# directory and the local gcloud credentials.
command=${@:1}
docker run -it --rm \
  -v $PWD:/work \
  -v $HOME/.config/gcloud:/.config/gcloud \
  -w /work \
  -e GOOGLE_APPLICATION_CREDENTIALS=/.config/gcloud/application_default_credentials.json \
  --entrypoint "/bin/sh" \
  hashicorp/terraform:latest \
  -c "terraform $command"
This script runs the terraform command inside the hashicorp/terraform container, reusing the credentials you just created on your local machine. Now, let's define the Terraform resources. Create the following file as main.tf.
terraform {
  required_version = "~> 1.0.0"
  backend "gcs" {
    prefix = "tfstate/v1"
  }
}

## project ##
provider "google" {
  project = var.gcp_project_id
  region  = var.primary_region
}
This file does not create any resources yet, but let's check that everything works.
chmod +x tf.sh
./tf.sh init -backend-config="bucket=<bucket created by gsutil>"
# Example :
./tf.sh init -backend-config="bucket=graphql-training-artifacts"
---
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding the latest version of hashicorp/google-beta...
- Finding the latest version of hashicorp/google...
- Installing hashicorp/google-beta v4.6.0...
If you see output like this, it worked. In your browser, look at graphql-training-artifacts/tfstate/v1 in Google Cloud Storage; the state management file default.tfstate should have been created.
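You can also check from the command line instead of the browser; this sketch uses the bucket name from the example.
# The backend should have written the state object under the configured prefix
gsutil ls gs://graphql-training-artifacts/tfstate/v1/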
Create an Artifact Registry repository
Once this is set up, you can create as many resources as you want. As an example, let's create an Artifact Registry repository. Create the file modules/artifact-registry/artifact-registry.tf, so the tree looks like this:
tree
.
├── README.md
├── main.tf
├── modules
│   └── artifact-registry
│       └── artifact-registry.tf
└── tf.sh
variable "gcp_project_id" {}
variable "artifact_registry_location" {
type = string
# https://cloud.google.com/storage/docs/locations
description = "Where to locate the Artifact Registry location."
}
# Artifact Registry repository for backend applications
resource "google_artifact_registry_repository" "backend" {
provider = google-beta
project = var.gcp_project_id
location = var.artifact_registry_location
repository_id = "backend"
description = "backend application"
format = "DOCKER"
}
Also modify main.tf as follows.
 # Artifact Registry repository to use for Cloud Run deployments.
+module "artifact-registry" {
+  source                     = "./modules/artifact-registry"
+  gcp_project_id             = var.gcp_project_id
+  artifact_registry_location = var.primary_region
+}
In the example above you can see variables of the form var.xxxx; these let you inject information that you don't want to expose in the resource definitions. Create a new file, variable.tf, to declare them.
variable "gcp_project_id" {}
variable "primary_region" {}
By defining this file, you can reference the variables in your resource definitions as var.gcp_project_id. So how do you inject the actual values? There are several ways; here we will create a terraform.tfvars file and keep it out of version control.
https://www.terraform.io/language/values/variables#variable-definitions-tfvars-files
gcp_project_id = "gql-training"
primary_region = "us-central1"
Now that we have set the values of the variables to be injected, let's run init and plan. These commands will not create any resources.
./tf.sh init
Initializing modules...
- artifact-registry in modules/artifact-registry
If you define a new module, you will need to run init again.
./tf.sh plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.artifact-registry.google_artifact_registry_repository.backend will be created
+ resource "google_artifact_registry_repository" "backend" {
+ create_time = (known after apply)
+ description = "backend application"
+ format = "DOCKER"
+ id = (known after apply)
+ location = "us-central1"
+ name = (known after apply)
+ project = "xxxxxxxxx"
+ repository_id = "backend"
+ update_time = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
You should see something like this. If the plan is as you intended, type the command to actually apply it.
./tf.sh apply
You may get an error saying that a required API needs to be enabled. Terraform manages resources through Google Cloud APIs, so some resources can only be created once the corresponding API is enabled. Just follow the link in the message and enable the API from your browser, then run the command again.
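If you prefer, the API can also be enabled from the command line; the service name below assumes the error was about Artifact Registry, which is what this example creates.
# Enable the Artifact Registry API for the project
gcloud services enable artifactregistry.googleapis.com --project <your Google Cloud project ID>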
./tf.sh apply
Resources: 1 added, 0 changed, 0 destroyed.
If you see a message like this, you are good to go. Finally, view the Artifact Registry in your browser and confirm that the resources have been created.
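You can also list the repositories from the command line, assuming the gcloud artifacts subcommand is available in your SDK version.
# List Artifact Registry repositories in the region used above
gcloud artifacts repositories list --location us-central1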
Summary
We created Google Cloud resources using Terraform. Please give it a try.
Sources
The source code introduced in this article is available in the following repository.
https://github.com/cm-wada-yusuke/gql-nest-prisma-training/tree/main/google-cloud-terraform