Michael Mekuleyi

Utilizing Google Cloud Storage as a remote backend for Terraform

Introduction

In this article, I will discuss using Google Cloud Storage as a remote backend for your Terraform configuration. This article is a sequel to my article on Deploying a Remote backend with AWS S3 and Terraform; feel free to check out that article to learn more about remote state backends using AWS.

We will provision a Google Cloud Storage (GCS) bucket and use it to store its own state, then go ahead and provision compute instances on Google Cloud Platform, storing their state file in the remote backend we enabled earlier. This article assumes a working knowledge of Google Cloud (cloud.google.com) and an understanding of Terraform (https://www.terraform.io/). You can find the repository for this tutorial here

Setting up the remote backend

The idea of a remote backend is to safely move your state file from your local computer to a reliable, remote location; this eases collaboration and multi-tasking. To get started, head to the global-resources folder in the GitHub repository to view the configuration scripts. First, we will deploy a GCS bucket using local state, then we will use that GCS bucket to manage its own state. Head over to global-resources/terraform.tf.



terraform {
  required_version = ">= 1.3.0, < 2.0.0"

  /* backend "gcs" {
    bucket = "<YOUR-BUCKET-NAME>"
    prefix = "global-resources/"
  } */

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.40"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}



Here we initialize the required providers and set the provider configuration values. Note that the backend block is commented out; this is because we are yet to deploy the GCS bucket. Head over to global-resources/bucket.tf to see the configuration that deploys the Cloud Storage bucket.



resource "google_storage_bucket" "default" {
  name          = var.bucket_name
  force_destroy = true
  location      = "US"
  storage_class = "STANDARD"
  versioning {
    enabled = true
  }
}



Here, we define the bare minimum values for a GCS bucket and enable versioning to help us preserve state history. Also, the GCS remote backend supports state locking by default, so there is no need to provision a separate lock store (as you would with DynamoDB on AWS). After entering the necessary variables in global-resources/variables.tf, we go on to deploy the configuration.
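For reference, a minimal variables.tf could look like the sketch below. The variable names match those referenced in the provider and bucket blocks, but the defaults are only placeholders; the file in the repository may differ.

variable "project_id" {
  description = "The GCP project to deploy resources into"
  type        = string
}

variable "region" {
  description = "The GCP region for the provider"
  type        = string
  default     = "us-central1" # placeholder default
}

variable "zone" {
  description = "The GCP zone for the provider"
  type        = string
  default     = "us-central1-a" # placeholder default
}

variable "bucket_name" {
  description = "Globally unique name for the GCS state bucket"
  type        = string
}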

First, we initialize the configuration.



michael@monarene:~$ terraform init



Terraform Init on Configuration

Then we check the configuration plan.



michael@monarene:~$ terraform plan



Terraform plan on Configuration

Next, we apply the configuration.



michael@monarene:~$ terraform apply --auto-approve



Next, we log in to the GCP console to confirm that the storage bucket has been created.

Google Cloud Storage Bucket on GCP
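If you prefer the command line, you can also list the buckets in your configured project with gsutil (assuming the Cloud SDK is installed and authenticated); the new state bucket should appear in the output.

michael@monarene:~$ gsutil ls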

Now we are going to switch the configuration to use the bucket it just created to store its own state file. Head over to global-resources/terraform.tf and uncomment the backend block in the terraform block.



terraform {
  required_version = ">= 1.3.0, < 2.0.0"

  backend "gcs" {
    bucket = "<YOUR-BUCKET-NAME>"
    prefix = "global-resources/"
  }

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.40"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}



Now migrate to the remote state by re-initializing the Terraform configuration.



michael@monarene:~$ terraform init



When prompted about copying the existing state to the new backend, type "yes".

Re-initializing the Terraform State file
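If you are scripting this step and cannot answer the prompt interactively, terraform init also accepts the -migrate-state and -force-copy flags to opt into the migration and suppress the confirmation (verify the exact behaviour against your Terraform version):

michael@monarene:~$ terraform init -migrate-state -force-copy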

Now we head over to the console to confirm that our remote state is in GCS; you can find the state file in the global-resources folder in the GCS bucket.

State file in Google Cloud Storage Bucket
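You can also confirm this from the terminal by listing the objects under the prefix; the bucket name below is a placeholder for your own.

michael@monarene:~$ gsutil ls gs://<YOUR-BUCKET-NAME>/global-resources/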

We now have a fully configured remote backend backed by a Google Cloud Storage bucket.

Applying the remote backend in other configurations

Now we will provision three compute instances using the count meta-argument in Terraform and store their state file in the GCS bucket. First, head over to compute-instance/terraform.tf to see the Terraform configuration.



terraform {
  required_version = "~> 1.3"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.40"
    }
  }
  backend "gcs" {
    bucket = "<YOUR-BUCKET-NAME>"
    prefix = "compute-instance"
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}



Here we declare the requirements for the configuration. Also notice that the backend block in the terraform block is set to gcs and points to the remote backend we created earlier. Let's head over to compute-instance/main.tf to view the main configuration.
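As a side note, if you would rather not hard-code the bucket name in terraform.tf, terraform init lets you pass backend settings at initialization time with the -backend-config flag; this works best with a partial backend block where the bucket is omitted from the file. A hedged example, with the bucket name as a placeholder:

michael@monarene:~$ terraform init -backend-config="bucket=<YOUR-BUCKET-NAME>"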



resource "google_compute_instance" "this" {
  provider     = google
  count        = 3
  name         = "${var.server_name}-${count.index}"
  machine_type = var.machine_type
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  network_interface {
    network = "default"
    access_config {
      // Ephemeral public IP
    }
  }
  metadata_startup_script = file("startup.sh")

  tags = ["http-server"]
}



Here we define three compute instances using the count meta-argument, and we set other important values like the machine type and server name. You can also check out compute-instance/startup.sh for the startup script that runs when each server is spun up. Finally, I have added an http-server tag to allow ingress on the default HTTP port. Please go ahead and study the configuration to understand how the different parts connect.
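Keep in mind that the http-server tag only takes effect if a firewall rule targets it. In the GCP console, ticking "Allow HTTP traffic" creates a default-allow-http rule for this tag; if you are managing everything with Terraform and that rule does not exist, a minimal sketch of an equivalent rule could look like the one below (the resource name and CIDR range are my own assumptions, not part of the repository):

resource "google_compute_firewall" "allow_http" {
  name    = "allow-http-ingress" # hypothetical name, not from the repo
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  # Open to the internet for demo purposes; tighten this range in real setups.
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["http-server"]
}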

To deploy this configuration, we start by initializing it.



michael@monarene:~$ terraform init



Initializing the Google Compute Resource on Terraform

Note that the state is now stored in the Google Cloud Storage backend. Next, we view the plan for the configuration.



michael@monarene:~$ terraform plan



Terraform plan output for the compute instances

Next, we apply the configuration and get its outputs.



michael@monarene:~$ terraform apply --auto-approve



Applying the Terraform Configuration
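If you also want the apply step to print useful details, an outputs.tf along the lines of the sketch below would surface the instance names and their ephemeral public IPs. The repository may already define its own outputs; this sketch is my own assumption.

output "instance_names" {
  description = "Names of the provisioned compute instances"
  value       = google_compute_instance.this[*].name
}

output "instance_public_ips" {
  description = "Ephemeral public IP attached to each instance"
  value       = google_compute_instance.this[*].network_interface[0].access_config[0].nat_ip
}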

To verify that our compute instances have been deployed correctly, we log in to the Compute Engine console on GCP to check.

GCP Compute Console showing the deployed Compute Resources

Finally, we check our GCS bucket to verify that the instance state file is stored in the bucket.

Compute instance state file stored in the GCS bucket

We have successfully created a remote backend with a GCS bucket and used that bucket to store our state files. Please go ahead and destroy all the resources you have created to avoid extra billing charges.
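To tear everything down, run a destroy in each folder. Destroying the compute instances first and the state bucket last is the safer order, since the bucket holds the state for both configurations:

michael@monarene:~$ cd compute-instance && terraform destroy --auto-approve
michael@monarene:~$ cd ../global-resources && terraform destroy --auto-approve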

Conclusion

In this article, we have explored creating a Google Cloud Storage bucket, using it to store our state files, and then using that remote backend for other deployments. We did everything with Terraform as an IaC tool to manage the infrastructure. You can also find the GitHub repository for this article here. I hope you learnt a lot; feel free to like, share and comment on this article. Thank you!
