<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: sam-nash</title>
    <description>The latest articles on DEV Community by sam-nash (@samnash).</description>
    <link>https://dev.to/samnash</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F932088%2Fb60554fb-e442-40af-b81f-ec47499048cb.png</url>
      <title>DEV Community: sam-nash</title>
      <link>https://dev.to/samnash</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/samnash"/>
    <language>en</language>
    <item>
      <title>Deploy Docker images on AWS</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Thu, 12 Dec 2024 11:41:45 +0000</pubDate>
      <link>https://dev.to/samnash/deploy-docker-images-on-aws-4efa</link>
      <guid>https://dev.to/samnash/deploy-docker-images-on-aws-4efa</guid>
<description>&lt;p&gt;Storing Docker images in Amazon Elastic Container Registry (ECR) and running them as containers on AWS can be accomplished through several AWS services. Here are the most common and effective options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Amazon Elastic Container Service (ECS)&lt;/strong&gt;&lt;br&gt;
Overview: ECS is a fully managed container orchestration service. It supports Docker containers and allows you to deploy, manage, and scale applications easily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modes&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate&lt;/strong&gt;: A serverless compute engine that runs containers without needing to manage the underlying infrastructure. It’s a good option if you want to focus on your containers without managing EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EC2&lt;/strong&gt;: Run containers on EC2 instances you manage. This option gives more control over the underlying infrastructure, allowing you to configure networking, storage, and more.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Integration with ECR&lt;/u&gt;: ECS has direct integration with ECR, allowing you to pull images securely and easily. Simply specify the ECR image URI in the ECS task definition.&lt;/p&gt;

&lt;p&gt;Use Cases: Suitable for microservices architectures, batch processing, web applications, and serverless deployments where you want AWS to handle much of the infrastructure management.&lt;/p&gt;
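
&lt;p&gt;As a rough sketch of the ECS path (the account ID, region, role, and resource names below are placeholders): push the image to ECR, then register a task definition that references the ECR image URI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Docker to your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the local image
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Register a Fargate task definition that points at the ECR image
aws ecs register-task-definition \
  --family my-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --container-definitions '[{"name":"my-app","image":"123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest","portMappings":[{"containerPort":80}]}]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;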

&lt;p&gt;&lt;strong&gt;2. Amazon Elastic Kubernetes Service (EKS)&lt;/strong&gt;&lt;br&gt;
Overview: EKS is a managed service for running Kubernetes workloads on AWS without operating your own control plane.&lt;/p&gt;

&lt;p&gt;Flexibility: EKS offers more flexibility, especially if you are familiar with Kubernetes or have workloads that need Kubernetes features. EKS allows you to leverage Kubernetes' powerful ecosystem for advanced orchestration and customizations.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Integration with ECR&lt;/u&gt;: EKS can pull images directly from ECR, leveraging IAM roles for secure access.&lt;/p&gt;

&lt;p&gt;Use Cases: Ideal for organizations with experience in Kubernetes, or for applications that need complex orchestration features, hybrid-cloud, or multi-cloud deployments.&lt;/p&gt;
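
&lt;p&gt;On EKS, the ECR image URI goes straight into your Kubernetes manifests or kubectl commands; for example (account ID, region, and names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment my-app \
  --image=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;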

&lt;p&gt;&lt;strong&gt;3. AWS App Runner&lt;/strong&gt;&lt;br&gt;
Overview: AWS App Runner is a fully managed service that provides a simple way to deploy containerized web applications or APIs from ECR without managing servers or infrastructure.&lt;/p&gt;

&lt;p&gt;Advantages: Minimal setup, fast deployment, and automatic scaling. It handles much of the operational overhead.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Integration with ECR&lt;/u&gt;: Connects directly with ECR for deploying images. App Runner automatically manages load balancing and scaling.&lt;/p&gt;

&lt;p&gt;Use Cases: Suitable for simpler applications or services (such as web apps and APIs) where you need speed and simplicity in deployment rather than complex orchestration.&lt;/p&gt;
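
&lt;p&gt;A sketch of creating an App Runner service from a private ECR image with the AWS CLI (the service name, image URI, port, and access-role ARN below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws apprunner create-service \
  --service-name my-web-app \
  --source-configuration '{
    "ImageRepository": {
      "ImageIdentifier": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "ImageRepositoryType": "ECR",
      "ImageConfiguration": {"Port": "8080"}
    },
    "AuthenticationConfiguration": {
      "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole"
    }
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;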

&lt;p&gt;&lt;strong&gt;4. AWS Lambda (with Docker Image support)&lt;/strong&gt;&lt;br&gt;
Overview: AWS Lambda allows you to run containerized applications as Lambda functions, which is ideal for event-driven or serverless workloads.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Integration with ECR&lt;/u&gt;: Lambda can pull images directly from ECR, allowing you to deploy container images up to 10 GB in size.&lt;/p&gt;

&lt;p&gt;Use Cases: Best for microservices that benefit from serverless architecture, short-lived processes, or workloads that don’t require the full orchestration capabilities of ECS or EKS.&lt;/p&gt;
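
&lt;p&gt;Deploying an ECR image as a Lambda function can be sketched like this (the function name, role ARN, and image URI are placeholders; the image must be built from a Lambda-compatible base image):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda create-function \
  --function-name my-container-fn \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-fn:latest \
  --role arn:aws:iam::123456789012:role/my-lambda-exec-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;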

&lt;h2&gt;
  
  
  Comparison Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Management Level&lt;/th&gt;
&lt;th&gt;Scalability&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ECS (Fargate)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Microservices, low-management applications&lt;/td&gt;
&lt;td&gt;Fully managed&lt;/td&gt;
&lt;td&gt;Auto-scalable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ECS (EC2)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;More control over infrastructure&lt;/td&gt;
&lt;td&gt;Semi-managed&lt;/td&gt;
&lt;td&gt;Manual scaling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;EKS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Advanced orchestrations, hybrid cloud&lt;/td&gt;
&lt;td&gt;Kubernetes-managed&lt;/td&gt;
&lt;td&gt;Kubernetes-native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;App Runner&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Quick web app deployment, low complexity&lt;/td&gt;
&lt;td&gt;Fully managed&lt;/td&gt;
&lt;td&gt;Auto-scalable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lambda&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Event-driven, serverless functions&lt;/td&gt;
&lt;td&gt;Fully managed&lt;/td&gt;
&lt;td&gt;Event-based&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Terraform Module Sources</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Thu, 12 Dec 2024 11:36:51 +0000</pubDate>
      <link>https://dev.to/samnash/terraform-module-sources-32gn</link>
      <guid>https://dev.to/samnash/terraform-module-sources-32gn</guid>
      <description>&lt;p&gt;How Terraform Finds Reusable Modules&lt;/p&gt;

&lt;p&gt;Terraform lets you break down your infrastructure configuration into smaller, reusable pieces called modules. But where does Terraform find the code for these modules? That's where the source argument comes in.&lt;/p&gt;

&lt;p&gt;The source argument tells Terraform where to look for the module's code. There are several options available, depending on where you store your modules:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local Files&lt;/strong&gt;: Perfect for keeping modules within the same project. Just use a path starting with ./ or ../ to point Terraform to the module directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "local_database" {
  source = "./modules/database"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Terraform Registry&lt;/strong&gt;: The Terraform Registry is HashiCorp's public platform for discovering providers, modules, and policies. Registry module addresses take the form &lt;code&gt;&amp;lt;namespace&amp;gt;/&amp;lt;name&amp;gt;/&amp;lt;provider&amp;gt;&lt;/code&gt;, and you can pin a specific release with the &lt;code&gt;version&lt;/code&gt; argument.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "aws_ec2_instance" {
  source  = "hashicorp/aws/ec2"
  version = "~&amp;gt; 4.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Version Control Systems (VCS)&lt;/strong&gt;: If your modules live in a Git repository (like GitHub or Bitbucket), or a Mercurial repository, you can provide the URL to the repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "private_module" {
  source = "git@github.com:your-org/your-private-module.git"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
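
&lt;p&gt;In practice you will usually pin a Git-sourced module to a specific tag or commit with the &lt;code&gt;ref&lt;/code&gt; query parameter, so upstream changes can't alter your infrastructure unexpectedly (the repository URL and tag below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "pinned_module" {
  source = "git::https://github.com/your-org/your-module.git?ref=v1.2.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;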



&lt;p&gt;&lt;strong&gt;HTTP/HTTPS URLs&lt;/strong&gt;: Modules can also be downloaded from web servers over HTTP or HTTPS. Terraform can even follow redirects to other source locations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "http_module" {
  source = "https://your-company-registry.com/modules/my-module"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Cloud Storage&lt;/strong&gt;: Modules stored in Amazon S3 buckets can be accessed by prefixing the URL with &lt;code&gt;s3::&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "s3_module" {
  source = "s3::https://s3.amazonaws.com/your-bucket/my-module.zip"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Terraform Commands Cheat Sheet</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Wed, 11 Dec 2024 13:39:55 +0000</pubDate>
      <link>https://dev.to/samnash/terraform-commands-cheat-sheet-36ij</link>
      <guid>https://dev.to/samnash/terraform-commands-cheat-sheet-36ij</guid>
<description>&lt;p&gt;This post serves as a comprehensive cheat sheet for commonly used Terraform commands, with brief explanations and, where useful, practical examples. Whether you're initializing a workspace, creating execution plans, managing state files, or destroying resources, this guide has you covered. Perfect for beginners and seasoned practitioners alike, it helps you leverage Terraform to its full potential.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform General Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform version&lt;/strong&gt; : Show the Terraform version.&lt;br&gt;
&lt;strong&gt;terraform init&lt;/strong&gt; : Initialize the working directory and download the required providers and plugins.&lt;br&gt;
&lt;strong&gt;terraform init -reconfigure&lt;/strong&gt; : Reconfigure the backend and update the configuration.&lt;br&gt;
&lt;strong&gt;terraform init -get-plugins=false&lt;/strong&gt; : Initialize without downloading plugins (deprecated; provider plugins are now managed automatically).&lt;br&gt;
&lt;strong&gt;terraform init -verify-plugins=false&lt;/strong&gt; : Initialize without verifying plugin signatures (deprecated; use with caution).&lt;br&gt;
&lt;strong&gt;terraform fmt&lt;/strong&gt; : Format the Terraform configuration files to canonical format.&lt;br&gt;
&lt;strong&gt;terraform fmt -recursive&lt;/strong&gt; : Format all files in the directory and subdirectories recursively.&lt;br&gt;
&lt;strong&gt;terraform fmt -check&lt;/strong&gt; : Check the format of the Terraform configuration files. Does not modify files.&lt;br&gt;
&lt;strong&gt;terraform validate&lt;/strong&gt; : Validate the syntax and consistency of Terraform configuration files.&lt;br&gt;
&lt;strong&gt;terraform validate -backend=false&lt;/strong&gt; : Validate configuration without initializing the backend.&lt;br&gt;
&lt;strong&gt;terraform validate -json&lt;/strong&gt; : Validate configuration and output results in JSON format.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Plan Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform plan&lt;/strong&gt; : Create an execution plan showing the changes Terraform will make.&lt;br&gt;
&lt;strong&gt;terraform plan -out=tf_plan.out&lt;/strong&gt; : Save the generated execution plan to a file.&lt;br&gt;
&lt;strong&gt;terraform plan -destroy&lt;/strong&gt; : Create a plan to destroy all infrastructure.&lt;br&gt;
&lt;strong&gt;terraform plan -var-file=my_vars.tfvars&lt;/strong&gt; : Include variables from the specified file in the plan.&lt;br&gt;
&lt;strong&gt;terraform plan -var=my_variable=value&lt;/strong&gt; : Override variables with specific values for the plan.&lt;br&gt;
&lt;strong&gt;terraform plan -parallelism=4&lt;/strong&gt; : Limit the number of parallel resource operations (default is 10).&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Apply Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform apply&lt;/strong&gt; : Apply changes based on the current configuration.&lt;br&gt;
&lt;strong&gt;terraform apply tf_plan.out&lt;/strong&gt; : Apply changes based on a saved execution plan file.&lt;br&gt;
&lt;strong&gt;terraform apply -auto-approve&lt;/strong&gt; : Apply changes without prompting for approval.&lt;br&gt;
&lt;strong&gt;terraform apply -target=resource_address&lt;/strong&gt; : Apply changes only to the specified resource(s).&lt;br&gt;
&lt;strong&gt;terraform apply -var=my_variable=value&lt;/strong&gt; : Override variables with specific values during apply.&lt;br&gt;
&lt;strong&gt;terraform apply -var-file=my_vars.tfvars&lt;/strong&gt; : Use variables from the specified file during apply.&lt;br&gt;
&lt;strong&gt;terraform apply -lock=true&lt;/strong&gt; : Lock the state file before applying changes (default behavior).&lt;br&gt;
&lt;strong&gt;terraform apply -lock-timeout=20s&lt;/strong&gt; : Set a timeout for acquiring a state lock.&lt;br&gt;
&lt;strong&gt;terraform apply -input=false&lt;/strong&gt; : Suppress interactive input prompts.&lt;br&gt;
&lt;strong&gt;terraform apply -replace=resource_address&lt;/strong&gt; : Force replacement of a specific resource.&lt;br&gt;
&lt;strong&gt;terraform apply -parallelism=4&lt;/strong&gt; : Limit the number of concurrent resource operations during apply.&lt;br&gt;
&lt;strong&gt;terraform apply -refresh-only&lt;/strong&gt; : Refresh the state without applying changes (available since Terraform v0.15.4).&lt;/p&gt;
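
&lt;p&gt;A typical plan-then-apply workflow chains several of the commands above, using a saved plan file so that exactly the reviewed changes are applied (the variable file name is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform fmt -check
terraform validate
terraform plan -var-file=my_vars.tfvars -out=tf_plan.out
terraform apply tf_plan.out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;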

&lt;h2&gt;
  
  
  Terraform Destroy Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform destroy&lt;/strong&gt; : Destroy all managed infrastructure.&lt;br&gt;
&lt;strong&gt;terraform destroy -auto-approve&lt;/strong&gt; : Destroy resources without prompting for confirmation.&lt;br&gt;
&lt;strong&gt;terraform destroy -target=resource_address&lt;/strong&gt; : Destroy specific resource(s).&lt;br&gt;
&lt;strong&gt;terraform destroy -var=my_variable=value&lt;/strong&gt; : Override variables during destroy.&lt;br&gt;
&lt;strong&gt;terraform destroy -var-file=my_vars.tfvars&lt;/strong&gt; : Use variables from the specified file during destroy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform State Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform state list&lt;/strong&gt; : List all resources in the state file.&lt;br&gt;
&lt;strong&gt;terraform state show resource_address&lt;/strong&gt; : Show the details of a specific resource.&lt;br&gt;
&lt;strong&gt;terraform state mv resource_address new_resource_address&lt;/strong&gt; : Move a resource to a new address.&lt;br&gt;
&lt;strong&gt;terraform state rm resource_address&lt;/strong&gt; : Remove a resource from the state file.&lt;br&gt;
&lt;strong&gt;terraform state pull &amp;gt; terraform.tfstate&lt;/strong&gt; : Download the current state file.&lt;br&gt;
&lt;strong&gt;terraform state push terraform.tfstate&lt;/strong&gt; : Upload a state file to a remote backend.&lt;br&gt;
&lt;strong&gt;terraform state replace-provider source_provider target_provider&lt;/strong&gt; : Replace a provider in the state file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Import Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform import resource_address external_id&lt;/strong&gt; : Import existing infrastructure into the Terraform state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Workspace Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform workspace list&lt;/strong&gt; : List all available workspaces.&lt;br&gt;
&lt;strong&gt;terraform workspace show&lt;/strong&gt; : Show the currently selected workspace.&lt;br&gt;
&lt;strong&gt;terraform workspace new workspace_name&lt;/strong&gt; : Create a new workspace.&lt;br&gt;
&lt;strong&gt;terraform workspace select workspace_name&lt;/strong&gt; : Switch to a specific workspace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Graph Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform graph&lt;/strong&gt; : Generate a visual representation of the dependency graph.&lt;br&gt;
&lt;strong&gt;terraform graph -type=plan&lt;/strong&gt; : Generate a graph based on the execution plan.&lt;br&gt;
&lt;strong&gt;terraform graph -type=apply&lt;/strong&gt; : Generate a graph based on the apply operation.&lt;br&gt;
&lt;strong&gt;terraform graph -draw-cycles&lt;/strong&gt; : Highlight cyclic dependencies in the graph.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Output Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform output&lt;/strong&gt; : Display all output variables.&lt;br&gt;
&lt;strong&gt;terraform output -json&lt;/strong&gt; : Display output variables in JSON format.&lt;br&gt;
&lt;strong&gt;terraform output variable_name&lt;/strong&gt; : Show the value of a specific output variable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Show Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform show&lt;/strong&gt; : Show the current state or plan.&lt;br&gt;
&lt;strong&gt;terraform show -json&lt;/strong&gt; : Output the state or plan in JSON format.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Commands
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;terraform refresh&lt;/strong&gt; : Update the state file to reflect the current state of resources (deprecated in favor of &lt;em&gt;terraform apply -refresh-only&lt;/em&gt;).&lt;br&gt;
&lt;strong&gt;terraform taint resource_address&lt;/strong&gt; : Mark a resource for recreation on the next apply (deprecated in favor of &lt;em&gt;terraform apply -replace=resource_address&lt;/em&gt;).&lt;br&gt;
&lt;strong&gt;terraform untaint resource_address&lt;/strong&gt; : Remove the taint from a resource.&lt;br&gt;
&lt;strong&gt;terraform providers&lt;/strong&gt; : List providers used in the configuration.&lt;br&gt;
&lt;strong&gt;terraform providers schema -json&lt;/strong&gt; : Display provider schema details in JSON (the -json flag is required).&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>cheatsheet</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Terraform Basics</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Wed, 11 Dec 2024 07:49:20 +0000</pubDate>
      <link>https://dev.to/samnash/tearraform-basics-268m</link>
      <guid>https://dev.to/samnash/tearraform-basics-268m</guid>
      <description>&lt;p&gt;Terraform is an Infrastructure as Code (IaC) tool that enables you to manage your infrastructure through code instead of manual processes. Using Terraform, you define your infrastructure in configuration files (written primarily in HashiCorp Configuration Language - HCL), specifying the components required to run your applications, such as servers, storage, and networking resources.&lt;/p&gt;

&lt;p&gt;Here’s a more detailed breakdown of how Terraform operates:&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Files:
&lt;/h2&gt;

&lt;p&gt;These files define the desired state of your infrastructure, detailing resources like virtual machines, databases, and network settings. Configuration files are written in a declarative syntax using the HashiCorp Configuration Language (HCL) and describe the resources you want to create or modify.&lt;/p&gt;

&lt;p&gt;Terraform uses the resource type identifiers (e.g., aws_instance for AWS EC2, google_compute_instance for Google Compute Engine) to route the request to the correct provider plugin.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "example" {
    ami           = "ami-0c55b159cbfafe1f0"
    instance_type = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Writing a Terraform Configuration File
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;1. Structure of Configuration Files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform configuration is split into different sections:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider&lt;/strong&gt;: Specifies the cloud provider or platform (e.g., Google, AWS, Azure) you’re working with.&lt;br&gt;
&lt;strong&gt;Resource&lt;/strong&gt;: Defines the resources to be created or managed.&lt;br&gt;
&lt;strong&gt;Variable&lt;/strong&gt;: Declares inputs that can be reused across the configuration.&lt;br&gt;
&lt;strong&gt;Output&lt;/strong&gt;: Defines outputs to display or pass values after applying changes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2. Organizing Terraform Files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can split your Terraform configuration across multiple .tf files in a single directory. Terraform loads all .tf files in the directory and processes them together. Here's a typical structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/my-terraform-project
│
├── main.tf           # Main configuration file
├── variables.tf      # Variables declaration
├── outputs.tf        # Outputs declaration
└── terraform.tfvars  # Variables values (optional)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(provider.tf) : Contains the provider information&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  project = "my-project-id"
  region  = "us-west1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(main.tf) : Contains the core configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "e2-medium"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "projects/debian-cloud/global/images/family/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;variables.tf: Defines reusable variables.&lt;br&gt;
outputs.tf: Declares outputs for the module.&lt;br&gt;
terraform.tfvars: Stores values for variables (if not provided directly or via environment variables).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3. Declaring Variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Variables are defined using the variable block in variables.tf:&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "project_id" {
  description = "The GCP project ID"
  type        = string
}

variable "machine_type" {
  description = "The type of VM machine"
  type        = string
  default     = "e2-medium"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;4. Passing Variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Variables can be passed in several ways:&lt;/p&gt;

&lt;p&gt;In the terraform.tfvars file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project_id = "my-project-id"
machine_type = "e2-standard-4"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Via the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var="project_id=my-project-id"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using Environment Variables: Set an environment variable with the TF_VAR_ prefix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TF_VAR_project_id="my-project-id"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;5. Referencing Variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Variables are accessed using the var keyword:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  project = var.project_id
  region  = "us-west1"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = var.machine_type
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "projects/debian-cloud/global/images/family/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;6. Variable Data Types&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;String: Represents a single line of text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Number: Represents a numeric value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bool: Represents a boolean value, either true or false.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;List: Represents a collection of values in a specific order.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Map: Represents a collection of key-value pairs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Object: Represents a structured collection of named attributes, each with its own type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set: Represents a collection of unique values with no specific order.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
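
&lt;p&gt;As an illustration (the variable names and defaults below are made up), the composite types are declared like this in variables.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "zones" {
  type    = list(string)
  default = ["us-west1-a", "us-west1-b"]
}

variable "labels" {
  type    = map(string)
  default = {
    env  = "dev"
    team = "platform"
  }
}

variable "instance" {
  type = object({
    name         = string
    machine_type = string
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;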

&lt;h2&gt;
  
  
  Providers:
&lt;/h2&gt;

&lt;p&gt;Terraform uses providers to interact with various cloud services and platforms. Providers are plugins that specify the resources available for a given service. For example, the AWS provider allows Terraform to manage resources like EC2 instances and S3 buckets. Similar providers exist for Google Cloud, Azure, Kubernetes, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform Providers and API Calls
&lt;/h3&gt;

&lt;p&gt;Terraform providers are essentially plugins that allow Terraform to interact with various cloud services, like AWS, Google Cloud, and Azure. These providers are responsible for making the necessary API calls to these services to create, update, or delete resources as specified in your configuration files.&lt;/p&gt;

&lt;p&gt;Here’s a step-by-step overview of the process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialization (&lt;code&gt;terraform init&lt;/code&gt;)&lt;/strong&gt;: When you run this command, Terraform checks your configuration files for the required providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Downloading Providers&lt;/strong&gt;: Terraform downloads the necessary provider plugins from the Terraform Registry (or other specified sources). These plugins contain the code to interact with the provider’s API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication&lt;/strong&gt;: Each provider plugin includes mechanisms for authentication. For example, with AWS, you might need to configure access keys or use IAM roles. The provider plugin handles these details and establishes a secure connection to the API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Calls&lt;/strong&gt;: Once initialized and authenticated, Terraform uses the provider plugins to validate the syntax and translate the configuration into API calls specific to the provider. For example, if your configuration includes a compute instance, the Google provider plugin will make the appropriate API calls to GCP to create the instance with the specified parameters.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Example of GCP Provider Initialization
&lt;/h3&gt;

&lt;p&gt;Here’s a snippet of a Terraform configuration that uses the Google provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  project = "your-project-id"
  region  = "us-west1"
}

resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "e2-micro"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "projects/debian-cloud/global/images/family/debian-11"
    }
  }

  network_interface {
    network       = "default"
    access_config {
      # This is necessary to allow external internet access
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run &lt;em&gt;terraform init&lt;/em&gt;, the output might look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Initializing the backend...

Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding latest version of hashicorp/google...
- Installing hashicorp/google v6.11.2...
- Installed hashicorp/google v6.11.2 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you run terraform init, Terraform will download the Google provider plugin.&lt;/li&gt;
&lt;li&gt;This plugin includes the code required to authenticate with Google Cloud and interact with its APIs.&lt;/li&gt;
&lt;li&gt;When you run terraform apply, the plugin uses Google Cloud APIs to create and manage resources, such as Compute Engine instances or Cloud Storage buckets, based on your configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Terraform Format
&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;terraform fmt&lt;/em&gt; command automatically formats your Terraform configuration files to follow the standard formatting conventions. This includes aligning indentation, organizing blocks, and correcting syntax spacing. Consistent formatting improves the readability of your code, making it easier for teams to collaborate and review changes. For example, if a configuration file has inconsistent indentation or misplaced brackets, running terraform fmt will fix these issues automatically. Here’s how you use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform fmt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command scans all &lt;code&gt;.tf&lt;/code&gt; files in the current directory (and subdirectories with &lt;code&gt;-recursive&lt;/code&gt;) and updates them to match the standard format. On success, it prints the names of the files it changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;main.tf
variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform validate
&lt;/h2&gt;

&lt;p&gt;The terraform validate command checks the syntax and logical structure of your configuration files for errors. It ensures that the configuration is syntactically correct and that all required arguments are specified. However, it does not interact with the remote provider or check resource availability; it only validates the local files. For example, if you define a google_compute_instance resource but forget to specify a mandatory field like machine_type, terraform validate will return an error indicating the missing attribute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform validate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample output for a successful validation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Success! The configuration is valid.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Plan
&lt;/h2&gt;

&lt;p&gt;The terraform plan command is used to preview the changes that will be made to your infrastructure based on the current configuration files.&lt;/p&gt;

&lt;p&gt;What It Does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It compares the current state of the infrastructure (recorded in the state file) to the desired state (defined in the configuration files).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It shows you a list of the actions that Terraform will take, such as creating, modifying, or destroying resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It provides a dry run of what will happen, without actually making any changes to the infrastructure.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -var-file="$TARGET_GCP_PROJECT.tfvars" -out=tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;google_project_service.cloudresourcemanager: Refreshing state... [id=gcp-project-id/cloudresourcemanager.googleapis.com]
google_project_service.iamcredentials: Refreshing state... [id=gcp-project-id/iamcredentials.googleapis.com]
google_project_service.iam: Refreshing state... [id=gcp-project-id/iam.googleapis.com]
google_project_service.sts: Refreshing state... [id=gcp-project-id/sts.googleapis.com]
google_service_account.sa: Refreshing state... [id=projects/gcp-project-id/serviceAccounts/project-sa@gcp-project-id.iam.gserviceaccount.com]
google_storage_bucket.terraform_state: Refreshing state... [id=gcp-project-id-terraform-state]
google_compute_instance.vm_instance-2: Refreshing state... [id=projects/gcp-project-id/zones/asia-southeast1-a/instances/my-vm-instance]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_instance.vm_instance will be created
  + resource "google_compute_instance" "vm_instance" {
      + can_ip_forward       = false
      + cpu_platform         = (known after apply)
      + creation_timestamp   = (known after apply)
      + current_status       = (known after apply)
      + deletion_protection  = false
      + effective_labels     = {
          + "goog-terraform-provisioned" = "true"
        }
      + id                   = (known after apply)
      + instance_id          = (known after apply)
      + label_fingerprint    = (known after apply)
      + machine_type         = "e2-medium"
      + metadata_fingerprint = (known after apply)
      + min_cpu_platform     = (known after apply)
      + name                 = "my-vm-instance"
      + project              = "gcp-project-id"
      + self_link            = (known after apply)
      + tags_fingerprint     = (known after apply)
      + terraform_labels     = {
          + "goog-terraform-provisioned" = "true"
        }
      + zone                 = "asia-southeast1-a"

      + boot_disk {
          + auto_delete                = true
          + device_name                = (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
          + mode                       = "READ_WRITE"
          + source                     = (known after apply)

          + initialize_params {
              + image                  = "debian-cloud/debian-11"
              + labels                 = (known after apply)
              + provisioned_iops       = (known after apply)
              + provisioned_throughput = (known after apply)
              + resource_policies      = (known after apply)
              + size                   = (known after apply)
              + type                   = (known after apply)
            }
        }

      + confidential_instance_config (known after apply)

      + guest_accelerator (known after apply)

      + network_interface {
          + internal_ipv6_prefix_length = (known after apply)
          + ipv6_access_type            = (known after apply)
          + ipv6_address                = (known after apply)
          + name                        = (known after apply)
          + network                     = "default"
          + network_ip                  = (known after apply)
          + stack_type                  = (known after apply)
          + subnetwork                  = (known after apply)
          + subnetwork_project          = (known after apply)

          + access_config {
              + nat_ip       = (known after apply)
              + network_tier = (known after apply)
            }
        }

      + reservation_affinity (known after apply)

      + scheduling (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

──────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Apply
&lt;/h2&gt;

&lt;p&gt;Once satisfied with the proposed changes, you use terraform apply to execute the plan. This command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executes the changes identified in the terraform plan step.&lt;/li&gt;
&lt;li&gt;Updates or creates resources to match the desired state defined in the configuration files.&lt;/li&gt;
&lt;li&gt;Modifies the infrastructure while updating the state file to reflect the new configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, if your configuration includes creating a Google Compute Engine instance with specific parameters, terraform apply will send API requests to Google Cloud to provision the instance according to the configuration.&lt;br&gt;
&lt;/p&gt;
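&lt;p&gt;To execute the plan saved in the previous step, pass the plan file to terraform apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply "tfplan"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;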

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, if you modify an attribute, such as the machine type, Terraform will update the existing resource without recreating it if possible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The combination of plan and apply ensures a safe, iterative process to manage infrastructure changes, giving you visibility and control at every step.&lt;/p&gt;

&lt;h2&gt;
  
  
  State Management
&lt;/h2&gt;

&lt;p&gt;State management is a critical aspect of Terraform's infrastructure-as-code workflow. Terraform uses a state file to store information about the current state of your infrastructure. This file, typically named &lt;em&gt;terraform.tfstate&lt;/em&gt;, acts as a snapshot of the resources managed by Terraform, recording their attributes and configurations. The state file is essential for ensuring that Terraform can manage your infrastructure efficiently and track changes over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  How State Management Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Initial State Creation:&lt;br&gt;
When you run terraform apply for the first time, Terraform provisions the resources defined in your configuration and generates a state file. For example, if you create a Google Compute Engine instance, the state file records details such as the instance name, machine type, zone, and IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tracking and Comparing Changes:&lt;br&gt;
Each time you modify your configuration and run terraform plan or terraform apply, Terraform compares the current state in the state file with the desired state in your configuration. Based on this comparison, Terraform identifies the required actions, such as adding new resources, updating existing ones, or destroying obsolete resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;State Updates:&lt;br&gt;
After applying changes, Terraform updates the state file to reflect the current state of your infrastructure. This ensures that subsequent operations use accurate and up-to-date information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;State File Locking&lt;br&gt;
Terraform implements state file locking to prevent concurrent operations that could corrupt the state file or lead to inconsistent changes. When using a remote backend (e.g., Google Cloud Storage, AWS S3, or Terraform Cloud), Terraform automatically locks the state file before performing any operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, if one user is running terraform apply, any other attempts to run Terraform commands that modify the state file will block until the lock is released. Once the operation completes, Terraform unlocks the state file automatically. This mechanism is crucial for teams working collaboratively, ensuring that only one process can modify the state file at a time.&lt;/p&gt;

&lt;p&gt;In cases where locking is supported by the backend but fails (e.g., due to misconfiguration), Terraform will notify you with an error message. Manually unlocking a state file should only be done when you're certain no other process is running, as it can lead to inconsistencies.&lt;/p&gt;
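&lt;p&gt;When a manual unlock is genuinely required, Terraform provides the force-unlock command, which takes the lock ID reported in the locking error message (shown here as a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform force-unlock LOCK_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;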

&lt;h3&gt;
  
  
  Key Features of State Management
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Resource Dependencies:&lt;br&gt;
The state file tracks resource dependencies, ensuring that Terraform applies changes in the correct order. For instance, it will provision a network before creating resources that depend on it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Efficient Updates:&lt;br&gt;
Terraform uses the state file to identify and execute only the necessary changes, avoiding resource recreation unless explicitly required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remote State Backends:&lt;br&gt;
Storing state files remotely enables better collaboration, secure storage, and features like locking and versioning. Example of configuring a remote backend on Google Cloud Storage:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
terraform {
  backend "gcs" {
    bucket = "your-bucket-name"
    prefix = "terraform/state"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
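&lt;p&gt;After adding or changing a backend block, run terraform init again; if a local state file already exists, Terraform offers to migrate it to the new backend (the -migrate-state flag requests this explicitly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init -migrate-state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;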



&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;Sensitive Data in State Files:&lt;br&gt;
State files may contain sensitive information such as passwords or API keys. Use encryption and access controls to secure the state file, especially when stored remotely.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Best Practices
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Use Remote State Backends: For team environments, always store the state file in a remote backend with locking enabled to avoid conflicts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Encrypt State Files: Use encryption to protect sensitive data stored in the state file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version Control: Check the .terraform.lock.hcl file into version control to ensure consistent provider versions but exclude terraform.tfstate from version control using .gitignore.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid Manual Edits: Never manually edit the state file, as this can lead to inconsistencies and unexpected behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;State File Backup: Enable versioning on remote backends to recover previous states if needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
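&lt;p&gt;Following the version-control advice above, a minimal .gitignore for a Terraform project might look like this (the .terraform directory holds locally cached providers and modules and should also stay out of version control):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.terraform/
terraform.tfstate
terraform.tfstate.backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;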

&lt;h2&gt;
  
  
  Modules and Reusability
&lt;/h2&gt;

&lt;p&gt;In Terraform, modules are a fundamental concept that allows you to organize and reuse infrastructure code. By using modules, you can break down your infrastructure into smaller, reusable components, improving maintainability, scalability, and reducing redundancy. Modules provide a way to group related resources together and treat them as a single unit, making it easier to manage complex infrastructures.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are Modules?
&lt;/h3&gt;

&lt;p&gt;A module in Terraform is a collection of resource definitions and configurations that can be reused across different parts of your infrastructure. Modules help you encapsulate resources that serve a specific purpose, such as setting up a web server, creating a database, or configuring networking components. Instead of defining these resources multiple times, you can use a module to reuse the same configuration in different places or projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Root Module&lt;/strong&gt;: The configuration in the directory where you run terraform apply is considered the root module. This is where your main configuration lives.&lt;br&gt;
&lt;strong&gt;Child Modules&lt;/strong&gt;: These are modules that are called by the root module or other modules. They are typically stored in separate directories and referenced in the root module using module blocks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs9lzoev4vp4bv84cwwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxs9lzoev4vp4bv84cwwg.png" alt=" " width="584" height="964"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Benefits of Using Modules
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Reusability: Once a module is written, it can be reused across different projects, environments, or regions. This reduces the need to repeat similar resource definitions and minimizes the risk of errors.&lt;/li&gt;
&lt;li&gt;Maintainability: By organizing your infrastructure code into modules, you make it easier to maintain and update. Changes to a module only need to be made in one place and can be propagated throughout all instances where the module is used.&lt;/li&gt;
&lt;li&gt;Modularization: Modules enable you to break down complex infrastructure into smaller, more manageable pieces. Each module can focus on a specific task, such as setting up compute resources, networking, or storage, and be reused independently.&lt;/li&gt;
&lt;li&gt;Consistency:
Modules promote best practices and standardize how resources are configured and managed. This ensures infrastructure is provisioned in a predictable and reliable way.&lt;/li&gt;
&lt;li&gt;Collaboration:
Teams can share modules across the organization, streamlining the process of setting up common infrastructure components. This encourages collaboration, shared knowledge, and the adoption of infrastructure as code practices.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  How to Create and Use Modules
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Creating a Module&lt;/strong&gt;: A module is simply a directory containing Terraform configuration files. For example, you might create a module for a Google Compute Engine instance.&lt;/p&gt;

&lt;p&gt;Example of a module to create a Google Compute Engine instance (modules/instance/main.tf):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_instance" "example" {
  name         = var.name
  machine_type = var.machine_type
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = var.image
    }
  }

  network_interface {
    network       = "default"
    access_config {}
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(modules/instance/variables.tf)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "name" {}
variable "machine_type" {}
variable "zone" {}
variable "image" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
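&lt;p&gt;The empty variable blocks above work, but declaring a type, description, and optional default makes the module's interface self-documenting. A sketch for one of the variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "machine_type" {
  type        = string
  description = "Machine type for the instance"
  default     = "e2-medium"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;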



&lt;p&gt;&lt;strong&gt;Using a Module&lt;/strong&gt;: Once a module is created, it can be called and reused in the root configuration or other modules. You call a module by using the module block, where you specify the &lt;strong&gt;source&lt;/strong&gt; and any necessary input variables.&lt;/p&gt;

&lt;p&gt;Example of using the instance module in a root module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "web_server" {
  source        = "./modules/instance"
  name          = "web-server-1"
  machine_type  = "e2-medium"
  zone          = "us-west1-a"
  image         = "projects/debian-cloud/global/images/family/debian-11"
}

module "db_server" {
  source        = "./modules/instance"
  name          = "db-server-1"
  machine_type  = "e2-medium"
  zone          = "us-west1-b"
  image         = "projects/debian-cloud/global/images/family/debian-11"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Passing Variables to Modules&lt;/strong&gt;: You can pass variables to a module to customize its behavior for different use cases. For instance, you might pass different machine types, zones, or images when using the same module for different servers.&lt;/p&gt;

&lt;p&gt;Example of passing a different set of values to the same module (the names and machine type below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "app_server" {
  source        = "./modules/instance"
  name          = "app-server-1"
  machine_type  = "e2-standard-2"
  zone          = "us-west1-b"
  image         = "projects/debian-cloud/global/images/family/debian-11"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output from Modules&lt;/strong&gt;: Modules can also define outputs that expose information about the resources they create. This allows you to reference values from a module in other parts of your configuration.&lt;/p&gt;

&lt;p&gt;Example of defining an output in a module (modules/instance/outputs.tf):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_name" {
  value = google_compute_instance.example.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example of using the output in the root module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "web_server" {
  source        = "./modules/instance"
  name          = "web-server-1"
  machine_type  = "e2-medium"
  zone          = "us-west1-a"
  image         = "projects/debian-cloud/global/images/family/debian-11"
}

resource "google_compute_firewall" "allow_http" {
  name    = "allow-http-${module.web_server.instance_name}"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Best Practices for Using Modules
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use Clear Naming Conventions: Use descriptive names for both modules and variables to ensure they are easy to understand and reuse.&lt;/li&gt;
&lt;li&gt;Version Control for Modules: Store modules in separate directories or even version-controlled repositories to keep track of changes over time.&lt;/li&gt;
&lt;li&gt;Use Module Sources: You can source modules from local directories, versioned Git repositories, or the Terraform Registry. The Terraform Registry contains publicly available modules for many common use cases, such as AWS, Google Cloud, and Azure.&lt;/li&gt;
&lt;li&gt;Avoid Hardcoding Values: Use variables for values that might change between environments or regions to make modules more flexible and reusable.&lt;/li&gt;
&lt;li&gt;Modularize Common Resources: Create modules for commonly used resources (e.g., networking, security groups, VMs) to prevent duplication of configuration across your infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above example illustrates the use of modules from a local file system. Refer to &lt;a href="https://dev.to/samnash/terraform-module-sources-32gn"&gt;Terraform module sources&lt;/a&gt; to learn more about how modules can be passed from other sources too.&lt;br&gt;
&lt;/p&gt;
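&lt;p&gt;As a sketch, a non-local source might reference a Git repository (the URL, subdirectory, and tag below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "web_server" {
  source       = "git::https://github.com/your-org/terraform-modules.git//instance?ref=v1.0.0"
  name         = "web-server-1"
  machine_type = "e2-medium"
  zone         = "us-west1-a"
  image        = "projects/debian-cloud/global/images/family/debian-11"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;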

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Terraform Destroy: 
The terraform destroy command is used to safely and efficiently delete all resources managed by a Terraform configuration. This command ensures that your infrastructure is removed in a controlled manner, based on the dependency graph that Terraform maintains.

### Key Features of terraform destroy:
**Dependency-Aware Deletion**:
Terraform determines the correct order to delete resources, respecting dependencies between them. For example, it will detach a disk from a compute instance before deleting the instance itself.

**State File Utilization**:
Terraform uses the state file to track all resources under management. It compares the current state with the configuration to identify which resources need to be deleted.

**Selective Resource Deletion**:
While terraform destroy typically removes all resources defined in the configuration, you can use targeted commands to delete specific resources selectively.

### How to Use terraform destroy
Basic Syntax - 
Run the following command in your Terraform workspace to delete all managed resources:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;terraform destroy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Terraform will:

1. Load the state file.
2. Generate a plan for deleting resources.
3. Prompt for confirmation before proceeding.

### Options for terraform destroy
**Skip Confirmation**: Add the -auto-approve flag to bypass the confirmation prompt:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;terraform destroy -auto-approve&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;⚠️ Use this cautiously, especially in production environments, as it immediately initiates resource destruction.

**Target Specific Resources**: If you want to delete specific resources only, use the -target flag:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;terraform destroy -target=google_compute_instance.example&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This command will destroy only the specified resource while leaving others intact.

**Specify a State File**: Use the -state flag to specify a state file if you’re working outside the default location:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;terraform destroy -state=custom_state.tfstate&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Example output:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;google_compute_instance.example: Refreshing state... [id=example-instance]&lt;br&gt;
Plan: 0 to add, 0 to change, 1 to destroy.&lt;br&gt;
Do you really want to destroy? Terraform will delete all resources.&lt;br&gt;
  Enter a value: yes&lt;/p&gt;

&lt;p&gt;google_compute_instance.example: Destroying... [id=example-instance]&lt;br&gt;
google_compute_instance.example: Destruction complete after 12s.&lt;br&gt;
Destroy complete! Resources: 1 destroyed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
### Additional Considerations
**State File Locking**:
During terraform destroy, Terraform locks the state file to prevent concurrent operations that could cause resource drift or corruption.

**Remote State Management**:
If using a remote backend (e.g., Google Cloud Storage or AWS S3), ensure your state file is accessible and unlocked. Terraform automatically locks and unlocks the remote state during operations.

**Preventing Accidental Deletion**:
Use lifecycle.prevent_destroy in your resource configuration to safeguard critical resources:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;resource "google_compute_instance" "example" {&lt;br&gt;
  name         = "critical-instance"&lt;br&gt;
  machine_type = "e2-medium"&lt;br&gt;
  zone         = "us-west1-a"&lt;/p&gt;

&lt;p&gt;lifecycle {&lt;br&gt;
    prevent_destroy = true&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;
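&lt;p&gt;With this lifecycle setting in place, any plan that would delete the resource fails with an error similar to the following (wording varies by Terraform version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Instance cannot be destroyed

Resource google_compute_instance.example has lifecycle.prevent_destroy set,
but the plan calls for this resource to be destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;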

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
**Handling Orphaned Resources**:
If resources were deleted manually or outside of Terraform, terraform destroy might not find them in the state file. Use terraform state rm to clean up the state file.

### Best Practices for Using terraform destroy

- 
Test in Non-Production Environments:
Always test terraform destroy in staging or test environments to understand its impact.

- 
Backup Your State File:
Before running terraform destroy, back up your state file to ensure you can recover from unintended changes.

- 
Dry Run with terraform plan:
Run terraform plan -destroy to preview the destruction plan before executing the actual command:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;terraform plan -destroy&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


In summary, Terraform is a robust tool for automating the creation, deployment, and management of infrastructure using code. Its compatibility with multiple cloud providers and platforms makes it a flexible and powerful choice for infrastructure management.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>infrastructureascode</category>
      <category>terraform</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Infrastructure as Code</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Tue, 10 Dec 2024 11:42:19 +0000</pubDate>
      <link>https://dev.to/samnash/infrastructure-as-code-3k3j</link>
      <guid>https://dev.to/samnash/infrastructure-as-code-3k3j</guid>
      <description>&lt;p&gt;In the fast-evolving realm of IT, Infrastructure as Code (IaC) has emerged as a game-changer. This blog aims to explore the nuances of IaC, its importance, benefits, and how it's reshaping the landscape of IT infrastructure management.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Infrastructure as Code?
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code is a practice in IT that involves managing and provisioning computing infrastructure through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools. In essence, it allows you to treat your infrastructure in the same way you treat your application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why IaC Matters
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistency and Repeatability&lt;/strong&gt;: IaC ensures that the same environment is recreated consistently every time. This eliminates the "it works on my machine" problem and provides a uniform environment for development, testing, and production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: With IaC, you can easily scale your infrastructure up or down based on demand. This agility is crucial for modern applications that need to handle varying loads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Speed and Efficiency&lt;/strong&gt;: Automation is at the heart of IaC. By automating infrastructure provisioning and management, teams can deploy applications faster and more efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-Effectiveness&lt;/strong&gt;: Automating infrastructure management reduces the need for manual intervention, which can lead to significant cost savings. Plus, IaC allows for better resource management, ensuring you only pay for what you use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version Control&lt;/strong&gt;: Just like application code, infrastructure code can be version controlled. This means you can track changes, revert to previous versions, and understand the history of your infrastructure configurations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Key Tools in IaC
&lt;/h2&gt;

&lt;p&gt;Several tools have become synonymous with IaC, each offering unique features and capabilities. Some of the most popular ones include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt;: A versatile tool by HashiCorp, known for its platform-agnostic capabilities and wide community support.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CloudFormation&lt;/strong&gt;: Amazon's offering, specifically tailored for managing AWS resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ansible&lt;/strong&gt;: A Red Hat tool that provides simple yet powerful automation for a variety of IT environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Puppet&lt;/strong&gt;: Known for its strong configuration management features and scalability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chef&lt;/strong&gt;: Another powerful configuration management tool, focusing on automation across the entire IT stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with IaC
&lt;/h2&gt;

&lt;p&gt;Starting with IaC can be a bit overwhelming, but here are some steps to guide you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Choose the Right Tool&lt;/strong&gt;: Assess your current infrastructure and choose a tool that best fits your needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn the Basics&lt;/strong&gt;: Invest time in understanding the basic concepts and syntax of your chosen tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start Small&lt;/strong&gt;: Begin with a small project to get hands-on experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate with CI/CD Pipelines&lt;/strong&gt;: Integrate IaC with your Continuous Integration and Continuous Deployment (CI/CD) pipelines for streamlined deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborate and Share&lt;/strong&gt;: Make use of version control systems like Git to collaborate and share your infrastructure code with your team.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Infrastructure as Code is a paradigm shift in how IT infrastructure is managed. By treating infrastructure like code, organizations can achieve greater consistency, scalability, and efficiency, all while reducing costs and speeding up deployment times.&lt;/p&gt;

</description>
      <category>infrastructureascode</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Google Pub/Sub</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Wed, 20 Nov 2024 10:38:43 +0000</pubDate>
      <link>https://dev.to/samnash/pub-sub-710</link>
      <guid>https://dev.to/samnash/pub-sub-710</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction: The Power of Event-Driven Architectures&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Have you ever wondered how large-scale systems like Google, Netflix, or Spotify manage to process millions of real-time events, such as sending notifications, updating dashboards, or recommending personalized content, all without missing a beat? The secret lies in event-driven architectures, and at the heart of such systems is a powerful tool: Google Cloud Pub/Sub.&lt;/p&gt;

&lt;p&gt;In a world where businesses demand seamless integration and instant processing of data, traditional point-to-point messaging systems often fall short. Enter Google Pub/Sub, a fully managed messaging service designed to decouple applications, handle massive data streams, and enable real-time communication with unparalleled reliability and scalability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Pub/Sub is Essential for Modern Systems&lt;/strong&gt;&lt;br&gt;
Modern applications require more than just static workflows; they thrive on dynamic, event-driven processes that can adapt and respond to real-time data. Here are some key reasons why Google Pub/Sub has become indispensable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Processing&lt;/strong&gt;: From IoT devices generating millions of sensor readings to financial systems detecting fraudulent transactions, Pub/Sub ensures these events are captured and processed in real time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Pub/Sub seamlessly scales to handle millions of messages per second, supporting the ever-growing demands of modern businesses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Decoupling of Services&lt;/strong&gt;: It simplifies architectures by allowing independent components to communicate asynchronously, improving maintainability and flexibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Global Reach&lt;/strong&gt;: Built on Google's global infrastructure, Pub/Sub offers low-latency message delivery across regions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration-Friendly&lt;/strong&gt;: It integrates seamlessly with other Google Cloud services like BigQuery, Dataflow, Cloud Functions, and Cloud Storage, making it a cornerstone for building robust, interconnected systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Google Cloud Pub/Sub is a fully managed messaging service designed for real-time communication, enabling independent applications to send and receive messages seamlessly. It's a powerful tool for building scalable and reliable systems. &lt;/p&gt;

&lt;p&gt;In this tutorial, we'll delve into the core concepts of Pub/Sub, explore practical examples using the command-line interface (CLI) and Python SDK, and discuss various use cases.&lt;/p&gt;
&lt;h2&gt;
  
  
  Key Concepts &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Topic: A named resource to which messages are published.&lt;/li&gt;
&lt;li&gt;Subscription: A named resource that receives messages from a specific topic.&lt;/li&gt;
&lt;li&gt;Publisher: An application that sends messages to a topic.&lt;/li&gt;
&lt;li&gt;Subscriber: An application that receives messages from a subscription.&lt;/li&gt;
&lt;li&gt;Message: The core unit of communication in Pub/Sub, consisting of data (payload) and attributes (key-value metadata).&lt;/li&gt;
&lt;li&gt;Acknowledgment (Ack): Subscribers must confirm receipt of a message by acknowledging it. Unacknowledged messages are redelivered to ensure reliable processing.&lt;/li&gt;
&lt;li&gt;Push vs. Pull Subscriptions

&lt;ul&gt;
&lt;li&gt;Pull: Subscribers explicitly fetch messages.&lt;/li&gt;
&lt;li&gt;Push: Pub/Sub pushes messages to an HTTP(S) endpoint.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Dead Letter Queue (DLQ): A separate topic for routing undeliverable messages after a defined number of delivery attempts. Useful for troubleshooting.&lt;/li&gt;
&lt;li&gt;Retention Policy: Controls how long unacknowledged messages are retained (acknowledged messages can optionally be retained as well). Default: 7 days.&lt;/li&gt;
&lt;li&gt;Filters: Enable subscribers to receive only messages that match specific criteria based on attributes.&lt;/li&gt;
&lt;li&gt;Ordering Keys: Ensures messages with the same ordering key are delivered in the order they were published.&lt;/li&gt;
&lt;li&gt;Exactly Once Delivery: Guarantees no duplicate messages are delivered when configured correctly.&lt;/li&gt;
&lt;li&gt;Flow Control: Helps subscribers manage the rate of message delivery to prevent being overwhelmed.&lt;/li&gt;
&lt;li&gt;IAM Permissions and Roles: Pub/Sub uses Google Cloud IAM to control access. Key roles:

&lt;ul&gt;
&lt;li&gt;Publisher: roles/pubsub.publisher&lt;/li&gt;
&lt;li&gt;Subscriber: roles/pubsub.subscriber&lt;/li&gt;
&lt;li&gt;Viewer: roles/pubsub.viewer&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Message Deduplication: Prevents duplicate messages caused by retries or network issues.&lt;/li&gt;
&lt;li&gt;Message Expiration (TTL): Automatically drops messages from a subscription after a specified time-to-live, reducing storage costs.&lt;/li&gt;
&lt;li&gt;Backlog: The collection of unacknowledged messages in a subscription, enabling message replay if required.&lt;/li&gt;
&lt;li&gt;Schema Registry: Defines structured message formats (e.g., Avro, Protocol Buffers), ensuring consistent data validation.&lt;/li&gt;
&lt;li&gt;Batching: Groups multiple messages together for efficient publishing and delivery, improving performance.&lt;/li&gt;
&lt;li&gt;Regional Endpoints: Publish messages to region-specific endpoints for compliance and reduced latency.&lt;/li&gt;
&lt;li&gt;Cross-Project Topics and Subscriptions: Share Pub/Sub resources across projects for multi-project architectures.&lt;/li&gt;
&lt;li&gt;Monitoring and Metrics: Integrated with Cloud Monitoring, providing insights like throughput, acknowledgment latency, and backlog size.&lt;/li&gt;
&lt;li&gt;Snapshot: Captures the state of a subscription at a specific time, enabling message replay from that point.&lt;/li&gt;
&lt;li&gt;Message Encryption: Messages are encrypted in transit and at rest by default, with the option to use Customer-Managed Encryption Keys (CMEK).&lt;/li&gt;
&lt;/ul&gt;
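&lt;p&gt;To make a few of these concepts concrete, here is a plain-Python sketch of a message as a bytes payload plus attributes and an ordering key. The &lt;code&gt;matches&lt;/code&gt; helper is a simplified stand-in for Pub/Sub's attribute-filter syntax (e.g. &lt;code&gt;attributes.origin = "web"&lt;/code&gt;), not the real API:&lt;/p&gt;

```python
# Minimal sketch of a Pub/Sub message: a bytes payload plus key-value attributes.
# The filter helper below is a simplified stand-in for Pub/Sub's attribute-filter
# syntax; all names here are illustrative only.

message = {
    "data": b'{"event": "signup"}',                   # payload (bytes)
    "attributes": {"origin": "web", "region": "eu"},  # key-value metadata
    "ordering_key": "user-123",                       # same key => in-order delivery
}

def matches(message, key, value):
    """Return True if the message attribute `key` equals `value`."""
    return message["attributes"].get(key) == value

print(matches(message, "origin", "web"))  # True
print(matches(message, "origin", "ios"))  # False
```

A subscription with a filter behaves like this check applied server-side: non-matching messages are simply never delivered to that subscription.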
&lt;h2&gt;
  
  
  Getting Started &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Pre-requisites &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Set Up Your Google Cloud Project:

&lt;ul&gt;
&lt;li&gt;Create a new Google Cloud project or select an existing one.&lt;/li&gt;
&lt;li&gt;Enable the Pub/Sub API.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install the Google Cloud SDK:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download and install the SDK from the official website.&lt;/li&gt;
&lt;li&gt;Authenticate your Google Cloud account using:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="c"&gt;# using cli&lt;/span&gt;
 gcloud auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Creating a Topic and Subscription &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a topic using cli&lt;/span&gt;
gcloud pubsub topics create my-topic

&lt;span class="c"&gt;# Create a subscription using cli&lt;/span&gt;
gcloud pubsub subscriptions create my-subscription &lt;span class="nt"&gt;--topic&lt;/span&gt; my-topic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Using the Python SDK
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pubsub_v1&lt;/span&gt;

&lt;span class="c1"&gt;# Create a Publisher client
&lt;/span&gt;&lt;span class="n"&gt;publisher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pubsub_v1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;PublisherClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;topic_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;topic_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-project-id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-topic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_topic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create a Subscriber client
&lt;/span&gt;&lt;span class="n"&gt;subscriber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pubsub_v1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SubscriberClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;subscription_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscription_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-project-id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-subscription&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_subscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subscription_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;topic_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Publishing Messages &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# using cli&lt;/span&gt;
gcloud pubsub topics publish my-topic &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--message&lt;/span&gt; &lt;span class="s2"&gt;"Hello, Pub/Sub!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# using the Python SDK
# Publish a message
&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello, Pub/Sub!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;future&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;topic_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;future&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Subscribing to Messages &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Using the CLI&lt;/span&gt;
gcloud pubsub subscriptions pull my-subscription &lt;span class="nt"&gt;--auto-ack&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Using the Python SDK
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Received message: {}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subscription_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;callback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# The subscriber will stay alive until interrupted
&lt;/span&gt;&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Use Cases with Practical Examples &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Triggers with Cloud Functions &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Use Case: Automatically trigger a process whenever a new message is published to a Pub/Sub topic. For example, sending an email notification when a new user registers.&lt;/p&gt;

&lt;p&gt;Solution: Use Cloud Functions to listen to Pub/Sub topics and handle messages.&lt;/p&gt;

&lt;p&gt;Steps:&lt;br&gt;
Pre-Requisites&lt;br&gt;
-- Enable the APIs &lt;code&gt;run.googleapis.com, cloudbuild.googleapis.com, eventarc.googleapis.com&lt;/code&gt;&lt;br&gt;
-- Grant the role &lt;code&gt;roles/logging.logWriter&lt;/code&gt; to the default compute service account&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud projects add-iam-policy-binding &lt;span class="nv"&gt;$GCP_PROJECT&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--member&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"serviceAccount:&amp;lt;ID&amp;gt;-compute@developer.gserviceaccount.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"roles/logging.logWriter"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a Pub/Sub topic:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud pubsub topics create user-registration
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6py4c58t46f8lcw58fv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6py4c58t46f8lcw58fv.png" alt="pub-sub-topic" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;Write a Cloud Function:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_email_notification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sending email notification for: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deploy the Cloud Function:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  gcloud functions deploy sendEmailNotification &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--runtime&lt;/span&gt; python39 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--trigger-topic&lt;/span&gt; user-registration &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--entry-point&lt;/span&gt; send_email_notification
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Publish a message to test:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  gcloud pubsub topics publish user-registration &lt;span class="nt"&gt;--message&lt;/span&gt; &lt;span class="s2"&gt;"New User: John Doe"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Result: The Cloud Function logs the email notification process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf8yiwq0mn96ocxxjri5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf8yiwq0mn96ocxxjri5.png" alt="cloud-function-run" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Streaming Data to BigQuery &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Use Case: Process and analyze large volumes of event data, such as IoT sensor readings or transaction logs, by streaming messages from Pub/Sub to BigQuery.&lt;/p&gt;

&lt;p&gt;Solution: Use a pre-built Dataflow template to ingest data into BigQuery.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a Pub/Sub topic and subscription:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud pubsub topics create sensor-data
gcloud pubsub subscriptions create sensor-data-sub &lt;span class="nt"&gt;--topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sensor-data
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6iryucwuw2l0nohq3cod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6iryucwuw2l0nohq3cod.png" alt="pub-sub-topic" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqxjbq7ah7owpixpdlse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqxjbq7ah7owpixpdlse.png" alt="pub-sub-subscription" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;Enable the BigQuery API and create a BigQuery dataset and table (save the schema below as &lt;code&gt;schema.json&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"temperature"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FLOAT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NULLABLE"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"humidity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FLOAT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NULLABLE"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
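&lt;p&gt;Messages published to the topic must be JSON objects whose fields match this schema, or the Dataflow template routes them to an error table. A quick stdlib pre-check (the &lt;code&gt;conforms&lt;/code&gt; helper is hypothetical, not part of any Google library) looks like this:&lt;/p&gt;

```python
import json

# Hypothetical helper: check that a JSON payload only uses fields declared in
# schema.json, with FLOAT-compatible values (ints are accepted, NULLABLE allows None).
schema = [
    {"name": "temperature", "type": "FLOAT", "mode": "NULLABLE"},
    {"name": "humidity", "type": "FLOAT", "mode": "NULLABLE"},
]

def conforms(payload: str) -> bool:
    record = json.loads(payload)
    allowed = {field["name"] for field in schema}
    return set(record) <= allowed and all(
        isinstance(v, (int, float)) or v is None for v in record.values()
    )

print(conforms('{"temperature": 25, "humidity": 60}'))  # True
print(conforms('{"pressure": 1013}'))                   # False
```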



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```
bq mk sensor_dataset
bq mk --table sensor_dataset.sensor_table \ 
schema.json
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdllgn3vknlo9crroloxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdllgn3vknlo9crroloxo.png" alt=" " width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;p&gt;Enable the Dataflow API and run the Dataflow job:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# enable dataflow api&lt;/span&gt;
https://console.developers.google.com/apis/api/dataflow.googleapis.com/overview?project&lt;span class="o"&gt;=&lt;/span&gt;your-project-id
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```bash
# run the dataflow job
gcloud dataflow jobs run pubsub-to-bigquery \
--gcs-location gs://dataflow-templates-us-central1/latest/PubSub_to_BigQuery \
--region us-central1 \
--parameters inputTopic=projects/your-project-id/topics/sensor-data,outputTableSpec=your-project-id:sensor_dataset.sensor_table
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6s7pj0r5ovis1czdixdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6s7pj0r5ovis1czdixdc.png" alt=" " width="800" height="130"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50x3x4hx0iknwpus4zmx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50x3x4hx0iknwpus4zmx.png" alt=" " width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;
&lt;p&gt;Publish messages to the topic:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud pubsub topics publish sensor-data &lt;span class="nt"&gt;--message&lt;/span&gt; &lt;span class="s1"&gt;'{"temperature": 25, "humidity": 60}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk1gunkmy0g6slqyprjn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk1gunkmy0g6slqyprjn.png" alt=" " width="800" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Result: The message data should appear in the BigQuery table for further analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sqd0k6i848k82ogp4iq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sqd0k6i848k82ogp4iq.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Logging Events with Sinks &lt;a&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Use Case: Centralize and process logs by routing Cloud Logging events to a Pub/Sub topic for downstream processing, like alerting or archiving.&lt;/p&gt;

&lt;p&gt;Solution: Create a logging sink targeting a Pub/Sub topic.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Pub/Sub topic:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   gcloud pubsub topics create log-events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create a logging sink:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    gcloud logging sinks create log-sink &lt;span class="se"&gt;\&lt;/span&gt;
    pubsub.googleapis.com/projects/your-project-id/topics/log-events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Set IAM permissions for the sink service account:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud pubsub topics add-iam-policy-binding log-events &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--member&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;serviceAccount:service-account-email &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;roles/pubsub.publisher
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Create a subscriber to process the logs:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pubsub_v1&lt;/span&gt;

&lt;span class="n"&gt;subscriber&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pubsub_v1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SubscriberClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;subscription_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscription_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;your-project-id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;log-events-sub&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Received log: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;subscription_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;callback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Listening for log events...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: Log messages flow into Pub/Sub and can be processed by your custom subscriber.&lt;/p&gt;
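&lt;p&gt;Each message delivered by the sink carries a Cloud Logging LogEntry serialized as JSON in the message data. Inside the subscriber callback you would typically parse it and act on fields such as severity; the sample entry below is illustrative, not an exhaustive LogEntry:&lt;/p&gt;

```python
import json

# Illustrative LogEntry payload as it would arrive in message.data;
# the fields shown are a small subset of a real LogEntry.
sample = b'{"severity": "ERROR", "textPayload": "disk full", "resource": {"type": "gce_instance"}}'

entry = json.loads(sample.decode("utf-8"))
if entry.get("severity") == "ERROR":
    print(f"ALERT: {entry.get('textPayload')}")
# prints: ALERT: disk full
```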

&lt;h2&gt;
  
  
  Streaming Messages to Cloud Storage &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Use Case: Archive real-time data, such as application usage metrics or user activity logs, to Cloud Storage for long-term storage.&lt;/p&gt;

&lt;p&gt;Solution: Use Dataflow to write Pub/Sub messages to a Cloud Storage bucket.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Pub/Sub topic and subscription:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud pubsub topics create app-metrics
gcloud pubsub subscriptions create app-metrics-sub &lt;span class="nt"&gt;--topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;app-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create a Cloud Storage bucket:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gsutil mb gs://your-bucket-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Run the Dataflow job:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud dataflow &lt;span class="nb"&gt;jobs &lt;/span&gt;run pubsub-to-gcs &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--gcs-location&lt;/span&gt; gs://dataflow-templates-us-central1/latest/PubSub_to_Cloud_Storage_Text &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--region&lt;/span&gt; us-central1 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="nv"&gt;inputTopic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;projects/your-project-id/topics/app-metrics,outputDirectory&lt;span class="o"&gt;=&lt;/span&gt;gs://your-bucket-name/data/,outputFilenamePrefix&lt;span class="o"&gt;=&lt;/span&gt;metrics,outputFileSuffix&lt;span class="o"&gt;=&lt;/span&gt;.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Publish messages:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud pubsub topics publish app-metrics &lt;span class="nt"&gt;--message&lt;/span&gt; &lt;span class="s1"&gt;'{"event": "page_view", "user": "123"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: Messages are written as JSON files in the specified Cloud Storage bucket.&lt;/p&gt;
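&lt;p&gt;If you prefer publishing these metrics from application code instead of the CLI, here is a minimal Python sketch (it assumes the &lt;code&gt;google-cloud-pubsub&lt;/code&gt; package and keeps the &lt;code&gt;your-project-id&lt;/code&gt; placeholder from the Dataflow command):&lt;/p&gt;

```python
import json

# Pub/Sub payloads are raw bytes; encode each event as UTF-8 JSON so the
# Dataflow template writes well-formed JSON lines to Cloud Storage.
def encode_event(event: dict) -> bytes:
    return json.dumps(event).encode("utf-8")

def publish_metric(project_id: str, topic: str, event: dict) -> str:
    """Publish one metrics event and return the server-assigned message ID."""
    # Imported here so encode_event stays testable without the client installed.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic)
    return publisher.publish(topic_path, encode_event(event)).result()

# Example (requires GCP credentials):
# publish_metric("your-project-id", "app-metrics", {"event": "page_view", "user": "123"})
```

&lt;p&gt;Keeping the encoding helper separate means it can be unit-tested without GCP credentials.&lt;/p&gt;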

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Google Pub/Sub is a versatile tool for building scalable and reliable applications. By understanding the core concepts and leveraging the CLI and Python SDK, you can effectively use Pub/Sub to solve a variety of real-world problems.&lt;/p&gt;

</description>
      <category>pubsub</category>
    </item>
    <item>
      <title>Streamline Your GitHub Workflows with Composite Actions</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Tue, 19 Nov 2024 07:50:57 +0000</pubDate>
      <link>https://dev.to/samnash/streamline-your-github-workflows-with-composite-actions-2pme</link>
      <guid>https://dev.to/samnash/streamline-your-github-workflows-with-composite-actions-2pme</guid>
      <description>&lt;p&gt;GitHub Actions has become a go-to tool for automating workflows, enabling developers to streamline everything from CI/CD to code quality checks. While it’s powerful out of the box, workflows can quickly become unwieldy as the number of repeated steps grows. Composite actions provide a solution to this problem, enabling you to package reusable steps into modular, maintainable components.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore what composite actions are, why they’re beneficial, and how to create one using a real-world example.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Composite Actions?
&lt;/h2&gt;

&lt;p&gt;Composite actions allow you to encapsulate multiple steps into a single reusable action. Instead of duplicating code across workflows, you can create a standalone block that performs specific tasks and use it across repositories. Composite actions live within your repository and are defined using YAML, making them easy to version, extend, and customize.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Using Composite Actions
&lt;/h2&gt;

&lt;p&gt;Here are some reasons to embrace composite actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reusability: Share the same logic across multiple workflows and repositories.&lt;/li&gt;
&lt;li&gt;Maintainability: Update logic in one place, and all workflows using it benefit.&lt;/li&gt;
&lt;li&gt;Readability: Simplify complex workflows by abstracting repetitive logic.&lt;/li&gt;
&lt;li&gt;Modularity: Create building blocks that other developers or teams can easily use.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Example: Terraform Operations as a Composite Action
&lt;/h2&gt;

&lt;p&gt;Let’s dive into a practical example. Suppose you have a GitHub Actions workflow that runs Terraform commands such as init, fmt, validate, and plan for managing infrastructure on Google Cloud. Instead of duplicating these steps across workflows, you can package them into a composite action.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Problem: Repetition in Workflows&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s a snippet of a typical Terraform workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - name: Terraform Init
        run: terraform init -backend-config="bucket=my-project-tfstate"
        working-directory: terraform

      - name: Terraform Format
        run: terraform fmt
        working-directory: terraform

      - name: Terraform Validate
        run: terraform validate
        working-directory: terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each step is repeated every time Terraform commands are needed. If you manage multiple workflows or projects, this quickly becomes inefficient and error-prone.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;The Solution: A Composite Action&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ll create a composite action to handle all Terraform operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Set Up the Directory Structure&lt;/p&gt;

&lt;p&gt;Composite actions are stored in the .github/actions/ directory. Create a folder for your action:&lt;/p&gt;

&lt;p&gt;.github/actions/terraform/&lt;/p&gt;

&lt;p&gt;Inside the folder, create a file named action.yml.&lt;/p&gt;
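&lt;p&gt;The resulting layout looks like this:&lt;/p&gt;

```
.github/
└── actions/
    └── terraform/
        └── action.yml
```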

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Define the Composite Action&lt;/p&gt;

&lt;p&gt;Here’s the action.yml file for the Terraform composite action:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform Operations
description: Executes Terraform commands like init, fmt, validate, and plan
inputs:
  working_directory:
    description: Terraform working directory
    required: true
  gcp_project:
    description: Target GCP project
    required: true
  apply_changes:
    description: Whether to apply changes
    required: false
    default: "false"
runs:
  using: "composite"
  steps:
    - name: Terraform Init
      run: terraform init -backend-config="bucket=${{ inputs.gcp_project }}-tfstate"
      working-directory: ${{ inputs.working_directory }}
      shell: bash

    - name: Terraform Format
      run: terraform fmt
      working-directory: ${{ inputs.working_directory }}
      shell: bash

    - name: Terraform Validate
      run: terraform validate
      working-directory: ${{ inputs.working_directory }}
      shell: bash

    - name: Terraform Plan
      run: terraform plan -var-file="${{ inputs.gcp_project }}.tfvars" -out=tfplan
      working-directory: ${{ inputs.working_directory }}
      shell: bash

    - name: Terraform Apply
      if: ${{ inputs.apply_changes == 'true' }}
      run: terraform apply -auto-approve tfplan
      working-directory: ${{ inputs.working_directory }}
      shell: bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This composite action supports inputs like the working directory and target GCP project, making it adaptable to different projects.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Using the Composite Action in a Workflow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once your composite action is defined, you can use it in your workflows like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform Workflow

on:
  push:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Terraform Operations
        uses: ./.github/actions/terraform
        with:
          working_directory: terraform
          gcp_project: my-gcp-project
          apply_changes: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  How This Simplifies Your Workflow
&lt;/h2&gt;

&lt;p&gt;• Reduced Duplication: The Terraform logic is centralized in the composite action.&lt;br&gt;
• Improved Readability: The main workflow is concise and easy to follow.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating Google Cloud Resource Management with GitHub Actions and Workload Identity Federation</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Sat, 16 Nov 2024 13:21:47 +0000</pubDate>
      <link>https://dev.to/samnash/automating-google-cloud-resource-management-with-github-actions-and-workload-identity-federation-5704</link>
      <guid>https://dev.to/samnash/automating-google-cloud-resource-management-with-github-actions-and-workload-identity-federation-5704</guid>
      <description>&lt;p&gt;In this blog post, we’ll demonstrate how to leverage GitHub Actions (GHA) and Google Workload Identity Federation (WIF) to securely authenticate and create resources on Google Cloud Platform (GCP) using Terraform.&lt;/p&gt;

&lt;p&gt;We’ll use two GitHub repositories and two GCloud projects:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main GitHub Repository&lt;/strong&gt;:&lt;br&gt;
Contains the Terraform code defining the infrastructure to be provisioned in the Target Project.&lt;br&gt;
Triggers the GitHub Actions workflow upon code changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow Repository&lt;/strong&gt;:&lt;br&gt;
Hosts the GitHub Actions workflow that automates the deployment process.&lt;br&gt;
Authenticates to the Host Project using Workload Identity Federation (WIF).&lt;br&gt;
Executes Terraform commands to apply the infrastructure changes to the Target Project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Host Google Cloud Project&lt;/strong&gt;:&lt;br&gt;
Serves as the landing zone for the GitHub Actions workflow.&lt;br&gt;
Configured with WIF, utilises a service account that’s been granted the necessary permissions to access the Target Project.&lt;br&gt;
Does not directly host any infrastructure resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Target Google Cloud Project&lt;/strong&gt;:&lt;br&gt;
Receives the infrastructure changes from the workflow.&lt;br&gt;
Hosts the provisioned resources defined in the Terraform code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcimkhaxgy6aab63vrdlc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcimkhaxgy6aab63vrdlc.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive into the setup!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Setting Up Google Workload Identity Federation&lt;br&gt;
1.1 - On your Host Project, create a Google Cloud service account:&lt;br&gt;
Navigate to the Google Cloud Console.&lt;br&gt;
Go to IAM &amp;amp; Admin &amp;gt; Service Accounts.&lt;br&gt;
Create a new service account with the necessary roles for managing your GCP resources.&lt;br&gt;
1.2 - Configure Workload Identity Federation:&lt;br&gt;
Go to IAM &amp;amp; Admin &amp;gt; Workload Identity Federation.&lt;br&gt;
Create a Workload Identity Pool and a Provider linked to your GitHub repository.&lt;br&gt;
Follow Google’s official WIF setup guide for detailed instructions. More in a separate blog post soon.&lt;/p&gt;
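&lt;p&gt;For reference, the core of that setup can be sketched with gcloud as follows (the pool and provider names, the attribute mapping, and the &lt;code&gt;PROJECT_NUMBER&lt;/code&gt; placeholder are illustrative; adapt them from the official guide):&lt;/p&gt;

```shell
# Create a workload identity pool in the Host Project
gcloud iam workload-identity-pools create github-pool \
    --project=host_gcp_project \
    --location=global \
    --display-name="GitHub Actions pool"

# Add an OIDC provider for GitHub to that pool
gcloud iam workload-identity-pools providers create-oidc github-provider \
    --project=host_gcp_project \
    --location=global \
    --workload-identity-pool=github-pool \
    --issuer-uri="https://token.actions.githubusercontent.com" \
    --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository"

# Let workflows from your GitHub repository impersonate the service account
gcloud iam service-accounts add-iam-policy-binding \
    service-acct-name@host_gcp_project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/github-pool/attribute.repository/org_name/google_cloud"
```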

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Permissions&lt;br&gt;
2.1 - Ensure that the Target GCP project has a Terraform state bucket (e.g., project_id-tfstate).&lt;br&gt;
2.2 - Assign the relevant roles (roles/editor and roles/storage.admin) to the service account of the Host Project so it can create and edit resources in the Target Google Cloud project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  gcloud projects add-iam-policy-binding gcp_project_name \
    --member="serviceAccount:service-acct-name@host_gcp_project.iam.gserviceaccount.com" \
    --role="roles/editor"

  gcloud projects add-iam-policy-binding gcp_project_name \
    --member="serviceAccount:service-acct-name@host_gcp_project.iam.gserviceaccount.com" \
    --role="roles/storage.admin"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Configure the Main Repo&lt;br&gt;
This repository (e.g., org_name/google_cloud) will contain your Terraform code to manage GCP resources.&lt;/p&gt;

&lt;p&gt;Example main.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  project = var.project
  region  = var.region
}

resource "google_storage_bucket" "bucket" {
  name     = "${var.project}-bucket"
  location = var.region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Add a Workflow to the Main Repo&lt;/p&gt;

&lt;p&gt;Create a GitHub Actions workflow (e.g., .github/workflows/dispatch.yml) in the main repository. This workflow is triggered by events such as push, pull request, or tag creation. When triggered, it makes a GitHub API call to initiate a workflow in another repository (referred to as the Workflow Repo).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Configure the Workflow Repo&lt;/p&gt;

&lt;p&gt;In the Workflow Repository (Workflow Repo), set up a GitHub Actions workflow dedicated to running Terraform-specific tasks, such as generating a Terraform plan and applying it to the infrastructure. For example, you can name this file .github/workflows/terraform_plan_apply_gcp.yml. Let’s refer to it as the Terraform Workflow.&lt;/p&gt;

&lt;p&gt;How It Works&lt;/p&gt;

&lt;p&gt;1. Triggering Event:&lt;br&gt;
This Terraform Workflow is designed to be triggered by repository_dispatch events. These events are custom webhook events sent by the main repository via a GitHub API call. When a repository_dispatch event occurs with a specific type, it starts this workflow.&lt;br&gt;
2. Event Types:&lt;br&gt;
You can define specific event types that the workflow should listen to. In this example, the workflow listens for two custom event types:&lt;br&gt;
    • terraform_plan: Used to trigger the workflow for generating a Terraform plan.&lt;br&gt;
    • terraform_apply: Used to trigger the workflow for applying the Terraform changes.&lt;br&gt;
3. Workflow Configuration:&lt;br&gt;
Add the following YAML configuration to define the trigger mechanism in the Terraform Workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  repository_dispatch:
    types: [terraform_plan, terraform_apply]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;• The on: repository_dispatch block specifies that this workflow listens for repository_dispatch events.&lt;br&gt;
• The types field restricts the workflow to respond only to the specified event types (terraform_plan and terraform_apply).&lt;/p&gt;

&lt;p&gt;4. Integration with Main Repository:&lt;br&gt;
In the main repository, another workflow (e.g., .github/workflows/dispatch.yml) sends these repository_dispatch events using a GitHub API call. The event payload includes the event type (e.g., terraform_plan) and any additional data required by the Terraform Workflow.&lt;br&gt;
5. Example Workflow Behavior:&lt;br&gt;
• If the main repository triggers a repository_dispatch event with the type terraform_plan, the Terraform Workflow starts and runs tasks related to generating a Terraform plan.&lt;br&gt;
• Similarly, if the event type is terraform_apply, the workflow executes the Terraform apply process to update the infrastructure.&lt;/p&gt;

&lt;p&gt;This mechanism decouples the triggering mechanism in the main repository from the Terraform-specific operations in the Workflow Repo, enabling modular and maintainable workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenarios to demonstrate how the Main Repo Workflow Operates&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Scenario 1&lt;/strong&gt;: Push with a Tag&lt;br&gt;
A developer pushes Terraform code to a feature branch and adds a tag (e.g., gcp_project_TFPLAN_01).&lt;br&gt;
The dispatch.yml workflow triggers when a tag matching a specific pattern is pushed (e.g., '[a-z]+-[a-z]+_TFPLAN_[0-9]+').&lt;br&gt;
The workflow determines the Terraform action (e.g., plan) and triggers the Workflow Repo's Terraform workflow via a GitHub API repository_dispatch event.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2&lt;/strong&gt;: Pull Request Creation&lt;br&gt;
When a pull request is opened from a feature branch to develop, the workflow sends a repository_dispatch event with details such as:&lt;br&gt;
event_type: terraform_plan&lt;br&gt;
GCP project name &amp;amp; PR metadata (e.g., PR number, status=opened, merged=false).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3&lt;/strong&gt;: Pull Request Merge&lt;br&gt;
When a pull request is merged, the workflow sends a repository_dispatch event with:&lt;br&gt;
event_type: terraform_apply&lt;br&gt;
GCP project name &amp;amp; PR metadata (e.g., PR number, status=closed, merged=true).&lt;/p&gt;

&lt;p&gt;Full workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This GitHub Actions workflow is designed to trigger a Terraform workflow based on specific events.

name: Trigger Terraform Workflow

on:
  push:
    tags:
      - '[a-z]+-[a-z]+_PLAN_[0-9]+'
    # branches:
    #   - 'feature/*'
  pull_request:
    types: [opened, synchronize, closed]
    branches:
      - develop

jobs:
  trigger:
    runs-on: ubuntu-latest
    if: &amp;gt;
      github.event_name == 'push' || 
      (github.event_name == 'pull_request' &amp;amp;&amp;amp; 
        (github.event.action == 'opened' || 
          github.event.action == 'synchronize' || 
          (github.event.action == 'closed' &amp;amp;&amp;amp; github.event.pull_request.merged == true)))

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Debug step to print the GITHUB_REF
      - name: Retrieve GitHub Data
        if: github.event_name == 'push' || github.event_name == 'pull_request' 
        run: |
          echo "GITHUB_REF=${GITHUB_REF}"
          TAG_NAME=${GITHUB_REF#refs/tags/}
          echo "TAG_NAME=$TAG_NAME" &amp;gt;&amp;gt; $GITHUB_ENV

      - name: Print PR Information
        if: github.event_name == 'pull_request'
        run: |
          # Get the latest TAG Name from the PR source branch
          SOURCE_BRANCH="${{ github.event.pull_request.head.ref }}"
          git fetch --tags

          # Get latest tag
          TAG_NAME=$(git tag --sort=-creatordate | head -n 1)

          # Output the tag
          if [ -z "$TAG_NAME" ]; then
            echo "No tags found on the source branch: $SOURCE_BRANCH"
          else
            echo "The latest tag on the source branch ($SOURCE_BRANCH) is: $TAG_NAME"
            echo "TAG_NAME=$TAG_NAME" &amp;gt;&amp;gt; $GITHUB_ENV
          fi
          echo "PR Number: ${{ github.event.pull_request.number }}"
          echo "PR Action: ${{ github.event.action }}"
          echo "PR Merged: ${{ github.event.pull_request.merged }}"

      - name: Extract Information from Tag or PR
        id: extract_info
        run: |
          GCP_PROJECT=$(echo $TAG_NAME | cut -d'_' -f1)
          echo "GCP_PROJECT=$GCP_PROJECT" &amp;gt;&amp;gt; $GITHUB_ENV
          echo "The Tag Name is: $TAG_NAME"
          echo "The Target GCP Project is: $GCP_PROJECT"
          if [[ "${{ github.event_name }}" == "push" ]]; then
            ACTION="plan"
          elif [[ "${{ github.event_name }}" == "pull_request" ]]; then
            if [[ "${{ github.event.action }}" == "closed" &amp;amp;&amp;amp; "${{ github.event.pull_request.merged }}" == "true" ]]; then
              ACTION="apply"
            else
              ACTION="plan"
            fi
          fi
          echo "ACTION=$ACTION" &amp;gt;&amp;gt; $GITHUB_ENV
          echo "The Terraform Action is: $ACTION"

      - name: Trigger Terraform Workflow for Push Commits
        if: github.event_name == 'push'
        run: |
          echo "This was triggered as a result of the Event: ${{ github.event_name }} commits"
          curl -X POST \
            -H "Accept: application/vnd.github.v3+json" \
            -H "Authorization: Bearer ${{ secrets.GH_DISPATCH_PAT }}" \
            https://api.github.com/repos/${{ github.repository }}/dispatches \
            -d "{\"event_type\":\"terraform_${{ env.ACTION }}\", \"client_payload\": {\"repository\": \"${{ github.repository }}\", \"project_name\": \"${{ env.GCP_PROJECT }}\", \"tag_name\": \"${{ env.TAG_NAME }}\"}}"

      - name: Trigger Terraform Workflow for PR
        if: github.event_name == 'pull_request'
        run: |
          echo "This was triggered as a result of PR Number: ${{ github.event.pull_request.number }} being ${{ github.event.action }}"
          curl -X POST \
            -H "Accept: application/vnd.github.v3+json" \
            -H "Authorization: Bearer ${{ secrets.GH_DISPATCH_PAT }}" \
            https://api.github.com/repos/${{ github.repository }}/dispatches \
            -d "{\"event_type\":\"terraform_${{ env.ACTION }}\", \"client_payload\": {\"repository\": \"${{ github.repository }}\", \"pr_number\": \"${{ github.event.pull_request.number }}\", \"pr_event\": \"${{ github.event.action }}\", \"pr_merged\": \"${{ github.event.pull_request.merged  }}\", \"project_name\": \"${{ env.GCP_PROJECT }}\", \"action\": \"${{ env.ACTION }}\"}}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How the Terraform Workflow Operates&lt;/strong&gt;&lt;br&gt;
This workflow is hosted in the Workflow Repo and is triggered by repository dispatch events. Below, we break down each step with detailed explanations and corresponding code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Define Workflow Triggers&lt;br&gt;
The workflow listens for certain event types:&lt;/p&gt;

&lt;p&gt;repository_dispatch: Triggered by the Main Repo for Terraform plan and apply actions. More on repository dispatch can be found on the &lt;a href="https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#repository_dispatch" rel="noopener noreferrer"&gt;Official GitHub Documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform CI/CD

on:
  workflow_dispatch:
  repository_dispatch:
    types: [terraform_plan, terraform_apply]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Set Up the Terraform Job&lt;br&gt;
Define a Terraform job that runs on &lt;code&gt;ubuntu-latest&lt;/code&gt; with relevant permissions for GitHub Actions to interact with the repository and pull requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  terraform:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
      pull-requests: write
      repository-projects: write
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Set Environment Variables&lt;br&gt;
Capture the payload data from the triggering event(the main workflow) and set them as environment variables for subsequent steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Set ENV variables
  run: |
    echo "TARGET_GCP_PROJECT=${{ github.event.client_payload.project_name }}" &amp;gt;&amp;gt; $GITHUB_ENV
    echo "CLOUDSDK_CORE_PROJECT=${{ github.event.client_payload.project_name }}" &amp;gt;&amp;gt; $GITHUB_ENV
    echo "GH_REPOSITORY=${{ github.event.client_payload.repository }}" &amp;gt;&amp;gt; $GITHUB_ENV
    GH_REPO_NAME=$(echo "${{ github.event.client_payload.repository }}" | cut -d'/' -f2)
    echo "GH_REPO_NAME=${GH_REPO_NAME}" &amp;gt;&amp;gt; $GITHUB_ENV       
    echo "GH_PR_NUMBER=${{ github.event.client_payload.pr_number }}" &amp;gt;&amp;gt; $GITHUB_ENV
    echo "GH_PR_EVENT=${{ github.event.client_payload.pr_event }}" &amp;gt;&amp;gt; $GITHUB_ENV
    echo "GH_PR_MERGED=${{ github.event.client_payload.pr_merged }}" &amp;gt;&amp;gt; $GITHUB_ENV
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Checkout the Code&lt;br&gt;
Clone the Main Repo containing the Terraform code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Checkout code
  uses: actions/checkout@v2
  with:
    repository: ${{ env.GH_REPOSITORY }}
    token: ${{ secrets.GITHUB_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Set Up Terraform&lt;br&gt;
Set up Terraform to run commands such as init, plan, and apply.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Set up Terraform
  uses: hashicorp/setup-terraform@v3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;: Authenticate to Google Cloud&lt;br&gt;
Use Workload Identity Federation to securely authenticate with Google Cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Authenticate to Google Cloud
  id: authenticate
  uses: google-github-actions/auth@v2
  with:
    create_credentials_file: true
    workload_identity_provider: ${{ secrets.GCP_WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7&lt;/strong&gt;: Initialize Terraform&lt;br&gt;
Set the GCP project and initialize Terraform with the remote backend configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Terraform Init
  id: init
  run: terraform init -backend-config="bucket=$TARGET_GCP_PROJECT-tfstate"
  working-directory: ${{ env.TF_WORKING_DIR }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 8&lt;/strong&gt;: Validate Terraform Code&lt;br&gt;
Ensure the Terraform configuration is correctly formatted and syntactically valid.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Terraform Format
  id: fmt
  run: terraform fmt
  working-directory: ${{ env.TF_WORKING_DIR }}

- name: Terraform Validate
  id: validate
  run: terraform validate
  working-directory: ${{ env.TF_WORKING_DIR }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 9&lt;/strong&gt;: Generate Terraform Plan&lt;br&gt;
Generate a plan and output the details for review.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Terraform Plan
  id: plan
  run: terraform plan -var-file="$TARGET_GCP_PROJECT.tfvars" -out=tfplan
  working-directory: ${{ env.TF_WORKING_DIR }}

- run: terraform show -no-color tfplan
  id: show
  working-directory: ${{ env.TF_WORKING_DIR }}
## We will use the output of terraform show to write the plan as a comment to the pull request
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 10&lt;/strong&gt;: Comment on Pull Requests&lt;br&gt;
If triggered by a pull request, post the plan as a comment for review.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: PR Comment
  uses: actions/github-script@v7
  if: github.event.action == 'terraform_plan' &amp;amp;&amp;amp; ( env.GH_PR_EVENT == 'opened' || env.GH_PR_EVENT == 'synchronize' )
  env:
    PLAN: "terraform\n${{ steps.show.outputs.stdout }}"
  with:
    github-token: ${{ secrets.GH_PAT }}
    script: |
      const { data: comments } = await github.rest.issues.listComments({
        owner: context.repo.owner,
        repo: process.env.GH_REPO_NAME,
        issue_number: process.env.GH_PR_NUMBER,
      })
      const botComment = comments.find(comment =&amp;gt; comment.body.includes('Terraform Format and Style'))
      const output = `#### Terraform Plan\n\`\`\`\n${process.env.PLAN}\n\`\`\``

      if (botComment) {
        github.rest.issues.updateComment({
          owner: context.repo.owner,
          repo: process.env.GH_REPO_NAME,
          comment_id: botComment.id,
          body: output
        })
      } else {
        github.rest.issues.createComment({
          owner: context.repo.owner,
          repo: process.env.GH_REPO_NAME,
          issue_number: process.env.GH_PR_NUMBER,
          body: output
        })
      }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 11&lt;/strong&gt;: Apply the Terraform Plan&lt;br&gt;
If the event is terraform_apply, apply the plan to create resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Terraform Apply
  if: github.event.action == 'terraform_apply'
  id: apply
  run: terraform apply -auto-approve tfplan
  working-directory: ${{ env.TF_WORKING_DIR }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By integrating GitHub Actions and Google Workload Identity Federation, you can establish a secure, automated CI/CD pipeline for managing GCP resources using Terraform. This approach ensures that Terraform plans are reviewed, validated, and applied only after thorough approval, enhancing both security and operational efficiency.&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>googlecloud</category>
      <category>github</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>BDD Style API Tests using Cypress and Cucumber</title>
      <dc:creator>sam-nash</dc:creator>
      <pubDate>Thu, 23 Feb 2023 00:28:36 +0000</pubDate>
      <link>https://dev.to/samnash/bdd-style-api-tests-using-cypress-and-cucumber-21i3</link>
      <guid>https://dev.to/samnash/bdd-style-api-tests-using-cypress-and-cucumber-21i3</guid>
      <description>&lt;p&gt;I am just here to beat Medium(the formatting sucks, you cant read more than 3 stories...).&lt;/p&gt;

&lt;p&gt;I like BDD (it has a great syntax which anybody can understand, especially testers, business users, and product team members who aren't technical).&lt;/p&gt;

&lt;p&gt;But BDD shouldn't be used to automate end users' acceptance testing. The whole idea of acceptance testing is to let the customer (end user) look at, feel, and use the application as they normally would in the real world, not to automate their behaviour.&lt;/p&gt;

&lt;p&gt;BDD might be useful for Product Owners/Business Analysts, but again, is it worth the effort of an automation engineer to implement BDD for those one or two people?&lt;/p&gt;

&lt;p&gt;And especially for API testing, I firmly believe BDD is a &lt;strong&gt;NO&lt;/strong&gt;. I think a PO or BA can watch the developer or tester demo it in the Sprint review.&lt;/p&gt;

&lt;p&gt;BDD is GOOD, but I think it is overkill for API testing (or any backend tests). Still, we'll explore how to do it today.&lt;/p&gt;

&lt;p&gt;Recently, I have become a great fan of &lt;a href="https://www.cypress.io/" rel="noopener noreferrer"&gt;cypress.io&lt;/a&gt; and started to use it for almost everything.&lt;/p&gt;

&lt;p&gt;Cypress is an open-source end-to-end testing framework for web applications. It provides a fast, reliable, and easy-to-use testing experience, with an intuitive API and an extensive set of built-in commands.&lt;br&gt;
Cypress is great: there have been three major version upgrades in the last year alone, it's easy to use, it's open source, and community support keeps growing.&lt;/p&gt;

&lt;p&gt;Cucumber is a popular tool used for behavior-driven development (BDD). It allows you to define test cases in a human-readable format, which can be easily understood by non-technical stakeholders. By using Cucumber with Cypress, you can create tests that are easy to write, easy to read, and easy to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview of the Git Repo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/sam-nash/cypress-cucumber" rel="noopener noreferrer"&gt;git repo&lt;/a&gt; contains a sample project that demonstrates how to use Cypress with Cucumber. &lt;/p&gt;

&lt;p&gt;The tests are organized into feature files, which describe different aspects of the weather API. For example, the &lt;code&gt;stations.feature&lt;/code&gt; file contains test cases related to weather station creation, &amp;amp; we could add a &lt;code&gt;solarRadiation.feature&lt;/code&gt; file containing test cases related to the Solar Radiation API.&lt;/p&gt;

&lt;p&gt;Each feature file contains one or more scenarios, which describe a specific test case. Scenarios are written in Gherkin syntax, which is a human-readable format that allows you to describe the expected behavior of the web application. &lt;/p&gt;

&lt;h2&gt;
  
  
  Our API under Test
&lt;/h2&gt;

&lt;p&gt;We'll make use of the &lt;a href="https://openweathermap.org/stations" rel="noopener noreferrer"&gt;OpenWeatherMap Stations API&lt;/a&gt; for our tests.&lt;/p&gt;

&lt;p&gt;The API exposes the following methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;POST (to register a station)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PUT (to update the information about a station)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GET (to retrieve all the stations, or a specific station using the stationId)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DELETE (to delete a station that was created by the current user)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Generate an API Key
&lt;/h3&gt;

&lt;p&gt;OpenWeather uses an API key for authentication.&lt;br&gt;
To generate and use the apiKey in your tests, register online at OpenWeather and generate an API key.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up and Install
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If you want to skip the step-by-step installation and setup, I recommend cloning this &lt;a href="https://github.com/sam-nash/cypress-cucumber" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once cloned, cd into the cloned directory and run &lt;code&gt;npm install&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Browse through some of the tests I have already written that perform POST + GET operations and verify the response.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cypress configuration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The default Cypress configuration &lt;code&gt;cypress.config.js&lt;/code&gt; has the basics we need to test the API + reporting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additionally, create a &lt;code&gt;cypress.env.json&lt;/code&gt; file at the root of your project folder &amp;amp; update it like below. Replace &lt;code&gt;apiKey&lt;/code&gt; with the key that was generated previously.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;example : &lt;br&gt;
&lt;code&gt;{ "appid": "&amp;lt;apiKey&amp;gt;" }&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; : Let this &lt;code&gt;cypress.env.json&lt;/code&gt; file be local to your machine &amp;amp; gitignored. It is a best practice to generate this dynamically or store this as an env var in the CI server.&lt;/p&gt;
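
&lt;p&gt;To keep the key out of version control, an entry like the following in your &lt;code&gt;.gitignore&lt;/code&gt; (assuming the file sits at the project root) does the trick:&lt;/p&gt;

```
# local secrets - never commit
cypress.env.json
```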

&lt;h2&gt;
  
  
  Cypress custom commands
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As we might make use of cy.request() multiple times with different methods, I've created custom commands in &lt;code&gt;cypress/support/commands.js&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This makes the tests user friendly (for users who don't want to be bothered with the technical details of how to make requests). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It also improves the readability of the code.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
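
&lt;p&gt;As a rough sketch of what such a command wraps (the endpoint and &lt;code&gt;appid&lt;/code&gt; query parameter follow the OpenWeather docs, but the helper name here is hypothetical, so check &lt;code&gt;commands.js&lt;/code&gt; in the repo for the real one), the command essentially builds an options object and hands it to &lt;code&gt;cy.request()&lt;/code&gt;:&lt;/p&gt;

```javascript
// Hypothetical sketch: the options object a createStation custom command
// might pass to cy.request(). Adjust to match the repo's actual command.
function buildCreateStationRequest(apiKey, station) {
  return {
    method: 'POST',
    url: 'https://api.openweathermap.org/data/3.0/stations?appid=' + apiKey,
    body: station,
    failOnStatusCode: false, // let the test assert the status itself
  };
}

const req = buildCreateStationRequest('demo-key', { external_id: 'DEMO_TEST001' });
console.log(req.method, req.url);
```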

&lt;h2&gt;
  
  
  To Run the tests
&lt;/h2&gt;

&lt;p&gt;From your project directory, type this command on the console and hit return:&lt;br&gt;
&lt;code&gt;npm run cucumberTest&lt;/code&gt;&lt;/p&gt;
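
&lt;p&gt;&lt;code&gt;cucumberTest&lt;/code&gt; is just an npm script; its definition lives in &lt;code&gt;package.json&lt;/code&gt; and looks roughly like this (the exact command and flags in the repo may differ):&lt;/p&gt;

```json
{
  "scripts": {
    "cucumberTest": "cypress run --spec 'cypress/e2e/**/*.feature'"
  }
}
```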

&lt;p&gt;You should see the tests running and pass. You should also see the test results printed on the console. &lt;/p&gt;

&lt;p&gt;If you'd like to see an HTML report, navigate to &lt;code&gt;cypress/reports&lt;/code&gt; and open &lt;code&gt;index.html&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;So what actually happened here?&lt;/p&gt;

&lt;p&gt;The tests rely on two files: the &lt;code&gt;Feature&lt;/code&gt; and the &lt;code&gt;spec (implementation)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Feature&lt;/strong&gt; file is the file which contains the &lt;strong&gt;Feature&lt;/strong&gt; &amp;amp; the associated &lt;strong&gt;scenarios&lt;/strong&gt; that are being tested in the &lt;code&gt;Given&lt;/code&gt;  &lt;code&gt;When&lt;/code&gt;  &lt;code&gt;Then&lt;/code&gt;  &lt;code&gt;And&lt;/code&gt; format.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Spec&lt;/strong&gt; file is the implementation of the feature file. This is where Cypress code resides that makes API calls and uses assertions to verify the response received.&lt;/p&gt;

&lt;p&gt;The Given, When, Then &amp;amp; And sections in the &lt;strong&gt;Feature&lt;/strong&gt; file have matching Given, When, Then &amp;amp; And code blocks in the &lt;strong&gt;Spec&lt;/strong&gt; file.&lt;/p&gt;

&lt;p&gt;Feature = 'saying'&lt;br&gt;
Spec = 'doing'&lt;/p&gt;

&lt;h3&gt;
  
  
  Construct - Feature/Test Organization
&lt;/h3&gt;

&lt;p&gt;a. All feature files must be created with the file name convention &lt;code&gt;name.feature&lt;/code&gt; under the cypress/e2e folder [example: stations.feature] &lt;br&gt;
b. Create a corresponding folder under e2e that has the same name as the feature above (without '.feature') [example: stations] &lt;br&gt;
c. Create/add your Cypress spec/feature implementation under this folder [example: /e2e/stations/apiTest.cy.js]&lt;br&gt;
d. To add your own scenarios and tests, edit the feature file and add more API test scenarios using the Given, When, Then &amp;amp; And construct.&lt;br&gt;
e. Then add the matching tests to &lt;code&gt;apiTest.cy.js&lt;/code&gt;&lt;/p&gt;
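
&lt;p&gt;Put together, steps a-c produce a layout like this (using the stations example):&lt;/p&gt;

```
cypress/
  e2e/
    stations.feature
    stations/
      apiTest.cy.js
```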

&lt;h2&gt;
  
  
  Approaches
&lt;/h2&gt;

&lt;p&gt;To demonstrate API Testing with Cypress &amp;amp; Cucumber, I have added tests that make use of two approaches as described below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
DATATABLE - The feature is in the stations.feature file, which holds the test data in tabular form (to test multiple data combos), and the Cypress spec is in e2e/stations/apiTest.cy.js, which has the implementation. The spec loops through the data to POST, verifies the response code, &amp;amp; stores each stationId in an array. It then sends a GET request for each stationId &amp;amp; verifies that the response matches the original request that was previously sent in the POST call.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For any &lt;strong&gt;Scenario&lt;/strong&gt; :&lt;br&gt;
&lt;u&gt;&lt;strong&gt;GIVEN&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the Given section of the feature file we pass a table as the data.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Given I have the following station data to post:
            | external_id  | name                       | latitude | longitude | altitude |
            | DEMO_TEST001 | Team Demo Test Station 001 | 33.33    | -122.43   | 222      |
            | DEMO_TEST002 | Team Demo Test Station 002 | 44.44    | -122.44   | 111      |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the Given section of the spec (implementation) file, we pass the above datatable as a parameter &amp;amp; use the Cucumber method &lt;code&gt;hashes()&lt;/code&gt; on the &lt;code&gt;dataTable&lt;/code&gt; object to get an array of objects, which we then transform with &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map" rel="noopener noreferrer"&gt;Array.prototype.map&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Given('I have the following station data to post:', (dataTable) =&amp;gt; {
  // Convert the data table to an array of objects with header keys
  requestData = dataTable.hashes().map((row) =&amp;gt; {
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
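
&lt;p&gt;If you haven't used Cucumber's &lt;code&gt;hashes()&lt;/code&gt; before, this self-contained sketch (a simplified re-implementation for illustration, not the cucumber library itself) shows the shape of data it produces from a table like the one above:&lt;/p&gt;

```javascript
// Simplified re-implementation of dataTable.hashes(): the first row is
// the header, and each following row becomes an object keyed by it.
function hashes(rows) {
  const header = rows[0];
  return rows.slice(1).map((cells) => {
    const obj = {};
    header.forEach((key, i) => {
      obj[key] = cells[i];
    });
    return obj;
  });
}

const table = [
  ['external_id', 'name', 'latitude'],
  ['DEMO_TEST001', 'Team Demo Test Station 001', '33.33'],
];
console.log(hashes(table));
// [{ external_id: 'DEMO_TEST001', name: 'Team Demo Test Station 001', latitude: '33.33' }]
```

Note that every value comes back as a string, which is exactly why the conversion step below is needed.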



&lt;ul&gt;
&lt;li&gt;
We then convert string representations of numbers and floats to their respective types. If we don't do this, the numeric and float values will remain strings. (We can't send latitude as a string to the API.)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    Object.keys(row).forEach((key) =&amp;gt; {
      const value = row[key];
      if (!isNaN(value)) {
        if (value.includes('.')) {
          row[key] = parseFloat(value);
        } else {
          row[key] = parseInt(value, 10);
        }
      }
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
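
&lt;p&gt;The conversion step can be pulled out into a standalone helper; here is a runnable sketch of the same logic (the function name is mine, not the repo's):&lt;/p&gt;

```javascript
// Numeric-looking strings from the Gherkin table become real numbers,
// so the API receives latitude as 33.33 rather than "33.33".
function coerceNumbers(row) {
  const out = Object.assign({}, row);
  Object.keys(out).forEach((key) => {
    const value = out[key];
    if (typeof value !== 'string') return;
    if (value.trim() === '' || isNaN(value)) return;
    out[key] = value.includes('.') ? parseFloat(value) : parseInt(value, 10);
  });
  return out;
}

const converted = coerceNumbers({
  external_id: 'DEMO_TEST001',
  latitude: '33.33',
  altitude: '222',
});
console.log(converted);
// { external_id: 'DEMO_TEST001', latitude: 33.33, altitude: 222 }
```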



&lt;ul&gt;
&lt;li&gt;Then we wrap this array of objects as an alias for later use.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cy.wrap(requestData).as('requestData');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;&lt;strong&gt;WHEN&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the feature file we state the expected outcome of the POST call by giving the HTTP status code as a number.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;When I post the station data request body to the Create Stations API, then I receive a response status code 201 and an unique alphanumeric stationId in the response

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the spec file, we make a POST call using the Cypress command we developed for this (the command basically sends a cy.request() with the required parameters like method, request body, and apiKey).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;When(
  'I post the station data request body to the Create Stations API, then I receive a response status code {int} and an unique alphanumeric stationId in the response',
  (statusCode) =&amp;gt; {
    cy.get('@requestData').each((request) =&amp;gt; {
      cy.createStation(apiKey, request).then((response) =&amp;gt; {
        //create an alias that stores the response received from the api.
        expect(response.status).to.eq(statusCode); //verify the http status code
        expect(response.body.ID).to.match(/^[a-z0-9]+$/i);
        //store the stationId for later use to retrieve the station for verification
        stationIds.push(response.body.ID);
      });
    });
  }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We then perform assertions on the statusCode that was passed as a parameter to When.&lt;br&gt;
Note that we passed the actual status code as &lt;strong&gt;201&lt;/strong&gt; in the feature, but in the implementation we specified it as {int}. This tells the spec what to read from this position in the feature file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We then store the &lt;code&gt;stationIds&lt;/code&gt; for later use to make the GET call.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
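
&lt;p&gt;The ID assertion is worth a second look; here is the pattern used in the When step above, checked on its own (the sample IDs are made up):&lt;/p&gt;

```javascript
// Case-insensitive alphanumeric check: OpenWeather station IDs contain
// only letters and digits, so anything else should fail the assertion.
const idPattern = /^[a-z0-9]+$/i;

console.log(idPattern.test('5ed2118eaa8ae2001b2b3c15')); // true
console.log(idPattern.test('bad-id!')); // false: '-' and '!' are rejected
```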

&lt;p&gt;&lt;u&gt;&lt;strong&gt;THEN&lt;/strong&gt;&lt;/u&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Finally, in the Then section we make another API GET call for each station that was previously extracted and stored in an array &amp;amp; perform the required assertions.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Then(
  'When the unique stationId is queried using a GET request, the api returns a {int} status code',
  (statusCode) =&amp;gt; {
    //Call the GET API with the stationId created using the POST Request &amp;amp; store its response in an alias
    stationIds.forEach((stationId) =&amp;gt; {
      cy.getStation(stationId).then((response) =&amp;gt; {
        expect(response.status).to.eq(statusCode);
      });
    });
  }
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
Without DATA TABLE - The feature is in the apiTests.feature &amp;amp; its implementation in e2e/apiTests/apiTests.js. Here the scenarios have been separated out, with one POST request per test data set and corresponding GET requests that verify the data.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The ONLY intention in separating it this way is to demonstrate that an API request body can be added in the scenario's Given or When.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When it comes to reporting, in the former approach, the test report displays the various tests(for each test data combo) under the same scenario and in the latter, the report displays the various tests/scenarios separately.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy Testing!&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
  </channel>
</rss>
