<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kawi Neal</title>
    <description>The latest articles on DEV Community by Kawi Neal (@kawineal).</description>
    <link>https://dev.to/kawineal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F465828%2Fd69f58e6-ead2-4f12-b022-833825130f10.jpg</url>
      <title>DEV Community: Kawi Neal</title>
      <link>https://dev.to/kawineal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kawineal"/>
    <language>en</language>
    <item>
      <title>Infrastructure as Code: Build &amp; Test GCP Load Balancer with Terraform</title>
      <dc:creator>Kawi Neal</dc:creator>
      <pubDate>Fri, 20 Nov 2020 22:39:57 +0000</pubDate>
      <link>https://dev.to/kawineal/build-test-gcp-http-load-balancer-with-terraform-3oak</link>
      <guid>https://dev.to/kawineal/build-test-gcp-http-load-balancer-with-terraform-3oak</guid>
      <description>&lt;p&gt;The primary goal of this post is to  :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Describe the configuration &amp;amp; infrastructure build-out and testing of a Google Cloud Platform (GCP) HTTP Load Balancer using &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Hashicorp Terraform&lt;/a&gt;, an open-source "Infrastructure as Code" (IaC) tool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a high-level overview of Terraform and highlight a number of key elements of Hashicorp's Configuration Language (HCL) used in configuring the resources for deploying the HTTP Load Balancer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Google Cloud (GCP) load balancing is implemented at the edge of the GCP network, distributing incoming network traffic across multiple virtual machine (VM) instances. This allows your network traffic to be distributed &amp;amp; load balanced across one or more regions close to your users.&lt;/p&gt;

&lt;p&gt;Some of the features offered by GCP Load Balancing are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic intelligent autoscaling of your backends based on CPU utilization, load capacity &amp;amp; monitoring metrics.&lt;/li&gt;
&lt;li&gt;Traffic routing to the closest virtual instance.&lt;/li&gt;
&lt;li&gt;Global load balancing for when your applications are available across the world.&lt;/li&gt;
&lt;li&gt;High availability &amp;amp; redundancy, which means that if a component (e.g. a virtual instance) fails, it is automatically restarted or replaced.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;b&gt;Prerequisites / Setup&lt;/b&gt;
&lt;/h2&gt;

&lt;p&gt;This article assumes some familiarity with cloud computing infrastructure &amp;amp; resources, Infrastructure as Code (IaC) and Terraform. To set up your environment &amp;amp; create components you will need a Google account, access to &lt;a href="https://console.cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud Console&lt;/a&gt; and rights within that account to create and administer projects via the Google Console.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;b&gt;GCP SETUP&lt;/b&gt;
&lt;/h2&gt;

&lt;p&gt;Setup needed within GCP:&lt;br&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a project.&lt;/li&gt;
&lt;li&gt;Create a Service Account &amp;amp; associated key to allow Terraform to access the GCP project. We will grant the Service Account only the minimum permissions required for this effort.&lt;/li&gt;
&lt;li&gt;Create a storage bucket to store infrastructure state via Terraform.&lt;/li&gt;
&lt;li&gt;Add a public SSH key to GCP so that Terraform can connect to GCP via remote SSH with a private key.&lt;/li&gt;
&lt;/ol&gt;
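
&lt;p&gt;For readers who prefer the command line, the same setup can be sketched with the gcloud CLI. This is a hedged outline, not the path the article takes: the project, account and bucket names match the ones used in this article, and project/bucket IDs must be globally unique, so adjust to your own.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Create the project
gcloud projects create http-loadbalancer

# 2. Create the service account Terraform will use
gcloud iam service-accounts create terraform-account --project=http-loadbalancer

# Grant only the roles needed for this effort
gcloud projects add-iam-policy-binding http-loadbalancer \
    --member="serviceAccount:terraform-account@http-loadbalancer.iam.gserviceaccount.com" \
    --role="roles/compute.admin"
gcloud projects add-iam-policy-binding http-loadbalancer \
    --member="serviceAccount:terraform-account@http-loadbalancer.iam.gserviceaccount.com" \
    --role="roles/storage.admin"

# Generate the JSON key used by Terraform
gcloud iam service-accounts keys create http-loadbalancer.json \
    --iam-account=terraform-account@http-loadbalancer.iam.gserviceaccount.com

# 3. Create the bucket for remote state
gsutil mb -p http-loadbalancer gs://http-loadbalancer/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;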

&lt;h2&gt;
  
  
  Create Project
&lt;/h2&gt;

&lt;p&gt;Log into your Google account and use the URL below to create a project. For this effort we can name the project "http-loadbalancer". &lt;br&gt;&lt;br&gt;
&lt;a href="https://console.cloud.google.com/projectcreate" rel="noopener noreferrer"&gt;https://console.cloud.google.com/projectcreate&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_NewProject.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_NewProject.png" title="GCP Project Create" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Service Account
&lt;/h2&gt;

&lt;p&gt;Before we start creating infrastructure resources via Terraform we need to create a &lt;b&gt;Service Account&lt;/b&gt; via the Google Console. Service Accounts can be used by applications (e.g. Terraform) to make authorized API calls to create infrastructure resources. Service Accounts are not user accounts and do not have passwords associated with them. Instead, they are associated with private/public RSA key-pairs that are used for authentication to Google.&lt;br&gt;
Select your project, click the &lt;b&gt;IAM &amp;amp; Admin&lt;/b&gt; menu, then the &lt;b&gt;Service Accounts&lt;/b&gt; option, and click the &lt;b&gt;+ Create Service Account&lt;/b&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_1.png" title="Create Service Account-1" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Enter a name and description for the Service Account and click the &lt;b&gt;CREATE&lt;/b&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_2.png" title="Create Service Account-2" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Give the newly created Service Account project permissions. Add the roles below (Compute Admin &amp;amp; Storage Admin) and click the &lt;b&gt;CONTINUE&lt;/b&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_ProjectAccess.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_ProjectAccess.png" title="ServiceAccount ProjectAccess" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Next, generate the authentication key file (&lt;b&gt;JSON&lt;/b&gt;) that Terraform will use to log in to GCP. Click on the &lt;b&gt;Actions&lt;/b&gt; column as shown and select &lt;b&gt;Create key&lt;/b&gt;. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_CreateKey_Select.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_CreateKey_Select.png" title="ServiceAccount Create Key" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;
Select JSON, click on the &lt;b&gt;CREATE&lt;/b&gt; button, and a JSON file is downloaded to your computer. Rename the file to "http-loadbalancer.json" and store it in a secure folder for use later in our Terraform project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_PrivateKey_Saved.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateServiceAccount_PrivateKey_Saved.png" title="ServiceAccount Save Private Key" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Storage Bucket
&lt;/h2&gt;

&lt;p&gt;We will need to create a GCP storage bucket to support the remote state feature of Terraform backends. By default, Terraform stores infrastructure state locally in a file, &lt;code&gt;terraform.tfstate&lt;/code&gt;. We could have used local state for this effort; however, we are using remote state (a GCP storage bucket) to highlight this Terraform feature. With remote state enabled, Terraform writes the state (infrastructure) data to a remote data store. Remote state can be shared between team members and, depending on the provider, allows for locking &amp;amp; versioning. &lt;br&gt;&lt;br&gt;&lt;br&gt;
Click on the Storage menu in Google Console, or use the URL below, to create a storage bucket for the http-loadbalancer project.&lt;br&gt;
&lt;a href="https://console.cloud.google.com/storage/browser?project=http-loadbalancer" rel="noopener noreferrer"&gt;https://console.cloud.google.com/storage/browser?project=http-loadbalancer&lt;/a&gt; &lt;br&gt;&lt;br&gt;&lt;br&gt;
Click the &lt;b&gt;CREATE BUCKET&lt;/b&gt; menu, enter &lt;code&gt;http-loadbalancer&lt;/code&gt; for the bucket name and then click the &lt;b&gt;CREATE&lt;/b&gt; button to create a storage bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateStorageBucket_Create.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_CreateStorageBucket_Create.png" title="CreateStorage Bucket" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the bucket, if you select the &lt;code&gt;http-loadbalancer&lt;/code&gt; bucket and go to the &lt;b&gt;Permissions&lt;/b&gt; tab you should see the &lt;code&gt;terraform-account&lt;/code&gt; service account as a member with an Admin role for this storage bucket.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
In Google Console, from the Navigation menu (top left) select &lt;b&gt;Compute Engine&lt;/b&gt; to make sure the Compute Engine API is enabled for your project (http-loadbalancer).&lt;br&gt;
&lt;br&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  SSH Key
&lt;/h2&gt;

&lt;p&gt;If you don't already have an SSH key, you can use the following &lt;a href="https://confluence.atlassian.com/bitbucketserver/creating-ssh-keys-776639788.html" rel="noopener noreferrer"&gt;link&lt;/a&gt; to generate one. This will result in two files (e.g. &lt;b&gt;id_rsa&lt;/b&gt; &amp;amp; &lt;b&gt;id_rsa.pub&lt;/b&gt;). The contents of your xxxx.pub file need to be added to GCP, and the associated private key (&lt;b&gt;id_rsa&lt;/b&gt;) file needs to be stored for use later with Terraform.&lt;/p&gt;
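
&lt;p&gt;Generating the key pair can also be done directly with ssh-keygen; a sketch (the comment string is an example, use your own user name):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Creates id_rsa (private key) and id_rsa.pub (public key) in ~/.ssh
ssh-keygen -t rsa -b 4096 -C "kawi.neal" -f ~/.ssh/id_rsa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;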

&lt;p&gt;Within GCP , go to &lt;b&gt;Compute Engine → Metadata&lt;/b&gt; &lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_SSHKey_Metadata.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_SSHKey_Metadata.png" title="GCP Metadata" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the &lt;b&gt;SSH Keys&lt;/b&gt; tab and add the contents of your xxxx.pub (e.g. id_rsa.pub) file.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_SSHKey_Edit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_SSHKey_Edit.png" title="GCP SSH Keys" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;&lt;/p&gt;

&lt;p&gt;The user name associated with the key creation should be displayed in the user name column.&lt;/p&gt;

&lt;p&gt;The above steps should result in two files (the service account JSON &amp;amp; the SSH private key) that will need to be placed into the Terraform project once it has been downloaded.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;b&gt;Getting Started with Terraform on GCP &lt;/b&gt;
&lt;/h2&gt;


&lt;h2&gt;
  
  
  &lt;b&gt;Terraform Basics&lt;/b&gt;
&lt;/h2&gt;

&lt;p&gt;The HTTP Load Balancer could be manually configured and provisioned via the Google Console. We, however, want to take advantage of the key benefits that IaC (e.g. Terraform) provides with respect to provisioning and maintaining cloud infrastructure. We are essentially applying the same principles used for developing software applications to infrastructure definition and provisioning. These benefits include:&lt;br&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;b&gt;Reuse &amp;amp; Efficiency&lt;/b&gt; - Reliably rebuild any piece of infrastructure, reducing risk. With IaC, once you have created code to set up one environment (e.g. DEV), it can easily be configured to replicate another environment (QA/PROD). Code once and reuse many times (e.g. Terraform modules).
&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Version Control &amp;amp; Collaboration&lt;/b&gt; - Provides a history of changes &amp;amp; traceability when your infrastructure is managed via code. Allows internal teams to share code, and to apply the same policies to managing infrastructure that they would apply to code.
&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Validation&lt;/b&gt; - Allows for effective testing of components individually, or of entire systems, to support a specific workflow.
&lt;/li&gt;
&lt;li&gt;
&lt;b&gt;Documentation&lt;/b&gt; - Code/comments serve to document the infrastructure.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform is an IaC tool for provisioning, updating and managing infrastructure via the Hashicorp Configuration Language (HCL). HCL is a declarative language: you specify (declare) the end state and Terraform executes a plan to build out that infrastructure. Using provider plug-ins, Terraform supports multiple cloud environments (AWS, Google, Azure &amp;amp; many more). The HCL language &amp;amp; core concepts are applicable to all providers and do not change per provider.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;b&gt;Introduction to Hashicorp Terraform&lt;/b&gt; &lt;br&gt;
&lt;/h3&gt;

&lt;p&gt;Below is an excellent overview of Terraform.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
&lt;a href="http://www.youtube.com/watch?v=h970ZBgKINg" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtdfvdy5aawt7f62db7l.jpg" alt="Introduction to HashiCorp Terraform with Armon Dadgar" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Terraform lifecycle/workflow consists of:&lt;/p&gt;

&lt;p&gt;&lt;b&gt;INIT&lt;/b&gt; - Terraform initializes the working directory containing the configuration files and installs all the required plug-ins that are referenced in the configuration files.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;PLAN&lt;/b&gt; - The stage where Terraform determines what needs to be created, updated, or destroyed to move from the real/current state of the infrastructure to the desired state. The generated plan describes these intended changes without applying them.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;APPLY&lt;/b&gt; - Terraform apply executes the generated plan, applying the changes needed to move the infrastructure resources to the desired state.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;DESTROY&lt;/b&gt; - Terraform destroy is used to remove/delete &lt;b&gt;only&lt;/b&gt; Terraform-managed resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fterraform_Workflow.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fterraform_Workflow.png" title="Terraform Workflow" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;br&gt;&lt;br&gt;&lt;/p&gt;
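
&lt;p&gt;On the command line, the workflow above maps onto these standard Terraform commands (shown with the var-file used later in this project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init                                 # install plug-ins, configure backend
terraform plan -var-file="dev.env.tfvars"      # preview the intended changes
terraform apply -var-file="dev.env.tfvars"     # execute the generated plan
terraform destroy -var-file="dev.env.tfvars"   # tear down managed resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;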

&lt;p&gt;Below are some key Terraform terms that we will touch upon as part of this article.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Provider:&lt;/b&gt; A plug-in that interacts with the APIs of public cloud providers (GCP, AWS, Azure) in order to access &amp;amp; create Terraform-managed resources.&lt;/p&gt;
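
&lt;p&gt;As a minimal illustration (the variable names here match those used later in this project, but treat this as a sketch rather than the project's exact code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  project     = var.project_id
  region      = var.region
  credentials = file(var.gcp_auth_file)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;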

&lt;p&gt;&lt;b&gt;Variables:&lt;/b&gt; Also called input variables, these are key-value pairs used by Terraform modules to allow customization. Instead of using hard-coded strings in your resource definition/module, you can separate the values out into data files (tfvars) and reference them&lt;br&gt;
via variables.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;State:&lt;/b&gt; Cached information about the infrastructure managed by Terraform and the related configurations.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Modules:&lt;/b&gt; A reusable container for one or more resources that are used together. Modules have defined input variables which are used to create/update resources, and can define output variables that other resources or modules can use.&lt;/p&gt;
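
&lt;p&gt;A hypothetical sketch of that input-output shape (file and resource names here are examples, not this project's code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# input.tf - the module's contract for callers
variable "project_id" {
  type = string
}

# main resource file - resources created from the inputs
resource "google_compute_network" "vpc" {
  name    = "example-vpc"
  project = var.project_id
}

# output.tf - return values for the parent module
output "network_id" {
  value = google_compute_network.vpc.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;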

&lt;p&gt;&lt;b&gt;Data Source:&lt;/b&gt; Implemented by providers to return references to existing infrastructure resources to Terraform.&lt;/p&gt;
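
&lt;p&gt;For example, a data source can look up an existing VM image rather than creating one (a standard google provider data source; the family and project shown are examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "google_compute_image" "debian" {
  family  = "debian-9"
  project = "debian-cloud"
}

# referenced elsewhere as: data.google_compute_image.debian.self_link
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;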


&lt;h2&gt;
  
  
  &lt;b&gt;Install Terraform &lt;/b&gt;
&lt;/h2&gt;

&lt;p&gt;The Terraform distribution is a single binary file that you can download from &lt;a href="https://www.terraform.io/downloads.html" rel="noopener noreferrer"&gt;Hashicorp Download&lt;/a&gt; and install on your system. Find the right binary for your operating system (Windows, Mac, etc.). The single binary named terraform needs to be extracted from the zip file and added to your system PATH.&lt;/p&gt;

&lt;p&gt;After completing installation, verify the install by running &lt;code&gt;terraform -version&lt;/code&gt; on the command line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform -version
Terraform v0.13.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can get a list of available commands by running &lt;code&gt;terraform&lt;/code&gt; without any arguments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform
Usage: terraform [-version] [-help]  [args]

The available commands for execution are listed below.
...
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;b&gt;Install Git &amp;amp; Clone project&lt;/b&gt;
&lt;/h2&gt;

&lt;p&gt;If you don't already have Git installed, use this &lt;a href="https://git-scm.com/downloads" rel="noopener noreferrer"&gt;link&lt;/a&gt; to install Git locally in order to pull down the Terraform code for this effort. After installing Git, clone the project locally by running:&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/KawiNeal/http-loadbalancer.git

cd http-loadbalancer/envs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Copy the generated service account JSON file and private key file (e.g. http-loadbalancer.json &amp;amp; id_rsa) into the &lt;b&gt;&lt;code&gt;envs&lt;/code&gt;&lt;/b&gt; folder of the project. In the &lt;b&gt;&lt;code&gt;envs&lt;/code&gt;&lt;/b&gt; folder, edit &lt;code&gt;dev.env.tfvars&lt;/code&gt; to make sure that the variable assignments for &lt;code&gt;gcp_auth_file&lt;/code&gt; and &lt;code&gt;stress_vm_key&lt;/code&gt; match the names of the files. &lt;br&gt;
&lt;br&gt;&lt;br&gt;
 &lt;code&gt;../http-loadbalancer/envs/dev.env.tfvars&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```

# GCP authentication file
gcp_auth_file = "http-loadbalancer.json"
&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;

```



# remote provisioning - private key file
stress_vm_key = "id_rsa"
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Replace the user name with the user name you used to create the SSH key.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```

# remote provisioning - user
user = "kawi.neal"  &amp;lt;---- Add your user name here
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Edit &lt;code&gt;dev.env.tf&lt;/code&gt; to make sure that the &lt;code&gt;bucket&lt;/code&gt; name and &lt;code&gt;credentials&lt;/code&gt; are assigned the name of the bucket you created and the JSON filename. Backend definitions in Terraform do not allow the use of variables.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;

```



backend "gcs" {
    bucket      = "http-loadbalancer"
    prefix      = "dev"
    credentials = "http-loadbalancer.json"
  }
&lt;/pre&gt;

&lt;/div&gt;


&lt;h2&gt;
  
  
  &lt;b&gt;Project Structure&lt;/b&gt;
&lt;/h2&gt;

&lt;p&gt;The diagram below shows the components that are used to build out and test your GCP HTTP Load Balancer. Having a clear picture of the components of your infrastructure &amp;amp; their relationships serves as a guide to defining the Terraform project code for provisioning your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2FArchitecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2FArchitecture.png" title="Architecture Overview" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This infrastructure can be broken down into these sets of resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compute Resources - Instance Group manager for creating/scaling compute resources.&lt;/li&gt;
&lt;li&gt;Network - Cloud Network and subnets &lt;/li&gt;
&lt;li&gt;Network Services - Network components for cloud balancing service.&lt;/li&gt;
&lt;li&gt;Stress Test VM - Virtual machine to test the load balancer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Terraform folder structure has been defined to map to the resource groupings, with each component within a group represented as a module.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
&lt;code&gt;   
├───compute
│   ├───auto_scaler
│   ├───instance_template
│   └───region_instancegroupmgr
├───envs
├───network
│   ├───firewall_rule
│   └───network_subnet
├───networkservices
│   └───load_balancer
│       ├───backend_service
│       ├───forwarding_rule
│       ├───health_check
│       ├───target_proxy
│       └───url_map
└───test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;b&gt;&lt;code&gt;envs&lt;/code&gt;&lt;/b&gt; folder is where the HTTP Load Balancer Terraform project is defined. It contains the provisioner, variables, remote backend, modules and data sources for this project. We will start with &lt;code&gt;main.tf&lt;/code&gt;, which serves as the root module (starting point) for the Terraform configuration. The root module makes all the calls to child modules &amp;amp; data sources needed to create all resources for the HTTP Load Balancer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```

├───envs                        
│   │   dev.env.tf              ----&amp;gt; all variables needed for DEV environment
│   │   dev.env.tfvars          ----&amp;gt; variables assignments for DEV
│   │   http-loadbalancer.json  * copied into project (service account)
│   │   id_rsa                  * copied into project (SSH key)
│   │   main.tf                  -----&amp;gt; terraform, GCP provider &amp;amp; modules 
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;dev.env.tf&lt;/code&gt; has all the variables associated with the DEV configuration, including the terraform block that defines the required version and cloud provider (GCP). We took the approach of isolating all the variables for a specific environment into one file.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;code&gt;dev.env.tf&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;&lt;/p&gt;



&lt;p&gt;{% gist &lt;a href="https://gist.github.com/KawiNeal/6f0dbe46045cfb444d66646bbe6c59fd.js" rel="noopener noreferrer"&gt;https://gist.github.com/KawiNeal/6f0dbe46045cfb444d66646bbe6c59fd.js&lt;/a&gt; %}&lt;/p&gt;

&lt;p&gt;The &lt;span&gt;terraform&lt;/span&gt; block sets which provider to retrieve from the Terraform Registry. Given that this is for GCP infrastructure, we need to use the Google provider source ("hashicorp/google"). Within the &lt;span&gt;terraform&lt;/span&gt; block, &lt;code&gt;required_version&lt;/code&gt; sets the version of Terraform to use when the configuration is initialized. &lt;code&gt;required_version&lt;/code&gt; takes a &lt;a href="https://www.terraform.io/docs/configuration/version-constraints.html" rel="noopener noreferrer"&gt;version constraint string&lt;/a&gt;, which allows a range of acceptable versions. In our project we specify any version greater than or equal to 0.13.&lt;/p&gt;
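
&lt;p&gt;A terraform block matching the versions discussed here would look roughly like this (a sketch, not the project's exact file):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 0.13"

  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;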

&lt;p&gt;The &lt;span&gt;provider&lt;/span&gt; block configures the named provider. &lt;a href="https://www.terraform.io/docs/configuration/providers.html" rel="noopener noreferrer"&gt;Providers&lt;/a&gt; are essentially plug-ins that give the Terraform configuration access to a set of resource types for each provider. Note that multiple providers can be specified in one configuration. You can also define multiple configurations for the same provider and select which one to use within each module or by resource. Our &lt;span&gt;provider&lt;/span&gt; block sets the version, project &amp;amp; GCP credentials to allow access to a specific project within GCP. The provider uses variables that are declared and defined in &lt;code&gt;dev.env.tf&lt;/code&gt; and &lt;code&gt;dev.env.tfvars&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;span&gt;backend&lt;/span&gt; block enables storing the infrastructure state in a remote data store. Remote backends are highly recommended when teams modify the same infrastructure, or parts of the same infrastructure. The advantages are collaboration, security (sensitive info) and remote operations. The backend we have defined is GCP ("gcs"), using the storage bucket we created as part of the setup. Access to the storage bucket is obtained with a service account key (JSON). One thing to note: you cannot use variables within the definition of a backend; all input must be hard-coded. You can see that difference between the definition of the &lt;span&gt;provider&lt;/span&gt; block and the &lt;span&gt;backend&lt;/span&gt; block.&lt;br&gt;&lt;br&gt;&lt;br&gt;
The variable declarations after the &lt;span&gt;backend&lt;/span&gt; block define the variable types that need to be passed to all the modules.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;code&gt;dev.env.tfvars&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;&lt;/p&gt;



&lt;p&gt;{% gist &lt;a href="https://gist.github.com/KawiNeal/73171999e47eb57246b65f438dbd4902.js" rel="noopener noreferrer"&gt;https://gist.github.com/KawiNeal/73171999e47eb57246b65f438dbd4902.js&lt;/a&gt; %}&lt;/p&gt;

&lt;p&gt;Hard-coding values in Terraform configuration is not recommended. Variables ensure the configuration can be easily maintained and reused, and they also serve as parameters to Terraform modules. Variable declarations are defined in a variables TF file and their associated value assignments are put into a TFVARS file. The variables in these files represent the sets of inputs to the modules for this infrastructure.&lt;/p&gt;

&lt;p&gt;For example, the VPC network and subnets input from &lt;code&gt;dev.env.tfvars&lt;/code&gt; is defined as:&lt;/p&gt;



&lt;p&gt;{% gist &lt;a href="https://gist.github.com/KawiNeal/182b0a8c7b2acc88c3280d3dba362afd.js" rel="noopener noreferrer"&gt;https://gist.github.com/KawiNeal/182b0a8c7b2acc88c3280d3dba362afd.js&lt;/a&gt; %}&lt;/p&gt;

&lt;p&gt;The inputs (&lt;code&gt;project_id, vpc, vpc_subnets&lt;/code&gt;) are passed to the network_subnet module within the network group folder (../network/network_subnet).&lt;/p&gt;



&lt;p&gt;{% gist &lt;a href="https://gist.github.com/KawiNeal/e7ddb523615e97ce01bd4e6f4f8f187d.js" rel="noopener noreferrer"&gt;https://gist.github.com/KawiNeal/e7ddb523615e97ce01bd4e6f4f8f187d.js&lt;/a&gt; %}&lt;/p&gt;
&lt;h2&gt;
  
  
  Modules
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;network_subnet&lt;/code&gt; module illustrates how a module can call/re-use other modules. The &lt;code&gt;network_subnet&lt;/code&gt; module calls version 2.5.0 of an available &amp;amp; verified &lt;a href="https://registry.terraform.io/namespaces/terraform-google-modules" rel="noopener noreferrer"&gt;network module&lt;/a&gt; that creates networks/subnets from the required input parameters. There is a &lt;a href="https://registry.terraform.io/" rel="noopener noreferrer"&gt;Terraform registry of modules&lt;/a&gt; that can be used to create resources for multiple providers (AWS, GCP, Azure, etc.). &lt;br&gt;&lt;br&gt;
&lt;u&gt;&lt;b&gt;&lt;code&gt;Module network_vpc&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;&lt;/p&gt;



&lt;p&gt;{% gist &lt;a href="https://gist.github.com/KawiNeal/b6bcbc1969970a917fa6af39d68559aa.js" rel="noopener noreferrer"&gt;https://gist.github.com/KawiNeal/b6bcbc1969970a917fa6af39d68559aa.js&lt;/a&gt; %}&lt;br&gt;
Modules not only allow you to re-use configuration but also make it easier to organize your configuration into clear, logical components of your infrastructure. Proper definition and grouping of modules allows for easier navigation &amp;amp; understanding of larger cloud infrastructures that may span multiple cloud providers and hundreds of resources.&lt;br&gt;&lt;br&gt;
Similar to web services, modules should follow an "input-output" pattern: a clear contract that defines the inputs to the module and the outputs from it.  These reusable components (modules) can then be logically glued together to produce a functional infrastructure.&lt;/p&gt;
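&lt;p&gt;As a minimal, hypothetical sketch of that contract (the variable and output names here are illustrative, not taken from the project), a module's &lt;code&gt;input.tf&lt;/code&gt; and &lt;code&gt;output.tf&lt;/code&gt; might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
# input.tf -- the module's inputs (its contract with callers)
variable "project_id" {
  description = "GCP project to deploy into"
  type        = string
}

variable "vpc" {
  description = "Name of the VPC network to create"
  type        = string
}

# output.tf -- the module's return values
output "network_name" {
  description = "Name of the created network, usable by sibling modules"
  value       = google_compute_network.network.name
}
&lt;/pre&gt;
&lt;/div&gt;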

&lt;p&gt;An example of two &lt;code&gt;network services&lt;/code&gt; modules is shown below: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;

```



...
...
│   │───target_proxy
│   │     ├───input.tf
│   │     ├───output.tf
│   │     └───target_proxy.tf
│   │───url_map
│   │     ├───input.tf
│   │     ├───output.tf
│   │     └───url_map.tf
...
...
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output values defined in &lt;code&gt;output.tf&lt;/code&gt; are the return values of a Terraform module; they can be used to pass resource attributes/references to the parent module. Other modules in the root module can use these attributes as input, creating an &lt;b&gt;implicit&lt;/b&gt; dependency.  In the example above, the &lt;code&gt;target_proxy&lt;/code&gt; has a dependency on a URL map. The output from the &lt;code&gt;url_map&lt;/code&gt; child module to the root module is &lt;code&gt;url_map_id&lt;/code&gt;, which is passed as an input to the &lt;code&gt;target_proxy&lt;/code&gt; child module.&lt;br&gt;
&lt;u&gt;&lt;b&gt;&lt;code&gt;Module url_map output&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;&lt;/p&gt;




&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
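&lt;p&gt;A hypothetical sketch of such an output, assuming a &lt;code&gt;google_compute_url_map&lt;/code&gt; resource named &lt;code&gt;url_map&lt;/code&gt; inside the child module:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
# output.tf of the url_map child module
output "url_map_id" {
  description = "ID of the created URL map, returned to the root module"
  value       = google_compute_url_map.url_map.id
}
&lt;/pre&gt;
&lt;/div&gt;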
&lt;br&gt;
In the root/parent module, outputs from a child module can be referenced as &lt;b&gt;module.MODULE_NAME.OUTPUT_NAME&lt;/b&gt;. In the case of the &lt;code&gt;url_map&lt;/code&gt; output, it can be referenced as &lt;code&gt;module.url_map.url_map_id&lt;/code&gt;, as shown below from the root module's &lt;code&gt;main.tf&lt;/code&gt;.&lt;br&gt;
&lt;u&gt;&lt;b&gt;&lt;code&gt;Module http_proxy input&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;




&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
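&lt;p&gt;A hypothetical sketch of that root-module wiring (the module source paths are illustrative, not the project's actual layout):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
# main.tf in the root module
module "http_proxy" {
  source     = "../modules/network_services/target_proxy"

  # the child module's output becomes this module's input; Terraform
  # infers an implicit dependency on module.url_map from this reference
  url_map_id = module.url_map.url_map_id
}
&lt;/pre&gt;
&lt;/div&gt;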


&lt;p&gt;By default, Terraform takes implicit dependencies into account when ordering resource creation. In the case of &lt;code&gt;url_map&lt;/code&gt; and &lt;code&gt;target_proxy&lt;/code&gt; above, the url_map will be created prior to the &lt;code&gt;target_proxy&lt;/code&gt;.&lt;br&gt;
Terraform also allows for declaring &lt;b&gt;explicit&lt;/b&gt; dependencies with the use of &lt;code&gt;depends_on&lt;/code&gt;.&lt;br&gt;
One method of testing the HTTP Load Balancer was to create a virtual machine instance (&lt;code&gt;stress_test_vm&lt;/code&gt;) and drive traffic from it to the load balancer.  The load balancer should forward traffic to the region closest to the virtual machine's region/location. The &lt;code&gt;stress_test_vm&lt;/code&gt; is a stand-alone instance with no implicit dependency on the resources/modules defined in the root module, but it does require that the resources associated with the HTTP Load Balancer be in place before it can send traffic to them. The &lt;code&gt;depends_on = [module.network_subnet, module.fowarding_rule]&lt;/code&gt; sets this explicit dependency: before creating the test VM we want to ensure that the network/subnets and the externally exposed IP address are in place prior to generating traffic to the external IP.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;code&gt;Module - test&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;&lt;/p&gt;




&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
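&lt;p&gt;A hypothetical sketch of the test module block with its explicit dependency (the source path and other inputs are illustrative; the &lt;code&gt;depends_on&lt;/code&gt; list is quoted from the text above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
# main.tf in the root module
module "test" {
  source     = "../modules/test"
  project_id = var.project_id

  # explicit dependency: wait for the network and the LB frontend
  # before provisioning the stress-test VM
  depends_on = [module.network_subnet, module.fowarding_rule]
}
&lt;/pre&gt;
&lt;/div&gt;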


&lt;h2&gt;
  
  
  Load Balancer &amp;amp; Testing
&lt;/h2&gt;

&lt;p&gt;Additional details for configuring a GCP Load Balancer can be found &lt;a href="https://console.cloud.google.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. From a GCP perspective, per our architecture diagram, our configuration consists of:&lt;br&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;b&gt;HTTP, health check, and SSH firewall rules&lt;/b&gt; &lt;br&gt;&lt;br&gt;
To allow HTTP traffic to backends, TCP traffic from GCP Health checker &amp;amp; remote SSH from Terraform to stress test VM.&lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;b&gt;Instance templates (2)&lt;/b&gt; &lt;br&gt;&lt;br&gt;
Resources used to create VM instances and managed instance groups (MIGs). Templates define the machine type, boot disk, and other instance properties. A startup script is also executed on all instances created by the instance template to install Apache.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;b&gt;Managed instance groups (2)&lt;/b&gt; &lt;br&gt;&lt;br&gt;
Managed instance groups use instance templates to create a group of identical instances and offer autoscaling based on an autoscaling policy/metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;b&gt;HTTP Load Balancer (IPv4 &amp;amp; IPv6)&lt;/b&gt; &lt;br&gt;&lt;br&gt;
The load balancer consists of a backend service that balances traffic between two backend managed instance groups (MIGs in US &amp;amp; EU). The load balancer configuration includes HTTP health checks (port 80) to determine when instances will receive new connections. A forwarding rule (frontend) is created as part of the load balancer; frontends determine how traffic will be directed. For our configuration we default to the HTTP port (80). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;b&gt;Stress Test VM&lt;/b&gt;&lt;br&gt;
&lt;br&gt;A VM is created to simulate load on the HTTP Load Balancer. As part of VM startup, &lt;b&gt;siege&lt;/b&gt;, an HTTP load-testing utility, is installed. Via Terraform's &lt;code&gt;remote-exec&lt;/code&gt; we execute the &lt;b&gt;siege&lt;/b&gt; utility to direct traffic to the HTTP Load Balancer. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
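&lt;p&gt;A condensed, hypothetical sketch of how these pieces chain together in the google provider (resource names are illustrative; the backend/MIG attachments, firewall rules, and instance templates are omitted for brevity):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
# health check probed against backend instances
resource "google_compute_health_check" "http" {
  name = "http-lb-health-check"
  http_health_check { port = 80 }
}

# backend service balancing traffic across the MIGs
resource "google_compute_backend_service" "backend" {
  name          = "http-lb-backend"
  health_checks = [google_compute_health_check.http.id]
}

# URL map routing requests to the backend service
resource "google_compute_url_map" "url_map" {
  name            = "http-lb"
  default_service = google_compute_backend_service.backend.id
}

# target proxy terminating HTTP and consulting the URL map
resource "google_compute_target_http_proxy" "proxy" {
  name    = "http-lb-proxy"
  url_map = google_compute_url_map.url_map.id
}

# forwarding rule (frontend) exposing the external IP on port 80
resource "google_compute_global_forwarding_rule" "frontend" {
  name       = "http-lb-frontend"
  target     = google_compute_target_http_proxy.proxy.id
  port_range = "80"
}
&lt;/pre&gt;
&lt;/div&gt;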

&lt;p&gt;Terraform &lt;b&gt;data sources&lt;/b&gt; were used in the &lt;code&gt;test&lt;/code&gt; module to retrieve the external IP address (frontend) of the load balancer, in order for &lt;b&gt;siege&lt;/b&gt; to route traffic to it. Data sources allow Terraform to retrieve existing resource configuration information.  One item to note here is that a Terraform data source can query any resource within the provider; it does not have to be a resource managed/created by Terraform. &lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;code&gt;Module - "test" (stress_test_vm.tf)&lt;/code&gt;&lt;/b&gt;&lt;/u&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```

# get forward rule to obtain frontend IP
data "google_compute_global_forwarding_rule" "http" {
  name = var.forward_rule_name

}
&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;

```



inline = [
      "sleep 280",
      "siege -c 255 -t580 http://${data.google_compute_global_forwarding_rule.http.ip_address} &amp;amp;",
      "sleep 600"
    ]
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The inline block contains the command line for executing the &lt;b&gt;siege&lt;/b&gt; utility on the &lt;code&gt;stress_test_vm&lt;/code&gt;. There is a pause (280 seconds) prior to running siege to allow the load balancer front-end IP to become available.  The command generates 255 concurrent user requests, at a rate of 1-3 seconds between each request, for 580 seconds. One notable issue I ran into was that the &lt;b&gt;siege&lt;/b&gt; command would not keep running for the full time period: after the SSH connection was made and the command line executed, the session would immediately end and terminate the &lt;b&gt;siege&lt;/b&gt; process.  The work-around was to run &lt;b&gt;siege&lt;/b&gt; as a background process and add &lt;code&gt;sleep&lt;/code&gt; to delay closing the SSH session, which would otherwise have terminated &lt;b&gt;siege&lt;/b&gt; before the needed execution time.&lt;br&gt;
&lt;br&gt;&lt;br&gt;
Although available, provisioning an instance with Terraform over SSH (remote-exec) is not recommended by Hashicorp, and the issue faced with the &lt;b&gt;siege&lt;/b&gt; process seems to validate their recommendation.  For this effort it was convenient for testing purposes. Hashicorp provides an image-building tool, &lt;a href="https://packer.io/" rel="noopener noreferrer"&gt;Hashicorp Packer&lt;/a&gt;, that automates the creation of VM instance images.&lt;/p&gt;

&lt;h2&gt;
  
  
  INIT
&lt;/h2&gt;

&lt;p&gt;We can now proceed through this Terraform project lifecycle: INIT, PLAN, APPLY and eventually DESTROY when done. &lt;br&gt;&lt;br&gt;
Run &lt;b&gt;&lt;code&gt;'terraform init'&lt;/code&gt;&lt;/b&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```

C:\http-loadbalancer\envs&amp;gt;terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/google v3.46.0
Terraform has been successfully initialized!.
&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  PLAN
&lt;/h2&gt;

&lt;p&gt;Terraform PLAN needs to be executed with the variables from the &lt;code&gt;dev.env.tfvars&lt;/code&gt; file. The terminal output will display all resources that will be generated and provide a resource count at the end of the plan output.&lt;/p&gt;

&lt;p&gt;Run &lt;b&gt;&lt;code&gt;'terraform plan -var-file dev.env.tfvars'&lt;/code&gt;&lt;/b&gt; &lt;br&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;

```



C:\http-loadbalancer\envs&amp;gt;terraform plan -var-file dev.env.tfvars

..
..
Plan: 22 to add, 0 to change, 0 to destroy.
&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  APPLY
&lt;/h2&gt;

&lt;p&gt;Terraform PLAN indicates that 22 GCP resources will be created. Next we run APPLY to execute the generated plan and move our infrastructure to the desired state.  Note that when we run APPLY the &lt;code&gt;stress_test_vm&lt;/code&gt; will be provisioned after all other resources; after a short period of time (1-2 minutes) web traffic will be directed to the load balancer.&lt;/p&gt;

&lt;p&gt;Run &lt;b&gt;&lt;code&gt;'terraform apply -var-file dev.env.tfvars -auto-approve'&lt;/code&gt;&lt;/b&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```

C:\http-loadbalancer\envs&amp;gt;terraform apply -var-file dev.env.tfvars -auto-approve
PS C:\Users\Kawi\Terraform\Repos\http-loadbalancer\envs&amp;gt; terraform apply -var-file dev.env.tfvars -auto-approve
module.network_subnet.module.network_vpc.module.vpc.google_compute_network.network: Creating...
module.healthcheck.google_compute_health_check.healthcheck: Creating...
module.healthcheck.google_compute_health_check.healthcheck: Creation complete after 3s [id=projects/http-loadbalancer/global/healthChecks/http-lb-health-check]
module.network_subnet.module.network_vpc.module.vpc.google_compute_network.network: Still creating... [10s elapsed]
module.network_subnet.module.network_vpc.module.vpc.google_compute_network.network: Creation complete after 15s [id=projects/http-loadbalancer/global/networks/http-lb]
..
..
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After Terraform has created the GCP resources and the &lt;code&gt;remote-exec&lt;/code&gt; process is running, you can use the GCP console to view traffic flow to &lt;a href="https://console.cloud.google.com/net-services/loadbalancing/advanced/backendServices/details/http-lb-backend?project=http-loadbalancer&amp;amp;duration=PT1H" rel="noopener noreferrer"&gt;backends&lt;/a&gt;.  Given that the &lt;code&gt;stress_test_vm&lt;/code&gt; is in a closer region, the majority of traffic will be routed to the &lt;code&gt;europe-west&lt;/code&gt; managed instance group. The managed instance group will create additional VMs to handle the uptick in web traffic to the load balancer.&lt;br&gt;&lt;br&gt;
From the GCP console navigation menu select: &lt;br&gt;&lt;br&gt;
&lt;b&gt;Network Services --&amp;gt; Load Balancing --&amp;gt; Backend (tab)&lt;/b&gt;, then select "http-lb-backend" from the list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_BackEnd_Traffic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstorage.googleapis.com%2Fhttp-loadbalancer%2Fimages%2Fgcp_BackEnd_Traffic.png" title="Architecture Overview" alt="alt text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To view the instances created to handle traffic from &lt;code&gt;stress_test_vm&lt;/code&gt;, from the GCP console navigation menu select: &lt;br&gt;&lt;br&gt;
&lt;b&gt;Compute Engine --&amp;gt; VM instances&lt;/b&gt; &lt;/p&gt;

&lt;p&gt;After the &lt;code&gt;remote-exec&lt;/code&gt; process completes, the number of instances will scale back down via the instance group manager. If you need to run testing again, the &lt;code&gt;stress_test_vm&lt;/code&gt; can be marked as TAINTed and APPLY re-executed, which will destroy only the &lt;code&gt;stress_test_vm&lt;/code&gt; and then re-create it.  &lt;/p&gt;

&lt;p&gt;Run &lt;b&gt;&lt;code&gt;'terraform taint module.test.google_compute_instance.stress_test_vm'&lt;/code&gt;&lt;/b&gt; and then &lt;br&gt; &lt;br&gt;
&lt;b&gt;&lt;code&gt;'terraform apply -var-file dev.env.tfvars'&lt;/code&gt;&lt;/b&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;

```



C:\http-loadbalancer\envs&amp;gt;terraform taint module.test.google_compute_instance.stress_test_vm
Resource instance module.test.google_compute_instance.stress_test_vm has been marked as tainted.

C:\http-loadbalancer\envs&amp;gt; terraform apply -var-file dev.env.tfvars
..
..
Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  DESTROY
&lt;/h2&gt;

&lt;p&gt;Terraform DESTROY needs to be executed to clean up all resources. To check that destroy will remove all 22 resources, run DESTROY without the &lt;code&gt;-auto-approve&lt;/code&gt; parameter; you will then be prompted to answer 'yes' to accept removal of all resources. &lt;br&gt;&lt;br&gt;
Run &lt;b&gt;&lt;code&gt;'terraform destroy -var-file dev.env.tfvars'&lt;/code&gt;&lt;/b&gt; &lt;br&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;



```



C:\http-loadbalancer\envs&amp;gt;terraform destroy -var-file dev.env.tfvars -auto-approve
..
..
Plan: 0 to add, 0 to change, 22 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: 
&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Last but not least, the resources (storage bucket, service account) created as part of the project setup will need to be deleted when you are done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Posts/Topics
&lt;/h2&gt;

&lt;p&gt;For future posts, I will build upon this project and add support for additional Terraform features to address the topics below:&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Terraform Workspaces&lt;/b&gt; - The current project implementation maps one set of infrastructure resources to a DEV environment. With Terraform workspaces you can manage collections of infrastructure resources, which allows you to segregate your environments (DEV, QA, PROD) while using the same infrastructure resource definitions. &lt;/p&gt;
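&lt;p&gt;As a quick, hypothetical sketch of the workspace workflow (the workspace names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;
# create isolated state collections and switch between them
terraform workspace new dev
terraform workspace new qa
terraform workspace select dev
terraform apply -var-file dev.env.tfvars
&lt;/pre&gt;
&lt;/div&gt;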

&lt;p&gt;&lt;b&gt;Provisioning&lt;/b&gt; - The stress_test_vm used in this effort was provisioned with Terraform over SSH (remote-exec). As previously stated, this is not recommended by Hashicorp.  Hashicorp provides an image-building tool, &lt;a href="https://packer.io/" rel="noopener noreferrer"&gt;Hashicorp Packer&lt;/a&gt;, to handle provisioning. &lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;And that's all folks...hope this post provided some insight into Terraform. &lt;/p&gt;

</description>
      <category>iac</category>
      <category>terraform</category>
      <category>gcp</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
