<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Angel Rivera</title>
    <description>The latest articles on DEV Community by Angel Rivera (@punkdata).</description>
    <link>https://dev.to/punkdata</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F105653%2F22045554-7b95-4c39-b02a-082e4f06008f.png</url>
      <title>DEV Community: Angel Rivera</title>
      <link>https://dev.to/punkdata</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/punkdata"/>
    <language>en</language>
    <item>
      <title>What developers get, out-of-the-box, from the most generous free plan anywhere</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Tue, 11 Jan 2022 13:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/what-developers-get-out-of-the-box-from-the-most-generous-free-plan-anywhere-2n4l</link>
      <guid>https://dev.to/circleci/what-developers-get-out-of-the-box-from-the-most-generous-free-plan-anywhere-2n4l</guid>
      <description>&lt;p&gt;Freemium plans are a great way for companies to introduce developers to their products and offer a hands-on demonstration of the value they provide. But it can be extremely frustrating for developers when a free tier limits access to key features or doesn’t provide enough capacity to evaluate how the product performs in real-world development scenarios. Not only is this frustrating, but it also diminishes the overall developer experience, resulting in negative and sometimes inaccurate perceptions of the product.&lt;/p&gt;

&lt;p&gt;Many companies struggle to strike the right balance between which features to include in their free tier and at what level of usage they should require a paid account. CircleCI is no exception. Until recently, our free plan just wasn’t enough to provide the best developer experience and demonstrate the value and power that CircleCI can bring to your continuous delivery and release processes. That’s why we decided to overhaul our offerings to provide the most comprehensive free plan available on the market.&lt;/p&gt;

&lt;p&gt;In this post, I’ll discuss the newly released CircleCI Free plan, highlighting some of the most impactful changes and how they will improve the developer experience.&lt;/p&gt;

&lt;h2&gt;Free plan details&lt;/h2&gt;

&lt;p&gt;Developers continually adopt innovative strategies and concepts like continuous delivery and release management processes with the goal of building and releasing software faster and more efficiently. CircleCI enables your team to maximize development velocity by automating their software build and release practices. With our new free tier offerings, you get access to all the features and capabilities you need to make the most of your build minutes. Here’s what you get with the new free plan:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlimited users&lt;/li&gt;
&lt;li&gt;30,000 credits per month—enough for up to 6,000 build minutes, depending on your compute type&lt;/li&gt;
&lt;li&gt;Access to multiple execution environments, including Docker, Linux, Arm, and Windows, and larger resource classes&lt;/li&gt;
&lt;li&gt;30 concurrent job runs on any of the available compute options&lt;/li&gt;
&lt;li&gt;5 self-hosted runners to run jobs on your own machines&lt;/li&gt;
&lt;li&gt;1 GB of network data transfer to self-hosted runners and 2 GB of data storage for saving caches and workspaces, uploading test results, and storing build artifacts&lt;/li&gt;
&lt;li&gt;Docker layer caching to speed up your Docker builds&lt;/li&gt;
&lt;li&gt;Flaky test detection on our Insights dashboard for up to 5 tests&lt;/li&gt;
&lt;li&gt;The ability to create private orbs for sharing configuration code with other members of your team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that I’ve briefly described the awesome products and features included in the new Free plan, let’s take a closer look at some of the features that matter most to developers and their teams.&lt;/p&gt;

&lt;h2&gt;Larger resource classes&lt;/h2&gt;

&lt;p&gt;Release cycles keep getting shorter as teams work to ship changes and new features to users as quickly as possible. For developers to meet their release cycle requirements, it’s critical that they execute highly efficient CI/CD pipelines on the changes to be released. There are many ways to optimize your CI/CD pipelines, and one of the easiest and most effective is to control the pipeline’s underlying compute node capacity via a feature known as the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#resourceclass"&gt;resource class&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The resource class feature enables teams to configure and manage the capacity of compute node resources such as CPU and RAM, ensuring that the pipeline has the appropriate horsepower to successfully and expediently complete pipeline jobs. More often than not, pipeline job resource classes are configured with insufficient resource capacities, resulting in dramatically slower runs. These slower runs incrementally extend the duration of pipeline completions, which can lead to delays in the release process. The new Free plan provides access to a wider range of resource classes, enabling teams to dial in the right resources to optimize their pipeline job performance.&lt;/p&gt;
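
&lt;p&gt;To illustrate, a job can request a larger resource class with a single key. This is a minimal sketch; the job name, image, and build command are placeholders for your own project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:lts
    resource_class: large # 4 vCPUs / 8 GB RAM, up from the medium default
    steps:
      - checkout
      - run: npm run build

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;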

&lt;h2&gt;30x concurrency&lt;/h2&gt;

&lt;p&gt;Beyond providing appropriate compute capacities for pipeline jobs, another feature teams can use to speed up their pipelines is concurrency, which is the ability to run multiple jobs at the same time across multiple execution environments. Under the new Free plan, you can now run up to 30 jobs concurrently, which can yield significant time savings by allowing you to avoid queuing caused by resource constraints in a given execution environment.&lt;/p&gt;

&lt;p&gt;A great use case for concurrency within pipelines is &lt;a href="https://dev.to/blog/how-bolt-optimized-their-ci-pipeline-to-reduce-their-test-run-time-by-over-3x/"&gt;parallel test execution&lt;/a&gt;. The more tests your project has, the longer it will take for them to complete on a single machine. To reduce this time, you can run tests in parallel by specifying a &lt;code&gt;parallelism&lt;/code&gt; level in your job configuration. Your tests will then run simultaneously across multiple separate executors, allowing you to shorten the amount of time it takes to validate and ship changes to your users. For more information on parallelism and concurrency, review the &lt;a href="https://circleci.com/docs/2.0/parallelism-faster-jobs/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
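
&lt;p&gt;As a sketch, fanning a test job out across executors takes one key plus the built-in &lt;code&gt;circleci tests split&lt;/code&gt; helper; the glob pattern and test command below are placeholders for your own project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  test:
    docker:
      - image: cimg/node:lts
    parallelism: 4 # run this job on 4 executors simultaneously
    steps:
      - checkout
      - run:
          name: Run tests split by timing data
          command: |
            TESTFILES=$(circleci tests glob "test/**/*.js" | circleci tests split --split-by=timings)
            npm test -- $TESTFILES

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;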

&lt;h2&gt;Docker layer caching&lt;/h2&gt;

&lt;p&gt;Docker is the &lt;a href="https://dev.to/blog/benefits-of-containerization/"&gt;containerization&lt;/a&gt; technology available within the CircleCI platform. Docker images allow teams to spin up container environments to run jobs, and many teams build their own Docker images to have custom environments to test and deploy their applications. Docker images are built from Dockerfiles, and each command in the Dockerfile produces a layer in the image. Building Docker images can be one of the most time-consuming tasks in a CI/CD workflow.&lt;/p&gt;

&lt;p&gt;With Docker layer caching, now available on the CircleCI Free plan, you can reduce the time spent on repeated Docker builds by saving individual layers of your Docker image on every job run. The next time you run your workflow, CircleCI will retrieve any unchanged layers from the cache rather than rebuilding the entire image from scratch. This enables teams to efficiently package their apps and build related Docker images without slowing down pipeline executions. For more information about Docker layer caching, check out &lt;a href="https://circleci.com/docs/2.0/docker-layer-caching/"&gt;the documentation&lt;/a&gt;.&lt;/p&gt;
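
&lt;p&gt;Turning the feature on is a one-line change in any job that builds images. This sketch assumes a Docker executor; the image names are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build_image:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true # reuse unchanged layers from earlier builds
      - run: docker build -t myorg/myapp .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;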

&lt;h2&gt;Private orbs&lt;/h2&gt;

&lt;p&gt;Orbs are &lt;a href="https://dev.to/blog/automate-and-scale-your-ci-cd-with-circleci-orbs/"&gt;reusable packages of YAML&lt;/a&gt; that make it easier for developers to automate processes and incorporate third-party tools in their pipelines as well as share configuration across projects. While there are many useful orbs available in our &lt;a href="https://circleci.com/developer/orbs"&gt;public orbs registry&lt;/a&gt;, teams working in highly regulated industries such as healthcare, finance, or the public sector often require higher levels of security and compliance.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://circleci.com/docs/2.0/orb-intro/#private-orbs"&gt;private orbs&lt;/a&gt;, your team gets all of the collaboration and efficiency advantages of orbs along with the increased privacy and security that comes from restricting access to authenticated users within your organization. Your team can create and publish new private orbs using the CLI tool, and authenticated users can view and manage your organization’s private orbs by visiting the Orbs page in the Organization Settings tab of the CircleCI web app.&lt;/p&gt;
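
&lt;p&gt;As a rough sketch, the CLI workflow looks like this; the namespace, orb name, and file path are placeholders for your own organization:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Claim a namespace for your organization (one-time setup)
circleci namespace create mynamespace github MyOrg

# Create the orb as private so only authenticated org members can see it
circleci orb create mynamespace/internal-tools --private

# Publish a development version from a local orb definition
circleci orb publish orb.yml mynamespace/internal-tools@dev:first

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;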

&lt;h2&gt;Test Insights with flaky test detection&lt;/h2&gt;

&lt;p&gt;Automatically testing changes to your code is the foundation of &lt;a href="https://circleci.com/continuous-integration/"&gt;continuous integration&lt;/a&gt; and is an essential step toward minimizing risk in your software releases. But the reality is that most tests aren’t perfect. They don’t always execute as expected and can sometimes be flaky, meaning they fail nondeterministically. With the &lt;a href="https://dev.to/blog/monitor-and-optimize-your-ci-cd-pipeline-with-insights-from-circleci/"&gt;CircleCI Insights dashboard&lt;/a&gt;, you can monitor test performance across multiple workflows and development branches to automatically identify tests that are slow, flaky, or fail most often.&lt;/p&gt;

&lt;p&gt;Insights gives developers valuable visibility into test execution and performance data. Improved awareness of the health and performance of your test suite can save your team time and money by eliminating hours spent chasing unidentified bugs and increasing your confidence in the quality of your code. Plus, with Insights, you can monitor other key metrics, including credit usage, success rates, and pipeline duration, so that you can get a complete overview of your workflow performance at a glance. To learn more, visit the &lt;a href="https://circleci.com/docs/2.0/insights/"&gt;Insights documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In this post, I discussed the newly released CircleCI free plan and highlighted some of the changes that will provide the biggest improvements to the developer experience. With access to the complete feature set on CircleCI’s powerful continuous integration and delivery platform, your team can implement fast, flexible, and efficient CI/CD pipelines that will drastically shorten the time from commit to deploy.&lt;/p&gt;

&lt;p&gt;Automating development processes with continuous delivery is no longer optional for software producers who want to be responsive to their users’ needs and stay ahead of the competition. To learn more about CircleCI’s pricing and how it compares to other plans available on the market, visit our &lt;a href="https://circleci.com/pricing/"&gt;pricing page&lt;/a&gt;. If your team isn’t already benefitting from the time savings and confidence boost that a robust CI/CD solution can provide, sign up for a &lt;a href="http://circleci.com/signup"&gt;free CircleCI account&lt;/a&gt; and get started today.&lt;/p&gt;




</description>
      <category>circleci</category>
      <category>cicd</category>
      <category>free</category>
    </item>
    <item>
      <title>Infrastructure as Code, part 3: automate Kubernetes deployments with CI/CD and Terraform</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Thu, 11 Nov 2021 17:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/infrastructure-as-code-part-3-automate-kubernetes-deployments-with-cicd-and-terraform-42om</link>
      <guid>https://dev.to/circleci/infrastructure-as-code-part-3-automate-kubernetes-deployments-with-cicd-and-terraform-42om</guid>
      <description>&lt;p&gt;&lt;em&gt;This series shows you how to get started with infrastructure as code (IaC). The goal is to help developers build a strong understanding of IaC through tutorials and code examples.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this post, I will demonstrate how to create &lt;a href="https://circleci.com/continuous-integration/"&gt;continuous integration and deployment (CI/CD)&lt;/a&gt; pipelines that automate the &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; IaC deployments covered in &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;part 1&lt;/a&gt; and &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-02-build-docker-images-and-deploy-to-kubernetes-4g86-temp-slug-6226302"&gt;part 2&lt;/a&gt; of this series. Here is a quick list of things we will accomplish in this post:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build a new &lt;a href="https://circleci.com/docs/2.0/configuration-reference/"&gt;CircleCI config.yml file&lt;/a&gt; for the project&lt;/li&gt;
&lt;li&gt;Configure new &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs"&gt;jobs&lt;/a&gt; and &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#workflows"&gt;workflows&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Automate the execution of Terraform code to create &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine (GKE) clusters&lt;/a&gt; and deploy the application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Before you can go through this part of the tutorial, make sure you have completed all the actions in the &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;prerequisites section of part 1&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We will start with a quick explanation of what CI/CD is, and a review of the previous two installments of this tutorial series. Then you can start learning about the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/"&gt;CircleCI config.yml file&lt;/a&gt; included in &lt;a href="https://github.com/datapunkz/learn_iac"&gt;this code repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Continuous integration and continuous deployment&lt;/h2&gt;

&lt;p&gt;CI/CD pipelines help developers and teams automate their build and test processes. CI/CD creates valuable feedback loops that provide a near real-time status of software development processes. The CI/CD automation also provides consistent process execution and accurate results. This consistency makes the processes easier to optimize and contributes to velocity gains. Streamlining development practices with CI/CD is becoming common practice among teams. Understanding how to integrate and automate repeated tasks is critical in building valuable CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;part 1&lt;/a&gt; and &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-02-build-docker-images-and-deploy-to-kubernetes-4g86-temp-slug-6226302"&gt;part 2&lt;/a&gt;, we used &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; to create a new GKE cluster and related Kubernetes objects that deploy, execute, and serve an application. These Terraform commands were executed manually from our terminal. That works well when you are developing the Terraform code or modifying it, but we want to automate the execution of those commands. There are many ways to automate them, but we are going to focus on how to do it from within CI/CD pipelines.&lt;/p&gt;

&lt;h2&gt;What are CircleCI pipelines?&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://circleci.com/docs/2.0/concepts/#pipelines"&gt;CircleCI pipelines&lt;/a&gt; are the full set of processes you run when you trigger work on your projects. Pipelines encompass your workflows, which in turn coordinate your jobs. This is all defined in your project configuration file. In the next sections of this tutorial, we will define a CI/CD pipeline to build our project.&lt;/p&gt;

&lt;h3&gt;Setting up the project on CircleCI&lt;/h3&gt;

&lt;p&gt;Before we start building a &lt;code&gt;config.yml&lt;/code&gt; file for this project, we need to add the project to CircleCI. If you are unfamiliar with the process, you can use the &lt;a href="https://circleci.com/docs/2.0/getting-started/#setting-up-circleci"&gt;setting up CircleCI guide here&lt;/a&gt;. Once you have completed the &lt;strong&gt;Setting up CircleCI&lt;/strong&gt; section, stop there so we can configure &lt;a href="https://circleci.com/docs/2.0/env-vars/#setting-an-environment-variable-in-a-project"&gt;project level environment variables&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Project level environment variables&lt;/h3&gt;

&lt;p&gt;Some jobs in this pipeline will need access to authentication credentials to execute commands on the target service. In this section, we will define the required credentials for some of our jobs and demonstrate how to input them into CircleCI as project level environment variables. For each variable, input the &lt;code&gt;EnVar Name:&lt;/code&gt; value in the &lt;strong&gt;Name&lt;/strong&gt; field and input the credentials in the &lt;strong&gt;Value&lt;/strong&gt; field. Here is a list of credentials that our pipeline will need, and their values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EnVar Name:&lt;/strong&gt; TF_CLOUD_TOKEN - &lt;strong&gt;Value:&lt;/strong&gt; The &lt;a href="https://circleci.com/docs/2.0/env-vars/#encoding-multi-line-environment-variables"&gt;Base64 encoded value&lt;/a&gt; of the local &lt;a href="https://www.terraform.io/docs/commands/cli-config.html"&gt;.terraformrc&lt;/a&gt; file which hosts the &lt;a href="https://app.terraform.io/app/settings/tokens"&gt;Terraform Cloud user token&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EnVar Name:&lt;/strong&gt; DOCKER_LOGIN - &lt;strong&gt;Value:&lt;/strong&gt; &lt;a href="https://hub.docker.com/"&gt;Docker Hub username&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EnVar Name:&lt;/strong&gt; DOCKER_PWD - &lt;strong&gt;Value:&lt;/strong&gt; &lt;a href="https://hub.docker.com/"&gt;Docker Hub password&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EnVar Name:&lt;/strong&gt; GOOGLE_CLOUD_KEYS - &lt;strong&gt;Value:&lt;/strong&gt; The &lt;a href="https://circleci.com/docs/2.0/env-vars/#encoding-multi-line-environment-variables"&gt;Base64 encoded value&lt;/a&gt; of the &lt;a href="https://console.cloud.google.com/apis/credentials/serviceaccountkey"&gt;GCP credential JSON file&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once all of the environment variables above are in place, we can begin building our pipeline in the &lt;code&gt;config.yml file&lt;/code&gt;.&lt;/p&gt;
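
&lt;p&gt;To produce the Base64 values for &lt;code&gt;TF_CLOUD_TOKEN&lt;/code&gt; and &lt;code&gt;GOOGLE_CLOUD_KEYS&lt;/code&gt;, you can encode each file locally and paste the output into the CircleCI &lt;strong&gt;Value&lt;/strong&gt; field. This example uses GNU coreutils syntax (on macOS, use &lt;code&gt;base64 -i&lt;/code&gt; instead), and the key file path is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Encode the Terraform CLI config that holds the Terraform Cloud user token
# (-w 0 disables line wrapping so the value is a single line)
base64 -w 0 ~/.terraformrc

# Encode the GCP service account key file
base64 -w 0 ~/gcp-service-account.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;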

&lt;h3&gt;The CircleCI config.yml&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://circleci.com/docs/2.0/configuration-reference/"&gt;config.yml&lt;/a&gt; file is where you define CI/CD related jobs to be processed and executed. In this section, we will define the jobs and workflows for our pipeline.&lt;/p&gt;

&lt;p&gt;Open the &lt;code&gt;.circleci/config.yml&lt;/code&gt; file in an editor, delete its contents, and paste in this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1
jobs:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;version:&lt;/code&gt; key specifies the platform features to use when running this pipeline. The &lt;code&gt;jobs:&lt;/code&gt; key represents the list of individual &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs"&gt;jobs&lt;/a&gt; that we will define for this pipeline. Next, we will create the jobs that our pipeline will execute.&lt;/p&gt;

&lt;h3&gt;Job - run_tests&lt;/h3&gt;

&lt;p&gt;I encourage you to familiarize yourself with the special keys, capabilities, and features in this &lt;a href="https://circleci.com/docs/2.0/configuration-reference/"&gt;CircleCI reference doc&lt;/a&gt; which should help you gain experience with the platform. Here is a general outline and explanation for each of the keys in the job we are about to discuss.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/jobs-steps/#steps-overview"&gt;&lt;strong&gt;docker:&lt;/strong&gt;&lt;/a&gt; is a key that specifies the runtime our job will execute in

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/jobs-steps/#steps-overview"&gt;&lt;strong&gt;image:&lt;/strong&gt;&lt;/a&gt; is a key that specifies the Docker image to use for this job&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/jobs-steps/#steps-overview"&gt;&lt;strong&gt;steps:&lt;/strong&gt;&lt;/a&gt; is a key that defines the list of executable commands run during a job

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#checkout"&gt;&lt;strong&gt;checkout:&lt;/strong&gt;&lt;/a&gt; is a special step used to check out source code to the configured path&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#run"&gt;&lt;strong&gt;run:&lt;/strong&gt;&lt;/a&gt; is a key used to invoke command-line programs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#run"&gt;&lt;strong&gt;name:&lt;/strong&gt;&lt;/a&gt; is a key that sets the title of the step shown in the CircleCI UI&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#run"&gt;&lt;strong&gt;command:&lt;/strong&gt;&lt;/a&gt; is a key that defines the command to run via the shell&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#store_test_results"&gt;&lt;strong&gt;store_test_results:&lt;/strong&gt;&lt;/a&gt; is a special step used to upload and store test results for a build&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#store_test_results"&gt;&lt;strong&gt;path:&lt;/strong&gt;&lt;/a&gt; is the path (absolute, or relative to your &lt;code&gt;working_directory&lt;/code&gt;) to the directory containing subdirectories of JUnit XML or Cucumber JSON test metadata files&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#store_artifacts"&gt;&lt;strong&gt;store_artifacts:&lt;/strong&gt;&lt;/a&gt; is a step that stores artifacts (for example, logs or binaries) so they are available in the web app or through the API&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#store_artifacts"&gt;&lt;strong&gt;path:&lt;/strong&gt;&lt;/a&gt; is the path to the directory in the primary container used for saving job artifacts&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A valuable benefit of CI/CD is the ability to execute &lt;a href="https://en.wikipedia.org/wiki/Test_automation"&gt;automated testing&lt;/a&gt; on newly written code. It helps identify known and unknown bugs by executing tests every time the code is modified.&lt;/p&gt;

&lt;p&gt;Our next step is to define a new job in the &lt;code&gt;config.yml&lt;/code&gt; file. Paste the following into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            npm install --save
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a breakdown of what we just added.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;docker:&lt;/strong&gt; and &lt;strong&gt;image:&lt;/strong&gt; keys specify the executor and the Docker image we are using in this job&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;command: &lt;code&gt;npm install --save&lt;/code&gt;&lt;/strong&gt; key installs the application dependencies used in the app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;name: Run Unit Tests&lt;/strong&gt; executes the automated tests and saves them to a local directory called &lt;code&gt;test-results/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;store_test_results:&lt;/strong&gt; is a special command that saves and pins the &lt;code&gt;test-results/&lt;/code&gt; directory results to the build in CircleCI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This job serves as a unit testing function. It helps with identifying errors in the code. If any of these tests fail, the entire pipeline build fails, prompting the developers to fix the errors. The goal is for all the tests and jobs to pass. Next, we will create a job that will build a Docker image and push it to the &lt;a href="https://hub.docker.com/"&gt;Docker Hub registry&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;Job - build_docker_image&lt;/h3&gt;

&lt;p&gt;In &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-02-build-docker-images-and-deploy-to-kubernetes-4g86-temp-slug-6226302"&gt;part 2&lt;/a&gt; of this series, we manually created a Docker image and pushed it to the Docker Hub registry. In this job, we will use automation to complete this task instead. Append this code block to the &lt;code&gt;config.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  build_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build Docker image
          command: |
            export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME            
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;build_docker_image&lt;/code&gt; job is pretty straightforward. You have already encountered most of the CircleCI YAML keys it uses, so I will just jump into the &lt;code&gt;name: Build Docker image&lt;/code&gt; command block.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;&lt;/code&gt; line defines a local environment variable that uses the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#using-pipeline-values"&gt;pipeline.number&lt;/a&gt; value to associate the Docker tag value to the pipeline number being executed&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME&lt;/code&gt; defines the variable we’ll use in naming the Docker image&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .&lt;/code&gt; executes the Docker build command using a combination of the project level variables that we set earlier and the local environment variables we specified&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin&lt;/code&gt; authenticates our Docker Hub credentials to access the platform&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;docker push $DOCKER_LOGIN/$IMAGE_NAME&lt;/code&gt; command uploads the new Docker image to the Docker Hub registry&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This should all look and feel familiar because these are the very same commands you ran manually in part 2. In this example, we added the environment variable naming bits. Next we will build a job to execute Terraform code that builds a GKE cluster.&lt;/p&gt;

&lt;h3&gt;Job - gke_create_cluster&lt;/h3&gt;

&lt;p&gt;In this job, we will automate the execution of the Terraform code found in the &lt;code&gt;part03/iac_gke_cluster/&lt;/code&gt; directory. Append this code block to the &lt;code&gt;config.yml&lt;/code&gt; file, then save it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  gke_create_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Create GKE Cluster
          command: |
            echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_gke_cluster/
            terraform init
            terraform plan -var credentials=$HOME/gcloud_keys -out=plan.txt
            terraform apply plan.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One thing to take note of in this code block is the executor Docker image &lt;code&gt;image: ariv3ra/terraform-gcp:latest&lt;/code&gt;. This is an image that I built that has both the &lt;a href="https://cloud.google.com/sdk/docs/quickstarts"&gt;Google SDK&lt;/a&gt; and &lt;a href="https://www.terraform.io/"&gt;Terraform CLI&lt;/a&gt; installed. If we were not using this, we would need to add installation steps to this job to install and configure the tools every time. The &lt;code&gt;environment: CLOUDSDK_CORE_PROJECT: cicd-workshops&lt;/code&gt; keys are also an important element. This sets the environment variable value needed for the &lt;code&gt;gcloud&lt;/code&gt; CLI commands we will be executing later.&lt;/p&gt;

&lt;p&gt;Other elements used in the code block:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc&lt;/code&gt; is a command that decodes the &lt;code&gt;$TF_CLOUD_TOKEN&lt;/code&gt; value, which creates the &lt;code&gt;.terraformrc&lt;/code&gt; file required by Terraform to access the state data on the respective Terraform Cloud workspace&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys&lt;/code&gt; is a command that decodes the &lt;code&gt;$GOOGLE_CLOUD_KEYS&lt;/code&gt; value, which creates the &lt;code&gt;gcloud_keys&lt;/code&gt; file required by the &lt;code&gt;gcloud&lt;/code&gt; CLI to access GCP&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys&lt;/code&gt; is a command that authorizes access to GCP using the &lt;code&gt;gcloud_keys&lt;/code&gt; file we decoded and generated earlier&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rest of the commands are &lt;code&gt;terraform&lt;/code&gt; CLI commands with &lt;code&gt;-var&lt;/code&gt; parameters that specify and override the &lt;code&gt;default&lt;/code&gt; values of variables defined in the respective Terraform &lt;code&gt;variables.tf&lt;/code&gt; file. Once &lt;code&gt;terraform apply plan.txt&lt;/code&gt; executes, this job will create a new GKE cluster.&lt;/p&gt;

&lt;h3&gt;Job - gke_deploy_app&lt;/h3&gt;

&lt;p&gt;In this job, we will automate the execution of the Terraform code found in the &lt;code&gt;part03/iac_kubernetes_app/&lt;/code&gt; directory. Append this code block to the &lt;code&gt;config.yml&lt;/code&gt; file, then save it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  gke_deploy_app:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Deploy App to GKE
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            cd part03/iac_kubernetes_app
            terraform init
            terraform plan -var $DOCKER_IMAGE -out=plan.txt
            terraform apply plan.txt
            export ENDPOINT="$(terraform output endpoint)"
            mkdir -p /tmp/gke/ &amp;amp;&amp;amp; echo 'export ENDPOINT='${ENDPOINT} &amp;gt; /tmp/gke/gke-endpoint
      - persist_to_workspace:
          root: /tmp/gke
          paths:
            - "*"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are the important elements of this job code block and some new elements we haven’t previously discussed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;export CLUSTER_NAME="cicd-workshops"&lt;/code&gt; defines a variable that holds the name of the GKE cluster that we’ll be deploying to.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"&lt;/code&gt; is a command that retrieves the &lt;code&gt;kubeconfig&lt;/code&gt; data from the GKE cluster we created in the previous job.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform plan -var $DOCKER_IMAGE -out=plan.txt&lt;/code&gt; is a command that overrides the &lt;code&gt;default&lt;/code&gt; values of the corresponding variables defined in the Terraform &lt;code&gt;variables.tf&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;export ENDPOINT="$(terraform output endpoint)"&lt;/code&gt; assigns the &lt;code&gt;endpoint&lt;/code&gt; output value generated by the Terraform command to a local environment variable, which is saved to a file and persisted to a &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace"&gt;CircleCI workspace&lt;/a&gt;. It can then be retrieved from an &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#attach_workspace"&gt;attached CircleCI workspace&lt;/a&gt; and used in follow-up jobs.&lt;/li&gt;
&lt;/ul&gt;
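&lt;p&gt;As a sketch of how a later job could consume that workspace (the &lt;code&gt;smoke_test&lt;/code&gt; job name and its command are hypothetical, not part of the pipeline built in this post), it would attach the workspace and source the endpoint file:&lt;/p&gt;

```yaml
  # Hypothetical follow-up job; illustrates attach_workspace usage only.
  smoke_test:
    docker:
      - image: circleci/node:12
    steps:
      - attach_workspace:
          at: /tmp/gke
      - run:
          name: Use the persisted endpoint
          command: |
            source /tmp/gke/gke-endpoint
            echo "Deployed app endpoint: $ENDPOINT"
```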

&lt;h3&gt;
  
  
  Job - gke_destroy_cluster
&lt;/h3&gt;

&lt;p&gt;This job is the last one we will build for this pipeline. It destroys all of the resources and infrastructure built by the previous CI/CD jobs. Ephemeral resources like these are commonly used for smoke testing, integration testing, performance testing, and other types of validation. A job that executes destroy commands is great for tearing down these constructs when they are no longer required.&lt;/p&gt;

&lt;p&gt;In this job we will automate the execution of the Terraform code found in the &lt;code&gt;part03/iac_kubernetes_app/&lt;/code&gt; directory. Append this code block to the &lt;code&gt;config.yml&lt;/code&gt; file, then save it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  gke_destroy_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Destroy GKE Cluster
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"            
            echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_kubernetes_app
            terraform init
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"            
            terraform destroy -var $DOCKER_IMAGE --auto-approve
            cd ../iac_gke_cluster/
            terraform init
            terraform destroy -var credentials=$HOME/gcloud_keys --auto-approve

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important elements of this job code block are the two &lt;code&gt;terraform destroy&lt;/code&gt; commands. They destroy all of the resources created with the Terraform code in the &lt;code&gt;part03/iac_kubernetes_app/&lt;/code&gt; and &lt;code&gt;part03/iac_gke_cluster&lt;/code&gt; directories, respectively.&lt;/p&gt;

&lt;p&gt;Now that we have defined all of the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs"&gt;jobs&lt;/a&gt; in our pipeline we are ready to create &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#workflows"&gt;CircleCI workflows&lt;/a&gt; that will orchestrate how the jobs will be executed and processed within the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating CircleCI workflows
&lt;/h2&gt;

&lt;p&gt;Our next step is to create the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#workflows"&gt;workflows&lt;/a&gt; that define how jobs will be executed and processed. Think of a workflow as an ordered list of jobs. You can specify when and how to execute these jobs using the workflow. Append this workflow code block to the &lt;code&gt;config.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflows:
  build_test:
    jobs:
      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block represents the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#workflows"&gt;workflows&lt;/a&gt; definition of our pipeline. Here is what is going on in this block:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;workflows:&lt;/code&gt; key specifies a workflow element&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;build_test:&lt;/code&gt; represents the name/identifier of this workflow&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;jobs:&lt;/code&gt; key represents the list of jobs defined in the &lt;code&gt;config.yml&lt;/code&gt; file to execute&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this list, you specify the jobs you want to execute in this pipeline. Here is our list of jobs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;run_tests&lt;/code&gt;, &lt;code&gt;build_docker_image&lt;/code&gt;, and &lt;code&gt;gke_create_cluster&lt;/code&gt; workflow &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs-1"&gt;jobs&lt;/a&gt; run &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs-1"&gt;concurrently&lt;/a&gt;, unlike the &lt;code&gt;gke_deploy_app:&lt;/code&gt; item, which has a &lt;code&gt;requires:&lt;/code&gt; key. Jobs run in parallel by default, so you must explicitly declare any dependencies by job name using a &lt;code&gt;requires:&lt;/code&gt; key with a list of jobs that must complete before the specified job starts. Think of &lt;code&gt;requires:&lt;/code&gt; keys as dependencies on the success of other jobs. These keys let you segment and control the execution of your pipeline.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;approve-destroy:&lt;/code&gt; item specifies a job with a &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#type"&gt;manual approval step&lt;/a&gt;. It requires human intervention: someone must approve the execution of the next job in the workflow jobs list. The next job, &lt;code&gt;gke_destroy_cluster:&lt;/code&gt;, depends on the &lt;code&gt;approve-destroy:&lt;/code&gt; job completing before it executes. It destroys all the resources created by previously executed jobs in the pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  The complete config.yml file
&lt;/h2&gt;

&lt;p&gt;The complete &lt;code&gt;config.yml&lt;/code&gt; file for this post can be found in the &lt;a href="https://github.com/datapunkz/learn_iac"&gt;project code repo&lt;/a&gt; in the &lt;code&gt;.circleci/&lt;/code&gt; directory. It is included here for you to review:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            npm install --save
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
  build_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build Docker image
          command: |
            export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME
  gke_create_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Create GKE Cluster
          command: |
            echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_gke_cluster/
            terraform init
            terraform plan -var credentials=$HOME/gcloud_keys -out=plan.txt
            terraform apply plan.txt
  gke_deploy_app:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Deploy App to GKE
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            cd part03/iac_kubernetes_app
            terraform init
            terraform plan -var $DOCKER_IMAGE -out=plan.txt
            terraform apply plan.txt
            export ENDPOINT="$(terraform output endpoint)"
            mkdir -p /tmp/gke/
            echo 'export ENDPOINT='${ENDPOINT} &amp;gt; /tmp/gke/gke-endpoint
      - persist_to_workspace:
          root: /tmp/gke
          paths:
            - "*"
  gke_destroy_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Destroy GKE Cluster
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.&amp;lt;&amp;lt; pipeline.number &amp;gt;&amp;gt;
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"            
            echo $TF_CLOUD_TOKEN | base64 -d &amp;gt; $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d &amp;gt; $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_kubernetes_app
            terraform init
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"            
            terraform destroy -var $DOCKER_IMAGE --auto-approve
            cd ../iac_gke_cluster/
            terraform init
            terraform destroy -var credentials=$HOME/gcloud_keys --auto-approve
workflows:
  build_test:
    jobs:
      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have just completed part 3 of this series and leveled up your experience by building a new &lt;code&gt;config.yml&lt;/code&gt; file that executes IaC resources using &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;. This post explained and demonstrated some critical elements in the &lt;code&gt;config.yml&lt;/code&gt; file and the internal concepts related to the CircleCI platform.&lt;/p&gt;

&lt;p&gt;In this series we covered many concepts and technologies such as Docker, GCP, Kubernetes, Terraform and CircleCI and included some hands-on experience with them. We also covered how to wire up your projects to use CircleCI and leverage the Terraform code to test your application in target deployment environments. This series is intended to increase your knowledge of important DevOps concepts, technologies, and how they all work together.&lt;/p&gt;

&lt;p&gt;I encourage you to experiment on your own and with your team; changing code, adding new Terraform providers, and reconfiguring CI/CD jobs and pipelines. Challenge each other to accomplish release goals using a combination of what you learned and other ideas the team comes up with. By experimenting, you will learn more than any blog post could teach you.&lt;/p&gt;

&lt;p&gt;Thank you for following along with this series. I hope you found it useful. Please feel free to reach out with feedback on Twitter &lt;a href="https://twitter.com/punkdata"&gt;@punkdata&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The following resources will help you expand your knowledge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/cli-index.html"&gt;Terraform Getting Started&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/terraform/cloud-getting-started/signup#create-your-organization"&gt;Terraform Cloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/getting-started-on-gcp-with-terraform/index.md#getting-project-credentials"&gt;Google Cloud Platform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>circleci</category>
      <category>iac</category>
    </item>
    <item>
      <title>Infrastructure as Code, part 2: build Docker images and deploy to Kubernetes with Terraform</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Tue, 09 Nov 2021 20:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/infrastructure-as-code-part-2-build-docker-images-and-deploy-to-kubernetes-with-terraform-455i</link>
      <guid>https://dev.to/circleci/infrastructure-as-code-part-2-build-docker-images-and-deploy-to-kubernetes-with-terraform-455i</guid>
      <description>&lt;p&gt;&lt;em&gt;This series shows you how to get started with infrastructure as code (IaC). The goal is to help developers build a strong understanding of IaC through tutorials and code examples.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this post, I will demonstrate how to create a &lt;a href="https://docs.docker.com/get-started/overview/#docker-objects"&gt;Docker image&lt;/a&gt; for an application, then push that image to &lt;a href="https://hub.docker.com/"&gt;Docker Hub.&lt;/a&gt; I will also discuss how to create and deploy the Docker image to a &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine (GKE) cluster&lt;/a&gt; using &lt;a href="https://www.terraform.io/"&gt;HashiCorp’s Terraform.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a quick list of things we will accomplish in this post:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build a new &lt;a href="https://docs.docker.com/get-started/overview/#docker-objects"&gt;Docker image&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Push the new Docker image to the &lt;a href="https://hub.docker.com/"&gt;Docker Hub registry&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a new &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE cluster&lt;/a&gt; using &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Create a new &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Terraform Kubernetes Deployment&lt;/a&gt; using the &lt;a href="https://www.terraform.io/docs/providers/kubernetes/"&gt;Terraform Kubernetes provider&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Destroy all the resources created using &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Before you can go through this part of the tutorial, make sure you have completed all the actions in the &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;prerequisites section of part 1&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Our first task is learning how to build a &lt;a href="https://docs.docker.com/get-started/overview/#docker-objects"&gt;Docker image&lt;/a&gt; based on the example Node.js application included in &lt;a href="https://github.com/datapunkz/learn_iac"&gt;this code repo.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Docker image
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;previous post&lt;/a&gt;, we used &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; to create a new GKE cluster, but that cluster was unusable because no application or service was deployed. Because Kubernetes (K8s) is a container orchestrator, apps and services must be packaged into &lt;a href="https://docs.docker.com/get-started/overview/#docker-objects"&gt;Docker images&lt;/a&gt;, which can then spawn &lt;a href="https://docs.docker.com/get-started/overview/#docker-objects"&gt;Docker containers&lt;/a&gt; that execute applications or services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/get-started/overview/#docker-objects"&gt;Docker images&lt;/a&gt; are created using the &lt;a href="https://docs.docker.com/engine/reference/commandline/build/"&gt;&lt;code&gt;docker build&lt;/code&gt; command&lt;/a&gt; and you will need a &lt;a href="https://docs.docker.com/engine/reference/builder/"&gt;Dockerfile&lt;/a&gt; to specify how to build your Docker images. I will discuss Dockerfiles, but first I want to address &lt;a href="https://docs.docker.com/engine/reference/builder/#dockerignore-file"&gt;.dockerignore files&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the .dockerignore file?
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;.dockerignore&lt;/code&gt; file excludes the files and directories that match the patterns declared in it. Using this file helps to avoid unnecessarily sending large or sensitive files and directories to the daemon, and potentially adding them to public images. In this project, the &lt;code&gt;.dockerignore&lt;/code&gt; file excludes unnecessary files related to Terraform and Node.js local dependencies.&lt;/p&gt;
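&lt;p&gt;For illustration, a &lt;code&gt;.dockerignore&lt;/code&gt; file for a project like this might look something like the following (the entries are representative of Terraform and Node.js exclusions, not copied from the repo):&lt;/p&gt;

```
# Terraform artifacts
.terraform/
*.tfstate
*.tfstate.backup
# Node.js local dependencies and logs
node_modules
npm-debug.log
# Version control
.git
```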

&lt;h3&gt;
  
  
  Understanding the Dockerfile
&lt;/h3&gt;

&lt;p&gt;The Dockerfile is critical for building Docker images. It specifies how to build and configure the image, in addition to what files to import into it. Dockerfiles are flexible, so you can accomplish the same objective in many different ways. It is important that you have a solid understanding of Dockerfile capabilities so you can build functional images. Here is a breakdown of the Dockerfile contained in this project’s code repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:12

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./

RUN npm install --only=production

# Bundle app source
COPY . .

EXPOSE 5000
CMD ["npm", "start"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;FROM node:12&lt;/code&gt; line defines an image to inherit from. When building images, Docker inherits from a parent image. In this case it is the &lt;code&gt;node:12&lt;/code&gt; image, which is pulled from Docker Hub if it does not exist locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./

RUN npm install --only=production

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block defines the &lt;code&gt;WORKDIR&lt;/code&gt; parameter, which specifies the working directory in the Docker image. The &lt;code&gt;COPY package*.json ./&lt;/code&gt; line copies any package-related files into the Docker image. The &lt;code&gt;RUN npm install --only=production&lt;/code&gt; line installs the production dependencies listed in the &lt;code&gt;package.json&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY . .

EXPOSE 5000
CMD ["npm", "start"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block copies all the files into the Docker image, except for the files and directories listed in the &lt;code&gt;.dockerignore&lt;/code&gt; file. The &lt;code&gt;EXPOSE 5000&lt;/code&gt; line specifies the port to expose for this Docker image. The &lt;code&gt;CMD ["npm", "start"]&lt;/code&gt; line defines how to start this image. In this case, it is executing the &lt;code&gt;start&lt;/code&gt; section specified in the &lt;code&gt;package.json&lt;/code&gt; file for this project. This &lt;code&gt;CMD&lt;/code&gt; parameter is the default execution command. Now that you understand the Dockerfile, you can use it to build an image locally.&lt;/p&gt;
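&lt;p&gt;For reference, the &lt;code&gt;start&lt;/code&gt; script that &lt;code&gt;CMD ["npm", "start"]&lt;/code&gt; invokes lives in the &lt;code&gt;scripts&lt;/code&gt; section of &lt;code&gt;package.json&lt;/code&gt;. A minimal sketch (the entry point file name is an assumption, not taken from the repo):&lt;/p&gt;

```json
{
  "scripts": {
    "start": "node server.js"
  }
}
```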

&lt;h3&gt;
  
  
  Using the Docker build command
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://docs.docker.com/engine/reference/commandline/build/"&gt;&lt;code&gt;docker build&lt;/code&gt;&lt;/a&gt; command builds a new image based on the directives defined in the Dockerfile. There are some naming conventions to keep in mind when you are building Docker images. Naming conventions are especially important if you plan on sharing the images.&lt;/p&gt;

&lt;p&gt;Before we start building an image, I will take a moment to describe how to name them. A Docker image name is made up of slash-separated name components, optionally followed by a colon and a &lt;a href="https://docs.docker.com/engine/reference/commandline/tag/"&gt;tag&lt;/a&gt;. Because we will be pushing the image to &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;, we need to prefix the image name with our Docker Hub username. In my case, that is &lt;code&gt;ariv3ra/&lt;/code&gt;. I usually follow that with the name of the project, or a useful description of the image. The full name of this Docker image will be &lt;code&gt;ariv3ra/learniac:0.0.1&lt;/code&gt;. The &lt;code&gt;:0.0.1&lt;/code&gt; is a version tag for the application, but you could also use it to describe other details about the image.&lt;/p&gt;

&lt;p&gt;Once you have a good, descriptive name, you can build an image. The following command must be executed from within the root of the project repo (be sure to replace &lt;code&gt;ariv3ra&lt;/code&gt; with &lt;em&gt;your&lt;/em&gt; Docker Hub name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t ariv3ra/learniac -t ariv3ra/learniac:0.0.1 .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, run this command to see a list of Docker images on your machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REPOSITORY TAG IMAGE ID CREATED SIZE
ariv3ra/learniac 0.0.1 ba7a22c461ee 24 seconds ago 994MB
ariv3ra/learniac latest ba7a22c461ee 24 seconds ago 994MB

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Docker push command
&lt;/h3&gt;

&lt;p&gt;Now we are ready to &lt;a href="https://docs.docker.com/engine/reference/commandline/push/"&gt;push this image&lt;/a&gt; to Docker Hub and make it available publicly. Docker Hub requires authorization to access the service, so we need to use the &lt;a href="https://docs.docker.com/engine/reference/commandline/login/"&gt;&lt;code&gt;login&lt;/code&gt; command to authenticate&lt;/a&gt;. Run this command to log in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker login

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter your Docker Hub credentials in the prompts to authorize your account. You will need to log in only once per machine. Now you can push the image.&lt;/p&gt;

&lt;p&gt;Using the image name listed in your &lt;code&gt;docker images&lt;/code&gt; command, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push ariv3ra/learniac

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The push refers to repository [docker.io/ariv3ra/learniac]
2109cf96cc5e: Pushed 
94ce89a4d236: Pushed 
e16b71ca42ab: Pushed 
8271ac5bc1ac: Pushed 
a0dec5cb284e: Mounted from library/node 
03d91b28d371: Mounted from library/node 
4d8e964e233a: Mounted from library/node

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You now have a Docker image available in Docker Hub and ready to be deployed to a GKE cluster. All the pieces are in place to deploy your application to a new Kubernetes cluster. The next step is to build the Kubernetes Deployment using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Terraform to deploy Kubernetes
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;part 1&lt;/a&gt; of this series, we learned how to create a new Google Kubernetes Engine (GKE) cluster using Terraform. As I mentioned earlier, that cluster was not serving any applications or services because we did not deploy any to it. In this section I will describe what it takes to deploy a &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Kubernetes Deployment using Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Terraform has a &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Kubernetes Deployment&lt;/a&gt; resource that lets you define and execute a Kubernetes deployment to your GKE cluster. In &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;part 1&lt;/a&gt; we created a new GKE cluster using the Terraform code in the &lt;code&gt;part01/iac_gke_cluster/&lt;/code&gt; directory. In this post, we will use the &lt;code&gt;part02/iac_gke_cluster/&lt;/code&gt; and &lt;code&gt;part02/iac_kubernetes_app/&lt;/code&gt; directories. The &lt;code&gt;iac_gke_cluster/&lt;/code&gt; directory contains the same code we used in part 1; we will use it again here in conjunction with the &lt;code&gt;iac_kubernetes_app/&lt;/code&gt; directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform Kubernetes provider
&lt;/h3&gt;

&lt;p&gt;We previously used the Terraform &lt;a href="https://www.terraform.io/docs/providers/google/index.html"&gt;Google Cloud Platform provider&lt;/a&gt; to create a new &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE cluster&lt;/a&gt;. That provider is specific to Google Cloud Platform, but a GKE cluster is still Kubernetes under the hood. Because GKE is essentially a Kubernetes cluster, we need to use the &lt;a href="https://www.terraform.io/docs/providers/kubernetes/"&gt;Terraform Kubernetes provider&lt;/a&gt; and &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Kubernetes Deployment resource&lt;/a&gt; to configure and deploy our application to the GKE cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform code files
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;part02/iac_kubernetes_app/&lt;/code&gt; directory contains these files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;providers.tf&lt;/li&gt;
&lt;li&gt;variables.tf&lt;/li&gt;
&lt;li&gt;main.tf&lt;/li&gt;
&lt;li&gt;deployments.tf&lt;/li&gt;
&lt;li&gt;services.tf&lt;/li&gt;
&lt;li&gt;output.tf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These files contain all the code that we are using to define, create, and configure our application deployment to a Kubernetes cluster. Next, I will break down these files to give you a better understanding of what they do.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breakdown: providers.tf
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;providers.tf&lt;/code&gt; file is where we define the Terraform provider we will be using: the &lt;a href="https://www.terraform.io/docs/providers/kubernetes/"&gt;Terraform Kubernetes provider&lt;/a&gt;. Here is &lt;code&gt;providers.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "kubernetes" {

}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block defines the provider that will be used in this Terraform project. The &lt;code&gt;{ }&lt;/code&gt; block is empty because we will be handling the &lt;a href="https://www.terraform.io/docs/providers/google/guides/using_gke_with_terraform.html#interacting-with-kubernetes"&gt;authentication requirements with a different process&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breakdown: variables.tf
&lt;/h3&gt;

&lt;p&gt;This file should look familiar; it is similar to the &lt;code&gt;variables.tf&lt;/code&gt; file from part 1. This particular file specifies only the input variables that this Terraform Kubernetes project uses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "cluster" {
  default = "cicd-workshops"
}
variable "app" {
  type = string
  description = "Name of application"
  default = "cicd-101"
}
variable "zone" {
  default = "us-east1-d"
}
variable "docker-image" {
  type = string
  description = "name of the docker image to deploy"
  default = "ariv3ra/learniac:latest"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variables defined in this file are used throughout the Terraform project, in code blocks in the project files. All of these variables have &lt;code&gt;default&lt;/code&gt; values that can be overridden from the CLI when executing the code. These variables add much-needed flexibility to the Terraform code and allow valuable code to be reused. One thing to note here is that the &lt;code&gt;variable "docker-image"&lt;/code&gt; default parameter is set to my Docker image name. Replace that value with the name of &lt;em&gt;your&lt;/em&gt; Docker image.&lt;/p&gt;
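&lt;p&gt;As a quick illustration of overriding these defaults, you could pass new values with the &lt;code&gt;-var&lt;/code&gt; flag at execution time (the image and app names below are placeholders for your own values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -var="docker-image=your-dockerhub-user/your-image:latest" -var="app=my-app"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;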

&lt;h3&gt;
  
  
  Breakdown: main.tf
&lt;/h3&gt;

&lt;p&gt;The elements of the &lt;code&gt;main.tf&lt;/code&gt; file start with the &lt;code&gt;terraform&lt;/code&gt; block, which specifies the type of &lt;a href="https://www.terraform.io/docs/backends/index.html"&gt;Terraform backend&lt;/a&gt;. A “backend” in Terraform determines how state is loaded and how an operation such as &lt;code&gt;apply&lt;/code&gt; is executed. This abstraction enables non-local state storage and remote execution, among other things. In this code block, we are using the &lt;code&gt;remote&lt;/code&gt; backend. It uses Terraform Cloud and is connected to the &lt;code&gt;iac_kubernetes_app&lt;/code&gt; workspace you created in the &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-01-create-a-kubernetes-cluster-5291-temp-slug-4829385"&gt;Prerequisites section of the part 1 post&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt;0.12"
  backend "remote" {
    organization = "datapunks"
    workspaces {
      name = "iac_kubernetes_app"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Breakdown: deployments.tf
&lt;/h3&gt;

&lt;p&gt;Next up is a description of the syntax in the &lt;code&gt;deployments.tf&lt;/code&gt; file. This file uses the &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Terraform Kubernetes Deployment resource&lt;/a&gt; to define, configure, and create all the Kubernetes resources required to release our application to the GKE cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_deployment" "app" {
  metadata {
    name = var.app
    labels = {
      app = var.app
    }
  }
  spec {
    replicas = 3

    selector {
      match_labels = {
        app = var.app
      }
    }
    template {
      metadata {
        labels = {
          app = var.app
        }
      }
      spec {
        container {
          image = var.docker-image
          name = var.app
          port {
            name = "port-5000"
            container_port = 5000
          }
        }
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time to review the code elements to gain a better understanding of what is going on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_deployment" "app" {
  metadata {
    name = var.app
    labels = {
      app = var.app
    }
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block specifies the use of the Terraform Kubernetes Deployment resource, which defines our deployment object for Kubernetes. The &lt;code&gt;metadata&lt;/code&gt; block assigns the name and labels that Kubernetes will use to identify the deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  spec {
    replicas = 3

    selector {
      match_labels = {
        app = var.app
      }
    }

    template {
      metadata {
        labels = {
          app = var.app
        }
      }

      spec {
        container {
          image = var.docker-image
          name = var.app
          port {
            name = "port-5000"
            container_port = 5000
          }
        }
      }
    }
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the resource &lt;code&gt;spec{...}&lt;/code&gt; block, we specify that we want three &lt;a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/"&gt;Kubernetes pods&lt;/a&gt; running our application in the cluster. The &lt;code&gt;selector{...}&lt;/code&gt; block represents &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors"&gt;label selectors&lt;/a&gt;. These are a core grouping primitive in Kubernetes that lets users select a set of objects.&lt;/p&gt;

&lt;p&gt;The resource &lt;code&gt;template{...}&lt;/code&gt; block contains a &lt;code&gt;spec{...}&lt;/code&gt; block, which in turn contains a &lt;code&gt;container{...}&lt;/code&gt; block. Its parameters define and configure the container used in the deployment: the Docker &lt;code&gt;image&lt;/code&gt; to run in the pods, the container’s &lt;code&gt;name&lt;/code&gt; as it should appear in Kubernetes, and the &lt;code&gt;port&lt;/code&gt; to expose on the container for ingress access to the running application. The values come from the &lt;code&gt;variables.tf&lt;/code&gt; file, found in the same folder. The &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Terraform Kubernetes Deployment resource&lt;/a&gt; is capable of very robust configurations. I encourage you and your team to experiment with some of the other properties to gain broader familiarity with the tooling.&lt;/p&gt;
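&lt;p&gt;For example, one property worth experimenting with is per-container resource limits. Here is a hedged sketch of the &lt;code&gt;container&lt;/code&gt; block extended with a &lt;code&gt;resources&lt;/code&gt; section; the CPU and memory values are illustrative, not values used in this tutorial:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;container {
  image = var.docker-image
  name  = var.app
  port {
    name           = "port-5000"
    container_port = 5000
  }
  # Illustrative limits; tune these for your workload
  resources {
    limits {
      cpu    = "500m"
      memory = "512Mi"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;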

&lt;h3&gt;
  
  
  Breakdown: services.tf
&lt;/h3&gt;

&lt;p&gt;We have created a &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/deployment.html"&gt;Terraform Kubernetes Deployment resource&lt;/a&gt; file and defined our Kubernetes deployment for this application. That leaves one detail to complete the deployment of our app. The application we are deploying is a basic website, and like all websites, it needs to be accessible to be useful. At this point, our &lt;code&gt;deployments.tf&lt;/code&gt; file specifies the directives for deploying a Kubernetes pod with our Docker image and the number of pods required. We are still missing a critical element: a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/"&gt;Kubernetes service&lt;/a&gt;, an abstract way to expose an application running on a set of pods as a network service. With Kubernetes, you do not need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives pods their own IP addresses and a single DNS name for a set of pods, and can load-balance across them.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;services.tf&lt;/code&gt; file is where we define a &lt;a href="https://www.terraform.io/docs/providers/kubernetes/r/service.html"&gt;Terraform Kubernetes service&lt;/a&gt;. It will wire up the Kubernetes elements to provide ingress access to our application running on pods in the cluster. Here is the &lt;code&gt;services.tf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_service" "app" {
  metadata {
    name = var.app
  }
  spec {
    selector = {
      app = kubernetes_deployment.app.metadata.0.labels.app
    }
    port {
      port = 80
      target_port = 5000
    }
    type = "LoadBalancer"
  }
} 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, it might be helpful to describe the &lt;code&gt;spec{...}&lt;/code&gt; block and the elements within it. The &lt;code&gt;selector = { app... }&lt;/code&gt; entry references the &lt;code&gt;app&lt;/code&gt; value in the &lt;code&gt;labels&lt;/code&gt; property of the metadata block defined in the deployments resource. This is an example of reusing values that have already been assigned in related resources, which keeps important values consistent and establishes a form of referential integrity between them.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;port{...}&lt;/code&gt; block has two properties: &lt;code&gt;port&lt;/code&gt; and &lt;code&gt;target_port&lt;/code&gt;. These parameters define the external port that the service will listen on for requests to the application. In this example, it is port 80. The &lt;code&gt;target_port&lt;/code&gt; is the internal port our pods are listening on, which is port 5000. This service will route all traffic from port 80 to port 5000.&lt;/p&gt;

&lt;p&gt;The last element to review here is the &lt;code&gt;type&lt;/code&gt; parameter, which specifies the type of service we are creating. Kubernetes offers several &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types"&gt;service types&lt;/a&gt;. In this example, we’re using the &lt;code&gt;LoadBalancer&lt;/code&gt; type, which exposes the service externally using a cloud provider’s load balancer. The NodePort and ClusterIP services to which the external load balancer routes are created automatically. In this case, GCP will create and configure a load balancer that will control and route traffic to our GKE cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breakdown: output.tf
&lt;/h3&gt;

&lt;p&gt;Terraform uses &lt;a href="https://www.terraform.io/docs/configuration/outputs.html"&gt;output values&lt;/a&gt; to return values from a Terraform module after running &lt;code&gt;terraform apply&lt;/code&gt;. Outputs expose a subset of a module’s resource attributes to a parent module, or print selected values in the CLI output. The &lt;code&gt;output&lt;/code&gt; blocks in our &lt;code&gt;output.tf&lt;/code&gt; file read out values like the cluster name and the ingress IP address of our newly created LoadBalancer service. This address is where we can access our application hosted in pods on the GKE cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  output "gke_cluster" {
    value = var.cluster
  }

  output "endpoint" {
    value = kubernetes_service.app.load_balancer_ingress.0.ip
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Initializing the Terraform part02/iac_gke_cluster
&lt;/h2&gt;

&lt;p&gt;Now that you have a better understanding of our Terraform project and syntax, you can start provisioning the GKE cluster using Terraform. Change directory into the &lt;code&gt;part02/iac_gke_cluster&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd part02/iac_gke_cluster

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While in &lt;code&gt;part02/iac_gke_cluster&lt;/code&gt;, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "google" (hashicorp/google) 3.31.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.google: version = "~&amp;gt; 3.31"

Terraform has been successfully initialized!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is great! Now we can create the GKE cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform apply part02/iac_gke_cluster
&lt;/h3&gt;

&lt;p&gt;Terraform has a command that lets you dry run and validate your Terraform code without actually executing anything: &lt;code&gt;terraform plan&lt;/code&gt;. It also graphs all the actions and changes that Terraform will execute against your existing infrastructure. In the terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
-----------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_container_cluster.primary will be created
  + resource "google_container_cluster" "primary" {
      + additional_zones = (known after apply)
      + cluster_ipv4_cidr = (known after apply)
      + default_max_pods_per_node = (known after apply)
      + enable_binary_authorization = false
  ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will create new GCP resources for you based on the code in the &lt;code&gt;main.tf&lt;/code&gt; file. Now you are ready to create the new infrastructure. Run this command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will prompt you to confirm your command. Type &lt;code&gt;yes&lt;/code&gt; and press Enter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will build your new Google Kubernetes Engine cluster on GCP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; : &lt;em&gt;It will take 3-5 minutes for the cluster to complete. It is not an instant process because the backend systems are provisioning and bringing things online.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After my cluster was completed, this was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

cluster = cicd-workshops
cluster_ca_certificate = &amp;lt;sensitive&amp;gt;
host = &amp;lt;sensitive&amp;gt;
password = &amp;lt;sensitive&amp;gt;
username = &amp;lt;sensitive&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new GKE cluster has been created and the &lt;code&gt;Outputs&lt;/code&gt; results are displayed. Notice that output values marked sensitive are masked in the results with &lt;code&gt;&amp;lt;sensitive&amp;gt;&lt;/code&gt; tags. This keeps sensitive data protected while still making it available when needed.&lt;/p&gt;
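&lt;p&gt;Marking an output as sensitive happens in the output block itself. A minimal sketch, assuming the &lt;code&gt;google_container_cluster.primary&lt;/code&gt; resource from part 1 (the exact attribute shown is an assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "host" {
  # Marked sensitive so the value is masked in CLI output
  value     = google_container_cluster.primary.endpoint
  sensitive = true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;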

&lt;p&gt;Next, we will use the code in the &lt;code&gt;part02/iac_kubernetes_app/&lt;/code&gt; directory to create a Kubernetes deployment and accompanying &lt;code&gt;LoadBalancer&lt;/code&gt; service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform Initialize part02/iac_kubernetes_app/
&lt;/h3&gt;

&lt;p&gt;We can now deploy our application to this GKE cluster using the code in the &lt;code&gt;part02/iac_kubernetes_app/&lt;/code&gt; directory. Change directory into the directory with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd part02/iac_kubernetes_app/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While in &lt;code&gt;part02/iac_kubernetes_app/&lt;/code&gt;, run this command to initialize the Terraform project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.11.3...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.kubernetes: version = "~&amp;gt; 1.11"

Terraform has been successfully initialized!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  GKE cluster credentials
&lt;/h3&gt;

&lt;p&gt;After creating a &lt;code&gt;google_container_cluster&lt;/code&gt; with Terraform, authentication to the cluster is required. You can use the &lt;a href="https://cloud.google.com/sdk/docs/quickstarts"&gt;Google Cloud CLI&lt;/a&gt; to configure cluster access, and generate a &lt;a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/"&gt;kubeconfig&lt;/a&gt; file. Execute this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters get-credentials cicd-workshops --zone="us-east1-d"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using this command, &lt;code&gt;gcloud&lt;/code&gt; generates a kubeconfig entry that uses &lt;code&gt;gcloud&lt;/code&gt; as the authentication mechanism. The command uses &lt;code&gt;cicd-workshops&lt;/code&gt; as the cluster name, which is also specified in &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;
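&lt;p&gt;If you have &lt;code&gt;kubectl&lt;/code&gt; installed locally, you can optionally verify that the new kubeconfig entry works before continuing (this check is an extra step, not part of the original tutorial flow):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Should list the nodes of the cicd-workshops cluster
kubectl get nodes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;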

&lt;h3&gt;
  
  
  Terraform apply part02/iac_kubernetes_app/
&lt;/h3&gt;

&lt;p&gt;Finally, we are ready to deploy our application to the GKE cluster using Terraform. As before, start with a dry run. Execute this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
-----------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # kubernetes_deployment.app will be created
  + resource "kubernetes_deployment" "app" {
      + id = (known after apply)
      + metadata {
          + generation = (known after apply)
          + labels = {
              + "app" = "cicd-101"
            }
          + name = "cicd-101"
          + namespace = "default"
          + resource_version = (known after apply)
          + self_link = (known after apply)
          + uid = (known after apply)
        }
  ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will create new GCP resources for you based on the code in the &lt;code&gt;deployments.tf&lt;/code&gt; and &lt;code&gt;services.tf&lt;/code&gt; files. Now you can create the new infrastructure and deploy the application. Run this command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will prompt you to confirm your command. Type &lt;code&gt;yes&lt;/code&gt; and press Enter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will build your new Kubernetes application deployment and related LoadBalancer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; : &lt;em&gt;It will take 3-5 minutes for the deployment to complete. It is not an instant process because the backend systems are provisioning and bringing things online.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After the deployment completed, this was my output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

endpoint = 104.196.222.238
gke_cluster = cicd-workshops

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application has now been deployed. The &lt;code&gt;endpoint&lt;/code&gt; output value is the IP address of the public ingress of the cluster’s &lt;code&gt;LoadBalancer&lt;/code&gt; service, and it is the address where you can access the application. Open a web browser and navigate to the &lt;code&gt;endpoint&lt;/code&gt; value to access the application. You will see a web page with the text “Welcome to CI/CD 101 using CircleCI!”.&lt;/p&gt;
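&lt;p&gt;If you prefer the terminal, you can also fetch the page with &lt;code&gt;curl&lt;/code&gt;, reading the address from the Terraform output (a sketch, assuming Terraform 0.12 where &lt;code&gt;terraform output&lt;/code&gt; prints the bare value):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Request the app through the LoadBalancer ingress IP
curl "http://$(terraform output endpoint)"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;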

&lt;h3&gt;
  
  
  Using Terraform destroy
&lt;/h3&gt;

&lt;p&gt;You have proof that your Kubernetes deployment works and that deploying the application to a GKE cluster has been successfully tested. You can leave it up and running, but be aware that there is a cost associated with any assets running on the Google Cloud Platform and you will be liable for those costs. Google gives a generous $300 credit for its free trial sign-up, but you could easily eat through that if you leave assets running.&lt;/p&gt;

&lt;p&gt;Running the &lt;code&gt;terraform destroy&lt;/code&gt; command will terminate any running assets that you created in this tutorial.&lt;/p&gt;

&lt;p&gt;Run this command to destroy the GKE cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember that the above command will only destroy the &lt;code&gt;part02/iac_kubernetes_app/&lt;/code&gt; deployment. You need to run the following to destroy &lt;em&gt;all&lt;/em&gt; the resources created in this tutorial.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ../iac_gke_cluster/

terraform destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will destroy the GKE cluster we created earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have completed part 2 of this series, and leveled up your experience by building and publishing a new Docker image, along with provisioning and deploying an application to a Kubernetes cluster using infrastructure as code and Terraform.&lt;/p&gt;

&lt;p&gt;Continue to &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-3-automate-kubernetes-deployments-with-continuous-integration-and-deployment-o6l-temp-slug-3703784"&gt;part 3&lt;/a&gt; of the tutorial where you will learn how to automate all of this awesome knowledge into CI/CD pipelines using CircleCI.&lt;/p&gt;

&lt;p&gt;The following resources will help you expand your knowledge from here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/cli-index.html"&gt;Terraform Getting Started&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/terraform/cloud-getting-started/signup#create-your-organization"&gt;Terraform Cloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/getting-started-on-gcp-with-terraform/index.md#getting-project-credentials"&gt;Google Cloud Platform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>circleci</category>
      <category>iac</category>
    </item>
    <item>
      <title>Infrastructure as Code, part 1: create a Kubernetes cluster with Terraform</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Mon, 08 Nov 2021 16:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/infrastructure-as-code-part-1-create-a-kubernetes-cluster-with-terraform-4c73</link>
      <guid>https://dev.to/circleci/infrastructure-as-code-part-1-create-a-kubernetes-cluster-with-terraform-4c73</guid>
      <description>&lt;p&gt;&lt;em&gt;This series shows you how to get started with infrastructure as code (IaC). The goal is to help developers build a strong understanding of IaC through tutorials and code examples.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Infrastructure as Code (IaC) is an integral part of modern &lt;a href="https://circleci.com/continuous-integration/"&gt;continuous integration&lt;/a&gt; pipelines. It is the process of managing and provisioning cloud and IT resources using machine readable definition files. IaC gives organizations tools to create, manage, and destroy compute resources by statically defining and declaring these resources in code.&lt;/p&gt;

&lt;p&gt;In this post, I will discuss how to use &lt;a href="https://www.terraform.io/"&gt;HashiCorp’s Terraform&lt;/a&gt; to provision, deploy, and destroy infrastructure resources. Before we start, you will need to create accounts in target cloud providers and services such as &lt;a href="https://cloud.google.com/free"&gt;Google Cloud&lt;/a&gt; and &lt;a href="https://app.terraform.io/signup/account"&gt;Terraform Cloud&lt;/a&gt;. Then you can start learning how to use Terraform to create a new &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;Google Kubernetes Engine (GKE) cluster&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you get started, you will need to have these things in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://cloud.google.com/free"&gt;Google Cloud Platform (GCP)&lt;/a&gt; account 

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/resource-manager/docs/creating-managing-projects#creating_a_project"&gt;Google Cloud project&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Local install of the &lt;a href="https://cloud.google.com/sdk/docs/quickstarts"&gt;Google Cloud SDK CLI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Local install of the &lt;a href="https://www.terraform.io/docs/cli-index.html"&gt;Terraform CLI&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://app.terraform.io/signup/account"&gt;Terraform Cloud&lt;/a&gt; account 

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/terraform/cloud-getting-started/signup#create-your-organization"&gt;Terraform Cloud organization&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Two new &lt;a href="https://learn.hashicorp.com/terraform/cloud-getting-started/create-workspace"&gt;Terraform Cloud workspaces&lt;/a&gt; named &lt;code&gt;iac_gke_cluster&lt;/code&gt; and &lt;code&gt;iac_kubernetes_app&lt;/code&gt;. Choose the &lt;code&gt;"No VCS connection"&lt;/code&gt; option&lt;/li&gt;
&lt;li&gt;Enable &lt;a href="https://www.terraform.io/docs/cloud/workspaces/settings.html#execution-mode"&gt;local execution mode&lt;/a&gt; in both the &lt;code&gt;iac_gke_cluster&lt;/code&gt; and &lt;code&gt;iac_kubernetes_app&lt;/code&gt; workspaces&lt;/li&gt;
&lt;li&gt;Create a new &lt;a href="https://learn.hashicorp.com/terraform/tfc/tfc_migration#authenticate-with-terraform-cloud"&gt;Terraform API token&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt; account 

&lt;ul&gt;
&lt;li&gt;Local install of the &lt;a href="https://docs.docker.com/get-docker/"&gt;Docker client&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Git clone the &lt;a href="https://github.com/datapunkz/learn_iac"&gt;Learn infrastructure as code repo&lt;/a&gt; from GitHub&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://circleci.com/signup/"&gt;CircleCI&lt;/a&gt; account&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post works with the code in the &lt;code&gt;part01&lt;/code&gt; folder of &lt;a href="https://github.com/datapunkz/learn_iac"&gt;this repo&lt;/a&gt;. First, though, you need to create GCP credentials and then set up Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating GCP project credentials
&lt;/h2&gt;

&lt;p&gt;GCP credentials will allow you to perform administrative actions using IaC tooling. To create them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the &lt;a href="https://console.cloud.google.com/apis/credentials/serviceaccountkey"&gt;create service account key&lt;/a&gt; page&lt;/li&gt;
&lt;li&gt;Select the default service account or create a new one&lt;/li&gt;
&lt;li&gt;Select JSON as the key type&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Save this JSON file in the &lt;code&gt;~/.config/gcloud/&lt;/code&gt; directory (you can rename it)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How does HashiCorp Terraform work?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;HashiCorp Terraform&lt;/a&gt; is an open source tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing service providers as well as custom in-house solutions.&lt;/p&gt;

&lt;p&gt;Terraform uses configuration files to describe the components needed to run a single application or your entire data center. It generates an execution plan describing what it will do to reach the desired state, and then executes it to build the infrastructure described by the plan. As the configuration changes, Terraform determines what changed and creates incremental execution plans which can be applied to update infrastructure resources.&lt;/p&gt;

&lt;p&gt;Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. The resources Terraform can manage include low-level infrastructure components such as compute instances, storage, and networking, as well as high-level components like DNS entries and SaaS features.&lt;/p&gt;

&lt;p&gt;Almost any infrastructure type can be represented as a resource in Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Terraform provider?
&lt;/h3&gt;

&lt;p&gt;A provider is responsible for understanding API interactions and exposing resources. Providers can be an IaaS (Alibaba Cloud, AWS, GCP, Microsoft Azure, OpenStack), a PaaS (like Heroku), or SaaS services (Terraform Cloud, DNSimple, Cloudflare).&lt;/p&gt;

&lt;p&gt;In this step, we will provision some resources in &lt;a href="https://cloud.google.com/free"&gt;GCP&lt;/a&gt; using Terraform code. We want to write Terraform code that will define and create a new &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE cluster&lt;/a&gt; that we can use in part 2 of the series.&lt;/p&gt;

&lt;p&gt;To create a new &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE cluster&lt;/a&gt;, we need to rely on the &lt;a href="https://www.terraform.io/docs/providers/google/index.html"&gt;GCP provider&lt;/a&gt; for our interactions with GCP. Once the provider is defined and configured, we can build and control Terraform &lt;a href="https://www.terraform.io/docs/configuration/resources.html"&gt;resources&lt;/a&gt; on GCP.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Terraform resources?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/docs/configuration/resources.html"&gt;Resources&lt;/a&gt; are the most important element in the Terraform language. Each resource block describes one or more infrastructure objects. An infrastructure object can be a virtual network, a compute instance, or a higher-level component like DNS records. A resource block declares a resource of a given type (&lt;code&gt;google_container_cluster&lt;/code&gt;) with a given local name like “web”. The name is used to refer to this resource from elsewhere in the same Terraform module, but it has no significance outside of the scope of a module.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Terraform code
&lt;/h2&gt;

&lt;p&gt;Now that you have a better understanding of Terraform &lt;a href="https://www.terraform.io/docs/providers/google/index.html"&gt;providers&lt;/a&gt; and &lt;a href="https://www.terraform.io/docs/configuration/resources.html"&gt;resources&lt;/a&gt;, it is time to start digging into the code. Terraform code is maintained within directories. Because we are using the &lt;a href="https://www.terraform.io/docs/cli-index.html"&gt;CLI tool&lt;/a&gt;, you must execute commands from within the root directory where the code is located. For this tutorial, the Terraform code we are using is in the &lt;code&gt;part01/iac_gke_cluster&lt;/code&gt; folder of &lt;a href="https://github.com/datapunkz/learn_iac"&gt;this repo&lt;/a&gt;. This directory contains these files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;providers.tf&lt;/li&gt;
&lt;li&gt;variables.tf&lt;/li&gt;
&lt;li&gt;main.tf&lt;/li&gt;
&lt;li&gt;output.tf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These files define the GCP infrastructure resources we are going to create, and they are what Terraform processes. You could place all of the Terraform code into one file, but that becomes harder to manage as the amount of code grows. Most Terraform developers create a separate file for each element. Here is a quick breakdown of each file and its critical elements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breakdown: providers.tf
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;providers.tf&lt;/code&gt; file is where we define the cloud provider we will use. In this case that is the &lt;a href="https://www.terraform.io/docs/providers/google/index.html"&gt;Google Cloud provider&lt;/a&gt;. This is the content of the &lt;code&gt;providers.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "google" {
  # version = "2.7.0"
  credentials = file(var.credentials)
  project = var.project
  region = var.region
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code block has parameters enclosed in curly brace &lt;code&gt;{ }&lt;/code&gt; blocks. The &lt;code&gt;credentials&lt;/code&gt; parameter specifies the file path to the GCP credentials JSON file that you created earlier. Notice that the values for the parameters are prefixed with &lt;code&gt;var&lt;/code&gt;. The &lt;code&gt;var&lt;/code&gt; prefix signals the use of &lt;a href="https://www.terraform.io/docs/configuration/variables.html"&gt;Terraform input variables&lt;/a&gt;, which serve as parameters for a Terraform module. They allow aspects of the module to be customized without altering the module’s own source code, and allow modules to be shared between different configurations. When you declare variables in the root module of your configuration, you can set their values using CLI options and environment variables. When you declare them in child modules, the calling module passes values in the module block.&lt;/p&gt;

&lt;h3&gt;
  
  
  Breakdown: variables.tf
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;variables.tf&lt;/code&gt; file specifies all the input variables that this Terraform project uses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "project" {
  default = "cicd-workshops"
}

variable "region" {
  default = "us-east1"
}

variable "zone" {
  default = "us-east1-d"
}

variable "cluster" {
  default = "cicd-workshops"
}

variable "credentials" {
  default = "~/.ssh/cicd_demo_gcp_creds.json"
}

variable "kubernetes_min_ver" {
  default = "latest"
}

variable "kubernetes_max_ver" {
  default = "latest"
}

variable "machine_type" {
  # referenced by main.tf; example value
  default = "n1-standard-1"
}

variable "app_name" {
  # referenced by main.tf labels and tags; example value
  default = "cicd-101"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The variables defined in this file are used throughout this project. All of these variables have &lt;code&gt;default&lt;/code&gt; values, but the values can be overridden from the CLI when executing Terraform commands. These variables add much-needed flexibility and make it possible to reuse the code.&lt;/p&gt;
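&lt;p&gt;For example, you can override these defaults at plan or apply time without touching &lt;code&gt;variables.tf&lt;/code&gt;. Standard Terraform CLI behavior offers a few options (the values below are placeholders):&lt;/p&gt;

```shell
# 1) -var flags on the command line (highest precedence of the three)
terraform plan -var="project=my-gcp-project" -var="region=us-west1"

# 2) TF_VAR_<name> environment variables map to variable "<name>"
export TF_VAR_cluster="my-test-cluster"
terraform plan

# 3) A terraform.tfvars file, loaded automatically from the working directory
cat > terraform.tfvars <<'EOF'
zone = "us-west1-a"
EOF
terraform plan
```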

&lt;h3&gt;
  
  
  Breakdown: main.tf
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;main.tf&lt;/code&gt; file defines the bulk of our GKE cluster parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt;0.12"
  backend "remote" {
    organization = "datapunks"
    workspaces {
      name = "iac_gke_cluster"
    }
  }
}

resource "google_container_cluster" "primary" {
  name = var.cluster
  location = var.zone
  initial_node_count = 3

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  node_config {
    machine_type = var.machine_type
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }

    labels = {
      app = var.app_name
    }

    tags = ["app", var.app_name]
  }

  timeouts {
    create = "30m"
    update = "40m"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a description of each element of the &lt;code&gt;main.tf&lt;/code&gt; file, starting with the &lt;code&gt;terraform&lt;/code&gt; block. This block specifies the type of &lt;a href="https://www.terraform.io/docs/backends/index.html"&gt;Terraform backend&lt;/a&gt;. A “backend” in Terraform determines how state is loaded and how an operation such as &lt;code&gt;apply&lt;/code&gt; is executed. This abstraction enables things like non-local file state storage and remote execution. In this code block, we are using the &lt;code&gt;remote&lt;/code&gt; backend, which uses Terraform Cloud and is connected to the &lt;code&gt;iac_gke_cluster&lt;/code&gt; workspace you created in the prerequisites section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt;0.12"
  backend "remote" {
    organization = "datapunks"
    workspaces {
      name = "iac_gke_cluster"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
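&lt;p&gt;For the &lt;code&gt;remote&lt;/code&gt; backend to reach that workspace, your local CLI must be authenticated to Terraform Cloud. If you have not done so yet, a typical one-time setup looks like this (&lt;code&gt;terraform login&lt;/code&gt; is available in Terraform 0.12.21 and later; you can also place an API token in your CLI configuration file manually):&lt;/p&gt;

```shell
# One-time: authenticate the Terraform CLI to Terraform Cloud.
terraform login   # opens a browser flow and stores an API token locally

# Then initialize the working directory so the remote backend is configured.
terraform init
```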



&lt;p&gt;The next code block defines the &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE cluster&lt;/a&gt; that we are going to create. It also uses some of the variables defined in &lt;code&gt;variables.tf&lt;/code&gt;. The &lt;code&gt;resource&lt;/code&gt; block has many parameters used to provision and configure the GKE cluster on GCP. The important parameters here are &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;location&lt;/code&gt;, and &lt;code&gt;initial_node_count&lt;/code&gt;, which specifies the number of compute nodes (virtual machines) that will initially make up the new cluster. We will start with three compute nodes for this cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_container_cluster" "primary" {
  name = var.cluster
  location = var.zone
  initial_node_count = 3

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }

  node_config {
    machine_type = var.machine_type
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }

    labels = {
      app = var.app_name
    }

    tags = ["app", var.app_name]
  }

  timeouts {
    create = "30m"
    update = "40m"
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Breakdown: output.tf
&lt;/h3&gt;

&lt;p&gt;Terraform uses something called &lt;a href="https://www.terraform.io/docs/configuration/outputs.html"&gt;output values&lt;/a&gt;. These are the return values of a Terraform module: a child module can expose a subset of its resource attributes to a parent module, and the root module can print certain values in the CLI output after running &lt;code&gt;terraform apply&lt;/code&gt;. The &lt;code&gt;output.tf&lt;/code&gt; blocks shown in the following code sample read out values like the cluster name and cluster endpoint, as well as sensitive data, which is flagged with the &lt;code&gt;sensitive&lt;/code&gt; parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
output "cluster" {
  value = google_container_cluster.primary.name
}

output "host" {
  value = google_container_cluster.primary.endpoint
  sensitive = true
}

output "cluster_ca_certificate" {
  value = base64decode(google_container_cluster.primary.master_auth.0.cluster_ca_certificate)
  sensitive = true
}

output "username" {
  value = google_container_cluster.primary.master_auth.0.username
  sensitive = true
}

output "password" {
  value = google_container_cluster.primary.master_auth.0.password
  sensitive = true
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Initializing Terraform
&lt;/h2&gt;

&lt;p&gt;Now that we have covered our Terraform project and syntax, you can start provisioning the GKE cluster using Terraform. Change directory into the &lt;code&gt;part01/iac_gke_cluster&lt;/code&gt; folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd part01/iac_gke_cluster

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While in &lt;code&gt;part01/iac_gke_cluster&lt;/code&gt;, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your output should be similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@d9ce721293e2:~/project/terraform/gcp/compute# terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "google" (hashicorp/google) 3.10.0...

* provider.google: version = "~&amp;gt; 3.10"

Terraform has been successfully initialized!

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Previewing with Terraform
&lt;/h3&gt;

&lt;p&gt;Terraform has a command that lets you do a dry run and validate your Terraform code without actually executing anything: &lt;code&gt;terraform plan&lt;/code&gt;. This command also lists all the actions and changes that Terraform would execute against your existing infrastructure. In the terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_container_cluster.primary will be created
  + resource "google_container_cluster" "primary" {
      + additional_zones = (known after apply)
      + cluster_ipv4_cidr = (known after apply)
      + default_max_pods_per_node = (known after apply)
      + enable_binary_authorization = false
      + enable_intranode_visibility = (known after apply)
      + enable_kubernetes_alpha = false
      + enable_legacy_abac = false
      + enable_shielded_nodes = false
      + enable_tpu = (known after apply)
      + endpoint = (known after apply)
      + id = (known after apply)
      + initial_node_count = 3
      + instance_group_urls = (known after apply)
      + label_fingerprint = (known after apply)
      + location = "us-east1-d"
  }....
Plan: 1 to add, 0 to change, 0 to destroy.  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will create new GCP resources for you based on the code in the &lt;code&gt;main.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform apply
&lt;/h3&gt;

&lt;p&gt;Now you can create the new infrastructure and deploy the application. Run this command in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will prompt you to confirm your command. Type &lt;code&gt;yes&lt;/code&gt; and press Enter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will build your new GKE cluster on GCP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;em&gt;It will take 3-5 minutes for cluster creation to complete. It is not an instant process because the back-end systems are provisioning and bringing things online.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once my cluster was created, this was my output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

cluster = cicd-workshops
cluster_ca_certificate = &amp;lt;sensitive&amp;gt;
host = &amp;lt;sensitive&amp;gt;
password = &amp;lt;sensitive&amp;gt;
username = &amp;lt;sensitive&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new GKE cluster has been created and the &lt;code&gt;Outputs&lt;/code&gt; results are displayed. Notice that the output values that were marked sensitive are masked in the results with &lt;code&gt;&amp;lt;sensitive&amp;gt;&lt;/code&gt; tags. This ensures sensitive data is protected, but available when needed.&lt;/p&gt;
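&lt;p&gt;When you do need one of the masked values, for example to configure &lt;code&gt;kubectl&lt;/code&gt; against the new cluster, you can read it back by name with &lt;code&gt;terraform output&lt;/code&gt; (standard CLI behavior; the output names are the ones defined in &lt;code&gt;output.tf&lt;/code&gt;):&lt;/p&gt;

```shell
# Read individual output values back from state; requesting a sensitive
# output by name prints its actual value.
terraform output cluster
terraform output host
terraform output cluster_ca_certificate
```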

&lt;h3&gt;
  
  
  Using Terraform destroy
&lt;/h3&gt;

&lt;p&gt;Now that you have proof that your GKE cluster has been successfully created, run the &lt;code&gt;terraform destroy&lt;/code&gt; command to destroy the assets that you created in this tutorial. You can leave it up and running, but be aware that there is a cost associated with any assets running on GCP and you will be liable for those costs. Google gives a generous $300 credit for its free trial sign-up, but you could easily eat through that if you leave assets running. It is up to you, but running &lt;code&gt;terraform destroy&lt;/code&gt; will terminate any running assets.&lt;/p&gt;

&lt;p&gt;Run this command to destroy the GKE cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have just completed part 1 of this series and leveled up your experience by provisioning and deploying a Kubernetes cluster to GCP using IaC and &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Continue to &lt;a href="https://dev.to/ronpowelljr/infrastructure-as-code-part-02-build-docker-images-and-deploy-to-kubernetes-4g86-temp-slug-6226302"&gt;part 2&lt;/a&gt; of the tutorial. In part 2 you will learn how to build a Docker image for an application, push that image to a repository, and then use Terraform to deploy that image as a container to GKE.&lt;/p&gt;

&lt;p&gt;Here are a few resources that will help you expand your knowledge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/docs/cli-index.html"&gt;Terraform Getting Started&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/terraform/cloud-getting-started/signup#create-your-organization"&gt;Terraform Cloud&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/getting-started-on-gcp-with-terraform/index.md#getting-project-credentials"&gt;Google Cloud Platform&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>circleci</category>
      <category>iac</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Smoke testing in CI/CD pipelines</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Wed, 20 Oct 2021 17:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/smoke-testing-in-cicd-pipelines-1dh0</link>
      <guid>https://dev.to/circleci/smoke-testing-in-cicd-pipelines-1dh0</guid>
      <description>&lt;p&gt;Here’s a common situation that plagues many development teams. You run an application through your CI/CD pipeline and all of the tests pass, which is great. But when you deploy it to a live target environment the application just does not function as expected. You can’t always predict what will happen when your application is pushed live. The solution? Smoke tests are designed to reveal these types of failures early by running test cases that cover the critical components and functionality of the application. They also ensure that the application will function as expected in a deployed scenario. When implemented, smoke tests are often executed on every application build to verify that basic but critical functionality passes before jumping into more extensive and time-consuming testing. Smoke tests help create the fast feedback loops that are vital to the software development life cycle.&lt;/p&gt;

&lt;p&gt;In this post I’ll demonstrate how to add smoke testing to the deployment stage of a CI/CD pipeline. The smoke testing will test simple aspects of the application post deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technologies used for smoke testing
&lt;/h2&gt;

&lt;p&gt;This post will reference the following technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://circleci.com/"&gt;CircleCI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/kubernetes-engine/"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.gnu.org/software/bash/manual/html_node/What-is-Bash_003f.html"&gt;Bash&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/asm89/smoke.sh"&gt;smoke.sh - open source smoke testing framework by asm89&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pulumi.com/"&gt;Pulumi&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;This post relies on configurations and code that are featured in my previous post &lt;a href="https://dev.to/ronpowelljr/automate-releases-from-your-pipelines-using-infrastructure-as-code-23b1-temp-slug-1721005"&gt;Automate releases from your pipelines using Infrastructure as Code&lt;/a&gt;. The full source code can be found in &lt;a href="https://github.com/datapunkz/orb-pulumi-gcp"&gt;this repo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting the most from smoke tests
&lt;/h2&gt;

&lt;p&gt;Smoke tests are great for exposing unexpected build errors and connection errors, and for validating a server’s expected response after a new release is deployed to a target environment. For example, a quick, simple smoke test could validate that an application is accessible and responding with an expected HTTP status code like &lt;code&gt;200&lt;/code&gt;, &lt;code&gt;300&lt;/code&gt;, &lt;code&gt;301&lt;/code&gt;, or &lt;code&gt;404&lt;/code&gt;. The examples in this post will test that the deployed app responds with a &lt;code&gt;200 OK&lt;/code&gt; status code and will also validate that the default page renders the expected text.&lt;/p&gt;
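&lt;p&gt;As a sketch of that idea in plain Bash (the helper name here is mine, not part of &lt;code&gt;smoke.sh&lt;/code&gt;), a status check boils down to comparing an observed HTTP status code against the expected one:&lt;/p&gt;

```shell
# Minimal status-code assertion (hypothetical helper, in the spirit of smoke.sh).
# In a real test, the observed code would come from something like:
#   actual=$(curl -s -o /dev/null -w '%{http_code}' "$url")
assert_status() {
  local expected="$1" actual="$2"
  if [ "$actual" -eq "$expected" ]; then
    echo "OK: HTTP $actual"
  else
    echo "FAIL: expected HTTP $expected, got HTTP $actual"
    return 1
  fi
}
```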

&lt;h2&gt;
  
  
  Running CI/CD pipelines without smoke tests
&lt;/h2&gt;

&lt;p&gt;Let’s take a look at an example pipeline config that is designed to run unit tests, build, and push a Docker image to Docker Hub. The pipeline also uses &lt;a href="https://dev.to/circleci/an-intro-to-infrastructure-as-code-104d-temp-slug-9937351"&gt;infrastructure as code&lt;/a&gt; (&lt;a href="https://www.pulumi.com/"&gt;Pulumi&lt;/a&gt;) to provision a new Google Kubernetes Engine (GKE) cluster and to deploy this release to the cluster. This pipeline config example does not implement smoke tests. Please be aware that if you run this specific pipeline example, a new GKE cluster will be created and will live on until you manually run the &lt;code&gt;pulumi destroy&lt;/code&gt; command to terminate all the infrastructure it created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caution:&lt;/strong&gt; &lt;em&gt;Not terminating the infrastructure will result in unexpected costs.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1
orbs:
  pulumi: pulumi/pulumi@2.0.0
jobs:
  build_test:
    docker:
      - image: cimg/python:3.8.1
        environment:
          PIPENV_VENV_IN_PROJECT: 'true'
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            pipenv install --skip-lock
      - run:
          name: Run Tests
          command: |
            pipenv run pytest
  build_push_image:
    docker:
      - image: cimg/python:3.8.1
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build and push Docker image
          command: |       
            pipenv install --skip-lock
            pipenv run pip install --upgrade 'setuptools&amp;lt;45.0.0'
            pipenv run pyinstaller -F hello_world.py
            echo 'export TAG=${CIRCLE_SHA1}' &amp;gt;&amp;gt; $BASH_ENV
            echo 'export IMAGE_NAME=orb-pulumi-gcp' &amp;gt;&amp;gt; $BASH_ENV
            source $BASH_ENV
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME
  deploy_to_gcp:
    docker:
      - image: cimg/python:3.8.1
        environment:
          CLOUDSDK_PYTHON: '/usr/bin/python2.7'
          GOOGLE_SDK_PATH: '~/google-cloud-sdk/'
    steps:
      - checkout
      - pulumi/login:
          version: "2.0.0"
          access-token: ${PULUMI_ACCESS_TOKEN}
      - run:
          name: Install dependencies
          command: |
            cd ~/
            pip install --user -r project/requirements.txt
            curl -o gcp-cli.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.tar.gz
            tar -xzvf gcp-cli.tar.gz
            echo ${GOOGLE_CLOUD_KEYS} | base64 --decode --ignore-garbage &amp;gt; ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
            ./google-cloud-sdk/install.sh --quiet
            echo 'export PATH=$PATH:~/google-cloud-sdk/bin' &amp;gt;&amp;gt; $BASH_ENV
            source $BASH_ENV
            gcloud auth activate-service-account --key-file ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
      - pulumi/update:
          stack: k8s
          working_directory: ${HOME}/project/pulumi/gcp/gke/
workflows:
  build_test_deploy:
    jobs:
      - build_test
      - build_push_image
      - deploy_to_gcp:
          requires:
          - build_test
          - build_push_image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pipeline deploys the new app release to a new GKE cluster, but we do not know if the application is actually up and running after all of this automation completes. How do we find out whether the application has been deployed and is functioning properly in this new GKE cluster? Smoke tests are a great way to quickly and easily validate the application’s status after deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I write a smoke test?
&lt;/h2&gt;

&lt;p&gt;The first step is to develop test cases that define the steps required to validate an application’s functionality. Identify the functionality that you want to validate, and then create scenarios to test it. In this tutorial, I’m intentionally describing a very minimal scope for testing. For our sample project, my biggest concern is validating that the application is accessible after deployment and that the default page that is served renders the expected static text.&lt;/p&gt;

&lt;p&gt;I prefer to outline and list the items I want to test because it suits my style of development. The outline shows the factors I considered when developing the smoke tests for this app. Here is an example of how I developed test cases for this smoke test:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What language/test framework? 

&lt;ul&gt;
&lt;li&gt;Bash&lt;/li&gt;
&lt;li&gt;smoke.sh&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;When should this test be executed? 

&lt;ul&gt;
&lt;li&gt;After the GKE cluster has been created&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;What will be tested? 

&lt;ul&gt;
&lt;li&gt;Test: Is the application accessible after it is deployed? &lt;/li&gt;
&lt;li&gt;Expected Result: Server responds with code &lt;code&gt;200&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test: Does the default page render the text “Welcome to CI/CD” &lt;/li&gt;
&lt;li&gt;Expected Result: &lt;code&gt;TRUE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test: Does the default page render the text “Version Number: “ &lt;/li&gt;
&lt;li&gt;Expected Results: &lt;code&gt;TRUE&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Post test actions (must occur regardless of pass or fail) 

&lt;ul&gt;
&lt;li&gt;Write test results to standard output&lt;/li&gt;
&lt;li&gt;Destroy the GKE cluster and related infrastructure &lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;pulumi destroy&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My test case outline (also called a test script) is complete for this tutorial and clearly shows what I’m interested in testing. For this post, I will write smoke tests using a Bash-based, open source smoke test framework called &lt;a href="https://github.com/asm89/smoke.sh"&gt;&lt;code&gt;smoke.sh&lt;/code&gt;&lt;/a&gt; by &lt;a href="https://github.com/asm89"&gt;asm89&lt;/a&gt;. For your own projects, you can write smoke tests in whatever language or framework you prefer. I picked &lt;code&gt;smoke.sh&lt;/code&gt; because it is an easy framework to implement and it is open source. Now let’s explore how to express this test script using the &lt;code&gt;smoke.sh&lt;/code&gt; framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create smoke test using smoke.sh
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;smoke.sh&lt;/code&gt; framework’s &lt;a href="https://github.com/asm89/smoke.sh#smokesh"&gt;documentation&lt;/a&gt; describes how to use it. The next block of sample code shows how I used the &lt;code&gt;smoke_test&lt;/code&gt; file found in the &lt;code&gt;test/&lt;/code&gt; directory of the &lt;a href="https://github.com/datapunkz/orb-pulumi-gcp"&gt;example code’s repo&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

. tests/smoke.sh

TIME_OUT=300
TIME_OUT_COUNT=0
PULUMI_STACK="k8s"
PULUMI_CWD="pulumi/gcp/gke/"
SMOKE_IP=$(pulumi stack --stack $PULUMI_STACK --cwd $PULUMI_CWD output app_endpoint_ip)
SMOKE_URL="http://$SMOKE_IP"

while true
do
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' $SMOKE_URL)
  if [ "$STATUS" -eq 200 ]; then
    smoke_url_ok $SMOKE_URL
    smoke_assert_body "Welcome to CI/CD"
    smoke_assert_body "Version Number:"
    smoke_report
    echo -e "\n\n"
    echo 'Smoke Tests Successfully Completed.'
    echo 'Terminating the Kubernetes Cluster in 300 seconds...'
    sleep 300
    pulumi destroy --stack $PULUMI_STACK --cwd $PULUMI_CWD --yes
    break
  elif [[ $TIME_OUT_COUNT -gt $TIME_OUT ]]; then
    echo "Process has Timed out! Elapsed Timeout Count.. $TIME_OUT_COUNT"
    pulumi destroy --stack $PULUMI_STACK --cwd $PULUMI_CWD --yes
    exit 1
  else
    echo "Checking Status on host $SMOKE_URL... $TIME_OUT_COUNT seconds elapsed"
    TIME_OUT_COUNT=$((TIME_OUT_COUNT+10))
  fi
  sleep 10
done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, I’ll explain what’s going on in this smoke_test file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Line by line description of the smoke_test file
&lt;/h3&gt;

&lt;p&gt;Let’s start at the top of the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

. tests/smoke.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet specifies the Bash binary to use and also specifies the file path to the core &lt;code&gt;smoke.sh&lt;/code&gt; framework to import/include in the &lt;code&gt;smoke_test&lt;/code&gt; script.&lt;br&gt;
&lt;/p&gt;
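&lt;p&gt;The leading dot is Bash’s source operator: it runs &lt;code&gt;tests/smoke.sh&lt;/code&gt; in the current shell so that its functions (&lt;code&gt;smoke_url_ok&lt;/code&gt;, &lt;code&gt;smoke_assert_body&lt;/code&gt;, and so on) become callable from this script. A tiny, self-contained illustration of the mechanism (the file and function here are made up for the demo):&lt;/p&gt;

```shell
# Sourcing runs a file in the current shell, importing its definitions.
cat > /tmp/helpers.sh <<'EOF'
greet() { echo "hello from helpers.sh"; }
EOF

. /tmp/helpers.sh   # equivalent to: source /tmp/helpers.sh
greet               # now available in this shell
```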

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TIME_OUT=300
TIME_OUT_COUNT=0
PULUMI_STACK="k8s"
PULUMI_CWD="pulumi/gcp/gke/"
SMOKE_IP=$(pulumi stack --stack $PULUMI_STACK --cwd $PULUMI_CWD output app_endpoint_ip)
SMOKE_URL="http://$SMOKE_IP"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet defines environment variables that will be used throughout the &lt;code&gt;smoke_test&lt;/code&gt; script. Here is a list of each environment variable and its purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;TIME_OUT=300&lt;/code&gt; is the maximum time, in seconds, to wait for the application to respond before giving up.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TIME_OUT_COUNT=0&lt;/code&gt; tracks how many seconds have elapsed while polling.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PULUMI_STACK="k8s"&lt;/code&gt; is used by Pulumi to specify the Pulumi app stack.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PULUMI_CWD="pulumi/gcp/gke/"&lt;/code&gt; is the path to the Pulumi infrastructure code.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SMOKE_IP=$(pulumi stack --stack $PULUMI_STACK --cwd $PULUMI_CWD output app_endpoint_ip)&lt;/code&gt; is the Pulumi command used to retrieve the public IP address of the application on the GKE cluster. This variable is referenced throughout the script.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SMOKE_URL="http://$SMOKE_IP"&lt;/code&gt; specifies the url endpoint of the application on the GKE cluster.
&lt;/li&gt;
&lt;/ul&gt;
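&lt;p&gt;The &lt;code&gt;$(...)&lt;/code&gt; syntax used for &lt;code&gt;SMOKE_IP&lt;/code&gt; is Bash command substitution: it captures a command's standard output into a variable. A small sketch of the pattern, with a placeholder IP standing in for the real &lt;code&gt;pulumi stack ... output&lt;/code&gt; call:&lt;/p&gt;

```shell
# Placeholder for the pulumi output lookup; the real script runs:
# pulumi stack --stack $PULUMI_STACK --cwd $PULUMI_CWD output app_endpoint_ip
SMOKE_IP=$(echo "203.0.113.10")
SMOKE_URL="http://$SMOKE_IP"
echo "$SMOKE_URL"
```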

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while true
do
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' $SMOKE_URL)
  if [ "$STATUS" -eq 200 ]; then
    smoke_url_ok $SMOKE_URL
    smoke_assert_body "Welcome to CI/CD"
    smoke_assert_body "Version Number:"
    smoke_report
    echo "\n\n"
    echo 'Smoke Tests Successfully Completed.'
    echo 'Terminating the Kubernetes Cluster in 300 second...'
    sleep 300
    pulumi destroy --stack $PULUMI_STACK --cwd $PULUMI_CWD --yes
    break
  elif [[ $TIME_OUT_COUNT -gt $TIME_OUT ]]; then
    echo "Process has Timed out! Elapsed Timeout Count.. $TIME_OUT_COUNT"
    pulumi destroy --stack $PULUMI_STACK --cwd $PULUMI_CWD --yes
    exit 1
  else
    echo "Checking Status on host $SMOKE... $TIME_OUT_COUNT seconds elapsed"
    TIME_OUT_COUNT=$((TIME_OUT_COUNT+10))
  fi
  sleep 10
done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet is where all the magic happens. It’s a &lt;code&gt;while&lt;/code&gt; loop that runs until the script reaches a &lt;code&gt;break&lt;/code&gt; or an &lt;code&gt;exit&lt;/code&gt;. In this case, the loop uses a &lt;code&gt;curl&lt;/code&gt; command to test if the application returns a &lt;code&gt;200 OK&lt;/code&gt; response code. Because this pipeline is creating a brand new GKE cluster from scratch, there are operations in the Google Cloud Platform that need to complete before we can begin smoke testing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The GKE cluster and application service must be up and running.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;$STATUS&lt;/code&gt; variable is populated with the result of the &lt;code&gt;curl&lt;/code&gt; request and then tested for the value &lt;code&gt;200&lt;/code&gt;. Otherwise, the loop increments the &lt;code&gt;$TIME_OUT_COUNT&lt;/code&gt; variable by 10 seconds, then waits 10 seconds before repeating the &lt;code&gt;curl&lt;/code&gt; request, until the application responds.&lt;/li&gt;
&lt;li&gt;Once the cluster and app are up, running, and responding, the &lt;code&gt;STATUS&lt;/code&gt; variable will produce a &lt;code&gt;200&lt;/code&gt; response code and the remainder of the tests will proceed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;smoke_assert_body "Welcome to CI/CD"&lt;/code&gt; and &lt;code&gt;smoke_assert_body "Version Number:"&lt;/code&gt; statements are where I test that the welcome and version number text is being rendered on the webpage being called. If either assertion is false, the test will fail, which will cause the pipeline to fail. If the application returns a &lt;code&gt;200&lt;/code&gt; response code and both text assertions result in &lt;code&gt;TRUE&lt;/code&gt;, the smoke test passes and the script executes the &lt;code&gt;pulumi destroy&lt;/code&gt; command, which terminates all of the infrastructure created for this test case, since there is no further need for the cluster once testing is complete.&lt;/p&gt;

&lt;p&gt;This loop also has an &lt;code&gt;elif&lt;/code&gt; (else if) statement that checks whether the elapsed time has exceeded the &lt;code&gt;$TIME_OUT&lt;/code&gt; value. The &lt;code&gt;elif&lt;/code&gt; statement is an example of &lt;a href="https://en.wikipedia.org/wiki/Exception_handling"&gt;exception handling&lt;/a&gt;, which enables us to control what happens when unexpected results occur. If the &lt;code&gt;$TIME_OUT_COUNT&lt;/code&gt; value exceeds the &lt;code&gt;$TIME_OUT&lt;/code&gt; value, the &lt;code&gt;pulumi destroy&lt;/code&gt; command is executed and terminates the newly created infrastructure, and the &lt;code&gt;exit 1&lt;/code&gt; command then fails your pipeline build. Regardless of test results, the GKE cluster will be terminated, because there really isn’t a need for this infrastructure to exist outside of testing.&lt;/p&gt;
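&lt;p&gt;The poll-until-ready-or-timeout pattern can be sketched on its own, with the &lt;code&gt;curl&lt;/code&gt; call stubbed out so the sketch runs without a live endpoint (the &lt;code&gt;check_status&lt;/code&gt; function is a hypothetical stand-in, not part of the real script):&lt;/p&gt;

```shell
TIME_OUT=30
TIME_OUT_COUNT=0
ATTEMPT=0

# Hypothetical stand-in for: curl -s -o /dev/null -w '%{http_code}' $SMOKE_URL
# Reports 000 (unreachable) until the third attempt, then 200.
check_status() {
  if [ "$1" -ge 3 ]; then echo 200; else echo 000; fi
}

while true; do
  ATTEMPT=$((ATTEMPT+1))
  STATUS=$(check_status "$ATTEMPT")
  if [ "$STATUS" -eq 200 ]; then
    echo "endpoint is up after $TIME_OUT_COUNT seconds"
    break
  elif [[ $TIME_OUT_COUNT -gt $TIME_OUT ]]; then
    echo "Process has timed out after $TIME_OUT_COUNT seconds"
    exit 1
  else
    TIME_OUT_COUNT=$((TIME_OUT_COUNT+10))
  fi
  # The real script sleeps 10 seconds here; omitted to keep the sketch fast.
done
```

&lt;p&gt;On the timeout branch, the real script tears down the cluster with &lt;code&gt;pulumi destroy&lt;/code&gt; before exiting, so failed builds never leave orphaned infrastructure behind.&lt;/p&gt;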

&lt;h2&gt;
  
  
  Adding smoke tests to pipelines
&lt;/h2&gt;

&lt;p&gt;I’ve explained the smoke test example and my process for developing the test case. Now it’s time to integrate it into the CI/CD pipeline configuration above. We’ll add a new &lt;code&gt;run&lt;/code&gt; step below the &lt;code&gt;pulumi/update&lt;/code&gt; step of the &lt;code&gt;deploy_to_gcp&lt;/code&gt; job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      ...
      - run:
          name: Run Smoke Test against GKE
          command: |
            echo 'Initializing Smoke Tests on the GKE Cluster'
            ./tests/smoke_test
            echo "GKE Cluster Tested &amp;amp; Destroyed"
      ...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet demonstrates how to integrate and execute the &lt;code&gt;smoke_test&lt;/code&gt; script in an existing CI/CD pipeline. Adding this new run block ensures that every pipeline build tests the application on a live GKE cluster and validates that the application passed all test cases. You can be confident that the specific release will perform nominally when deployed to the tested target environment, which in this case is a Google Kubernetes Engine cluster.&lt;/p&gt;
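&lt;p&gt;One practical note: for the &lt;code&gt;run&lt;/code&gt; step to invoke &lt;code&gt;./tests/smoke_test&lt;/code&gt; directly, the script must be committed with its executable bit set. A quick sketch, using a placeholder script body for illustration:&lt;/p&gt;

```shell
# Create a placeholder script, then mark it executable so it can be
# invoked directly as ./tests/smoke_test.
mkdir -p tests
printf '#!/bin/bash\necho ok\n' > tests/smoke_test
chmod +x tests/smoke_test
./tests/smoke_test
```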

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;In summary, I’ve discussed and demonstrated the advantages of using smoke tests and Infrastructure as Code in CI/CD pipelines to test builds in their target deployment environments. Testing an application in its target environment provides valuable insight into how it will behave when it’s deployed. Integrating smoke testing into CI/CD pipelines adds another layer of confidence in application builds.&lt;/p&gt;

&lt;p&gt;If you have any questions, comments, or feedback please feel free to ping me on Twitter &lt;a href="https://twitter.com/punkdata"&gt;@punkdata&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>testing</category>
      <category>circleci</category>
    </item>
    <item>
      <title>Managing reusable pipeline configuration with object parameters</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Fri, 08 Oct 2021 16:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/managing-reusable-pipeline-configuration-with-object-parameters-cik</link>
      <guid>https://dev.to/circleci/managing-reusable-pipeline-configuration-with-object-parameters-cik</guid>
      <description>&lt;p&gt;CircleCI pipelines are defined using the &lt;a href="https://circleci.com/docs/2.0/writing-yaml"&gt;YAML syntax&lt;/a&gt;, which has been widely adopted by many software tools and solutions. YAML is a human-readable declarative &lt;a href="https://en.wikipedia.org/wiki/Data_structure"&gt;data structure&lt;/a&gt; used in in &lt;a href="https://circleci.com/docs/2.0/config-intro/#getting-started-with-circleci-config"&gt;configuration files&lt;/a&gt; (like those for CircleCI pipelines) and in applications where data is being stored or transmitted. The data in pipeline configuration files specifies and controls how workflows and jobs are executed when triggered on the platform. These pipeline directives in configuration files tend to become repetitive, which can result in situations where the config syntax grows in volume. Over time, this increased volume makes the config harder to maintain. Because YAML is a data structure, only minimal syntax reusability capabilities (&lt;a href="https://yaml.org/spec/1.2/spec.html#id2765878"&gt;Anchors and Aliases&lt;/a&gt;) are available to address the increased volume. Anchors and Aliases are too limited to be useful for defining CI/CD pipelines. Fortunately, CircleCI configuration &lt;a href="https://circleci.com/docs/2.0/pipeline-variables/#pipeline-parameters-in-configuration"&gt;parameter features&lt;/a&gt; provide robust capabilities for encapsulating and reusing functionality data that would otherwise be redundant.&lt;/p&gt;

&lt;p&gt;In this post, I will introduce pipeline &lt;a href="https://circleci.com/docs/2.0/pipeline-variables/#pipeline-parameters-in-configuration"&gt;configuration parameters&lt;/a&gt; and explain some of the benefits of adopting them in your pipeline configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are configuration parameters?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://circleci.com/docs/2.0/configuration-reference/#executors-requires-version-21"&gt;Executors&lt;/a&gt;, &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs"&gt;jobs&lt;/a&gt; and &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#commands-requires-version-21"&gt;commands&lt;/a&gt; are considered objects within a pipeline configuration file. Like objects in the &lt;a href="https://en.wikipedia.org/wiki/Object-oriented_programming"&gt;Object Oriented Programming (OOP)&lt;/a&gt; paradigm, pipeline objects can be extended to provide customized functionality. CircleCI configuration &lt;a href="https://circleci.com/docs/2.0/pipeline-variables/#pipeline-parameters-in-configuration"&gt;parameters&lt;/a&gt; let developers extend the capabilities of executors, jobs, and commands by providing ways to create, encapsulate, and reuse pipeline configuration syntax.&lt;/p&gt;

&lt;p&gt;Executors, jobs, and commands are objects with their own individual properties. Parameters also have their own distinct properties that can interact with the object. The composition and properties for parameters include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;parameters:
  |_ parameter name: (Required. Specify a name for the parameter)
    |_ description: (Optional. Describes the parameter)
    |_ type: (Required. Data type: string, boolean, integer, or enum)
    |_ default: (The default value for the parameter)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When should I use parameters in a pipeline config?
&lt;/h2&gt;

&lt;p&gt;Use parameters when data and functionality is repeated within pipelines. In other words, if your pipelines have any executors, jobs, or commands that are defined or executed in your pipelines more than once, I recommend identifying those patterns or elements and defining them as parameters within your configuration file syntax. Using parameters gives you the ability to centrally manage and maintain functionality, and dramatically minimizes redundant data and the total lines of syntax in configuration files. The ability to provide variable parameter arguments is also a benefit, and the overall readability of the pipeline syntax is improved as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I create parameters?
&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier, &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#executors-requires-version-21"&gt;executors&lt;/a&gt;, &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#jobs"&gt;jobs&lt;/a&gt;, and &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#commands-requires-version-21"&gt;commands&lt;/a&gt; are the configuration elements that can be extended with parameters. Deciding which of these elements to extend will depend on your specific use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; &lt;em&gt;The parameters features are available only in CircleCI version 2.1 and above. The version must be defined at the top of the config file like this:&lt;/em&gt; &lt;code&gt;version: 2.1&lt;/code&gt; &lt;em&gt;for the parameters to be recognized by the platform.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let me give you an example of defining a parameter for the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#parallelism"&gt;parallelism&lt;/a&gt; quantities within a &lt;code&gt;jobs:&lt;/code&gt; object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1
jobs:
  build_artifact:
    parameters:
      parallelism_qty:
        type: integer
        default: 1
    parallelism: &amp;lt;&amp;lt; parameters.parallelism_qty &amp;gt;&amp;gt;
    machine: true
    steps:
      - checkout
workflows:
  build_workflow:
    jobs:
      - build_artifact:
          parallelism_qty: 2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;code&gt;build_artifact:&lt;/code&gt; job has a &lt;code&gt;parameters:&lt;/code&gt; key defined with the name &lt;code&gt;parallelism_qty:&lt;/code&gt;. This parameter has a data type of integer and a default value of 1. The &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#parallelism"&gt;parallelism:&lt;/a&gt; key is a property of the &lt;code&gt;jobs:&lt;/code&gt; object and defines the number of executors to spawn and execute commands in the &lt;code&gt;steps:&lt;/code&gt; list. In this case, the special &lt;code&gt;checkout&lt;/code&gt; command will be executed on all the executors spawned. The job’s &lt;code&gt;parallelism:&lt;/code&gt; key has been assigned the value &lt;code&gt;&amp;lt;&amp;lt; parameters.parallelism_qty &amp;gt;&amp;gt;&lt;/code&gt;, which references the &lt;code&gt;parallelism_qty:&lt;/code&gt; parameter definition defined above it. This example shows how parameters can add flexibility to your pipeline constructs and provide a convenient way to centrally manage functionality that is repeated in pipeline syntax.&lt;/p&gt;
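&lt;p&gt;To make the effect of &lt;code&gt;parallelism:&lt;/code&gt; concrete: at runtime CircleCI sets the &lt;code&gt;CIRCLE_NODE_TOTAL&lt;/code&gt; and &lt;code&gt;CIRCLE_NODE_INDEX&lt;/code&gt; environment variables in each spawned executor, which a step can use to shard work. A sketch of that idea, with the two variables hardcoded since they are only set on the platform, and with made-up test names:&lt;/p&gt;

```shell
# Hardcoded for illustration; CircleCI sets these per executor at runtime.
CIRCLE_NODE_TOTAL=2
CIRCLE_NODE_INDEX=0

TESTS="test_a test_b test_c test_d"
ASSIGNED=""
i=0
# Round-robin: node N takes every test whose position modulo the node
# count equals N, so each executor runs a disjoint slice of the suite.
for t in $TESTS; do
  if [ $((i % CIRCLE_NODE_TOTAL)) -eq "$CIRCLE_NODE_INDEX" ]; then
    ASSIGNED="$ASSIGNED $t"
  fi
  i=$((i+1))
done
echo "node $CIRCLE_NODE_INDEX runs:$ASSIGNED"
```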

&lt;h2&gt;
  
  
  Using parameters in job objects
&lt;/h2&gt;

&lt;p&gt;The workflow block in the previous example demonstrates how to use parameters within configuration syntax. Because &lt;code&gt;parallelism_qty:&lt;/code&gt; is defined as a parameter of a job object, it can be assigned a value when the job is specified in a workflow.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;workflows:&lt;/code&gt; block has a &lt;code&gt;jobs:&lt;/code&gt; list that specifies the &lt;code&gt;build_artifact:&lt;/code&gt;. It also assigns a value of 2 executors to the &lt;code&gt;parallelism_qty:&lt;/code&gt;, which will spawn 2 executors and execute the commands in the &lt;code&gt;steps:&lt;/code&gt; list. If that value was 3, then the build_artifact job would spawn 3 executors and run the commands 3 times.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Executors, jobs, and commands are objects with properties that can be defined, customized, and reused throughout pipeline configuration syntax.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Reusing executor objects in pipeline config
&lt;/h2&gt;

&lt;p&gt;The previous section demonstrates how to define and use parameters within a jobs object. In this section, I will describe how to use parameters with executors. Executors define the runtime or environment used to execute pipeline jobs and commands. Executor objects have a set of their own unique properties that parameters can interact with. This is an example of defining and implementing &lt;a href="https://circleci.com/docs/2.0/reusing-config/#authoring-reusable-executors"&gt;reusable executors&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1
executors:
  docker-executor:
    docker:
      - image: cimg/ruby:3.0.2-browsers
  ubuntu_20-04-executor:
    machine:
      image: 'ubuntu-2004:202010-01'
jobs:
  run-tests-on-docker:
    executor: docker-executor
    steps:
      - checkout
      - run: ruby unit_test.rb 
  run-tests-on-ubuntu-2004:
    executor: ubuntu_20-04-executor
    steps:
      - checkout
      - run: ruby unit_test.rb 
workflows:
  test-app-on-diff-os:
    jobs:
      - run-tests-on-docker
      - run-tests-on-ubuntu-2004

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example shows how to define and implement reusable executors in your pipeline config. I used the &lt;code&gt;executors:&lt;/code&gt; key at the start of the file to define 2 executors, one named &lt;code&gt;docker-executor:&lt;/code&gt; and one called &lt;code&gt;ubuntu_20-04-executor:&lt;/code&gt;. The first specifies using a Docker executor and the second specifies a machine executor using an Ubuntu 20.04 operating system image. Predefining executors this way enables developers to create a list of executor resources to be used in this pipeline, and to centrally manage the various properties related to executor types. For instance, the &lt;a href="https://circleci.com/docs/2.0/reusing-config/#authoring-reusable-executors"&gt;Docker executor&lt;/a&gt; has properties that do not pertain to and are unavailable to the &lt;a href="https://circleci.com/docs/2.0/configuration-reference/#machine"&gt;machine executor&lt;/a&gt;, because the machine executor is not of the type &lt;code&gt;docker&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reusing objects keeps the amount of syntax to a minimum while providing terse object implementations that optimize code readability and central management of functionality.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;jobs:&lt;/code&gt; block defines &lt;code&gt;run-tests-on-docker:&lt;/code&gt; and &lt;code&gt;run-tests-on-ubuntu-2004:&lt;/code&gt;; both have an &lt;code&gt;executor:&lt;/code&gt; key assigned the appropriate executor for that job. The &lt;code&gt;run-tests-on-docker:&lt;/code&gt; job executes its steps using the &lt;code&gt;docker-executor&lt;/code&gt; definition and the &lt;code&gt;run-tests-on-ubuntu-2004:&lt;/code&gt; job executes on the &lt;code&gt;ubuntu_20-04-executor&lt;/code&gt; definition. As you can see, pre-defining these executors in their own stanza makes the config syntax easier to read, which will make it easier to use and maintain. Any changes to executors can be made in the respective definition and will propagate to any jobs that implement them. This type of centralized management of defined executors can also apply to job and command objects that are defined in a similar way.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;workflows:&lt;/code&gt; block, the &lt;code&gt;test-app-on-diff-os:&lt;/code&gt; workflow triggers two jobs in parallel that execute unit tests in their respective executor environments. Running these tests on different executors is helpful when you want to find out how applications will behave in different operating systems, and this type of test is common practice. The takeaway here is that I defined the executors just once and easily implemented them within multiple jobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reusable command objects
&lt;/h2&gt;

&lt;p&gt;Commands can also be defined and implemented within config syntax, just like executors and jobs can. Although command object properties differ from those of executors and jobs, defining and implementing them is similar. Here is an example showing reusable commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 2.1

commands:
  install-wget:
    description: "Install the wget client"
    parameters:
      version:
        type: string
        default: "1.20.3-1ubuntu1"
    steps:
      - run: sudo apt install -y wget=&amp;lt;&amp;lt; parameters.version &amp;gt;&amp;gt;
jobs:
  test-web-site:
    docker:
      - image: "cimg/base:stable"
        auth:
          username: $DOCKERHUB_USER
          password: $DOCKERHUB_PASSWORD
    steps:
      - checkout
      - install-wget:
          version: "1.17.0-1ubuntu1"
      - run: wget --spider https://www.circleci.com
workflows:
  run-tests:
    jobs:
      - test-web-site

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, a reusable command has been defined and implemented by a job. The &lt;code&gt;commands:&lt;/code&gt; key at the top of the config defines a command named &lt;code&gt;install-wget:&lt;/code&gt; that installs a specific version of the wget client. In this case, a parameter is defined to specify which wget version number to install. The &lt;code&gt;default:&lt;/code&gt; key supplies the value &lt;code&gt;1.20.3-1ubuntu1&lt;/code&gt; if no value is specified. The &lt;code&gt;steps:&lt;/code&gt; key lists one &lt;code&gt;run:&lt;/code&gt; command that installs the version of wget specified in the &lt;code&gt;version:&lt;/code&gt; parameter, which is referenced by the &lt;code&gt;&amp;lt;&amp;lt; parameters.version &amp;gt;&amp;gt;&lt;/code&gt; variable.&lt;/p&gt;

&lt;p&gt;As shown in the example, the defined command can be implemented and used by job objects. The steps stanza in the &lt;code&gt;test-web-site:&lt;/code&gt; job implements the &lt;code&gt;- install-wget:&lt;/code&gt; command. Its &lt;code&gt;version:&lt;/code&gt; parameter is set to an earlier version of wget rather than the default value. The last &lt;code&gt;run:&lt;/code&gt; command in the job uses wget to test a response from the given URL. This example runs a simple test to check if a website is responding to requests.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;workflows:&lt;/code&gt; block, as usual, triggers the &lt;code&gt;- test-web-site&lt;/code&gt; job, which executes the reusable &lt;code&gt;install-wget&lt;/code&gt; command. Just like executors and jobs, commands bring the ability to reuse code, centrally manage changes, and increase the readability of the syntax within pipeline configuration files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, I described the basics of using pipeline parameters and reusable pipeline objects: executors, jobs, and commands. The key takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executors, jobs, and commands are considered objects with properties that can be defined, customized, and reused throughout pipeline configuration syntax&lt;/li&gt;
&lt;li&gt;Reusing these objects helps keep the amount of syntax to a minimum while providing terse object implementations that optimize code readability and central management of functionality&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although I wrote this post to introduce you to the concepts of parameters and reusable objects, I would like to encourage you to review the &lt;a href="https://circleci.com/docs/2.0/reusing-config/"&gt;Reusable Config Reference Guide&lt;/a&gt;. It will help you gain a deeper understanding of these capabilities so that you can take full advantage of these awesome features.&lt;/p&gt;

&lt;p&gt;I would love to know your thoughts and opinions, so please join the discussion by tweeting to me &lt;a href="https://twitter.com/punkdata"&gt;@punkdata&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>pipelines</category>
      <category>cicd</category>
      <category>circleci</category>
    </item>
    <item>
      <title>Deploy applications using CircleCI, Docker, HashiCorp Terraform, and Google Cloud</title>
      <dc:creator>Angel Rivera</dc:creator>
      <pubDate>Thu, 13 Dec 2018 18:00:00 +0000</pubDate>
      <link>https://dev.to/circleci/deploy-applications-using-circleci-docker-hashicorp-terraform-and-google-cloud-10l8</link>
      <guid>https://dev.to/circleci/deploy-applications-using-circleci-docker-hashicorp-terraform-and-google-cloud-10l8</guid>
      <description>&lt;p&gt;I regularly speak at conferences and tech meetups, and lately I’ve been fielding a lot of questions regarding the continuous delivery of applications to cloud platforms, such as &lt;a href="https://cloud.google.com"&gt;Google Cloud&lt;/a&gt;, using &lt;a href="https://www.terraform.io/"&gt;HashiCorp Terraform&lt;/a&gt;. In this post, I will demonstrate how to deploy an application using CI/CD pipelines, Docker, and Terraform into a Google Cloud instance. In this example, you will create a new Google Cloud instance using a &lt;a href="https://cloud.google.com/container-optimized-os/docs/"&gt;Google Container-Optimized OS&lt;/a&gt; host image. Google’s Container-Optimized OS is an operating system image for Compute Engine VMs that is optimized for running Docker containers. With Container-Optimized OS, you can bring your Docker containers up on Google Cloud Platform quickly, efficiently, and securely.&lt;/p&gt;

&lt;p&gt;This tutorial also demonstrates how to use Terraform to create a new Google Cloud instance and deploy the application using this &lt;a href="https://github.com/datapunkz/python-cicd-workshop/blob/master/tutorial/cicd_101_guide.md"&gt;CI/CD tutorial Docker image&lt;/a&gt;. The image will be pulled from Docker Hub and run on the instance created from Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you get started, you'll need to have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://github.com/join"&gt;GitHub account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://circleci.com/signup/"&gt;CircleCI account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://hub.docker.com"&gt;Docker Hub account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://cloud.google.com/free"&gt;Google Cloud account&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll also need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fork or clone the &lt;a href="https://github.com/datapunkz/python-cicd-workshop"&gt;cicd-101-workshop repo&lt;/a&gt; locally.&lt;/li&gt;
&lt;li&gt;Complete the &lt;a href="https://github.com/datapunkz/python-cicd-workshop/blob/rivera-demo/tutorial/cicd_101_guide.md#hands-on-with-circleci"&gt;hands on with CircleCI section in this tutorial&lt;/a&gt;, specifically the &lt;a href="https://github.com/datapunkz/python-cicd-workshop/blob/rivera-demo/tutorial/cicd_101_guide.md#set-project-level-environment-variables"&gt;setting your Docker Hub credential environment variables section&lt;/a&gt;, which is required for the build to push the Docker image to Docker Hub.&lt;/li&gt;
&lt;li&gt;Ensure that your application's &lt;a href="https://circleci.com/docs/2.0/circleci-images/"&gt;Docker image&lt;/a&gt; exists in your Docker Hub account. A green build on CircleCI should get you there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After you have all the prerequisites, you're ready to proceed to the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure as code
&lt;/h2&gt;

&lt;p&gt;Infrastructure as code (IaC) is the process of managing and provisioning cloud and IT resources via machine readable definition files. IaC enables organizations to provision, manage, and destroy compute resources using modern DevOps tools such as &lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You'll be using IaC principles and Terraform in this post to deploy your application to Google Cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Google Cloud Platform project
&lt;/h2&gt;

&lt;p&gt;Use these instructions to &lt;a href="https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/getting-started-on-gcp-with-terraform/index.md#create-a-google-cloud-platform-project"&gt;create a new Google Cloud project&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create and get Google Cloud Project credentials
&lt;/h3&gt;

&lt;p&gt;You will need to create &lt;a href="https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/getting-started-on-gcp-with-terraform/index.md#getting-project-credentials"&gt;Google Cloud credentials&lt;/a&gt; in order to perform administrative actions using Terraform. Go to the &lt;a href="https://console.cloud.google.com/apis/credentials/serviceaccountkey"&gt;Create Service Account Key page&lt;/a&gt;. Select the default service account or create a new one, select JSON as the key type, and click &lt;strong&gt;Create&lt;/strong&gt;. Save this JSON file in the root of &lt;code&gt;terraform/google_cloud/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important security note:&lt;/strong&gt; Rename the file to &lt;code&gt;cicd_demo_gcp_creds.json&lt;/code&gt; to protect your Google Cloud credentials from being published and exposed in a public GitHub repository. You can also protect the credentials JSON file from being released by adding its filename to this project's &lt;code&gt;.gitignore&lt;/code&gt; file. Be very cautious with the data in this JSON file: if it is exposed, anyone with this information can access your Google Cloud account, create resources, and run up charges.&lt;/p&gt;
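&lt;p&gt;The &lt;code&gt;.gitignore&lt;/code&gt; safeguard can be sketched in two lines (run from the repository root):&lt;/p&gt;

```shell
# Append the credentials filename to .gitignore so git never tracks it,
# then confirm the entry is present.
echo 'cicd_demo_gcp_creds.json' >> .gitignore
grep 'cicd_demo_gcp_creds.json' .gitignore
```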

&lt;h2&gt;
  
  
  Install Terraform locally
&lt;/h2&gt;

&lt;p&gt;First, &lt;a href="https://www.terraform.io/intro/getting-started/install.html"&gt;install Terraform locally&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up Terraform
&lt;/h2&gt;

&lt;p&gt;In a terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;terraform/google_cloud/
terraform init &lt;span class="c"&gt;# this installs the Google Terraform plugins&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, you'll have to change some values in the &lt;code&gt;main.tf&lt;/code&gt; file. You'll change the values of the Terraform variables to match your information. Change the variable &lt;code&gt;project_name&lt;/code&gt;'s &lt;code&gt;default&lt;/code&gt; tag to the project name you created earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"project_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"string"&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"cicd-workshops"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the variable &lt;code&gt;docker_declaration&lt;/code&gt;'s &lt;code&gt;default&lt;/code&gt; tag's &lt;code&gt;image&lt;/code&gt; value of: &lt;code&gt;image: 'ariv3ra/python-cicd-workshop'&lt;/code&gt; to the Docker image that you built and pushed to Docker Hub in the CI/CD tutorial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"docker_declaration"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"string"&lt;/span&gt;
  &lt;span class="c1"&gt;# Change the image: string to match the Docker image you want to use&lt;/span&gt;
  &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"spec:&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n  containers:&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n    - name: test-docker&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n      image: 'ariv3ra/python-cicd-workshop'&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n      stdin: false&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n      tty: false&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n  restartPolicy: Always&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;n"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Terraform plan
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.terraform.io/docs/commands/plan.html"&gt;&lt;code&gt;terraform plan&lt;/code&gt; command&lt;/a&gt; is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files. This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state.&lt;/p&gt;

&lt;p&gt;In a terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;plan.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prints an execution plan listing the resources Terraform will create, change, or destroy, and saves it to &lt;code&gt;plan.txt&lt;/code&gt; for the apply step.&lt;/p&gt;
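&lt;p&gt;The exact output depends on your configuration, but a successful plan ends with a change summary and a note that the plan file was saved, similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 1 to add, 0 to change, 0 to destroy.

This plan was saved to: plan.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;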

&lt;h2&gt;
  
  
  Terraform apply
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.terraform.io/docs/commands/apply.html"&gt;&lt;code&gt;terraform apply&lt;/code&gt; command&lt;/a&gt; is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a Terraform execution plan.&lt;/p&gt;

&lt;p&gt;In a terminal, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply plan.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This executes the Terraform plan and attempts to build out a new Google Compute instance based on the plan and the Docker image defined.&lt;/p&gt;
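&lt;p&gt;If you want to review what was actually created, Terraform records it in local state. Both commands below are standard Terraform CLI and are not specific to this tutorial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Show the resources recorded in the current state&lt;/span&gt;
terraform show

&lt;span class="c"&gt;# Print only the output values defined in the configuration&lt;/span&gt;
terraform output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;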

&lt;h2&gt;
  
  
  Google Compute instance IP address
&lt;/h2&gt;

&lt;p&gt;When Terraform completes building the Google assets, you should see the instance's Public IP Address and it should look similar to the output below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Public IP Address &lt;span class="o"&gt;=&lt;/span&gt; 104.196.11.156
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
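&lt;p&gt;That line comes from an &lt;code&gt;output&lt;/code&gt; block in the Terraform configuration. If you adapt this tutorial to your own project, the definition looks roughly like the sketch below; the resource name and attribute path here are assumptions and must match your own configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Hypothetical output definition; adjust the resource name and&lt;/span&gt;
&lt;span class="c1"&gt;# attribute path to match your configuration.&lt;/span&gt;
output "Public IP Address" {
  value = "${google_compute_instance.default.network_interface.0.access_config.0.nat_ip}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;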



&lt;p&gt;Copy the IP address listed and paste it into a web browser with port 5000 appended to the end. The complete address should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://35.237.090.42:5000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new application should render a welcome message and an image. The application is a Docker container spawned from the Docker image you built and pushed to Docker Hub in the CI/CD intro tutorial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform destroy
&lt;/h2&gt;

&lt;p&gt;Now that you have proof that your Google Compute instance and your Docker container work, you should run the &lt;a href="https://www.terraform.io/docs/commands/destroy.html"&gt;&lt;code&gt;terraform destroy&lt;/code&gt; command&lt;/a&gt; to tear down the assets you created in this tutorial. You can leave them up and running, but be aware that there is a cost associated with any assets running in Google Cloud Platform, and you are liable for those costs. Google gives a generous $300 credit for its free trial sign-up, but you could easily eat through that if you leave assets running. It's up to you, but running &lt;code&gt;terraform destroy&lt;/code&gt; will shut down any running assets.&lt;/p&gt;
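&lt;p&gt;In a terminal, from the same project directory, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Terraform lists the resources it is about to remove and asks for confirmation; type &lt;code&gt;yes&lt;/code&gt; to proceed. In non-interactive contexts you can add the &lt;code&gt;-auto-approve&lt;/code&gt; flag, but the prompt is a useful safety check when destroying real infrastructure.&lt;/p&gt;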

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, you deployed the example application to a live Google Cloud instance from Docker Hub using Terraform. This example demonstrates how to manually deploy your applications using Terraform after the application's CI/CD pipeline builds green on CircleCI. With some additional tweaking of the project's CircleCI &lt;code&gt;config.yml&lt;/code&gt; file, you can configure the pipeline to deploy automatically with Terraform. Configuring automatic Terraform deployments is a bit more complicated and requires some additional engineering, but it's definitely possible. It might be a topic for a future blog post, so stay tuned!&lt;/p&gt;
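&lt;p&gt;As a rough sketch, such a pipeline step would run Terraform non-interactively using the standard &lt;code&gt;-input=false&lt;/code&gt; flag; credential handling and remote state configuration are omitted here and depend entirely on your setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Non-interactive Terraform run, e.g. inside a CI job.&lt;/span&gt;
&lt;span class="c"&gt;# Assumes credentials and backend are configured beforehand.&lt;/span&gt;
terraform init &lt;span class="nt"&gt;-input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
terraform plan &lt;span class="nt"&gt;-input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="nt"&gt;-out&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;plan.txt
terraform apply &lt;span class="nt"&gt;-input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; plan.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;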

&lt;p&gt;If you want to learn more about CircleCI, check out the &lt;a href="https://circleci.com/docs/2.0/"&gt;documentation&lt;/a&gt; site. If you get stuck, you can reach out to the CircleCI community via the &lt;a href="https://discuss.circleci.com/"&gt;community&lt;/a&gt; forum.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>cloud</category>
      <category>circleci</category>
    </item>
  </channel>
</rss>
