<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chris ter Beke</title>
    <description>The latest articles on DEV Community by Chris ter Beke (@christerbeke).</description>
    <link>https://dev.to/christerbeke</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F54921%2Fb2bf5033-d63d-4c44-bc3a-e2107f28dfb7.jpeg</url>
      <title>DEV Community: Chris ter Beke</title>
      <link>https://dev.to/christerbeke</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/christerbeke"/>
    <language>en</language>
    <item>
      <title>Terraform with YAML: Part 2</title>
      <dc:creator>Chris ter Beke</dc:creator>
      <pubDate>Wed, 26 Jul 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/christerbeke/terraform-with-yaml-part-2-4cfh</link>
      <guid>https://dev.to/christerbeke/terraform-with-yaml-part-2-4cfh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post is the second in a series of three about supercharging your Terraform setup using YAML.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In part one of this series we learned how to use YAML to simplify the configuration of Terraform resources. We mainly focused on reducing the syntax overhead of HCL and on making the configuration accessible to non-infra engineers.&lt;/p&gt;

&lt;p&gt;In this second part we will dive into some more advanced techniques and patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic blocks
&lt;/h2&gt;

&lt;p&gt;A powerful feature of Terraform is &lt;a href="https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks"&gt;dynamic blocks&lt;/a&gt;. They allow you to specify multiple nested blocks by looping over a set or map.&lt;/p&gt;

&lt;p&gt;In the following example we add a lifecycle rule to a storage bucket that automatically deletes objects after 3 days. We also add a lifecycle rule to automatically abort an incomplete upload after 1 day.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_storage_bucket" "bucket" {
  name = "my-awesome-bucket"
  location = "EU"
  force_destroy = false

  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = 3
    }
  }

  lifecycle_rule {
    action {
      type = "AbortIncompleteMultipartUpload"
    }
    condition {
      age = 1
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can imagine that as we add more lifecycle rules, the syntax of this resource becomes long and tedious to read. Luckily, dynamic blocks can relieve some of that pain.&lt;/p&gt;

&lt;p&gt;In the following example we use a dynamic block with a local map to apply the same lifecycle rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
    lifecycle_rules = {
        "Delete" = 3
        "AbortIncompleteMultipartUpload" = 1
    }
}

resource "google_storage_bucket" "bucket" {
  name = "my-awesome-bucket"
  location = "EU"
  force_destroy = false

  dynamic "lifecycle_rule" {
    for_each = local.lifecycle_rules

    content {
        action {
            type = lifecycle_rule.key
        }
        condition {
            age = lifecycle_rule.value
        }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the amount of boilerplate code is already significantly reduced. Now let’s apply our YAML magic to it and see what happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bucket:
  name: example-bucket-123
  location: EU
  force_destroy: true
  lifecycle_rules:
    Delete: 3
    AbortIncompleteMultipartUpload: 1


locals {
  config = yamldecode(file("config.yaml"))
}

resource "google_storage_bucket" "bucket" {
  name = local.config.bucket.name
  location = local.config.bucket.location
  force_destroy = local.config.bucket.force_destroy

  dynamic "lifecycle_rule" {
    for_each = local.config.bucket.lifecycle_rules

    content {
      action {
        type = lifecycle_rule.key
      }
      condition {
        age = lifecycle_rule.value
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By specifying the actual rules in our YAML config file, it is immediately clear which rules we are enforcing on our bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple resource types
&lt;/h2&gt;

&lt;p&gt;Now let’s see how we can define more than a single resource based on a YAML configuration file. Here is an example of this for storage bucket IAM members:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bucket:
  name: example-bucket-123
  location: EU
  force_destroy: true
  admins:
    - "group:storage-admins@company.com"
    - "user:john-break-glass@company.com"


locals {
  config = yamldecode(file("config.yaml"))
}

resource "google_storage_bucket" "bucket" {
  name = local.config.bucket.name
  location = local.config.bucket.location
  force_destroy = local.config.bucket.force_destroy
}

resource "google_storage_bucket_iam_member" "admins" {
  for_each = toset(local.config.bucket.admins)

  bucket = google_storage_bucket.bucket.name
  role = "roles/storage.admin"
  member = each.key
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One pattern used here is to group configuration together in YAML and spread it out over multiple Terraform resources. This reduces the number of places in the code you need to touch in order to change your infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Up next
&lt;/h2&gt;

&lt;p&gt;Now we know the basics of YAML in Terraform, as well as some more advanced situations in which it can be useful. In the next and final part of this series, we will dive into templating and schema validation. We’ll also have a quick look at how to automate the injection of YAML config files using Terragrunt.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/terraform-with-yaml-part-2/"&gt;Terraform with YAML: Part 2&lt;/a&gt; appeared first on &lt;a href="https://xebia.com"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Guarantee unique keys in Terraform</title>
      <dc:creator>Chris ter Beke</dc:creator>
      <pubDate>Wed, 26 Jul 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/christerbeke/guarantee-unique-keys-in-terraform-1djf</link>
      <guid>https://dev.to/christerbeke/guarantee-unique-keys-in-terraform-1djf</guid>
      <description>&lt;p&gt;When using Terraform to dynamically create resources based on lists of maps, you have probably run into this issue. Consider the following list of maps that represents IAM access on a generic cloud resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals = {
    members = [
        {
            member = "contact@christerbeke.com"
            resource = "projects/12345"
            role = "roles/owner"
        },
        {
            member = "test@christerbeke.com"
            resource = "projects/12345"
            role = "roles/reader"
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want to iterate over this list to create a dynamic number of resources (using &lt;code&gt;for_each&lt;/code&gt;), we need to convert it to a &lt;code&gt;map&lt;/code&gt;. However, there is no way to construct a map key from any single attribute and guarantee uniqueness. So how can we solve this?&lt;/p&gt;

&lt;p&gt;The trick is to combine all the attributes. But simply concatenating them into one string results in very long keys. To solve this, and to get predictable key lengths, we can use an md5 hash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals = {
    unique_members = { for key, member in local.members : md5("${member.member}/${member.resource}/${member.role}") =&amp;gt; member }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This results in the following data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "0f334c4500b1faab57203343199d5c86" = {
        member = "contact@christerbeke.com"
        resource = "projects/12345"
        role = "roles/owner"
    },
    "c02f629ef8bf2b413a203c4dcafa60c1" = {
        member = "test@christerbeke.com"
        resource = "projects/12345"
        role = "roles/reader"
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can use it in our &lt;code&gt;for_each&lt;/code&gt; iterator:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "iam_member" "default" {
    for_each = local.unique_members

    member = each.value.member
    resource = each.value.resource
    role = each.value.role
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you know a simple trick to convert a list of maps into an iterable map with unique keys.&lt;/p&gt;

&lt;p&gt;As a bonus, you now get alerted about duplicate list entries, as they would result in duplicate map keys, causing Terraform to throw an error!&lt;/p&gt;
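&lt;p&gt;As an aside, the same keying recipe can be sketched outside Terraform. The following Python snippet (with &lt;code&gt;hashlib&lt;/code&gt; standing in for Terraform’s &lt;code&gt;md5&lt;/code&gt; function, reusing the member data from above) shows that the keys have a fixed length and that duplicate entries collapse onto the same key:&lt;/p&gt;

```python
import hashlib

members = [
    {"member": "contact@christerbeke.com", "resource": "projects/12345", "role": "roles/owner"},
    {"member": "test@christerbeke.com", "resource": "projects/12345", "role": "roles/reader"},
]

def unique_key(member):
    # Same recipe as the Terraform local: md5 over "member/resource/role".
    joined = "/".join([member["member"], member["resource"], member["role"]])
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

unique_members = {unique_key(member): member for member in members}

# Every key is a 32-character hex digest; a duplicate list entry would
# produce an identical key and collapse into a single map entry.
```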

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/guarantee-unique-keys-in-terraform/"&gt;Guarantee unique keys in Terraform&lt;/a&gt; appeared first on &lt;a href="https://xebia.com"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Terraform with YAML: Part 1</title>
      <dc:creator>Chris ter Beke</dc:creator>
      <pubDate>Tue, 04 Apr 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/christerbeke/terraform-with-yaml-part-1-1jm6</link>
      <guid>https://dev.to/christerbeke/terraform-with-yaml-part-1-1jm6</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This post is the first in a series of three about supercharging your Terraform setup using YAML.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Terraform is one of the most common tools to provision infrastructure from code or configuration. However, it uses a custom language called &lt;a href="https://github.com/hashicorp/hcl"&gt;HCL (HashiCorp Configuration Language)&lt;/a&gt;. In this blog post we will explore how we can replace as much HCL code as possible with &lt;a href="https://yaml.org"&gt;YAML&lt;/a&gt;, and what the benefits of doing so are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why YAML?
&lt;/h2&gt;

&lt;p&gt;One of the best properties of YAML, in my opinion, is the absence of syntax overhead. It allows you to concisely write down parameters and values. Let’s compare some HCL code and YAML where we configure some Google Pub/Sub topics and subscriptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  config = {
    topics = [
      {
        name = "my-topic"
        labels = {
          environment = "prod"
        }
        subscriptions = [
          {
            name = "my-subscription"
            push_endpoint = "https://example.com/push"
          }
        ]
      }
    ]
  }
}


topics:
  - name: my-topic
    labels:
        environment: prod
    subscriptions:
      - name: my-subscription

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the difference in number of lines is quite large. Of course this will change once we add some HCL code to import the YAML configuration, but it quickly adds up when your infrastructure grows.&lt;/p&gt;

&lt;p&gt;Loading and converting the YAML file to HCL is very easy. You can do it in one line even using the &lt;code&gt;yamldecode&lt;/code&gt; and &lt;code&gt;file&lt;/code&gt; functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  config = yamldecode(file("config.yaml"))
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result is an HCL representation of the same data as shown in the earlier example.&lt;/p&gt;

&lt;p&gt;For this particular example, the total number of lines of code using plain HCL is 18, of which 9 are purely syntax. The total number of lines using YAML, including the loading and parsing of the file, is 9. That’s a 50% reduction!&lt;/p&gt;

&lt;p&gt;For more information about YAML decoding in Terraform, check the &lt;a href="https://developer.hashicorp.com/terraform/language/functions/yamldecode"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another benefit of YAML over HCL is familiarity. Many engineers who do not work on infrastructure are not familiar with the HCL syntax and its quirks. YAML, on the other hand, is so simple and widely used that almost every engineer has used it at some point in their career. This means that if your repository contains YAML for infrastructure configuration, other types of engineers can easily adjust the configuration and deploy it (preferably via a CI/CD pipeline with proper code review). This provides a self-sufficient environment for application or data teams that work on top of the base infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple example
&lt;/h2&gt;

&lt;p&gt;Let’s build a fully working example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project:
  id: my-project-id
  region: europe-west4
bucket:
  name: example-bucket-123
  location: EU
  force_destroy: true


terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      version = "4.47.0"
    }
  }
}

locals {
  config = yamldecode(file("config.yaml"))
}

provider "google" {
  project = local.config.project.id
  region = local.config.project.region
}

resource "google_storage_bucket" "bucket" {
  name = local.config.bucket.name
  location = local.config.bucket.location
  force_destroy = local.config.bucket.force_destroy
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we create and configure a Cloud Storage bucket. We use two separate root objects (&lt;code&gt;project&lt;/code&gt; and &lt;code&gt;bucket&lt;/code&gt;) to keep the config tidy and readable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using loops
&lt;/h2&gt;

&lt;p&gt;Often we want to configure multiple resources, for example different storage buckets for different applications. Let’s adjust the example above to use a &lt;code&gt;for_each&lt;/code&gt; loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project:
  id: my-project-id
  region: europe-west4
buckets:
  - name: example-bucket-123
    location: EU
    force_destroy: true
  - name: example-bucket-456
    location: US
    force_destroy: false


terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      version = "4.47.0"
    }
  }
}

locals {
  config = yamldecode(file("config.yaml"))
}

provider "google" {
  project = local.config.project.id
  region = local.config.project.region
}

resource "google_storage_bucket" "bucket" {
  for_each = { for bucket in local.config.buckets : bucket.name =&amp;gt; bucket }

  name = each.value.name
  location = each.value.location
  force_destroy = each.value.force_destroy
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, with minimal extra code, we can now provision as many buckets as we want.&lt;/p&gt;

&lt;h2&gt;
  
  
  Up next
&lt;/h2&gt;

&lt;p&gt;Now we have a basic understanding of the benefits of using YAML configuration files in your Terraform code. In the next post in this series we will dive into more advanced topics, like how to deal with nested loops, creating multiple resource types from a single YAML configuration, and dynamic variable injection and templating. As a bonus we will look into validating YAML files using a schema to get early feedback on the configuration without having to run a Terraform plan.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/terraform-with-yaml-part-1/"&gt;Terraform with YAML: Part 1&lt;/a&gt; appeared first on &lt;a href="https://xebia.com"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
    </item>
    <item>
      <title>Creating a non-classic Google Cloud Global Load Balancer with Terraform</title>
      <dc:creator>Chris ter Beke</dc:creator>
      <pubDate>Fri, 16 Sep 2022 07:30:00 +0000</pubDate>
      <link>https://dev.to/christerbeke/creating-a-non-classic-google-cloud-global-load-balancer-with-terraform-4gna</link>
      <guid>https://dev.to/christerbeke/creating-a-non-classic-google-cloud-global-load-balancer-with-terraform-4gna</guid>
      <description>&lt;p&gt;The &lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs"&gt;Google Cloud Terraform Provider&lt;/a&gt; has resources to configure a Global External HTTP(S) Load Balancer. By default, however, this creates a &lt;a href="https://cloud.google.com/load-balancing/docs/https#identifying_the_mode"&gt;classic&lt;/a&gt; load balancer, not a new one. For new features like &lt;a href="https://cloud.google.com/load-balancing/docs/https/traffic-management-global"&gt;traffic management&lt;/a&gt; you cannot use the classic load balancer, so you definitely want to use the new one.&lt;/p&gt;

&lt;p&gt;The Google and Terraform documentation is not clear about how to do this properly. The name &lt;code&gt;classic&lt;/code&gt; does not even appear once on the documentation pages for the relevant resources.&lt;/p&gt;

&lt;p&gt;A typical Global Load Balancing stack looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_global_address" "default" {
    ...
}

resource "google_compute_backend_service" "default" {
    ...
}

resource "google_compute_url_map" "default" {
    ...
}

resource "google_compute_target_http_proxy" "default" {
    ...
}

resource "google_compute_global_forwarding_rule" "default" {
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this stack, &lt;code&gt;google_compute_backend_service&lt;/code&gt; is the load balancing back-end, and &lt;code&gt;google_compute_global_forwarding_rule&lt;/code&gt; is the front-end.&lt;/p&gt;

&lt;p&gt;In order to use a new load balancer, both the back-end and front-end need to have their &lt;code&gt;load_balancing_scheme&lt;/code&gt; configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_backend_service" "default" {
    ...
    load_balancing_scheme = "EXTERNAL_MANAGED"
}

resource "google_compute_global_forwarding_rule" "default" {
    ...
    load_balancing_scheme = "EXTERNAL_MANAGED"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Now you know how to create a non-classic Global Load Balancer in Google Cloud using Terraform. The configuration is simple, but hard to find based on the available documentation.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/creating-a-non-classic-google-cloud-global-load-balancer-with-terraform/"&gt;Creating a non-classic Google Cloud Global Load Balancer with Terraform&lt;/a&gt; appeared first on &lt;a href="https://xebia.com"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>googlecloudplatform</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A declarative approach for Dataflow Flex Templates</title>
      <dc:creator>Chris ter Beke</dc:creator>
      <pubDate>Thu, 28 Jul 2022 09:00:00 +0000</pubDate>
      <link>https://dev.to/christerbeke/a-declarative-approach-for-dataflow-flex-templates-203l</link>
      <guid>https://dev.to/christerbeke/a-declarative-approach-for-dataflow-flex-templates-203l</guid>
      <description>&lt;p&gt;Google Cloud offers a managed Apache Beam solution called Dataflow. For some time now, Dataflow has had a feature called Flex Templates. Flex Templates use Docker &lt;a href="https://xebia.com/blog/what-is-a-container/"&gt;containers&lt;/a&gt; instead of Dataflow’s custom templates. The benefit is that Docker is a well-known standard and the container can run in different environments. However, a custom metadata JSON file is still needed to point to the Docker image in your registry.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Both the CLI and the Terraform approach require you to push the Docker image to a registry.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Using the gcloud CLI
&lt;/h2&gt;

&lt;p&gt;To generate and upload the JSON file you can run &lt;code&gt;gcloud dataflow flex-template build&lt;/code&gt;. The input for this command is a bit of JSON that defines the pipeline name and parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name": "My Apache Beam pipeline",
    "parameters": [
        {
            "name": "output_table",
            "label": "BigQuery output table name.",
            "helpText": "Name of the BigQuery output table name.",
            "regexes": ["([^:]+:)?[^.]+[.].+"]
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s see what is in Cloud Storage after running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "image": "eu-docker.pkg.dev/my-gcp-project-id/dataflow-templates/example:latest",
    "sdkInfo": {
        "language": "PYTHON"
    },
    "metadata": {
        "name": "My Apache Beam pipeline",
        "parameters": [
            {
                "name": "output_table",
                "label": "BigQuery output table name.",
                "helpText": "Name of the BigQuery output table name.",
                "regexes": ["([^:]+:)?[^.]+[.].+"]
            }
        ]
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command only adds the Docker image location and some Beam SDK information before uploading it to Cloud Storage.&lt;/p&gt;
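&lt;p&gt;The wrapping itself is trivial to reproduce. The following Python sketch (plain dictionaries and &lt;code&gt;json&lt;/code&gt;, not the actual gcloud implementation) builds the same template spec from the input metadata:&lt;/p&gt;

```python
import json

# The metadata we feed into the gcloud command.
pipeline_metadata = {
    "name": "My Apache Beam pipeline",
    "parameters": [
        {
            "name": "output_table",
            "label": "BigQuery output table name.",
            "helpText": "Name of the BigQuery output table name.",
            "regexes": ["([^:]+:)?[^.]+[.].+"],
        }
    ],
}

# What ends up in Cloud Storage: the same metadata wrapped with the
# Docker image location and the Beam SDK language.
template_spec = {
    "image": "eu-docker.pkg.dev/my-gcp-project-id/dataflow-templates/example:latest",
    "sdkInfo": {"language": "PYTHON"},
    "metadata": pipeline_metadata,
}

template_json = json.dumps(template_spec, indent=4)
```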

&lt;h2&gt;
  
  
  Using Terraform
&lt;/h2&gt;

&lt;p&gt;While this works fine, it goes against the declarative approach of Terraform and other infrastructure as code tools.&lt;br&gt;&lt;br&gt;
Let’s see what it takes to generate and manage this metadata JSON file in Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_storage_bucket_object" "flex_template_metadata" {
    bucket = "my-unique-bucket"
    name = "dataflow-templates/example/metadata.json"
    content_type = "application/json"

    content = jsonencode({
        image = "eu-docker.pkg.dev/my-gcp-project-id/dataflow-templates/example:latest"
        sdkInfo = {
            language = "PYTHON"
        }
        metadata = {
            name = "My Apache Beam pipeline"
            parameters = [
                {
                    name = "output_table"
                    label = "BigQuery output table name."
                    helpText = "Name of the BigQuery output table name.",
                    regexes = ["([^:]+:)?[^.]+[.].+"]
                }
            ]
        }
    })
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can reference the storage file path in our Flex Template job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_dataflow_flex_template_job" "flex_template_job" {
    provider = google-beta

    name = "example_pipeline"
    region = "europe-west4"
    container_spec_gcs_path = "gs://${google_storage_bucket_object.flex_template_metadata.bucket}/${google_storage_bucket_object.flex_template_metadata.name}"

    parameters = {
        output_table = "my-gcp-project-id/example_dataset/example_table"
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;terraform apply&lt;/code&gt; to create both the template file and the Dataflow job.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating the Dataflow job
&lt;/h2&gt;

&lt;p&gt;We have one issue remaining. A change in the template data does not trigger an update of the Dataflow job. For this to work, we need an attribute on the Dataflow job resource to change. We can do this by including an MD5 hash of the file contents in the storage path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
    template_content = jsonencode({
        image = "eu-docker.pkg.dev/my-gcp-project-id/dataflow-templates/example:latest"
        sdkInfo = {
            language = "PYTHON"
        }
        metadata = {
            name = "My Apache Beam pipeline"
            parameters = [
                {
                    name = "output_table"
                    label = "BigQuery output table name."
                    helpText = "Name of the BigQuery output table name.",
                    regexes = ["([^:]+:)?[^.]+[.].+"]
                }
            ]
        }
    })
    template_gcs_path = "dataflow-templates/example/${base64encode(md5(local.template_content))}/metadata.json"
}

resource "google_storage_bucket_object" "flex_template_metadata" {
    bucket = "my-unique-bucket"
    name = local.template_gcs_path
    ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A change in the template data will trigger a change in the MD5 hash. This in turn changes the template storage path that we reference in the Dataflow job resource. Running &lt;code&gt;terraform apply&lt;/code&gt; now correctly updates both the JSON data in storage and the Dataflow flex job. If you are running in batch mode, it will create a new job instance.&lt;/p&gt;
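&lt;p&gt;The path versioning can be illustrated with a small Python sketch (&lt;code&gt;hashlib&lt;/code&gt; and &lt;code&gt;base64&lt;/code&gt; standing in for Terraform’s &lt;code&gt;md5&lt;/code&gt; and &lt;code&gt;base64encode&lt;/code&gt; functions; note that Terraform’s &lt;code&gt;md5&lt;/code&gt; returns the hex digest as a string, which is then base64-encoded):&lt;/p&gt;

```python
import base64
import hashlib

def template_gcs_path(template_content):
    # Mirror base64encode(md5(local.template_content)) from the HCL above:
    # md5 yields a hex digest string, which is then base64-encoded.
    digest = hashlib.md5(template_content.encode("utf-8")).hexdigest()
    encoded = base64.b64encode(digest.encode("utf-8")).decode("utf-8")
    return "dataflow-templates/example/" + encoded + "/metadata.json"

# Any change in the template content yields a different object path,
# which in turn forces Terraform to update the Dataflow job resource.
path_v1 = template_gcs_path('{"image": "example:1"}')
path_v2 = template_gcs_path('{"image": "example:2"}')
```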

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Dataflow Flex Templates allow you to use Docker images for Dataflow jobs. A &lt;code&gt;gcloud&lt;/code&gt; CLI command is normally required to build and upload some JSON metadata. We can replicate this behavior using Terraform code, allowing for a 100% declarative infrastructure-as-code solution.&lt;/p&gt;

&lt;p&gt;For a full code example, check my Dataflow Flex Terraform module on &lt;a href="https://github.com/ChrisTerBeke/terraform-playground/tree/main/terraform/modules/gcp_dataflow_flex"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/a-declarative-approach-for-dataflow-flex-templates/"&gt;A declarative approach for Dataflow Flex Templates&lt;/a&gt; appeared first on &lt;a href="https://xebia.com"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>googlecloudplatform</category>
      <category>terraform</category>
    </item>
    <item>
      <title>A minimal setup for a high availability service using Cloud Run</title>
      <dc:creator>Chris ter Beke</dc:creator>
      <pubDate>Tue, 11 Jan 2022 14:16:13 +0000</pubDate>
      <link>https://dev.to/christerbeke/a-minimal-setup-for-a-high-availability-service-using-cloud-run-1on2</link>
      <guid>https://dev.to/christerbeke/a-minimal-setup-for-a-high-availability-service-using-cloud-run-1on2</guid>
      <description>&lt;p&gt;In this blog post, I will explain what is needed to set up a web service that runs in multiple GCP regions.&lt;br&gt;&lt;br&gt;
The main reasons to deploy your service in more than one region are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handle single-region failures so that your application is highly available.&lt;/li&gt;
&lt;li&gt;Route traffic to the nearest region so your users experience faster loading times.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Create Cloud Run deployments
&lt;/h2&gt;

&lt;p&gt;A Cloud Run service only lives in a single region, so for a multi-region setup we will need to deploy the same container in multiple regions.&lt;br&gt;&lt;br&gt;
Luckily using a Terraform &lt;code&gt;for_each&lt;/code&gt; loop, this does not add too much additional configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  locations = ["europe-west4", "europe-west1"]
}

resource "google_cloud_run_service" "service" {
  for_each = toset(local.locations)

  name = "service-${each.key}"
  location = each.key

  ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I recommend including the region name in the name of the Cloud Run service so you can easily find the services and guarantee uniqueness.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We use &lt;code&gt;local.locations&lt;/code&gt; to define the regions we want to deploy in so we can re-use that configuration in other resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Set up load balancing ingress
&lt;/h2&gt;

&lt;p&gt;By default, Cloud Run gives a service a publicly available &lt;code&gt;.run.app&lt;/code&gt; URL.&lt;br&gt;&lt;br&gt;
However, this points to a single Cloud Run service, and for a multi-region setup we will need multiple services.&lt;br&gt;&lt;br&gt;
To do this, we will need to create a Global Load Balancer that uses &lt;a href="https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts"&gt;Serverless Network Endpoint Groups&lt;/a&gt; (NEGs) as backend.&lt;br&gt;&lt;br&gt;
These NEGs then route the traffic to the Cloud Run instances.&lt;br&gt;&lt;br&gt;
Let’s set up the needed resource for our ingress stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_global_address" "ip" {
  name = "service-ip"
}

resource "google_compute_region_network_endpoint_group" "neg" {
  for_each = toset(local.locations)

  name = "neg-${each.key}"
  network_endpoint_type = "SERVERLESS"
  region = each.key

  cloud_run {
    service = google_cloud_run_service.service[each.key].name
  }
}

resource "google_compute_backend_service" "backend" {
  name = "backend"
  protocol = "HTTP"

  dynamic "backend" {
    for_each = toset(local.locations)

    content {
      group = google_compute_region_network_endpoint_group.neg[backend.key].id
    }
  }
}

resource "google_compute_url_map" "url_map" {
  name = "url-map"
  default_service = google_compute_backend_service.backend.id
}

resource "google_compute_target_http_proxy" "http_proxy" {
  name = "http-proxy"
  url_map = google_compute_url_map.url_map.id
}

resource "google_compute_global_forwarding_rule" "frontend" {
  name = "frontend"
  target = google_compute_target_http_proxy.http_proxy.id
  port_range = "80"
  ip_address = google_compute_global_address.ip.address
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Notice how we are re-using &lt;code&gt;local.locations&lt;/code&gt; to create the regional resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No one can call our service yet though, because we need to tell GCP that this is a public service that can be invoked by everyone:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "google_iam_policy" "noauth" {
  binding {
    role = "roles/run.invoker"
    members = ["allUsers"]
  }
}

resource "google_cloud_run_service_iam_policy" "noauth" {
  for_each = toset(local.locations)

  service = google_cloud_run_service.service[each.key].name
  location = google_cloud_run_service.service[each.key].location
  policy_data = data.google_iam_policy.noauth.policy_data
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy and call the service
&lt;/h2&gt;

&lt;p&gt;Let’s add an output for the static IP address so we know what to call after deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "static_ip" {
  value = google_compute_global_address.ip.address
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run &lt;code&gt;terraform apply&lt;/code&gt; to deploy everything and validate that it returns the “Hello World” container (for example using &lt;code&gt;curl $(terraform output --raw static_ip)&lt;/code&gt;).&lt;br&gt;&lt;br&gt;
The Google Cloud Console also gives a nice visual overview of how the requests are routed:&lt;br&gt;&lt;br&gt;
 &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mcpj7wJN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://xebia.com/wp-content/uploads/2023/02/a-minimal-setup-for-a-high-availability-service-using-cloud-run-balanced-900x486-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mcpj7wJN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://xebia.com/wp-content/uploads/2023/02/a-minimal-setup-for-a-high-availability-service-using-cloud-run-balanced-900x486-1.png" alt="balanced.png" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Now you know how to deploy Google Cloud Run services in multiple regions. Give it a try with &lt;a href="https://xebia.com/blog/how-to-deploy-privatebin-on-google-cloud-run-and-google-cloud-storage/"&gt;PrivateBin&lt;/a&gt;!&lt;/p&gt;
&lt;h2&gt;
  
  
  Bonus: enable Cloud CDN for even faster loading times
&lt;/h2&gt;

&lt;p&gt;To prevent static assets from being served from your container, you can enable Cloud CDN to automatically serve them from Google’s edge locations instead of the container itself.&lt;br&gt;&lt;br&gt;
Cloud CDN will automatically detect which routes are static resources, but you can manually override this configuration as well.&lt;br&gt;&lt;br&gt;
Simply add the &lt;code&gt;enable_cdn&lt;/code&gt; flag to the backend service resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_backend_service" "backend" {
  name = "backend"
  protocol = "HTTP"
  enable_cdn = true

  ...
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By default, a single Cloud Run service can only be deployed in one region.&lt;br&gt;&lt;br&gt;
By using a global load balancer, we can deploy a Cloud Run service in multiple regions to bring high availability and low latency.&lt;br&gt;&lt;br&gt;
The &lt;code&gt;for_each&lt;/code&gt; loop feature of Terraform makes this very easy to set up.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/a-minimal-setup-for-a-high-availability-service-using-cloud-run/"&gt;A minimal setup for a high availability service using Cloud Run&lt;/a&gt; appeared first on &lt;a href="https://xebia.com"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>googlecloudplatform</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
