<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luke Livingstone</title>
    <description>The latest articles on DEV Community by Luke Livingstone (@lukedoesinfra).</description>
    <link>https://dev.to/lukedoesinfra</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1391809%2F8b9a6053-dca6-464b-8413-1e8000165e63.jpeg</url>
      <title>DEV Community: Luke Livingstone</title>
      <link>https://dev.to/lukedoesinfra</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lukedoesinfra"/>
    <language>en</language>
    <item>
      <title>HashiCorp Vault at Super</title>
      <dc:creator>Luke Livingstone</dc:creator>
      <pubDate>Thu, 22 May 2025 14:41:55 +0000</pubDate>
      <link>https://dev.to/superpayments/hashicorp-vault-at-super-4lck</link>
      <guid>https://dev.to/superpayments/hashicorp-vault-at-super-4lck</guid>
      <description>&lt;p&gt;At Super, we use HashiCorp Vault to securely store the secrets required by our microservices running on Kubernetes.&lt;/p&gt;

&lt;p&gt;We’ve been long-time fans of Vault. Our Platform team has previous experience deploying and maintaining it, so choosing Vault for our current setup was an easy decision from a knowledge and reliability standpoint.&lt;/p&gt;

&lt;p&gt;Drawing on lessons from past implementations, we were able to build something robust and scalable. Our infrastructure is hosted entirely on AWS and is segmented across multiple accounts. We maintain three separate workload accounts (Staging, Mock, and Production), each running Super's microservices in Kubernetes, alongside an Infrastructure account for Platform tooling.&lt;/p&gt;

&lt;p&gt;Rather than deploying and maintaining a separate Vault cluster for each environment, we opted for a centralised approach. This decision reduced operational overhead and significantly improved the developer experience, avoiding the complexity of managing and switching between multiple Vault interfaces.&lt;/p&gt;




&lt;p&gt;To get started, we deployed our Vault infrastructure via Terraform. Vault’s storage backend is powered by Amazon S3, with DynamoDB providing high availability. We also use AWS KMS for auto-unseal functionality, eliminating the need for manual intervention when restarting Vault. Vault itself is installed using the &lt;a href="https://github.com/hashicorp/vault-helm" rel="noopener noreferrer"&gt;official HashiCorp Helm chart&lt;/a&gt;.&lt;/p&gt;
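&lt;p&gt;As a rough sketch, a Vault server configuration along these lines would wire those pieces together (bucket, table, and key names here are illustrative, not our real ones):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# S3 provides durable storage; DynamoDB provides the HA lock.
storage "s3" {
  bucket = "example-vault-storage"
  region = "eu-west-2"
}

ha_storage "dynamodb" {
  ha_enabled = "true"
  table      = "example-vault-ha"
  region     = "eu-west-2"
}

# KMS auto-unseal removes the need for manual unsealing on restart.
seal "awskms" {
  region     = "eu-west-2"
  kms_key_id = "alias/example-vault-unseal"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;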

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kxyvryss9ywn7cwwixn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kxyvryss9ywn7cwwixn.jpg" alt="A overview of the Vault infrastructure" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we provisioned an internal Network Load Balancer (NLB) and exposed it through a VPC Endpoint Service. This design choice enables secure, cross-account connectivity to Vault using VPC Interface Endpoints—avoiding the complexity and security risks of VPC peering.&lt;/p&gt;
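&lt;p&gt;In Terraform, the endpoint service side of this might look something like the following sketch (resource names and variables are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Expose the internal NLB fronting Vault as a VPC Endpoint Service.
resource "aws_vpc_endpoint_service" "vault" {
  acceptance_required        = false
  network_load_balancer_arns = [aws_lb.vault.arn]

  # Workload accounts allowed to create interface endpoints.
  allowed_principals = var.workload_account_arns
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;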

&lt;p&gt;To simplify service discovery within our Kubernetes clusters, we created human-readable internal services that resolve &lt;code&gt;super.vault&lt;/code&gt; to the appropriate VPC interface endpoint. This gives our services a clean and consistent way to talk to Vault, regardless of the environment they’re running in.&lt;/p&gt;
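&lt;p&gt;One way to achieve this, sketched below with hypothetical names, is an ExternalName Service that points at the interface endpoint's DNS name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CoreDNS resolves this Service to the VPC interface endpoint,
# giving pods a stable, human-readable name for Vault.
resource "kubernetes_service" "vault" {
  metadata {
    name      = "vault"
    namespace = "super"
  }

  spec {
    type          = "ExternalName"
    external_name = var.vault_endpoint_dns # interface endpoint DNS name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;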




&lt;p&gt;That wraps up our simple yet effective centralised Vault infrastructure here at Super. By consolidating our setup, we've kept operations streamlined, secure, and developer-friendly across all environments.&lt;/p&gt;

&lt;p&gt;If you're interested in hearing more or want us to dive deeper into any aspect of our Vault implementation—be it authentication flows, secret injection, or scaling—feel free to reach out. We'd love to share more!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>vault</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Karpenter on EKS Fargate</title>
      <dc:creator>Luke Livingstone</dc:creator>
      <pubDate>Wed, 17 Apr 2024 15:04:30 +0000</pubDate>
      <link>https://dev.to/superpayments/using-fargate-on-eks-for-karpenter-37mk</link>
      <guid>https://dev.to/superpayments/using-fargate-on-eks-for-karpenter-37mk</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We're reinventing payments. Super powers free payments for businesses and more rewarding shopping for customers, so that everyone wins. &lt;a href="https://www.superpayments.com/" rel="noopener noreferrer"&gt;https://www.superpayments.com/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;We're using &lt;a href="https://karpenter.sh/" rel="noopener noreferrer"&gt;Karpenter&lt;/a&gt; to manage our Kubernetes node scaling. &lt;/p&gt;

&lt;p&gt;We're big fans of how fast Karpenter can provision just-in-time nodes across our EKS clusters, but there was one sticking point: for obvious reasons, the Karpenter controller pods can't run on Karpenter-managed nodes.&lt;/p&gt;

&lt;p&gt;To get around this, we used AWS EKS managed node groups as &lt;code&gt;init&lt;/code&gt; nodes and pinned Karpenter to them. We provisioned a node group with a minimum and maximum of two nodes, mostly for Karpenter (although other pods could run on these nodes too, to avoid wasting compute resources!).&lt;/p&gt;

&lt;p&gt;The downside is that updating managed node groups is slow; updating two nodes, with a maximum of one node unavailable at a time, took between six and ten minutes, and we wanted to speed up this process.&lt;/p&gt;

&lt;p&gt;The simple solution? Remove the init nodes! But then, where do we run Karpenter? Enter Fargate.&lt;/p&gt;

&lt;p&gt;We created an EKS Fargate profile via our EKS Terraform module with a selector for the Karpenter namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_fargate_profile" "karpenter" {
  cluster_name           = aws_eks_cluster.cluster.name
  fargate_profile_name   = "karpenter"
  pod_execution_role_arn = aws_iam_role.fargate.arn
  subnet_ids             = var.private_subnets

  selector {
    namespace = "karpenter"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Pod Execution Role
&lt;/h3&gt;

&lt;p&gt;If you've ever used ECS, you'll be familiar with the task execution role. For Fargate on EKS, the equivalent pod execution role is straightforward, with two AWS managed policies attached:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "fargate" {
  name = "${var.cluster_name}-fargate"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks-fargate-pods.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "fargate" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
  role       = aws_iam_role.fargate.name
}

resource "aws_iam_role_policy_attachment" "fargate_eks_cni" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.fargate.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Karpenter Helm Install
&lt;/h3&gt;

&lt;p&gt;We use the hashicorp/helm Terraform provider to install both the Karpenter and CRD charts directly from our EKS module. This ensures that Karpenter is up and running before anything else, ready to provision compute.&lt;/p&gt;
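&lt;p&gt;A minimal sketch of that &lt;code&gt;helm_release&lt;/code&gt; (chart version and values here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "helm_release" "karpenter" {
  name             = "karpenter"
  namespace        = "karpenter" # must match the Fargate profile selector
  create_namespace = true
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter"
  version          = "0.36.0" # illustrative

  set {
    name  = "settings.clusterName"
    value = aws_eks_cluster.cluster.name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;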

&lt;p&gt;Next, we set the namespace for the Karpenter chart to match the selector in the Fargate profile, which in our case is karpenter, and we're off to the races!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE                                                  NOMINATED NODE 
karpenter-75c664b7cb-9z9lr   1/1     Running   0          5d    &amp;lt;snip&amp;gt;   fargate-&amp;lt;snip&amp;gt;.eu-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
karpenter-75c664b7cb-fxhb2   1/1     Running   0          5d    &amp;lt;snip&amp;gt;    fargate-&amp;lt;snip&amp;gt;.eu-west-2.compute.internal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Notes
&lt;/h3&gt;

&lt;p&gt;By default we're using Fargate's minimum resources which are &lt;code&gt;0.25 vCPU&lt;/code&gt; and &lt;code&gt;0.5GB RAM&lt;/code&gt; per task.&lt;/p&gt;

&lt;p&gt;You &lt;a href="https://github.com/aws/containers-roadmap/issues/1629" rel="noopener noreferrer"&gt;can't yet specify ARM&lt;/a&gt; when creating Fargate tasks on EKS, so we're using x86, but the cost is around $20 per month for both tasks.&lt;/p&gt;

&lt;p&gt;We've generally reduced the number of nodes across our EKS clusters too, resulting in some cost savings and much less waiting around for the Platform team!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>karpenter</category>
      <category>autoscaling</category>
      <category>devops</category>
    </item>
    <item>
      <title>Terraform Modules at Super</title>
      <dc:creator>Luke Livingstone</dc:creator>
      <pubDate>Thu, 28 Mar 2024 12:30:55 +0000</pubDate>
      <link>https://dev.to/superpayments/terraform-modules-at-super-48kp</link>
      <guid>https://dev.to/superpayments/terraform-modules-at-super-48kp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We're reinventing payments. Super powers free payments for businesses and more rewarding shopping for customers, so that everyone wins. &lt;a href="https://www.superpayments.com/" rel="noopener noreferrer"&gt;https://www.superpayments.com/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;Like most startups, we use Terraform to manage and deploy our infrastructure. This post covers how we use Terraform modules at Super to adhere to the DRY principle.&lt;/p&gt;

&lt;p&gt;Early in our Terraform refactor, we aimed to invest in modules. Our goal was to promote high reusability while minimising code.&lt;/p&gt;

&lt;p&gt;At the time of writing, Super has around 70 Terraform modules in use across 10 providers. Some of the modules are small (e.g. IAM Role) and some are larger (e.g. EKS Cluster).&lt;/p&gt;




&lt;h3&gt;
  
  
  Template Module &amp;amp; Code Style 📝
&lt;/h3&gt;

&lt;p&gt;To keep module creation in line with our style guide, we have a template module. Some of the rules below are best practice and some are specific to Super.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We don't include provider configurations&lt;/li&gt;
&lt;li&gt;We don't include any backend configuration &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;data.tf&lt;/code&gt; file is used for all &lt;code&gt;data&lt;/code&gt; resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; file is used for all output resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt; file is used for all variables&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;versions.tf&lt;/code&gt; file is used for &lt;code&gt;required_providers&lt;/code&gt; and &lt;code&gt;required_version&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
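&lt;p&gt;For example, a module's &lt;code&gt;versions.tf&lt;/code&gt; might look like this (version constraints are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&gt;= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "&gt;= 5.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;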

&lt;h3&gt;
  
  
  Why no provider?! 😱
&lt;/h3&gt;

&lt;p&gt;The primary reason we avoid including a provider in our modules is to facilitate nesting. Nesting modules helps keep resources in a standardised format across modules.&lt;/p&gt;

&lt;p&gt;When a module is used inside another module, Terraform deems it incompatible with &lt;code&gt;count&lt;/code&gt;, &lt;code&gt;for_each&lt;/code&gt;, and &lt;code&gt;depends_on&lt;/code&gt; if it has its own local provider configuration.&lt;/p&gt;

&lt;p&gt;We started out removing providers only from nested modules, but decided to use Terragrunt's &lt;a href="https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#generate" rel="noopener noreferrer"&gt;generate&lt;/a&gt; and &lt;a href="https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#include" rel="noopener noreferrer"&gt;include&lt;/a&gt; blocks to remove providers from all modules.&lt;/p&gt;

&lt;p&gt;Let's take the following directory structure for AWS as an example. We have a folder for the AWS region (eu-west-2) along with a few HCL files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;├── super-staging
│   ├── eu-west-2
│   ├── aws.hcl
│   ├── terragrunt.hcl
│   └── vault.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;aws.hcl&lt;/code&gt; file uses a Terragrunt generate block to generate a file in the Terragrunt working directory (where Terraform is invoked).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;generate&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;path&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"aws.tf"&lt;/span&gt;
  &lt;span class="nx"&gt;if_exists&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"overwrite_terragrunt"&lt;/span&gt;
  &lt;span class="nx"&gt;contents&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
provider "aws" {
  region = "eu-west-2"
  default_tags {
    tags = {
      environment = "staging",
    }
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When using a module with Terragrunt, you can then use an include block with the &lt;code&gt;find_in_parent_folders&lt;/code&gt; function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;include&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;path&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;find_in_parent_folders&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"aws.hcl"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git@github.com:organisation/terraform-example-module.git?ref=v1.0.0"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Remote State
&lt;/h3&gt;

&lt;p&gt;We use S3 as our state store, along with DynamoDB for locking, all encrypted with KMS.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;terragrunt.hcl&lt;/code&gt; at the root of the directory includes three things: the Terragrunt &lt;code&gt;remote_state&lt;/code&gt; block, an &lt;code&gt;iam_role&lt;/code&gt;, and some default &lt;code&gt;inputs&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;remote_state&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt;

  &lt;span class="nx"&gt;generate&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;path&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"backend.tf"&lt;/span&gt;
    &lt;span class="nx"&gt;if_exists&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"overwrite"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"super-staging-eu-west-2-example-bucket"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${path_relative_to_include()}/terraform.tfstate"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eu-west-2"&lt;/span&gt;
    &lt;span class="nx"&gt;encrypt&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;dynamodb_table&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"super-staging-eu-west-2-example-table"&lt;/span&gt;
    &lt;span class="nx"&gt;kms_key_id&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"alias/s3-super-staging-eu-west-2-example-kms"&lt;/span&gt;
    &lt;span class="nx"&gt;disable_bucket_update&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;iam_role&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::&amp;lt;snip&amp;gt;:role/example-role"&lt;/span&gt;

&lt;span class="nx"&gt;inputs&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;environment&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"staging"&lt;/span&gt;
  &lt;span class="nx"&gt;aws_account_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;snip&amp;gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;service_owner&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"devops"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then add an include block, as we did for the AWS provider. By default, &lt;code&gt;find_in_parent_folders&lt;/code&gt; searches for the first &lt;code&gt;terragrunt.hcl&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;include&lt;/span&gt; &lt;span class="s2"&gt;"root"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;path&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;find_in_parent_folders&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;expose&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Versioning 🔢
&lt;/h3&gt;

&lt;p&gt;Our Platform team are enthusiasts of semantic versioning, and we also use Conventional Commits.&lt;/p&gt;

&lt;p&gt;We have a simple GitHub Actions job on each module repository that uses the &lt;code&gt;semantic-release-action&lt;/code&gt;. We use the &lt;code&gt;@semantic-release/commit-analyzer&lt;/code&gt; plugin with the &lt;a href="https://www.conventionalcommits.org/en/v1.0.0/" rel="noopener noreferrer"&gt;&lt;code&gt;conventionalcommits&lt;/code&gt;&lt;/a&gt; preset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Release&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cycjimmy/semantic-release-action@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;semantic_version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;23.0.2&lt;/span&gt;
          &lt;span class="na"&gt;extra_plugins&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;@semantic-release/changelog@6.0.3&lt;/span&gt;
            &lt;span class="s"&gt;@semantic-release/git@10.0.1&lt;/span&gt;
            &lt;span class="s"&gt;conventional-changelog-conventionalcommits@7.0.2&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.CI_GITHUB_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>terraform</category>
      <category>iac</category>
      <category>terragrunt</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
