<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Philipp Strube</title>
    <description>The latest articles on DEV Community by Philipp Strube (@pst418).</description>
    <link>https://dev.to/pst418</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F350321%2F8a5db487-a172-4408-9dea-6d90e7994e30.jpg</url>
      <title>DEV Community: Philipp Strube</title>
      <link>https://dev.to/pst418</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pst418"/>
    <language>en</language>
    <item>
      <title>Goodbye Cloud, Hello CLI: Sunsetting Kubestack Cloud</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Tue, 09 May 2023 19:53:11 +0000</pubDate>
      <link>https://dev.to/kubestack/goodbye-cloud-hello-cli-sunsetting-kubestack-cloud-12l4</link>
      <guid>https://dev.to/kubestack/goodbye-cloud-hello-cli-sunsetting-kubestack-cloud-12l4</guid>
      <description>&lt;p&gt;I've recently released a major update for Kubestack, the &lt;a href="https://www.kubestack.com/"&gt;Terraform framework for Kubernetes platform engineering teams&lt;/a&gt;. This update moves all functionality previously provided by Kubestack Cloud into the &lt;code&gt;kbst&lt;/code&gt; CLI.&lt;/p&gt;

&lt;p&gt;I decided to make this change because Kubestack Cloud only improved the developer experience on day one. Once the platform was exported to Terraform, the UI was no longer helpful on day two and all following days.&lt;/p&gt;

&lt;p&gt;But my goal is to improve the developer experience and day-to-day lives of platform engineering teams at all times. This latest &lt;a href="https://github.com/kbst/kbst/releases/tag/v0.2.1"&gt;&lt;code&gt;kbst&lt;/code&gt; release&lt;/a&gt; is a major step towards achieving this goal.&lt;/p&gt;

&lt;p&gt;If this is the first time you're hearing about Kubestack Cloud: it was a browser-based UI that let users design a Kubernetes platform in a step-by-step wizard and then export and download the designed platform's Terraform code.&lt;/p&gt;

&lt;p&gt;However, the disconnect between the UI and the code in the repository on a developer's local machine diminished the value of Kubestack Cloud on day two and beyond. To address this, I moved all of this functionality into the &lt;code&gt;kbst&lt;/code&gt; CLI, which has much easier access to the local code.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;kbst&lt;/code&gt; CLI, which previously only scaffolded new repositories, now has CRUD (create, read, update, delete) functionality for clusters, node pools, and services. This means users can use the CLI to scaffold Terraform code to add or remove clusters, node pools, or services inside their existing Kubestack repositories.&lt;/p&gt;

&lt;p&gt;If you want to see the new CLI in action, give the &lt;a href="https://www.kubestack.com/framework/tutorial/"&gt;updated tutorial a try&lt;/a&gt; or read the documentation on adding and removing &lt;a href="https://www.kubestack.com/framework/documentation/clusters/"&gt;cluster modules&lt;/a&gt;, &lt;a href="https://www.kubestack.com/framework/documentation/node-pools/"&gt;node pool modules&lt;/a&gt; or &lt;a href="https://www.kubestack.com/framework/documentation/services/"&gt;service modules&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But if you'd like to learn more about how this works under the hood, keep reading.&lt;/p&gt;

&lt;h2&gt;How this works&lt;/h2&gt;

&lt;p&gt;If you're already familiar with Kubestack, you know that Kubestack repositories follow a convention-over-configuration approach to define the clusters, node pools, and services that make up a Kubernetes platform in a single Terraform codebase. At the root of each repository, there are several &lt;code&gt;.tf&lt;/code&gt; files that follow a specific naming convention. These files contain module calls that define each platform component.&lt;/p&gt;
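&lt;p&gt;As a purely hypothetical sketch of that convention, one of these &lt;code&gt;.tf&lt;/code&gt; files could contain a cluster module call along these lines. The module name, source ref, and attributes below are illustrative, not copied from a real Kubestack repository.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# hypothetical example of a cluster module call in a Kubestack repository
module "eks_zero" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.18.1"

  configuration = {
    # per-environment configuration, with environments inheriting from a base
    apps = {
      name_prefix = "eks-zero"
    }
    ops = {}
  }
}
&lt;/code&gt;&lt;/pre&gt;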

&lt;p&gt;To add or remove components, or update the versions of existing component modules, the &lt;code&gt;kbst&lt;/code&gt; CLI parses the necessary subset of Terraform code to understand the components of the platform. You can list the Kubestack component modules it discovered using the &lt;code&gt;kbst list&lt;/code&gt; command. By appending &lt;code&gt;--all&lt;/code&gt; to the list command, you can also see any non-Kubestack modules.&lt;/p&gt;
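&lt;p&gt;For example, run from the root of a Kubestack repository (the output depends on the modules your repository contains):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# list the Kubestack component modules discovered in the current repository
kbst list

# additionally include any non-Kubestack modules
kbst list --all
&lt;/code&gt;&lt;/pre&gt;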

&lt;p&gt;You can add node pools or services to existing clusters, or add more clusters from the same or even a different cloud provider. The CLI will scaffold the required additional &lt;code&gt;.tf&lt;/code&gt; files and update the Dockerfile's &lt;code&gt;FROM&lt;/code&gt; line to specify the correct image, in case you change from a single-cloud to a multi-cloud setup or vice versa. Likewise, it will also remove module calls and the respective &lt;code&gt;.tf&lt;/code&gt; files if you remove a service, a node pool, or even a cluster from your platform.&lt;/p&gt;

&lt;p&gt;But don't worry, the &lt;code&gt;kbst&lt;/code&gt; CLI &lt;strong&gt;only changes local files&lt;/strong&gt; and never changes any cloud or Kubernetes resources.&lt;/p&gt;

&lt;p&gt;You can use it to avoid writing repetitive boilerplate code or manually deleting module calls and Terraform files, while still owning your codebase and retaining the ability to extend or modify the code to meet specific needs.&lt;/p&gt;

&lt;p&gt;Once you're happy with the code, you can follow the &lt;a href="https://www.kubestack.com/framework/documentation/gitops-process/"&gt;Kubestack GitOps workflow&lt;/a&gt; to peer-review, validate, and promote changes to your platform's environments as usual.&lt;/p&gt;

&lt;p&gt;In conclusion, the shift from Kubestack Cloud to the &lt;code&gt;kbst&lt;/code&gt; CLI provides a better developer experience not only on day one, but also on day two and makes it easier for platform engineering teams to manage their Kubernetes based platforms.&lt;/p&gt;

&lt;h2&gt;What happened to the platforms I designed with Kubestack Cloud?&lt;/h2&gt;

&lt;p&gt;If you have previously designed a platform with Kubestack Cloud, you can sign in with your existing user and will see instructions on how to scaffold your existing platforms using the new CLI.&lt;/p&gt;

&lt;p&gt;Here's an example screenshot of what that looks like.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BLqqqKSW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hr1gr9ms8yseak67cpek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BLqqqKSW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hr1gr9ms8yseak67cpek.png" alt="Screenshot of the Kubestack Cloud export" width="800" height="912"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>gitops</category>
    </item>
    <item>
      <title>Getting rigorous about investing in the Kubestack project</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Sun, 27 Nov 2022 14:52:30 +0000</pubDate>
      <link>https://dev.to/kubestack/getting-rigorous-about-investing-in-the-kubestack-project-4oj3</link>
      <guid>https://dev.to/kubestack/getting-rigorous-about-investing-in-the-kubestack-project-4oj3</guid>
      <description>&lt;p&gt;Sometimes when you spend a long time solving a problem, it makes it harder to see your solution clearly.&lt;/p&gt;

&lt;p&gt;In 12 years of helping companies adopt modern cloud computing, I saw the same snags repeat across many organizations. From these lessons, I built Kubestack as guardrails that make it easier to avoid the pain in the first place. Kubestack has been used in companies large and small for years, but I haven’t always known where it has been most helpful to its users. Without knowing this, it’s hard to bring more people in as both users and contributors. So earlier this year, I contracted with &lt;a href="https://www.anahevesi.com/" rel="noopener noreferrer"&gt;Ana Hevesi&lt;/a&gt; to support Kubestack's open source efforts.&lt;/p&gt;

&lt;p&gt;Ana operates a developer experience consultancy. After working in technical community building for companies like Stack Overflow and Nodejitsu, Ana now works with devtools founders to create evidence-based approaches for growing their ecosystems.&lt;/p&gt;

&lt;p&gt;We started by doing some research into how Kubestack serves your goals.&lt;/p&gt;

&lt;h2&gt;Research methods&lt;/h2&gt;

&lt;p&gt;Our objective was to learn about users’ career trajectories and aspirations, and get a clear picture of what role Kubestack plays in your success.&lt;/p&gt;

&lt;p&gt;Ana recommended we aim for 5 interviews, citing it as a good “goldilocks zone” for an initial quantity of data to work with. I then reached out to a spectrum of new and long-tenured Kubestack users to ask for their time in a 60-minute user interview.&lt;/p&gt;

&lt;p&gt;Ana wrote a standard interview script that included bandwidth for conversational “side quests.” Afterwards, Ana analyzed the recordings, picked out recurring themes, and came to me with conclusions and recommendations.&lt;/p&gt;

&lt;h2&gt;Areas of positive impact&lt;/h2&gt;

&lt;h3&gt;Kubestack helps careers&lt;/h3&gt;

&lt;p&gt;Participants attributed their use of Kubestack to positive career outcomes, such as developing a reputation for reliably delivering for users, or scaling on a tight timeframe with limited prior experience. Others reported it was a key learning tool when they were just starting as platform engineers.&lt;/p&gt;

&lt;h3&gt;Works so well it disappears&lt;/h3&gt;

&lt;p&gt;The most consistent feedback we received was that users can assume Kubestack is just going to work. Multiple participants had been relying on the framework for many months without needing to give it a second thought.&lt;/p&gt;

&lt;h2&gt;Areas to improve&lt;/h2&gt;

&lt;h3&gt;Documentation for advanced features needs improvement&lt;/h3&gt;

&lt;p&gt;Kubestack works great for months on end for most orgs setting up their first K8s cluster, but those who wanted to modify Kubestack outside of existing use cases told us error messages and upgrade processes were opaque.&lt;/p&gt;

&lt;h3&gt;Backwards compatibility and multi-cloud support present friction to open source contributions&lt;/h3&gt;

&lt;p&gt;Adding new features requires working knowledge of both Terraform and cloud provider functionality across historical versions and, at times, their interactions with one another. Furthermore, while Kubestack is committed to supporting EKS, AKS, and GKE, a contributor may wish to implement functionality for only one of these cloud providers. Inviting more PRs from a wider array of contributors requires either a plan for tiered support of legacy versions or a defined contributor scope to accommodate this complexity.&lt;/p&gt;

&lt;h2&gt;How we’re applying these findings&lt;/h2&gt;

&lt;h3&gt;Connecting with the people who need us most&lt;/h3&gt;

&lt;p&gt;Kubestack makes a huge impact on early-stage teams and emerging professionals. We’re exploring ways to better tailor our communication and outreach to make sure they know about the opportunities this framework provides, improving both adoption and contributions to the project. Kubestack only succeeds because you succeed.&lt;/p&gt;

&lt;h3&gt;Benefits before features&lt;/h3&gt;

&lt;p&gt;The current iteration of Kubestack’s landing page assumes a fairly high level of existing knowledge of the platform engineering space. As such, an upcoming iteration of the Kubestack site will aim to engage folks who aren’t already deep in the jargon and progressively bring them into the fold, while still being legible to seasoned professionals.&lt;/p&gt;

&lt;h3&gt;Open source participation onramps&lt;/h3&gt;

&lt;p&gt;Since enabling users to learn from each other and communicating where the project is going are important parts of growing an open source community, we’ll be experimenting with office hours and public communication about recent releases. Scheduling details are coming soon.&lt;/p&gt;

&lt;h2&gt;Leveling up, together&lt;/h2&gt;

&lt;p&gt;I created Kubestack so that folks coming to Kubernetes for the first time could take immediate advantage of the separation of concerns that containers provide. User research says that this works as intended!&lt;/p&gt;

&lt;p&gt;Now comes the iterative task of communicating my own knowledge and experience in ways that make it easier to build together, while learning from your use of the project to fill in its gaps. Ultimately, the intent is a healthy community where we’re all working together to make the project better serve your needs.&lt;/p&gt;

&lt;p&gt;Finally, a big thank you to Tomas, AJ, Brendan, Christoph, and Mark for your time and candor. Kubestack is better for it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Better Way to Provision Kubernetes Resources Using Terraform</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Wed, 04 May 2022 18:01:47 +0000</pubDate>
      <link>https://dev.to/kubestack/a-better-way-to-provision-kubernetes-resources-using-terraform-355n</link>
      <guid>https://dev.to/kubestack/a-better-way-to-provision-kubernetes-resources-using-terraform-355n</guid>
      <description>&lt;p&gt;Terraform is immensely powerful when it comes to defining and maintaining infrastructure as code. In combination with a declarative API, like a cloud provider API, it can determine, preview, and apply changes to the codified infrastructure.&lt;/p&gt;

&lt;p&gt;Consequently, it is common for teams to use Terraform to define the infrastructure of their Kubernetes clusters. And as a platform to build platforms, Kubernetes commonly requires a number of additional services before workloads can be deployed. Think of ingress controllers or logging and monitoring agents and so on. But despite Kubernetes' own declarative API, and the obvious benefits of maintaining a cluster's infrastructure and services from the same infrastructure as code repository, Terraform is far from the first choice to provision Kubernetes resources.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.kubestack.com/"&gt;Kubestack&lt;/a&gt;, the open-source Terraform framework I maintain, I'm on a mission to provide the best developer experience for teams working with Terraform and Kubernetes. And unified provisioning of all platform components, from cluster infrastructure to cluster services, is something I consider crucial in my relentless pursuit of said developer experience.&lt;/p&gt;

&lt;p&gt;Because of that, the two common approaches to provision Kubernetes resources using Terraform never really appealed to me.&lt;/p&gt;

&lt;p&gt;On the one hand, there's the Kubernetes provider. While it integrates Kubernetes resources into Terraform, maintaining the Kubernetes resources in HCL is a lot of effort, especially for Kubernetes YAML you consume from upstream. On the other hand, there are the Helm provider and the Kubectl provider. These two use native YAML instead of HCL, but do not integrate the Kubernetes resources into the Terraform state and, as a consequence, its lifecycle.&lt;/p&gt;
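&lt;p&gt;To make the first point less abstract, here is a minimal sketch of maintaining one upstream resource with the Kubernetes provider. The resource names and file paths are hypothetical; every upstream YAML document either has to be translated into HCL or wrapped in a resource block.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# translated into HCL: one HCL resource per upstream YAML document
resource "kubernetes_config_map" "example" {
  metadata {
    name      = "example-config"
    namespace = "default"
  }

  data = {
    "app.conf" = file("${path.module}/app.conf")
  }
}

# or keeping the YAML, but still one resource block per document
resource "kubernetes_manifest" "example" {
  manifest = yamldecode(file("${path.module}/upstream/configmap.yaml"))
}
&lt;/code&gt;&lt;/pre&gt;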

&lt;p&gt;I believe my Kustomization-provider-based modules are a better alternative because of three distinct benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Like Kustomize, the upstream YAML is left untouched, meaning upstream updates require minimal maintenance effort.&lt;/li&gt;
&lt;li&gt;By defining the Kustomize overlay in HCL, all Kubernetes resources are fully customizable using values from Terraform.&lt;/li&gt;
&lt;li&gt;Each Kubernetes resource is tracked individually in Terraform state, so diffs and plans show the changes to the actual Kubernetes resources.&lt;/li&gt;
&lt;/ol&gt;
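&lt;p&gt;As a sketch of how the third point works, the Kustomization provider returns the ID of every Kubernetes resource a kustomization produces, and each ID maps to one entry in Terraform state. The path below is hypothetical, and attribute names should be checked against the provider documentation.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# build the kustomization and track every resulting resource individually
data "kustomization_build" "ingress" {
  # hypothetical path to a kustomization in the repository
  path = "manifests/ingress-nginx"
}

resource "kustomization_resource" "ingress" {
  # one Terraform state entry per Kubernetes resource ID
  for_each = data.kustomization_build.ingress.ids

  manifest = data.kustomization_build.ingress.manifests[each.value]
}
&lt;/code&gt;&lt;/pre&gt;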

&lt;p&gt;To make these benefits less abstract, let's compare my Nginx ingress module with one using the Helm provider to provision Nginx ingress.&lt;/p&gt;

&lt;p&gt;The Terraform configuration for both examples is available in &lt;a href="https://github.com/kbst/terraform-helm-vs-kustomize"&gt;this repository&lt;/a&gt;. Let's take a look at the Helm module first.&lt;/p&gt;

&lt;h2&gt;The Helm-based module&lt;/h2&gt;

&lt;p&gt;Usage of the module is straightforward. First, configure the Kubernetes and Helm providers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;config_path&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"helm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;kubernetes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;config_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then define a &lt;code&gt;kubernetes_namespace&lt;/code&gt; resource and call the &lt;code&gt;release/helm&lt;/code&gt; module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-module/release/helm"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2.7.0"&lt;/span&gt;

  &lt;span class="nx"&gt;namespace&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metadata&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
  &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://kubernetes.github.io/ingress-nginx"&lt;/span&gt;

  &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"4.1.0"&lt;/span&gt;
    &lt;span class="nx"&gt;chart&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
    &lt;span class="nx"&gt;force_update&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;wait&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;recreate_pods&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;deploy&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
      &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you now run &lt;code&gt;terraform plan&lt;/code&gt; for this configuration, you see the resources to be created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# kubernetes_namespace.nginx_ingress will be created&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes_namespace"&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;

      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;generation&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource_version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;uid&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.helm_release.this[0] will be created&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;atomic&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;chart&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;cleanup_on_fail&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;create_namespace&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;dependency_update&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;disable_crd_hooks&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;disable_openapi_validation&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;disable_webhooks&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;force_update&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;lint&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;max_history&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;namespace&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;recreate_pods&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;render_subchart_notes&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;replace&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;repository&lt;/span&gt;                 &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://kubernetes.github.io/ingress-nginx"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;reset_values&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;reuse_values&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;skip_crds&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"deployed"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;timeout&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;values&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;verify&lt;/span&gt;                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;version&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"4.1.0"&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;wait&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;wait_for_jobs&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this is the key issue with how Helm is integrated into the Terraform workflow. The plan does not tell you which Kubernetes resources will be created for the Nginx ingress controller. Nor are the Kubernetes resources tracked in Terraform state, as the apply output shows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;kubernetes_namespace&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;ingress&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;helm_release&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;helm_release&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;ingress&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;nginx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nx"&gt;Apply&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;Resources&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;changed&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;destroyed&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
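&lt;p&gt;You can confirm this with &lt;code&gt;terraform state list&lt;/code&gt;. Based on the apply output above, only two objects are in state, no matter how many Kubernetes resources the chart actually created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;$ terraform state list
kubernetes_namespace.nginx_ingress
module.nginx_ingress.helm_release.this[0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;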



&lt;p&gt;Similarly, when planning a change, there is again no way to tell what the changes to the Kubernetes resources will be.&lt;/p&gt;

&lt;p&gt;So if you increase the &lt;code&gt;replicaCount&lt;/code&gt; value of the Helm chart, &lt;code&gt;terraform plan&lt;/code&gt; merely shows the change to the &lt;code&gt;helm_release&lt;/code&gt; resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
    &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What will the changes to the Kubernetes resources be? And more importantly, is it a simple in-place update, or does it require a destroy-and-recreate? Looking at the plan, you have no way of knowing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.helm_release.this[0] will be updated in-place&lt;/span&gt;
  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"helm_release"&lt;/span&gt; &lt;span class="s2"&gt;"this"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;id&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
        &lt;span class="c1"&gt;# (27 unchanged attributes hidden)&lt;/span&gt;

      &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
          &lt;span class="err"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2"&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;set&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replicaCount"&lt;/span&gt;
          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"3"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Kustomize-based module
&lt;/h2&gt;

&lt;p&gt;Now, let's take a look at the same steps for the Kustomize-based module. Usage is similar. First, require the &lt;code&gt;kbst/kustomization&lt;/code&gt; provider and configure it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;kustomization&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kbst/kustomization"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"kustomization"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;kubeconfig_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~/.kube/config"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then call the &lt;code&gt;nginx/kustomization&lt;/code&gt; module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"nginx_ingress"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kbst.xyz/catalog/nginx/kustomization"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.1.3-kbst.1"&lt;/span&gt;

  &lt;span class="nx"&gt;configuration_base_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
  &lt;span class="nx"&gt;configuration&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;replicas&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
        &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx-controller"&lt;/span&gt;
        &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
      &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unlike with the Helm-based module, running &lt;code&gt;terraform plan&lt;/code&gt; now shows each Kubernetes resource and its actual configuration individually. To keep this blog post palatable, I only show the details for the namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p0["_/Namespace/_/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_resource"&lt;/span&gt; &lt;span class="s2"&gt;"p0"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;apiVersion&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v1"&lt;/span&gt;
              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;kind&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Namespace"&lt;/span&gt;
              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;annotations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/version"&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v0.46.0"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"catalog.kubestack.com/heritage"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubestack.com/catalog/nginx"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"catalog.kubestack.com/variant"&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"base"&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/component"&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-controller"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/instance"&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/managed-by"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubestack"&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;"app.kubernetes.io/name"&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nginx"&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                  &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx"&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="err"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/ConfigMap/ingress-nginx/ingress-nginx-controller"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/Service/ingress-nginx/ingress-nginx-controller"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/Service/ingress-nginx/ingress-nginx-controller-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/ServiceAccount/ingress-nginx/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["_/ServiceAccount/ingress-nginx/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["apps/Deployment/ingress-nginx/ingress-nginx-controller"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["batch/Job/ingress-nginx/ingress-nginx-admission-create"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["batch/Job/ingress-nginx/ingress-nginx-admission-patch"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["networking.k8s.io/IngressClass/_/nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRole/_/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRole/_/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRoleBinding/_/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/ClusterRoleBinding/_/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/Role/ingress-nginx/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/Role/ingress-nginx/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/RoleBinding/ingress-nginx/ingress-nginx"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["rbac.authorization.k8s.io/RoleBinding/ingress-nginx/ingress-nginx-admission"] will be created&lt;/span&gt;
  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p2["admissionregistration.k8s.io/ValidatingWebhookConfiguration/_/ingress-nginx-admission"] will be created&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Applying, again, lists all the individual Kubernetes resources. And because the modules use explicit &lt;code&gt;depends_on&lt;/code&gt; to handle namespaces and CRDs first and webhooks last, resources are reliably applied in the correct order.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p0&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"_/Namespace/_/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p0&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"_/Namespace/_/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;369&lt;/span&gt;&lt;span class="nx"&gt;e8643&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ad33&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="nx"&gt;eb4&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;95&lt;/span&gt;&lt;span class="nx"&gt;dc&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;f506cef4a198&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"rbac.authorization.k8s.io/RoleBinding/ingress-nginx/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"batch/Job/ingress-nginx/ingress-nginx-admission-create"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;

&lt;span class="err"&gt;...&lt;/span&gt;

&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"batch/Job/ingress-nginx/ingress-nginx-admission-patch"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;58346878&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;70&lt;/span&gt;&lt;span class="nx"&gt;bd&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="nx"&gt;f2&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;af61&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2730&lt;/span&gt;&lt;span class="nx"&gt;e3435ca7&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p1&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"_/ServiceAccount/ingress-nginx/ingress-nginx"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;f009bbb7&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="nx"&gt;d2e&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="nx"&gt;f28&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;a826&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;ce133c91cc15&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p2&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"admissionregistration.k8s.io/ValidatingWebhookConfiguration/_/ingress-nginx-admission"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;nginx_ingress&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kustomization_resource&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;p2&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"admissionregistration.k8s.io/ValidatingWebhookConfiguration/_/ingress-nginx-admission"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3185&lt;/span&gt;&lt;span class="nx"&gt;b09f&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;f67&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4079&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;b44f&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;de01bff44bd2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nx"&gt;Apply&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;Resources&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;19&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;changed&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;destroyed&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
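&lt;p&gt;Under the hood, the modules achieve this ordering by splitting resources into three priority groups and chaining them with &lt;code&gt;depends_on&lt;/code&gt;. A simplified sketch of that pattern, not the module's actual source, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Illustrative sketch only: the provider's kustomization_build data
# source groups resource IDs by priority in ids_prio
resource "kustomization_resource" "p0" {
  # namespaces and CRDs first
  for_each = data.kustomization_build.this.ids_prio[0]
  manifest = data.kustomization_build.this.manifests[each.value]
}

resource "kustomization_resource" "p1" {
  # regular namespaced resources second
  for_each   = data.kustomization_build.this.ids_prio[1]
  manifest   = data.kustomization_build.this.manifests[each.value]
  depends_on = [kustomization_resource.p0]
}

resource "kustomization_resource" "p2" {
  # webhooks last, once their backing services exist
  for_each   = data.kustomization_build.this.ids_prio[2]
  manifest   = data.kustomization_build.this.manifests[each.value]
  depends_on = [kustomization_resource.p1]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;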



&lt;p&gt;Naturally, it also means that if you increase the replica count like this...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;replicas&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx-controller"&lt;/span&gt;
  &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;...the &lt;code&gt;terraform plan&lt;/code&gt; shows which Kubernetes resources will change and what the diff is.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["apps/Deployment/ingress-nginx/ingress-nginx-controller"] will be updated in-place&lt;/span&gt;
  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_resource"&lt;/span&gt; &lt;span class="s2"&gt;"p1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"81e8ff18-6c6c-440d-bd8b-bf5f0d016953"&lt;/span&gt;
      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;
          &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;spec&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;replicas&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
                    &lt;span class="c1"&gt;# (4 unchanged elements hidden)&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="c1"&gt;# (3 unchanged elements hidden)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="err"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Maybe even more importantly, the Kustomization provider also correctly shows whether a resource can be changed with an in-place update, or whether a destroy-and-recreate is required, for example because an immutable field changed.&lt;/p&gt;

&lt;p&gt;This is the result of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;every Kubernetes resource is handled individually in Terraform state, as you've just seen, and&lt;/li&gt;
&lt;li&gt;that the Kustomization provider uses Kubernetes' server-side dry-runs to determine the diff of each resource.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Based on the result of that dry-run, the provider instructs Terraform to create an in-place or a destroy-and-recreate plan.&lt;/p&gt;
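<p>The server-side dry-run is the same mechanism you can trigger manually with kubectl: the API server evaluates the change without persisting it. A rough equivalent on the command line, assuming a local manifest file:<br>
</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;$ kubectl apply --dry-run=server -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;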

&lt;p&gt;So, as an example of such a change, imagine you need to change &lt;code&gt;spec.selector.matchLabels&lt;/code&gt;. Since &lt;code&gt;matchLabels&lt;/code&gt; is an immutable field, the plan states that the Deployment resource must be replaced, and its summary shows 1 to add and 1 to destroy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;will&lt;/span&gt; &lt;span class="nx"&gt;perform&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;following&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

  &lt;span class="c1"&gt;# module.nginx_ingress.kustomization_resource.p1["apps/Deployment/ingress-nginx/ingress-nginx-controller"] must be replaced&lt;/span&gt;
&lt;span class="err"&gt;-/+&lt;/span&gt; &lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_resource"&lt;/span&gt; &lt;span class="s2"&gt;"p1"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"81e8ff18-6c6c-440d-bd8b-bf5f0d016953"&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;known&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="nx"&gt;apply&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;manifest&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;
          &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;selector&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt;
                        &lt;span class="c1"&gt;# (6 unchanged elements hidden)&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ingress-nginx-controller"&lt;/span&gt;
                    &lt;span class="c1"&gt;# (2 unchanged elements hidden)&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
              &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;spec&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;replicas&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="err"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;selector&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;matchLabels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                          &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;selector&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt;
                            &lt;span class="c1"&gt;# (4 unchanged elements hidden)&lt;/span&gt;
                        &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                  &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;template&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                      &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;metadata&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                          &lt;span class="err"&gt;~&lt;/span&gt; &lt;span class="nx"&gt;labels&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                              &lt;span class="err"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;selector&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt;
                                &lt;span class="c1"&gt;# (4 unchanged elements hidden)&lt;/span&gt;
                            &lt;span class="p"&gt;}&lt;/span&gt;
                            &lt;span class="c1"&gt;# (1 unchanged element hidden)&lt;/span&gt;
                        &lt;span class="p"&gt;}&lt;/span&gt;
                        &lt;span class="c1"&gt;# (1 unchanged element hidden)&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="c1"&gt;# (2 unchanged elements hidden)&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="c1"&gt;# (2 unchanged elements hidden)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;# forces replacement&lt;/span&gt;
        &lt;span class="err"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;Plan&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;change&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;You can find the &lt;a href="https://github.com/kbst/terraform-helm-vs-kustomize"&gt;source code&lt;/a&gt; for the comparison on GitHub if you want to experiment with the differences yourself.&lt;/p&gt;

&lt;p&gt;If you want to try the Kustomize modules, you can use one of the modules from the catalog that bundle upstream YAML, like the &lt;a href="https://www.kubestack.com/catalog/prometheus"&gt;Prometheus operator&lt;/a&gt;, &lt;a href="https://www.kubestack.com/catalog/cert-manager"&gt;Cert-Manager&lt;/a&gt;, &lt;a href="https://www.kubestack.com/catalog/sealed-secrets"&gt;Sealed secrets&lt;/a&gt;, or &lt;a href="https://www.kubestack.com/catalog/tektoncd"&gt;Tekton&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But this doesn't only work for upstream services. There is also a module that provisions any Kubernetes YAML in the exact same way as the catalog modules: the &lt;a href="https://www.kubestack.com/framework/documentation/cluster-service-modules#custom-manifests"&gt;custom manifest module&lt;/a&gt;.&lt;/p&gt;
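&lt;p&gt;As a rough sketch of what using such a module looks like, here is how I would wire the custom manifest module up to a cluster's Kustomization provider. Treat the provider alias, module source, version and file path as illustrative and check the documentation for the current values.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;module "custom_manifests" {
  providers = {
    # illustrative provider alias from a cluster module
    kustomization = kustomization.eks_example
  }

  # source and version are illustrative, check the catalog docs
  source  = "kbst.xyz/catalog/custom-manifests/kustomization"
  version = "0.1.0"

  configuration = {
    apps = {
      # plain Kubernetes YAML, provisioned resource-by-resource
      resources = [
        "${path.root}/manifests/example-app.yaml"
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;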

&lt;h2&gt;
  Get involved
&lt;/h2&gt;

&lt;p&gt;Currently, the number of services available from the catalog is still limited.&lt;/p&gt;

&lt;p&gt;If you want to get involved, you can also find the &lt;a href="https://github.com/kbst/catalog"&gt;catalog source on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@garri?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Vladislav Babienko&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/options?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>platform</category>
      <category>devops</category>
    </item>
    <item>
      <title>Google Anthos with Terraform and Kubestack</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Fri, 02 Jul 2021 12:49:15 +0000</pubDate>
      <link>https://dev.to/pst418/google-anthos-with-terraform-and-kubestack-4i17</link>
      <guid>https://dev.to/pst418/google-anthos-with-terraform-and-kubestack-4i17</guid>
      <description>&lt;p&gt;For a project, I'm currently evaluating Google Anthos. Since the client is a multi-national company, the idea is to build a multi-region and multi-cloud Kubernetes platform. But the sensible kind, no need to bring the pitchforks. So different applications each in one region and cloud. Not the mythical one application in multiple regions and clouds where complexity and necessity are often entirely disproportionate to each other. But that's not really the point here.&lt;/p&gt;

&lt;p&gt;Google sales is pushing Anthos hard. And the client's team is open to the argument that an opinionated stack might help them move faster compared with having to first evaluate various alternatives for each part of the stack and then building the know-how to productize this custom stack. It's a fair argument to make.&lt;/p&gt;

&lt;p&gt;Long story short, we're now evaluating Anthos with GKE and EKS clusters connected to it, because some application teams are drawn to AWS and some are drawn to Google Cloud with their respective workloads. Individual reasons for this are pretty diverse, ranging from data stickiness like terabytes of data already in S3 to quota/capacity limits of specific types of GPUs or preferring certain managed services from one provider over the other cloud provider's alternative.&lt;/p&gt;

&lt;p&gt;I tend to agree that this kind of multi-cloud strategy makes a lot of sense. Yes, individual apps may still end up locked in to one vendor. But at least it's not all eggs in one basket, which has real benefits for both blast radius and, if you're big enough, pricing negotiations.&lt;/p&gt;

&lt;p&gt;I've been working on this evaluation for a couple of days now and thought I'd share my experience because I couldn't find a lot of hands-on reports about Anthos with Terraform. Most content seemed primarily hypothetical, as if most writers hadn't actually gotten their hands properly dirty before writing about it. I already washed mine to calm down and make sure this doesn't end up in some obnoxious rant.&lt;/p&gt;

&lt;p&gt;The first thing that really surprised me about Anthos, though, is that Anthos does not provision the Kubernetes clusters for you. I totally expected it would. Instead, you have to provision the clusters and then connect them to the Anthos hub, which basically requires some IAM setup, an Anthos hub membership and running an agent inside each cluster.&lt;/p&gt;

&lt;p&gt;Anthos leaves it up to you to provision clusters any way you like. But since I'm part of the project, it may not come as a surprise that in our case, infrastructure as code and Terraform are the attack plan.&lt;/p&gt;

&lt;p&gt;Now, Google even provides its own &lt;a href="https://cloud.google.com/architecture/provisioning-anthos-clusters-with-terraform"&gt;Anthos Terraform modules&lt;/a&gt;. But these are only for GKE, meaning for EKS we'd need to use modules from another source, leaving us to deal with different module usage and update schedules.&lt;/p&gt;

&lt;p&gt;But more importantly, Google's Terraform modules constantly shell out to &lt;a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/modules/asm/main.tf#L85:L101"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/master/modules/hub/main.tf#L73:L87"&gt;&lt;code&gt;gcloud&lt;/code&gt;&lt;/a&gt; CLIs. I consider that a last resort that should be avoided at all costs in Terraform modules, because of long-term maintainability. Frankly, calling CLI commands like this has no place in declarative infrastructure as far as I'm concerned.&lt;/p&gt;

&lt;p&gt;Unsurprisingly, my biased proposal is to use &lt;a href="https://www.kubestack.com/"&gt;Kubestack&lt;/a&gt; to provision the GKE and EKS clusters leveraging the &lt;a href="https://www.kubestack.com/framework/documentation/cluster-modules"&gt;Kubestack framework's unified GKE and EKS modules&lt;/a&gt;, and to write a custom module to connect the resulting clusters to Anthos. The bespoke module would fully integrate the required IAM, Anthos and Kubernetes resources into the Terraform state and lifecycle, instead of calling &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;gcloud&lt;/code&gt; like the official Google modules do.&lt;/p&gt;

&lt;p&gt;Below is the current work-in-progress state of the experimental module and some of the challenges I've hit so far.&lt;/p&gt;

&lt;p&gt;The first requirement is an IAM identity and role for the agent inside the cluster. For GKE clusters, workload identities can be used. But for non-GKE clusters, EKS in our case, it seems &lt;a href="https://cloud.google.com/anthos/multicluster-management/connect/registering-a-cluster"&gt;shared credentials in the form of a service account key are the only option&lt;/a&gt;. Creating &lt;code&gt;google_service_account&lt;/code&gt;, &lt;code&gt;google_project_iam_member&lt;/code&gt; and &lt;code&gt;google_service_account_key&lt;/code&gt; resources is easy enough. I'm sure this is overly simplified and I may have to add more roles as my evaluation continues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_service_account"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;account_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;display_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${local.cluster_name} gke-connect agent"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_project_iam_member"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"roles/gkehub.connect"&lt;/span&gt;
  &lt;span class="nx"&gt;member&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"serviceAccount:${google_service_account.current.email}"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_service_account_key"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;service_account_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google_service_account&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to register the cluster as a member of the Anthos hub, which means adding a &lt;code&gt;google_gke_hub_membership&lt;/code&gt; resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"google_gke_hub_membership"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;google&lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;beta&lt;/span&gt;

  &lt;span class="nx"&gt;project&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;
  &lt;span class="nx"&gt;membership_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;local&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cluster_name&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${local.cluster_name} hub membership"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, the agent needs to be provisioned inside the cluster and set up to use the service account as its identity.&lt;/p&gt;

&lt;p&gt;By default, joining the cluster to the hub and provisioning the Kubernetes resources of the agent on the cluster is done via the &lt;code&gt;gcloud beta container hub memberships register&lt;/code&gt; CLI command. But the command has a &lt;code&gt;--manifest-output-file&lt;/code&gt; parameter that allows writing the Kubernetes resources to a file instead of applying them to the cluster directly.&lt;/p&gt;
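&lt;p&gt;For reference, the registration command I use to generate the manifests looks roughly like this. The cluster name, context and file paths are placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud beta container hub memberships register example-cluster \
  --context=example-context \
  --service-account-key-file=creds-gcp.json \
  --manifest-output-file=upstream_manifest/anthos.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;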

&lt;p&gt;To avoid also having to fall back to calling the register &lt;code&gt;gcloud&lt;/code&gt; command from Terraform, I opted to write the manifests to a YAML file and use them as the base that I patch in a &lt;a href="https://registry.terraform.io/providers/kbst/kustomization/latest/docs/data-sources/overlay"&gt;&lt;code&gt;kustomization_overlay&lt;/code&gt; using my Kustomization provider&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This way, each individual Kubernetes resource of the Anthos agent is provisioned and tracked using Terraform, while I can still use the attributes from my service account and service account key resources to configure the agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"kustomization_overlay"&lt;/span&gt; &lt;span class="s2"&gt;"current"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gke-connect"&lt;/span&gt;

  &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"${path.module}/upstream_manifest/anthos.yaml"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;secret_generator&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"creds-gcp"&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"replace"&lt;/span&gt;
    &lt;span class="nx"&gt;literals&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s2"&gt;"creds-gcp.json=${base64decode(google_service_account_key.current.private_key)}"&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;patches&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# this breaks if the order of env vars in the upstream YAML changes&lt;/span&gt;
    &lt;span class="nx"&gt;patch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
      - op: replace
        path: /spec/template/spec/containers/0/env/6/value
        value: "//gkehub.googleapis.com/projects/xxxxxxxxxxxx/locations/global/memberships/${local.cluster_name}"
&lt;/span&gt;&lt;span class="no"&gt;    EOF

&lt;/span&gt;    &lt;span class="nx"&gt;target&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"apps"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"v1"&lt;/span&gt;
      &lt;span class="nx"&gt;kind&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Deployment"&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gke-connect-agent-20210514-00-00"&lt;/span&gt;
      &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"gke-connect"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The manifests &lt;code&gt;gcloud&lt;/code&gt; writes to disk can't be committed to version control, because they include a Kubernetes secret with the plaintext service account key embedded. The key file is unfortunately a required parameter of the &lt;code&gt;hub memberships register&lt;/code&gt; command. So I had to delete this secret from the YAML file, and I have to remember to do this whenever I rerun the command to update my base with the latest upstream manifests.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;kustomization_overlay&lt;/code&gt; data source, I then use a &lt;code&gt;secret_generator&lt;/code&gt; to create a Kubernetes secret using the private key from the &lt;code&gt;google_service_account_key&lt;/code&gt; resource.&lt;/p&gt;

&lt;p&gt;Additionally, the agent has a number of environment variables set. The URL to the hub memberships resource is one of them and needs to be patched with the respective cluster name. Unfortunately, the environment variables are set directly in the pod template. So the patch will break if the number or order of environment variables changes. It would be better to change this to &lt;code&gt;envFrom&lt;/code&gt; and set the environment variables dynamically in the overlay using a &lt;code&gt;config_map_generator&lt;/code&gt;. But the downside of this is, again, that there's one more modification to the upstream YAML which has to be repeated every time it is updated.&lt;/p&gt;
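&lt;p&gt;For illustration, such a &lt;code&gt;config_map_generator&lt;/code&gt; inside the &lt;code&gt;kustomization_overlay&lt;/code&gt; could look like the sketch below. The generator and environment variable names are made up, and the upstream Deployment would still need a one-time &lt;code&gt;envFrom&lt;/code&gt; patch to pick the values up.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;  config_map_generator {
    name = "gke-connect-agent-env"
    literals = [
      # hypothetical variable name, the upstream env var names differ
      "MEMBERSHIP_NAME=//gkehub.googleapis.com/projects/xxxxxxxxxxxx/locations/global/memberships/${local.cluster_name}"
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;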

&lt;p&gt;While we're on the topic of updates. One thing that makes me suspicious is that the generated YAML has a date as part of its resource names. E.g. &lt;code&gt;gke-connect-agent-20210514-00-00&lt;/code&gt;. Call me a pessimist, but I totally expect this to become a problem with updates in the future.&lt;/p&gt;

&lt;p&gt;Ignoring that for now, the next step in my evaluation was to apply my Terraform configuration and hopefully see my clusters connected to Anthos.&lt;/p&gt;

&lt;p&gt;Unfortunately, on the first try, that wasn't quite the case. The clusters did show up in the Anthos UI, but with a big red &lt;code&gt;unreachable&lt;/code&gt; warning. As it turned out, this was due to the agent pod crash looping with a permission denied error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n gke-connect logs gke-connect-agent-20210514-00-00-66d94cff9d-tzw5t
2021/07/02 11:40:33.277997 connect_agent.go:17: GKE Connect Agent. Log timestamps in UTC.
2021/07/02 11:40:33.298969 connect_agent.go:21: error creating tunnel: unable to retrieve namespace "kube-system" to be used as externalID: namespaces "kube-system" is forbidden: User "system:serviceaccount:gke-connect:connect-agent-sa" cannot get resource "namespaces" in API group "" in the namespace "kube-system"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which is weird, because from reading the &lt;code&gt;gcloud&lt;/code&gt; generated YAML I remembered plenty of RBAC-related resources being included. Digging in, it turned out the generated YAML has a &lt;code&gt;Role&lt;/code&gt; and a &lt;code&gt;RoleBinding&lt;/code&gt;. And if you followed carefully, you probably guessed the issue already. Here's the respective part of the generated resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;hub.gke.io/project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ie-gcp-poc"&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;20210514-00-00&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gke-connect-namespace-getter&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;namespaces&lt;/span&gt;
  &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;get&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RoleBinding&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s"&gt;hub.gke.io/project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ie-gcp-poc"&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;20210514-00-00&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gke-connect-namespace-getter&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;roleRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;apiGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Role&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gke-connect-namespace-getter&lt;/span&gt;
&lt;span class="na"&gt;subjects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;connect-agent-sa&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gke-connect&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unless I'm terribly wrong here, this obviously can't work. Creating a namespaced role and role binding inside the &lt;code&gt;kube-system&lt;/code&gt; namespace cannot grant permissions to &lt;code&gt;get&lt;/code&gt; the &lt;code&gt;kube-system&lt;/code&gt; namespace, because namespaces themselves are cluster-scoped, not namespaced, resources.&lt;/p&gt;

&lt;p&gt;So I changed the &lt;code&gt;Role&lt;/code&gt; to a &lt;code&gt;ClusterRole&lt;/code&gt; and the &lt;code&gt;RoleBinding&lt;/code&gt; to a &lt;code&gt;ClusterRoleBinding&lt;/code&gt; and reapplied my Terraform configuration. And I now have a running agent, that established a tunnel to the Anthos control plane and prints lots of log messages. I have yet to dig into what it actually does there.&lt;/p&gt;
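&lt;p&gt;For completeness, the corrected cluster-scoped resources look roughly like this, keeping the rules and subjects from the generated YAML but dropping the labels and namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gke-connect-namespace-getter
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gke-connect-namespace-getter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gke-connect-namespace-getter
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;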

&lt;p&gt;With the RBAC fix, the generated YAML now requires three changes to maintain over time. I can't say I'm particularly excited about that. I also wonder if the generated YAML is only broken when the &lt;code&gt;--manifest-output-file&lt;/code&gt; parameter is used, or if the RBAC configuration is also broken when directly applying the Kubernetes resources to the cluster using the &lt;code&gt;gcloud&lt;/code&gt; CLI.&lt;/p&gt;

&lt;p&gt;That's it for my evaluation of Google Anthos, Terraform and Kubestack so far. Maybe by sharing my findings, I can save somebody out there a bit of time in their own evaluation when they hit the same issues.&lt;/p&gt;

&lt;p&gt;Next step for me is to look into provisioning the Anthos Service Mesh. It's not quite clear to me yet if that should be done via Terraform; the fact that Google has this &lt;code&gt;kubectl&lt;/code&gt; based Terraform module for it may suggest so. But why wouldn't I do everything after the cluster is connected to Anthos using Anthos Config Management?&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>googlecloud</category>
      <category>anthos</category>
    </item>
    <item>
      <title>What Terraform can learn from PHP</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Mon, 08 Feb 2021 10:30:47 +0000</pubDate>
      <link>https://dev.to/kubestack/what-terraform-can-learn-from-php-4e65</link>
      <guid>https://dev.to/kubestack/what-terraform-can-learn-from-php-4e65</guid>
      <description>&lt;p&gt;TL;DR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Writing infrastructure as code shows many of the same challenges as writing code for application development, because many of these challenges are not language or use-case specific.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; and its surrounding ecosystem are still evolving and share many similarities with early PHP and the web. Just like PHP evolved by learning from other language ecosystems, Terraform can as well.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use-case specific frameworks are a major driver of innovation, improved developer experience and productivity on the application development side, but they are not yet established parts of the infrastructure as code ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The paradigm shift to containers and &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt; made use-case specific frameworks possible for infrastructure as code by providing a powerful abstraction between application and infrastructure layer. And the cloud native community is evolving rapidly, extending this abstraction to additional use-cases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Organizations that adopted application development frameworks for their improved developer experience and productivity, can leverage the same benefits for automating Kubernetes by using an &lt;a href="https://www.kubestack.com/"&gt;infrastructure as code framework&lt;/a&gt; and avoid leaving the cluster the weakest link in their GitOps automation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  Learning from other language ecosystems
&lt;/h2&gt;

&lt;p&gt;PHP’s ease of getting started is widely quoted as the boon and bane of the language. It seems as if making fun of the spaghetti code bases of the early PHP days never gets old. Even in 2021. But there is no doubt that PHP is an extremely successful programming language. &lt;/p&gt;

&lt;p&gt;You may ask, what does this have to do with Terraform? Well, hear me out. Terraform and PHP have more in common than you may think. PHP was created when the web was in its infancy and quickly became extremely popular. Don’t forget, PHP is the P in LAMP stack. Similarly, infrastructure as code is still an emerging ecosystem today, and Terraform is by far the most popular language in this ecosystem.&lt;/p&gt;

&lt;p&gt;But the modern PHP of today is vastly different from the early PHP we all like to make fun of. And since Terraform today is so similar to where PHP was when it started, there’s a good chance that the Terraform community can learn a lot from how PHP evolved.&lt;/p&gt;

&lt;p&gt;Rasmus Lerdorf, the creator of PHP, is famously &lt;a href="https://en.wikipedia.org/wiki/PHP#cite_note-itconversations-21"&gt;quoted&lt;/a&gt; as never having intended to write a programming language. But PHP got popular and they had to keep going. In addition, the web and its request-response model were new, even to experienced developers. But the endless possibilities of the web got people excited, and the unintentional programming language PHP was easy to get started with. This combination led to the stereotypical poor quality code bases that ended up powering major parts of the early web.&lt;/p&gt;

&lt;p&gt;Similarly, infrastructure as code offers huge benefits and gets people excited as well. But it also requires both operations and coding experience, and people coming from either one background have to learn a lot about the respective other, before they can be fully productive.&lt;/p&gt;

&lt;p&gt;Languages like Python, released a few years before PHP, or Ruby and Java, both released in the same year as PHP, were intentionally designed programming languages for professional use. While not specific to the web, it is of course possible to build web applications in any of them. So the obvious choice was to use these more mature and consistent languages to build web applications, and get more easily maintainable code bases as a result.&lt;/p&gt;

&lt;p&gt;And not only were the languages more mature, but so were their ecosystems. The majority of challenges developers face when writing code are not language specific. And many are not even use-case specific. You may need different dependencies for building a web application than for a desktop application, for example. But in both cases, having dependency management is greatly useful. A feature Python, Ruby and Java all already had.&lt;/p&gt;

&lt;p&gt;This led to the creation of frameworks like &lt;a href="https://www.djangoproject.com/"&gt;Django&lt;/a&gt;, &lt;a href="https://rubyonrails.org/"&gt;Ruby on Rails&lt;/a&gt; or &lt;a href="https://spring.io/"&gt;Spring&lt;/a&gt; that made it easy to build web applications in Python, Ruby or Java respectively, leveraging their existing language ecosystems.&lt;/p&gt;

&lt;p&gt;A great idea that works in one ecosystem, however, is quick to inspire similar development in other languages. And PHP’s wide adoption easily justified major investments to improve the PHP core as well as the surrounding ecosystem. All those teams looking for the best way to maintain their growing PHP code bases were smart to look at other languages and how these same challenges were solved there.&lt;/p&gt;

&lt;p&gt;The result is frameworks like &lt;a href="https://symfony.com/"&gt;Symfony&lt;/a&gt; or &lt;a href="https://cakephp.org/"&gt;CakePHP&lt;/a&gt;, heavily inspired by Spring and Rails respectively. This is also how Composer brought modern dependency management to PHP. And last but not least, this was when the PHP community adopted Git for version control and slowly moved away from editing production files directly via FTP.&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s all about the code
&lt;/h2&gt;

&lt;p&gt;Let's get back to infrastructure as code. Yes, in a lot of ways automating infrastructure is different from application development. But many of the challenges of writing code that apply across languages and use-cases on the software development side also apply to infrastructure as code. Code is kind of the keyword here.&lt;/p&gt;

&lt;p&gt;So just like PHP learned from other languages, their frameworks and their tooling, Terraform can only benefit from doing so as well.&lt;/p&gt;

&lt;p&gt;One area where Hashicorp, the makers of Terraform, recently made major improvements is dependency management. Terraform has had the ability to download required providers for quite some time. But it was limited to Hashicorp’s own providers. Community-maintained providers required involved, manual installation. A recent Terraform release introduced support for registry namespaces, which means community providers can now also be installed from the official registry. In addition, required providers and versions can now be &lt;a href="https://www.terraform.io/upgrade-guides/0-13.html#explicit-provider-source-locations"&gt;specified more explicitly&lt;/a&gt;, including the ability to vendor providers, thereby hardening automation runs against failing when the registry is unavailable.&lt;/p&gt;
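&lt;p&gt;To make this concrete, here is a minimal sketch of such an explicit provider requirement in Terraform 0.13+ syntax. The provider and version constraint shown are just illustrative:&lt;/p&gt;

```hcl
# Declare required providers explicitly, including community providers
# from a registry namespace. Provider and version here are illustrative.
terraform {
  required_providers {
    kustomization = {
      source  = "kbst/kustomization"
      version = "~> 0.2"
    }
  }
}
```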

&lt;h2&gt;
  
  
  The missing piece
&lt;/h2&gt;

&lt;p&gt;All the language ecosystems we discussed share one key piece that heavily improves the developer experience, but which isn’t a thing yet in the infrastructure as code world. I’m referring to frameworks of course. Concretely, use-case specific frameworks. By being use-case specific, the aforementioned software development frameworks drastically reduce upfront and maintenance effort, and provide the best developer experience and workflow possible.&lt;/p&gt;

&lt;p&gt;If I’m building a cloud native application in Java, using Spring Boot will make my life much easier. Likewise, if my goal is to build a Jamstack website, a framework like &lt;a href="https://www.gatsbyjs.com/"&gt;Gatsby&lt;/a&gt; will get me there much faster.&lt;/p&gt;

&lt;p&gt;But the reason why frameworks are not a thing in the infrastructure as code world yet is not merely that the ecosystem is still evolving. For frameworks to be useful, we also need a strong abstraction layer that keeps the infrastructure layer free of application specific requirements. Containers and Kubernetes are extremely popular because they provide this very abstraction. And this means two things: First, using Terraform to manage Kubernetes is a popular and very specific use-case for an infrastructure as code framework. And second, because of the powerful abstraction, such a framework makes sense for the first time.&lt;/p&gt;

&lt;p&gt;Kubestack is this use-case specific, &lt;a href="https://www.kubestack.com/"&gt;Terraform GitOps framework&lt;/a&gt;. If you’re building GitOps automation for Kubernetes cluster infrastructure and cluster services using Terraform, Kubestack may be the framework for you. Think of Kubestack as the Ruby on Rails of infrastructure automation, the Gatsby of GitOps, or the Spring Boot of Terraform and Kubernetes.&lt;/p&gt;

&lt;p&gt;And just like application frameworks copied ideas that worked well from one language to another, Kubestack does the same from application development to infrastructure as code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talent borrows, genius steals
&lt;/h2&gt;

&lt;p&gt;One example is Kubestack’s convention over configuration based repository layout. Another one is its inheritance based configuration to prevent drift between environments. A third one is the ability to easily vendor dependencies in the repository, like the Nginx ingress controller or Prometheus monitoring operator. And last but not least, local development environments that automatically update as you make changes to the code.&lt;/p&gt;

&lt;p&gt;Slow feedback loops are poison for developer productivity. And infrastructure as code is notorious for mandatory, slow pipeline runs. This makes the local development environment the perfect example of how Kubestack drastically improves the developer experience, because it’s a use-case specific framework.&lt;/p&gt;

&lt;p&gt;The strong abstraction between the application and infrastructure layers is a key mantra of what we know as cloud native. And if you take a look at recent developments from the cloud native community the direction is clear. As more and more organizations shift their workloads and use-cases to cloud native, we continue to see new innovation and iterative improvements that extend this powerful abstraction.&lt;/p&gt;

&lt;p&gt;This is both positive for the future of infrastructure as code and Terraform as well as for use-case specific infrastructure as code frameworks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform loves cloud native
&lt;/h2&gt;

&lt;p&gt;Systems that separate declaring desired state from tracking current state are the state of the art today. This is a core principle of Kubernetes and high-level managed cloud services, but also of VM auto-scaling groups, as a lower level example of the same principle. On the surface there’s an API to declare the desired state. And behind the API are control loops that keep the current state in sync with the desired state.&lt;/p&gt;

&lt;p&gt;Terraform shines when combined with such a system, because it is great at planning and applying changes triggered by a commit in a repository. It can also be run periodically, to detect drift and either alert or overwrite. But when operating distributed systems, there are various failure scenarios where continuously running controllers that can take immediate action based on more events than just code changes are clearly superior. The important thing to understand here is: Terraform is great at giving teams a way to reason about proposed changes and at keeping the committed state and the desired state in sync. But keeping desired and current state in sync is, in most cases, better left to a continuously running control loop.&lt;/p&gt;
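&lt;p&gt;As a toy illustration of such a control loop, the Go sketch below compares declared desired state against current state and derives the actions needed to converge. The types and action strings are made up for this example and do not reflect any real controller’s API:&lt;/p&gt;

```go
package main

import "fmt"

// state maps a resource name to a desired or observed replica count.
// This is a deliberately simplified model of declared vs. current state.
type state map[string]int

// reconcile performs one pass of the control loop: it compares desired
// and current state, mutates current towards desired, and returns the
// actions it took to converge.
func reconcile(desired, current state) []string {
	var actions []string
	// Create or update anything that is declared but missing or wrong.
	for name, want := range desired {
		if have, ok := current[name]; !ok || have != want {
			actions = append(actions, fmt.Sprintf("scale %s to %d", name, want))
			current[name] = want
		}
	}
	// Remove anything that exists but is no longer declared.
	for name := range current {
		if _, ok := desired[name]; !ok {
			actions = append(actions, fmt.Sprintf("delete %s", name))
			delete(current, name)
		}
	}
	return actions
}

func main() {
	desired := state{"web": 3}
	current := state{"web": 1, "old": 2}
	// A real controller would run this loop continuously, reacting to
	// events; here we run a single reconciliation pass.
	for _, action := range reconcile(desired, current) {
		fmt.Println(action)
	}
}
```

A continuously running loop like this reacts to any divergence, not just code changes, which is exactly the property the paragraph above argues Terraform alone cannot provide.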

&lt;p&gt;It’s common for teams to hit this limitation when using infrastructure as code to automate legacy systems that don’t provide this separation of concerns. This frequently leads to automation that only partially manages the lifecycle, forcing teams to coordinate automation with manual operations. That significantly limits the value of infrastructure as code, and many teams justifiably hold back on adopting Terraform for this very reason.&lt;/p&gt;

&lt;p&gt;But Kubernetes and managed cloud services are not the only systems that rely on declared desired state and reconciliation loops to keep current state in sync. One example that brings this model to infrastructure automation outside the cloud providers’ walled gardens is &lt;a href="https://cluster-api.sigs.k8s.io/#why-build-cluster-api"&gt;ClusterAPI&lt;/a&gt;. This cloud native community initiative aims to provide the same separation across on-premise and cloud. And through its integration with vSphere, ClusterAPI is readily available to VMware’s vast installed base.&lt;/p&gt;

&lt;h2&gt;
  
  
  The future of infrastructure is code
&lt;/h2&gt;

&lt;p&gt;As an industry, we’re clearly heading in one direction. And as we continue to adopt this paradigm, the limitations that held infrastructure as code back when working with legacy systems no longer apply. As infrastructure as code becomes viable for more organizations, more teams can benefit from use-case specific frameworks to get the best possible developer experience and productivity.&lt;/p&gt;

&lt;p&gt;Many teams are already using Terraform successfully. Yes, there are edge cases to consider and there is a steep learning curve, whether your background is in operations or software development. But as the cloud native ecosystem continues to evolve, the benefits of infrastructure as code will become applicable to more teams and more use-cases. And just like PHP grew by learning from other language ecosystems, Terraform will too.&lt;/p&gt;

&lt;p&gt;As far as Kubernetes is concerned, if you’re already adopting GitOps, the Kubestack framework is an opportunity to implement &lt;a href="https://www.kubestack.com/"&gt;full-stack GitOps&lt;/a&gt; that covers both the cluster infrastructure and cluster services, not just the application workloads on the cluster. This way, you can avoid having the foundation of your system, the cluster, become the weakest link because it is managed manually via a UI.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>programming</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Localhost EKS development environments with EKS-D and Kubestack</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Tue, 01 Dec 2020 19:04:32 +0000</pubDate>
      <link>https://dev.to/kubestack/localhost-eks-development-environments-with-eks-d-and-kubestack-4p6</link>
      <guid>https://dev.to/kubestack/localhost-eks-development-environments-with-eks-d-and-kubestack-4p6</guid>
      <description>&lt;p&gt;Today Amazon announced EKS Distro, or EKS-D for short. A Kubernetes distribution making the same release artifacts used by Amazon EKS available to everyone.&lt;/p&gt;

&lt;p&gt;This allows teams to use the exact same bits and pieces that power EKS, to build clusters for anything from integration tests to on-premise use-cases. As a launch partner, I got access to EKS-D in advance to integrate it into Kubestack’s local development environments.&lt;/p&gt;

&lt;p&gt;Kubestack is about providing the best &lt;a href="https://www.kubestack.com/" rel="noopener noreferrer"&gt;GitOps developer experience for Terraform and Kubernetes&lt;/a&gt;, from local development, all the way to production.&lt;/p&gt;

&lt;p&gt;Because I believe platform engineers automating Kubernetes deserve the same great developer experience that application engineers building applications on top of Kubernetes already have.&lt;/p&gt;

&lt;p&gt;To achieve this, the Kubestack framework integrates all the moving pieces from Terraform providers, to resources, and modules into a GitOps workflow ready for day-2 operations.&lt;/p&gt;

&lt;p&gt;On top of the reliable automation to propose, validate and promote infrastructure changes, Kubestack is focused on giving platform teams a modern developer experience to iterate quickly using local development environments.&lt;/p&gt;

&lt;p&gt;Now, whenever Kubestack simulates an EKS cluster locally, it uses EKS-D to do so. Let’s take a look at how Kubestack’s infrastructure automation from local development to production works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local Development
&lt;/h2&gt;

&lt;p&gt;Imagine you’re tasked with provisioning the Prometheus operator to deploy a Prometheus instance and configuring it to scrape the metrics from your team’s application for each environment.&lt;/p&gt;

&lt;p&gt;If you’re like me, it may take a few iterations to get the label and namespace selectors in the Prometheus resource just right and to configure the RBAC for the Prometheus instance’s service account. RBAC in particular is notorious for taking a bit of trial and error to get right.&lt;/p&gt;
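&lt;p&gt;For context, these are the selectors in question: a minimal, illustrative Prometheus custom resource for the prometheus-operator. All names and labels below are placeholders:&lt;/p&gt;

```yaml
# Illustrative Prometheus custom resource (prometheus-operator).
# The selectors decide which ServiceMonitors, and via them which
# namespaces and services, this Prometheus instance will scrape.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: team-prometheus
spec:
  # The service account needs RBAC permissions to get/list/watch
  # pods, services and endpoints in the scraped namespaces.
  serviceAccountName: team-prometheus
  serviceMonitorSelector:
    matchLabels:
      team: my-team
  serviceMonitorNamespaceSelector:
    matchLabels:
      monitored: "true"
```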

&lt;p&gt;Using Kubestack’s local development environment, you can iterate on the exact same manifests that will later be used in production. The local development environment automatically updates as you make changes, and provides immediate feedback, without waiting minutes for CI/CD pipeline runs every time. It’s just like in the infamous &lt;a href="https://xkcd.com/303/" rel="noopener noreferrer"&gt;XKCD comic&lt;/a&gt;, except it’s applying, not compiling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Fcompiling.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fimgs.xkcd.com%2Fcomics%2Fcompiling.png" alt="applying, not compiling, but you get the idea"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All you have to do to get started on this task is change into your checkout of the infrastructure repository and run one &lt;code&gt;kbst&lt;/code&gt; CLI command. Then you’re all set to work on the Prometheus manifests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kbst local apply
...
Switched to workspace "loc".
...
Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
2020/11/18 12:55:53 #### Watching for changes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new EKS-D integration means the local environment is now even closer to the EKS production environment. And fewer differences between environments reduce the risk that promoting a change fails. This is also why Kubestack uses inheritance between environments. Differences are sometimes necessary, but configuration inheritance makes them explicit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment Promotion
&lt;/h2&gt;

&lt;p&gt;Eventually, I’ll have the monitoring setup working locally and it’s time to push my changes and ask for a peer-review. This is the first step where the &lt;a href="https://www.kubestack.com/framework/documentation/gitops-process#making-changes" rel="noopener noreferrer"&gt;Kubestack GitOps workflow&lt;/a&gt; kicks in.&lt;/p&gt;

&lt;p&gt;There are two things to review to decide if you want to apply this change. Your code changes of course, and the &lt;code&gt;terraform plan&lt;/code&gt; provided by Kubestack’s pipeline for every branch.&lt;/p&gt;

&lt;p&gt;If the reviewers require changes, you can push additional commits to the branch and the pipeline will run &lt;code&gt;terraform plan&lt;/code&gt; again. Once your team has approved, merge the change into master.&lt;/p&gt;

&lt;p&gt;This triggers the pipeline and applies the merged changes to the ops environment. A &lt;code&gt;terraform plan&lt;/code&gt; is not enough to ensure that the changes will apply correctly. That’s why Kubestack uses the ops environment, to validate the configuration change against real cloud infrastructure. The ops environment does not run applications, so that teams can feel confident to merge infrastructure changes at any time, without worrying about blocking team members or breaking applications.&lt;/p&gt;

&lt;p&gt;Finally, if the change to ops applied successfully, the pipeline will additionally provide a &lt;code&gt;terraform plan&lt;/code&gt; to show the required changes for the apps environment. The additional plan helps teams decide if they want to promote this change into the apps environment now.&lt;/p&gt;

&lt;p&gt;Having a reliable workflow is crucial for teams to trust their automation. Kubestack, by combining purpose-built Terraform modules with its proven triggers, helps teams build infrastructure automation that is ready for day-2 operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;

&lt;p&gt;If you’ve made it this far and want to learn more, you can get started with the &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-get-started" rel="noopener noreferrer"&gt;Kubestack framework by following the tutorial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>gitops</category>
    </item>
    <item>
      <title>Keep Application Pipelines Simple by Provisioning Managed Cloud Services using Kubernetes YAML</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Thu, 09 Jul 2020 10:55:53 +0000</pubDate>
      <link>https://dev.to/pst418/keep-application-pipelines-simple-by-provisioning-managed-cloud-services-using-kubernetes-yaml-496j</link>
      <guid>https://dev.to/pst418/keep-application-pipelines-simple-by-provisioning-managed-cloud-services-using-kubernetes-yaml-496j</guid>
      <description>&lt;p&gt;Teams adopt Kubernetes because the fully declarative Kubernetes YAML can drastically reduce the complexity of application deployment automation. This is also a key enabler for GitOps.&lt;/p&gt;

&lt;p&gt;However, applications that depend on managed cloud services like databases, object storage buckets or queues require the deployment automation to also handle these non-Kubernetes resources. Since this is a common use-case, the benefit Kubernetes brings to application deployment is unfortunately often drastically reduced.&lt;/p&gt;

&lt;p&gt;Teams with this requirement have three options to move forward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Handle the infrastructure dependencies in a separate pipeline.&lt;/li&gt;
&lt;li&gt;Handle the infrastructure dependencies in the application pipeline itself.&lt;/li&gt;
&lt;li&gt;Extend Kubernetes to provide a declarative way to handle these infrastructure dependencies as well.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Options 1 and 2 are well known but have significant downsides. Handling these non-Kubernetes infrastructure dependencies in a separate pipeline requires manual orchestration for changes that affect both infrastructure services and the application. But implementing this orchestration in a single pipeline is anything but trivial. For teams choosing options 1 or 2, Terraform is a popular choice because it can handle both infrastructure resources and Kubernetes resources, but the &lt;a href="https://kubernetes.io/blog/2020/06/working-with-terraform-and-kubernetes/"&gt;Terraform integration with Kubernetes&lt;/a&gt; also has some limitations to be aware of.&lt;/p&gt;

&lt;p&gt;With Kubestack, the open-source &lt;a href="https://www.kubestack.com/lp/terraform-gitops-framework"&gt;Terraform GitOps Framework&lt;/a&gt; I maintain, I strongly encourage teams to keep infrastructure and application automation strictly separated from each other because interdependencies are a frequent source of blockers between different tasks. So while Kubestack uses Terraform to provision the AKS, EKS or GKE clusters and all cluster services that are required before applications can be deployed, using the same pipeline to provision the managed database service an application needs is discouraged because it would break the important separation.&lt;/p&gt;

&lt;p&gt;This leaves us with option 3. In principle, the idea is to use &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/"&gt;custom resources&lt;/a&gt; to extend the Kubernetes API with additional resource types, and then have custom controllers, commonly called operators, manage the lifecycle of these new resource types the same way a built-in controller handles deployment resources. This keeps the logic out of the application pipelines, giving us back the simplicity of handling just the Kubernetes YAML.&lt;/p&gt;
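&lt;p&gt;In practice, that means an application repository declares its infrastructure dependency in plain YAML, next to its other manifests. The resource below is entirely hypothetical; the actual apiVersion, kind and fields depend on the operator you choose:&lt;/p&gt;

```yaml
# Hypothetical custom resource. An operator watching this kind would
# provision a managed Postgres instance and write the connection
# details into a Secret the application can mount.
apiVersion: example.org/v1alpha1
kind: ManagedDatabase
metadata:
  name: orders-db
spec:
  engine: postgres
  engineVersion: "12"
  connectionSecretName: orders-db-connection
```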

&lt;p&gt;While there are also operators that run the service workloads themselves inside the Kubernetes clusters, I will only be looking at solutions that help provision managed cloud services for this post.&lt;/p&gt;

&lt;p&gt;Let's take a look at some available projects:&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Service Operator
&lt;/h2&gt;

&lt;p&gt;At first glance the AWS Service Operator does not look like a project anyone should trust their infrastructure automation with.&lt;/p&gt;

&lt;p&gt;The first version was a thin wrapper around CloudFormation templates and has since been &lt;a href="https://github.com/amazon-archives/aws-service-operator"&gt;archived&lt;/a&gt;. The &lt;a href="https://aws.amazon.com/blogs/opensource/aws-service-operator-kubernetes-available/"&gt;blog post that announced&lt;/a&gt; the original version at the end of 2018 was edited in February 2020 to refer to an effort to rewrite the operator that started mid-2019. Looking at the &lt;a href="https://github.com/aws/aws-service-operator-k8s/tree/mvp"&gt;MVP branch&lt;/a&gt; in the new repository shows recent activity and gives the impression of active development. But there hasn’t been a release yet.&lt;/p&gt;

&lt;p&gt;Still, the possibility of having an operator for managing AWS resources from within Kubernetes using custom resources, built and maintained by AWS engineers, justifies keeping an eye on the AWS Service Operator for teams running on AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update Aug 20:&lt;/strong&gt; as of today, AWS has released a preview of the new version under a new name. Read more about the new &lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/08/announcing-the-aws-controllers-for-kubernetes-preview/"&gt;AWS Controllers for Kubernetes&lt;/a&gt; on the AWS blog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Azure Service Operator
&lt;/h2&gt;

&lt;p&gt;A few days ago, Azure &lt;a href="https://cloudblogs.microsoft.com/opensource/2020/06/25/announcing-azure-service-operator-kubernetes/"&gt;announced the release&lt;/a&gt; of the Azure Service Operator. The &lt;a href="https://github.com/Azure/azure-service-operator"&gt;repository&lt;/a&gt; shows active development and the latest release was 10 days ago. This leaves a solid first impression.&lt;/p&gt;

&lt;p&gt;The Azure Service Operator states support for a number of Azure services like EventHub, Azure SQL, CosmosDB, Storage Accounts, and more. For teams running applications on Kubernetes on Azure, having access to an operator maintained by Microsoft is a promising option.&lt;/p&gt;

&lt;h2&gt;
  
  
  Crossplane
&lt;/h2&gt;

&lt;p&gt;Where the AWS and Azure Service Operators are cloud specific, &lt;a href="https://crossplane.io/"&gt;Crossplane&lt;/a&gt; is an open-source project that offers a multi-cloud solution. The &lt;a href="https://github.com/crossplane/crossplane"&gt;repository&lt;/a&gt; shows active and recent development. Crossplane is divided into a cloud agnostic part and cloud specific providers. These providers exist for AWS, GCP, Azure and Alibaba.&lt;/p&gt;

&lt;p&gt;Crossplane is supported by &lt;a href="https://upbound.io/"&gt;Upbound&lt;/a&gt; and not one of the cloud providers directly, but makes up for that by offering multi-cloud support.&lt;/p&gt;

&lt;p&gt;Even if multi-cloud support is not a requirement, Crossplane is worth taking a look at because it has been around significantly longer which suggests more maturity. Looking at the custom resources Crossplane makes available inside the cluster it seems there are about 70 different services supported between the four cloud providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Operator
&lt;/h2&gt;

&lt;p&gt;In March, Hashicorp &lt;a href="https://www.hashicorp.com/blog/creating-workspaces-with-the-hashicorp-terraform-operator-for-kubernetes/"&gt;announced&lt;/a&gt; the alpha release of the official Terraform operator. Terraform already has wide support for cloud resources across many cloud providers, including their managed services. But as discussed earlier, having to maintain both Kubernetes YAML and Terraform HCL in one repository and having to run Terraform as part of the application pipeline adds complexity to that pipeline.&lt;/p&gt;

&lt;p&gt;The Terraform operator takes the approach that users specify a Workspace custom resource, which it syncs with the corresponding workspace in the mandatory, linked Terraform Cloud account. Based on the activity in the &lt;a href="https://github.com/hashicorp/terraform-k8s"&gt;repository&lt;/a&gt;, the Terraform operator also seems actively developed.&lt;/p&gt;

&lt;p&gt;For teams already invested into Terraform, this may be a good option. Given Terraform’s wide support, it may also support significantly more services than the other three. But this approach isn’t as purpose built as the other three options we discussed earlier and may require all team members to get comfortable with HCL and the Terraform concepts of providers and modules.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;For teams that want to keep their application pipelines simple and only have to worry about maintaining Kubernetes YAML to specify their application’s requirements, there are options. Handling it this way makes perfect sense, because it helps further strengthen the separation between application and platform layer that Kubernetes enables.&lt;/p&gt;

&lt;p&gt;With the AWS and Azure Service Operators there are two examples of implementations directly from the cloud providers. Crossplane is a multi-cloud alternative maintained as an open-source project. And with the Terraform Operator, there is also an option to keep the application pipeline simple while still having access to the Terraform ecosystem.&lt;/p&gt;

&lt;p&gt;None of the solutions have announced general availability yet, so it’s too early to make any recommendations. But the approach described here is a powerful way to help teams clearly separate application and platform automation, and that very much justifies keeping an eye on future development in this space.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>googlecloud</category>
      <category>azure</category>
    </item>
    <item>
      <title>Why now is the time for the Spring Boot of infrastructure automation</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Wed, 24 Jun 2020 11:13:07 +0000</pubDate>
      <link>https://dev.to/pst418/why-now-is-the-time-for-the-spring-boot-of-infrastructure-automation-46dd</link>
      <guid>https://dev.to/pst418/why-now-is-the-time-for-the-spring-boot-of-infrastructure-automation-46dd</guid>
      <description>&lt;p&gt;&lt;a href="https://www.gartner.com/smarterwithgartner/the-secret-to-devops-success/"&gt;Gartner predicts&lt;/a&gt; that through 2022, 75% of DevOps initiatives will fail to meet expectations due to issues around organizational learning and change.&lt;/p&gt;

&lt;p&gt;But DevOps initiatives are important, because they enable critical capabilities to compete in the future. Without demolishing the figurative wall legacy IT built between development and operations, delivering the kind of digital experiences customers expect today will more likely fail than not.&lt;/p&gt;

&lt;p&gt;So before we can get into how an infrastructure automation framework can help your DevOps initiative succeed, we first need to take a closer look at why so many fail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do so many DevOps initiatives fail?
&lt;/h2&gt;

&lt;p&gt;In the last two years at &lt;a href="https://www.container-solutions.com/"&gt;Container Solutions&lt;/a&gt;, I’ve worked with teams from steel to fintech and agriculture to telecommunications on various aspects of their cloud native transformations. Adopting DevOps is always a big part of that transformation. Probably the hardest thing for companies is to adapt their existing organizational structure to cross-functional DevOps teams.&lt;/p&gt;

&lt;p&gt;Organizations need this change to happen, because digital experiences are already the new normal for a majority of consumers. This puts enormous pressure on the teams. The changes required are huge, especially from a legacy operations perspective. Almost everything changes. Team setup, workflow, responsibilities and tooling. And don’t forget it’s rarely on a green field. People in those initiatives are often asked to continue supporting the old while at the same time learning the new.&lt;/p&gt;

&lt;p&gt;For individuals, change brings lots of uncertainty. People may not openly admit it — or worse, hide it behind outright hostility — but self-doubt is common and most certainly understandable. People wonder if they personally will be able to succeed in this new world and are afraid of what this means for their careers.&lt;/p&gt;

&lt;p&gt;And on top of the high pressure and the self-doubt, still overshadowed by the “them vs. us” mindset that legacy IT nurtured for years, people with a traditional IT-ops background are asked to work more like “them”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application teams and platform teams
&lt;/h2&gt;

&lt;p&gt;First, getting the “them vs. us” mindset out of the way is crucial. The way I like to think about this is: all teams are cross-functional DevOps teams. Some just work at the application layer and some at the platform layer.&lt;/p&gt;

&lt;p&gt;But still, while application teams use Spring Boot or similar frameworks to move fast, platform teams have to build infrastructure and automation by integrating low level tooling from scratch.&lt;/p&gt;

&lt;p&gt;This lack of tool chain maturity, combined with the high pressure and the need to learn new skills, adds a strong and counterproductive feeling to the mix: the feeling of being treated unfairly. And it’s hard not to feel treated unfairly if you’re expected to move as fast as your application counterparts while having a dramatically less mature tool chain at your disposal.&lt;/p&gt;

&lt;p&gt;It is as if one team is given a 3D printer to print a house and the other team is given a bunch of parts to first assemble the excavator before they can dig the hole for the foundation with it.&lt;/p&gt;

&lt;h2&gt;Application frameworks and platform frameworks?&lt;/h2&gt;

&lt;p&gt;In the past, the abstraction between applications and infrastructure was the operating system. But operating systems are a weak abstraction, which leads to application requirements leaking into the platform layer. This meant infrastructure configuration often did not have enough in common, even between application tiers, for reusable framework components to be feasible.&lt;/p&gt;

&lt;p&gt;Containers did not improve the abstraction layer qualities of operating systems, but they still fixed this issue by allowing separate application and platform layer operating system instances.&lt;/p&gt;

&lt;p&gt;Furthermore, Kubernetes sits right at the border between the application and platform layers and provides a robust API that gives teams clear responsibilities and keeps them from interfering with each other.&lt;/p&gt;

&lt;p&gt;This is huge. A powerful abstraction between the layers marks a paradigm shift that — for the first time — allows platform teams to benefit from frameworks as well.&lt;/p&gt;

&lt;p&gt;I believe that not only is now the time for such a framework but that it will also help more DevOps initiatives to succeed.&lt;/p&gt;

&lt;h2&gt;How can frameworks help your DevOps initiative succeed?&lt;/h2&gt;

&lt;p&gt;Frameworks are popular in application development because they help teams move faster, both when creating new applications and when maintaining them.&lt;/p&gt;

&lt;p&gt;Bringing the same benefit to the platform layer levels the playing field with the application layer. It removes the reason to feel unfairly treated, takes the emotion out and eliminates a big source of conflict.&lt;/p&gt;

&lt;p&gt;But frameworks, built from reusable components, also allow teams to use these components without needing to fully understand their implementation. That lets teams make progress even when the technology is new to them. Usage and common use-cases can be documented, and this documentation is a prime resource for teams to learn the new skills and knowledge they need to drive their company’s DevOps initiatives forward.&lt;/p&gt;

&lt;p&gt;Last but not least, frameworks foster a community of users who work on the same challenges and are available to help each other, whether by sharing experience in blog posts and conference talks, writing guides and tutorials, or being available as consultants.&lt;/p&gt;

&lt;p&gt;Frameworks have the potential to help your DevOps initiative succeed, because they help solve two of the biggest reasons for failure.&lt;/p&gt;

&lt;p&gt;First, the feeling of unfairness from needing to perform like the application counterpart while having a dramatically less mature tool chain at your disposal.&lt;/p&gt;

&lt;p&gt;And second, the uncertainty about whether you will personally be able to succeed. Having access to a framework gives people the trust and confidence to move forward and learn along the way.&lt;/p&gt;

&lt;h2&gt;Luckily, this is more than just theory&lt;/h2&gt;

&lt;p&gt;If you’re now thinking this all sounds great, if only such a framework existed, I have good news for you.&lt;/p&gt;

&lt;p&gt;I’ve been working on such a framework for the last one and a half years. Kubestack is an open-source &lt;a href="https://www.kubestack.com/lp/terraform-gitops-framework"&gt;Terraform GitOps framework&lt;/a&gt; for infrastructure automation. It’s designed for teams that want to automate Kubernetes-based infrastructure, not reinvent automation. Think of it this way: Kubestack is to Terraform and infrastructure automation what Spring Boot is to Java and cloud native applications.&lt;/p&gt;

&lt;p&gt;The framework supports all three major cloud providers and has been used as the foundation for a number of real-world customer projects as part of my colleagues’ and my consulting work. It is fully documented, has a step-by-step tutorial to help users get started and even includes a local &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-build-local-lab"&gt;GitOps development lab&lt;/a&gt;, so you can test-drive Kubestack and learn more about GitOps for infrastructure automation in the comfort of your own localhost.&lt;/p&gt;

&lt;p&gt;For questions, there is a &lt;a href="https://app.slack.com/client/T09NY5SBT/CMBCT7XRQ"&gt;#kubestack channel&lt;/a&gt; on the &lt;a href="https://slack.k8s.io/"&gt;Kubernetes community Slack&lt;/a&gt;, and the &lt;a href="https://github.com/kbst/terraform-kubestack"&gt;repository&lt;/a&gt; is on GitHub.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>frameworks</category>
      <category>terraform</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Speed up multi-stage Docker builds in CI/CD with Buildkit’s registry cache</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Wed, 29 Apr 2020 09:49:13 +0000</pubDate>
      <link>https://dev.to/pst418/speed-up-multi-stage-docker-builds-in-ci-cd-with-buildkit-s-registry-cache-11gi</link>
      <guid>https://dev.to/pst418/speed-up-multi-stage-docker-builds-in-ci-cd-with-buildkit-s-registry-cache-11gi</guid>
      <description>&lt;p&gt;Working on a &lt;a href="https://www.kubestack.com/framework/documentation"&gt;GitOps framework around Kubernetes&lt;/a&gt;, I naturally run everything in containers. The two Dockerfiles that matter most for me, unfortunately, both have to download a lot of dependencies not included in the repository at build time. Which means the layer cache is crucial. Unfortunately, ephemeral CI/CD runners like GitHub Actions, start each run with an empty cache.&lt;/p&gt;

&lt;p&gt;The first of the two Dockerfiles builds the image for the framework itself. This image is used for bootstrapping, automation runs and also disaster recovery. As such, it’s not your &lt;a href="https://github.com/kbst/terraform-kubestack/blob/da2ba2382ce9b85f76317c7caeeb2297bf2efa96/oci/Dockerfile"&gt;run-of-the-mill Dockerfile&lt;/a&gt;. It includes a number of dependencies installed from Debian packages, various Go binaries and, last but not least, the Python-based CLIs of AWS, Azure and Google Cloud. It makes heavy use of multi-stage builds and has separate build stages for common dependencies and each cloud provider’s specific dependencies. The layers of the final image also mirror the build stage logic.&lt;/p&gt;

&lt;p&gt;Dockerfile number two is for the &lt;a href="https://www.kubestack.com"&gt;Kubestack&lt;/a&gt; website itself. The site is built with Gatsby and has to download a lot of node modules during the build. The Dockerfile is optimized for cacheability and uses multi-stage builds, with a build environment based on Node.js and a final image based on Nginx to serve the static build.&lt;/p&gt;

&lt;p&gt;Build times for both the framework image and the website image benefit heavily from having a layer cache.&lt;/p&gt;

&lt;p&gt;Docker has had the ability to use an image as the build cache using the &lt;code&gt;--cache-from&lt;/code&gt; parameter for some time. This was my preferred option because I need the ability to build and push images anyway. Storing the cache alongside the image is a no-brainer in my opinion.&lt;/p&gt;

&lt;p&gt;For the website image the first step of my CI/CD pipeline is to pull the cache image. Note the &lt;code&gt;|| true&lt;/code&gt; at the end to ensure a missing cache doesn’t prevent my build from running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:latest-build-cache &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step two runs a build targeting the dev stage of my multi-stage Dockerfile and tags the result as the new build-cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cache-from&lt;/span&gt; gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:latest-build-cache &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--target&lt;/span&gt; dev &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:latest-build-cache &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step runs the actual build that produces the final image and tags it as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cache-from&lt;/span&gt; gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:latest-build-cache &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:&lt;span class="nv"&gt;$COMMIT_SHA&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, the pipeline pushes both images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:latest-build-cache
docker push gcr.io/&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;/&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;:&lt;span class="nv"&gt;$COMMIT_SHA&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a simple multi-stage build with only two stages, like my Gatsby website’s Dockerfile, this works pretty well.&lt;/p&gt;

&lt;p&gt;But when I tried this for a project with multiple build stages, one for Python and one for JS, specifying two images under &lt;code&gt;--cache-from&lt;/code&gt; never seemed to work reliably. Which is doubly unfortunate, because having a layer cache here would save the time spent downloading Python and JS dependencies on every run.&lt;/p&gt;
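&lt;p&gt;For illustration, here is roughly what that attempt looked like: one cache image per stage, with both passed to &lt;code&gt;--cache-from&lt;/code&gt;. This is a sketch, and the per-stage cache tags are hypothetical placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# pull one cache image per build stage, tolerating a cold cache
docker pull gcr.io/$PROJECT_ID/$REPO_NAME:python-build-cache || true
docker pull gcr.io/$PROJECT_ID/$REPO_NAME:js-build-cache || true

# pass both cache images; in my experience only some layers were ever reused
docker build \
  --cache-from gcr.io/$PROJECT_ID/$REPO_NAME:python-build-cache \
  --cache-from gcr.io/$PROJECT_ID/$REPO_NAME:js-build-cache \
  -t gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA \
  .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;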

&lt;p&gt;Having cache pull and cache build steps for every stage also makes the pipeline file increasingly verbose the more stages you have.&lt;/p&gt;

&lt;p&gt;So for the framework Dockerfile, I need something better.&lt;/p&gt;

&lt;p&gt;Enter BuildKit. BuildKit brings a number of improvements to container image building. The ones that won me over are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running build stages concurrently.&lt;/li&gt;
&lt;li&gt;Increasing cache-efficiency.&lt;/li&gt;
&lt;li&gt;Handling secrets during builds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apart from generally increasing cache efficiency, BuildKit also allows more control over caches when building with &lt;code&gt;buildctl&lt;/code&gt;. This is what I needed. BuildKit has three options for exporting the cache, called inline, registry and local. Local is not particularly interesting in my case, but would allow writing the cache to a directory. Inline includes the cache in the final image and pushes cache and image layers to the registry together; however, this only covers the cache for the final stage of a multi-stage build. The registry option, finally, allows pushing all cached layers of all stages as a separate image. This is what I needed for my framework Dockerfile.&lt;/p&gt;
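&lt;p&gt;As a quick reference, the three exporters map to &lt;code&gt;buildctl&lt;/code&gt; flags roughly as follows. This is a sketch for comparison, not taken verbatim from my pipeline, and the registry reference is a placeholder.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# local: write the cache to a directory on disk
--export-cache type=local,dest=/tmp/buildcache

# inline: embed the cache in the image itself, final stage only
--export-cache type=inline

# registry: push the cached layers of all stages as a separate image
--export-cache type=registry,ref=registry.example.com/image:buildcache,push=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;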

&lt;p&gt;Let’s take a look at how I’m using this in my pipeline. Having cache export and import built into BuildKit means I can reduce the three steps to one. And it stays one step, no matter how many stages my Dockerfile has.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--privileged&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;/oci:/tmp/work &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.docker:/root/.docker &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--entrypoint&lt;/span&gt; buildctl-daemonless.sh &lt;span class="se"&gt;\&lt;/span&gt;
  moby/buildkit:master &lt;span class="se"&gt;\&lt;/span&gt;
    build &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--frontend&lt;/span&gt; dockerfile.v0 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nv"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/work &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nv"&gt;dockerfile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/work &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--output&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;image,name&lt;span class="o"&gt;=&lt;/span&gt;kubestack/framework-dev:test-&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.sha &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;,push&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--export-cache&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;registry,ref&lt;span class="o"&gt;=&lt;/span&gt;kubestack/framework-dev:buildcache,push&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--import-cache&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;registry,ref&lt;span class="o"&gt;=&lt;/span&gt;kubestack/framework-dev:buildcache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one command handles pulling and importing the cache, building the image, exporting the cache, and pushing the image and the cache. By running the build inside a container, I also don’t have to worry about installing the BuildKit daemon and CLI. The only thing I needed to do was provide the &lt;code&gt;.docker/config.json&lt;/code&gt; to the build inside the container, so it could push the image and the cache to the registry.&lt;/p&gt;

&lt;p&gt;For a working example, take a look at the Kubestack &lt;a href="https://github.com/kbst/terraform-kubestack/blob/da2ba2382ce9b85f76317c7caeeb2297bf2efa96/.github/workflows/main.yml#L46"&gt;release automation pipeline&lt;/a&gt; on Github.&lt;/p&gt;

&lt;p&gt;Using the cache, the framework image builds in less than one minute, down from about three minutes with BuildKit but without the cache export and import.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>buildkit</category>
      <category>cicd</category>
      <category>cloudnative</category>
    </item>
    <item>
<title>Don’t mistake Kubestack for yet another way to build Kubernetes clusters</title>
      <dc:creator>Philipp Strube</dc:creator>
      <pubDate>Fri, 24 Apr 2020 14:59:08 +0000</pubDate>
      <link>https://dev.to/pst418/don-t-mistake-kubestack-for-yet-another-way-to-build-kubernetes-clusters-223</link>
      <guid>https://dev.to/pst418/don-t-mistake-kubestack-for-yet-another-way-to-build-kubernetes-clusters-223</guid>
      <description>&lt;p&gt;I’m writing this post for the ones with stuff to get done in the seemingly never sleeping cloud native community. Talk about two things that don’t mix well.&lt;/p&gt;

&lt;p&gt;If you only look at &lt;a href="https://www.kubestack.com"&gt;Kubestack&lt;/a&gt; briefly, you may mentally file it as yet another tool to build Kubernetes clusters and move on. But in my humble, and arguably biased, opinion, you’d be missing out.&lt;/p&gt;

&lt;p&gt;Kubestack aspires to be for GitOps and Terraform what Spring Boot and Rails are for application development in Java and Ruby respectively. Yes, that's a big goal.&lt;/p&gt;

&lt;p&gt;Kubestack does not turn a bunch of machines into a Kubernetes cluster. Instead, the open source framework maintains the desired state of clusters and cluster services and applies changes to an API, following a GitOps approach. That API may be a cloud provider API or the Kubernetes API, depending on which part of the desired state changed. Currently, Kubestack supports managed Kubernetes from Amazon (EKS), Azure (AKS) and Google (GKE). For on-premises and other cloud providers I consider Cluster API the most promising development.&lt;/p&gt;

&lt;p&gt;Teams basing their infrastructure automation on Kubestack can&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;reason about proposed changes using pull requests,&lt;/li&gt;
&lt;li&gt;and test proposed changes on the non-critical ops-environment&lt;/li&gt;
&lt;li&gt;before applying the changes to the critical apps-environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubestack enables this for both the infrastructure of the cluster itself and the services that run on top of the cluster. Think of services that need to be on the cluster before application workloads can run there. Using Kubestack, you can maintain both in one repository and have changes applied reliably through the same GitOps automation.&lt;/p&gt;

&lt;p&gt;Following this approach, teams can jointly make changes to both infrastructure and applications without one blocking the other.&lt;/p&gt;

&lt;p&gt;And that is Kubestack in a nutshell for you. If you want to learn more, head over to the &lt;a href="https://www.kubestack.com/framework/documentation"&gt;Kubestack GitOps framework documentation&lt;/a&gt; for all the details.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>gitops</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
