<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jan Lepsky</title>
    <description>The latest articles on DEV Community by Jan Lepsky (@janlepsky).</description>
    <link>https://dev.to/janlepsky</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1161293%2F94badf7f-303e-4ab2-b3fa-1d0cf7fcb1e8.jpeg</url>
      <title>DEV Community: Jan Lepsky</title>
      <link>https://dev.to/janlepsky</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/janlepsky"/>
    <language>en</language>
    <item>
      <title>Pulumi vs. Terraform: Choosing the Best Infrastructure as Code Solution</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Mon, 10 Feb 2025 16:19:08 +0000</pubDate>
      <link>https://dev.to/janlepsky/pulumi-vs-terraform-choosing-the-best-infrastructure-as-code-solution-gbp</link>
      <guid>https://dev.to/janlepsky/pulumi-vs-terraform-choosing-the-best-infrastructure-as-code-solution-gbp</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on the &lt;a href="https://mogenius.com/blog-posts/pulumi-vs-terraform-choosing-the-best-infrastructure-as-code-solution" rel="noopener noreferrer"&gt;mogenius blog&lt;/a&gt; by Ben Force.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Building an application in the cloud can become complex quickly. There are multiple cloud providers, each with hundreds of different resources. Even the same type of resource on different providers can have its own unique quirks. All of this complexity makes defining your infrastructure – and deploying it – a challenge in its own right.&lt;br&gt;
&lt;br&gt;
Over the years, two infrastructure as code (IaC) tools have become popular solutions to this problem: Terraform and Pulumi. Terraform is the more mature of the two, with a huge community and support for a lot of resource types. Pulumi is the relative newcomer, aiming to make it easier for application developers to define infrastructure.&lt;/p&gt;

&lt;p&gt;This article compares the two tools based on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic use: focusing on programming models, the learning curve, and testing&lt;/li&gt;
&lt;li&gt;Behind the scenes: comparing their state management and performance&lt;/li&gt;
&lt;li&gt;Support: looking at available cloud providers, the ecosystem, and the community&lt;/li&gt;
&lt;li&gt;Use cases and best-fit scenarios: highlighting when you should use one over the other&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Basic Use
&lt;/h2&gt;

&lt;p&gt;In this first section, you'll explore how day-to-day interactions with the two tools vary, and you'll begin to understand the different types of developers each tool caters to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Programming Model&lt;/strong&gt;&lt;br&gt;
The first notable difference between Pulumi and Terraform is the programming language. Terraform uses a custom language called &lt;a href="https://github.com/hashicorp/hcl" rel="noopener noreferrer"&gt;HashiCorp configuration language (HCL)&lt;/a&gt; to define infrastructure. In contrast, Pulumi offers more options, allowing you to choose &lt;a href="https://www.pulumi.com/docs/languages-sdks/" rel="noopener noreferrer"&gt;from a list of popular programming languages&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The two different approaches create a trade-off between power and simplicity. Since Pulumi uses general-purpose programming languages, you can implement complex logic and abstractions. While HCL has few advanced features by comparison, its simple declarative nature lets you look at a Terraform file and easily understand what it creates.&lt;/p&gt;

&lt;p&gt;Both Terraform and Pulumi are improving their programming models so they're more accessible. Terraform now offers &lt;a href="https://developer.hashicorp.com/terraform/cdktf" rel="noopener noreferrer"&gt;CDK for Terraform&lt;/a&gt;, which allows you to define your infrastructure using a common programming language, and Pulumi has a &lt;a href="https://www.pulumi.com/docs/iac/languages-sdks/yaml/" rel="noopener noreferrer"&gt;YAML-based configuration option&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Curve and Developer Experience&lt;/strong&gt;&lt;br&gt;
Both Terraform and Pulumi require you to understand the cloud resources you're creating. At the time of writing, &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html" rel="noopener noreferrer"&gt;AWS lists 246 services that each have multiple resource types&lt;/a&gt; you can create. On top of that, you need to understand how to give each of these resources permission to access other resources and how to connect them together. So if you're new to infrastructure development, there's a bit of a learning curve before you start using either tool.&lt;/p&gt;

&lt;p&gt;If you're a software engineer familiar with cloud resources, Pulumi will be a natural fit. As mentioned, it supports multiple popular programming languages, such as TypeScript and Python, so there's likely one you're already familiar with. Pulumi also offers &lt;a href="https://www.pulumi.com/ai" rel="noopener noreferrer"&gt;Pulumi AI&lt;/a&gt;, which allows you to describe the infrastructure you want, and it will generate the code needed to create it.&lt;/p&gt;

&lt;p&gt;Terraform, on the other hand, has a steeper initial learning curve since you'll have to learn HCL. Creating simple configurations doesn't take long to figure out, but the syntax for dynamically creating multiple resources can take longer to get used to.&lt;/p&gt;
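&lt;p&gt;For example, the &lt;strong&gt;for_each&lt;/strong&gt; meta-argument is one of the constructs that takes getting used to. Here's a minimal sketch (the bucket names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucket_names" {
  type    = set(string)
  default = ["logs", "backups"] # hypothetical names
}

# Creates one S3 bucket per entry in var.bucket_names
resource "aws_s3_bucket" "this" {
  for_each = var.bucket_names
  bucket   = "example-${each.value}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;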

&lt;p&gt;&lt;strong&gt;Testing and Validation&lt;/strong&gt;&lt;br&gt;
Terraform and Pulumi both offer testing to make sure you're deploying the resources that you expect to.&lt;/p&gt;

&lt;p&gt;Pulumi supports &lt;a href="https://www.pulumi.com/docs/using-pulumi/testing/" rel="noopener noreferrer"&gt;three different types of tests&lt;/a&gt;: unit, integration, and property. You can write unit and integration tests for your project using the testing framework of your choice. Unit tests mock any calls to external resources, so they run much faster, and Pulumi even offers &lt;a href="https://www.pulumi.com/docs/using-pulumi/testing/unit/#add-mocks" rel="noopener noreferrer"&gt;tools to mock resources&lt;/a&gt;. Integration tests deploy a temporary copy of your defined infrastructure and then run your tests against the deployed resources.&lt;/p&gt;

&lt;p&gt;Pulumi's property tests allow you to define company-wide policies and check all your infrastructure to ensure compliance. However, these tests are only &lt;a href="https://www.pulumi.com/docs/using-pulumi/crossguard/" rel="noopener noreferrer"&gt;supported in a few languages&lt;/a&gt; and don't use standard testing frameworks.&lt;/p&gt;

&lt;p&gt;Terraform provides a built-in testing framework to validate your project. The testing framework supports both unit and integration tests. It handles unit tests by creating a deployment plan and running the tests against that plan. While Terraform's testing framework allows you to validate basic properties, you may need to write more advanced tests to verify your virtual machines are set up correctly. You can create advanced tests using open source testing tools like &lt;a href="https://terratest.gruntwork.io/" rel="noopener noreferrer"&gt;Terratest&lt;/a&gt; and &lt;a href="https://newcontext-oss.github.io/kitchen-terraform/" rel="noopener noreferrer"&gt;Kitchen-Terraform&lt;/a&gt;.&lt;/p&gt;
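&lt;p&gt;With Terraform 1.6 or later, the built-in framework runs test files via the &lt;strong&gt;terraform test&lt;/strong&gt; command. A rough sketch of what such a file can look like (the resource and names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# tests/bucket.tftest.hcl
run "bucket_name_matches" {
  # "plan" makes this a unit-style test: nothing is deployed
  command = plan

  assert {
    condition     = aws_s3_bucket.logs.bucket == "example-logs"
    error_message = "Bucket name drifted from the expected value."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;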

&lt;p&gt;&lt;a href="https://mogenius.com/whitepaper?utm_source=devto&amp;amp;utm_medium=article_post&amp;amp;utm_campaign=pulumi-vs-terraform" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gvske6buoud5yunpybw.png" alt="mogenius whitepaper" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Behind the Scenes
&lt;/h2&gt;

&lt;p&gt;Now that you've seen some of the trade-offs in the everyday use of the two tools, let's look at how they differ behind the scenes, focusing on their state management and performance.&lt;br&gt;
&lt;br&gt;
&lt;strong&gt;State Management&lt;/strong&gt;&lt;br&gt;
Every time you change your IaC project, both Terraform and Pulumi compare the desired end state with the current state. To do this, they store the state of every resource after each deployment as a reference for the next deployment. Both Pulumi and Terraform offer multiple methods to store this state.&lt;/p&gt;

&lt;p&gt;Pulumi offers a managed service, a self-hosted service, and the option to store the state in an object store like AWS S3. All of these state-storage backends support state locking, which ensures only one deployment can run at a time for a specific project. The lock is checked before a deployment starts; if a lock is found, the deployment waits until it's released. This lets you safely trigger multiple deployments at the same time, since only one set of changes is ever applied at once.&lt;/p&gt;
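&lt;p&gt;Switching between these backends is just a matter of logging in to the one you want (the bucket name below is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the managed Pulumi Cloud backend (the default)
pulumi login

# Or keep the state yourself in an S3 bucket
pulumi login s3://my-pulumi-state-bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;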

&lt;p&gt;Terraform stores its state in a local file by default, but you can configure your project to use several different services, like AWS S3, using the &lt;code&gt;backend&lt;/code&gt; block. Alternatively, if you subscribe to HashiCorp's cloud service, you can configure it using the &lt;code&gt;cloud&lt;/code&gt; block. While Terraform offers many ways to store your state, not all of them support locking.&lt;/p&gt;
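&lt;p&gt;As a sketch, an S3 backend with DynamoDB-based state locking looks something like this (the bucket and table names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # hypothetical bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock" # enables state locking
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;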

&lt;p&gt;&lt;strong&gt;Performance and Scalability&lt;/strong&gt;&lt;br&gt;
Pulumi handles large updates efficiently by relying on its saved state when planning updates. When applying the updates, it runs all possible change operations in parallel.&lt;/p&gt;

&lt;p&gt;Similarly to Pulumi, Terraform applies updates in parallel. By default, however, it will make requests to refresh the state of every resource for each plan and apply operation. If you have an exceptionally large project, Terraform can cache all resource properties in the state file and use it as the source of truth instead of constantly making calls to your cloud provider to check resource properties on every deployment.&lt;/p&gt;
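&lt;p&gt;In practice, that means skipping the refresh step on large projects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Plan and apply against the cached state instead of querying
# the cloud provider for the current state of every resource
terraform plan -refresh=false -out=tfplan
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;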
&lt;h2&gt;
  
  
  Support
&lt;/h2&gt;

&lt;p&gt;This section looks at Terraform's and Pulumi's support for different cloud providers and resources and how you can get help when you need it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud Provider Support&lt;/strong&gt;&lt;br&gt;
Terraform has been around since 2014 and has built up solid support for different providers, which have been developed by the service providers themselves, HashiCorp, and third-party developers. As of right now, there are &lt;a href="https://registry.terraform.io/browse/providers" rel="noopener noreferrer"&gt;almost 4,500 providers available in the Terraform Registry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In contrast, Pulumi has only about &lt;a href="https://www.pulumi.com/registry/" rel="noopener noreferrer"&gt;160 providers&lt;/a&gt; available in the Pulumi registry. However, since the major cloud providers are supported, this is enough for most projects. If Pulumi lacks a provider for a specific resource that Terraform supports, you can create a Pulumi Terraform bridge, which adapts an existing Terraform provider for use with Pulumi. While this process isn't as painless as adding a provider to your dependencies, it's easier than creating a new provider from scratch.&lt;/p&gt;

&lt;p&gt;Pulumi also has a tool in beta that automatically generates one of the Terraform bridge packages mentioned above, making the process a lot easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ecosystem and Integrations&lt;/strong&gt;&lt;br&gt;
In addition to supporting a bunch of cloud providers, Pulumi has several first-party integrations that you can add to your CI/CD pipelines. These integrations not only deploy your infrastructure changes but also add comments to merge request reviews so you can easily see what will be changed when the code is committed.&lt;/p&gt;

&lt;p&gt;As mentioned above, Terraform has many providers available. It has even more modules, &lt;a href="https://registry.terraform.io/browse/modules" rel="noopener noreferrer"&gt;about 18,000 at the time of writing&lt;/a&gt;. While this may seem to make Terraform the clear winner, remember that much of this functionality already exists as ordinary libraries in your favorite programming language. For example, a module to generate a UUID doesn't need to be in the Pulumi registry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;br&gt;
Pulumi is released under the &lt;a href="https://github.com/pulumi/pulumi/blob/master/LICENSE" rel="noopener noreferrer"&gt;Apache 2.0 license&lt;/a&gt;, which means you can build products using it and sell those products to customers. Terraform, on the other hand, used to be released under the Mozilla Public License but has since changed to the Business Source License. This license still allows you to use Terraform internally, but if you want to build your own product on top of it, you're going to run into legal issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community and Support&lt;/strong&gt;&lt;br&gt;
Both Terraform and Pulumi are backed by for-profit companies, which provides some assurance that they'll be able to support their products for a while. Of the two, HashiCorp is the more stable, since it was acquired by IBM, while Pulumi is a venture-funded startup.&lt;/p&gt;

&lt;p&gt;Terraform is generally accepted as the de facto standard IaC solution, so it has extensive support options available. The internet is filled with years of Stack Overflow answers, blog posts, and sample projects on GitHub. If you can't find a solution in one of those places, you can probably find someone at your company with some Terraform experience.&lt;/p&gt;

&lt;p&gt;Though it hasn't been around as long, Pulumi still had over &lt;a href="https://www.pulumi.com/blog/october-23-roundup/" rel="noopener noreferrer"&gt;150,000 end users&lt;/a&gt; as of October 2023. While there are fewer Pulumi-specific resources overall, much of your experience with your chosen programming language carries over. Pulumi also has well-written documentation.&lt;/p&gt;

&lt;p&gt;Both Terraform and Pulumi offer enterprise subscriptions that include support plans. Both charge based on the number of resources that you're managing. Terraform's Standard subscription offers support and costs about $0.10 per resource per month. To get a support plan for Pulumi, however, you have to spring for the Enterprise subscription, which will set you back $1.10 per resource per month.&lt;/p&gt;
&lt;h2&gt;
  
  
  Use Cases and Best-Fit Scenarios
&lt;/h2&gt;

&lt;p&gt;Now that you've seen some of the pros and cons of Terraform and Pulumi, how do you decide which one will work best in a given situation? It comes down to what kind of project you're creating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless Applications: Pulumi&lt;/strong&gt;&lt;br&gt;
If you have a TypeScript serverless application with several small functions, Pulumi is the way to go. It allows you to define your function's code in-line with the infrastructure definition, and it will handle the build step for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Applications with Virtual Networks: Terraform&lt;/strong&gt;&lt;br&gt;
When building a more traditional application with a lot of virtual networks and virtual machines, Terraform has a strong track record. Because it's been around for so long, it's likely that the people defining virtual networks will already be familiar with Terraform, making it a natural fit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom Resources: Pulumi&lt;/strong&gt;&lt;br&gt;
Pulumi offers dynamic resource providers, which enable you to easily support custom resources by defining a few functions to perform CRUD operations. You can also use a dynamic resource provider to load test data into a developer environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audited/Regulated Projects: Terraform&lt;/strong&gt;&lt;br&gt;
If you're developing a project for a regulated industry, Terraform can simplify the process of auditing your infrastructure. Due to its declarative nature, it's easier to understand what the code is doing. It also makes it more difficult for noncompliant changes to sneak in.&lt;/p&gt;
&lt;h2&gt;
  
  
  Quick Comparison
&lt;/h2&gt;

&lt;p&gt;Here's a table that quickly summarizes each of the comparison points:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;&lt;/th&gt;&lt;th&gt;Terraform&lt;/th&gt;&lt;th&gt;Pulumi&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Programming model&lt;/td&gt;&lt;td&gt;HCL (CDK for Terraform available)&lt;/td&gt;&lt;td&gt;Popular general-purpose languages (YAML available)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Testing&lt;/td&gt;&lt;td&gt;Built-in framework, plus Terratest and Kitchen-Terraform&lt;/td&gt;&lt;td&gt;Unit, integration, and property tests&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;State management&lt;/td&gt;&lt;td&gt;Local file or remote backends; locking varies by backend&lt;/td&gt;&lt;td&gt;Managed service, self-hosted, or object store; all support locking&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Providers&lt;/td&gt;&lt;td&gt;Almost 4,500&lt;/td&gt;&lt;td&gt;About 160, plus Terraform bridges&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;License&lt;/td&gt;&lt;td&gt;Business Source License&lt;/td&gt;&lt;td&gt;Apache 2.0&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Support pricing&lt;/td&gt;&lt;td&gt;About $0.10 per resource per month (Standard)&lt;/td&gt;&lt;td&gt;$1.10 per resource per month (Enterprise)&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you've seen that Terraform and Pulumi aren't one-size-fits-all solutions. Each has its strengths and weaknesses that need to be considered depending on your situation. Terraform is the established solution that works best for a large CloudOps team. Pulumi, on the other hand, is a great solution if your backend (or full-stack) developers need to manage a project's infrastructure.&lt;/p&gt;

&lt;p&gt;If you use Terraform or Pulumi to create a Kubernetes cluster, it doesn't help much beyond getting the computing resources in place. To gain better visibility into your Kubernetes services across multiple environments, check out &lt;a href="https://mogenius.com" rel="noopener noreferrer"&gt;mogenius&lt;/a&gt;. Its free plan offers a full feature set to deploy and troubleshoot applications on any Kubernetes cluster, and it also provides intuitive onboarding.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>terraform</category>
      <category>pulumi</category>
      <category>devops</category>
    </item>
    <item>
      <title>How and When to Use Terraform with Kubernetes</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Fri, 24 Jan 2025 15:19:13 +0000</pubDate>
      <link>https://dev.to/janlepsky/how-and-when-to-use-terraform-with-kubernetes-4ofo</link>
      <guid>https://dev.to/janlepsky/how-and-when-to-use-terraform-with-kubernetes-4ofo</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on the &lt;a href="https://mogenius.com/blog-posts/how-and-when-to-use-terraform-with-kubernetes" rel="noopener noreferrer"&gt;mogenius blog&lt;/a&gt; by Kovid Rathee.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is an infrastructure as code tool that replaces the &lt;a href="https://blog.equinix.com/blog/2022/12/01/what-is-clickops-and-how-can-you-prevent-it/" rel="noopener noreferrer"&gt;ClickOps&lt;/a&gt; method of defining, deploying, and managing infrastructure locally, on-premises, or in the cloud. Its declarative method of defining infrastructure lets you focus on the target state of the infrastructure rather than the steps needed to achieve that state, which makes managing infrastructure easier.&lt;/p&gt;

&lt;p&gt;Terraform has its own declarative language called the &lt;a href="https://developer.hashicorp.com/terraform/language" rel="noopener noreferrer"&gt;HashiCorp configuration language&lt;/a&gt; (also called the Terraform language) for defining the infrastructure you want, and its &lt;a href="https://developer.hashicorp.com/terraform/cli" rel="noopener noreferrer"&gt;command line tool&lt;/a&gt; makes it easy to perform operations on your infrastructure.&lt;/p&gt;

&lt;p&gt;In this article, you'll learn how and when to use Terraform with &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; with an example of a local &lt;a href="https://nginx.org/en/" rel="noopener noreferrer"&gt;nginx&lt;/a&gt; deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Terraform with Kubernetes
&lt;/h2&gt;

&lt;p&gt;Terraform is a platform-agnostic tool, so you can use it to manage your entire stack irrespective of the underlying frameworks, libraries, cloud platforms, etc. It enables multicloud and hybrid-cloud development by bringing all your infrastructure definitions into one place, which makes managing infrastructure, costs, and security easier.&lt;/p&gt;

&lt;p&gt;Combining Terraform with your version control system and CI/CD pipeline also helps you automate infrastructure deployment. If you store your Terraform project in a Git repository, each pull request can be accompanied by the output of &lt;strong&gt;terraform plan&lt;/strong&gt;, describing the changes the PR would make to the infrastructure. The reviewer can then approve the code changes, triggering a &lt;strong&gt;terraform apply -auto-approve&lt;/strong&gt; command.&lt;/p&gt;
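&lt;p&gt;A minimal version of that workflow uses a saved plan file so the reviewed plan is exactly what gets applied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On the PR: save the plan and post its output for review
terraform plan -out=tfplan
terraform show -no-color tfplan  # paste this output into the PR

# After approval: apply the exact plan that was reviewed
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;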

&lt;p&gt;Using Kubernetes with Terraform lets you define all your infrastructure – such as networking, storage volumes, databases, security groups, firewalls, DNS, etc. – within the Terraform project. Terraform comes in handy when you're dealing with all the infrastructure in multicloud and hybrid-cloud environments, especially when you're managing dependencies between different Kubernetes objects and cloud resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Terraform with Kubernetes
&lt;/h2&gt;

&lt;p&gt;Let's now look at how to use Terraform with Kubernetes by walking through an example of setting up two separate nginx deployments.&lt;/p&gt;

&lt;p&gt;For simplicity, you'll be deploying to a cluster on your local machine, which is great for development, but in a real-world scenario, the cluster would likely be hosted in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up the Environment
&lt;/h2&gt;

&lt;p&gt;To use Terraform with Kubernetes, you'll need the following tools installed on your system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;Docker Engine&lt;/a&gt;: lays the foundation for deploying containers using Minikube on your local machine&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt;: acts as your local Kubernetes cluster, which you'll use to run separate nginx deployments and services&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;: lets you access Kubernetes resources – such as pods, deployments, and services – from your command line&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.hashicorp.com/terraform/cli/commands" rel="noopener noreferrer"&gt;Terraform CLI&lt;/a&gt;: lets you work with a Terraform project from your command line (i.e., you can use this tool to plan, apply, and destroy infrastructure)&lt;/li&gt;
&lt;li&gt;Git: the version control system that lets you download and manage the source code for the module you'll be working with in this tutorial&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ensure Docker Engine and Minikube are up and running. Use the following commands to start Minikube:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start
minikube dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see an output resembling the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜  ~ minikube dashboard
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:57560/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make a note of the URL &lt;a href="http://127.0.0.1:57560" rel="noopener noreferrer"&gt;http://127.0.0.1:57560&lt;/a&gt; (the port will differ on your machine), as you'll need it later in this tutorial.&lt;/p&gt;

&lt;p&gt;Check the status of your Docker machine by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, clone this GitHub repository that contains the Terraform nginx module for this tutorial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/kovid-r/terraform-nginx-k8s.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Though you can write all your Terraform code in a single &lt;strong&gt;.tf&lt;/strong&gt; file, that's considered bad practice for code management. In the &lt;a href="https://github.com/kovid-r/terraform-nginx-k8s" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;, you'll find the code split into the following three files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;main.tf&lt;/strong&gt;: contains the definitions of the Kubernetes namespace, deployment, and service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;variables.tf&lt;/strong&gt;: contains the definitions of Terraform variables, for which you'll provide values via parameters from the Minikube configuration file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;versions.tf&lt;/strong&gt;: contains the definition of the Kubernetes provider, along with its source and version&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://github.com/kovid-r/terraform-nginx-k8s" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt; also contains more information on how the files and directories can be structured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mogenius.com/whitepaper?utm_source=devto&amp;amp;utm_medium=article_post&amp;amp;utm_campaign=terraform-kubernetes" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0cujtowfyothdmpieg3.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Configuring the Terraform Kubernetes Provider
&lt;/h2&gt;

&lt;p&gt;This tutorial uses HashiCorp's official &lt;a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="noopener noreferrer"&gt;Kubernetes provider&lt;/a&gt; to create the Kubernetes resources you'll be using.&lt;/p&gt;

&lt;p&gt;When you run the &lt;strong&gt;terraform init&lt;/strong&gt; command for the first time, the following snippet from the &lt;strong&gt;versions.tf&lt;/strong&gt; file in the GitHub repository will download the provider into your local Terraform project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~&amp;gt; 2.11" # Update based on latest stable version
    }
  }

  required_version = "&amp;gt;= 1.0.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating and Configuring a Terraform nginx Module
&lt;/h2&gt;

&lt;p&gt;To deploy nginx using Kubernetes and Terraform, you first need a Terraform nginx module. Terraform lets you create reusable modules when creating new resources in your infrastructure. Although creating modules is easy, HashiCorp generally advises using them in moderation, especially when they don't add a new abstraction to your application architecture.&lt;/p&gt;

&lt;p&gt;You could follow Terraform's &lt;a href="https://developer.hashicorp.com/terraform/tutorials/modules/module-create" rel="noopener noreferrer"&gt;official tutorial&lt;/a&gt; to build your own module, but for this tutorial, you'll use the prebuilt &lt;strong&gt;terraform-nginx-k8s&lt;/strong&gt; module from the &lt;a href="https://github.com/kovid-r/terraform-nginx-k8s" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. This module is based on the official Kubernetes provider from the Terraform registry.&lt;/p&gt;

&lt;p&gt;Before deploying the &lt;strong&gt;terraform-nginx-k8s&lt;/strong&gt; module, let's walk through how to configure its core components and how it defines various Kubernetes resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Kubernetes Namespace for Deploying Resources
&lt;/h2&gt;

&lt;p&gt;You can use Kubernetes namespaces to divide a single cluster into isolated groups of resources that can be managed individually. The module uses the following code to create a new namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## resource "kubernetes_namespace" "nginx" {
  metadata {
    name = var.namespace
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new &lt;strong&gt;kubernetes_namespace&lt;/strong&gt; with the default name set to nginx, which you can override via the module's variables. Later in the tutorial, you'll use this code to create two namespaces, &lt;strong&gt;nginx-cluster-one-ns&lt;/strong&gt; and &lt;strong&gt;nginx-cluster-two-ns&lt;/strong&gt;, and you'll create an nginx deployment in each of them.&lt;/p&gt;
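&lt;p&gt;In &lt;strong&gt;variables.tf&lt;/strong&gt;, the namespace variable looks roughly like this (a sketch, not the repository's exact contents):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "namespace" {
  description = "Name of the Kubernetes namespace to create"
  type        = string
  default     = "nginx"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;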
&lt;h2&gt;
  
  
  Defining the nginx Deployment
&lt;/h2&gt;

&lt;p&gt;Next, the module defines the nginx deployment using the kubernetes_deployment resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_deployment" "nginx" {
  metadata {
    name      = "nginx-deployment"
    namespace = kubernetes_namespace.nginx.metadata[0].name
    labels = {
      app = "nginx"
    }
  }

  spec {
    replicas = var.replicas

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:${var.nginx_version}"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = var.cpu_limit
              memory = var.memory_limit
            }
          }
        }
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resource takes arguments – such as the namespace, number of replicas, nginx version, CPU limit, and memory limit – from the &lt;strong&gt;variables.tf&lt;/strong&gt; file. You can override the default variable definitions when you call the module. For example, when you import the module to configure the Kubernetes deployment later in the tutorial, you'll define some of the variables as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace                         = "nginx-cluster-one-ns"
nginx_version                     = "1.21.1"
replicas                          = 2
cpu_limit                         = "250m"
memory_limit                      = "128Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This specifies the nginx version and the resources you want to provide the server.&lt;/p&gt;
&lt;h2&gt;
  
  
  Defining a Kubernetes Service for nginx
&lt;/h2&gt;

&lt;p&gt;Finally, the module defines a Kubernetes service, which exposes the nginx pods running in your cluster behind a single stable endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_service" "nginx" {
  metadata {
    name      = "nginx-service"
    namespace = kubernetes_namespace.nginx.metadata[0].name
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = var.service_type
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying the Module and Applying the Configuration
&lt;/h2&gt;

&lt;p&gt;Now that you've run through all the core components of the Kubernetes nginx module, you can deploy it and apply your own configuration.&lt;/p&gt;

&lt;p&gt;First, define the two separate nginx clusters by placing the following snippet in a &lt;strong&gt;.tf&lt;/strong&gt; file in a new directory of your choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "nginx_cluster_one" {
  source = "/path/to/nginx-module/"
  kubernetes_host                   = "http://127.0.0.1:57560/" # replace with your minikube dashboard address
  kubernetes_client_certificate     = "/path/to/.minikube/profiles/minikube/client.crt"
  kubernetes_client_key             = "/path/to/.minikube/profiles/minikube/client.key"
  kubernetes_cluster_ca_certificate = "/path/to/.minikube/ca.crt"
  namespace                         = "nginx-cluster-one-ns"
  nginx_version                     = "1.21.1"
  replicas                          = 2
  cpu_limit                         = "250m"
  memory_limit                      = "128Mi"
  service_type                      = "ClusterIP"
}

module "nginx_cluster_two" {
  source = "/path/to/nginx-module/"
  kubernetes_host                   = "http://127.0.0.1:57560/" # replace with your minikube dashboard address
  kubernetes_client_certificate     = "/path/to/.minikube/profiles/minikube/client.crt"
  kubernetes_client_key             = "/path/to/.minikube/profiles/minikube/client.key"
  kubernetes_cluster_ca_certificate = "/path/to/.minikube/ca.crt"
  namespace                         = "nginx-cluster-two-ns"
  nginx_version                     = "1.21.1"
  replicas                          = 3
  cpu_limit                         = "250m"
  memory_limit                      = "128Mi"
  service_type                      = "ClusterIP"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This imports the module for both clusters and provides cluster-specific values for variables like the Kubernetes namespace, nginx version, CPU limit, memory limit, and service type. Make sure to update the &lt;strong&gt;kubernetes_client_certificate&lt;/strong&gt;, &lt;strong&gt;kubernetes_client_key&lt;/strong&gt;, and &lt;strong&gt;kubernetes_cluster_ca_certificate&lt;/strong&gt; variables with the correct paths to your Minikube credentials, which are in the &lt;strong&gt;.minikube&lt;/strong&gt; folder in your home directory.&lt;/p&gt;
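&lt;p&gt;For context, the module presumably feeds these connection variables into the Kubernetes provider along the following lines. This is a sketch of the typical wiring, not necessarily the repository's exact code:&lt;/p&gt;

```hcl
# Illustrative provider wiring; the actual module may differ in detail.
provider "kubernetes" {
  host                   = var.kubernetes_host
  client_certificate     = file(var.kubernetes_client_certificate)
  client_key             = file(var.kubernetes_client_key)
  cluster_ca_certificate = file(var.kubernetes_cluster_ca_certificate)
}
```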

&lt;p&gt;Once you've updated the variables, run the following Terraform commands one after the other:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan 
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run the &lt;strong&gt;terraform apply&lt;/strong&gt; command, you'll be asked to confirm the planned actions; type &lt;strong&gt;yes&lt;/strong&gt; and press Enter. You'll then see output that resembles the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.nginx_cluster_one.kubernetes_namespace.nginx: Creating...
module.nginx_cluster_one.kubernetes_namespace.nginx: Creation complete after 0s [id=nginx-cluster-one-ns]
module.nginx_cluster_one.kubernetes_service.nginx: Creating...
module.nginx_cluster_one.kubernetes_deployment.nginx: Creating...
module.nginx_cluster_two.kubernetes_namespace.nginx: Creating...
module.nginx_cluster_two.kubernetes_namespace.nginx: Creation complete after 0s [id=nginx-cluster-two-ns]
module.nginx_cluster_one.kubernetes_service.nginx: Creation complete after 0s [id=nginx-cluster-one-ns/nginx-service]
module.nginx_cluster_two.kubernetes_service.nginx: Creating...
module.nginx_cluster_two.kubernetes_deployment.nginx: Creating...
module.nginx_cluster_two.kubernetes_service.nginx: Creation complete after 0s [id=nginx-cluster-two-ns/nginx-service]
module.nginx_cluster_one.kubernetes_deployment.nginx: Creation complete after 4s [id=nginx-cluster-one-ns/nginx-deployment]
module.nginx_cluster_two.kubernetes_deployment.nginx: Creation complete after 4s [id=nginx-cluster-two-ns/nginx-deployment]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the deployment is complete, use the &lt;strong&gt;terraform show&lt;/strong&gt; command to verify that the intended infrastructure was deployed. A sample output of the &lt;strong&gt;terraform show&lt;/strong&gt; command can be found in this &lt;a href="https://gist.github.com/kovid-r/e6a4951179d8648a627237a8293d9338" rel="noopener noreferrer"&gt;GitHub Gist&lt;/a&gt;.&lt;/p&gt;
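&lt;p&gt;A quicker sanity check than reading the full &lt;strong&gt;terraform show&lt;/strong&gt; output is &lt;strong&gt;terraform state list&lt;/strong&gt;, which prints one address per managed resource. With the configuration above, it should list six addresses along these lines:&lt;/p&gt;

```
terraform state list
# module.nginx_cluster_one.kubernetes_deployment.nginx
# module.nginx_cluster_one.kubernetes_namespace.nginx
# module.nginx_cluster_one.kubernetes_service.nginx
# module.nginx_cluster_two.kubernetes_deployment.nginx
# module.nginx_cluster_two.kubernetes_namespace.nginx
# module.nginx_cluster_two.kubernetes_service.nginx
```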

&lt;p&gt;Alternatively, use the &lt;strong&gt;kubectl CLI&lt;/strong&gt; to get information about both your clusters and the resources deployed in them. Here's how you get all the resources deployed in the namespace &lt;strong&gt;nginx-cluster-one-ns&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get all -n nginx-cluster-one-ns
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-7d85df7dbc-ff2wz   1/1     Running   0          101m
pod/nginx-deployment-7d85df7dbc-g8xqh   1/1     Running   0          101m

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/nginx-service   ClusterIP   10.109.36.68   &amp;lt;none&amp;gt;        80/TCP    128m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/2     2            2           128m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-7d85df7dbc   2         2         2       128m
replicaset.apps/nginx-deployment-d55b49455    0         0         0       116m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the second nginx cluster deployed successfully, you should see similar output for &lt;strong&gt;nginx-cluster-two-ns&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This information is also available on the Kubernetes dashboard, as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q4gya2aygneis5xgkt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q4gya2aygneis5xgkt1.png" alt="Resources deployed in one of the clusters, as shown on the Kubernetes dashboard" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Scaling the nginx Deployment
&lt;/h2&gt;

&lt;p&gt;Scaling your nginx deployment is straightforward in Terraform. To scale up the nginx deployments in this tutorial, go to the &lt;strong&gt;.tf&lt;/strong&gt; file and make the following changes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In nginx_cluster_one, change the number of replicas from 2 to 4 and the CPU limit from 250m to 450m.&lt;/li&gt;
&lt;li&gt;In nginx_cluster_two, change the number of replicas from 3 to 5 and the CPU limit from 250m to 450m.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These changes allow your nginx deployments to handle more requests thanks to the increased resources. After making them, your &lt;strong&gt;.tf&lt;/strong&gt; file should look something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "nginx_cluster_one" {
  source = "/path/to/nginx-module/"
  kubernetes_host                   = "http://127.0.0.1:57560/" # replace with your minikube dashboard address
  kubernetes_client_certificate     = "/path/to/.minikube/profiles/minikube/client.crt"
  kubernetes_client_key             = "/path/to/.minikube/profiles/minikube/client.key"
  kubernetes_cluster_ca_certificate = "/path/to/.minikube/ca.crt"
  namespace                         = "nginx-cluster-one-ns"
  nginx_version                     = "1.21.1"
  replicas                          = 4
  cpu_limit                         = "450m"
  memory_limit                      = "128Mi"
  service_type                      = "ClusterIP"
}

module "nginx_cluster_two" {
  source = "/path/to/nginx-module/"
  kubernetes_host                   = "http://127.0.0.1:57560/" # replace with your minikube dashboard address
  kubernetes_client_certificate     = "/path/to/.minikube/profiles/minikube/client.crt"
  kubernetes_client_key             = "/path/to/.minikube/profiles/minikube/client.key"
  kubernetes_cluster_ca_certificate = "/path/to/.minikube/ca.crt"
  namespace                         = "nginx-cluster-two-ns"
  nginx_version                     = "1.21.1"
  replicas                          = 5
  cpu_limit                         = "450m"
  memory_limit                      = "128Mi"
  service_type                      = "ClusterIP"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the changes with the &lt;strong&gt;terraform apply&lt;/strong&gt; command; after you confirm, you'll see output like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 0 to add, 2 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.nginx_cluster_two.kubernetes_deployment.nginx: Modifying... [id=nginx-cluster-two-ns/nginx-deployment]
module.nginx_cluster_one.kubernetes_deployment.nginx: Modifying... [id=nginx-cluster-one-ns/nginx-deployment]
module.nginx_cluster_two.kubernetes_deployment.nginx: Still modifying... [id=nginx-cluster-two-ns/nginx-deployment, 10s elapsed]
module.nginx_cluster_one.kubernetes_deployment.nginx: Still modifying... [id=nginx-cluster-one-ns/nginx-deployment, 10s elapsed]
module.nginx_cluster_two.kubernetes_deployment.nginx: Modifications complete after 15s [id=nginx-cluster-two-ns/nginx-deployment]
module.nginx_cluster_one.kubernetes_deployment.nginx: Modifications complete after 15s [id=nginx-cluster-one-ns/nginx-deployment]

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh4xca9ah33k5ej6ipfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqh4xca9ah33k5ej6ipfl.png" alt="Scaling the nginx deployment" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After applying the infrastructure changes, run the &lt;strong&gt;kubectl get all -n nginx-cluster-one-ns&lt;/strong&gt; command to confirm that you now have four pods instead of two running in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get all -n nginx-cluster-one-ns
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-d55b49455-2sqwg   1/1     Running   0          33s
pod/nginx-deployment-d55b49455-n87wf   1/1     Running   0          33s
pod/nginx-deployment-d55b49455-ww6l8   1/1     Running   0          25s
pod/nginx-deployment-d55b49455-zlntp   1/1     Running   0          26s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/nginx-service   ClusterIP   10.109.36.68   &amp;lt;none&amp;gt;        80/TCP    141m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   4/4     4            4           141m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-7d85df7dbc   0         0         0       141m
replicaset.apps/nginx-deployment-d55b49455    4         4         4       128m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Kubernetes Resources with Terraform
&lt;/h2&gt;

&lt;p&gt;You now know how to set up and deploy nginx on two different Kubernetes clusters, and you can start exploring additional Kubernetes resources that help manage your infrastructure.&lt;/p&gt;

&lt;p&gt;To better manage multiple deployments of the same resource, use a &lt;a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="noopener noreferrer"&gt;ConfigMap&lt;/a&gt; to inject configuration files and environment variables into your pods for nginx to read during startup. For example, the following ConfigMap resource definition stores an &lt;strong&gt;nginx.conf&lt;/strong&gt; file that nginx can load when it's first deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_config_map" "nginx_configmap" {
  metadata {
    name      = "nginx-configmap"
    namespace = var.namespace
  }

  data = {
    "nginx.conf" = &amp;lt;&amp;lt;-EOF
      server {
        listen 80;
        server_name localhost;

        location / {
          root /usr/share/nginx/html;
          index index.html index.htm;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root /usr/share/nginx/html;
        }
      }
    EOF
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
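&lt;p&gt;Note that defining the ConfigMap alone isn't enough: the deployment also has to mount it so nginx can read the file at startup. A minimal sketch of the extra wiring inside the deployment's pod spec might look like this (the volume name and mount paths are illustrative):&lt;/p&gt;

```hcl
# Inside the deployment's template spec block:
volume {
  name = "nginx-conf"
  config_map {
    name = kubernetes_config_map.nginx_configmap.metadata[0].name
  }
}

# ...and inside the container block, mount the "nginx.conf" key:
volume_mount {
  name       = "nginx-conf"
  mount_path = "/etc/nginx/conf.d/default.conf"
  sub_path   = "nginx.conf"
}
```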



&lt;p&gt;Another important Kubernetes resource is &lt;strong&gt;kubernetes_ingress&lt;/strong&gt;, which allows you to define rules for inbound connections to reach the endpoints defined by the backend. Here's an example from &lt;a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference" rel="noopener noreferrer"&gt;Terraform's official documentation&lt;/a&gt; of how to describe this resource in your Terraform project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "kubernetes_ingress" "example_ingress" {
  metadata {
    name = "example-ingress"
  }

  spec {
    backend {
      service_name = "myapp-1"
      service_port = 8080
    }

    rule {
      http {
        path {
          backend {
            service_name = "myapp-1"
            service_port = 8080
          }

          path = "/app1/*"
        }

        path {
          backend {
            service_name = "myapp-2"
            service_port = 8080
          }

          path = "/app2/*"
        }
      }
    }

    tls {
      secret_name = "tls-secret"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While these resources are not part of the GitHub repository you cloned at the beginning of the tutorial, you can &lt;a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#argument-reference" rel="noopener noreferrer"&gt;add them&lt;/a&gt; to your module as needed.&lt;/p&gt;
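&lt;p&gt;Also note that the example above uses the older &lt;strong&gt;kubernetes_ingress&lt;/strong&gt; resource. On current provider versions (targeting Kubernetes 1.19+), the equivalent resource is &lt;strong&gt;kubernetes_ingress_v1&lt;/strong&gt;, which nests the backend differently. A sketch of the first rule rewritten for the newer resource:&lt;/p&gt;

```hcl
resource "kubernetes_ingress_v1" "example_ingress" {
  metadata {
    name = "example-ingress"
  }

  spec {
    rule {
      http {
        path {
          path      = "/app1"
          path_type = "Prefix"

          backend {
            service {
              name = "myapp-1"
              port {
                number = 8080
              }
            }
          }
        }
      }
    }
  }
}
```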

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article demonstrated some of the useful features of Terraform and how to use it to deploy nginx on Kubernetes using a module built with the official Kubernetes provider from the Terraform registry.&lt;/p&gt;

&lt;p&gt;Kubernetes can be difficult to manage, especially when you're bringing changes made in local environments to production. Because it's difficult to mirror actual production setups, teams often face the infamous "works on my machine" dilemma.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mogenius.com" rel="noopener noreferrer"&gt;mogenius&lt;/a&gt; helps tackle this challenge by enabling you to create self-service Kubernetes environments with enhanced visibility. With a unified view of both application and infrastructure components, simplified Kubernetes interaction, and guided troubleshooting, mogenius supports better local testing for developers. It also allows you to integrate seamlessly with CI/CD pipelines, configure external domains, manage SSL, set up load balancing, and handle certificate management. This approach enhances the developer experience, particularly in cloud-agnostic or multicloud environments.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Securing Applications Using Keycloak's Helm Chart</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Tue, 10 Dec 2024 15:20:24 +0000</pubDate>
      <link>https://dev.to/janlepsky/securing-applications-using-keycloaks-helm-chart-1lii</link>
      <guid>https://dev.to/janlepsky/securing-applications-using-keycloaks-helm-chart-1lii</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on the &lt;a href="https://mogenius.com/blog-posts/securing-applications-using-keycloaks-helm-chart" rel="noopener noreferrer"&gt;mogenius blog&lt;/a&gt; by Rubaiat Hossain.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.keycloak.org/" rel="noopener noreferrer"&gt;Keycloak&lt;/a&gt;, an open source identity and access management solution, offers single sign-on, user federation, and strong authentication for web applications and services.&lt;/p&gt;

&lt;p&gt;Deploying Keycloak in a Kubernetes environment using &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; has several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability and high availability&lt;/strong&gt;: As users or requests increase, Kubernetes can automatically scale Keycloak across multiple nodes to handle the load efficiently, ensuring consistent performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified deployment and management&lt;/strong&gt;: Helm handles deployment configurations through a single customizable file, drastically reducing manual configuration mistakes and making it easier to manage upgrades, rollbacks, and scaling operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with cloud-native apps&lt;/strong&gt;: Keycloak on Kubernetes integrates easily with other microservices, providing security and access management across your entire stack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tutorial guides you through installing Keycloak on Kubernetes using Helm, configuring it for secure usage, and managing users and realms through Helm.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before starting the installation process, make sure you have the following prerequisites installed on your local machine.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes cluster&lt;/strong&gt;: You need a running Kubernetes cluster that supports &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noopener noreferrer"&gt;persistent volumes&lt;/a&gt;. You can use a local cluster, like &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt; or &lt;a href="https://minikube.sigs.k8s.io/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt;, or a cloud-based solution, like &lt;a href="https://cloud.google.com/kubernetes-engine/" rel="noopener noreferrer"&gt;GKE&lt;/a&gt; or &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;EKS&lt;/a&gt;. The cluster should expose ports 80 (HTTP) and 443 (HTTPS) for external access. Persistent storage should be configured to retain Keycloak data (e.g., user credentials, sessions) across restarts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx Ingress&lt;/strong&gt;: An ingress controller, such as Nginx Ingress, must be installed and configured to route external traffic to Keycloak. You can deploy Nginx Ingress by following its &lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Certificate manager&lt;/strong&gt;: You'll need a certificate manager with a &lt;a href="https://letsencrypt.org/" rel="noopener noreferrer"&gt;Let's Encrypt&lt;/a&gt; issuer installed and configured to provide secrets for TLS configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt;: The kubectl command line interface is required to issue commands to the Kubernetes API. You can install kubectl by following the &lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="noopener noreferrer"&gt;official installation guide&lt;/a&gt;. This tutorial is tested with kubectl version 1.27.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt;: You'll use Helm to deploy Keycloak and perform management tasks, such as creating realms and managing password policies. You can install Helm by following the &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;official installation guide&lt;/a&gt;. This tutorial is tested with Helm version 3.12.1.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up Keycloak Using Helm
&lt;/h2&gt;

&lt;p&gt;To start, you'll deploy the Keycloak stack to your local Kubernetes cluster using a Helm chart.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding the Helm Repository
&lt;/h3&gt;

&lt;p&gt;First, add the &lt;a href="https://artifacthub.io/packages/helm/bitnami/keycloak" rel="noopener noreferrer"&gt;Bitnami Helm repository&lt;/a&gt; to access the Keycloak Helm chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This step allows you to fetch and install the latest version of Keycloak from the Bitnami repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring the Keycloak Values File
&lt;/h3&gt;

&lt;p&gt;To customize your Keycloak deployment, you need to configure the &lt;code&gt;values.yaml&lt;/code&gt; file for the Keycloak Helm chart. This file allows you to define environment variables, ingress settings, database configurations, and resource limits for your Keycloak deployment.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;values.yaml&lt;/code&gt; file and populate it with the following code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;keycloak&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;extraEnvVars&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;KEYCLOAK_LOG_LEVEL&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DEBUG&lt;/span&gt;
  &lt;span class="na"&gt;persistence&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;  
    &lt;span class="na"&gt;existingClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloak-pvc&lt;/span&gt;

  &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloak.example.com&lt;/span&gt;
    &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;postgresql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;postgresqlUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloak&lt;/span&gt;
  &lt;span class="na"&gt;postgresqlPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloakpassword&lt;/span&gt;
  &lt;span class="na"&gt;postgresqlDatabase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloakdb&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;nodePorts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30080&lt;/span&gt;
&lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/realms/master&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;60&lt;/span&gt;
  &lt;span class="na"&gt;timeoutSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;512Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1Gi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above configuration defines the environment variable &lt;code&gt;KEYCLOAK_LOG_LEVEL&lt;/code&gt; and sets its value to &lt;code&gt;DEBUG&lt;/code&gt;. This enables detailed logs for any issues you may encounter during deployment. You can use environment variables like this to configure Keycloak's runtime behavior and change settings like service ports or database credentials without directly modifying the application code.&lt;/p&gt;

&lt;p&gt;It then sets up Ingress settings to expose your Keycloak service to external traffic and defines how ingress will route HTTP(S) requests to your Keycloak instance. The host name parameter specifies the domain name you'll use to access Keycloak. You can use any other domain name here, but make sure to map this URL to the external IP of your Kubernetes cluster in your DNS settings or local &lt;code&gt;/etc/hosts&lt;/code&gt; file.&lt;/p&gt;
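&lt;p&gt;For a local Minikube setup, that mapping is a single line in your &lt;code&gt;/etc/hosts&lt;/code&gt; file. The IP below is a common Minikube default; check yours with &lt;code&gt;minikube ip&lt;/code&gt;:&lt;/p&gt;

```
# /etc/hosts -- map the ingress hostname to the cluster IP
192.168.49.2  keycloak.example.com
```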

&lt;p&gt;The following part of the code exposes your Keycloak instance to external traffic through the domain &lt;code&gt;keycloak.example.com&lt;/code&gt;, with Nginx as the ingress controller managing the traffic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloak.example.com&lt;/span&gt;
  &lt;span class="na"&gt;ingressClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keycloak stores its user data, realms, and configurations in a PostgreSQL database. It uses the &lt;code&gt;postgresql&lt;/code&gt; block in the &lt;code&gt;values.yaml&lt;/code&gt; file to set up this database and its credentials. The following part of the configuration enables the default database and sets up its credentials; you can substitute your own values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;postgresql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;postgresqlUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloak&lt;/span&gt;
  &lt;span class="na"&gt;postgresqlPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloakpassword&lt;/span&gt;
  &lt;span class="na"&gt;postgresqlDatabase&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;keycloakdb&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During the Helm chart installation, the &lt;code&gt;enabled: true&lt;/code&gt; parameter provisions this PostgreSQL database. You can set it to &lt;code&gt;false&lt;/code&gt; if you have an external database or want to use a different service; in that case, you'll need to provide the connection details.&lt;/p&gt;
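&lt;p&gt;If you do opt for an external database, the Bitnami chart expects the connection details in an &lt;code&gt;externalDatabase&lt;/code&gt; block. A sketch with placeholder values:&lt;/p&gt;

```yaml
# values.yaml -- use an external PostgreSQL instead of the bundled one.
# The host and credentials below are placeholders.
postgresql:
  enabled: false
externalDatabase:
  host: postgres.example.internal
  port: 5432
  user: keycloak
  password: keycloakpassword
  database: keycloakdb
```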

&lt;p&gt;In the &lt;code&gt;resources&lt;/code&gt; section of the code, you set resource requests and limits to control the CPU and memory usage of your Keycloak deployment. This ensures that Keycloak has enough resources to run while preventing it from consuming too much of your cluster's resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;512Mi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1Gi"&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you access Keycloak for the first time, you'll need to log in to the admin console with the administrator username and password. You can set the admin password when deploying the Keycloak Helm chart or let Keycloak autogenerate one for you.&lt;/p&gt;

&lt;p&gt;For this tutorial, use the autogenerated password to get started quickly.&lt;/p&gt;
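If you'd rather pin the admin credentials than rely on autogeneration, the Bitnami chart exposes them under `auth` — a hedged sketch (key names assumed from the chart's values schema; the password is a placeholder):

```yaml
auth:
  adminUser: user
  adminPassword: changeme-strong-password   # placeholder; prefer referencing an existing Secret in production
```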

&lt;h3&gt;
  
  
  Deploying Keycloak Using Helm
&lt;/h3&gt;

&lt;p&gt;With the &lt;code&gt;values.yaml&lt;/code&gt; file configured, you can now deploy Keycloak using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;keycloak bitnami/keycloak &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs the Keycloak chart using the &lt;code&gt;values.yaml&lt;/code&gt; file described above. It deploys Keycloak on your local Kubernetes cluster and creates all the necessary resources, including services, StatefulSets, and a PostgreSQL database.&lt;/p&gt;

&lt;p&gt;To verify that Keycloak is running, check the status of the pods using the following kubectl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure all Keycloak-related pods report their containers as ready (for example, &lt;code&gt;1/1&lt;/code&gt; in the &lt;code&gt;READY&lt;/code&gt; column) and are in the &lt;code&gt;Running&lt;/code&gt; state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Accessing the Keycloak Admin Console
&lt;/h3&gt;

&lt;p&gt;Once Keycloak is successfully installed on your cluster, you need the URL of the deployed Keycloak instance so you can log in to the admin panel.&lt;/p&gt;

&lt;p&gt;You can access Keycloak using the domain name set in the &lt;code&gt;values.yaml&lt;/code&gt; file. However, recent versions of the Keycloak Helm chart enforce strict SSL checks that prevent logging in to the admin panel over plain HTTP at that URL.&lt;/p&gt;

&lt;p&gt;You can work around this by accessing the Keycloak service through a cluster node's IP address and NodePort instead. Use the following commands to construct that URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;HTTP_NODE_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.spec.ports[?(@.name=='http')].nodePort}"&lt;/span&gt; services keycloak&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NODE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].status.addresses[0].address}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_IP&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HTTP_NODE_PORT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the URL provided in the output to access the Keycloak admin panel. The default admin username is &lt;code&gt;user&lt;/code&gt;. To retrieve the autogenerated password, use the command below, which runs &lt;code&gt;printenv&lt;/code&gt; inside the main Keycloak pod and filters the output with &lt;code&gt;grep&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; keycloak-0 &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;printenv&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;KEYCLOAK_ADMIN_PASSWORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use these credentials to log in to the master realm, which is the default admin's realm from where you can create additional realms, security policies, users, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initial Security Measures
&lt;/h2&gt;

&lt;p&gt;When you're logged in to the master realm as the admin user, you can set up some initial security measures from the admin GUI. Note that these settings are configured per realm, so apply them to the master realm and repeat them for each realm you create.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enable SSL
&lt;/h3&gt;

&lt;p&gt;Enabling SSL encrypts the data transferred between clients, such as browsers and applications, and the Keycloak server. This prevents attackers from intercepting sensitive data like usernames, passwords, and access tokens when users access Keycloak over the public internet.&lt;/p&gt;

&lt;p&gt;Keycloak SSL configuration can be enabled by navigating to &lt;code&gt;Configure&lt;/code&gt; &amp;gt; &lt;code&gt;Realm settings&lt;/code&gt; &amp;gt; &lt;code&gt;General&lt;/code&gt;. The &lt;code&gt;Require SSL&lt;/code&gt; dropdown shows three options allowing you to configure SSL based on where the requests are coming from.&lt;/p&gt;

&lt;p&gt;Set the &lt;code&gt;Require SSL&lt;/code&gt; option to &lt;code&gt;All requests&lt;/code&gt; to enforce HTTPS for all connections and protect sensitive data. Use &lt;code&gt;External requests&lt;/code&gt; if only external IPs need encryption while allowing internal access without HTTPS; avoid using &lt;code&gt;None&lt;/code&gt; in production for security.&lt;/p&gt;
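The same setting can also be scripted with Keycloak's bundled `kcadm.sh` admin client instead of the GUI — a sketch under assumptions (the binary path is the one typically used in Bitnami images, the pod name matches this deployment, and `$ADMIN_PASSWORD` holds the password retrieved earlier):

```shell
# Authenticate the admin CLI against the master realm...
kubectl exec -it keycloak-0 -- /opt/bitnami/keycloak/bin/kcadm.sh \
  config credentials --server http://localhost:8080 --realm master \
  --user user --password "$ADMIN_PASSWORD"
# ...then require SSL for all requests (valid values: all, external, none)
kubectl exec -it keycloak-0 -- /opt/bitnami/keycloak/bin/kcadm.sh \
  update realms/master -s sslRequired=all
```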

&lt;h3&gt;
  
  
  Configure a Strong Password Policy
&lt;/h3&gt;

&lt;p&gt;A strong password policy safeguards access to your business resources.&lt;/p&gt;

&lt;p&gt;The Keycloak GUI makes it very easy to configure password policies. Navigate to &lt;code&gt;Configure&lt;/code&gt; &amp;gt; &lt;code&gt;Authentication&lt;/code&gt; &amp;gt; &lt;code&gt;Policies&lt;/code&gt; &amp;gt; &lt;code&gt;Password policy&lt;/code&gt; and click the &lt;code&gt;Add policy&lt;/code&gt; dropdown action.&lt;/p&gt;

&lt;p&gt;Keycloak password policy allows you to choose different attributes for user passwords, such as the minimum length, number of special characters and digits required, password expiry, and so on.&lt;/p&gt;
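These attributes combine into Keycloak's password policy expression syntax. For example, a policy requiring at least 12 characters with an upper-case letter, a digit, and a special character, and forbidding the username as password, reads as follows (the thresholds are illustrative, not recommendations):

```
length(12) and upperCase(1) and digits(1) and specialChars(1) and notUsername
```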

&lt;h3&gt;
  
  
  Enable Account Lockout After Failed Login Attempts
&lt;/h3&gt;

&lt;p&gt;Setting up account lockout after a certain number of failed login attempts helps defend against brute-force login attempts by malicious bots and mitigates abusive load on the identity server.&lt;/p&gt;

&lt;p&gt;You can enable account lockout in Keycloak by navigating to &lt;code&gt;Configure&lt;/code&gt; &amp;gt; &lt;code&gt;Realm settings&lt;/code&gt; &amp;gt; &lt;code&gt;Security defenses&lt;/code&gt; &amp;gt; &lt;code&gt;Brute force detection&lt;/code&gt; and setting the desired mode from the action menu.&lt;/p&gt;
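These toggles correspond to fields of Keycloak's realm representation, which is useful to know if you later manage realms declaratively. An illustrative excerpt (the thresholds are examples only):

```json
{
  "bruteForceProtected": true,
  "failureFactor": 5,
  "waitIncrementSeconds": 60,
  "maxFailureWaitSeconds": 900
}
```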

&lt;h2&gt;
  
  
  Creating and Managing Realms with Helm
&lt;/h2&gt;

&lt;p&gt;In Keycloak, a realm is a way to organize and isolate resources. It's a space where you manage users, roles, policies, and other settings specific to an application. Each realm is entirely isolated: users and clients in one realm cannot access the resources of another, which lets you separate authentication and authorization settings for distinct groups of users and applications.&lt;/p&gt;

&lt;p&gt;Once in the master realm's admin console, you can create additional realms from the GUI.&lt;/p&gt;

&lt;p&gt;Creating realms through the GUI, however, can be repetitive and time-consuming, especially in larger deployments.&lt;/p&gt;

&lt;p&gt;Helm simplifies this process by allowing you to define realms and their settings in its &lt;code&gt;values.yaml&lt;/code&gt; file using the "keycloakConfigCli" utility. This automates the deployment process and makes it reproducible and easy to manage. You can use the &lt;code&gt;values.yaml&lt;/code&gt; file to define realms, users, and other Keycloak settings.&lt;/p&gt;

&lt;p&gt;To create a new realm called "example-realm", add this configuration to your &lt;code&gt;values.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;keycloakConfigCli&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;example-realm.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"realm": "example-realm",&lt;/span&gt;
        &lt;span class="s"&gt;"enabled": true,&lt;/span&gt;
        &lt;span class="s"&gt;"registrationAllowed": true&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the changes directly to your Keycloak installation using the following helm command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; keycloak bitnami/keycloak &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you log in to the Keycloak admin console again and navigate to the realm dropdown at the top left, you'll see the newly created example-realm.&lt;/p&gt;
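You can also verify the realm without the GUI by querying its OpenID Connect discovery endpoint — assuming the `NODE_IP` and `HTTP_NODE_PORT` variables exported earlier, and noting that recent Keycloak versions serve realms under `/realms/` (older versions prefix `/auth`):

```shell
curl -s "http://${NODE_IP}:${HTTP_NODE_PORT}/realms/example-realm/.well-known/openid-configuration"
```

A JSON document listing the realm's endpoints confirms that example-realm is live.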

&lt;h2&gt;
  
  
  User Management Basics
&lt;/h2&gt;

&lt;p&gt;User management is one of Keycloak's core features. You can either create a realm's users through the Keycloak GUI or use Helm and "keycloakConfigCli" to automatically create users, groups, and roles in your realms during deployment.&lt;/p&gt;

&lt;p&gt;By defining user configurations in the &lt;code&gt;values.yaml&lt;/code&gt; file, you ensure that Keycloak automatically sets up your users and groups, making the process consistent and repeatable across environments.&lt;/p&gt;

&lt;p&gt;For example, if you update the &lt;code&gt;keycloakConfigCli&lt;/code&gt; section of your &lt;code&gt;values.yaml&lt;/code&gt; with the following code, it will create the "example-realm" alongside a user called "basicuser" and a group named "basicgroup" inside that realm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;keycloakConfigCli&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;example-realm.json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"realm": "example-realm",&lt;/span&gt;
        &lt;span class="s"&gt;"enabled": true,&lt;/span&gt;
        &lt;span class="s"&gt;"registrationAllowed": true,&lt;/span&gt;
        &lt;span class="s"&gt;"users": [&lt;/span&gt;
          &lt;span class="s"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"username": "basicuser",&lt;/span&gt;
            &lt;span class="s"&gt;"email": "basicuser@example.com",&lt;/span&gt;
            &lt;span class="s"&gt;"enabled": true,&lt;/span&gt;
            &lt;span class="s"&gt;"firstName": "Basic",&lt;/span&gt;
            &lt;span class="s"&gt;"lastName": "User",&lt;/span&gt;
            &lt;span class="s"&gt;"credentials": [&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
                &lt;span class="s"&gt;"type": "password",&lt;/span&gt;
                &lt;span class="s"&gt;"value": "basicpassword"&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;
            &lt;span class="s"&gt;]&lt;/span&gt;
          &lt;span class="s"&gt;}&lt;/span&gt;
        &lt;span class="s"&gt;],&lt;/span&gt;
        &lt;span class="s"&gt;"groups": [&lt;/span&gt;
          &lt;span class="s"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"name": "basicgroup"&lt;/span&gt;
          &lt;span class="s"&gt;}&lt;/span&gt;
        &lt;span class="s"&gt;]&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the Keycloak installation to see the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; keycloak bitnami/keycloak &lt;span class="nt"&gt;-f&lt;/span&gt; values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating users with Helm provides a streamlined, automated approach to user and group management.&lt;/p&gt;
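To confirm the declared user can actually authenticate, you can request a token through the realm's token endpoint using Keycloak's built-in public `admin-cli` client — same URL assumptions as in the section on accessing the admin console:

```shell
curl -s "http://${NODE_IP}:${HTTP_NODE_PORT}/realms/example-realm/protocol/openid-connect/token" \
  -d "client_id=admin-cli" \
  -d "grant_type=password" \
  -d "username=basicuser" \
  -d "password=basicpassword"
```

A response containing an `access_token` means the user was created and the credentials work.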

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congrats! You now know how to deploy and manage Keycloak on Kubernetes using Helm. You've learned how to configure Keycloak's values file, handle post-installation security measures, and automate realm and user management using the keycloakConfigCli utility and Helm.&lt;/p&gt;

&lt;p&gt;While Keycloak's Helm chart helps you efficiently deploy and manage your identity and access management (IAM) needs, managing Kubernetes applications can still be challenging, especially at scale. mogenius enables you to take care of the entire lifecycle of your Kubernetes applications—from deployment to scaling and maintenance—so you can focus on development rather than infrastructure management.&lt;/p&gt;

&lt;p&gt;Whether you use Keycloak or any other large-scale distributed application running on Kubernetes, mogenius abstracts Kubernetes complexity into easy-to-use workspaces, simplifies monitoring and maintenance, and provides seamless CI/CD pipelines to make application deployment a breeze. This reduces the need for DevOps support and increases productivity through developer self-service.&lt;/p&gt;




</description>
      <category>kubernetes</category>
      <category>keycloak</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Introduction to Helm for Kubernetes</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Tue, 19 Nov 2024 12:32:59 +0000</pubDate>
      <link>https://dev.to/janlepsky/introduction-to-helm-for-kubernetes-5427</link>
      <guid>https://dev.to/janlepsky/introduction-to-helm-for-kubernetes-5427</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was originally posted on the &lt;a href="https://mogenius.com/blog-posts/introduction-to-helm-for-kubernetes" rel="noopener noreferrer"&gt;mogenius blog&lt;/a&gt; by Cameron Pavey.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is highly flexible in how it allows you to compose your applications, but this flexibility can mean that building and deploying applications with multiple components can be quite complex. &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;, a self-described “package manager for Kubernetes,” alleviates some of this complexity.‍&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn all you need to know to get started with Helm, including its core components and features and what the Helm workflow looks like. Finally, you’ll see how Helm streamlines Kubernetes DevOps to equip your team for better efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Helm?
&lt;/h2&gt;

&lt;p&gt;Helm acts as a layer of abstraction on top of your typical Kubernetes workflow. Rather than manually applying manifests when you want to deploy or modify an application, you can use the Helm CLI client to create, find, install, modify, and uninstall charts. In Helm’s terminology, a &lt;a href="https://helm.sh/docs/topics/charts/" rel="noopener noreferrer"&gt;chart&lt;/a&gt; is a collection of files that describes a Kubernetes application and all its related resources; an installed instance of a chart is called a release. The application that a chart describes can range from a simple single-pod application to a more complex application with multiple interconnected pods.&lt;/p&gt;

&lt;p&gt;Charts are declared as files in a directory tree that can either be used as-is or subsequently packaged into versioned &lt;code&gt;.tgz&lt;/code&gt; archives, ready for distribution. In a parent directory, a chart consists of several key files and directories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Chart.yaml&lt;/code&gt;: A YAML file containing information about the chart.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;values.yaml&lt;/code&gt;: The default configuration for the chart.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;charts/&lt;/code&gt;: A directory containing any charts upon which this chart depends.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;crds/&lt;/code&gt;: A directory containing any custom resource definitions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;templates/&lt;/code&gt;: A directory of templates that will be combined with values to generate valid Kubernetes manifests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Noteworthy Helm Features
&lt;/h2&gt;

&lt;p&gt;Helm has several features that help you manage your Kubernetes applications more efficiently.‍&lt;/p&gt;

&lt;h3&gt;
  
  
  Templates and Value Substitution
&lt;/h3&gt;

&lt;p&gt;Often, you will need to create dynamic templates that can be configured depending on how the application will be used. Helm handles this by allowing you to define &lt;em&gt;values&lt;/em&gt; in your &lt;code&gt;values.yaml&lt;/code&gt; file, which can then be used in your &lt;em&gt;templates&lt;/em&gt; through a special templating syntax.‍&lt;/p&gt;

&lt;p&gt;For instance, consider a &lt;code&gt;values.yaml&lt;/code&gt; file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;color: red
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You could use this value in a template like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data: 
  color: {{ .Values.color }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using templated values this way means you can control this value without directly modifying the manifest template.&lt;/p&gt;
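You can watch this substitution happen without touching a cluster by rendering the chart locally with `helm template` (the `./my-chart` path is illustrative — point it at your own chart directory):

```shell
# Render manifests to stdout, substituting values from values.yaml
helm template my-chart ./my-chart
# Override a value at render time; the ConfigMap's color becomes blue
helm template my-chart ./my-chart --set color=blue
```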

&lt;h3&gt;
  
  
  Release Management and Versioning
&lt;/h3&gt;

&lt;p&gt;As a package manager, one of Helm’s responsibilities is the &lt;a href="https://helm.sh/docs/helm/helm_package/" rel="noopener noreferrer"&gt;release and version management&lt;/a&gt; of packages. While you will typically use the latest version when installing a chart, Helm package archives are versioned, so it is possible to specify a particular version of a package, much like other package managers you may be familiar with.‍&lt;/p&gt;

&lt;p&gt;This means that if you maintain Helm charts, you can make incremental updates and publish new versioned archives as needed, while the old ones can remain available to have historical records or for legacy support.&lt;/p&gt;
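Cutting a new version is a matter of bumping `version` in `Chart.yaml` and repackaging the chart directory — a sketch with an illustrative path and version number:

```shell
# Produces my-chart-1.2.0.tgz, ready to publish to a chart repository
helm package ./my-chart --version 1.2.0
```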

&lt;h3&gt;
  
  
  Dependency Management
&lt;/h3&gt;

&lt;p&gt;One of Helm’s benefits is its ability to help you install complex applications without manually setting everything up.‍&lt;/p&gt;

&lt;p&gt;Complex applications often require a number of dependencies. If you were to set up an application from scratch, you would need to declare and manage the dependencies yourself.‍&lt;/p&gt;

&lt;p&gt;Helm, however, has built-in dependency management: a chart declares its dependencies, and these will be set up and configured when the release is created.&lt;/p&gt;
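Dependencies are declared in `Chart.yaml` and downloaded into the chart's `charts/` directory — a sketch (the PostgreSQL chart and version range are illustrative):

```yaml
# Chart.yaml (excerpt)
dependencies:
  - name: postgresql
    version: "15.x.x"                              # any 15.x release of the dependency chart
    repository: https://charts.bitnami.com/bitnami
```

Run `helm dependency update my-chart/` to fetch the declared charts into `my-chart/charts/`.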

&lt;h3&gt;
  
  
  Rollback Capabilities
&lt;/h3&gt;

&lt;p&gt;If something goes wrong during a release, Helm makes it easy to revert to your previous working configuration through rollbacks.&lt;/p&gt;

&lt;p&gt;Each time you install, upgrade, or roll back a release, your release’s revision number is incremented by 1. You can use this revision number to revert to a previous version of your release like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# helm rollback [RELEASE] [REVISION]
helm rollback my-release 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To determine which revision to roll back to, you can view the revision history of a given release:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm history my-release  
REVISION        UPDATED                         STATUS          CHART                   APP VERSION     DESCRIPTION  
1               Tue Sep 10 09:26:54 2024        deployed        wordpress-23.1.12       6.6.1           Install complete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How Does Helm Work?
&lt;/h2&gt;

&lt;p&gt;Helm offers many options, and trying to understand them all at once can be overwhelming. However, you don’t need that to get started. You only have to know how Helm’s core features can be used together to form a workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chart creation and packaging: If you’re packaging your own application, the first step is to create the necessary files. This can be done via &lt;code&gt;helm create NAME&lt;/code&gt;, which will generate some boilerplate files for you to modify. Once your application is configured, you can create a versioned archive if you want to distribute it.&lt;/li&gt;
&lt;li&gt;Finding existing charts: Rather than packaging your own applications, you can also use Helm to find and download existing charts of other applications. You can use these charts to install and configure an application and its dependencies.&lt;/li&gt;
&lt;li&gt;Template rendering process: As part of the installation process, Helm combines your chart, templates, and values to &lt;a href="https://helm.sh/docs/helm/helm_template/" rel="noopener noreferrer"&gt;&lt;em&gt;render&lt;/em&gt; your templates&lt;/a&gt; as Kubernetes manifests, ready to be applied to your cluster.&lt;/li&gt;
&lt;li&gt;Interaction with the Kubernetes API: With the rendered manifests on hand, Helm interacts with the Kubernetes API server to apply the manifests and create or update any resources necessary for your release.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with Helm
&lt;/h2&gt;

&lt;p&gt;There are several ways to install Helm, including binary release, install scripts, and package managers. &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Refer to the Helm documentation&lt;/a&gt; to select the best method for your needs.‍&lt;/p&gt;

&lt;p&gt;Once you have Helm installed and a Kubernetes cluster to use it with, you can try out some common workflows.‍&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Helm Commands
&lt;/h3&gt;

&lt;p&gt;To use Helm, you’ll issue commands via the CLI client. Helm has quite a few commands, but the following are essential ones you should be aware of:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install a chart
helm install &amp;lt;name&amp;gt; &amp;lt;chart&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install a chart while setting values on the command line (separate multiple values with commas)
helm install &amp;lt;name&amp;gt; &amp;lt;chart&amp;gt; --set key1=val1,key2=val2
# Run a test installation to validate chart
helm install &amp;lt;name&amp;gt; &amp;lt;chart&amp;gt; --dry-run --debug
# Upgrade a release
helm upgrade &amp;lt;release&amp;gt; &amp;lt;chart&amp;gt;
# Upgrade a release, or install it if a corresponding release does not already exist
helm upgrade &amp;lt;release&amp;gt; &amp;lt;chart&amp;gt; --install
# Add a repository from the internet:
helm repo add &amp;lt;repo-name&amp;gt; &amp;lt;url&amp;gt;
# Update information of available charts locally from chart repositories
helm repo update
# Download a chart as a versioned archive
helm pull &amp;lt;chart&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The official documentation has a &lt;a href="https://helm.sh/docs/intro/cheatsheet/" rel="noopener noreferrer"&gt;handy cheat sheet&lt;/a&gt; with more common commands and their purposes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Custom Charts
&lt;/h3&gt;

&lt;p&gt;If you want to use Helm for packaging and distributing your own applications, you will need to create a custom chart. To do this, run &lt;code&gt;helm create my-chart&lt;/code&gt;, which creates a new directory containing generated boilerplate files. The following are key files to be mindful of here:‍&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Chart.yaml&lt;/code&gt;: This file contains metadata about your chart.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;templates/&lt;/code&gt;: This directory contains manifest templates for each resource your chart will create. This is where you can define any Kubernetes resources you want your custom chart to include.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;values.yaml&lt;/code&gt;: This file contains the configuration values that many of the default templates will use. Anything that should be configurable in your chart should be defined here.‍&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you’ve modified your chart to suit your needs, you can install it directly from your &lt;code&gt;my-chart/&lt;/code&gt; directory by running &lt;code&gt;helm install my-chart ./my-chart&lt;/code&gt;. Helm will run through the workflow above to render your templates before applying them to the Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finding and Using Existing Charts
&lt;/h3&gt;

&lt;p&gt;You can use Helm to search for existing charts, either by searching &lt;a href="https://helm.sh/docs/helm/helm_search_hub/" rel="noopener noreferrer"&gt;Artifact Hub&lt;/a&gt; or a &lt;a href="https://helm.sh/docs/helm/helm_repo/" rel="noopener noreferrer"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For instance, if you wanted to install &lt;code&gt;nginx&lt;/code&gt;, you could search for it on Artifact Hub using &lt;code&gt;helm search hub nginx&lt;/code&gt; or by accessing the hub &lt;a href="https://artifacthub.io/packages/helm/bitnami/nginx" rel="noopener noreferrer"&gt;via the web&lt;/a&gt;. Viewing the entry on the web shows that you can install the chart with its default settings by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-nginx oci://registry-1.docker.io/bitnamicharts/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to view or modify the chart before installing it, use the &lt;code&gt;helm pull&lt;/code&gt; command like so:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm pull oci://registry-1.docker.io/bitnamicharts/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command downloads a versioned chart archive that you can view and modify as needed. You can also view the values supported by the chart to reveal any options you can modify and configure before installing:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm show values oci://registry-1.docker.io/bitnamicharts/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Specify any values you would like to change by defining them in a YAML file like &lt;code&gt;my-values.yaml&lt;/code&gt; and pass it as an option when you install the chart, like so:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-nginx oci://registry-1.docker.io/bitnamicharts/nginx -f my-values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploying a Sample Application
&lt;/h3&gt;

&lt;p&gt;Let’s now see how all of the above comes together when you install, upgrade, and roll back a simple application. A good example application to use for understanding Helm’s workflows is &lt;a href="https://artifacthub.io/packages/helm/bitnami/nginx?modal=install" rel="noopener noreferrer"&gt;nginx&lt;/a&gt;; it needs no configuration, and it’s easy to tell whether it is working.‍&lt;/p&gt;

&lt;p&gt;As mentioned, you can install the nginx chart directly from the artifact hub linked above, but adding the source repository and installing charts from there is closer to how you are likely to use Helm for internal application development. It is good to be familiar with both approaches, but the repository approach will be used here.‍&lt;/p&gt;

&lt;p&gt;To add the Bitnami nginx repo, run this command:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can search your repos for nginx charts like so:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm search repo nginx
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION  
bitnami/nginx                           18.1.11         1.27.1          NGINX Open Source is a web server that can be a...  
bitnami/nginx-ingress-controller        11.4.1          1.11.2          NGINX Ingress Controller is an Ingress controll...  
bitnami/nginx-intel                     2.1.15          0.4.9           DEPRECATED NGINX Open Source for Intel is a lig...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install a basic nginx release, you can use one of the charts listed by your search:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-nginx bitnami/nginx --version 18.1.11 --namespace my-nginx --create-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will install version 18.1.11 of the &lt;em&gt;chart&lt;/em&gt;, which corresponds to version 1.27.1 of nginx. The &lt;code&gt;--namespace my-nginx&lt;/code&gt; specifies the Kubernetes namespace into which to install the chart, and the &lt;code&gt;--create-namespace&lt;/code&gt; option creates the namespace if it does not already exist. You can omit these options if you prefer, but it's good to know them since they give you more control over how your applications are installed.‍&lt;/p&gt;

&lt;p&gt;Once this command is done running, you can confirm whether the release has been successfully installed via &lt;code&gt;kubectl&lt;/code&gt;:‍&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get all -n my-nginx
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-546f4bccc7-pvmt2   1/1     Running   0          11m

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes   ClusterIP      10.152.183.1    &amp;lt;none&amp;gt;        443/TCP                      2d11h
service/my-nginx     LoadBalancer   10.152.183.65   &amp;lt;pending&amp;gt;     80:32399/TCP,443:31390/TCP   13m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           13m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-546f4bccc7   1         1         1       13m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Several resources have been created to support the release, even for this basic application. This is one of Helm’s key advantages: it manages all the resources and dependencies for you.&lt;/p&gt;
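You can also ask Helm itself about what it installed; `helm list` shows the releases in a namespace along with their status, revision, chart, and app version:

```shell
helm list --namespace my-nginx
```

This is often a quicker sanity check than `kubectl get all` when all you want to know is whether the release landed.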

&lt;p&gt;Suppose you want to change some of the values in the nginx chart. You can view a list of all the values available by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm show values bitnami/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
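If you plan to change more than a value or two, a common pattern is to save the chart's defaults to a file, edit that file, and pass it back with `-f`. A sketch of the workflow (the filename `my-values.yaml` is just an example):

```shell
# Save the chart's default values as an editable starting point
helm show values bitnami/nginx > my-values.yaml

# After editing, apply your overrides on install or upgrade
helm upgrade --install my-nginx bitnami/nginx -f my-values.yaml --namespace my-nginx
```

A checked-in values file also gives you a reviewable record of every override, which `--set` flags on the command line do not.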



&lt;p&gt;There are many values you can configure, and one of them is &lt;code&gt;replicaCount&lt;/code&gt;, which determines how many pods will run for this release. You can modify your existing release to set new values. For good measure, you can also use a different version of the chart (in this case, an earlier version if you were already on the latest):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm upgrade my-nginx bitnami/nginx --version 18.1.0 --set replicaCount=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this command is done, you can view the impact with &lt;code&gt;kubectl&lt;/code&gt; again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get all -n my-nginx
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-7f9647978d-5j8rn   1/1     Running   0          26s
pod/my-nginx-7f9647978d-8hdqd   1/1     Running   0          15s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes   ClusterIP      10.152.183.1    &amp;lt;none&amp;gt;        443/TCP                      2d11h
service/my-nginx     LoadBalancer   10.152.183.65   &amp;lt;pending&amp;gt;     80:32399/TCP,443:31390/TCP   18m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   2/2     2            2           18m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-546f4bccc7   0         0         0       18m
replicaset.apps/my-nginx-7f9647978d   2         2         2       18m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, you can view the release history in Helm, which lists each revision your release has gone through:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ helm history my-nginx
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION  
1               Tue Sep 10 22:10:05 2024        superseded      nginx-18.1.11   1.27.1          Install complete  
2               Tue Sep 10 22:11:02 2024        deployed        nginx-18.1.0    1.27.0          Upgrade complete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to undo some changes you’ve made to a release, it’s as simple as running the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm rollback my-nginx 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command reverts the release to the specified revision. You can confirm the revert through &lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;helm history&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get all -n my-nginx
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-546f4bccc7-5bm62   1/1     Running   0          56s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes   ClusterIP      10.152.183.1    &amp;lt;none&amp;gt;        443/TCP                      2d11h
service/my-nginx     LoadBalancer   10.152.183.65   &amp;lt;pending&amp;gt;     80:32399/TCP,443:31390/TCP   23m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           23m

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-546f4bccc7   1         1         1       23m
replicaset.apps/my-nginx-7f9647978d   0         0         0       22m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
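When you're done experimenting, removing the release and everything Helm created for it is a single command (add `--keep-history` if you want the revision history retained):

```shell
helm uninstall my-nginx --namespace my-nginx
```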



&lt;h2&gt;
  
  
  How Helm Streamlines Kubernetes DevOps
&lt;/h2&gt;

&lt;p&gt;As you’ve seen, Helm offers powerful capabilities over your standard Kubernetes toolkit, making deploying and managing Kubernetes applications simpler.&lt;/p&gt;

&lt;p&gt;When used effectively, Helm can streamline both the development and operation of your applications. Helm offers the following benefits for developers of Kubernetes applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Packaging: You can package an entire Kubernetes application into a single unit rather than a loose collection of manifests.&lt;/li&gt;
&lt;li&gt;Version management: Helm provides easy, built-in version management functionality for your charts.&lt;/li&gt;
&lt;li&gt;Templates: The combination of templates and values gives you an effective way to customize your application for deployment in different environments.&lt;/li&gt;
&lt;li&gt;Dependencies: Helm streamlines the complexity of managing dependencies between different components of your application.&lt;/li&gt;
&lt;li&gt;Reuse: You can create reusable charts for common application components and patterns you want to share between multiple applications.&lt;/li&gt;
&lt;/ul&gt;
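To make the templates-and-values point concrete, a chart pairs templates that reference values with a values file that supplies defaults. A minimal hypothetical sketch, not taken from any published chart:

```yaml
# templates/deployment.yaml (excerpt) -- placeholders are filled at install time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
```

A `values.yaml` containing `replicaCount: 1` would supply the default, which each environment can then override with `--set` or `-f` exactly as shown earlier.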

&lt;p&gt;Once applications have been developed and are ready to be deployed and operated, Helm also offers advantages for operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistency: Helm gives you a consistent, standard way to manage the deployment of your applications across many environments. It also provides a consistent mechanism for applying things like security best practices, environment configuration, and settings across deployments.&lt;/li&gt;
&lt;li&gt;Upgrades and rollbacks: Helm’s upgrade and rollback functionality makes it easy and low-risk to upgrade applications, change values, and revert changes if something goes wrong. This reduces the effort and complexity involved in modifying running applications.&lt;/li&gt;
&lt;li&gt;Management: Helm’s dependency management means applications are essentially plug-and-play, without needing to ensure all your dependencies are configured and ready to go in advance. This greatly simplifies the process of deploying new application instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Helm, a powerful package manager for Kubernetes applications, provides new functionality for managing larger-scale application deployments on top of the existing Kubernetes API server and &lt;code&gt;kubectl&lt;/code&gt; tooling. Features like templates with value substitution, seamless upgrades and rollbacks, fully managed dependencies, and versioned package archives fill the gaps needed to take Kubernetes from a container orchestrator to an out-of-the-box application platform.&lt;/p&gt;

&lt;p&gt;Even though tools like Kubernetes and Helm provide powerful capabilities for scalable cloud-native operations, complexity can be a burden for development teams. &lt;a href="https://mogenius.com" rel="noopener noreferrer"&gt;mogenius&lt;/a&gt;, as a self-service solution, bridges the gap and allows developers to safely deploy and manage applications without deep Kubernetes expertise. The internal developer platform comes with a built-in Helm manager that enables teams to manage Helm repositories, charts, and releases on clusters with a comprehensive interface. You can even &lt;a href="https://app.mogenius.com/user/registration" rel="noopener noreferrer"&gt;sign up for free&lt;/a&gt; and deploy your first Helm chart effortlessly.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>devtools</category>
    </item>
    <item>
      <title>How Kubernetes YAML manifests are dragging down developer productivity</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Tue, 01 Oct 2024 08:40:48 +0000</pubDate>
      <link>https://dev.to/janlepsky/how-kubernetes-yaml-manifests-are-dragging-down-developer-productivity-268</link>
      <guid>https://dev.to/janlepsky/how-kubernetes-yaml-manifests-are-dragging-down-developer-productivity-268</guid>
      <description>&lt;p&gt;In the previous blog post, we explored &lt;a href="https://mogenius.com/blog-posts/best-practices-for-writing-kubernetes-yaml-manifests" rel="noopener noreferrer"&gt;five best practices for writing Kubernetes YAML manifests&lt;/a&gt;. While YAML’s human-readable format and flexibility make it a popular choice for writing configuration files in the Kubernetes ecosystem, it presents significant challenges when used as an interface for developers and DevOps professionals.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll first discuss some common issues associated with managing Kubernetes YAML manifests and their impact on developer productivity. Next, we’ll explore how Kubernetes self-service platforms help solve these challenges and why you should adopt them to streamline your workflow.&lt;/p&gt;

&lt;p&gt;Let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in managing Kubernetes YAML manifests
&lt;/h2&gt;

&lt;p&gt;Below are some of the key challenges developers face when dealing with Kubernetes YAML manifests:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complexity and sensitivity of YAML syntax&lt;/strong&gt;&lt;br&gt;
YAML relies heavily on indentation to define the structure and hierarchy of data. A minor mistake, such as mixing spaces and tabs or misplacing an indent, can lead to parsing errors. This sensitivity to whitespace can be a significant source of frustration, as even a small error can cause a deployment to fail or behave unexpectedly.&lt;/p&gt;

&lt;p&gt;Moreover, YAML's syntax includes various features like anchors, aliases, and complex data types, which can add to the complexity. You need to be familiar with these features to use YAML effectively, which can increase the cognitive load and learning curve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Version inconsistencies&lt;/strong&gt;&lt;br&gt;
YAML has undergone several revisions, each introducing new features and subtle changes in behavior. This means that YAML documents can be parsed differently depending on the version being used. You must be aware of the specific YAML version you are working with to avoid compatibility issues, which adds another layer of complexity to managing YAML manifests. This requirement can be particularly challenging in environments where multiple tools or libraries with different YAML version dependencies are used.&lt;/p&gt;

&lt;p&gt;Here are just a few examples of how version inconsistencies can impact YAML management:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Feature Deprecation:&lt;/strong&gt; Older versions of YAML might support features that have been deprecated or changed in newer versions. For instance, if a YAML file uses an older syntax that has been replaced or removed in a newer version, this can lead to parsing errors or unexpected behavior. For example, YAML 1.1 has certain features like implicit typing that were revised in YAML 1.2.&lt;br&gt;
&lt;strong&gt;2. Syntax Variations:&lt;/strong&gt; Different YAML versions can have subtle differences in syntax rules. For example, YAML 1.1 allowed for more flexibility in how complex data structures were represented compared to YAML 1.2. A YAML file that works fine in one version might cause errors or fail to parse correctly in another if it relies on deprecated or changed syntax.&lt;br&gt;
&lt;strong&gt;3. Library and Tool Compatibility:&lt;/strong&gt; Various tools and libraries that handle YAML might be built to work with specific versions. If your project uses tools or libraries that expect a particular YAML version, you could encounter compatibility issues when integrating different tools or libraries. For example, if a CI/CD tool is designed to parse YAML 1.2 but your manifests are written using YAML 1.1 features, you might face integration issues.&lt;br&gt;
&lt;strong&gt;4. Behavioral Changes:&lt;/strong&gt; Subtle behavioral changes between YAML versions can lead to inconsistencies. For instance, the way certain data types or structures are interpreted might differ. This could lead to unexpected results if your YAML documents use features that have changed behavior between versions.&lt;/p&gt;
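The classic illustration of point 4 is the so-called "Norway problem": YAML 1.1 resolves a number of unquoted scalars as booleans that YAML 1.2 treats as plain strings:

```yaml
country_code: NO   # false under YAML 1.1 implicit typing; the string "NO" under YAML 1.2
enabled: yes       # true in YAML 1.1; the string "yes" in YAML 1.2
version: 1.20      # parsed as the float 1.2; quote it ("1.20") to keep the trailing zero
```

Quoting ambiguous scalars sidesteps the version difference entirely, which is why many style guides recommend quoting all string values.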

&lt;p&gt;&lt;strong&gt;Cross-environmental management&lt;/strong&gt;&lt;br&gt;
Applications typically need to be deployed across multiple environments, such as development, testing, staging, and production. Each environment may require specific configuration settings, such as different resource limits or database connections. Managing these environment-specific configurations within YAML manifests can quickly become a maintenance nightmare. Developers often resort to duplicating and modifying manifests for each environment, leading to inconsistencies and configuration drift.&lt;/p&gt;
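One common way to avoid duplicating whole manifests is a shared template plus small per-environment override files, for example with Helm values files (the filenames and numbers here are illustrative):

```yaml
# values-staging.yaml
replicaCount: 1
database:
  host: staging-db.internal

# values-production.yaml
replicaCount: 5
database:
  host: prod-db.internal
```

The deployment step then selects the right overlay (e.g. `helm upgrade --install myapp ./chart -f values-production.yaml`), keeping a single source of truth for the manifests themselves.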
&lt;h2&gt;
  
  
  How Kubernetes YAML challenges affect developer productivity
&lt;/h2&gt;

&lt;p&gt;The challenges associated with managing Kubernetes YAML manifests have a direct and profound impact on developer productivity. These issues consume valuable time and resources, hindering developers' ability to focus on building features that deliver value to users and the business. Here are some specific ways these challenges affect productivity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased debugging time&lt;/strong&gt;&lt;br&gt;
The complexity and sensitivity of YAML syntax often result in errors that are difficult to diagnose and resolve. One common issue is indentation errors caused by using tabs instead of spaces. YAML is highly sensitive to indentation and expects consistent use of spaces for defining structure. Here’s an example of how this can lead to issues:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML Example with Tabs (Invalid)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # this line is indented with a tab
  labels:
    app: example           # these lines are indented with tabs
spec:
  containers:
    - name: example-container
      image: nginx
      ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;(Note: The above example uses tabs for indentation, but it should use spaces.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML Example with Spaces (Valid)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
    - name: example-container
      image: nginx
      ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the invalid YAML example, tabs are used for indentation instead of spaces. This error can be difficult to detect because YAML parsers often don’t provide clear error messages for this kind of issue. As a result, a deployment might fail or behave unexpectedly due to incorrect indentation, leading developers to spend considerable time debugging the issue.&lt;/p&gt;
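Because this failure mode is so hard to spot by eye, it's worth catching mechanically before the manifest ever reaches the cluster. A minimal standalone sketch in Python (illustrative only, not part of any Kubernetes tooling):

```python
def find_tab_indented_lines(yaml_text: str) -> list[int]:
    """Return the 1-based line numbers whose leading whitespace contains a tab."""
    bad_lines = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        # Slice off everything after the leading whitespace
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad_lines.append(lineno)
    return bad_lines


manifest = "metadata:\n  name: example-pod\n\tlabels:\n\t\tapp: example\n"
print(find_tab_indented_lines(manifest))  # [3, 4]
```

Wiring a check like this into a pre-commit hook turns a confusing deployment failure into an immediate, line-numbered error.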

&lt;p&gt;&lt;strong&gt;Cognitive load and burnout&lt;/strong&gt;&lt;br&gt;
The need to master YAML's intricate features adds to the cognitive load on developers. This learning curve can be particularly steep for those new to YAML, leading to frustration and burnout. The constant attention required to manage these complexities detracts from creative problem-solving and innovation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inefficient collaboration and missed deadlines&lt;/strong&gt;&lt;br&gt;
The challenges of managing YAML manifests can impede collaboration between development and DevOps teams. The time and resources spent resolving YAML-related issues slow down the overall development process, leading to potential bottlenecks. This inefficiency can further impact project timelines, resulting in missed deadlines and a slower pace of product evolution. The reduced focus on feature development and innovation ultimately affects the competitiveness of the product in the market and hinders a team's ability to meet user demands and adapt to market changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solving Kubernetes YAML challenges with Kubernetes self-service platforms
&lt;/h2&gt;

&lt;p&gt;As discussed, Kubernetes YAML challenges can have a significant impact on developer productivity. Therefore, it's essential to explore solutions that can help overcome these obstacles. &lt;a href="https://mogenius.com/" rel="noopener noreferrer"&gt;Kubernetes self-service platforms like mogenius&lt;/a&gt; offer a powerful means to streamline workflows and reduce the complexities associated with YAML management. Here’s how these platforms can make a difference:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplified configuration management&lt;/strong&gt;&lt;br&gt;
Self-service platforms provide user-friendly interfaces that abstract away the intricacies of YAML syntax, allowing developers to manage configurations without getting bogged down by syntax errors. This simplification reduces the time spent on debugging and allows developers to focus on core development tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated version control&lt;/strong&gt;&lt;br&gt;
With built-in version control mechanisms, self-service platforms ensure that YAML manifests remain consistent across different environments. This automation mitigates the risk of version inconsistencies and configuration drift, leading to more reliable deployments and smoother operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced collaboration and communication&lt;/strong&gt;&lt;br&gt;
These platforms facilitate seamless collaboration between development and DevOps teams by providing centralized tools and repositories. By reducing the need for constant back-and-forth communication, teams can resolve issues more efficiently and maintain a steady development pace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Environment-specific customization&lt;/strong&gt;&lt;br&gt;
Kubernetes self-service platforms enable developers to easily manage environment-specific configurations through templating and parameterization. This customization ensures that each environment's unique requirements are met without the need for redundant manual edits, enhancing consistency and reducing errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduced cognitive load&lt;/strong&gt;&lt;br&gt;
By abstracting complex features and automating routine tasks, these platforms significantly reduce the cognitive load on developers. With fewer distractions and technical hurdles, developers can dedicate more energy to innovation and delivering value to users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping It Up
&lt;/h2&gt;

&lt;p&gt;In this blog post, we first discussed some key challenges associated with managing Kubernetes YAML manifests. We then explored how these challenges significantly affect developer experience and decrease the productivity of teams. Finally, we examined how Kubernetes self-service platforms help address these challenges.&lt;/p&gt;

&lt;p&gt;The message is clear: if you value efficient operations and want to make the most of your time, adopting self-service platforms is the best approach.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>developer</category>
      <category>devops</category>
      <category>learning</category>
    </item>
    <item>
      <title>Best Practices for Writing Kubernetes YAML Manifests</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Mon, 09 Sep 2024 15:26:18 +0000</pubDate>
      <link>https://dev.to/janlepsky/best-practices-for-writing-kubernetes-yaml-manifests-1ia4</link>
      <guid>https://dev.to/janlepsky/best-practices-for-writing-kubernetes-yaml-manifests-1ia4</guid>
      <description>&lt;p&gt;Kubernetes objects are deployed to Kubernetes clusters using configuration files written in YAML, often referred to as YAML manifests. You specify the "desired state" of objects in a manifest file and send the file to the Kubernetes API server. Kubernetes then automatically configures and manages the application based on your specifications. In this blog post, we'll outline five best practices you should remember while writing Kubernetes YAML manifests. Let's get started!&lt;/p&gt;

&lt;h1&gt;
  
  
  1) Use the latest stable API version
&lt;/h1&gt;

&lt;p&gt;Kubernetes API versions typically go through three stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Alpha:&lt;/strong&gt; The version names contain "alpha" (e.g., &lt;code&gt;v1alpha1&lt;/code&gt;). These are experimental features that may be unstable and are disabled by default. Alpha APIs can change without notice and are not recommended for production use.&lt;br&gt;
&lt;strong&gt;2. Beta:&lt;/strong&gt; The version names contain "beta" (e.g., &lt;code&gt;v2beta3&lt;/code&gt;). These are well-tested features, but are disabled by default. Beta features are considered safe to enable but are not recommended for production use as they may still undergo breaking changes.&lt;br&gt;
&lt;strong&gt;3. Stable:&lt;/strong&gt; The version names are simply "vX" where X is an integer (e.g., &lt;code&gt;v1&lt;/code&gt;). These are production-ready features that are fully supported, enabled by default, and maintain backwards compatibility.&lt;/p&gt;

&lt;p&gt;To leverage these versioned APIs, every Kubernetes object manifest must specify the API version in a field named &lt;code&gt;apiVersion&lt;/code&gt;. This field tells Kubernetes which version of the API to use when processing the manifest.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;apiVersion&lt;/code&gt; field typically consists of two parts: the API group (such as &lt;code&gt;apps&lt;/code&gt; or &lt;code&gt;batch&lt;/code&gt;) and the actual version (such as &lt;code&gt;v1&lt;/code&gt;). Note that for core Kubernetes objects, only the version is specified.&lt;/p&gt;

&lt;p&gt;Here's an example manifest for a Deployment object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;apiVersion: apps/v1&lt;/code&gt; indicates that this Deployment object uses the &lt;code&gt;v1&lt;/code&gt; version of the &lt;code&gt;apps&lt;/code&gt; API group, which is the latest stable version for Deployments. When creating Kubernetes object manifests, you should always use the latest stable API version available for each object type.&lt;/p&gt;

&lt;p&gt;Here’s why:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Reliability:&lt;/strong&gt; Stable APIs are less likely to introduce breaking changes, ensuring that your applications remain functional over time.&lt;br&gt;
&lt;strong&gt;2. Support:&lt;/strong&gt; Stable versions receive regular updates and support from the Kubernetes community, making it easier to find help and resources.&lt;br&gt;
&lt;strong&gt;3. Future-proofing:&lt;/strong&gt; By adopting stable APIs, you position your applications to benefit from ongoing enhancements and avoid the risks associated with deprecated or unstable versions.&lt;/p&gt;

&lt;p&gt;To find out the latest stable API version for an object on your current Kubernetes cluster, you can use the &lt;code&gt;kubectl api-resources&lt;/code&gt; command. This command queries the Kubernetes API server you're connected to and lists all available resources and their API versions supported by that specific cluster.&lt;/p&gt;
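For example, to see which group and version your cluster serves for Deployments and the other `apps` resources:

```shell
kubectl api-resources --api-group=apps
```

The APIVERSION column in the output (e.g. `apps/v1`) is the value to put in your manifests.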

&lt;p&gt;&lt;a href="https://mogenius.com/whitepaper?utm_source=devto&amp;amp;utm_medium=article_post&amp;amp;utm_campaign=yaml-best-practices" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy82um0cem7h6ss5stgzz.png" alt="Image description" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  2) Use labels that identify semantic attributes of your application
&lt;/h1&gt;

&lt;p&gt;In Kubernetes, labels are arbitrary key/value pairs that you can attach to objects. They provide a flexible way to organize and categorize objects and manage them efficiently. Labels are particularly useful when combined with label selectors, which allow you to filter and operate on specific sets of objects.&lt;/p&gt;

&lt;p&gt;Consider this example Deployment manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While these labels are valid, they don't capture the application's semantic attributes; they convey little about what the application actually is or does. A better approach is to add more descriptive labels.&lt;/p&gt;

&lt;p&gt;The official Kubernetes documentation recommends a set of &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/" rel="noopener noreferrer"&gt;common labels&lt;/a&gt; that you can apply to your object manifests. These labels, which share the &lt;code&gt;app.kubernetes.io&lt;/code&gt; prefix followed by a &lt;code&gt;/&lt;/code&gt; separator, provide a standardized way to describe your application's components and improve interoperability with various Kubernetes tools and systems.&lt;/p&gt;

&lt;p&gt;Here's what an improved version of the aforementioned Deployment manifest could look like, incorporating these recommended labels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: frontend
    app.kubernetes.io/part-of: web-application
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These semantic labels provide much more context about the application. They describe not just what the application is, but also its version and its role in the larger system.&lt;/p&gt;

&lt;p&gt;The benefits of this semantic labelling approach include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Improved organization:&lt;/strong&gt; Resources are grouped in a more meaningful way.&lt;br&gt;
&lt;strong&gt;2. Enhanced querying:&lt;/strong&gt; You can easily find all frontend components or all resources related to a specific application.&lt;br&gt;
&lt;strong&gt;3. Clearer communication:&lt;/strong&gt; Team members can quickly understand the purpose and context of each resource.&lt;br&gt;
&lt;strong&gt;4. Interoperability:&lt;/strong&gt; The use of standardized &lt;code&gt;app.kubernetes.io/&lt;/code&gt; labels enables different Kubernetes tools to work together seamlessly, recognizing and utilizing the same information across various platforms and systems.&lt;/p&gt;

&lt;p&gt;By using semantic labels consistently across your Kubernetes objects, you create a more self-documenting system that is easier to understand and manage.&lt;/p&gt;
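Conceptually, a label selector is just a subset test over these key/value pairs. A rough Python sketch of how matchLabels-style selection behaves (illustrative only, not the actual Kubernetes implementation):

```python
def match_labels(selector: dict, labels: dict) -> bool:
    """True if every key/value pair in the selector is present in the labels."""
    return all(labels.get(key) == value for key, value in selector.items())


pod_labels = {
    "app.kubernetes.io/name": "myapp",
    "app.kubernetes.io/component": "frontend",
}

print(match_labels({"app.kubernetes.io/component": "frontend"}, pod_labels))  # True
print(match_labels({"app.kubernetes.io/component": "backend"}, pod_labels))   # False
```

This is why descriptive labels pay off: a query for every frontend component, or everything belonging to one application, is just a selector over the pairs you chose.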
&lt;h1&gt;
  
  
  3) Put object descriptions in annotations for better introspection
&lt;/h1&gt;

&lt;p&gt;In Kubernetes, annotations are key-value pairs that allow you to attach non-identifying metadata to objects. Typical examples of annotations include build information, release IDs, Git branch names, PR numbers, image hashes, registry information, or team contact details.&lt;/p&gt;

&lt;p&gt;Annotations provide a way to examine and understand the objects in the cluster more deeply. This is what the phrase "better introspection" means in the context of Kubernetes annotations.&lt;/p&gt;

&lt;p&gt;Here's an example of a Pod with three annotations for commit, author, and branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  annotations:
    git.commit: "7a8b9c0d1e2f3g4h5i6j7k8l9m0n1o2p"
    git.author: "Sarah Chen &amp;lt;sarah.chen@example.com&amp;gt;"
    git.branch: "feature/custom-nginx-config"
spec:
  containers:
  - name: nginx
    image: nginx:1.27
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, annotations provide valuable context about the specific version of the NGINX configuration being deployed, which can be extremely useful for debugging, auditing, and managing your Kubernetes Deployments. While these annotations offer useful information for human readers, their power extends far beyond simple documentation.&lt;/p&gt;

&lt;p&gt;In fact, annotations are primarily used to provide additional context or configuration information that can be utilized by external tools, automation systems, or client libraries interacting with the Kubernetes API.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A CI/CD tool might use annotations to store information about the build process.&lt;/li&gt;
&lt;li&gt;A monitoring tool might use annotations to specify how to scrape metrics or which alerts to associate with a particular resource.&lt;/li&gt;
&lt;li&gt;A custom deployment tool might use annotations to store information about rollout strategies or canary deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, always include descriptive and relevant annotations in your Kubernetes object definitions to enhance manageability and observability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Differences in Allowed Characters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Labels: Keys are more restricted; both the prefix and the name must conform to DNS subdomain rules and specific character restrictions. Values must be relatively short (63 characters or less) and are limited to certain characters.&lt;/li&gt;
&lt;li&gt;Annotations: Keys follow the same rules as labels, but values have no strict size limits and can contain any UTF-8 characters, allowing for much more flexibility in what they can store.&lt;/li&gt;
&lt;/ul&gt;
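These value rules can be checked mechanically. The regular expression below mirrors the validation Kubernetes applies to label values; the helper itself is just an illustrative sketch:

```python
import re

# Empty values are allowed; non-empty values must begin and end with an
# alphanumeric character, with dashes, underscores, and dots in between.
LABEL_VALUE_RE = re.compile(r"^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$")


def is_valid_label_value(value: str) -> bool:
    if len(value) > 63:  # label values are capped at 63 characters
        return False
    return LABEL_VALUE_RE.fullmatch(value) is not None


print(is_valid_label_value("1.0.0"))          # True
print(is_valid_label_value("-leading-dash"))  # False
print(is_valid_label_value("x" * 64))         # False
```

Annotation values face no such regex or length check, which is what makes them suitable for build metadata, JSON blobs, and other free-form payloads.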

&lt;h1&gt;
  
  
  4) Don’t hardcode secret data
&lt;/h1&gt;

&lt;p&gt;Secret data consists of sensitive information that should be protected from unauthorized access. This includes passwords, API keys, tokens, and other confidential data. A common mistake in Kubernetes deployments is hardcoding this sensitive information directly into manifest files.&lt;/p&gt;

&lt;p&gt;To illustrate this issue, let's examine the following Deployment manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql-container
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "my-secret-password"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this manifest, we have hardcoded the value for the root password directly in the environment variable. This exposes the password in plain text, posing significant security risks.&lt;/p&gt;

&lt;p&gt;A more secure approach is to store the root password in a Kubernetes Secret object and then reference it in the Deployment. Here's how you can create a Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-pass
type: Opaque
stringData:
  password: my-secret-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
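
&lt;p&gt;Alternatively, if you'd rather not keep the Secret value in a manifest file at all, the same Opaque Secret can be created imperatively with kubectl (the literal value here is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Creates the same Secret without storing the value in a file
kubectl create secret generic mysql-root-pass \
  --from-literal=password=my-secret-password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;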



&lt;p&gt;Once you've created the Secret, you can reference it in your Deployment manifest by exposing it as an environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql-container
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-pass
              key: password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using Secrets in this manner offers several key benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Separation of concerns:&lt;/strong&gt; Keeping sensitive data separate from application code adheres to best practices in configuration management.&lt;br&gt;
&lt;strong&gt;2. Easier management:&lt;/strong&gt; Secrets can be updated independently of the Deployment, allowing for easier rotation of credentials.&lt;br&gt;
&lt;strong&gt;3. Environment-specific configuration:&lt;/strong&gt; Different Secret objects can be created for various environments, facilitating consistent application behavior across different setups.&lt;/p&gt;
&lt;h1&gt;
  
  
  5) Combine related Kubernetes objects into one file
&lt;/h1&gt;

&lt;p&gt;When working with Kubernetes, you often need to create multiple related objects to deploy an application. While it's possible to define each object in a separate file, combining related Kubernetes objects into a single file can be beneficial in scenarios such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For small to medium-sized applications where all components are tightly coupled.&lt;/li&gt;
&lt;li&gt;When you want to deploy an entire application stack with a single command.&lt;/li&gt;
&lt;li&gt;For objects that are always deployed together and have a strong relationship.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's explore this concept with an example. Imagine you want to deploy a simple NGINX web server as the backend of a web application, consisting of a Pod and a Service. Traditionally, you might define these two objects in separate YAML files: &lt;code&gt;backend-deployment.yaml&lt;/code&gt; and &lt;code&gt;backend-service.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;backend-deployment.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
 name: nginx-web-server
 labels:
   app.kubernetes.io/name: nginx-web-server
spec:
 containers:
   - name: nginx-web-server
     image: nginx:1.27
     ports:
       - containerPort: 80
         name: http-web-svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;backend-service.yaml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
 name: nginx-web-server-service
spec:
 selector:
   app.kubernetes.io/name: nginx-web-server
 ports:
   - name: http
     protocol: TCP
     port: 80
     targetPort: http-web-svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach results in two separate files that you need to manage and deploy individually.&lt;/p&gt;
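
&lt;p&gt;With separate files, deploying the component means either running &lt;code&gt;kubectl apply&lt;/code&gt; once per file or passing several &lt;code&gt;-f&lt;/code&gt; flags, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Two invocations...
kubectl apply -f backend-deployment.yaml
kubectl apply -f backend-service.yaml

# ...or one invocation with multiple -f flags
kubectl apply -f backend-deployment.yaml -f backend-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;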

&lt;p&gt;Instead, you could combine the two objects into a single &lt;code&gt;backend.yaml&lt;/code&gt; file. This file would contain the Pod and the Service for your backend, separated by a &lt;code&gt;---&lt;/code&gt; delimiter, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
apiVersion: v1
kind: Pod
metadata:
 name: nginx-web-server
 labels:
   app.kubernetes.io/name: nginx-web-server
spec:
 containers:
   - name: nginx-web-server
     image: nginx:1.27
     ports:
       - containerPort: 80
         name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
 name: nginx-web-server-service
spec:
 selector:
   app.kubernetes.io/name: nginx-web-server
 ports:
   - name: http
     protocol: TCP
     port: 80
     targetPort: http-web-svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This combined approach offers several benefits:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simplified deployment:&lt;/strong&gt; You can deploy all objects with a single &lt;code&gt;kubectl apply -f backend.yaml&lt;/code&gt; command.&lt;br&gt;
&lt;strong&gt;2. Improved version control:&lt;/strong&gt; Changes to related objects are tracked together in your version control system.&lt;br&gt;
&lt;strong&gt;3. Better readability:&lt;/strong&gt; It's easier to understand the full application structure when all components are in one file.&lt;br&gt;
&lt;strong&gt;4. Easier troubleshooting:&lt;/strong&gt; When all related objects are in one file, it's simpler to identify and fix issues.&lt;/p&gt;

&lt;p&gt;However, it's important to note that this approach works best for smaller applications or tightly coupled components. For larger, more complex applications, you might still prefer to keep objects in separate files for better modularity and easier management of individual components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Adhering to best practices for writing Kubernetes YAML manifests is essential, particularly when collaborating within a team or managing complex applications. By consistently implementing these practices, you not only streamline your workflow but also enhance the clarity, security, and organization of your deployments. You contribute to a more efficient and collaborative development environment, which leads to improved productivity and a faster software development lifecycle.&lt;/p&gt;

&lt;p&gt;Additionally, utilizing tools such as the &lt;a href="https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml" rel="noopener noreferrer"&gt;YAML extension&lt;/a&gt; for Visual Studio Code can further improve your efficiency when working with YAML manifests. This extension provides features like auto-completion, error highlighting, and snippets, which can help you avoid syntax errors and speed up the file creation process, making it an invaluable tool for any Kubernetes user.&lt;/p&gt;




&lt;p&gt;This article was first published on the &lt;a href="https://mogenius.com?utm_source=Dev.to&amp;amp;utm_medium=blog+post&amp;amp;utm_campaign=Best_Practices_YAML_Manifest" rel="noopener noreferrer"&gt;mogenius&lt;/a&gt; blog. Check it out and discover a Kubernetes platform made for outstanding Developer Experience.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>developer</category>
      <category>learning</category>
    </item>
    <item>
      <title>Monitoring Service Health in Kubernetes: A Simplified Approach</title>
      <dc:creator>Jan Lepsky</dc:creator>
      <pubDate>Tue, 26 Mar 2024 14:42:28 +0000</pubDate>
      <link>https://dev.to/janlepsky/monitoring-service-health-in-kubernetes-a-simplified-approach-3f51</link>
      <guid>https://dev.to/janlepsky/monitoring-service-health-in-kubernetes-a-simplified-approach-3f51</guid>
      <description>&lt;p&gt;Determining the health of a service is a crucial aspect of maintaining reliability and performance in modern software development, especially when working with container orchestration tools like Kubernetes. &lt;/p&gt;

&lt;p&gt;The health of a service is a multifaceted concept that hinges on its ability to meet availability, performance, and correctness standards throughout its lifecycle. This is not a trivial task, given the complex interplay of factors that influence a service's health, from the build pipeline and deployment processes to the ongoing monitoring and management of the service on Kubernetes.&lt;/p&gt;

&lt;p&gt;Let’s have a look at some critical areas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Service Health: A Comprehensive Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgp31d7up42v6jqeso6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgp31d7up42v6jqeso6j.png" alt="CICD Pipeline" width="800" height="518"&gt;&lt;/a&gt;&lt;em&gt;Effective service status monitoring is crucial for real-time health insights across build, deployment, and operations, enabling developers to promptly identify and address failures.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Continuous Integration and Building: The Foundation of Service Health&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The journey toward a healthy service begins with a robust CI pipeline. Key health indicators here include the speed and success of builds and tests, crucial for minimizing "Time to Feedback" for developers. Real-time feedback mechanisms, such as CI/CD dashboards and direct integration of checks into the pipeline (e.g., code coverage, linting), are essential for early detection and resolution of potential issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Deployment: Ready for Traffic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the build has finished, it is key to observe whether the deployment succeeds. Several building blocks of Kubernetes can cause failures, e.g. deployment configurations, resource limits, or pod scheduling. Deployment practices must also ensure that a service is fully prepared to handle traffic before it is exposed to users. Kubernetes Readiness Probes are critical in this phase, verifying that a service is ready to serve requests. In addition, automatic rollback mechanisms based on metrics like error rates and latency ensure that any deployment that could degrade service health is promptly reverted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazsmz0z72uvrn9o0ev28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazsmz0z72uvrn9o0ev28.png" alt="Grafana dashboard" width="800" height="518"&gt;&lt;/a&gt;&lt;em&gt;To keep your service healthy, rigorously monitor CI/CD progress, deploy readiness checks, and proactively manage performance through real-time dashboards for uninterrupted service excellence. (source: grafana-dashboards)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Ongoing Service Monitoring: Keeping the Pulse&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once deployed, continuous monitoring of a service's health is crucial. There are narrowly scoped features in Kubernetes, like Liveness Probes, that help determine when an instance needs to be restarted, as well as broad solutions like Prometheus and Grafana that offer detailed monitoring and visualization of service health. Centralized log aggregation and distributed tracing tools enable deep analysis and troubleshooting.&lt;/p&gt;
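
&lt;p&gt;As an illustration of how such probes are declared, the container spec below combines a liveness probe (restart the container when the check fails) with a readiness probe (withhold traffic until the check passes). The image, endpoint paths, and timings are placeholder assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containers:
  - name: api
    image: example/api:1.0    # illustrative image
    livenessProbe:
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready          # assumed readiness endpoint
        port: 8080
      periodSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;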

&lt;h2&gt;
  
  
  Challenges of bringing service status monitoring to life
&lt;/h2&gt;

&lt;p&gt;Each of the presented steps offers valuable tactics for achieving continuous service health, better visibility during the development process, and faster recovery in case of an incident. However, rolling them out to a team and making use of the tools in the software development lifecycle is a different story. In Kubernetes environments, a failing pipeline or crashing pods can have multiple causes: multiple tools are involved in the process, and at the Kubernetes level there are several workloads that can affect service health.&lt;/p&gt;

&lt;p&gt;The challenge lies in aggregating service status data into an actionable source of truth. With self-service in mind, this should be designed for software developers, enabling them to independently check the status of their service and investigate each step in the pipeline. In an aggregated view, delivering near real-time data is key in order to display, at any given moment, what is happening with the build, deployment, pods, and health checks. Additionally, a certain level of abstraction is required to allow fast interpretation of service status and to reduce the onboarding effort for developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The goal: A single pane of glass
&lt;/h2&gt;

&lt;p&gt;The recently emerging trend of Internal Developer Platforms (IDP) offers a solution for introducing self-service capabilities in development teams. Especially for Kubernetes environments, &lt;a href="https://mogenius.com"&gt;mogenius&lt;/a&gt; offers a Kubernetes Operations Platform that embodies the principles of platform engineering by providing a self-service solution dedicated to simplifying cloud-native development. This platform enables software developers to efficiently deploy and manage applications in the cloud, focusing on innovation rather than infrastructure management.&lt;/p&gt;

&lt;p&gt;Within its suite of features, mogenius includes actionable service status monitoring, providing users with a transparent view of their service's health across the build, deployment, and operational phases. The service status delivers real-time data in an aggregated way, allowing developers to instantly identify the cause of failures independently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq56di7et1efssgmtwoch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq56di7et1efssgmtwoch.png" alt="mogenius service" width="800" height="518"&gt;&lt;/a&gt;&lt;em&gt;Each component of the service indicates success or failure with associated logs to individually investigate the pipeline step.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Kubernetes Health Checks and Beyond&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;While competing solutions require specific Kubernetes onboarding and offer limited capabilities for adjusting and configuring resources, mogenius’s service status system allows for quick identification and resolution of issues. By offering detailed logs and metrics at each step of the service lifecycle, it reduces the time spent diagnosing problems and enhances the developer experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hamltpzcme00zv73n08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hamltpzcme00zv73n08.png" alt="mogenius status successful" width="800" height="53"&gt;&lt;/a&gt;&lt;em&gt;The recent build and deployment were successful and three pods are in running state.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqah72za3fuyaq3rxzl78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqah72za3fuyaq3rxzl78.png" alt="mogenius pod error" width="800" height="53"&gt;&lt;/a&gt;&lt;em&gt;Two of three pods are in an error state which can be investigated in the pod logs.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hppy4wi0vues2e4rl40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hppy4wi0vues2e4rl40.png" alt="mogenius deployment error" width="800" height="53"&gt;&lt;/a&gt;&lt;em&gt;The last deployment failed, resulting in all three pods being unable to reach a running state.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;mogenius goes further by introducing a fourth level of checks: Kubernetes Health Checks, enabling a comprehensive health overview up to the application layer, including scenarios where containers are running, but the application is unreachable, e.g. due to external dependencies like a database failure. With pre-configured health checks toggleable by users, mogenius leverages Kubernetes' Startup Probes, Liveness Probes, and Readiness Probes in a user-friendly manner, adhering to best practices while simplifying the user experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25z7dghjr8ue7ryld0em.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25z7dghjr8ue7ryld0em.png" alt="mogenius Kubernetes health monitoring" width="800" height="600"&gt;&lt;/a&gt;&lt;em&gt;mogenius’ Advanced Kubernetes Health Monitoring&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;mogenius’s approach not only reduces the complexity involved in ensuring service health but also aligns with the shift-left ideology, empowering developers to handle more tasks earlier in the software development lifecycle. mogenius's new service status system exemplifies how technology can be leveraged to enhance visibility, reduce error diagnosis time, and improve overall service reliability and performance in a Kubernetes environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The health of a service is a critical aspect of software development, impacting not just the reliability and performance of the applications but also the efficiency of the development process itself. Navigating the complexities of maintaining service health in Kubernetes environments can be challenging. However, with the advent of internal developer platforms like mogenius, the landscape is changing. mogenius offers a compelling solution that simplifies cloud-native development, empowering developers and DevOps professionals to maintain the health of their services more effectively, allowing them to focus on what they do best: building great software.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
