<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: s1ntaxe770r</title>
    <description>The latest articles on DEV Community by s1ntaxe770r (@s1ntaxe770r).</description>
    <link>https://dev.to/s1ntaxe770r</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F332619%2F6476adfd-e648-46d4-8d95-c75bdb4f8a80.png</url>
      <title>DEV Community: s1ntaxe770r</title>
      <link>https://dev.to/s1ntaxe770r</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/s1ntaxe770r"/>
    <language>en</language>
    <item>
      <title>Deploying a Database Cluster on DigitalOcean using Pulumi</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Sat, 19 Aug 2023 00:24:15 +0000</pubDate>
      <link>https://dev.to/everythingdevops/deploying-a-database-cluster-on-digitalocean-using-pulumi-4dgc</link>
      <guid>https://dev.to/everythingdevops/deploying-a-database-cluster-on-digitalocean-using-pulumi-4dgc</guid>
      <description>&lt;p&gt;&lt;a href="https://everythingdevops.dev/deploying-a-database-cluster-on-digitalocean-using-pulumi/"&gt;This article was originally posted on EverythingDevOps&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pulumi is an open source infrastructure as code (IaC) tool that allows you to define and manage cloud resources using popular languages such as Golang, Python, Typescript, and a &lt;a href="https://www.pulumi.com/docs/intro/languages/"&gt;few others&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pulumi is often compared to Terraform, another infrastructure as code tool that lets users &lt;a href="https://www.techtarget.com/searchitoperations/news/2240187079/Declarative-vs-imperative-The-DevOps-automation-debate"&gt;declaratively&lt;/a&gt; manage infrastructure using the &lt;a href="https://www.terraform.io/language"&gt;HashiCorp Configuration Language&lt;/a&gt; (HCL). The key difference is that Pulumi lets you manage your infrastructure using one of its supported SDKs in your language of choice.&lt;/p&gt;

&lt;p&gt;In this guide, you will use TypeScript to deploy a PostgreSQL database cluster on DigitalOcean; as such, this guide assumes some familiarity with TypeScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Pulumi?
&lt;/h2&gt;

&lt;p&gt;Unlike most infrastructure as code tools, Pulumi allows you to define your infrastructure using a general-purpose programming language, which makes your code much easier to test. If you are familiar with Terraform, you’d agree that not many testing frameworks exist for it, whereas a language like Python offers numerous ones.&lt;/p&gt;

&lt;p&gt;Another advantage of using a general-purpose programming language is familiarity: most developers will find their favorite language more intuitive than a DSL (Domain-Specific Language) such as HCL.&lt;/p&gt;
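&lt;p&gt;To make the testability point concrete, here is a minimal sketch in plain TypeScript. It uses no Pulumi packages; the &lt;code&gt;validateCluster&lt;/code&gt; helper and its rules are hypothetical, but the same idea applies to any logic you wrap around your resource definitions:&lt;/p&gt;

```typescript
// Hypothetical shape mirroring the DatabaseCluster arguments used later in this guide.
interface ClusterConfig {
  engine: string;
  nodeCount: number;
  region: string;
  size: string;
}

// Because this is ordinary TypeScript, a helper like this can be unit-tested
// with any JavaScript test runner, without deploying anything.
function validateCluster(cfg: ClusterConfig): string[] {
  const errors: string[] = [];
  if (cfg.nodeCount < 1) {
    errors.push("nodeCount must be at least 1");
  }
  if (!cfg.size.startsWith("db-")) {
    errors.push("size must be a database slug (db-...)");
  }
  if (cfg.engine.length === 0) {
    errors.push("engine is required");
  }
  return errors;
}

// The configuration used later in this guide passes validation.
const errs = validateCluster({ engine: "pg", nodeCount: 2, region: "nyc1", size: "db-s-2vcpu-4gb" });
console.log(errs.length === 0 ? "valid" : errs.join("; "));
```

&lt;p&gt;Pulumi itself also ships mocking support for unit-testing real resource declarations; see the Pulumi documentation on testing for details.&lt;/p&gt;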

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along with this tutorial, you will need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.digitalocean.com/registrations/new"&gt;A DigitalOcean account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.digitalocean.com/reference/api/create-personal-access-token/"&gt;DigitalOcean API token&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pulumi.com/docs/get-started/install/"&gt;Pulumi CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodejs.org/en/download/"&gt;Node.js&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Initializing a Pulumi project
&lt;/h2&gt;

&lt;p&gt;The Pulumi CLI provides a command for scaffolding new projects. To create one, run the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir postgres-db &amp;amp;&amp;amp; cd postgres-db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: It’s important that the directory is empty; otherwise, the next command will return an error.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi new typescript -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command initializes the project, producing output like that shown in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tSSPjG3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045028112_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tSSPjG3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045028112_image.png" alt="" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This generates a new Pulumi project along with a &lt;a href="https://www.pulumi.com/docs/intro/concepts/stack/"&gt;stack&lt;/a&gt;. A stack is an independent instance of a Pulumi program, and each stack is usually used to represent a separate environment (production, development, or staging). If you are familiar with Terraform, this is similar to &lt;a href="https://www.terraform.io/language/state/workspaces"&gt;workspaces&lt;/a&gt;.&lt;/p&gt;
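&lt;p&gt;Stacks typically differ only in configuration. As a rough illustration in plain TypeScript (the environment names and size slugs below are made-up examples; in a real project these values would live in per-stack &lt;code&gt;Pulumi.&amp;lt;stack&amp;gt;.yaml&lt;/code&gt; files and be read with &lt;code&gt;pulumi.Config&lt;/code&gt;), you can think of each stack as selecting its own settings:&lt;/p&gt;

```typescript
// Made-up per-environment settings; real values would come from Pulumi stack config.
const stackSettings: Record<string, { nodeCount: number; size: string }> = {
  dev:        { nodeCount: 1, size: "db-s-1vcpu-1gb" },
  staging:    { nodeCount: 2, size: "db-s-2vcpu-4gb" },
  production: { nodeCount: 3, size: "db-s-4vcpu-8gb" },
};

// In a Pulumi program the current stack name comes from pulumi.getStack();
// here it is hard-coded for illustration.
const stack = "staging";
console.log(stackSettings[stack]);
```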

&lt;h2&gt;
  
  
  Installing Dependencies
&lt;/h2&gt;

&lt;p&gt;To interact with DigitalOcean resources, you need to install the DigitalOcean Pulumi package:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install @pulumi/digitalocean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xWJjlddS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045124168_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xWJjlddS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045124168_image.png" alt="" width="648" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, set your DigitalOcean API key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi config set digitalocean:token YOUR_DO_API_KEY --secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AOIpASXm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045233381_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AOIpASXm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690045233381_image.png" alt="" width="782" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This stores your API key so Pulumi can authenticate against the DigitalOcean API; the &lt;code&gt;--secret&lt;/code&gt; flag ensures the value passed in is encrypted.&lt;/p&gt;

&lt;p&gt;You could also set your token using an environment variable:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ export DIGITALOCEAN_TOKEN=YOUR-DO-API-KEY 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Provisioning a Database Cluster
&lt;/h2&gt;

&lt;p&gt;To get started, open &lt;code&gt;index.ts&lt;/code&gt; and follow along with the code below:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as pulumi from "@pulumi/pulumi";
import * as digitalocean from "@pulumi/digitalocean";

const pg_cluster = new digitalocean.DatabaseCluster("pulumi-experiments", {
    engine: "pg",
    nodeCount: 2,
    region: "nyc1",
    size: "db-s-2vcpu-4gb",
    version: "12",
});

export const db_uri = pg_cluster.uri;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above code creates a PostgreSQL database cluster with 2 nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;engine&lt;/code&gt; specifies the type of database cluster you want to create; in this case, it’s PostgreSQL. However, DigitalOcean supports a few other database engines; see &lt;a href="https://www.pulumi.com/registry/packages/digitalocean/api-docs/databasecluster/"&gt;this section&lt;/a&gt; of the Pulumi documentation for more information.&lt;/li&gt;
&lt;li&gt;Another important field is &lt;code&gt;size&lt;/code&gt;, which configures the size of each node; see this section of the Pulumi documentation for all valid &lt;a href="https://www.pulumi.com/registry/packages/digitalocean/api-docs/databasecluster/#databaseslug"&gt;database slugs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;version&lt;/code&gt; specifies which version of PostgreSQL you would like to run. &lt;a href="https://docs.digitalocean.com/products/databases/postgresql/#postgresql-limits"&gt;Here&lt;/a&gt; is a list of all supported PostgreSQL versions on DigitalOcean.&lt;/li&gt;
&lt;li&gt;Finally, you export the database connection URI so you can connect to it using your client of choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To deploy the cluster, run the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In a few minutes, you should have a cluster up and running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vRwa5O5M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048863254_Screenshot%2B2023-07-22%2Bat%2B6.30.04%2BPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vRwa5O5M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048863254_Screenshot%2B2023-07-22%2Bat%2B6.30.04%2BPM.png" alt="" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice how &lt;code&gt;db_uri&lt;/code&gt; is marked as “secret” because it is a sensitive value. To output your database URI, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi stack output db_uri --show-secrets &amp;gt; pass.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The above command writes the database URI to a file called &lt;code&gt;pass.db&lt;/code&gt;.&lt;/p&gt;
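&lt;p&gt;Whatever client you use will need the individual pieces of that connection string. As a quick, dependency-free illustration (the URI below is a made-up example in the same format; the real value is whatever the command above printed), Node.js's built-in &lt;code&gt;URL&lt;/code&gt; class can split it apart:&lt;/p&gt;

```typescript
// A made-up connection string in the usual PostgreSQL URI format;
// real values come from `pulumi stack output db_uri --show-secrets`.
const dbUri = "postgresql://doadmin:secret@db-host.example.com:25060/defaultdb?sslmode=require";

const parsed = new URL(dbUri);
console.log(parsed.hostname);                    // host to point your client at
console.log(parsed.port);                        // managed clusters listen on a non-default port
console.log(parsed.username);                    // database user
console.log(parsed.pathname.slice(1));           // database name
console.log(parsed.searchParams.get("sslmode")); // DigitalOcean managed databases require TLS
```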

&lt;h2&gt;
  
  
  Updating your Infrastructure
&lt;/h2&gt;

&lt;p&gt;Now that you have a database cluster, chances are you are not going to stop here; you will want to update your infrastructure. You can try this out by adding a user to the newly created cluster.&lt;/p&gt;

&lt;p&gt;It's good practice to create a separate user rather than using the admin user. Update &lt;code&gt;index.ts&lt;/code&gt; with the following code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as pulumi from "@pulumi/pulumi";
import * as digitalocean from "@pulumi/digitalocean";

const pg_cluster = new digitalocean.DatabaseCluster("pulumi-experiments", {
    engine: "pg",
    nodeCount: 2,
    region: "nyc1",
    size: "db-s-2vcpu-4gb",
    version: "12",
});

const pg_user = new digitalocean.DatabaseUser("non-admin",{clusterId:pg_cluster.id})

export const db_uri = pg_cluster.uri;
export const pg_user_pass = pg_user.password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here you create a new instance of &lt;code&gt;digitalocean.DatabaseUser&lt;/code&gt;, pass in the &lt;code&gt;clusterId&lt;/code&gt; of &lt;code&gt;pg_cluster&lt;/code&gt;, and export the new user’s password as you did with the database URI.&lt;/p&gt;

&lt;p&gt;To see what changes would be applied, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi preview 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kGA5v57t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048975818_image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kGA5v57t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690048975818_image.png" alt="" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are satisfied with the changes, go ahead and apply them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi up 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once this completes, you should have a new database user called &lt;code&gt;non-admin&lt;/code&gt;; you can output its password by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi stack output pg_user_pass --show-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Clean up (Optional)
&lt;/h2&gt;

&lt;p&gt;To tear down the resources you just created, run the following command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pulumi destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FCGuT6au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690049845201_Screenshot%2B2023-07-22%2Bat%2B7.15.22%2BPM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FCGuT6au--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://paper-attachments.dropboxusercontent.com/s_13391A74EB1C8BB5898EFC17337547BBBA4C512C7A4333E17FDBD333AABCE71A_1690049845201_Screenshot%2B2023-07-22%2Bat%2B7.15.22%2BPM.png" alt="" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, you deployed a PostgreSQL database cluster using Pulumi and updated it with a non-admin user. This is just one of the many services you could deploy using Pulumi. Hopefully, this was enough to get you started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Check out &lt;a href="https://www.civo.com/learn/kubernetes-clusters-using-the-civo-pulumi-provider"&gt;this guide&lt;/a&gt; on how to deploy a Kubernetes cluster using Pulumi&lt;/li&gt;
&lt;li&gt;Take a look at this section of the &lt;a href="https://www.pulumi.com/registry/packages/digitalocean/api-docs/"&gt;Pulumi registry&lt;/a&gt; for a full list of supported services&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>infrastructureascode</category>
      <category>pulumi</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How To Deploy Meshery In Kind</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Sun, 03 Oct 2021 13:23:54 +0000</pubDate>
      <link>https://dev.to/s1ntaxe770r/how-to-deploy-meshery-in-kind-1d5j</link>
      <guid>https://dev.to/s1ntaxe770r/how-to-deploy-meshery-in-kind-1d5j</guid>
      <description>&lt;p&gt;In this post, I would be showing you how to  deploy &lt;a href="https://meshery.io/" rel="noopener noreferrer"&gt;Meshery&lt;/a&gt; on Kubernetes using  &lt;a href="//kind.sigs.k8s.io/"&gt;Kind&lt;/a&gt; but first…&lt;/p&gt;

&lt;h2&gt;
  
  
  What the heck is Meshery?
&lt;/h2&gt;

&lt;p&gt;If you are reading this, chances are you are already familiar with Meshery or you are looking to find out what it is. Either way, you are in the right place.&lt;/p&gt;

&lt;p&gt;Meshery is an open-source Service Mesh management plane. In simpler terms, Meshery lets you orchestrate the installation and management of different Service Meshes, and it also lets you evaluate their performance using the &lt;a href="https://smp-spec.io" rel="noopener noreferrer"&gt;SMP specification&lt;/a&gt;. These are just two of the features Meshery provides out of the box. If I’ve gotten you a tiny bit interested in Meshery, head over to &lt;a href="https://docs.meshery.io/functionality" rel="noopener noreferrer"&gt;https://docs.meshery.io/functionality&lt;/a&gt; for a list of additional features Meshery provides.&lt;/p&gt;

&lt;p&gt;Now that you’re familiar with what Meshery is let's get it installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Before we get started, be sure you have Docker and Go installed, as both are required to install Kind. We'll also need &lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; to deploy Meshery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Kind
&lt;/h3&gt;

&lt;p&gt;To install Kind, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;GO111MODULE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"on"&lt;/span&gt; go get sigs.k8s.io/kind@v0.11.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you run into an error along the lines of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;zsh: &lt;span class="nb"&gt;command &lt;/span&gt;not found: kind
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try adding the following alias to your shell configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;alias &lt;/span&gt;&lt;span class="nv"&gt;kind&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GOBIN&lt;/span&gt;&lt;span class="s2"&gt;/kind"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a cluster
&lt;/h2&gt;

&lt;p&gt;Next, we'll create a Kind cluster with ingress support enabled; this will come in handy when we want to expose Meshery later on.&lt;/p&gt;

&lt;p&gt;Create a file called &lt;code&gt;cluster.yaml&lt;/code&gt; and populate the file with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kind.x-k8s.io/v1alpha4&lt;/span&gt;
&lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;control-plane&lt;/span&gt;
  &lt;span class="na"&gt;kubeadmConfigPatches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;kind: InitConfiguration&lt;/span&gt;
    &lt;span class="s"&gt;nodeRegistration:&lt;/span&gt;
      &lt;span class="s"&gt;kubeletExtraArgs:&lt;/span&gt;
        &lt;span class="s"&gt;node-labels: "ingress-ready=true"&lt;/span&gt;
  &lt;span class="na"&gt;extraPortMappings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
    &lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now run the following command to create a cluster using this configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind create cluster &lt;span class="nt"&gt;--name&lt;/span&gt; meshery &lt;span class="nt"&gt;--config&lt;/span&gt; cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few minutes, you should have a Kubernetes cluster up and running.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Meshery
&lt;/h3&gt;

&lt;p&gt;Hop into your terminal and run the following commands to install Meshery:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/layer5io/meshery.git&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;meshery
 &lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace meshery
 &lt;span class="nv"&gt;$ &lt;/span&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;meshery &lt;span class="nt"&gt;--namespace&lt;/span&gt; meshery &lt;span class="nb"&gt;install&lt;/span&gt;/kubernetes/helm/meshery
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exposing meshery
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, we'll access Meshery using an Ingress. Create a file called &lt;code&gt;meshery-ingress.yaml&lt;/code&gt; and add the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;meshery-ingress&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;meshery-ingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;meshery.local&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;meshery&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9081&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration using &lt;code&gt;kubectl apply -n meshery -f meshery-ingress.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now create the following entry in &lt;code&gt;/etc/hosts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;127.0.0.1 meshery.local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, if you head over to &lt;a href="http://meshery.local" rel="noopener noreferrer"&gt;http://meshery.local&lt;/a&gt; in your browser, you should be able to access Meshery's UI, which looks something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffile.coffee%2Fu%2F4L_8Zc31sDWWF1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffile.coffee%2Fu%2F4L_8Zc31sDWWF1.png" alt="Untitled"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Meshery's command-line client
&lt;/h3&gt;

&lt;p&gt;While you could interact with Meshery through the UI alone, at some point you are going to want to use the command-line client, mesheryctl. So let's get that installed.&lt;/p&gt;

&lt;p&gt;Head over to &lt;a href="https://github.com/meshery/meshery/releases/" rel="noopener noreferrer"&gt;https://github.com/meshery/meshery/releases/&lt;/a&gt; and download the binary for your operating system. Next, unzip the file and move it into your PATH:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;unzip mesheryctl_0.5.52_Darwin_x86_64.zip
&lt;span class="nb"&gt;mv &lt;/span&gt;mesheryctl /usr/local/bin/mesheryctl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The version of the binary might differ depending on when you are reading this. &lt;/p&gt;

&lt;p&gt;Now that you have mesheryctl installed, you should be able to run &lt;code&gt;mesheryctl version&lt;/code&gt;. On your first try, you should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;~
❯ mesheryctl version     
Missing Meshery config file.
Create default config now &lt;span class="o"&gt;[&lt;/span&gt;y/n]?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enter &lt;code&gt;y&lt;/code&gt; and mesheryctl will generate a config file, which we'll also need later on.&lt;/p&gt;

&lt;p&gt;If all went well, you should be presented with this error message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Default config file created at /Users/someguy/.meshery/config.yaml
        VERSION     GITSHA      
Client  v0.5.62     35e8d943    
Server  unavailable unavailable 

  Unable to communicate with Meshery: Get &lt;span class="s2"&gt;"http://localhost:9081/api/system/version"&lt;/span&gt;: dial tcp &lt;span class="o"&gt;[&lt;/span&gt;::1]:9081: connect: connection refused
  See https://docs.meshery.io &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help &lt;/span&gt;getting started with Meshery.

Checking &lt;span class="k"&gt;for &lt;/span&gt;latest version of mesheryctl...

  v0.5.62 is the latest release.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This happens because mesheryctl is trying to communicate with Meshery on the default address (&lt;code&gt;localhost:9081&lt;/code&gt;), while in our case Meshery is exposed at &lt;a href="http://meshery.local" rel="noopener noreferrer"&gt;http://meshery.local&lt;/a&gt;. Luckily, we can change this using the config file mesheryctl generated earlier.&lt;/p&gt;

&lt;p&gt;Open up the config file located at &lt;code&gt;~/.meshery/config.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;contexts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;local&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://localhost:9081&lt;/span&gt;
    &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Default&lt;/span&gt;
    &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
    &lt;span class="na"&gt;adapters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-istio&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-linkerd&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-consul&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-nsm&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-kuma&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-cpx&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-osm&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-traefik-mesh&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-nginx-sm&lt;/span&gt;
    &lt;span class="na"&gt;channel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;span class="na"&gt;current-context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;tokens&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Default&lt;/span&gt;
  &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auth.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Taking a closer look, we see that the endpoint is set to &lt;code&gt;http://localhost:9081&lt;/code&gt;. Modify the file so it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;contexts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;local&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://meshery.local&lt;/span&gt;
    &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Default&lt;/span&gt;
    &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes&lt;/span&gt;
    &lt;span class="na"&gt;adapters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-istio&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-linkerd&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-consul&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-nsm&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-kuma&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-cpx&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-osm&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-traefik-mesh&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;meshery-nginx-sm&lt;/span&gt;
    &lt;span class="na"&gt;channel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;span class="na"&gt;current-context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;tokens&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Default&lt;/span&gt;
  &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;auth.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I changed the endpoint and platform to match our current configuration. Now run &lt;code&gt;mesheryctl version&lt;/code&gt; again and you should see the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ mesheryctl version      
        VERSION GITSHA   
Client  v0.5.62 35e8d943    
Server  v0.5.62 35e8d943    

Checking &lt;span class="k"&gt;for &lt;/span&gt;latest version of mesheryctl...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there you go: we successfully deployed Meshery on kind and configured the CLI to interact with it. If you have any questions or want to contribute to the Meshery project, feel free to join the Slack workspace using the link &lt;a href="https://layer5io.slack.com" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Now go forth and make a mesh of things!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>meshery</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Deploying a Go-based app to AKS using Kubestack</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Mon, 30 Aug 2021 14:16:56 +0000</pubDate>
      <link>https://dev.to/s1ntaxe770r/deploying-a-go-based-app-to-aks-using-kubestack-2oll</link>
      <guid>https://dev.to/s1ntaxe770r/deploying-a-go-based-app-to-aks-using-kubestack-2oll</guid>
      <description>&lt;p&gt;In this post, I would be showing you how to deploy a golang application to Kubernetes using &lt;a href="https://kubestack.com" rel="noopener noreferrer"&gt;Kubestack&lt;/a&gt;. Kubestack is a Gitops automation built on Terraform to reduce the risk of deploying and increasing development speed. That being said I would not be covering how to set up kubestack but rather how to deploy Kubernetes manifests using Kubestack, for more info on how to up a working Kubestack pipeline check out the getting started guide over &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-get-started" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Now let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Once you have your Kubestack pipeline set up, your folder structure should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nb"&gt;.&lt;/span&gt;
    ├── Dockerfile
    ├── Dockerfile.loc
    ├── README.md
    ├── aks_zero_cluster.tf
    ├── aks_zero_ingress.tf
    ├── aks_zero_providers.tf
    ├── manifests
    └── versions.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To deploy our Kubernetes manifests, we will make use of a cluster service module. Cluster service modules in Kubestack allow Terraform to interact directly with Kubernetes; this covers things like creating namespaces, deployments, and other resources. &lt;/p&gt;

&lt;p&gt;Open up &lt;code&gt;aks_zero_cluster.tf&lt;/code&gt; and add the following lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;    &lt;span class="err"&gt;...&lt;/span&gt;

    &lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"custom_manifests"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;providers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;kustomization&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;kustomization&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aks_zero&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kbst.xyz/catalog/custom-manifests/kustomization"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.1.0"&lt;/span&gt;
      &lt;span class="nx"&gt;configuration&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;apps&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"apps-&lt;/span&gt;&lt;span class="k"&gt;${terraform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workspace&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/namespace.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/deployment.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/service.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/ingress.yaml"&lt;/span&gt;
          &lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="nx"&gt;common_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"env"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;terraform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workspace&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nx"&gt;ops&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
        &lt;span class="nx"&gt;loc&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we initialize the custom manifests cluster service module and tell it to use the existing &lt;code&gt;aks_zero&lt;/code&gt; cluster via the kustomization provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;     &lt;span class="nx"&gt;providers&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;kustomization&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;kustomization&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aks_zero&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration block is where the magic happens. First, we declare which workspace we want our resources deployed to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;
     &lt;span class="nx"&gt;apps&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;namespace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"apps-&lt;/span&gt;&lt;span class="k"&gt;${terraform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workspace&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/namespace.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/deployment.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/service.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/manifests/apps/ingress.yaml"&lt;/span&gt;
          &lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="nx"&gt;common_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"env"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;terraform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;workspace&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we declare which namespace the manifests should be deployed to using &lt;code&gt;namespace = "apps-${terraform.workspace}"&lt;/code&gt;, which translates to &lt;code&gt;apps-ops&lt;/code&gt; or &lt;code&gt;apps-apps&lt;/code&gt; depending on your current workspace. The &lt;code&gt;resources&lt;/code&gt; list then tells Terraform to look in the &lt;code&gt;manifests&lt;/code&gt; folder in the current directory for the manifests we want to deploy. Now that we have the cluster module set up, let's create the manifests.&lt;br&gt;
Run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;manifests/apps &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;manifests/apps &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;touch &lt;/span&gt;deployment.yaml service.yaml ingress.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now populate the files with the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# manifests/apps/namespace.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps-apps&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps-ops&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: since this is a fresh Kubernetes cluster, I have included a manifest for the namespaces. Kubestack deploys resources in the order specified in the module, so the namespace &lt;code&gt;apps-apps&lt;/code&gt; needs to exist first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#manifests/apps/deployment.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-api&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-api&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/s1ntaxe770r/evil-ekow:latest&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_CACHE_HOST&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redis-svc"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;REDIS_PORT&lt;/span&gt;
            &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6379"&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:alpine&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;

&lt;span class="c1"&gt;# manifests/apps/service.yaml&lt;/span&gt;
&lt;span class="s"&gt;--&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6379&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-svc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-api&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;span class="c1"&gt;# manifests/apps/ingress.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myingress&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myingress&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod.evil.corp&lt;/span&gt;
    &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ping-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above manifests will deploy a version of an API that uses Redis to store key-value pairs. You can check the source code out &lt;a href="https://github.com/s1ntaxe770r/evil-ekow" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, you can test out these changes locally by running &lt;code&gt;kbst apply local&lt;/code&gt;. If all looks good, you should be able to tag and deploy like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git tag apps-deploy-1
&lt;span class="nv"&gt;$ &lt;/span&gt;git push origin apps-deploy-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Accessing your deployment
&lt;/h2&gt;

&lt;p&gt;Because I don't have a valid domain at the time of writing, I will be using my ingress controller's external IP to access the deployment. However, if you have your domain properly configured, please refer to this portion of the&lt;br&gt;
&lt;a href="https://www.kubestack.com/framework/documentation/tutorial-provision-infrastructure" rel="noopener noreferrer"&gt;Kubestack&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;First, grab your ingress controller's external IP using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$  kubectl get ingresses -A
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, create the following entry in &lt;code&gt;/etc/hosts&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;10.10.20.5   prod.evil.corp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Be sure to change the IP to your ingress controller's external IP. Now you should be able to access the Swagger docs at&lt;br&gt;
&lt;a href="http://prod.evil.corp" rel="noopener noreferrer"&gt;http://prod.evil.corp&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffile.coffee%2Fu%2FKwFlKr9wJoj3hk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffile.coffee%2Fu%2FKwFlKr9wJoj3hk.png" alt="evil-corp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, I covered how to deploy your Kubernetes manifests using Kubestack on Azure. Note that this is not limited to Azure, as Kubestack also supports GKE and AWS at the time of writing, so be sure to check out the&lt;br&gt;
&lt;a href="https://www.kubestack.com/framework/documentation/tutorial-provision-infrastructure" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for any provider-specific steps. Now go forth and git deploying 😄&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>azure</category>
      <category>kubestack</category>
      <category>go</category>
    </item>
    <item>
      <title>5 reasons why frameworks make sense for infrastructure as code</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Tue, 13 Jul 2021 13:41:18 +0000</pubDate>
      <link>https://dev.to/s1ntaxe770r/5-reasons-why-frameworks-make-sense-for-infrastructure-as-code-1epl</link>
      <guid>https://dev.to/s1ntaxe770r/5-reasons-why-frameworks-make-sense-for-infrastructure-as-code-1epl</guid>
      <description>&lt;p&gt;In this post I hope to provide a few reasons why frameworks make sense for infrastructure as code, Having tried out &lt;a href="https://www.kubestack.com/"&gt;Kubestack&lt;/a&gt; this got me thinking, Should more of these frameworks exist? And do they provide the same value as a traditional web framework like Flask, Express.js, or Ruby on Rails. Now you might be thinking don't tools like &lt;a href="http://terraform.io/"&gt;Terraform&lt;/a&gt;, &lt;a href="https://ansible.com"&gt;Ansible&lt;/a&gt;, &lt;a href="https://chef.io"&gt;Chef&lt;/a&gt; already exists?&lt;/p&gt;

&lt;p&gt;Well yes, but these technologies themselves are just tools and not necessarily frameworks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to basics
&lt;/h2&gt;

&lt;p&gt;So what exactly is a framework in the context of software development? A quick Google search gives you an array of answers; here are two answers/analogies I really like. &lt;/p&gt;

&lt;p&gt;From &lt;a href="https://djangostars.com/blog/what-is-a-web-framework/"&gt;djangostars.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A web framework is a software tool that provides a way to build and run web applications. As a result, you don’t need to write code on your own and waste time looking for possible miscalculations and bugs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From &lt;a href="https://hashnode.com/post/what-are-frameworks-and-libraries-explain-like-im-five-cjecpev1b0a0yiqwup29s7wak"&gt;Hashnode&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DjhO5KpK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://file.coffee/u/QW4gGC4jOnxkAP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DjhO5KpK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://file.coffee/u/QW4gGC4jOnxkAP.png" alt="hashnode"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When selecting a web framework for a project, there are a few things one has to consider.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speed of development &lt;/li&gt;
&lt;li&gt;Maintainability &lt;/li&gt;
&lt;li&gt;Eliminate repetitive code &lt;/li&gt;
&lt;li&gt;Community support / documentation &lt;/li&gt;
&lt;li&gt;Reduce risk&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How is any of this related to infrastructure as code?
&lt;/h3&gt;

&lt;p&gt;To understand how this relates to infrastructure, it's worth taking a look at a typical scenario where you would choose a framework. Say you only want to accept POST requests on &lt;code&gt;/login&lt;/code&gt; in your app. If you were to implement this from scratch, you'd find it far from ideal in most cases, and your implementation could have its own flaws. Virtually every modern web framework ships with its own routing solution, saving you the overhead of implementing this yourself. &lt;/p&gt;
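
&lt;p&gt;As a rough sketch of that overhead, here is a minimal hand-rolled, method-aware router in Python. All names here are illustrative, not from any real project; this is approximately the boilerplate that frameworks like Flask or Express give you out of the box.&lt;/p&gt;

```python
# A minimal hand-rolled, method-aware router, illustrating the boilerplate
# a web framework already ships with. All names are illustrative.
routes = {}

def route(path, methods):
    """Register a handler for a path and the given HTTP methods."""
    def decorator(handler):
        for method in methods:
            routes[(method, path)] = handler
        return handler
    return decorator

@route("/login", methods=["POST"])
def login():
    return "logged in"

def dispatch(method, path):
    """Look up a handler; distinguish unknown paths from wrong methods."""
    handler = routes.get((method, path))
    if handler is not None:
        return handler()
    # Path exists but not for this method: 405 rather than 404.
    if any(p == path for (_, p) in routes):
        return "405 Method Not Allowed"
    return "404 Not Found"
```

&lt;p&gt;Even this toy version has to get method matching and the 404-versus-405 distinction right; a framework handles all of that, plus path parameters and middleware, for free.&lt;/p&gt;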

&lt;p&gt;As I mentioned earlier, the premise for this article was my experience with &lt;a href="https://kubestack.com"&gt;kubestack&lt;/a&gt;, so for the remainder of this article I will be making several references to it. But first, a little bit about Kubestack. &lt;/p&gt;

&lt;p&gt;Kubestack is an infrastructure automation framework built on top of Terraform. Kubestack's primary aim is not to reinvent the wheel but to foster rapid development when using Terraform.&lt;/p&gt;

&lt;p&gt;From &lt;a href="https://kubestack.com"&gt;kubestack.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubestack is the open-source Terraform framework for teams that want to automate infrastructure, not reinvent automation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now the same could be said about building infrastructure. If you were to spin up a Kubernetes cluster using Terraform, there is a generic process of obtaining some boilerplate and customizing it to suit your needs. That isn't exactly a problem, but in a scenario where you have to create more than one cluster you may find yourself repeating the process, and this is where my point about eliminating repetitive code comes in. Move over to something like &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-develop-locally"&gt;Kubestack&lt;/a&gt; and this boilerplate is generated for you; this would be the equivalent of running &lt;code&gt;django-admin startproject xyz&lt;/code&gt; if you are familiar with Django. If that alone isn't convincing, let's do a side-by-side comparison of some reasons to use a regular web development framework and see if they also apply to infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Side-by-Side comparison
&lt;/h2&gt;

&lt;p&gt;For this section I will be looking at the following points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speed of development &lt;/li&gt;
&lt;li&gt;Maintainability &lt;/li&gt;
&lt;li&gt;Reducing repetitive code &lt;/li&gt;
&lt;li&gt;Community support&lt;/li&gt;
&lt;li&gt;Reducing risk&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here I'll discuss how each of these points also applies to infrastructure as code. &lt;/p&gt;

&lt;h3&gt;
  
  
  Speed of development
&lt;/h3&gt;

&lt;p&gt;In general, speed of development is the time it takes to get an idea from your head into a working application; the tooling and frameworks you use can increase or hinder your development speed.&lt;/p&gt;

&lt;p&gt;When creating a Kubernetes cluster, a few things are relatively constant across each cloud (e.g. picking node size, node count, and region). At this point you can already see how a framework could speed up development by abstracting these components whilst leaving room for you to customize as you please.&lt;/p&gt;
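&lt;p&gt;As a rough sketch of that idea (the module name, path, and inputs here are hypothetical, not Kubestack's actual interface), a framework-style abstraction could expose only the knobs that actually vary between environments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical cluster module: the caller sets only the values that
# differ per environment; the provider wiring lives inside the module.
module "cluster" {
  source     = "./modules/cluster"

  region     = "westeurope"
  node_size  = "Standard_A2_v2"
  node_count = 2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;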

&lt;p&gt;This is something &lt;a href="https://kubestack.com"&gt;Kubestack&lt;/a&gt; provides out of the box but I won't go into detail in this article. See &lt;a href="https://www.kubestack.com/framework/documentation/tutorial-develop-locally"&gt;here&lt;/a&gt; for more info.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maintainability
&lt;/h3&gt;

&lt;p&gt;Being able to write code quickly isn't always worth the effort if your app becomes a nightmare to maintain. Maintainability in software development refers to the process of writing code that is easy to understand, fix and extend.&lt;/p&gt;

&lt;p&gt;"&lt;em&gt;Good programmers write code that humans can understand.&lt;/em&gt;" - &lt;strong&gt;Martin Fowler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I've seen a fair amount of projects become a nightmare from a lack of structure. Frameworks like Django or Ruby on Rails provide a nice scaffold to build your projects on top of, and they also import relevant modules you are bound to use at some point. This is not to say that frameworks magically make all your problems go away, but in general I think they provide a good stepping stone to making your app maintainable.&lt;/p&gt;

&lt;p&gt;This transfers over to infrastructure as code to a certain extent. When starting a new project using something like Terraform, it's hard to predict how large it will grow, and this can affect how you structure your modules; having a structure with relevant information pre-filled would significantly speed up development. Going back to the inheritance model Kubestack adopts, the chances that you misconfigure a module or copy-paste wrongly are significantly reduced.&lt;/p&gt;

&lt;p&gt;Ansible actually does better in this regard with &lt;code&gt;ansible-galaxy&lt;/code&gt;, which helps you generate a sensible folder structure for your roles. Learn more about ansible-galaxy &lt;a href="https://galaxy.ansible.com/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing repetitive code
&lt;/h3&gt;

&lt;p&gt;Reducing repetitive processes is at the core of both infrastructure as code and software development frameworks, yet in trying to automate I often find the process itself a bit repetitive and error-prone, reusing the same piece of code every so often. Once again, frameworks reduce the effort spent on small things like this, and many ensure developers follow &lt;a href="https://en.wikipedia.org/wiki/Don't_repeat_yourself"&gt;DRY&lt;/a&gt; principles. Terraform uses HCL for configuration, which by itself can become repetitive very quickly; a classic example is provisioning more than one of the same resource.&lt;/p&gt;

&lt;p&gt;You might find yourself copy-pasting code or doing some weird stuff to get around this, whereas when using Kubestack your configuration follows an &lt;a href="https://www.kubestack.com/framework/documentation/inheritance-model"&gt;inheritance model&lt;/a&gt; which helps keep your configuration &lt;a href="https://en.wikipedia.org/wiki/Don't_repeat_yourself"&gt;DRY&lt;/a&gt;: you define things once and simply inherit them in any child module you wish to make use of them in.&lt;/p&gt;
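&lt;p&gt;To illustrate with plain Terraform (this is generic HCL, not Kubestack's inheritance syntax), provisioning several near-identical resources by copy-paste piles up quickly, while the built-in &lt;code&gt;count&lt;/code&gt; meta-argument keeps the configuration DRY:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Repetitive: one hand-edited block per server.
resource "digitalocean_droplet" "web-1" {
  name   = "web-1"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-20-04-x64"
}

# DRY: one block stamped out three times with count.
resource "digitalocean_droplet" "web" {
  count  = 3
  name   = "web-${count.index}"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-20-04-x64"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;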

&lt;h3&gt;
  
  
  Community support / documentation
&lt;/h3&gt;

&lt;p&gt;This is perhaps one of the biggest advantages of using a framework: for every error you run into while using a framework, there is a good chance a fellow user has experienced it, and if for some reason they haven't, this could be an opportunity to get involved with the project if you so choose. I can't imagine learning how to use Flask without the thousands of StackOverflow posts and issues on GitHub. No doubt a healthy community fosters the growth of any language or framework, and I think the same holds true for infrastructure as code: if there were more people using IaC tools with a framework approach, a lot more questions would be asked, ultimately leading to growth.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing risk
&lt;/h3&gt;

&lt;p&gt;When using IaC tools it's vital that your code is tested properly, as the slightest misconfiguration tends to have a significant impact on your infrastructure. When using a framework, things are less likely to break because most features undergo a certain level of testing before they are accepted; move over to the IaC world and most of the testing is left to you. A typical example is an incident that happened at Spotify, with more details on that &lt;a href="https://www.youtube.com/watch?v=ix0Tw8uinWs"&gt;here&lt;/a&gt;. One common problem Terraform users often encounter is what I am calling "state drift" (because I don't know what it's called), where your Terraform state and code do not match, leading to some undesirable results; this was the case with Spotify. Going back to Kubestack: when using it, the remote state is automatically set up for you, which means you can focus on developing your infrastructure rather than worrying about your state files. This can be thought of like how Django provides a simple interface for setting up connections to different types of databases. Of all the points so far this is the most critical, because bad infrastructure directly affects end-users.&lt;/p&gt;
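&lt;p&gt;For contrast, in plain Terraform the remote state backend has to be wired up by hand before state stops living on someone's laptop. A minimal sketch using the &lt;code&gt;azurerm&lt;/code&gt; backend (every name here is a placeholder, and the storage resources must already exist) looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Store state in an Azure storage account instead of a local file,
# so the whole team shares one source of truth.
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;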

&lt;h3&gt;
  
  
  So what's the point of this article?
&lt;/h3&gt;

&lt;p&gt;This article is not meant to advertise the use of frameworks in every project; in fact, avoid them for as long as it makes sense. It is rather a subtle call to action, and hopefully it convinces you of the benefits a framework could provide. I think it's time DevOps tooling evolved in the framework direction.&lt;/p&gt;

&lt;p&gt;If you'd like to learn more about Kubestack you can check that out over &lt;a href="https://www.kubestack.com"&gt;here&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubestack</category>
      <category>devops</category>
      <category>terraform</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Provisioning an azure kubernetes cluster with Terraform</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Sat, 27 Mar 2021 07:56:16 +0000</pubDate>
      <link>https://dev.to/s1ntaxe770r/provisioning-an-azure-kubernetes-cluster-with-terraform-171o</link>
      <guid>https://dev.to/s1ntaxe770r/provisioning-an-azure-kubernetes-cluster-with-terraform-171o</guid>
      <description>&lt;p&gt;In this post you will learn how to set up an Azure Kubernetes cluster using T&lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;erraform&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;NOTE: This article assumes some basic knowledge of cloud concepts and the &lt;a href="https://azure.microsoft.com" rel="noopener noreferrer"&gt;Microsoft Azure platform&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform ??
&lt;/h2&gt;

&lt;p&gt;Terraform is an Infrastructure as code tool that allows developers and operations teams to automate how they provision their infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why write more code for my infrastructure ?
&lt;/h2&gt;

&lt;p&gt;If you are new to Infrastructure as Code it could seem like an extra step, when you could just click a few buttons on your cloud provider of choice's dashboard and be on your way. But IaC (Infrastructure as Code) offers quite a few advantages.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Because your infrastructure is now represented as code it is testable&lt;/li&gt;
&lt;li&gt;Your environments are now very much reproducible&lt;/li&gt;
&lt;li&gt;You can now track changes to your infrastructure over time with a version control system like Git&lt;/li&gt;
&lt;li&gt;Deployments are faster, because you interact with the cloud provider less.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before diving into Terraform you need a brief understanding of the HashiCorp Configuration Language (HCL).&lt;/p&gt;

&lt;h3&gt;
  
  
  HCL ?
&lt;/h3&gt;

&lt;p&gt;Yes, Terraform uses its own configuration language. This may seem daunting at first, but it's quite easy to pick up. Here's a quick peek at what it looks like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;azurerm_resource_group&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;resource-group&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;staging-resource-group&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="nx"&gt;location&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;West Europe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your infrastructure in Terraform is represented as "resources"; everything from networking to databases or virtual machines is a resource.&lt;/p&gt;

&lt;p&gt;This is exactly what the &lt;code&gt;resource&lt;/code&gt; block represents. Here we are creating an &lt;code&gt;azurerm_resource_group&lt;/code&gt;, which, as the name implies, is a resource group. Resource groups are how you organize resources together; a typical use case would be putting all the servers for a single project under the same resource group.&lt;/p&gt;

&lt;p&gt;Next we give the resource block a name; think of this as a variable name we can use throughout the Terraform file. Within the resource block we give our resource group a name, which is the name that will be given to the resource group in Azure. Finally, we give a &lt;code&gt;location&lt;/code&gt; where we want the resource group to be deployed.&lt;/p&gt;

&lt;p&gt;If you are coming from something like Ansible you might notice how different Terraform's approach to configuration is. This is because Terraform uses what's known as a declarative style of configuration: simply put, you declare the state you want your infrastructure to be in, not the steps needed to achieve that state. You can learn more about declarative and imperative configuration &lt;a href="https://pantheon.tech/what-is-declarative-imperative/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
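&lt;p&gt;To make the distinction concrete, the declarative block below describes only the desired end result, while the imperative equivalent (shown as a comment, using an illustrative Azure CLI command) spells out the step to perform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Imperative: run a command that performs the action, e.g.
#   az group create --name staging-resource-group --location westeurope
#
# Declarative: state the result; Terraform figures out the steps.
resource "azurerm_resource_group" "resource-group" {
  name     = "staging-resource-group"
  location = "West Europe"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;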

&lt;p&gt;Now that you have an idea of what Terraform configuration looks like, let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project setup
&lt;/h2&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;a href="https://azure.microsoft.com" rel="noopener noreferrer"&gt;Azure account&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Git&lt;/li&gt;
&lt;li&gt;Azure &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli" rel="noopener noreferrer"&gt;CLI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.terraform.io/downloads.html" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; CLI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have all of that set up, log in to your Azure account through the command line using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;az login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next clone the sample project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/s1ntaxe770r/aks-terraform-demo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we begin we need to run &lt;code&gt;terraform init&lt;/code&gt;. This will download any plugins that the Azure provider depends on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Taking a quick look at our folder structure you should have something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── main.tf
├── modules
│   └── cluster
│       ├── cluster.tf
│       └── variables.tf
├── README.md
└── variables.tf

2 directories, 6 files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting from the top, let's look at &lt;code&gt;main.tf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#main.tf&lt;/span&gt;

terraform &lt;span class="o"&gt;{&lt;/span&gt;
  required_providers &lt;span class="o"&gt;{&lt;/span&gt;
    azurerm &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="nb"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/azurerm"&lt;/span&gt;
      version &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2.39.0"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
provider &lt;span class="s2"&gt;"azurerm"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  features &lt;span class="o"&gt;{}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

module &lt;span class="s2"&gt;"cluster"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;source&lt;/span&gt;                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/cluster"&lt;/span&gt;
  ssh_key               &lt;span class="o"&gt;=&lt;/span&gt; var.ssh_key
  location              &lt;span class="o"&gt;=&lt;/span&gt; var.location
  kubernetes_version    &lt;span class="o"&gt;=&lt;/span&gt; var.kubernetes_version

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First we declare which provider we are using; this is how Terraform knows what cloud platform we intend to use, and it could be Google Cloud, AWS, or any other provider they support. You can learn more about Terraform providers &lt;a href="https://www.terraform.io/docs/language/providers/index.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It's also important to note that each provider block is usually listed in the documentation, so you don't need to write it out from memory each time.&lt;/p&gt;

&lt;p&gt;Next we define a &lt;code&gt;module&lt;/code&gt; block and pass it the folder where our module is located and a few variables.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modules??
&lt;/h3&gt;

&lt;p&gt;Modules in Terraform are a way to separate your configuration so that each module handles a specific task. Sure, we could just dump all of our configuration in &lt;code&gt;main.tf&lt;/code&gt;, but that makes things clunky and less portable.&lt;/p&gt;

&lt;p&gt;Now let's take a look at the cluster folder in the modules directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;modules/cluster
├── cluster.tf
└── variables.tf

0 directories, 2 files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's take a look at &lt;code&gt;cluster.tf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#modules/cluster/cluster.tf&lt;/span&gt;

resource &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"aks-resource"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes-resource-group"&lt;/span&gt;
    location &lt;span class="o"&gt;=&lt;/span&gt; var.location
&lt;span class="o"&gt;}&lt;/span&gt;

resource &lt;span class="s2"&gt;"azurerm_kubernetes_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"aks-cluster"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;**&lt;/span&gt;name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-cluster"&lt;/span&gt;
    location &lt;span class="o"&gt;=&lt;/span&gt; azurerm_resource_group.aks-resource.location
    resource_group_name &lt;span class="o"&gt;=&lt;/span&gt; azurerm_resource_group.aks-resource.name
    dns_prefix &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-cluster"&lt;/span&gt;
    kubernetes_version &lt;span class="o"&gt;=&lt;/span&gt; var.kubernetes_version

    default_node_pool &lt;span class="o"&gt;{&lt;/span&gt;
      name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
      node_count &lt;span class="o"&gt;=&lt;/span&gt; 2
      vm_size &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_A2_v2"&lt;/span&gt;
      &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"VirtualMachineScaleSets"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;


  identity &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SystemAssigned"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

    linux_profile &lt;span class="o"&gt;{&lt;/span&gt;
        admin_username &lt;span class="o"&gt;=&lt;/span&gt; var.admin_user
        ssh_key &lt;span class="o"&gt;{&lt;/span&gt;
            key_data &lt;span class="o"&gt;=&lt;/span&gt; var.ssh_key
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    network_profile &lt;span class="o"&gt;{&lt;/span&gt;
      network_plugin &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubenet"&lt;/span&gt;
      load_balancer_sku &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first part of the configuration we define a resource group for our cluster, cleverly name it "kubernetes-resource-group", and give it a location that comes from a variable defined in &lt;code&gt;variables.tf&lt;/code&gt;. The next part is the actual spec of our Kubernetes cluster. First we tell Terraform we want an Azure Kubernetes cluster using &lt;code&gt;resource "azurerm_kubernetes_cluster"&lt;/code&gt;, then we give our cluster a name, a location, and a resource group. We can reuse the location of the resource group we defined earlier by using its reference name &lt;code&gt;aks-resource&lt;/code&gt; plus the value we want; in this case it's the location, so we use &lt;code&gt;azurerm_resource_group.aks-resource.location&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;There are two more blocks we need to pay attention to: &lt;code&gt;default_node_pool&lt;/code&gt; and &lt;code&gt;linux_profile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;default_node_pool&lt;/code&gt; block lets us define how many nodes we want to run and what type of virtual machines to run them on. It's important to pick the right size for your nodes, as this affects both cost and performance; you can take a look at the VM sizes Azure offers and their use cases over &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/sizes" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;code&gt;node_count&lt;/code&gt; tells Terraform how many nodes we want our cluster to have. Next we define the VM size; here I'm using an &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-machines/av2-series" rel="noopener noreferrer"&gt;A-series&lt;/a&gt; VM with 4 GB of RAM and two CPU cores. Lastly, we give it a type of "VirtualMachineScaleSets", which lets Azure manage the nodes as a group of auto-scaling VMs.&lt;/p&gt;

&lt;p&gt;The last block we need to look at is &lt;code&gt;linux_profile&lt;/code&gt;. This creates a user we can use to SSH into one of our nodes in case something goes wrong. Here we simply fill in the block from variables.&lt;/p&gt;

&lt;p&gt;I intentionally didn't go over all the blocks because most times you don't need to change them and if you do the documentation is quite easy to go through.&lt;/p&gt;

&lt;p&gt;Finally, let's take a look at &lt;code&gt;variables.tf&lt;/code&gt;. As you might have guessed, this is where we define all the variables we referenced earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#variables.tf&lt;/span&gt;

variable &lt;span class="s2"&gt;"location"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
    description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"resource location"&lt;/span&gt;
    default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"East US"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"kubernetes_version"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"k8's version"&lt;/span&gt;
  default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.19.6"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"admin_user"&lt;/span&gt;&lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; string
  description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"username for linux_profile"&lt;/span&gt;
  default &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"enderdragon"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

variable &lt;span class="s2"&gt;"ssh_key"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
   description &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssh_key for admin_user"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To define a variable we use the &lt;code&gt;variable&lt;/code&gt; keyword and give it a name; within the curly braces we define its type (in this case a string), an optional description, and a default value, which is also optional.&lt;/p&gt;

&lt;p&gt;Now we are almost ready to create our cluster, but first we need to generate an SSH key; if you remember, we created a variable for it earlier. If you already have an SSH key pair you can skip this step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; rsa &lt;span class="nt"&gt;-b&lt;/span&gt; 4096
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can leave everything as default by pressing enter. Next we export the public key into an environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TF_VAR_ssh_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; &lt;span class="nb"&gt;cat&lt;/span&gt; ~/.ssh/id_rsa.pub&lt;span class="si"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the &lt;code&gt;TF_VAR&lt;/code&gt; prefix before the actual variable name. This is how Terraform knows the environment variable is meant for it and can map it to an input variable. You should also note that the part after the prefix must correspond to the variable name in &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Before we actually create our infrastructure, it's always a good idea to see exactly what Terraform will be creating. Luckily, Terraform has a command for that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look something like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  &lt;span class="c"&gt;# module.cluster.azurerm_kubernetes_cluster.aks-cluster will be created&lt;/span&gt;
  + resource &lt;span class="s2"&gt;"azurerm_kubernetes_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"aks-cluster"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      + dns_prefix              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-cluster"&lt;/span&gt;
      + fqdn                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + &lt;span class="nb"&gt;id&lt;/span&gt;                      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + kube_admin_config       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + kube_admin_config_raw   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;sensitive value&lt;span class="o"&gt;)&lt;/span&gt;
      + kube_config             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + kube_config_raw         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;sensitive value&lt;span class="o"&gt;)&lt;/span&gt;
      + kubelet_identity        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + kubernetes_version      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.19.1"&lt;/span&gt;
      + location                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eastus"&lt;/span&gt;
      + name                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform-cluster"&lt;/span&gt;
      + node_resource_group     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + private_cluster_enabled &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + private_fqdn            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + private_link_enabled    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + resource_group_name     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes-resource-group"&lt;/span&gt;
      + sku_tier                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Free"&lt;/span&gt;

      + addon_profile &lt;span class="o"&gt;{&lt;/span&gt;
          + aci_connector_linux &lt;span class="o"&gt;{&lt;/span&gt;
              + enabled     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + subnet_name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

          + azure_policy &lt;span class="o"&gt;{&lt;/span&gt;
              + enabled &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

          + http_application_routing &lt;span class="o"&gt;{&lt;/span&gt;
              + enabled                            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + http_application_routing_zone_name &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

          + kube_dashboard &lt;span class="o"&gt;{&lt;/span&gt;
              + enabled &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;

          + oms_agent &lt;span class="o"&gt;{&lt;/span&gt;
              + enabled                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + log_analytics_workspace_id &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + oms_agent_identity         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + auto_scaler_profile &lt;span class="o"&gt;{&lt;/span&gt;
          + balance_similar_node_groups      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + max_graceful_termination_sec     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scale_down_delay_after_add       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scale_down_delay_after_delete    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scale_down_delay_after_failure   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scale_down_unneeded              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scale_down_unready               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scale_down_utilization_threshold &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + scan_interval                    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + default_node_pool &lt;span class="o"&gt;{&lt;/span&gt;
          + max_pods             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + name                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;
          + node_count           &lt;span class="o"&gt;=&lt;/span&gt; 2
          + orchestrator_version &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + os_disk_size_gb      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + os_disk_type         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Managed"&lt;/span&gt;
          + &lt;span class="nb"&gt;type&lt;/span&gt;                 &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"VirtualMachineScaleSets"&lt;/span&gt;
          + vm_size              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard_A2_v2"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + identity &lt;span class="o"&gt;{&lt;/span&gt;
          + principal_id &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + tenant_id    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + &lt;span class="nb"&gt;type&lt;/span&gt;         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SystemAssigned"&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + linux_profile &lt;span class="o"&gt;{&lt;/span&gt;
          + admin_username &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"enderdragon"&lt;/span&gt;

          + ssh_key &lt;span class="o"&gt;{&lt;/span&gt;
              + key_data &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jsdksdnjcdkcdomocadcadpadmoOSNSINCDOICECDCWCdacwdcwcwccdscdfvevtbrbrtbevF
CDSCSASACDCDACDCDCdsdsacdq&lt;/span&gt;&lt;span class="nv"&gt;$q&lt;/span&gt;&lt;span class="s2"&gt;@#qfesad== you@probablyyourdesktop"&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + network_profile &lt;span class="o"&gt;{&lt;/span&gt;
          + dns_service_ip     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + docker_bridge_cidr &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + load_balancer_sku  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Standard"&lt;/span&gt;
          + network_plugin     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubenet"&lt;/span&gt;
          + network_policy     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + outbound_type      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"loadBalancer"&lt;/span&gt;
          + pod_cidr           &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
          + service_cidr       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;

          + load_balancer_profile &lt;span class="o"&gt;{&lt;/span&gt;
              + effective_outbound_ips    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + idle_timeout_in_minutes   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + managed_outbound_ip_count &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + outbound_ip_address_ids   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + outbound_ip_prefix_ids    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + outbound_ports_allocated  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + role_based_access_control &lt;span class="o"&gt;{&lt;/span&gt;
          + enabled &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;

          + azure_active_directory &lt;span class="o"&gt;{&lt;/span&gt;
              + admin_group_object_ids &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + client_app_id          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + managed                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + server_app_id          &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
              + server_app_secret      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;sensitive value&lt;span class="o"&gt;)&lt;/span&gt;
              + tenant_id              &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

      + windows_profile &lt;span class="o"&gt;{&lt;/span&gt;
          + admin_password &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;sensitive value&lt;span class="o"&gt;)&lt;/span&gt;
          + admin_username &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="c"&gt;# module.cluster.azurerm_resource_group.aks-resource will be created&lt;/span&gt;
  + resource &lt;span class="s2"&gt;"azurerm_resource_group"&lt;/span&gt; &lt;span class="s2"&gt;"aks-resource"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      + &lt;span class="nb"&gt;id&lt;/span&gt;       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;known after apply&lt;span class="o"&gt;)&lt;/span&gt;
      + location &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"eastus"&lt;/span&gt;
      + name     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes-resource-group"&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

Plan: 2 to add, 0 to change, 0 to destroy.

&lt;span class="nt"&gt;------------------------------------------------------------------------&lt;/span&gt;

Note: You didn&lt;span class="s1"&gt;'t specify an "-out" parameter to save this plan, so Terraform
can'&lt;/span&gt;t guarantee that exactly these actions will be performed &lt;span class="k"&gt;if&lt;/span&gt;
&lt;span class="s2"&gt;"terraform apply"&lt;/span&gt; is subsequently run.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
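&lt;p&gt;The note at the bottom of that plan matters in practice: without &lt;code&gt;-out&lt;/code&gt;, Terraform cannot guarantee that &lt;code&gt;apply&lt;/code&gt; performs exactly the actions shown. A minimal sketch of the saved-plan workflow (the file name &lt;code&gt;tfplan&lt;/code&gt; is just a convention):&lt;/p&gt;

```shell
# Save the plan to a file, review it, then apply exactly those actions.
terraform plan -out=tfplan   # writes the plan to ./tfplan
terraform show tfplan        # inspect the saved plan before applying
terraform apply tfplan       # applies only the actions recorded in tfplan
```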



&lt;p&gt;If everything looks good, we can apply our configuration using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform will prompt you one last time to confirm you want to proceed. Enter yes and watch the magic happen. Once the resources have been provisioned, head over to your Azure dashboard and take a look. You should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloud-5e3kgz3ft-hack-club-bot.vercel.app%2F0untitled.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloud-5e3kgz3ft-hack-club-bot.vercel.app%2F0untitled.png" alt="cluster specs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, Terraform configured everything we needed to spin up a cluster, and we didn't have to specify every detail ourselves. Click on terraform-cluster and let's make sure everything looks good.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloud-466i1k5d7-hack-club-bot.vercel.app%2F0screenshot_2021-02-04_231739.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloud-466i1k5d7-hack-club-bot.vercel.app%2F0screenshot_2021-02-04_231739.png" alt="cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And there you have it: we deployed a Kubernetes cluster with our desired specifications, and Terraform did all the heavy lifting.&lt;/p&gt;

&lt;p&gt;Once you are done, it's as easy as running &lt;code&gt;terraform destroy&lt;/code&gt; to tear down all the resources you have just provisioned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick recap
&lt;/h2&gt;

&lt;p&gt;You learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why infrastructure as code is important&lt;/li&gt;
&lt;li&gt;The basics of HCL (HashiCorp Configuration Language)&lt;/li&gt;
&lt;li&gt;How to provision a Kubernetes cluster with Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are wondering where to go from here, here are some things you can try.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here we authenticated through the Azure CLI, but that's not ideal for automation. Instead you might want to use a service principal with more narrowly scoped permissions. Check that out over &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You should never store your state file in version control, as it might contain sensitive information. Instead you can put it in an &lt;a href="https://docs.microsoft.com/en-us/azure/developer/terraform/store-state-in-azure-storage" rel="noopener noreferrer"&gt;Azure blob store&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;There are better ways to pass variables to Terraform, which I did not cover here, but &lt;a href="https://www.terraform.io/docs/language/values/variables.html" rel="noopener noreferrer"&gt;this&lt;/a&gt; post on the Terraform website should walk you through it nicely.&lt;/li&gt;
&lt;li&gt;Finally, this article couldn't possibly cover all there is to Terraform, so I highly suggest taking a look at the Terraform &lt;a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;; it has some boilerplate configuration to get you started provisioning resources.&lt;/li&gt;
&lt;/ul&gt;
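&lt;p&gt;For the service-principal route, the azurerm provider can read its credentials from environment variables, so nothing sensitive needs to live in your &lt;code&gt;.tf&lt;/code&gt; files. A quick sketch (the IDs below are placeholders; substitute the values from your own service principal):&lt;/p&gt;

```shell
# Placeholder credentials for an Azure service principal; the ARM_* names
# are the environment variables the azurerm provider reads.
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="replace-with-your-client-secret"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
```

&lt;p&gt;With these set, &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt; can authenticate without the Azure CLI.&lt;/p&gt;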

&lt;p&gt;All the code samples used in this tutorial can be found &lt;a href="https://github.com/s1ntaxe770r/aks-terraform-demo" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>azure</category>
    </item>
    <item>
      <title>Automating python deployment environments setup with ansible</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Wed, 26 Aug 2020 13:18:00 +0000</pubDate>
      <link>https://dev.to/s1ntaxe770r/automating-python-deployment-environments-setup-with-ansible-1j85</link>
      <guid>https://dev.to/s1ntaxe770r/automating-python-deployment-environments-setup-with-ansible-1j85</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h4&gt;
  
  
  To follow along with this post, you will need the following installed on your machine:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-ubuntu-18-04"&gt;ansible&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;python3 (on your local machine and remote server).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post I will show you how to automate creating Python deployment environments using Ansible.&lt;/p&gt;

&lt;h3&gt;
  
  
  But setting this up manually isn't a problem, is it?
&lt;/h3&gt;

&lt;p&gt;Say you have a web application that uses the Flask framework. Deploying it to a single server manually wouldn't be a problem, but scale this up to more than three applications with only slight variations in dependencies and things start to get pretty repetitive. This is where Ansible comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's this Ansible thing anyway?
&lt;/h2&gt;

&lt;p&gt;Ansible is an open source automation tool that can be used for configuration management, application deployment, and infrastructure provisioning, to mention a few.&lt;/p&gt;

&lt;h2&gt;
  
  
  OK, let's do it then!
&lt;/h2&gt;

&lt;p&gt;Start by creating the following folder structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Directory_Name
├── ansible.cfg
├── inventory
├── requirements.txt
└── playbook.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's start with &lt;code&gt;ansible.cfg&lt;/code&gt;. This is where we set the configuration values we want Ansible to use when it runs.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ansible.cfg&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[defaults]
inventory = ./inventory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above simply tells ansible to use the inventory file in the current directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's an inventory file?
&lt;/h3&gt;

&lt;p&gt;Ansible uses this to determine which machines you would like to run your playbooks against.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;inventory&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[deployment server]&lt;/span&gt;
&lt;span class="err"&gt;127.0.0.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within the square brackets I specified the name I will use to refer to a group of servers. Here I have just one, but in practice &lt;code&gt;127.0.0.1&lt;/code&gt; would be the IP address of the machine you want to run the playbooks against. If you need to create a new group, simply add another pair of square brackets and list the IP addresses of the target machines below it, as above. If you would like to run this against your own machine, replace &lt;code&gt;127.0.0.1&lt;/code&gt; with &lt;code&gt;127.0.0.1 ansible_connection=local&lt;/code&gt;, which is what I did.&lt;/p&gt;
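&lt;p&gt;For example, a hypothetical inventory with two groups might look like this (the group names and addresses are made up):&lt;/p&gt;

```ini
[staging]
127.0.0.1 ansible_connection=local

[production]
203.0.113.10
203.0.113.11
```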

&lt;h3&gt;
  
  
  Playbooks
&lt;/h3&gt;

&lt;p&gt;Playbooks in Ansible are where you define the tasks you want to execute on the remote machine. Playbooks are written in YAML, a human-readable data serialization language.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;playbook.yml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;gather_facts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install dependencies from requirements.txt&lt;/span&gt;
      &lt;span class="na"&gt;pip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;requirements&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;requirements.txt&lt;/span&gt;
        &lt;span class="na"&gt;chdir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
        &lt;span class="na"&gt;executable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip3&lt;/span&gt;


    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;additional dependency&lt;/span&gt;
        &lt;span class="s"&gt;pip&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gunicorn&lt;/span&gt;
          &lt;span class="na"&gt;executable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pip3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use &lt;code&gt;hosts&lt;/code&gt; to indicate which group we want to run the playbook against. Here I used &lt;code&gt;all&lt;/code&gt; to run against every IP address in my inventory; to run against a single group, replace &lt;code&gt;all&lt;/code&gt; with the name you placed within square brackets in the &lt;code&gt;inventory&lt;/code&gt; file. Ansible uses &lt;code&gt;gather_facts&lt;/code&gt; to collect information about the target.&lt;/p&gt;
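&lt;p&gt;To illustrate, a hypothetical playbook header targeting only a group named &lt;code&gt;webservers&lt;/code&gt; from the inventory would differ only in the &lt;code&gt;hosts&lt;/code&gt; line:&lt;/p&gt;

```yaml
---
- hosts: webservers    # the group name from the inventory's square brackets
  gather_facts: true
  tasks:
    - name: check connectivity
      ping:
```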

&lt;p&gt;&lt;code&gt;tasks&lt;/code&gt; is where we define what we actually want to do. Each task has a &lt;code&gt;name&lt;/code&gt;, which makes playbooks more readable and is also used in the output to indicate which task Ansible is currently executing. &lt;code&gt;pip&lt;/code&gt; is the Ansible &lt;a href="https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html"&gt;module&lt;/a&gt; we are using. When installing from a &lt;code&gt;requirements.txt&lt;/code&gt;, I used &lt;code&gt;chdir&lt;/code&gt; to tell Ansible what path the &lt;code&gt;requirements.txt&lt;/code&gt; is located in; in the second task I simply pass the &lt;code&gt;name&lt;/code&gt; of the package I wish to install. You might want to install dependencies from a &lt;code&gt;requirements.txt&lt;/code&gt; if you need specific versions of your dependencies; check out the Ansible pip &lt;a href="https://docs.ansible.com/ansible/latest/modules/pip_module.html"&gt;docs&lt;/a&gt; for more info. Now run &lt;code&gt;ansible-playbook playbook.yml&lt;/code&gt;. Your output should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ok: [127.0.0.1]

TASK [install dependencies requirements.txt] ***********************************
changed: [127.0.0.1]

TASK [additional dependency] ***************************************************
ok: [127.0.0.1]

PLAY RECAP *********************************************************************
127.0.0.1                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;PLAY RECAP&lt;/code&gt; is a summary of the tasks that were executed. &lt;code&gt;ok&lt;/code&gt; tells you the number of tasks that ran successfully, and &lt;code&gt;changed&lt;/code&gt; shows how many tasks changed the state of the target machine, in this case 1 because I already had some of the dependencies installed. The rest are pretty self-explanatory, so I won't dive into them.&lt;/p&gt;
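&lt;p&gt;Those counters are also easy to consume programmatically, for instance in a CI step that should fail when anything failed or was unreachable. A small Python sketch parsing the recap line shown above:&lt;/p&gt;

```python
import re

# The host line from the PLAY RECAP output above.
RECAP = ("127.0.0.1                  : ok=3    changed=1    unreachable=0    "
         "failed=0    skipped=0    rescued=0    ignored=0")

def parse_recap(line):
    """Extract the counter=value pairs from a PLAY RECAP host line."""
    return {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", line)}

counts = parse_recap(RECAP)
# Fail loudly if any task failed or a host was unreachable.
assert counts["failed"] == 0 and counts["unreachable"] == 0
```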

&lt;p&gt;If you would like to take a look at the source files for this post you can find them in this &lt;a href="https://github.com/s1ntaxe770r/ansible-demo.git"&gt;repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're wondering where to go from here, check out &lt;a href="https://spacelift.io/blog/ansible-playbooks"&gt;this post&lt;/a&gt;, which goes into more detail with Ansible tips and tricks.&lt;/p&gt;

&lt;h4&gt;
  
  
  In a future post I will show you how to deploy a simple web app using Ansible. If you have any questions, feel free to reach me on &lt;a href="https://twitter.com/s1ntaxe770r"&gt;Twitter&lt;/a&gt; or create an issue in the repo.
&lt;/h4&gt;

</description>
      <category>ansible</category>
      <category>devops</category>
      <category>python</category>
    </item>
    <item>
      <title>How to setup an ssh server within a docker container</title>
      <dc:creator>s1ntaxe770r</dc:creator>
      <pubDate>Tue, 26 May 2020 05:09:13 +0000</pubDate>
      <link>https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i</link>
      <guid>https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i</guid>
      <description>&lt;p&gt;In this post I will walk you through my process of setting up ssh access to your docker container. &lt;/p&gt;

&lt;h3&gt;
  
  
  Why run an ssh server within a container in the first place?
&lt;/h3&gt;

&lt;p&gt;The major reason you might want to do this is testing: perhaps you are testing infrastructure automation or provisioning with something like Ansible, which requires SSH access to the target machine, and you'd want to test that in a safe environment before going live.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This article assumes you have Docker installed on your machine; if not, you can get it installed &lt;a href="https://docs.docker.com/get-docker/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Dockerfile!
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:latest&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install  &lt;/span&gt;openssh-server &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;useradd &lt;span class="nt"&gt;-rm&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /home/ubuntu &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/bash &lt;span class="nt"&gt;-g&lt;/span&gt; root &lt;span class="nt"&gt;-G&lt;/span&gt; &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; 1000 &lt;span class="nb"&gt;test&lt;/span&gt; 

&lt;span class="k"&gt;RUN  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'test:test'&lt;/span&gt; | chpasswd

&lt;span class="k"&gt;RUN &lt;/span&gt;service ssh start

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 22&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/usr/sbin/sshd","-D"]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I am using Ubuntu as the base image for the container; then on line 2 I install the OpenSSH server and sudo.&lt;/p&gt;

&lt;h4&gt;
  
  
  Sudo?
&lt;/h4&gt;

&lt;p&gt;By default the Ubuntu image does not have sudo installed, hence the need to install it along with the OpenSSH server.&lt;/p&gt;

&lt;p&gt;On line 3 I create a user called test and add it to the sudo group.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo 'test:test' | chpasswd&lt;/code&gt;  sets the password for the user test to test&lt;/p&gt;

&lt;p&gt;Line 5 starts the SSH service (though note that a &lt;code&gt;RUN&lt;/code&gt; step only takes effect during the build; the service does not remain running in the final container), line 6 tells Docker the container listens on port 22 (which is the default for SSH), and finally the &lt;code&gt;CMD&lt;/code&gt; starts the SSH daemon in the foreground.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building the image
&lt;/h3&gt;

&lt;p&gt;To build the image, run &lt;code&gt;docker build -t IMAGE_NAME .&lt;/code&gt;. Once that's done, you can run the image using &lt;code&gt;docker run -p 22:22 IMAGE_NAME&lt;/code&gt; (note that &lt;code&gt;-p&lt;/code&gt; must come before the image name, or Docker will pass it to the container as an argument). Finally, you can connect to the container using the user you created, in this case &lt;code&gt;test&lt;/code&gt;, so &lt;code&gt;ssh test@ip_address&lt;/code&gt;. Enter your password at the prompt and you're all set up.&lt;/p&gt;
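&lt;p&gt;Putting it together, a quick end-to-end sketch (the image name &lt;code&gt;ssh-demo&lt;/code&gt; is arbitrary; mapping host port 2222 avoids clashing with an SSH server already running on your machine):&lt;/p&gt;

```shell
# Build the image from the Dockerfile above, run it detached,
# and connect over the mapped port. "ssh-demo" is a made-up tag.
docker build -t ssh-demo .
docker run -d -p 2222:22 ssh-demo
ssh -p 2222 test@localhost    # password: test
```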

&lt;p&gt;The original Dockerfile can be found on my GitHub &lt;a href="https://github.com/s1ntaxe770r/SSH-dockerfile"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
