<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: chrisedrego</title>
    <description>The latest articles on DEV Community by chrisedrego (@chrisedrego).</description>
    <link>https://dev.to/chrisedrego</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F639382%2Fe6d49979-18dc-45ea-8cda-c4e0d3f4f4c4.png</url>
      <title>DEV Community: chrisedrego</title>
      <link>https://dev.to/chrisedrego</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chrisedrego"/>
    <language>en</language>
    <item>
      <title>4 easy steps to set up AWS WorkSpaces (Screenshots included)</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Thu, 01 Jul 2021 18:55:36 +0000</pubDate>
      <link>https://dev.to/chrisedrego/4-easy-steps-to-setup-aws-workspaces-screenshot-s-included-1fgg</link>
      <guid>https://dev.to/chrisedrego/4-easy-steps-to-setup-aws-workspaces-screenshot-s-included-1fgg</guid>
      <description>&lt;p&gt;A simple 4 steps guide to set up AWS WorkSpaces.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS WorkSpaces?
&lt;/h2&gt;

&lt;p&gt;AWS WorkSpaces is a desktop-as-a-service offering from AWS. Users can connect from PC or Mac desktop computers by downloading a client, or by using the web client. AWS WorkSpaces is available only in specific regions. It supports Amazon Linux and Windows 10 bundles with pre-installed software packages, and a Free Tier is available as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  4 Steps to setup AWS WorkSpaces
&lt;/h2&gt;

&lt;p&gt;Below are the steps to set up AWS WorkSpaces.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Head over to Amazon WorkSpaces.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Log in to your AWS account and head over to the AWS &lt;a href="https://console.aws.amazon.com/workspaces" rel="noopener noreferrer"&gt;WorkSpaces&lt;/a&gt; console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2488%2F1%2AyEfGyz8t-86A4LZaYFsRWg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2488%2F1%2AyEfGyz8t-86A4LZaYFsRWg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Click on Quick-Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This guide uses Quick Setup, which provisions most of the required resources for you automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2040%2F1%2A2dcY-PQQEsdZjL9lQ6l_Aw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2040%2F1%2A2dcY-PQQEsdZjL9lQ6l_Aw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Select the WorkSpaces Bundle &amp;amp; Provide the user details.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There are a couple of bundles to choose from, each with specific software packages pre-installed. Select one, then provide the username and email address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2932%2F1%2AGyuTlDL6snAADDp2IBepaQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2932%2F1%2AGyuTlDL6snAADDp2IBepaQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Wait for some time.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You’ll receive an email containing a Registration Code (keep it handy); click the WorkSpaces link in the email to set your password.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3040%2F1%2AD4S2lYLE_nqPe7BotWyu2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3040%2F1%2AD4S2lYLE_nqPe7BotWyu2g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4X4auEW0G8Q_jS8VNwRYbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A4X4auEW0G8Q_jS8VNwRYbw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Download &amp;amp; Launch AWS WorkSpaces Client
&lt;/h3&gt;

&lt;p&gt;There are two ways to get the Amazon WorkSpaces client:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Standalone Client → &lt;a href="https://clients.amazonworkspaces.com/" rel="noopener noreferrer"&gt;https://clients.amazonworkspaces.com/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Web Client → &lt;a href="https://clients.amazonworkspaces.com/webclient" rel="noopener noreferrer"&gt;https://clients.amazonworkspaces.com/webclient&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you have downloaded and installed the client, enter the Registration Code you received in the email and log in.&lt;/p&gt;
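As an aside, the WorkSpaces you see in the console can also be listed from the AWS CLI. The sketch below saves a small helper script; `aws workspaces describe-workspaces` is the standard call, but run the script only with the CLI installed and configured for your account.

```shell
# Sketch: save a helper that lists WorkSpace IDs, usernames and state.
# Running it requires an installed, configured AWS CLI.
printf '%s\n' \
  '#!/bin/sh' \
  '# List WorkSpace IDs, usernames and state via the AWS CLI' \
  'aws workspaces describe-workspaces \' \
  '  --query "Workspaces[].{Id:WorkspaceId,User:UserName,State:State}" \' \
  '  --output table' > list-workspaces.sh
chmod +x list-workspaces.sh
cat list-workspaces.sh
```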

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2APbr2P2Lk9LJY0m4f1kP2eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2APbr2P2Lk9LJY0m4f1kP2eg.png" alt="**Boom! ❤**"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h2&gt;
  
  
  “If you found this article useful, feel free to 👏 clap many times or share it with your friends. If you have any doubts regarding the same or anything around the DevOps Space, get in touch with me on &lt;a href="https://www.linkedin.com/in/chrisedrego" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://twitter.com/chrisedrego" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.instagram.com/chrisedrego/" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;.”
&lt;/h2&gt;
&lt;/blockquote&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>sre</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Kubernetes AutoScaling Series: Cluster AutoScaler</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Thu, 03 Jun 2021 19:05:14 +0000</pubDate>
      <link>https://dev.to/chrisedrego/kubernetes-autoscaling-series-cluster-autoscaler-3m8l</link>
      <guid>https://dev.to/chrisedrego/kubernetes-autoscaling-series-cluster-autoscaler-3m8l</guid>
      <description>&lt;p&gt;A Complete Zero-to-Hero Guide to Kubernetes Cluster AutoScaler which allows scaling the number of nodes based on the resource requests and avoids having your pods waiting in the &lt;strong&gt;Pending State.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AutoScaling in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Kubernetes is packed with goodness, one piece of which is scaling. It’s often assumed that Kubernetes comes with &lt;strong&gt;AutoScaling&lt;/strong&gt; by default, but that’s hardly the case; we often need to tweak a few knobs to make things actually work. Today we will discuss how to use &lt;strong&gt;Kubernetes Cluster AutoScaler&lt;/strong&gt; to &lt;strong&gt;scale Kubernetes nodes&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ClusterAutoScaler?
&lt;/h2&gt;

&lt;p&gt;Cluster Autoscaler is an amazing utility that automatically scales the number of nodes up and down based on the resource requests of pods.&lt;/p&gt;

&lt;p&gt;Cluster Autoscaler can be used to scale both the Kubernetes control plane (master nodes) and the data plane (worker nodes, aka minions). For this demo, we will use a self-managed cluster on AWS provisioned using &lt;a href="https://kops.sigs.k8s.io/" rel="noopener noreferrer"&gt;KOPS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In order for the Cluster AutoScaler deployment to authenticate to AWS and scale the number of nodes, there are a couple of options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Attaching the nodes IAM policy with appropriate permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Creating an IAM user with the policy below, storing its credentials in a Kubernetes secret, and attaching the secret to the Cluster AutoScaler deployment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:DescribeTags",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/li&gt;
&lt;/ol&gt;
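For option 2, the IAM user's credentials can be stored in a Kubernetes secret and exposed to the Cluster AutoScaler pod as environment variables. A minimal sketch, assuming hypothetical names (`aws-credentials`, `access-key-id`, `secret-access-key`) and placeholder values:

```shell
# Sketch: create a secret holding the IAM user's credentials (values are
# placeholders), for the Cluster AutoScaler container to consume via env vars.
printf '%s\n' \
  'apiVersion: v1' \
  'kind: Secret' \
  'metadata:' \
  '  name: aws-credentials' \
  '  namespace: kube-system' \
  'type: Opaque' \
  'stringData:' \
  '  access-key-id: REPLACE_ME' \
  '  secret-access-key: REPLACE_ME' > aws-credentials-secret.yaml
cat aws-credentials-secret.yaml
```

In the Cluster AutoScaler deployment, the container would then set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` from this secret via `valueFrom.secretKeyRef`, and the manifest is applied with `kubectl apply -f aws-credentials-secret.yaml`.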

&lt;p&gt;&lt;a href="https://kops.sigs.k8s.io/" rel="noopener noreferrer"&gt;KOPS&lt;/a&gt; is an amazing tool that helps to create/bootstrap the clusters. In the case of AWS, &lt;a href="https://kops.sigs.k8s.io/" rel="noopener noreferrer"&gt;KOPS&lt;/a&gt; provision the nodes in the form of Instance groups which are AutoScaling groups in AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kops.sigs.k8s.io/" rel="noopener noreferrer"&gt;KOPS&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kubernetes Cluster (v1.14.0+ preferably)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://chrisedrego.medium.com/kubernetes-monitoring-metrics-server-8ee562df97fc" rel="noopener noreferrer"&gt;Metrics Server&lt;/a&gt; (The Complete Guide on Metrics Server? -&amp;gt; &lt;a href="https://chrisedrego.medium.com/kubernetes-monitoring-metrics-server-8ee562df97fc" rel="noopener noreferrer"&gt;link&lt;/a&gt; )&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How does Cluster AutoScaler really scale?
&lt;/h2&gt;

&lt;p&gt;Cluster Autoscaler runs a cycle in which it continuously checks whether any pod is in the &lt;strong&gt;&lt;em&gt;Pending&lt;/em&gt;&lt;/strong&gt; state because the available nodes in the cluster lack adequate resources; if so, it adds new nodes to make sure the pods get scheduled. The decision is &lt;strong&gt;based on the requests specified in the pod spec&lt;/strong&gt;, which is why it's important to provide realistic request values for pods &lt;strong&gt;&lt;em&gt;(nothing less, nothing more)&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Cluster Autoscaler decreases the number of nodes that are consistently unneeded for a significant amount of time. A node is unneeded when it has low utilization and all of its important pods can be moved elsewhere.&lt;/p&gt;
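The fit check at the heart of that cycle can be sketched with simple arithmetic: a pending pod triggers a scale-up when its request does not fit into any node's remaining allocatable capacity. The numbers below are illustrative:

```shell
# Illustrative fit check: does a pending pod's memory request fit on a node?
NODE_ALLOCATABLE_MI=3700   # usable memory on the node, in MiB (illustrative)
NODE_USED_MI=2048          # memory already requested by scheduled pods
POD_REQUEST_MI=2048        # request of the pending pod

FREE_MI=$((NODE_ALLOCATABLE_MI - NODE_USED_MI))
if [ "$POD_REQUEST_MI" -gt "$FREE_MI" ]; then
  echo "pod does not fit (free: ${FREE_MI}Mi); Cluster Autoscaler adds a node"
else
  echo "pod fits on an existing node"
fi
```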

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AoYL-R7gdi4vafutvC59J6g.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AoYL-R7gdi4vafutvC59J6g.gif" alt="LifeCycle: ClusterAutoScaler"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md" rel="noopener noreferrer"&gt;&lt;strong&gt;kubernetes/autoscaler&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;In this demo, we will use a self-managed Kubernetes cluster already set up on AWS using kops.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Clone the GitHub repo &lt;a href="https://github.com/chrisedrego/clusterautoscaler" rel="noopener noreferrer"&gt;&lt;strong&gt;https://github.com/chrisedrego/clusterautoscaler&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Creating a Test Instance Group
&lt;/h3&gt;

&lt;p&gt;For testing, we will create a dedicated Instance Group with the nodeSelector &lt;strong&gt;node: test-node&lt;/strong&gt;, so that the test pods only get scheduled on this node. We have selected the &lt;strong&gt;&lt;a href="https://aws.amazon.com/ec2/instance-types/t3/" rel="noopener noreferrer"&gt;t3.medium&lt;/a&gt;&lt;/strong&gt; type as it has the following configuration.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**compute/vcpu:** 2vcpu
**memory:** 4Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
Make the required changes in the existing config (set CLUSTER_NAME and SUBNET_NAME) and apply the configuration with the steps below.
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KOPS_STATE_STORE='s3://STATE_STORE_URL'
export KOPS_CLUSTER_NAME='CLUSTER_NAME'
kops create -f ./kops/test-node.yaml

kops update cluster --yes
kops rolling-update cluster --yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Verifying the new InstanceGroup &amp;amp; nodes are Ready.&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;***# Specify the new Instance Group Name***
INSTANCE_GROUP=''
aws autoscaling describe-auto-scaling-groups | grep $INSTANCE_GROUP

# Check if new nodes are added
kubectl get nodes 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Creating a Test Deployment
&lt;/h3&gt;

&lt;p&gt;For testing, we will create a test deployment with &lt;strong&gt;nodeSelector&lt;/strong&gt; set to &lt;strong&gt;test-node&lt;/strong&gt; and the &lt;strong&gt;request&lt;/strong&gt; values set accordingly. In this case, we have set the requested memory to &lt;strong&gt;2Gi&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
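The embedded gist did not survive, so here is a minimal sketch of such a deployment written to a file for `kubectl apply`. The name `test-app` matches the deployment scaled later in this article; the container image is illustrative.

```shell
# Sketch: a test deployment pinned to the test node pool via nodeSelector,
# requesting 2Gi of memory (the image is illustrative).
printf '%s\n' \
  'apiVersion: apps/v1' \
  'kind: Deployment' \
  'metadata:' \
  '  name: test-app' \
  'spec:' \
  '  replicas: 1' \
  '  selector:' \
  '    matchLabels:' \
  '      app: test-app' \
  '  template:' \
  '    metadata:' \
  '      labels:' \
  '        app: test-app' \
  '    spec:' \
  '      nodeSelector:' \
  '        node: test-node' \
  '      containers:' \
  '        - name: test-app' \
  '          image: nginx' \
  '          resources:' \
  '            requests:' \
  '              memory: 2Gi' > test-app-deployment.yaml
cat test-app-deployment.yaml
```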


&lt;h3&gt;
  
  
  Deploying Cluster AutoScaler
&lt;/h3&gt;

&lt;p&gt;Once you have cloned the repo, there is a folder called &lt;strong&gt;cluster-auto-scaler&lt;/strong&gt; which contains four different ways of deploying; for the sake of simplicity we will use a single auto-scaling group.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Single Auto-Scaling Group&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multiple Auto-Scaling Group&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On-Control Plane (on the master nodes)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Autodiscover (auto-discover using tags)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A slight change is required before applying the manifest.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- --skip-nodes-with-local-storage=false

- --nodes=&amp;lt;MIN_COUNT&amp;gt;:&amp;lt;MAX_COUNT&amp;gt;:&amp;lt;INSTANCE_GROUP&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;MIN_COUNT&lt;/strong&gt;: Minimum number of nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MAX_COUNT&lt;/strong&gt;: Maximum number of nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;INSTANCE_GROUP&lt;/strong&gt;: AutoScalingGroup / InstanceGroup&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before applying, make sure the MIN_COUNT and MAX_COUNT are within the actual range of the InstanceGroup.&lt;/p&gt;
&lt;/blockquote&gt;
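With illustrative values (the instance group name is a placeholder), the flag expands as below; a tiny sanity check enforces the valid range:

```shell
# Illustrative values for the --nodes flag (instance group name is a placeholder).
MIN_COUNT=1
MAX_COUNT=5
INSTANCE_GROUP="test-node.cluster.example.com"

# min must not exceed max, or the flag is invalid
if [ "$MIN_COUNT" -le "$MAX_COUNT" ]; then
  echo "--nodes=${MIN_COUNT}:${MAX_COUNT}:${INSTANCE_GROUP}"
else
  echo "invalid range: MIN_COUNT must not exceed MAX_COUNT"
  exit 1
fi
```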

&lt;h2&gt;
  
  
  Simulating Load
&lt;/h2&gt;

&lt;p&gt;We will simulate load by increasing the number of replicas.&lt;/p&gt;

&lt;p&gt;As we already know, a &lt;strong&gt;t3.medium&lt;/strong&gt; machine has &lt;strong&gt;4Gi&lt;/strong&gt; of memory, but other system resources leave the usable memory at around &lt;strong&gt;~3.7Gi&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before ClusterAutoScaler: Pending
&lt;/h3&gt;

&lt;p&gt;Before ClusterAutoScaler, we assigned the test app a memory request of 2Gi and tried scaling the application to 2 replicas. This fails because a single node (t3.medium) doesn't have enough resources, leaving a pod in the &lt;strong&gt;Pending state&lt;/strong&gt; (2Gi x 2 = 4Gi &amp;gt; ~3.7Gi).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale deploy test-app --replicas=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2144%2F1%2A-XmyFQmWDgxZAzO4ur14lA.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2144%2F1%2A-XmyFQmWDgxZAzO4ur14lA.gif" alt="Pod’s Pending State"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  After ClusterAutoScaler: Running
&lt;/h3&gt;

&lt;p&gt;After deploying ClusterAutoScaler, it finds that the pod is in a &lt;strong&gt;Pending&lt;/strong&gt; state and adds a new node to the cluster; once the node joins, the pod is scheduled onto it and everything works fine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Al31K123MoyDHdpoqRezOkg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2Al31K123MoyDHdpoqRezOkg.gif" alt="Pod’s in Running State. ❤"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendations: Cluster AutoScaler
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Here are a few recommendations to keep in mind.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Specifying the request of the pods
&lt;/h3&gt;

&lt;p&gt;Specifying requests helps CA get details of resource demand and scale the cluster accordingly. Keep the values realistic and not too high, as overstated requests can lead to false upscaling and burn your cloud budget.&lt;/p&gt;

&lt;h3&gt;
  
  
  Have an HPA for deployments
&lt;/h3&gt;

&lt;p&gt;HPA ensures that pods scale automatically as demand increases, which in turn triggers CA to scale the nodes up, and back down when no longer needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Be careful with mission-critical production workloads
&lt;/h3&gt;

&lt;p&gt;Avoid running ClusterAutoScaler on clusters hosting mission-critical applications that cannot tolerate being rescheduled onto different nodes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  If you found this article useful, feel free to 👏 clap many times or share it with your friends. If you have any doubts regarding the same or anything around DevOps, get in touch with me on &lt;a href="https://www.linkedin.com/in/chrisedrego" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://twitter.com/chrisedrego" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.instagram.com/chrisedrego/" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;.
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>microservices</category>
    </item>
    <item>
      <title>6 Easy steps for sharing AWS Encrypted RDS snapshot between two accounts.</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 08:20:37 +0000</pubDate>
      <link>https://dev.to/chrisedrego/6-easy-steps-for-sharing-aws-encrypted-rds-snapshot-between-two-accounts-1c40</link>
      <guid>https://dev.to/chrisedrego/6-easy-steps-for-sharing-aws-encrypted-rds-snapshot-between-two-accounts-1c40</guid>
      <description>&lt;p&gt;This is a hassle-free guide to sharing an AWS encrypted RDS snapshot across two different AWS accounts in 6 easy steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F11182%2F0%2AsdNr-kOmJ9ffmUQw" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F11182%2F0%2AsdNr-kOmJ9ffmUQw" alt="Photo by [Paweł Czerwiński](https://unsplash.com/@pawel_czerwinski?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Login to the &lt;strong&gt;Source Account&lt;/strong&gt;, Create a snapshot from RDS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating KMS Key&lt;/strong&gt; (with details of the destination account)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the snapshot is created, Create a new copy of the snapshot &amp;amp; attach the KMS key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Share the newly created snapshot to the destination account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log in to the &lt;strong&gt;Destination Account&lt;/strong&gt;, head over to the &lt;strong&gt;Shared with me&lt;/strong&gt; snapshots, and create a new copy of the snapshot.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Restore the copied snapshot into a new RDS instance.&lt;/p&gt;&lt;/li&gt;

&lt;/ol&gt;
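The console steps below can also be scripted with the AWS CLI. The sketch saves the two key calls (copy with a KMS key, then share) to a script for review before running; `aws rds copy-db-snapshot` and `aws rds modify-db-snapshot-attribute` are the standard commands, and all identifiers (snapshot names, KMS key ARN, account IDs) are placeholders.

```shell
# Sketch: CLI equivalent of the copy-and-share steps, saved for review.
# All identifiers are placeholders.
printf '%s\n' \
  '#!/bin/sh' \
  '# Copy the snapshot, re-encrypting it with the shareable KMS key' \
  'aws rds copy-db-snapshot \' \
  '  --source-db-snapshot-identifier source-snapshot \' \
  '  --target-db-snapshot-identifier shared-snapshot \' \
  '  --kms-key-id arn:aws:kms:REGION:SOURCE_ACCOUNT_ID:key/KEY_ID' \
  '' \
  '# Share the copied snapshot with the destination account' \
  'aws rds modify-db-snapshot-attribute \' \
  '  --db-snapshot-identifier shared-snapshot \' \
  '  --attribute-name restore \' \
  '  --values-to-add DESTINATION_ACCOUNT_ID' > share-snapshot.sh
cat share-snapshot.sh
```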

&lt;h2&gt;
  
  
  Changes at the Source Account
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Create Snapshot
&lt;/h3&gt;

&lt;p&gt;Log in to the source AWS Account which contains the source Database and create a snapshot from it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4596%2F1%2A47VEJMchAkq3lhU0cF6OLg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4596%2F1%2A47VEJMchAkq3lhU0cF6OLg.png" alt="Click on Action &amp;gt; Take snapshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2840%2F1%2AqZGG4hNQh9EkXpditZ1XMg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2840%2F1%2AqZGG4hNQh9EkXpditZ1XMg.png" alt="Enter the name for the snapshot and create snapshot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that an encrypted snapshot cannot be shared straight away.&lt;/p&gt;

&lt;p&gt;Click on Share Snapshot and you'll see that the snapshot cannot be shared directly. For that, we have the KMS key to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2804%2F1%2AVwi-ZYTIO4oMYqNNwrnSgA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2804%2F1%2AVwi-ZYTIO4oMYqNNwrnSgA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Create KMS Key
&lt;/h3&gt;

&lt;p&gt;Open Key Management Service (&lt;a href="https://console.aws.amazon.com/kms/home" rel="noopener noreferrer"&gt;KMS&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Create a symmetric key, add a label, and set the key permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3980%2F1%2AbfJevJ9C8XbGEqiKTiqn-w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3980%2F1%2AbfJevJ9C8XbGEqiKTiqn-w.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the AWS Account ID and save the KMS key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3372%2F1%2A-_T1Czf6ly99cEwbV6cniA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3372%2F1%2A-_T1Czf6ly99cEwbV6cniA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Create a Copy of the Snapshot
&lt;/h3&gt;

&lt;p&gt;Once the snapshot is created, Select Snapshot, Click Actions &amp;gt; Copy Snapshot&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5376%2F1%2Ae_U2U_4HWO5YvZ2i8HAcRw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5376%2F1%2Ae_U2U_4HWO5YvZ2i8HAcRw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide a name &amp;amp; select the newly created KMS key under the Master key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2240%2F1%2AP8Mnha4sKoKW8XYeTNE3cQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2240%2F1%2AP8Mnha4sKoKW8XYeTNE3cQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Share the Snapshot with the Destination account
&lt;/h3&gt;

&lt;p&gt;Once the Copy of the snapshot is created, click on Actions &amp;gt; Share snapshot&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A-tVDQ2CbQ0kZp61zo0K-Sw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2A-tVDQ2CbQ0kZp61zo0K-Sw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide the AWS Account ID and click Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2140%2F1%2AEpgh4zqEuhEkd9K7IIQYEw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2140%2F1%2AEpgh4zqEuhEkd9K7IIQYEw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Changes at the Destination Account
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Import the Shared snapshot
&lt;/h3&gt;

&lt;p&gt;The snapshot we shared from the source account will be available in the Shared with me tab of the snapshots window in AWS RDS.&lt;/p&gt;

&lt;p&gt;To create a copy of the snapshot, click on Actions &amp;gt; Copy snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5760%2F1%2AI8t5dZg0TlGToS5IK8V97g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F5760%2F1%2AI8t5dZg0TlGToS5IK8V97g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Restore the Shared snapshot into RDS
&lt;/h3&gt;

&lt;p&gt;Once the copy of the shared snapshot is created, we can restore it.&lt;/p&gt;

&lt;p&gt;Select the Snapshot, Click on Actions &amp;gt; Restore Snapshot&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AZsquEoZdNitE4HjUIyMArQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AZsquEoZdNitE4HjUIyMArQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2220%2F1%2A7G3QoEZZS71WVDJINZdhMw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2220%2F1%2A7G3QoEZZS71WVDJINZdhMw.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Provide the details for the new RDS instance and there we go!&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  “If you found this article useful, feel free to show some ❤️ and click on ❤️ many times or share it with your friends. Also follow us for more DevOps content.”
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>sre</category>
      <category>mysql</category>
    </item>
    <item>
      <title>MYSQL Operator: A MYSQL ❤ affair with Kubernetes</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 08:18:16 +0000</pubDate>
      <link>https://dev.to/chrisedrego/mysql-operator-a-mysql-affair-with-kubernetes-551j</link>
      <guid>https://dev.to/chrisedrego/mysql-operator-a-mysql-affair-with-kubernetes-551j</guid>
      <description>&lt;p&gt;We will explore how to easily provision, backup, restore &amp;amp; monitor MYSQL Instances on Kubernetes the easy way using MYSQL Operator.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Running databases in Kubernetes is one shiny disco ball that attracts a lot of limelight. Although Databases by their very nature are stateful, on the other hand, Kubernetes is more inclined towards running stateless &amp;amp; ephemeral applications, which makes them two different worlds apart, although to make them one ❤ we have MYSQL Operator. As there is some great amount of work done worldwide in the Open Source community by Oracle, Presslabs &amp;amp; Percona providing rich MySQL Operator’s which make running MYSQL on Kubernetes a hassle-free experience.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oFkHnp35--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2160/1%2ABUzBKmB4IbrFuIiXeBv1yA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oFkHnp35--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2160/1%2ABUzBKmB4IbrFuIiXeBv1yA.png" alt="**“And they lived happily ever after” **— Joshua Loth Liebman"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is MYSQL Operator?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Operator&lt;/a&gt; are applications written ontop of kubernetes which makes challenging &amp;amp; domain specific operations automated &amp;amp; easy. We are choosing &lt;a href="https://github.com/presslabs/mysql-operator"&gt;*MYSQL Operator&lt;/a&gt;* from &lt;a href="https://dev.toundefined"&gt;*Presslabs&lt;/a&gt; which &lt;em&gt;makes running **MySQL as a service&lt;/em&gt;* with built-in High-Availability, Scalability &amp;amp; Monitoring quite simple. A single definition of MYSQL Cluster can include all the information needed for backup, storage along with MySQLD configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Architecture of MYSQL Operator&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HoxD-yt6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2160/1%2AwZvZ6Y0ht1IZOLKZtj6DDA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HoxD-yt6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2160/1%2AwZvZ6Y0ht1IZOLKZtj6DDA.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole infrastructure runs on top of Kubernetes, along with &lt;a href="https://github.com/openark/orchestrator"&gt;GitHub's Orchestrator&lt;/a&gt;, an open-source tool that provides a pretty intuitive UI. The MySQL Operator does the actual heavy lifting &amp;amp; provisions the various MySQL nodes &amp;amp; services. What's even greater is that each MySQL instance has a mysqld-exporter service running, which can be used for monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Reasons to use &lt;a href="https://medium.com/@presslabs"&gt;Presslabs&lt;/a&gt; MYSQL Operator&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Automatic on-demand &amp;amp; scheduled backups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initializing new clusters from existing backups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Built-in High-Availability, Scalability, Self-Healing &amp;amp; Monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save money &amp;amp; avoid vendor lock-in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Best suited for Microservices Architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisite
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Basic understanding of MYSQL, Kubernetes &amp;amp; &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/"&gt;Kubernetes Operator&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the following tools: &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"&gt;kubectl&lt;/a&gt;, &lt;a href="https://helm.sh/docs/intro/install/"&gt;helm&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Installing MySQL Operator using helm&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Installing mysql-operator with helm is simple and straightforward; run the following commands, based on the version of helm installed on your machine.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
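&lt;p&gt;As a minimal sketch of those commands (the chart repository URL is an assumption here; check the project README for the current one):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add the Presslabs chart repository (URL assumed; verify against the repo docs)
helm repo add presslabs https://presslabs.github.io/charts
helm repo update

# Helm v3
helm install mysql-operator presslabs/mysql-operator

# Helm v2
helm install --name mysql-operator presslabs/mysql-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;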


&lt;h2&gt;
  
  
  Provisioning: MYSQL Cluster
&lt;/h2&gt;

&lt;p&gt;Once we are done installing MySQL Operator, the next step is to provision a MySQL cluster instance with credentials, storage &amp;amp; MySQL configuration.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;br&gt;
In this case, we have configured a root password, which needs to be base64 encoded; we can do that with the help of &lt;a href="https://www.base64encode.org/"&gt;base64encode&lt;/a&gt;. Along with it, we can configure the mysqld settings and a persistent volume claim for storage.
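&lt;p&gt;A minimal sketch of such a definition (field names follow the Presslabs MysqlCluster CRD; the names, password and sizes are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
type: Opaque
data:
  ROOT_PASSWORD: Y2hhbmdtZQ==   # "changme", base64-encoded
---
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: mycluster
spec:
  replicas: 2
  secretName: mycluster-secret
  mysqlConf:                    # mysqld settings
    max_connections: "250"
  volumeSpec:                   # persistent volume claim for storage
    persistentVolumeClaim:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;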
&lt;h3&gt;
  
  
  &lt;strong&gt;List &amp;amp; describe the available MySQL instances&lt;/strong&gt;
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get mysql
kubectl describe mysql/&amp;lt;NAME_OF_MYSQL_CLUSTER&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Connecting to a MySQL Instance&lt;/strong&gt;
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl -n mysql-operator run mysql-client --image=mysql:5.7 -it --rm --restart=Never -- /bin/bash

mysql -uroot -p'changme' -h 'mycluster-mysql.default.svc.cluster.local' -P3306
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Exposing the MYSQL Cluster Publicly using Ingress
&lt;/h3&gt;

&lt;p&gt;There are some minor tweaks needed in the Ingress configuration when it comes to TCP services, such as adding the port number along with the service to be exposed; I have already addressed this in this &lt;a href="https://medium.com/@chrisedrego/setting-up-a-standalone-mysql-instance-on-kubernetes-exposing-it-using-nginx-ingress-controller-262fc7af593a"&gt;blog&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://medium.com/@chrisedrego/setting-up-a-standalone-mysql-instance-on-kubernetes-exposing-it-using-nginx-ingress-controller-262fc7af593a"&gt;&lt;strong&gt;Setting up a Standalone MYSQL Instance on Kubernetes &amp;amp; exposing it using Nginx Ingress Controller.&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Backups
&lt;/h2&gt;

&lt;p&gt;Backups can be taken pretty easily in an automated fashion, either on-demand or on a schedule. They are stored on object storage such as AWS S3, Google Cloud Storage, Google Drive &amp;amp; Azure Blob Storage, and are uploaded using &lt;a href="https://rclone.org/"&gt;rclone&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating Backups
&lt;/h3&gt;

&lt;p&gt;In order for backups to be uploaded to these object stores, we need to provide the credentials using Kubernetes Secrets, which need to be &lt;a href="https://www.base64encode.org"&gt;base64 encoded&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
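&lt;p&gt;As a hedged sketch (bucket name and credential keys are placeholders; the exact secret keys depend on the object store, per the operator's docs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: backup-bucket-secret
type: Opaque
stringData:                       # plain values; Kubernetes stores them base64-encoded
  AWS_ACCESS_KEY_ID: my-access-key
  AWS_SECRET_ACCESS_KEY: my-secret-key
---
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlBackup
metadata:
  name: mycluster-backup
spec:
  clusterName: mycluster
  backupURL: s3://my-backup-bucket/mycluster.xbackup.gz
  backupSecretName: backup-bucket-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;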
&lt;br&gt;
We can &lt;strong&gt;list&lt;/strong&gt; or &lt;strong&gt;view&lt;/strong&gt; the status of the backups. MysqlBackup resources actually run as Jobs inside the Kubernetes cluster.
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get MysqlBackup

kubectl describe MysqlBackup/&amp;lt;BACKUP_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Creating Scheduled Backups
&lt;/h3&gt;

&lt;p&gt;The Operator provides a way to back up clusters periodically at regular intervals by specifying a cron expression.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
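&lt;p&gt;Roughly, the schedule lives on the MysqlCluster spec itself (field names per the Presslabs CRD; note that the cron format may include a seconds field depending on the operator version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: mycluster
spec:
  replicas: 2
  secretName: mycluster-secret
  backupSchedule: "0 0 0 * * *"        # daily at midnight
  backupURL: s3://my-backup-bucket/
  backupSecretName: backup-bucket-secret
  backupScheduleJobsHistoryLimit: 7    # keep the last 7 backup jobs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;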


&lt;h2&gt;
  
  
  Restoring Backups
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Initializing a cluster from existing backups
&lt;/h3&gt;

&lt;p&gt;There is often a need, in a microservice environment, to spawn databases that are restored from a point-in-time backup or snapshot. MySQL Operator makes this pretty easy: define &lt;strong&gt;initBucketURL&lt;/strong&gt;, which points to the backup archive file, along with the secret &lt;strong&gt;initBucketSecretName&lt;/strong&gt; to access the file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
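&lt;p&gt;A sketch of a cluster initialized from an existing archive (URL, names and sizes are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: mycluster-restored
spec:
  replicas: 2
  secretName: mycluster-secret
  initBucketURL: s3://my-backup-bucket/mycluster.xbackup.gz   # backup archive file
  initBucketSecretName: backup-bucket-secret                  # secret to access the file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;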


&lt;h2&gt;
  
  
  Monitoring &amp;amp; Visualization
&lt;/h2&gt;

&lt;p&gt;MySQL Operator comes with built-in monitoring &amp;amp; visualization, with the help of Orchestrator and the mysqld-exporter service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Orchestrator
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/openark/orchestrator"&gt;Orchestrator&lt;/a&gt; is developed by Github, which allows to provides a replication topology control &amp;amp; high availability and bird-eyes view of the whole MYSQL Cluster farm.&lt;/p&gt;

&lt;p&gt;In order to access Orchestrator, we can port-forward the service:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/mysql-operator -n mysql-operator 3003:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--srQDXPEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2412/1%2AN_Jqr5VtDG3lWaiZqZMoyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--srQDXPEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2412/1%2AN_Jqr5VtDG3lWaiZqZMoyg.png" alt="Complete view of the entire MYSQL cluster farm."&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  mysqld-exporter
&lt;/h3&gt;

&lt;p&gt;Each MySQL cluster that is provisioned has a mysqld-exporter pod, which captures metrics/information about the individual cluster. This service can be used by Prometheus to scrape the metrics &amp;amp; provide valuable insight into the database.&lt;/p&gt;

&lt;p&gt;We need to define a service that exposes the mysqld-exporter pod, which runs on port 9125.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
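&lt;p&gt;A sketch of such a service (the selector labels are assumptions; match them against the labels on your cluster's pods):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: mycluster-exporter-svc
spec:
  selector:
    mysql.presslabs.org/cluster: mycluster   # assumed label; verify with kubectl get pods --show-labels
  ports:
    - name: prometheus
      port: 9125
      targetPort: 9125
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;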

&lt;p&gt;&lt;br&gt;&lt;br&gt;
In order to access the mysqld-exporter, let's port-forward the service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/mycluster-exporter-svc 9125:9125
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rXtGHTZv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2620/1%2Ad4J7iplpEMhmJWFZfdkvPg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rXtGHTZv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2620/1%2Ad4J7iplpEMhmJWFZfdkvPg.png" alt="Metrics provides better insight about mysql cluster"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Although we have learned to provision, scale &amp;amp; monitor MySQL instances, making production databases run on Kubernetes, the most important thing to look at is the underlying storage for the Persistent Volume. There are a couple of storage backends, such as EFS and NFS, that support RWX but often don't work well. We can certainly opt for EBS, Azure Disk or local disk, although we can still run into problems; for EBS, ReadWriteMany isn't supported. In that case we can prefer options such as Ceph, GlusterFS or Rook, which are production-ready but complex to set up &amp;amp; maintain.&lt;/p&gt;
&lt;h1&gt;
  
  
  “If you found this article useful, feel free to show some ❤️ by clicking it many times, or share it with your friends.”
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>mysql</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Kubernetes Monitoring: Kube-State-Metrics</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 08:13:38 +0000</pubDate>
      <link>https://dev.to/chrisedrego/kubernetes-monitoring-kube-state-metrics-2bbi</link>
      <guid>https://dev.to/chrisedrego/kubernetes-monitoring-kube-state-metrics-2bbi</guid>
      <description>&lt;p&gt;In a pursuit to monitor our Kubernetes cluster, we often need the right set of tools to capture the right metrics. There are already a couple of tools out there; today, let's discuss Kube-state-metrics.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Kube-State-Metrics?
&lt;/h3&gt;

&lt;p&gt;Kube-State-Metrics is an open-source, lightweight utility used to monitor a Kubernetes cluster. As the name suggests, it provides information about the &lt;strong&gt;state of a couple of Kubernetes objects&lt;/strong&gt; by listening to the Kubernetes API. Kube-State-Metrics acts like a Swiss Army Knife that provides metrics for a long list of Kubernetes objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z6e4Iuz2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A0ovezrF2X2I2uSujya48Ig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z6e4Iuz2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A0ovezrF2X2I2uSujya48Ig.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing: Kube-State-Metrics
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone [https://github.com/kubernetes/kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
kubectl apply -f kube-state-metrics/examples/standard/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;This will go ahead and install all the components needed such as ServiceAccount, ClusterRole, ClusterRolebinding along with Deployment, and Service.&lt;/p&gt;

&lt;p&gt;Let’s test it locally by exposing the service; run the command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/kube-state-metrics -n kube-system 8080:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YD1IqpuV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AzeJMnl9CQghEpqgxkOQMWQ.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YD1IqpuV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AzeJMnl9CQghEpqgxkOQMWQ.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Scraping: Kube-State-Metrics with Prometheus
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dzxjiR6Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Aa1pU0aZHigEuxip-8vXe5A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dzxjiR6Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Aa1pU0aZHigEuxip-8vXe5A.png" alt="**Kube-state-metrics ❤ Prometheus**"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we already know, Kube-state-metrics is Prometheus-friendly, so let's get started with scraping Kube-State-Metrics with Prometheus.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
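&lt;p&gt;A minimal scrape job, assuming kube-state-metrics was installed into kube-system as above (the service DNS name is an assumption; adjust it to your install):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# prometheus.yml fragment
scrape_configs:
  - job_name: kube-state-metrics
    static_configs:
      - targets:
          - kube-state-metrics.kube-system.svc.cluster.local:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;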

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Assuming that you have already installed Prometheus on your cluster, you can use the above Prometheus configuration to start scraping metrics.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  If you found this article useful, feel free to click the ❤️ many times or share it with your friends.
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Monitoring Nginx Ingress Controller with Prometheus &amp; Grafana.</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 08:11:12 +0000</pubDate>
      <link>https://dev.to/chrisedrego/monitoring-nginx-ingress-controller-with-prometheus-grafana-2l37</link>
      <guid>https://dev.to/chrisedrego/monitoring-nginx-ingress-controller-with-prometheus-grafana-2l37</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2560%2F1%2AORT3EJkt_g5y9ja5L19zjg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2560%2F1%2AORT3EJkt_g5y9ja5L19zjg.jpeg" alt="Simple Guide to Monitoring Nginx Ingress Controller"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before getting started we need to make sure that we have the following tools installed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Nginx Ingress Controller&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prometheus&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grafana.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s install the Nginx Ingress Controller, Prometheus, and Grafana with &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;helm3&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add nginx-stable [https://helm.nginx.com/stable](https://helm.nginx.com/stable)
helm repo update

helm install controller  nginx-stable/nginx-ingress 
--set prometheus.create=true --set prometheus.port=9901

helm install prometheus stable/prometheus
helm install grafana stable/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;You can tweak the installation as per your needs, more details on this on &lt;a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="noopener noreferrer"&gt;https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One extra piece of work needed is to expose a service that listens on the port serving the Nginx Ingress Controller's Prometheus metrics; in our case it's &lt;em&gt;9901&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
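&lt;p&gt;As a sketch, the service could look like this (the name and selector label are assumptions; match the selector to your controller's pod labels):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-metrics
spec:
  selector:
    app: controller-nginx-ingress    # assumed; verify with kubectl get pods --show-labels
  ports:
    - name: prometheus
      port: 9901
      targetPort: 9901
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;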

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Once we have created the service, we can go ahead and add the endpoints for Prometheus to scrape. We need to make sure Prometheus is present inside the Kubernetes cluster, so that it can easily resolve the internal domains of the Nginx Ingress Controller.&lt;br&gt;&lt;/p&gt;

&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
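&lt;p&gt;A minimal scrape job targeting the metrics service (the service name and namespace are assumptions; adjust them to match your own service):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# prometheus.yml fragment
scrape_configs:
  - job_name: nginx-ingress
    static_configs:
      - targets:
          - nginx-ingress-metrics.default.svc.cluster.local:9901
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;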

&lt;p&gt;&lt;br&gt;&lt;br&gt;
The next step is to use the above configuration as the Prometheus configuration. After that, we can open Prometheus, head over to the &lt;strong&gt;&lt;em&gt;/targets&lt;/em&gt;&lt;/strong&gt; page and verify that the ingress endpoints' state is &lt;strong&gt;UP&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2408%2F1%2Ai9DLOQf2M7o7RhVwVi-Ijg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2408%2F1%2Ai9DLOQf2M7o7RhVwVi-Ijg.png" alt="Verify if Prometheus URL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check the different types of metrics that the Nginx Ingress Controller emits, we can head over to the Prometheus dashboard and type &lt;strong&gt;nginx&lt;/strong&gt; in the console, where we receive a long list of the different metrics it emits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2144%2F1%2AtPzdgCl2tDgmr2LBiOCLVA.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2144%2F1%2AtPzdgCl2tDgmr2LBiOCLVA.gif" alt="Nginx Ingress Controller Prometheus Metrics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's Graph it up: Grafana
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Adding Prometheus DataSource in Grafana
&lt;/h3&gt;

&lt;p&gt;Go ahead and add Prometheus as the data source in &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Head over to Grafana Configuration &amp;gt; DataSource &amp;gt; Add Data Sources&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2592%2F1%2Ax9zr3a48Xhj9V57vV4lbiQ.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2592%2F1%2Ax9zr3a48Xhj9V57vV4lbiQ.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Dashboard in Grafana
&lt;/h3&gt;

&lt;p&gt;Now that we have added Prometheus as a data source, we need to add a dashboard for the Nginx Ingress Controller. To make it hassle-free, there is already a dashboard in place that gets everything set up with a few clicks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2956%2F1%2Af7w2sqVw5hRfH0mZ3KAgzw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2956%2F1%2Af7w2sqVw5hRfH0mZ3KAgzw.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the link to the awesome dashboard for Grafana: &lt;a href="https://grafana.com/grafana/dashboards/9614" rel="noopener noreferrer"&gt;&lt;strong&gt;9614&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://grafana.com/grafana/dashboards/9614" rel="noopener noreferrer"&gt;&lt;strong&gt;NGINX Ingress controller dashboard for Grafana&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bingo! Here-we-go!
&lt;/h2&gt;

&lt;p&gt;We have now successfully set up Prometheus to scrape the metrics and Grafana to show eye-catching visualizations for the Nginx Ingress Controller.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AGFSurkpD_yN0ptxaW-7-0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F1%2AGFSurkpD_yN0ptxaW-7-0g.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  If you found this article useful, feel free to click the ❤️ many times or share it with your friends, and follow me for more articles like these.
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>nginx</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Kubernetes Monitoring: Metrics Server</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 08:04:08 +0000</pubDate>
      <link>https://dev.to/chrisedrego/kubernetes-monitoring-metrics-server-3fhk</link>
      <guid>https://dev.to/chrisedrego/kubernetes-monitoring-metrics-server-3fhk</guid>
      <description>&lt;p&gt;&lt;strong&gt;Kubernetes Monitoring Series: Understanding the what, why, how &amp;amp; behind the scenes — Metrics-server.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kInths-A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQBms8ZxxCHJAb3OwxLsq2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kInths-A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQBms8ZxxCHJAb3OwxLsq2w.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Metrics Server?
&lt;/h2&gt;

&lt;p&gt;Metrics Server is another useful lightweight tool that can be added to your Kubernetes monitoring arsenal. As the name suggests, Metrics Server provides metrics for resource utilization, like CPU &amp;amp; Memory. Metrics Server discovers all the nodes in the cluster and queries the &lt;a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/"&gt;kubelet&lt;/a&gt;, an agent that runs on each node, and forwards the resource-utilization details, i.e. CPU/Memory, to the Kubernetes API Server, which exposes them as a Metrics API endpoint.&lt;/p&gt;

&lt;p&gt;In most managed Kubernetes-as-a-service offerings from cloud vendors, it comes as an addon by default, as it does with a couple of bootstrapping tools like kube-up.sh. If you are using minikube, it can be enabled as an addon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do I need the Metrics Server?
&lt;/h2&gt;

&lt;p&gt;Although the Metrics Server misses all the glitter and glamour that the rest of the monitoring tools offer with visualisation and dashboards, if you use features such as the &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/"&gt;Horizontal Pod AutoScaler&lt;/a&gt;, &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler"&gt;Vertical Pod AutoScaler&lt;/a&gt;, kubectl top, or the &lt;a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/"&gt;Kubernetes Dashboard&lt;/a&gt;, then you do need the Metrics Server to provide resource-utilisation metrics for them.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to view the Metrics Server?
&lt;/h3&gt;

&lt;p&gt;There are two ways to capture the information in the Metrics Server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kubectl top command&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Metrics endpoints&lt;/strong&gt; &lt;em&gt;/apis/metrics.k8s.io/v1beta1&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;kubectl top&lt;/strong&gt; works much like the top command in Linux, providing details of the resources utilized by pods &amp;amp; nodes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl top nodes 
kubectl top pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1gTp_R3K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AA77D0vmdsCgO8wzWrBgcMw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1gTp_R3K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AA77D0vmdsCgO8wzWrBgcMw.png" alt="kubectl top nodes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SWCJ-mme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ASNmWKq6YT9s-f2TiqxcvBg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SWCJ-mme--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ASNmWKq6YT9s-f2TiqxcvBg.png" alt="kubectl top pods"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both commands provide details about resource consumption (CPU/Memory), ordered by resource usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes metrics endpoint&lt;/strong&gt; can be accessed directly as follows.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This returns a response about resource utilization for nodes and pods present in the cluster.&lt;/p&gt;
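&lt;p&gt;The shape of the response is roughly as follows (illustrative names and values only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {
      "metadata": { "name": "node-1" },
      "timestamp": "2021-05-29T08:00:00Z",
      "usage": { "cpu": "250m", "memory": "1024Mi" }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;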

&lt;h2&gt;
  
  
  How to install Metrics Server?
&lt;/h2&gt;

&lt;p&gt;Before installing the metrics Server we need to ensure that the Aggregation layer is present. &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/"&gt;Aggregation Layer&lt;/a&gt; helps to further extend the existing core Kubernetes APIs by adding additional APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enabling metrics server as an addon in minikube&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube addons enable metrics-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Cloning the Metrics Server repository and applying the manifest.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone [https://github.com/kubernetes-sigs/metrics-server](https://github.com/kubernetes-sigs/metrics-server)
cd metrics-server
kubectl apply -f manifests/base/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  OR
&lt;/h1&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can also deploy v3.0.6, which works with some minor tweaks.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f [https://gist.githubusercontent.com/chrisedrego/0dc6d79f25c2235934c2835d0e8952c9/raw/e22fdd4fa6476165a46968120afe4bd52538ae0b/kubernetes_metrics_server.yaml](https://gist.githubusercontent.com/chrisedrego/0dc6d79f25c2235934c2835d0e8952c9/raw/e22fdd4fa6476165a46968120afe4bd52538ae0b/kubernetes_metrics_server.yaml)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This basically installs metrics-server along with related Kubernetes Objects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Service Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;APIService (v1beta1.metrics.k8s.io)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ClusterRole/ClusterRoleBinding/Roles/RoleBinding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Overall, the Metrics Server is best suited where we have the Horizontal Pod Autoscaler/Vertical Pod Autoscaler or the Kubernetes Dashboard, or want to view the stats emitted by the kubectl top command; it’s not an ideal solution where the emitted metrics need to be scraped by Prometheus.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;Finally, once the Metrics Server is installed, we can wait a few seconds and then try running &lt;strong&gt;kubectl top nodes&lt;/strong&gt; or &lt;strong&gt;kubectl top pods&lt;/strong&gt;, or hit the &lt;strong&gt;Metrics Server endpoint&lt;/strong&gt; with kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes".&lt;/p&gt;

&lt;p&gt;In case you run into an issue where you are not able to scrape the metrics, you can head over to the issues tab of the metrics-server repository: &lt;a href="https://github.com/kubernetes-sigs/metrics-server/issues/"&gt;https://github.com/kubernetes-sigs/metrics-server/issues/&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  If you found this article useful, feel free to click the ❤️ many times or share it with your friends.
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>kubernetes</category>
      <category>monitoring</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Deep Dive with Provisioning AKS RBAC Enabled Kubernetes Cluster using Terraform.</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 07:58:58 +0000</pubDate>
      <link>https://dev.to/chrisedrego/deep-dive-with-provisioning-aks-rbac-enabled-kubernetes-cluster-using-terraform-1geg</link>
      <guid>https://dev.to/chrisedrego/deep-dive-with-provisioning-aks-rbac-enabled-kubernetes-cluster-using-terraform-1geg</guid>
      <description>&lt;p&gt;In this long, descriptive blog post we will understand what Infrastructure as Code is, the what, why, and how behind &lt;strong&gt;Terraform&lt;/strong&gt;, and how to provision a simple RBAC-enabled Azure Kubernetes Service (AKS) cluster using Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UQpGah2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3840/0%2AI0WuWjARmyGBhQlR.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UQpGah2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3840/0%2AI0WuWjARmyGBhQlR.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform, anyways?
&lt;/h2&gt;

&lt;p&gt;Terraform is an open-source, cross-platform Infrastructure as Code (IaC) tool from HashiCorp, available on Windows, Linux, macOS, and other operating systems. Terraform provides a better way to provision infrastructure on various platforms and cloud providers with the help of configuration files (e.g. main.tf). Terraform uses a high-level configuration language called HCL (HashiCorp Configuration Language), which is human-readable and easy to understand.&lt;/p&gt;

&lt;p&gt;In simple words, instead of manually configuring infrastructure by pointing and clicking through a user interface to provision virtual machines, storage, networking, and other resources on cloud providers such as AWS, Azure, and Google Cloud, we can &lt;strong&gt;automate&lt;/strong&gt; and &lt;strong&gt;version control&lt;/strong&gt; the same provisioning task with the help of Terraform. Along with all the goodness Terraform has to offer, it also abstracts away the underlying complexity while provisioning the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Co_ufQr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A9GC0Zm1p8NP452At.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Co_ufQr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A9GC0Zm1p8NP452At.gif" alt="***– too good isn’t it?***"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Imagine, a Life without Infrastructure as a code
&lt;/h3&gt;

&lt;p&gt;Suppose you have been given the task of provisioning a virtual machine on Azure. It involves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open your favorite browser (Chrome for me!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Head over to portal.azure.com&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the Virtual Machine Page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide the name, then &lt;strong&gt;select&lt;/strong&gt; the type, and then &lt;strong&gt;click&lt;/strong&gt; and &lt;strong&gt;click&lt;/strong&gt; and &lt;strong&gt;click&lt;/strong&gt; as you configure it, and wait till it gets created. Sounds simple, doesn't it?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now imagine getting the same task, but spinning up 100 virtual machines. That involves doing the same thing over and over again: &lt;em&gt;click, click, click… Sounds simple, doesn't it? But isn't that too much? (Frustrating)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--68A4m_HM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AXJl9btFc6Zp5Caqk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--68A4m_HM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AXJl9btFc6Zp5Caqk.gif" alt="I love my job."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  So, Why Infrastructure as a Code, then?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Np_s4dXE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ALhyj802CLNOx4VXd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Np_s4dXE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ALhyj802CLNOx4VXd.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below are a few reasons to choose Infrastructure as Code over the traditional point-and-click approach.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrate &lt;strong&gt;best practices&lt;/strong&gt; and standards as the Infrastructure is stored as code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version control&lt;/strong&gt; helps to incrementally implement and provision infrastructure, with the ability to roll back to a specific version if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helps the task of creation, scaling, and deletion to be easily &lt;strong&gt;automated&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tracking&lt;/strong&gt; of changes, as the infrastructure is version controlled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reusability&lt;/strong&gt;, as the code and configuration files can later be reused and shared among teams.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have understood the goodness Infrastructure as Code has to offer, let's quickly get an overview of how we would create an AKS cluster using Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Just an Overview.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LpDnb81G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Az2jZq0EGhky9JhF9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LpDnb81G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Az2jZq0EGhky9JhF9.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s discuss the overall flow of provisioning AKS using Terraform.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Terraform Authentication to Azure:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially, we create a Service Principal in Azure and provide its credentials to Terraform for authentication to Azure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Communication with Azure API:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After successfully authenticating to Azure using the credentials provided, Terraform would then communicate with Azure Resource Manager and send requests for provisioning the resource on Azure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Azure Provisioning the Resource:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Azure, or any cloud provider for that matter, checks the availability of the requested resource and then provisions it. Azure does most of the heavy lifting in the background and hides the underlying complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites.
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform CLI&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure Account&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Text Editor(Optional)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Azure CLI (Optional)&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;– you can skip this section if you already have terraform, text-editor, azure-cli installed on your machine.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Terraform.
&lt;/h3&gt;

&lt;p&gt;Terraform is a very simple command-line executable, which is available on all major platforms like Windows, Linux, and macOS as well as OpenBSD and Solaris.&lt;/p&gt;

&lt;p&gt;We will now quickly set up Terraform in a Windows environment in 3 easy steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download the executable of Terraform from the &lt;a href="https://www.terraform.io/downloads.html"&gt;official website&lt;/a&gt; and extract the executable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a folder named terraform under the C:/ drive (or any drive, for that matter), and move terraform.exe into that folder.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the full path of the folder that now contains terraform (in my case it's &lt;em&gt;C:/terraform/&lt;/em&gt;) to the environment variables. This can be done by typing &lt;em&gt;sysdm.cpl&lt;/em&gt; in the Run dialog, navigating to the &lt;em&gt;Advanced&lt;/em&gt; tab, and then clicking on &lt;em&gt;Environment Variables&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To verify that Terraform was successfully installed, open up the command prompt and type in &lt;strong&gt;terraform --version&lt;/strong&gt;; if everything went well, Terraform's version should be displayed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing a text-editor (Optional)
&lt;/h3&gt;

&lt;p&gt;Downloading a third-party text editor is completely optional, as you can also use &lt;strong&gt;Notepad or Vim&lt;/strong&gt;, which would be completely fine; but for ease of use and a bunch of other features, I prefer Visual Studio Code. You can download and install Visual Studio Code from the &lt;a href="https://code.visualstudio.com/"&gt;official link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After downloading and installing Visual Studio Code, you can install the &lt;a href="https://marketplace.visualstudio.com/items?itemName=mauve.terraform"&gt;Terraform extension&lt;/a&gt;, which helps in a lot of ways, such as syntax highlighting, linting, formatting, validation, and auto-completion.&lt;/p&gt;

&lt;p&gt;All the files presented in this demo are hosted in a GitHub repository: &lt;a href="https://github.com/chrisedrego/aks_terraform"&gt;https://github.com/chrisedrego/aks_terraform&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Azure CLI
&lt;/h3&gt;

&lt;p&gt;Azure CLI is available on all the major operating systems including Windows, macOS, and Linux. Please refer to the official &lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?view=azure-cli-latest"&gt;download link.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication to Azure using terraform.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IirfrC1h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ab1vHnXVJ3OgeQZ8M.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IirfrC1h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ab1vHnXVJ3OgeQZ8M.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to provision infrastructure on any given cloud provider, we first need to authenticate and make sure that we have the permissions required for the requested resources.&lt;/p&gt;

&lt;p&gt;As we are focusing on Azure as a cloud provider, let’s understand the various ways in which we can authenticate to Azure using Terraform.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Authentication using Azure-CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authentication using Managed Service Identity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authentication using Service Principal&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For now, we will be authenticating to Azure using a Service Principal; before that, let's understand what a Service Principal is.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a service principal?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R2P7xD8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AkFQyY8sObJq3zrm4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R2P7xD8t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AkFQyY8sObJq3zrm4.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Service Principal is a security identity that has certain roles and permissions assigned to it in order to access specific Azure resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do we need a service principal?
&lt;/h3&gt;

&lt;p&gt;When a Service Principal is created it generates credentials that are used by applications to authenticate to Azure and access cloud-based resources on Azure. In this example, the Service principal will be used by Terraform to authenticate to Azure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two ways to create a Service Principal
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Azure Portal&lt;br&gt;
 Azure CLI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Creating a Service Principal using Azure (Portal)
&lt;/h2&gt;

&lt;p&gt;Before creating a Service Principal, we need to make sure we grant just the permissions needed. Granting the Service Principal access to far more resources than expected exposes the system to vulnerabilities and thereby decreases overall safety &amp;amp; security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Log in to your &lt;a href="http://portal.azure.com"&gt;Azure Portal&lt;/a&gt;, type &lt;em&gt;"App registrations"&lt;/em&gt; in the search bar, and then head over to the App registrations page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Click on &lt;strong&gt;New registration&lt;/strong&gt;, after which you'll see a page that asks for the name of the application, the supported account types, and a redirect URL.&lt;/p&gt;

&lt;p&gt;Provide a unique application name, after which you can provide a redirect URL (optional). The redirect URL can be set to &lt;a href="http://localhost"&gt;http://localhost&lt;/a&gt; or any valid domain name that has HTTPS enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8aXinpsk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ANo6kE3s3a7u6TyQZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8aXinpsk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ANo6kE3s3a7u6TyQZ.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the Service Principal there is more to it, as we need to configure the required permissions and also grab the credentials needed for authentication.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v9w83cjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2152/0%2Abt-dP5wGkJ8L19bz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v9w83cjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2152/0%2Abt-dP5wGkJ8L19bz.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, we need to take note of the Application ID (client_id) and Directory ID (tenant_id), and then head over to the &lt;em&gt;Certificates &amp;amp; secrets&lt;/em&gt; tab to get access to the secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B1T_D38W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2430/0%2Ab3JrT0tA7ryeQvQ5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B1T_D38W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2430/0%2Ab3JrT0tA7ryeQvQ5.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After grabbing hold of the &lt;strong&gt;client_id, client_secret &amp;amp; tenant_id&lt;/strong&gt;, head over to your Azure Subscriptions page and get the &lt;strong&gt;Subscription ID&lt;/strong&gt;, which will also be needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding roles/permission to SERVICE PRINCIPAL
&lt;/h3&gt;

&lt;p&gt;After successfully creating the Service Principal for Terraform, we need to assign it the specific roles that will allow Terraform to provision the requested resources.&lt;/p&gt;

&lt;p&gt;We can assign roles to the Service Principal for an entire subscription or just for a specific resource group; below I have attached screenshots for both approaches.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adding Contributor access to the Service Principal at the subscription level.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CSMrkw-0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2430/0%2AyOrzapLxwE7PiHfs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CSMrkw-0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2430/0%2AyOrzapLxwE7PiHfs.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Adding Contributor access to the Service Principal at a specific Resource Group level.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5XHDSsTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2446/0%2ArWyZLPG0mwZkl038.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5XHDSsTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2446/0%2ArWyZLPG0mwZkl038.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating and Assigning roles to Service Principal (Azure CLI)
&lt;/h3&gt;

&lt;p&gt;(You can skip this step if you already used the above approach by using the Azure Portal)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hXuKhXvt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AFmjNG2JXAP0-sjJ-.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hXuKhXvt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AFmjNG2JXAP0-sjJ-.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similar steps using the Azure Portal UI are covered above; you can also skip this section if you don't have Azure CLI installed on your machine.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Login to Azure using &lt;strong&gt;Azure CLI&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;After authenticating to Azure, select a specific subscription ID in case you have many; you can view your subscription IDs with:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az account list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Select the subscription ID of the account and then set the account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az account set --subscription "SUBSCRIPTION_ID"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Now, after switching the subscription account on your machine, we can create the Service Principal and assign it Contributor access for the subscription:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/SUBSCRIPTION_ID"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This outputs a JSON object which contains the appId (client ID), tenant ID, and password:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "azure-cli-XXX",
  "name": "http://azure-cli-XXX",
  "password": "0000-0000-0000-0000-000000000000",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We now have a Service Principal created with the Contributor role, along with its client_id, client_secret/password, tenant_id, and subscription_id, which we will be using in Terraform. So we are all set to start (terraforming)!&lt;/p&gt;

&lt;h2&gt;
  
  
  Now, Let's start Terraforming
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BpCm5pfx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ag_V3M45tY3Xls3DT.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BpCm5pfx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ag_V3M45tY3Xls3DT.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the code, mentioned here is present on my &lt;a href="https://github.com/chrisedrego/aks_terraform"&gt;Github Repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the sake of simplicity, we will just create a folder named aks-basic, which will have three files. Let's get a basic understanding of each of these files.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;main.tf (configuration)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;variables.tf (variables)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;outputs.tf (output)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;main.tf:&lt;/strong&gt; contains the details of the cloud provider and the resources to be provisioned on the specified cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf:&lt;/strong&gt; contains the list of variables and their values, which are referenced inside the main.tf file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;outputs.tf:&lt;/strong&gt; contains the values that are returned/output after successfully provisioning the infrastructure, which can later be used by other modules.&lt;/p&gt;
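&lt;p&gt;Since outputs.tf itself is not shown in this post, here is a minimal sketch of what it could contain for this cluster; the output names are my own choice, while azurerm_kubernetes_cluster.testcluster matches the resource defined in main.tf.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical outputs.tf: exposes values of the AKS cluster
# declared in main.tf so that other modules can consume them.
output "cluster_name" {
  value = "${azurerm_kubernetes_cluster.testcluster.name}"
}

# Raw kubeconfig for connecting to the cluster with kubectl.
output "kube_config" {
  value = "${azurerm_kubernetes_cluster.testcluster.kube_config_raw}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;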

&lt;p&gt;You can think of a module in Terraform as a function: a combination of (main.tf + variables.tf + outputs.tf), where main.tf is the body of the function that performs the actual operations, variables.tf provides the inputs passed to the function, and outputs.tf holds the return values the module gives back.&lt;/p&gt;
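&lt;p&gt;To make the function analogy concrete, here is a minimal sketch of calling such a module; the ./modules/aks path and the input names are hypothetical.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "aks" {
  # Hypothetical folder containing main.tf, variables.tf and outputs.tf
  source = "./modules/aks"

  # Inputs: these map onto the variables declared in variables.tf
  cluster_name            = "testcluster"
  resource_group_location = "eastus"
}

# Return values: anything declared in outputs.tf is available
# to the caller as module.aks.OUTPUT_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;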

&lt;blockquote&gt;
&lt;p&gt;main.tf&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “azurerm” {

# Azure Provider version (Optional)

version = “=1.34.0″

# Credentials are specified authenticating to Azure

client_id = “${var.client_id}“

client_secret = “${var.client_secret}“

tenant_id = “${var.tenant_id}“

subscription_id = “${var.subscription_id}“

}

resource“azurerm_resource_group” “rg”{

name = “${var.resource_group_name}“

location = “${var.resource_group_location}“

}

resource“azurerm_kubernetes_cluster” “testcluster”{

name = “${var.cluster_name}“

location = “${var.resource_group_location}“

resource_group_name = “${azurerm_resource_group.rg.name}“

dns_prefix = “dns”

agent_pool_profile {

name = “agentpool”

count = 3

vm_size = “Standard_B2ms”

}

service_principal {

# Specifying a Service Principal for AKS Cluster

client_id = “${var.client_id}“

client_secret = “${var.client_secret}“

}

# Tag’s for AKS Cluster’s environment along with  nclustername

tags = {

environment = “test”

cluster_name = “${var.cluster_name}“

}

# Enable Role Based Access Control

role_based_access_control {

enabled = true

}

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Breaking down, the main.tf
&lt;/h3&gt;

&lt;p&gt;Let's break down main.tf to get a better understanding of what's going on in the background.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider Block&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “azurerm” {

version = “1.28.0”

client_id = “${var.client_id}“

client_secret = “${var.client_secret}“

tenant_id = “${var.tenant_id}“

subscription_id = “${var.subscription_id}“

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we already know, Terraform can be used to provision cloud resources on multiple cloud providers such as AWS, Azure, GCP, and Heroku. A provider is responsible for understanding API interactions and exposing resources. The provider block comes into the picture at the very initial phase of interacting with the cloud provider (Azure); you can think of it as the entry point that decides on which cloud provider we will be provisioning resources. To learn more about the various cloud providers Terraform supports, refer to the official &lt;a href="https://www.terraform.io/docs/providers/index.html"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If we look carefully at this block, we are specifying the Azure Resource Manager (azurerm) provider along with the credentials from the Service Principal to authenticate to Azure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;azurerm_kubernetes_cluster&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource“azurerm_resource_group” “rg”{

# Name/Location of the Resource Group in which the

AKS cluster will be created.

name = “${var.resource_group_name}“

location = “${var.resource_group_location}“

}

resource“azurerm_kubernetes_cluster” “testcluster”{

name = “${var.cluster_name}“

location = “${var.resource_group_location}“

resource_group_name = “${azurerm_resource_group.rg.name}“

dns_prefix = “-dns”

agent_pool_profile {

name = “agentpool”

count = 3

vm_size = “Standard_B2ms”

os_type = “Linux”

os_disk_size_gb = 100

}

service_principal {

client_id = “${var.client_id}“

client_secret = “${var.client_secret}“

}

tags = {

environment = “test”

cluster_name = “${var.cluster_name}“

}

role_based_access_control {

enabled = true

}

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;em&gt;azurerm_kubernetes_cluster&lt;/em&gt; block is used to define the overall configuration needed to spin up a Kubernetes cluster. In this case, we won't be configuring a highly advanced Kubernetes cluster with all the subnets and other networking details specified; to learn more about advanced Kubernetes cluster configuration, refer to the official &lt;a href="https://www.terraform.io/docs/providers/azurerm/r/kubernetes_cluster.html"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt; &amp;amp; &lt;strong&gt;location&lt;/strong&gt;, as we know, specify the name and the location where the AKS cluster will be created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;resource_group_name&lt;/strong&gt; refers to the name of the resource group specified in the block above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;dns_prefix&lt;/strong&gt; is the DNS prefix that will be used for the API server of the AKS cluster. In our case, we have specified it as dns, which will be combined with a unique domain name to form the unique endpoint of the AKS cluster's API server.&lt;/p&gt;

&lt;p&gt;Example: dns-3xMXa.hcp.eastus.azmk8s.io&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;agent_pool_profile&lt;/strong&gt; contains the nitty-gritty details about the type &amp;amp; count of virtual machines that will be used, along with their disk size and the OS installed on them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tags&lt;/strong&gt; are optional but prove useful for tagging or labeling resources on Azure that perform a certain operation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;role_based_access_control&lt;/strong&gt; is set to enabled, which makes sure that the Kubernetes cluster will be RBAC-enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource group&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “azurerm_resource_group” “rg” {

name = “${var.resource_group_name}“

location = “${var.resource_group_location}“

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A resource group in Azure is used to logically group resources. As we are provisioning an AKS cluster in Azure, we provide a resource group in which the cluster will be created.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;variables.tf&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable “client_id” {

description = “contains the Client Id for service principal”

client_id = “XXXXX-XXXX-XXXXX-XXXXX”

}

variable “client_secret” {

description = “contains the Client Secret for service principal”

client_id = “XXXXX-XXXX-XXXXX-XXXXX“

}


variable “tenant_id” {

description = “contains the Tenant Id for service principal”

client_id = “XXXXX-XXXX-XXXXX-XXXXX”

}


variable “subscription_id” {

description = “contains the Subscription Id for service principal”

client_id = “XXXXX-XXXX-XXXXX-XXXXX“

}


variable “resource_group_name” {

description = “contains the name of the Resource Group”

default = “test_rg”

}


variable “resource_group_location” {

description = “contains the location Resource Group of cluster”

default = “XXXXX”

}

variable “cluster_name” {

description = “contains AKS Cluster Name”

default = “XXXXX”

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we look closely at main.tf, we haven't hardcoded most of the values; rather, they all refer to var followed by the name of a variable, and all of these variables are specified in variables.tf.&lt;/p&gt;

&lt;p&gt;Please note that it is not a recommended approach to store secrets/credentials in plain text in the &lt;em&gt;variables.tf&lt;/em&gt; file; you could instead supply these values through environment variables (or as secrets in a CI/CD environment) to avoid exposure and thereby compromising security.&lt;/p&gt;
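&lt;p&gt;As a minimal sketch of the safer approach: declare the sensitive variables without defaults, and let Terraform read their values from environment variables prefixed with TF_VAR_ followed by the variable name (the example values below are placeholders).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sensitive variables declared without defaults, so no secrets
# live in version control.
variable "client_id" {
  description = "Client Id for the service principal"
}

variable "client_secret" {
  description = "Client Secret for the service principal"
}

# Terraform automatically picks these up from environment variables,
# e.g. set before running terraform plan/apply:
#   export TF_VAR_client_id="00000000-0000-0000-0000-000000000000"
#   export TF_VAR_client_secret="some-secret-value"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;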

&lt;p&gt;Now, after understanding the nitty-gritty details of main.tf and variables.tf, let's learn how to plan and apply the configuration present in main.tf on Azure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Im5qfDEL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A3StSqDSqdfZYp6-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Im5qfDEL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A3StSqDSqdfZYp6-4.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TERRAFORM: STAGES
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s8RoKM-4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AzB3eEibJf_bo2p4x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s8RoKM-4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AzB3eEibJf_bo2p4x.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s quickly understand what each phase has to offer, as we will be going through the same phases while provisioning an AKS cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform init&lt;/strong&gt; initializes the current module or folder containing main.tf; if a cloud provider block is defined inside main.tf in the directory where terraform init is run, it goes ahead and downloads the provider binary needed to communicate with the APIs of that specific cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform plan&lt;/strong&gt; authenticates to the cloud provider and then provides a summary of the changes that would be made if the configuration present in main.tf were applied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform apply:&lt;/strong&gt; after running terraform plan and confirming that the proposed changes should be applied, we can run terraform apply, which, with our approval, goes ahead and starts provisioning the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform destroy:&lt;/strong&gt; after successfully provisioning the resources on the cloud provider, if we want to tear them down, we can run terraform destroy, which goes ahead and destroys the resources.&lt;/p&gt;

&lt;p&gt;Let’s understand each stage in a bit of detail here.&lt;/p&gt;

&lt;h3&gt;
  
  
  TERRAFORM: INIT
&lt;/h3&gt;

&lt;p&gt;We need to navigate to the module/directory that contains the code (main.tf), after which we run terraform init.&lt;/p&gt;

&lt;h3&gt;
  
  
  What magic does Terraform init do?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H0hON5Iv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AY4ti8UdDKFpEd8rq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H0hON5Iv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AY4ti8UdDKFpEd8rq.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we run terraform init, it initializes any external modules specified in the main.tf; if a provider block is declared, it also downloads the binaries needed for future communication with that specific cloud provider. In this case, running terraform init downloads the Azure provider binaries into the .terraform directory; this binary handles communication with the Azure API.&lt;/p&gt;
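&lt;p&gt;As a minimal sketch (resource name and location here are illustrative), a main.tf whose provider block triggers this download could look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "azurerm" {
  # the features block is required by recent azurerm provider versions
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "TEST"
  location = "East US"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;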

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output after running: terraform init 
Initializing the backend... 
Initializing provider plugins... 
Terraform has been successfully initialized! 
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work. 
If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  TERRAFORM: PLAN
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rY39UYF7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AJz-VICYozxbwjHyk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rY39UYF7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AJz-VICYozxbwjHyk.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you run the command terraform plan, it gives us an overview of what the infrastructure will look like after the configuration is applied. The resulting output lists the resources that would either be created (+), removed (-), or modified (~).&lt;/p&gt;

&lt;p&gt;terraform plan can be compared to the Linux command diff: (+) additions, (-) removals, (~) modifications.&lt;/p&gt;

&lt;p&gt;An execution plan has been generated and is shown below.&lt;/p&gt;

&lt;p&gt;Resource actions are indicated with the following symbols:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+ create

Terraform will perform the following actions:

# azurerm_kubernetes_cluster.testcluster will be created

+ resource “azurerm_kubernetes_cluster” “testcluster” {

+ dns_prefix = “dns”

+ resource_group_name = “TEST”

+ “environment” = “test”

}

+ addon_profile {

+ aci_connector_linux {

+ enabled = (known after apply)

}

+ http_application_routing {

+ enabled =

(known after apply)

}

........

........

+ service_principal {

+ client_id = “92409b6a-00eb-40f7–9af6–16faef7206c8″

+ client_secret = (sensitive value)

}

}
# azurerm_resource_group.rg will be created

+ resource “azurerm_resource_group” “rg” {id = (known after apply)}
Plan: 2 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If we look carefully, Terraform gives us a complete overview of how the changes will be applied: the (+) sign means the specific resource will be added. This immensely helps when we don’t want to apply the changes directly, but would rather see what changes will occur; if the output seems suitable, we then go ahead and apply the plan.&lt;/p&gt;
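&lt;p&gt;A handy pattern here is to save the plan to a file, so that terraform apply later applies exactly the changes we reviewed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out=tfplan   # save the reviewed plan to a file
terraform apply tfplan       # apply exactly that plan (no re-prompt)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;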

&lt;h3&gt;
  
  
  TERRAFORM APPLY
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--umSy9MXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AxrKhv7rM_N6XBJcA.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--umSy9MXR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AxrKhv7rM_N6XBJcA.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fingers crossed&lt;/em&gt;, after getting a rough idea of how the state of our infrastructure will look from &lt;strong&gt;&lt;em&gt;terraform plan&lt;/em&gt;&lt;/strong&gt;, we can now go ahead and run &lt;strong&gt;terraform apply&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does terraform apply really do?
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;terraform apply&lt;/strong&gt; command does the actual heavy lifting: it goes ahead and ensures that the configuration mentioned in the configuration file is provisioned on the cloud provider.&lt;/p&gt;

&lt;p&gt;Running &lt;strong&gt;terraform apply&lt;/strong&gt; re-runs &lt;strong&gt;terraform plan&lt;/strong&gt;, outputs an overview of the proposed state of the infrastructure along with a confirmation prompt (yes or no) to apply the changes, and also generates a local state file which contains the current state of the infrastructure on the cloud for the resources mentioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TfpShHdu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ab4eKMHWkH570fnjs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TfpShHdu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ab4eKMHWkH570fnjs.jpg" alt="Click Yes, and let the journey begin."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  (YES == wait)
&lt;/h2&gt;

&lt;p&gt;After entering &lt;strong&gt;yes&lt;/strong&gt; at the terraform apply prompt, just sit back and wait, as it might take some time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1G7k87EA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A2rDGBIf5jA5xmsR5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1G7k87EA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A2rDGBIf5jA5xmsR5.jpg" alt="***waiting might be forever***"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;azurerm_resource_group.rg: Creating…

&lt;p&gt;azurerm_resource_group.rg: Creation complete after 5s [id=/subscriptions/f7e20517–6ec1–460d-9712-aa3ee55ccc6a/resourceGroups/TEST]&lt;/p&gt;

&lt;p&gt;.testcluster: Creating…&lt;/p&gt;

&lt;p&gt;.testcluster: Still creating… [10s elapsed]&lt;/p&gt;

&lt;p&gt;…..&lt;/p&gt;

&lt;p&gt;.testcluster: Creation complete after 13m27s&lt;/p&gt;

&lt;p&gt;[id=/subscriptions/XXXXXX/resourcegroups/TEST/providers&lt;/p&gt;

&lt;p&gt;/Microsoft.ContainerService/managedClusters/testcluster]&lt;/p&gt;

&lt;p&gt;Apply complete! Resources: 2 added, 0 changed, 0 destroyed.&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Boom! Finally, we have an AKS cluster launched.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_aDBr5Ry--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6000/0%2A1BLCPKwEEoSpFha0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_aDBr5Ry--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/6000/0%2A1BLCPKwEEoSpFha0" alt="Photo by [SpaceX](https://unsplash.com/@spacex?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bpf-1cjx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2352/0%2AZhoz4RM3kXXZMkj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bpf-1cjx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2352/0%2AZhoz4RM3kXXZMkj3.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We were successfully able to provision an AKS cluster with Terraform.&lt;/p&gt;

&lt;p&gt;If you face any issues, please do let me know. All the code mentioned in this blog post is available in my GitHub repository (&lt;a href="https://github.com/chrisedrego/aks_terraform"&gt;aks_terraform&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Till then happy Terraforming… :)&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  If you found this article useful, feel free to click the ❤️ Heart many times or share it with your friends.
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>azure</category>
    </item>
    <item>
      <title>How to Setup CI/CD Pipeline using Gitlab-CI to Deploy to Azure Storage &amp; Azure CDN.</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 07:56:58 +0000</pubDate>
      <link>https://dev.to/chrisedrego/how-to-setup-ci-cd-pipeline-using-gitlab-ci-to-deploy-to-azure-storage-azure-cdn-15g5</link>
      <guid>https://dev.to/chrisedrego/how-to-setup-ci-cd-pipeline-using-gitlab-ci-to-deploy-to-azure-storage-azure-cdn-15g5</guid>
      <description>&lt;h3&gt;
  
  
  A complete zero-to-hero guide to setting up a CI/CD pipeline using Gitlab-CI to deploy to Azure Storage &amp;amp; Azure CDN
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2ABJIzAC206YHrB7Km.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2ABJIzAC206YHrB7Km.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the purpose of this demo, I have already created a simple Angular 7 application hosted on Gitlab; it is a simple digital clock that looks something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AP54UIyNKdpqxE2rD.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AP54UIyNKdpqxE2rD.png" alt="**Link to the Gitlab Repository: [**https://gitlab.com/chrisedrego/tym_mchyn.git](https://gitlab.com/chrisedrego/tym_mchyn.git)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Overview
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2048%2F0%2A6TqIdnjER3Nptlml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2048%2F0%2A6TqIdnjER3Nptlml.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above diagram describes the whole process, right from a developer pushing the code, to the point where the CI/CD Pipeline builds &amp;amp; deploys the code to Azure Storage (Blob) which is linked to Azure CDN Endpoint.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The developer performs development of the application locally, then &lt;strong&gt;commits and pushes the code&lt;/strong&gt; to the version control system, in our case &lt;a href="https://gitlab.com/" rel="noopener noreferrer"&gt;Gitlab&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We would then have &lt;a href="https://gitlab.com/" rel="noopener noreferrer"&gt;Gitlab&lt;/a&gt; perform the CI/CD in the form of the steps mentioned in &lt;strong&gt;gitlab-ci.yml&lt;/strong&gt; file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From Angular 2 onwards, Angular uses TypeScript. TypeScript was released by Microsoft and is a super-set of JavaScript. For the purpose of this tutorial we are using Angular 7, which uses TypeScript; since the browser doesn’t understand TypeScript, we need to build the application, which converts it into native JavaScript that the browser can render.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After &lt;strong&gt;building the application&lt;/strong&gt;, which converts the Angular 7 application from TypeScript to JavaScript, we have a dist folder containing the final build of static (HTML, CSS, JS) files, which are then uploaded to Azure Storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After uploading the files to Azure Storage, for hassle-free access to the Angular Application by the end-user, we would be using a &lt;strong&gt;CDN Endpoint&lt;/strong&gt; which helps us to further optimize the delivery, boost performance with the help of caching as well as adding redirection rules which we would be discussing later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CDN Endpoint can then be linked to the Custom domain name, by creating CNAME Record in your &lt;strong&gt;DNS Zone.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that we have understood the overall flow for deployment, let’s get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Azure Resources (BLOB + CDN ENDPOINT):
&lt;/h3&gt;

&lt;p&gt;In this case, we will be deploying the Angular application’s static build content to Azure Storage and making it accessible using a CDN Endpoint. We need to create a Storage Account along with a blob, as well as a CDN Endpoint pointing to the respective blob containing the static build files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Azure Storage Account (WHAT? WHY? HOW?)
&lt;/h3&gt;

&lt;p&gt;An Azure Storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. For our example we will be using blobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a Storage Account&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the Azure portal, search for “storage account” and &lt;strong&gt;create a Storage account&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A9Pjf4sIn1_sb6HFy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A9Pjf4sIn1_sb6HFy.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;After the creation of the Storage Account, &lt;strong&gt;create a Blob/Container&lt;/strong&gt; to store the static Angular build files.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AwnNaVnioxTInF29V.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AwnNaVnioxTInF29V.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;After the Blob is created, we need to &lt;strong&gt;grab the Access Keys for authentication&lt;/strong&gt; on the Gitlab-CI platform in order to upload content to Azure.&lt;/li&gt;
&lt;/ol&gt;
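&lt;p&gt;If you prefer the CLI over the portal, the same storage resources can be sketched with the Azure CLI (the resource group, account and container names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az group create --name TEST --location eastus
az storage account create --name tymmychn --resource-group TEST --location eastus --sku Standard_LRS
az storage container create --name tymmychn --account-name tymmychn
az storage account keys list --resource-group TEST --account-name tymmychn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;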

&lt;p&gt;&lt;strong&gt;Creating Azure CDN Profile + Endpoint.&lt;/strong&gt;&lt;br&gt;
For caching and increased performance we will be using &lt;a href="https://docs.microsoft.com/en-us/azure/cdn/" rel="noopener noreferrer"&gt;Azure CDN&lt;/a&gt;. We first need to create a CDN Profile, under which we will create CDN Endpoints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2ATndVS_Bl2NbqCCJt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2ATndVS_Bl2NbqCCJt.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We used Verizon Premium, which allows us to configure redirection rules, along with loads of other features; more on this later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating &amp;amp; Configuring AZURE CDN ENDPOINT.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a CDN Endpoint so the application is accessible. In this case, we need to click on +Endpoint, enter the name of the CDN Endpoint, and select the Origin Type as Storage, as we need to link the CDN Endpoint to the Storage blob. We then select the Origin Hostname with the Storage Account hostname URL, and finally provide the Origin path, which includes the blob name.&lt;/p&gt;
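&lt;p&gt;The same profile and endpoint can also be sketched with the Azure CLI (profile, endpoint and origin names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az cdn profile create --name tymmchyn --resource-group TEST --sku Premium_Verizon
az cdn endpoint create --name tymmychn --profile-name tymmchyn --resource-group TEST --origin tymmychn.blob.core.windows.net --origin-path /tymmychn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;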

&lt;p&gt;&lt;strong&gt;Optimised types in CDN Endpoint.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are two optimisation types:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;General web delivery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamic site acceleration.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After having used both types, I prefer Dynamic site acceleration, as it is more optimized, with better performance and faster caching compared to General web delivery, but with a higher price tag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Probe Path:&lt;/strong&gt; The probe path is a file placed on the origin server, used to optimize network routing configurations for the CDN. In our case, the origin server is the Storage blob to which the CDN Endpoint points.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2256%2F0%2A6dmSF9WcQ9RjraoU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2256%2F0%2A6dmSF9WcQ9RjraoU.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After successfully creating the CDN Endpoint, here’s how it looks. We now need to make sure that whenever the user enters the CDN Endpoint URL linked to the storage blob, the website is served. That is not how it works by default, as the CDN is not able to locate index.html. It’s never recommended to make the user enter &lt;a href="http://www.domainname.com/index.html" rel="noopener noreferrer"&gt;&lt;strong&gt;www.domainname.com/index.html&lt;/strong&gt;&lt;/a&gt;, as that adds the extra overhead of appending index.html each time they want to access the website. To fix this, we will use the Verizon CDN portal to configure a rewrite rule for index.html as well as http-to-https redirection.&lt;/p&gt;

&lt;p&gt;Configuring rules for CDN endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AnVLeZQpsXjeOhuZF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AnVLeZQpsXjeOhuZF.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to click on Manage, which will redirect us to Verizon Portal in order to configure the rules.&lt;/p&gt;

&lt;p&gt;In this case, we will configure two rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. HTTP-to-HTTPS Redirection Rule:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This performs a URL redirect: if the user requests the site over http, it redirects to https.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. URL Rewrite (index.html):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This rule performs a URL rewrite so that even if the user enters just the CDN endpoint URL, index.html is appended automatically, removing the overhead for the user of appending index.html to the domain name each time.&lt;/p&gt;

&lt;p&gt;STEPS TO CONFIGURE THE RULES.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2642%2F0%2AGPz652ywzrApVIkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2642%2F0%2AGPz652ywzrApVIkv.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the CDN portal opens up, we need to click on ADN (since we have DSA enabled as the CDN type), then click on Rules Engine, after which we can configure the rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. http-to-https redirection&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;rules schema-version=“2” rulesetversion=“2”

rulesetid=“XXXXXXX”

xmlns=“http://www.whitecdn.com/schemas/rules/2.0/rulesSchema.xsd”&amp;gt;

&amp;lt;rule id=“XXXXXXX” platform=“adn” status=“active”

version=“0” custid=“XXXXXXX”&amp;gt;

&amp;lt;description&amp;gt;tymmychn_http_to_https&amp;lt;/description&amp;gt;

&amp;lt;!–If–&amp;gt;

&amp;lt;match.request-scheme value=“http”&amp;gt;

&amp;lt;feature.url-redirect code=“301”

pattern=“/XXXXXXX/tymmychn/tymmychn/(.*)”

value=“https://%{host}/$1” /&amp;gt;

&amp;lt;/match.request-scheme&amp;gt;

&amp;lt;/rule&amp;gt;

&amp;lt;/rules&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2. url rewrite for index.html&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;rules schema-version=“2” rulesetversion=“2”

rulesetid=“XXXX”

xmlns=“http://www.whitecdn.com/schemas/rules/2.0/rulesSchema.xsd”&amp;gt;

&amp;lt;rule id=“XXXX” platform=“adn” status=“active”

version=“0” custid=“XXXX”&amp;gt;

&amp;lt;description&amp;gt;tymmychn_rewrite_index.html&amp;lt;/description&amp;gt;

&amp;lt;!–If–&amp;gt;

&amp;lt;match.always&amp;gt;

&amp;lt;feature.url-user-rewrite

pattern=“/XXXX/tymmychn/tymmychn/$”

value=“/XXXX/tymmychn/tymmychn/index.html” /&amp;gt;

&amp;lt;/match.always&amp;gt;

&amp;lt;/rule&amp;gt;

&amp;lt;/rules&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After applying the rules, here is how it should look.&lt;/p&gt;

&lt;p&gt;INDEX.HTML — URL REWRITE&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2692%2F0%2Arb_H6Qc8obbiiDP2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2692%2F0%2Arb_H6Qc8obbiiDP2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For HTTP-TO-HTTPS Redirection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2694%2F0%2AlsIOjcE5-jWDs6V9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2694%2F0%2AlsIOjcE5-jWDs6V9.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After saving the rules, it will take some time for the changes to be reflected, which can be up to 4 hours, and in the worst case even more. Once the Pending XML changes to Active XML, that is an indication that the changes have been applied.&lt;/p&gt;

&lt;h3&gt;
  
  
  BACK TO GITLAB.
&lt;/h3&gt;

&lt;p&gt;We have now set up the whole infrastructure needed on the Azure side: a storage account with blobs that store the Angular static build files (HTML, CSS and JavaScript), served to the end-user with the help of a CDN Endpoint for fast delivery.&lt;/p&gt;

&lt;p&gt;Now it’s time to configure the CI/CD pipeline on Gitlab. As we already know, GitLab provides a simple way to configure CI/CD on each repository by mentioning all the steps in the .gitlab-ci.yml file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: chrisedrego/azng_ubuntu:latest

stages:
  - build_deploy

build_deploy:
  stage: build_deploy
  script:
    - az login -u $AZ_USER_NAME -p $AZ_USER_PASSWORD
    - az account set --subscription $AZ_SUBSCRIPTION_ID
    - npm i -g @angular/cli &amp;amp;&amp;amp;  npm install --save-dev @angular-devkit/build-angular
    # - /usr/bin/ng build --output-hashing none --base-href https://tymmychn.blob.core.windows.net/tymmychn/
    - /usr/bin/ng build --output-hashing none --base-href https://tymmychn.azureedge.net/
    - az storage blob upload-batch -s ./dist/tymmchyn -d $BLOB_NAME --account-key $ACCOUNT_KEY --account-name $ACCOUNT_NAME 
    - az cdn endpoint purge -g $RESOURCE_GROUP --name tymmychn --profile-name tymmchyn --content-paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://gitlab.com/chrisedrego/tym_mchyn/-/blob/master/.gitlab-ci.yml" rel="noopener noreferrer"&gt;https://gitlab.com/chrisedrego/tym_mchyn/-/blob/master/.gitlab-ci.yml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This GitLab repository contains the codebase along with the .gitlab-ci.yml. Let’s quickly examine the .gitlab-ci.yml and what it does.&lt;/p&gt;

&lt;p&gt;Line 1: IMAGE refers to the Docker image used for this example, which contains the needed tools: the Azure CLI, Node.js and NPM.&lt;/p&gt;

&lt;p&gt;After that we declare the stages, followed by a single stage named “build_deploy”.&lt;/p&gt;

&lt;p&gt;First, we authenticate to the Azure using az login and the username and password which are stored as secret variables in this case. More on Gitlab Secrets on this &lt;a href="https://docs.gitlab.com/ee/ci/variables/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After authenticating, we select the right subscription in case you have multiple subscriptions (lines 9 and 10).&lt;/p&gt;

&lt;p&gt;Gitlab already does the overhead of cloning the repository on the branch we are currently on, so taking that into consideration we can now go ahead and perform the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install the Angular CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Angular-Devkit (optional).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install all the dependent packages listed in package.json for the Angular application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the Angular CLI to build the Angular application, setting the base-href to the CDN Endpoint. The reason for setting the base-href is that once the application is hosted and we try to access it from the CDN endpoint at runtime, it fails because it is not able to locate the absolute paths of the JavaScript files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After the build is created in the folder specified in angular.json (present in your root directory), upload the files to the Storage Blob.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After successfully uploading the contents, purge the CDN Endpoint so that after each commit/push the end-user gets the latest build rather than content previously cached on the CDN edge nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, if the pipeline was successful and the Verizon rules have been applied, we can head over to our CDN Endpoint and see the final magic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A8uzu4oEJdlLHvtRJ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A8uzu4oEJdlLHvtRJ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, that looks good.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h1&gt;
  
  
  If you found this article useful, feel free to press the ❤️ many times or share it with your friends.
&lt;/h1&gt;
&lt;/blockquote&gt;

</description>
      <category>microservices</category>
      <category>angular</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>21 Best Practises in 2021 for Dockerfile</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 07:50:42 +0000</pubDate>
      <link>https://dev.to/chrisedrego/21-best-practise-in-2021-for-dockerfile-1dji</link>
      <guid>https://dev.to/chrisedrego/21-best-practise-in-2021-for-dockerfile-1dji</guid>
      <description>&lt;p&gt;“This is a curate long-list of 20+ Best Practises for Dockerfile for the Year 2020”.&lt;/p&gt;

&lt;p&gt;Since the inception of Docker on 20th March 2013, it has taken the world by storm, revolutionizing how easily applications can be packaged and delivered across multiple platforms. Although containers existed even before the Docker era, what made Docker stand out from the crowd and become globally famed was the fact that it bootstraps away most of the underlying complexity involved with containers in general, making it readily available on all major operating systems &amp;amp; platforms, with the power of the open-source community always backing it for better support.&lt;/p&gt;

&lt;p&gt;Docker has always been my personal favourite among the technology shifts of recent years. Just as bare-metal machines transitioned to virtual machines, Docker is replacing virtual machines with containers, for all the right reasons. Docker, in a nutshell, involves a few basic components. It starts with a simple Dockerfile, a plain-text file containing a straightforward set of instructions that define what your application should contain and how it should run. From the Dockerfile we build an image (consider this as an executable, created by compiling some code). Once the image is built, we launch it; launching an image creates a container, which is a running instance of the image, just as launching an executable gives you a running instance of that executable.&lt;/p&gt;

&lt;h2&gt;
  
  
  CHOOSE MINIMAL BASE IMAGES
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2Avz9IdKespGFFxxOf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2Avz9IdKespGFFxxOf.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every custom image we build in Docker sits on top of an existing base image, so we should cherry-pick base images that are minimal and compact. Various lightweight flavours are available, including Alpine and BusyBox, and distribution-specific images such as Debian, Ubuntu, and CentOS often provide &lt;strong&gt;-slim&lt;/strong&gt; or &lt;strong&gt;-minimal&lt;/strong&gt; variants to choose from.&lt;/p&gt;

&lt;p&gt;A base image needs to be the right mix: it should offer the support, tools, and binaries your application needs while staying lightweight. Sometimes choosing a lightweight image involves a trade-off in compatibility, as it may lack dependencies or libraries needed to run the application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:alpine

WORKDIR /app

COPY package.json /app

RUN npm install

COPY . .

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  REMOVE CACHE PACKAGES
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F8064%2F0%2A0Sq1g_FNhDD8JmqR" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F8064%2F0%2A0Sq1g_FNhDD8JmqR" alt="Photo by [Gary Chan](https://unsplash.com/@gary_at_unsplash?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For an application to run inside a container, it often requires a runtime environment, dependencies, and binaries. When installing packages with a package manager such as apt, apk, or yum, the manager first downloads the packages from remote repositories onto the local machine and then installs them. The downloaded package files often remain cached afterwards, consuming unnecessary space. The recommendation is to remove these cached package files once installation is done, which further optimizes the Docker image.&lt;/p&gt;

&lt;p&gt;Depending on the image used, different package managers store their cache in different default locations, some of which are listed below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image/Distro:&lt;/strong&gt; Debian / Ubuntu&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package Manager:&lt;/strong&gt; apt&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Location of the Cache:&lt;/strong&gt; /var/cache/apt/archives&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image/Distro:&lt;/strong&gt; Alpine&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package Manager:&lt;/strong&gt; apk&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Location of the Cache:&lt;/strong&gt; /var/cache/apk&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image/Distro:&lt;/strong&gt; CentOS&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Package Manager:&lt;/strong&gt; yum&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Location of the Cache:&lt;/strong&gt; /var/cache/yum&lt;/p&gt;
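&lt;p&gt;For the Debian/Ubuntu case, the same clean-up idea can be sketched as below (the curl package is just an illustration); apt also keeps downloaded package lists under /var/lib/apt/lists, which can be removed in the same layer:&lt;/p&gt;

```dockerfile
# Sketch for a Debian/Ubuntu base image: install a package, then clear
# the apt cache and package lists in the same RUN layer.
FROM debian:buster-slim

RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    apt-get clean && \
    rm -rf /var/cache/apt/archives/* /var/lib/apt/lists/*
```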

&lt;p&gt;In the example below, we install the Nginx web server to serve static HTML pages. Alongside installing the Nginx package, we also remove the cached package files stored in the cache directory. Since we are using Alpine, we specify the directory that contains Alpine’s package cache.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine

RUN apk add nginx &amp;amp;&amp;amp; rm -rf /var/cache/apk/*

COPY index.html /var/www/html/

EXPOSE 80

CMD [ "nginx", "-g", "daemon off;" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;An alternative, in the case of Alpine, is to use &lt;strong&gt;--no-cache&lt;/strong&gt;, which ensures that no cache is stored for the package being installed and removes the need to delete the cached files manually.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine

RUN apk add --no-cache nginx

COPY index.html /var/www/html/

EXPOSE 80

CMD [ "nginx", "-g", "daemon off;" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  AVOID MULTIPLE LAYERS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AFMAsrFS11TCJaQl5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AFMAsrFS11TCJaQl5.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wow! This burger is eye candy with its extra layers of patty &amp;amp; cheese, which make it really yummy &amp;amp; &lt;strong&gt;heavy&lt;/strong&gt;. Docker images are similar to this burger: each extra layer added while building the image makes it &lt;strong&gt;heavier&lt;/strong&gt;. It’s always recommended to keep the number of layers as low as possible.&lt;/p&gt;

&lt;p&gt;Below is a Dockerfile in which we install Nginx along with other needed utilities. In a Dockerfile, each instruction forms a &lt;strong&gt;separate layer&lt;/strong&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine

RUN apk update

RUN apk add curl

RUN apk add nodejs

RUN apk add nginx=1.16.1-r6

RUN apk add nginx-mod-http-geoip2=1.16.1-r6

COPY index.html /var/www/html/

EXPOSE 80

CMD [ "nginx", "-g", "daemon off;" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Dockerfile above can be optimized by chaining commands with &lt;strong&gt;&amp;amp;&amp;amp;&lt;/strong&gt; and line continuations (&lt;strong&gt;\&lt;/strong&gt;) wherever needed, reducing the number of layers created.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM alpine

RUN apk update &amp;amp;&amp;amp; \
    apk add curl nodejs nginx=1.16.1-r6 nginx-mod-http-geoip2=1.16.1-r6 &amp;amp;&amp;amp; \
    rm -rf /var/cache/apk/*

COPY index.html /var/www/html/

EXPOSE 80

CMD [ "nginx", "-g", "daemon off;" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the help of chaining, we have combined most of the instructions and avoided creating multiple layers, which optimizes the Dockerfile and makes the burger look even more &lt;strong&gt;yummy&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  DON’T IGNORE .DOCKERIGNORE
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AFNJQYGBJTCJ6CwwI.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AFNJQYGBJTCJ6CwwI.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;.dockerignore&lt;/strong&gt;, as the name suggests, is a quick and easy way to ignore files that shouldn’t be a part of the Docker image, similar to how a &lt;strong&gt;.gitignore&lt;/strong&gt; file keeps files from being tracked under version control. Before going any further, let’s understand the &lt;strong&gt;build context&lt;/strong&gt;: while building a Dockerfile, all files and folders in the current working directory are copied and used as the build context. The trade-off is that if that directory contains gigabytes of data, the build time increases unnecessarily. Does that mean we have to move the gigabytes of data to a separate directory before building? Naah! But then how do we solve this?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;.dockerignore&lt;/em&gt;&lt;/strong&gt; to the rescue. It can be used for a couple of use cases, some of which are mentioned below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ignore files &amp;amp; Directories&lt;/strong&gt; which are not needed to be part of the image which will be built.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid accidentally copying &lt;strong&gt;Sensitive data&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s understand this a bit better with an example Dockerfile in which we dockerize a Node.js application and use &lt;strong&gt;.dockerignore&lt;/strong&gt; to skip the files and directories that should not be copied while building the image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Ignore unrequired files &amp;amp; directories&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

WORKDIR /nodeapp

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this case, we choose &lt;strong&gt;node:10&lt;/strong&gt; as the base image, set /nodeapp as the working directory, and expose port &lt;strong&gt;8888&lt;/strong&gt; for external access. We copy package.json and install all the dependencies it lists using npm, which creates a node_modules directory containing the latest dependencies. Then comes the crucial part, where we copy all the contents of the current working directory into the image. Often some of those files or directories are not needed; here it is &lt;strong&gt;node_modules&lt;/strong&gt;, because we have already installed fresh binaries using npm install. With that in mind, we can add node_modules to .dockerignore so it isn’t copied while the image gets built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Avoid copying sensitive details.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers cannot deny storing .env files, SSH keys, certificates, and other files with sensitive details in their local development environment (been there, done that). While it makes things easy to access, it exposes the overall system to a whole new level of vulnerabilities and security loopholes. These practices should be avoided by all means; in a development environment that uses Docker, the best thing we can do to prevent further damage is to keep such files from being copied into the image we are building. This can easily be done with &lt;strong&gt;.dockerignore&lt;/strong&gt; by specifying the files that must not be accidentally copied over.&lt;/p&gt;

&lt;p&gt;Ideally, here’s what our &lt;strong&gt;.dockerignore&lt;/strong&gt; file should look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules

.env

secrets/

*.pem

*.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this case, we have added &lt;strong&gt;node_modules&lt;/strong&gt;, which isn’t needed as mentioned above, and &lt;strong&gt;.env&lt;/strong&gt;, as it might contain sensitive details or variables specific to the local development environment that could conflict with other environments such as staging or production. We have also excluded sensitive data stored in &lt;strong&gt;*.pem&lt;/strong&gt; files and everything in the secrets folder, along with markdown/documentation files that are often not needed inside of a Docker image.&lt;/p&gt;

&lt;h2&gt;
  
  
  CHOOSE SLIM VARIANTS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F16384%2F0%2AEsomby0oBJnh_MUO" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F16384%2F0%2AEsomby0oBJnh_MUO" alt="Photo by [Ricardo Velarde](https://unsplash.com/@rickvel?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While choosing a base image, prefer a slimmer and more minimal one. Such images are often tagged &lt;strong&gt;-slim&lt;/strong&gt; or &lt;strong&gt;-minimal&lt;/strong&gt;; they are lighter and have a far smaller footprint than their default counterparts.&lt;/p&gt;

&lt;p&gt;Here are a couple of examples of &lt;em&gt;slim variants&lt;/em&gt; vs their &lt;em&gt;default&lt;/em&gt; counterparts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2504%2F1%2ARxR8ozJDacSPx7ZFgmrdXA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2504%2F1%2ARxR8ozJDacSPx7ZFgmrdXA.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CUT THE ROOT
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F12000%2F0%2A8vpif4-d_NuTO53A" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F12000%2F0%2A8vpif4-d_NuTO53A" alt="Photo by [Matteo Grando](https://unsplash.com/@mang5ta?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every image we build with Docker has root as its default user, and that is a security evil; hence we call this “&lt;strong&gt;&lt;em&gt;cut the root&lt;/em&gt;&lt;/strong&gt;”. Most of the time the image doesn’t need to run as root, and we can specify a default user with only the minimal permissions the application needs to function inside the container.&lt;/p&gt;

&lt;p&gt;Below is an example of an image in which we don’t specify a user, which means the default user is &lt;strong&gt;root&lt;/strong&gt;. That opens up a whole new security loophole.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

WORKDIR /app

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that we are aware that the default user is root, we can avoid this by specifying a non-root default user.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

RUN useradd -m nodeapp

USER nodeapp

RUN whoami

WORKDIR /app

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  REMOVE THE UNWANTED
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A5Kgip-LsvTRHXb7f.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A5Kgip-LsvTRHXb7f.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While dockerizing an application, our primary goal is to make sure it runs successfully inside a container. Often, the chosen base image ships with a lot of tools, packages, and utilities that we don’t need. We can either choose the &lt;strong&gt;-slim&lt;/strong&gt;/&lt;strong&gt;-minimal&lt;/strong&gt; version of the image or remove the tools and utilities that aren’t needed.&lt;/p&gt;
&lt;h2&gt;
  
  
  TAG WISELY
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AXchFJrkVCTwcC1Aj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AXchFJrkVCTwcC1Aj.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tags are a unique way to identify an image. While tagging we can use any naming convention of our choice, but it is optimal to base the tag on a feature, a commit, or something similarly meaningful. Tags let the end user choose which version of the image to use.&lt;/p&gt;

&lt;p&gt;Examples of tagging include using an incremental version number or the git commit hash; either approach can be integrated into your CI/CD pipelines to automate tagging of the images.&lt;/p&gt;
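&lt;p&gt;As a sketch of the git-hash approach (the image name myapp is a placeholder), the tag can be derived from the current commit and then used at build time:&lt;/p&gt;

```shell
# Derive an image tag from the short git commit hash, falling back to
# "dev" when not inside a git repository. "myapp" is a placeholder name.
COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
TAG="myapp:${COMMIT}"
echo "$TAG"
# docker build -t "$TAG" .
# docker push "$TAG"
```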
&lt;h2&gt;
  
  
SAY NO TO LATEST TAG
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F7744%2F0%2Acjx71mKIEPn1AnqB" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F7744%2F0%2Acjx71mKIEPn1AnqB" alt="Photo by [Gemma Evans](https://unsplash.com/@stayandroam?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Been there, done that: I have faced a lot of issues with Docker images tagged &lt;strong&gt;&lt;em&gt;:latest&lt;/em&gt;&lt;/strong&gt;. Here are a couple of reasons why I prefer not to use the latest tag anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;myimage:latest&lt;/em&gt;&lt;/strong&gt; is equivalent to tagging with nothing at all, i.e. the default &lt;em&gt;myimage&lt;/em&gt; (which has no tag).&lt;/p&gt;

&lt;p&gt;Avoid using the latest tag in production, especially on Kubernetes, as it makes it far harder to debug and find out which version caused a problem. It breaks the whole philosophy of unique tags that depict different versions; that’s why it’s recommended to tag images meaningfully with a specific version that reflects the changes, so you can roll back if needed.&lt;/p&gt;
&lt;h2&gt;
  
  
  PUBLIC OR PRIVATE REGISTRY
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AcWv3QB9YHc_5a3D_.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AcWv3QB9YHc_5a3D_.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The question here is to choose between a Public Image or a Private Image?&lt;/p&gt;

&lt;p&gt;Public images are great, basic, and easy to use for smaller teams that aren’t too concerned about the overall security of the system.&lt;/p&gt;

&lt;p&gt;Private images come with an added layer of security, ensuring that only authorized personnel can access them. Docker Hub, along with several other container registries, provides the option to choose between public and private images (although on Docker Hub’s default free plan you can only keep one image private).&lt;/p&gt;
&lt;h2&gt;
  
  
  KEEP IT SINGLE
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2ARJiU_CQIcwj5t-6I.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2ARJiU_CQIcwj5t-6I.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep it single: one application per container. The &lt;strong&gt;Single Responsibility Principle&lt;/strong&gt; from software design applies to Docker images as well. An image should represent only a single piece of the application, thereby avoiding overall complexity.&lt;/p&gt;

&lt;p&gt;It’s often good practice to take a modular approach when dockerizing the whole application stack, which also helps when solving issues that arise unexpectedly. For example, if we are dockerizing an application that depends on MySQL as its database, we shouldn’t club the application and the database into a single image, but rather split them into separate Docker images.&lt;/p&gt;
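&lt;p&gt;As an illustrative sketch (image names, port, and credentials are placeholders), the split can be expressed with Docker Compose, one service per image:&lt;/p&gt;

```yaml
# Hypothetical docker-compose.yml: the application and MySQL run as
# separate containers built from separate images.
version: "3.8"
services:
  app:
    image: myapp:1.0        # application image
    ports:
      - "8888:8888"
    depends_on:
      - db
  db:
    image: mysql:8.0        # database image, not baked into the app
    environment:
      MYSQL_ROOT_PASSWORD: example
```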
&lt;h2&gt;
  
  
  USE LINTER
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F12032%2F0%2AcXGEh5cZsmKf-kl9" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F12032%2F0%2AcXGEh5cZsmKf-kl9" alt="Photo by [Chris Ried](https://unsplash.com/@cdr6934?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A linter is a simple piece of software that analyzes code in a given language, detecting errors and suggesting best practices right as you write. Linters exist for nearly every language; for Docker, there are a couple you can choose from, some of which are mentioned below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/hadolint/hadolint" rel="noopener noreferrer"&gt;Hadolint&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=henriiik.docker-linter" rel="noopener noreferrer"&gt;Docker Linter (VSCODE)&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My personal favorite is Docker Linter, a VS Code extension that flags warnings and syntactic errors right as you go.&lt;/p&gt;
&lt;h2&gt;
  
  
DON’T STORE SECRETS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AzSesLaNW9lgX8j9E.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AzSesLaNW9lgX8j9E.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just as in real life, never disclose your secrets. In a Dockerfile, never store plaintext usernames, passwords, or other sensitive values where they can be revealed. Use &lt;strong&gt;.dockerignore&lt;/strong&gt; to prevent files that might contain sensitive information from being accidentally copied into the image.&lt;/p&gt;
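&lt;p&gt;When a secret really is needed at build time (say, a private npm token), BuildKit can mount it for a single RUN step without writing it into any image layer. A sketch, assuming BuildKit is enabled; the .npmrc file name and secret id are placeholders:&lt;/p&gt;

```dockerfile
# syntax=docker/dockerfile:1
FROM node:10

WORKDIR /app
COPY package.json ./

# The secret is available only during this RUN step and is never
# stored in an image layer.
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm install
```

&lt;p&gt;The image would then be built with docker build --secret id=npmrc,src=.npmrc . from the directory containing the .npmrc file.&lt;/p&gt;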
&lt;h2&gt;
  
  
  AVOID HARD CODING
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2Acg2kGD2eKdypyJzT.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2Acg2kGD2eKdypyJzT.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this principle applies not just to Dockerfiles but to software design in general, it’s not recommended to hard-code values inside a Dockerfile. The best example is a specific software version that might need to change: instead of hardcoding it, we can pass the value dynamically at build time using ARG.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ARG&lt;/strong&gt; is a keyword in a Dockerfile that allows us to dynamically pass values to the Dockerfile at build time.&lt;/p&gt;

&lt;p&gt;To better understand this, let's have an example.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARG VERSION

FROM node:$VERSION

WORKDIR /app

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Using Dynamic values to pass and build the images.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t testimage –build-arg VERSION=10 .

docker build -t testimage –build-arg VERSION=9 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this technique, we decide the version of the base image dynamically at build time rather than hardcoding it, passing the value as a build argument.&lt;/p&gt;

&lt;h2&gt;
  
  
  AVOID DEBUGGING TOOLS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F11060%2F0%2AKsAqrH0lR5fIGWCd" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F11060%2F0%2AKsAqrH0lR5fIGWCd" alt="Photo by [Arian Darvishi](https://unsplash.com/@arianismmm?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While building an image, we often add debugging tools such as curl, ping, netstat, telnet, and other networking utilities, which further increase the overall size of the image. It’s a good choice to leave these debugging tools out of the Dockerfile and install them only when actually needed at runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  ADDING METADATA
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2APz7-FhTCNRu7Qs9B.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2APz7-FhTCNRu7Qs9B.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LABEL&lt;/strong&gt; is a keyword in the Dockerfile that adds metadata details about the image.&lt;/p&gt;

&lt;p&gt;LABEL attaches text-based metadata to the image, adding more verbose information, such as the maintainer’s name and email address. In the example below, we add details about the maintainer as well as the version of the image.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

LABEL version="1.0" maintainer="Chris Rego &amp;lt;cXXXXXXo@gmail.com&amp;gt;"

WORKDIR /app

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  USING VULNERABILITY CHECK
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A7RNDdqpQOK8cgOgK.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A7RNDdqpQOK8cgOgK.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Vulnerability check!&lt;/p&gt;

&lt;p&gt;The recent attack on Tesla’s Kubernetes infrastructure made everyone understand that the move from bare-metal machines to virtual machines and all the way up to containers does not fix the security loopholes that often get left behind. There are best practices we can follow while dockerizing an application, such as taking care of secrets and credentials and avoiding root as the container’s default user, but a better approach to counter security vulnerabilities in the container sphere is to include tools designed to perform reliable security checks on the containers present in your environment. Here are a couple of tools that can be added to your security arsenal.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://coreos.com/clair/docs/latest/" rel="noopener noreferrer"&gt;Clair&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/goodwithtech/dockle" rel="noopener noreferrer"&gt;Dockle&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/falcosecurity/falco" rel="noopener noreferrer"&gt;Falco&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://anchore.com/" rel="noopener noreferrer"&gt;Anchore&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  AVOID COPYING EVERYTHING
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AoUPXIsQ1s8SjH4Ej.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AoUPXIsQ1s8SjH4Ej.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s always good to COPY, but it’s wise to copy selectively. Try to avoid &lt;strong&gt;COPY . .&lt;/strong&gt;, which copies everything from your current working directory into the Docker image. Instead, choose only the files that need to be copied, and list the rest in &lt;em&gt;.dockerignore&lt;/em&gt; to keep unwanted files or files containing sensitive data from being copied accidentally.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

WORKDIR /app

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the example below, we avoid copying everything and instead specify only the files and directories that are exclusively needed, which decreases the risk of accidentally copying unwanted data that would ultimately lead to a bloated Docker image and increased build time overall.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

WORKDIR /app

COPY package.json ./

RUN npm install

COPY index.js ./

COPY src ./src

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
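
&lt;p&gt;To go with this, a minimal &lt;em&gt;.dockerignore&lt;/em&gt; might look like the sketch below (the entries are illustrative; adjust them to your project):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .dockerignore: keep the build context lean and secrets out of the image
node_modules
.git
.env
*.log
Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Anything matching these patterns is excluded from the build context, so it can never end up in the image via &lt;strong&gt;COPY . .&lt;/strong&gt;&lt;/p&gt;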
&lt;h2&gt;
  
  
  USE WORKDIR WHEN NEEDED
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AhVvmTpqUnB2LHvKS.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AhVvmTpqUnB2LHvKS.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WORKDIR&lt;/strong&gt; is another important instruction in a Dockerfile. It does most of the heavy lifting of creating &amp;amp; switching to a working directory, so you don’t need extra steps to do it yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WORKDIR&lt;/strong&gt; is especially useful when a Dockerfile would otherwise need additional steps to navigate into a specific directory. In that case we can safely remove those navigation commands (i.e. &lt;strong&gt;cd&lt;/strong&gt;) by using &lt;strong&gt;WORKDIR&lt;/strong&gt; instead.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

RUN mkdir -p /app/mynodejsapp

COPY package.json /app/mynodejsapp

RUN cd /app/mynodejsapp &amp;amp;&amp;amp; npm install

COPY . /app/mynodejsapp

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this version we have to create the folder with &lt;strong&gt;mkdir&lt;/strong&gt;, spell out the full path in every instruction that references it, and navigate into it with &lt;strong&gt;cd&lt;/strong&gt;. All of these extra references can be replaced with a single &lt;strong&gt;WORKDIR&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

WORKDIR /app/mynodejsapp

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 8888

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this case &lt;strong&gt;WORKDIR&lt;/strong&gt; automatically creates the folder if it doesn’t exist, and there is no additional navigation step, as WORKDIR has already done what it’s supposed to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  MULTI-STAGE BUILDS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F12032%2F0%2A5Lm01m4C61-XROIb" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F12032%2F0%2A5Lm01m4C61-XROIb" alt="Photo by [Ganapathy Kumar](https://unsplash.com/@gkumar2175?utm_source=medium&amp;amp;utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The multi-stage build technique is best suited to scenarios where the application is built inside the Dockerfile itself. While it’s fine to build the application with all its dependencies in one image, for further optimization we can segregate the build and the final deployment into two stages. Dividing the image into two stages ensures that we leave behind the dependencies that are only needed while building the application and are no longer required after the build.&lt;/p&gt;

&lt;p&gt;Using multi-stage builds is a good practice, as it encourages keeping only the things which are needed in the final Docker image and leaving behind all the build dependencies and other files that are not needed.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# STAGE ONE: INVOLVES BUILDING THE APPLICATION

FROM node:10 AS build

WORKDIR /myapp

COPY package.json index.js ./

RUN npm install

# STAGE TWO: COPYING THE ONLY CONTENTS

# NEEDED TO RUN THE APPLICATION WHILE

# LEAVING BEHIND THE REST

FROM node:10-slim

WORKDIR /myapp

COPY --from=build /myapp/package.json /myapp/index.js ./

COPY --from=build /myapp/node_modules ./node_modules

EXPOSE 8080

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the above example we have separated the Dockerfile into two stages. The first stage installs the required dependencies; we then copy only the files we actually need into the second stage, which is the final build that will be used. This approach offers separation of concerns and lets us cherry-pick what really goes inside the final image.&lt;/p&gt;

&lt;h2&gt;
  
  
  LASTLY CACHE
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AZrB1qdyqPNYto5Nt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2AZrB1qdyqPNYto5Nt.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s now talk about something really important: the cache.&lt;/p&gt;

&lt;p&gt;Packaging and building take a lot of time overall, and the same applies to a Dockerfile, whose series of steps is ultimately built into a Docker image. While building an image, Docker works step by step from top to bottom and checks whether each step already has a layer present in the cache. If the layer already exists, Docker doesn’t build a new one but reuses the existing layer, which saves a lot of time overall.&lt;/p&gt;

&lt;p&gt;Caching proves especially helpful when iterating on a Dockerfile, and when the Dockerfile contains a series of instructions that download packages over the network, which takes time and consumes additional bandwidth; both can be reduced drastically with the help of caching. Although Docker caches by default, the cache can break when changes are detected, which is expected behavior. It’s the end user’s responsibility to play out the instructions in the Dockerfile in a specific order to keep the cache from breaking, because the order matters for caching.&lt;/p&gt;

&lt;p&gt;Caching in Docker follows a chain reaction: as soon as a change is detected at some point in the Dockerfile, every instruction after that point is no longer served from the cache, which effectively breaks caching from there on. It’s therefore recommended to place the steps that are least likely to change at the beginning of the Dockerfile, which ensures the cache won’t break. Docker caches the results of the first build of a Dockerfile, allowing subsequent builds to be super fast, but the cache only works while it is still stored; if we delete it, the next build starts from scratch and consumes time again. Docker works quite intelligently here and provides caching on the go without any additional configuration.&lt;/p&gt;
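
&lt;p&gt;As a sketch of this ordering principle, the dependency installation below comes before the application code, so editing &lt;em&gt;index.js&lt;/em&gt; only invalidates the final &lt;strong&gt;COPY&lt;/strong&gt; layer and &lt;strong&gt;npm install&lt;/strong&gt; stays cached:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10

WORKDIR /app

# Rarely changes: these layers stay cached across most builds
COPY package.json ./

RUN npm install

# Changes often: only the layers from here on are rebuilt
COPY . .

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;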

&lt;p&gt;Docker is quite a flexible tool: it allows us to completely ignore the cache while building an image, which can be done using the &lt;strong&gt;--no-cache&lt;/strong&gt; flag. This ensures the caching mechanism is bypassed during the build, at the cost of an increase in overall build time.&lt;/p&gt;
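
&lt;p&gt;For example, a build that deliberately bypasses the cache looks like this (the image name &lt;em&gt;myapp&lt;/em&gt; is just a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Rebuild every layer from scratch, ignoring any cached layers
docker build --no-cache -t myapp .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;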

&lt;h2&gt;
  
  
  THIS IS NOT THE END
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A69gHBuolpVXme2dm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F2000%2F0%2A69gHBuolpVXme2dm.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As I had promised, this is a &lt;strong&gt;20+ list of best practices for Dockerfile for 2020&lt;/strong&gt;. I will keep adding to the list as I go along on this dark, unknown endeavor of building great Docker containers. If you did find it useful, feel free to share it with your friends.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
    </item>
    <item>
      <title>How making Vada Pav is similar to that of Multi-Stage Docker Build?</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Sat, 29 May 2021 07:46:02 +0000</pubDate>
      <link>https://dev.to/chrisedrego/how-making-vada-pav-is-similar-to-that-of-multi-stage-docker-build-2omn</link>
      <guid>https://dev.to/chrisedrego/how-making-vada-pav-is-similar-to-that-of-multi-stage-docker-build-2omn</guid>
      <description>&lt;p&gt;&lt;em&gt;“This post is about how to build, light-weight, highly optimized Docker images with the help of a multi-stage image build technique, to get the most out of this article it’s expected for the viewer to have a **basic understanding of Docker&lt;/em&gt;&lt;em&gt;. if not, you could follow along and the rest would be a history”&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  THE NOBLE VADA PAV
&lt;/h2&gt;

&lt;p&gt;Vada Pav is the most beloved Indian street food. It doesn’t burn a hole in your pocket, and it’s a perfect combination of spicy, sweet and salty all at the same time: a soft cushiony pav stuffed with a golden-fried spiced (potato) patty, covered with coriander chutney and a sprinkling of garlicky masala. The vada pav is food heaven, an instant energy booster, rightly termed &lt;em&gt;disco-in-your-mouth&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  HOW TO MAKE THE NOBLE VADA PAV ?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hTV4Uap2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AU5mNTkb6-Kk5hYnu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hTV4Uap2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AU5mNTkb6-Kk5hYnu.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Ingredients:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pav &lt;em&gt;(an Indian round dumpling bread)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Potato&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;White Flour&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Vegetable Oil&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Salt &amp;amp; Spice&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Chilly (Garnish)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Get a Potato + Wash + Peel + Boil + Smash &amp;gt; Make a round dumpling of the potato.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cover it with Batter (White Flour + Water) and Deep Fry the round dumpling of the potato in Oil&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Put the potato dumpling in the bread and garnish it with green chillies &amp;amp; sliced onions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  WHAT MAKES A VADA PAV REALLY GOOD?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5BuCWcdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ax_qM60g7_LPpYRKh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5BuCWcdL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2Ax_qM60g7_LPpYRKh.jpg" alt="A Really Good Vada Pav"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  WHAT MAKES A VADA PAV REALLY BAD?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2kuWzW3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AYSEfFgBB6pmX9rB4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2kuWzW3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AYSEfFgBB6pmX9rB4.jpg" alt="A Really Bad Vada Pav"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is what a really bad, ugly-looking Vada Pav looks like: it carries most of the unrequired things that were used while creating the Vada Pav itself, including an &lt;strong&gt;oil&lt;/strong&gt; spill on top, white flour that was used to make the batter, and left-over potato peels. Well, that’s a complete disaster.&lt;/p&gt;

&lt;p&gt;What really made the difference between the two was that the bad vada pav &lt;strong&gt;&lt;em&gt;contains most of the stuff that was used to make the vada pav, which isn’t even needed&lt;/em&gt;&lt;/strong&gt; when finally serving it to Mr. Gordon Ramsay.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Khc_yR3k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A3aPhqvkVqAd05_4f.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Khc_yR3k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2A3aPhqvkVqAd05_4f.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait, how is Vada Pav related to a Dockerfile?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yxPGxinr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ADxVAds6e_X8tr9sU.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yxPGxinr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2ADxVAds6e_X8tr9sU.gif" alt="Confused?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  WHAT IS A MULTI-STAGE DOCKER BUILD?​
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;“It’s a technique which involves removing all the unrequired tools, dependencies needed to build an application and only contain the final application build which helps to create lightweight Docker Images.”&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The multi-stage Docker build feature is available from Docker 17.05 onwards. Using multi-stage builds is a good practice, as it encourages keeping only the things which are needed in the final Docker image and leaving behind all the build dependencies and other files that are not needed. It involves breaking the Dockerfile into two or more stages: the build stage contains the tools needed to build the application inside the Docker image, while the second stage only contains the final build artifacts, copied over from the first stage, to be executed.&lt;/p&gt;

&lt;p&gt;To get a better understanding of how multi-stage Docker builds work, let’s walk through an example. In this case we’ll Dockerize a Node.js application and separate it into two stages: a build stage and an execution stage.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
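&lt;p&gt;A minimal two-stage sketch along these lines (file names and the build command are illustrative) would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: build the application with all build-time dependencies
FROM node:10 AS build

WORKDIR /myapp

COPY package.json ./

RUN npm install

COPY . .

# Stage 2: keep only what is needed to run the application
FROM node:10-slim

WORKDIR /myapp

COPY --from=build /myapp ./

EXPOSE 8080

CMD [ "node", "index.js" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;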
&lt;br&gt;
In the above example we can see that there are two stages: the first stage copies the files required to build the application, installs the dependencies and builds the application. The built application is then copied into a final stage, which contains the instructions to execute it.

&lt;p&gt;The final image copies the contents from the build stage using the &lt;strong&gt;COPY&lt;/strong&gt; instruction, specifying the stage from which the contents are copied. This helps the final image contain only the application itself, rather than the build dependencies and tools which are not needed when the application is used by the end user, and thereby improves the overall size &amp;amp; performance of the Docker image that is built.&lt;/p&gt;

&lt;p&gt;So if we compare the analogy of the Dockerfile with that of the Vada Pav, both mean the same thing: remove the unrequired build dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6SlqK3xH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AKN29DwesPZwVqsIh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6SlqK3xH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AKN29DwesPZwVqsIh.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CONCLUDING
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;What is really common between a good Vada Pav and a good Docker image is that we avoid any build-level dependencies when finally serving it to the end user. To dig further into the best practices that can be applied while building Docker images, here’s a curated list of 20+ tips and tricks that can be followed.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>container</category>
    </item>
    <item>
      <title>Kubernetes Monitoring Series: Kubeview</title>
      <dc:creator>chrisedrego</dc:creator>
      <pubDate>Fri, 28 May 2021 13:42:58 +0000</pubDate>
      <link>https://dev.to/chrisedrego/kubernetes-monitoring-series-kubeview-27li</link>
      <guid>https://dev.to/chrisedrego/kubernetes-monitoring-series-kubeview-27li</guid>
      <description>&lt;h1&gt;
  
  
  What is KubeView?
&lt;/h1&gt;

&lt;p&gt;KubeView is a simple web interface that provides a complete overview of the Kubernetes objects across namespaces and how they are interconnected, with an intuitive UI &amp;amp; icons.&lt;/p&gt;

&lt;h2&gt;
  
  
  KubeView displays details about the following objects
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Deployments&lt;/li&gt;
&lt;li&gt;ReplicaSets / StatefulSets / DaemonSets&lt;/li&gt;
&lt;li&gt;Pods&lt;/li&gt;
&lt;li&gt;Services&lt;/li&gt;
&lt;li&gt;Ingresses&lt;/li&gt;
&lt;li&gt;LoadBalancer IPs&lt;/li&gt;
&lt;li&gt;PersistentVolumeClaims&lt;/li&gt;
&lt;li&gt;Secrets&lt;/li&gt;
&lt;li&gt;ConfigMaps&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Installing: Kubeview
&lt;/h3&gt;

&lt;p&gt;Before installing KubeView, make sure you have Helm installed on your machine.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/benc-uk/kubeview
cd kubeview/charts/
helm install kubeview kubeview
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will go ahead and install all the components needed such as ServiceAccount, ClusterRole, ClusterRoleBinding along with Deployment, and Service.&lt;/p&gt;

&lt;p&gt;Let’s test it locally by exposing the service; run the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward svc/kubeview -n default 80:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looks Beautiful ❤&lt;br&gt;
if you found this article useful, feel free to 👏 clap many times or share it with your friends.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
