<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hrittik Roy</title>
    <description>The latest articles on DEV Community by Hrittik Roy (@hrittikhere).</description>
    <link>https://dev.to/hrittikhere</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F440201%2F7a1c2c68-6680-4e68-b569-11edd66e7ee9.jpeg</url>
      <title>DEV Community: Hrittik Roy</title>
      <link>https://dev.to/hrittikhere</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hrittikhere"/>
    <language>en</language>
    <item>
      <title>Check this new post out!</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Mon, 03 Mar 2025 13:44:06 +0000</pubDate>
      <link>https://dev.to/hrittikhere/-mm9</link>
      <guid>https://dev.to/hrittikhere/-mm9</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/loft" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3686%2Fe5c44179-dafb-498d-a6b7-09d03a0bff6f.png" alt="LoftLabs" width="170" height="170"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F440201%2F7a1c2c68-6680-4e68-b569-11edd66e7ee9.jpeg" alt="" width="745" height="990"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/loft/technical-guide-syncing-ingress-resources-from-various-vcluster-on-gke-with-vcluster-3c56" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Technical Guide: Syncing Ingress Resources from various Virtual Cluster on GKE with vCluster&lt;/h2&gt;
      &lt;h3&gt;Hrittik Roy for LoftLabs ・ Mar 3 '25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ingress&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#vcluster&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>kubernetes</category>
      <category>ingress</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>Technical Guide: Syncing Ingress Resources from various Virtual Cluster on GKE with vCluster</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Mon, 03 Mar 2025 13:43:35 +0000</pubDate>
      <link>https://dev.to/loft/technical-guide-syncing-ingress-resources-from-various-vcluster-on-gke-with-vcluster-3c56</link>
      <guid>https://dev.to/loft/technical-guide-syncing-ingress-resources-from-various-vcluster-on-gke-with-vcluster-3c56</guid>
      <description>&lt;p&gt;Kubernetes Ingress is the most widely used Kubernetes resource for exposing an application to the outside world. Understanding the concepts and Layer-7 load balancing may sound difficult, but with this article, it won’t be. &lt;/p&gt;

&lt;p&gt;This article uses &lt;a href="https://cloud.google.com/kubernetes-engine?hl=en" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt; as the host Kubernetes cluster, where you will install the Nginx Ingress controller and cert-manager to get TLS certificates for your web apps. With that, you can run an application with proper TLS. &lt;/p&gt;

&lt;p&gt;This article also shows how to create a virtual cluster with vCluster that reuses the host cluster’s ingress controller and cert-manager (GKE with NGINX, in our case) to create ingress resources, so each virtual cluster gets working ingress without running its own controller. &lt;/p&gt;

&lt;p&gt;If you are hearing about virtual clusters for the first time, then read more &lt;a href="https://www.vcluster.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Ensure you have the following: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud SDK (&lt;code&gt;gcloud&lt;/code&gt;)&lt;/strong&gt; – To interact (create/delete) with GKE. Make sure to have a project and a billing account linked and authenticated. &lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt; Install &lt;/a&gt; &lt;strong&gt;[Note: You can use the UI if you prefer]&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes CLI (&lt;code&gt;kubectl&lt;/code&gt;)&lt;/strong&gt; – To manage Kubernetes clusters. &lt;a href="https://kubernetes.io/docs/tasks/tools/#kubectl" rel="noopener noreferrer"&gt; Install &lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vCluster CLI&lt;/strong&gt; – To create virtual clusters within a single Kubernetes cluster. &lt;a href="https://www.vcluster.com/docs/get-started/#deploy-vcluster" rel="noopener noreferrer"&gt; Install &lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A Domain Name&lt;/strong&gt; – This is used to configure A Records, DNS, and TLS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Basic Kubernetes Knowledge&lt;/strong&gt; – Familiarity with &lt;strong&gt;Deployments&lt;/strong&gt;, &lt;strong&gt;Ingress&lt;/strong&gt;, and &lt;strong&gt;Services&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
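&lt;p&gt;Optionally, you can sanity-check that the three CLIs are installed before starting; version numbers will differ on your machine:&lt;/p&gt;

```shell
# Confirm the CLIs used in this guide are on your PATH
gcloud version            # Google Cloud SDK
kubectl version --client  # Kubernetes CLI only (no cluster needed yet)
vcluster version          # vCluster CLI
```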

&lt;h2&gt;
  
  
  Setting Up GKE Cluster
&lt;/h2&gt;

&lt;p&gt;Once you have completed the prerequisites, you can proceed to the next step: creating your Kubernetes cluster. This cluster will serve as the environment where you deploy your controllers, deployments, and services, as well as your virtual clusters.  &lt;/p&gt;

&lt;p&gt;With gcloud installed, you can create your cluster using the command &lt;code&gt;gcloud container clusters create&lt;/code&gt;, along with a few parameters to specify the project and location of the deployment:&lt;/p&gt;

&lt;p&gt;Command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create test  --project=hrittik-project --zone asia-southeast1-c
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you apply the above command, your &lt;code&gt;gcloud SDK&lt;/code&gt; will begin creating the cluster. This process will take a few minutes, and once completed, a kubeconfig entry will be generated that allows you to interact with the cluster using kubectl.&lt;/p&gt;

&lt;p&gt;Successful cluster creation will look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creating cluster test in asia-southeast1-c... Cluster is being health-checked (Kubernetes Control Plane is healthy)...done.                                                                     
Created [https://container.googleapis.com/v1/projects/hrittik-project/zones/asia-southeast1-c/clusters/test].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-southeast1-c/test?project=hrittik-project
kubeconfig entry generated for test
NAME  LOCATION           MASTER_VERSION      MASTER_IP       MACHINE_TYPE  NODE_VERSION        NUM_NODES  STATUS
test  asia-southeast1-c  1.30.8-gke.1051000  34.124.254.219  e2-medium     1.30.8-gke.1051000  3          RUNNING

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
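&lt;p&gt;Optionally, you can confirm that kubectl is now pointing at the new cluster before moving on; the node count and machine type below are from this guide’s example and will differ if you changed the create parameters:&lt;/p&gt;

```shell
# The generated kubeconfig context is named gke_PROJECT_ZONE_CLUSTER
kubectl config current-context
# Three e2-medium worker nodes should report Ready in this example
kubectl get nodes
```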



&lt;h2&gt;
  
  
  Setting Up the Ingress Controller
&lt;/h2&gt;

&lt;p&gt;Once your GKE cluster is set up and operational, the next step is to install an &lt;strong&gt;Ingress Controller&lt;/strong&gt; to handle the routing of external traffic to your services. In this section, we will use &lt;strong&gt;NGINX&lt;/strong&gt; as our Ingress Controller. NGINX is a popular and reliable choice for Kubernetes ingress management, renowned for its flexibility and performance.&lt;/p&gt;

&lt;p&gt;Ingress will act as a reverse proxy to route traffic outside the cluster to specific Kubernetes Services based on the Ingress Rules. &lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster Admin Permissions
&lt;/h3&gt;

&lt;p&gt;To configure Ingress on GKE, the first step is to provide the user &lt;code&gt;cluster-admin&lt;/code&gt; permissions to carry out the operational tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success in this step will create a &lt;code&gt;cluster-admin-binding&lt;/code&gt; in your cluster, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Ingress Controller Deployment
&lt;/h3&gt;

&lt;p&gt;With the appropriate permissions configured, the next step is to deploy the controller, which will manage all of the routing logic. This step is as simple as running the following command against your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will create resources in the &lt;code&gt;ingress-nginx&lt;/code&gt; namespace, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Testing Ingress Controller
&lt;/h3&gt;

&lt;p&gt;The key point to note is that the ingress controller is exposed as a &lt;strong&gt;LoadBalancer&lt;/strong&gt; service by default, which we will return to in the section on configuring A Records. For now, to verify the installation, check that your pods are running with the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n ingress-nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A STATUS of Running means everything is configured correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ❯ kubectl get pods -n ingress-nginx                                  
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-bh7nl       0/1     Completed   0          100s
ingress-nginx-admission-patch-bvlhf        0/1     Completed   0          100s
ingress-nginx-controller-cbb88bdbc-5dkxt   1/1     Running     0          101s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing Cert-Manager on the Host Cluster
&lt;/h2&gt;

&lt;p&gt;Now that your &lt;strong&gt;Ingress Controller&lt;/strong&gt; is set up and running, the next step is to &lt;strong&gt;install Cert-Manager&lt;/strong&gt; to automate the management and issuance of TLS certificates. &lt;strong&gt;Cert-Manager&lt;/strong&gt; is a Kubernetes project that simplifies obtaining and renewing SSL/TLS certificates from certificate authorities (CAs) such as &lt;strong&gt;Let's Encrypt&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This tutorial uses Cert Manager to create and store certificates as Kubernetes Secrets automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation of cert-manager
&lt;/h3&gt;

&lt;p&gt;The installation process is straightforward; simply execute the command below on your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful installation will look similar to the screenshot below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvkojuxxdjqm6fe731ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvkojuxxdjqm6fe731ua.png" alt=" " width="800" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Verifying cert-manager Installation
&lt;/h3&gt;

&lt;p&gt;Once installed, you can verify that Cert-Manager is running by checking the pods in the &lt;code&gt;cert-manager&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods --namespace cert-manager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful creation will display a controller pod, a cainjector pod, and a webhook pod in that namespace, as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ ❯ kubectl get pods --namespace cert-manager                                                                                                

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6c7fdcbcd5-6lp94              1/1     Running   0          2m11s
cert-manager-cainjector-64d77f8498-f6p7d   1/1     Running   0          2m12s
cert-manager-webhook-68796f6795-59sqq      1/1     Running   0          2m11s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating a Certificate Issuer CRD
&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;Certificate Issuer&lt;/strong&gt; instructs cert-manager on how to obtain certificates. There are two primary types of issuers: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. ClusterIssuer&lt;/strong&gt;: Available throughout the entire cluster; recommended for most cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Issuer&lt;/strong&gt;: Limited to a specific namespace.&lt;/p&gt;

&lt;p&gt;In this example, we will use a ClusterIssuer with &lt;strong&gt;Let's Encrypt&lt;/strong&gt; as the ACME server. Note that a ClusterIssuer is cluster-scoped, so it takes no namespace. Don’t forget to update the email address in the YAML below before applying it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
 name: letsencrypt-prod
spec:
 acme:
   # The ACME server URL
   server: https://acme-v02.api.letsencrypt.org/directory
   # Email address used for ACME registration
   email: test-hrittik@example.com # Replace with your email
   # Name of a secret used to store the ACME account private key
   privateKeySecretRef:
     name: letsencrypt-prod
   # Enable the HTTP-01 challenge provider
   solvers:
   - http01:
       ingress:
         class: nginx
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will look similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterissuer.cert-manager.io/letsencrypt-prod created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify the Cluster Issuer
&lt;/h3&gt;

&lt;p&gt;To verify successful creation, use the command below to get the &lt;code&gt;clusterissuer&lt;/code&gt; resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get clusterissuer letsencrypt-prod

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Successful output will be similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME AGE 
letsencrypt-prod 3m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring A Records for Domain
&lt;/h2&gt;

&lt;p&gt;Now that you have set up your &lt;strong&gt;NGINX Ingress Controller&lt;/strong&gt; and are issuing certificates with &lt;strong&gt;Cert-Manager&lt;/strong&gt;, it’s time to configure the &lt;strong&gt;A Records&lt;/strong&gt; for your domain, which map it to your cluster.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;A Record&lt;/strong&gt; (Address Record) in DNS points a domain or subdomain to a specific &lt;strong&gt;IP address&lt;/strong&gt;. In this case, you need to direct your domain (for example, &lt;code&gt;hrittikhere.live&lt;/code&gt;) to the &lt;strong&gt;external&lt;/strong&gt; &lt;strong&gt;IP address&lt;/strong&gt; of your &lt;strong&gt;NGINX Ingress Controller.&lt;/strong&gt; This will allow external traffic to be properly routed to your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting the External-IP
&lt;/h3&gt;

&lt;p&gt;After installing the &lt;strong&gt;NGINX Ingress Controller&lt;/strong&gt;, a &lt;code&gt;LoadBalancer&lt;/code&gt; type service will be created that automatically allocates an &lt;strong&gt;external IP address&lt;/strong&gt;. To find the allocated IP, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; kubectl get svc -A | grep -E "NAME|ingress-nginx-controller"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will display a list of services, from which you should select the one that has an external IP. Keep the IP address readily available:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fergavgxhom8ovn2yilvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fergavgxhom8ovn2yilvv.png" alt=" " width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring A Name Record on Domain Provider
&lt;/h3&gt;

&lt;p&gt;The next step is to log in to your domain provider and navigate to the DNS Records section. Depending on the provider, the UI might differ, but you just need to configure four things once you click on Create New Record:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name/Host&lt;/strong&gt;: * # Wildcard to capture all Hosts&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type&lt;/strong&gt;: &lt;code&gt;A   # A record for GCP&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Value&lt;/strong&gt;: &lt;code&gt;34.142.255.42  #  External IP address of your NGINX Ingress Controller&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTL&lt;/strong&gt;: &lt;code&gt;300  #  Use the default value.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;On the Dashboard, this will look something similar to below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckcjvqhtg8ys1a6aj4pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckcjvqhtg8ys1a6aj4pf.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;code&gt;Add Record&lt;/code&gt; and you will be all ready for the next step! &lt;/p&gt;
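&lt;p&gt;Once the record is saved, you can optionally confirm DNS propagation from your terminal before requesting certificates; the domain and IP below are the examples used throughout this guide:&lt;/p&gt;

```shell
# Query a subdomain covered by the wildcard A record;
# it should resolve to the ingress controller's external IP
# (34.142.255.42 in this guide's example)
dig +short game.hrittikhere.live
```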

&lt;h2&gt;
  
  
  Creating an Application on the Host Cluster
&lt;/h2&gt;

&lt;p&gt;Now you have all of the moving pieces in place: a domain to serve your application, an ingress controller for load balancing, and Cert-Manager for TLS. The only missing piece is an application, so the next step is to deploy a simple game to your cluster.&lt;/p&gt;

&lt;p&gt;The following YAML deploys three things together: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service&lt;/strong&gt;: Exposes the game application via a Kubernetes &lt;code&gt;Service&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;: Deploys the game application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress&lt;/strong&gt;: Defines routing rules for the Ingress Controller to route external traffic to the Service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: game-2048
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-2048
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-2048
  template:
    metadata:
      labels:
        app: game-2048
    spec:
      containers:
        - name: backend
          image: alexwhen/docker-2048
          ports:
            - name: http
              containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-2048-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: game.hrittikhere.live
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: game-2048
                port:
                  number: 80
  tls:
  - hosts:
    - game.hrittikhere.live
    secretName: letsencrypt-prod
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will create three objects in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service/game-2048 created
deployment.apps/game-2048 created
ingress.networking.k8s.io/game-2048-ingress created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification of Ingress
&lt;/h3&gt;

&lt;p&gt;To find the ingress Host you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Code/ingress ❯ kubectl get ingress                                                                                               
NAME                CLASS    HOSTS                   ADDRESS         PORTS     AGE

game-2048-ingress   nginx    game.hrittikhere.live   34.142.255.42   80, 443   51s


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you go to the host you configured (here, &lt;code&gt;game.hrittikhere.live&lt;/code&gt;), you will find your application running on the host cluster: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe7xe7f3dhv44fdarls2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe7xe7f3dhv44fdarls2.png" alt=" " width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you check the network tab, the Remote Address corresponds to the LoadBalancer IP of the Ingress Controller. In simple terms, the Ingress Controller matches the subdomain and routes traffic to the Service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdqltsazpzh6ao7hwzel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdqltsazpzh6ao7hwzel.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;
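&lt;p&gt;Behind the scenes, cert-manager’s ingress-shim turns the ingress’s tls section into a Certificate resource named after the &lt;code&gt;secretName&lt;/code&gt;. As an optional check (resource names taken from the manifest above), you can inspect issuance like this:&lt;/p&gt;

```shell
# The Certificate object is named after the ingress tls secretName
kubectl get certificate -n default
# Describe it to see ACME challenge and issuance events if it is not Ready
kubectl describe certificate letsencrypt-prod -n default
# The signed certificate and private key are stored in this Secret
kubectl get secret letsencrypt-prod -n default
```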

&lt;h2&gt;
  
  
  Syncing Ingress Resources with vCluster between your Virtual Clusters
&lt;/h2&gt;

&lt;p&gt;After completing this tutorial, you will see that it’s a bit involved to set up the backbone for creating ingress resources. Now imagine the operational headache of running hundreds of clusters while managing all these certificates and secrets and making sure your domains are linked correctly. Sounds challenging? &lt;/p&gt;

&lt;p&gt;vCluster saves you from these and many similar problems. vCluster creates virtual clusters on top of your host cluster, each in its own namespace. This lets you create a virtual cluster for each of your teams, with full admin privileges, in seconds instead of the hours it takes to create a regular cluster and configure all the required resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcr7s5bx8orrnffye9pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkcr7s5bx8orrnffye9pd.png" alt=" " width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You just define a virtual cluster and the host cluster features you want available inside it. For example, to sync ingress classes from the host cluster into the virtual cluster, and to sync ingresses created in the virtual cluster back to the host, it’s as simple as defining it in the vcluster.yaml file like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sync:
  fromHost:
    ingressClasses:
      enabled: true
  toHost:
    ingresses:
      enabled: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, whatever application you run in your virtual cluster can request and use an ingress smoothly. Let’s see a demo. The first step is to create the virtual cluster using the vCluster CLI [&lt;a href="https://www.vcluster.com/docs/get-started/#deploy-vcluster" rel="noopener noreferrer"&gt;Installation Step&lt;/a&gt;]:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; vcluster create my-vcluster -f vcluster.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;11:35:20 done Successfully created virtual cluster my-vcluster in namespace vcluster-my-vcluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
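&lt;p&gt;Recent versions of the vCluster CLI typically point your kubeconfig at the new virtual cluster right after creation; if yours does not, you can switch contexts explicitly:&lt;/p&gt;

```shell
# Point kubectl at the virtual cluster
vcluster connect my-vcluster
# When done, return to the host cluster context
vcluster disconnect
```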



&lt;p&gt;With that, make sure that you’re in the &lt;strong&gt;virtual cluster context&lt;/strong&gt;, and you can apply the same application again with a new subdomain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: game-2048
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-2048
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game-2048
  template:
    metadata:
      labels:
        app: game-2048
    spec:
      containers:
        - name: backend
          image: alexwhen/docker-2048
          ports:
            - name: http
              containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-2048-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: game1.hrittikhere.live
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: game-2048
                port:
                  number: 80
  tls:
  - hosts:
    - game1.hrittikhere.live
    secretName: letsencrypt-prod
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success will again create the same three objects, this time inside the virtual cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service/game-2048 created
deployment.apps/game-2048 created
ingress.networking.k8s.io/game-2048-ingress created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the CLI, if you run &lt;code&gt;kubectl get ingress&lt;/code&gt;, you will see a new ingress resource created with the newly specified host:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvr3rn8xbojpu7r3dbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhvr3rn8xbojpu7r3dbo.png" alt=" " width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if you check &lt;code&gt;game1.hrittikhere.live&lt;/code&gt;, you will find a similar game running alongside. With just one vCluster configuration, you can create multiple new clusters, and all of them will have ingress functioning seamlessly.&lt;/p&gt;
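&lt;p&gt;If you switch back to the host cluster context, you can also see the synced copy of the ingress in the virtual cluster’s namespace on the host. The translated name in the comment below is an assumption, since vCluster rewrites resource names when syncing and the exact scheme can vary by version:&lt;/p&gt;

```shell
# Run from the HOST cluster context
kubectl get ingress -n vcluster-my-vcluster
# Expect a translated name along the lines of
# game-2048-ingress-x-default-x-my-vcluster
```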

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngmhx50mgdsys1av9688.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fngmhx50mgdsys1av9688.png" alt=" " width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IP address is again the same as your host cluster's, confirming that all the networking is working smoothly: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng6hsiuym7x4agu5j4rm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fng6hsiuym7x4agu5j4rm.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;With everything working, cleanup is simple. You can remove the DNS record from your service provider through the UI by clicking &lt;code&gt;delete record&lt;/code&gt;, and delete the GKE cluster with the following command, substituting your own parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters delete test  --project=hrittik-project --zone asia-southeast1-c

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
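If you only want to remove a single virtual cluster rather than tearing down the whole GKE cluster, the vcluster CLI can do that directly. A hedged sketch (the name and namespace are illustrative):

```shell
# Delete one virtual cluster and its workloads, leaving the host cluster intact.
vcluster delete game2-vcluster --namespace team-2
```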



&lt;p&gt;Successful deletion will look something like this: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtsrsdof0qhrnxlbi31p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtsrsdof0qhrnxlbi31p.png" alt=" " width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that, you have cleared all the resources, including your virtual clusters. &lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;With this guide, we’ve explored how to configure an NGINX Ingress Controller in a GKE (Google Kubernetes Engine) environment and how to set up TLS certificates with cert-manager. We’ve also seen how to configure DNS A records to point to a Kubernetes cluster’s external IP and route traffic efficiently to our services.&lt;/p&gt;

&lt;p&gt;In addition, you’ve learned how powerful vCluster can be, especially in multi-tenant Kubernetes environments. If you're managing several teams or clusters, vCluster lets you create isolated, lightweight Kubernetes clusters (virtual clusters) within a larger host cluster, providing the flexibility of separate clusters without paying the control-plane cost of each one.&lt;/p&gt;

&lt;p&gt;More questions? &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;Join our Slack&lt;/a&gt; to talk to the team behind vCluster! &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ingress</category>
      <category>vcluster</category>
    </item>
    <item>
      <title>Supercharge your remote development environment with DevPod</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Tue, 19 Dec 2023 04:53:23 +0000</pubDate>
      <link>https://dev.to/hrittikhere/supercharge-your-remote-development-environment-with-devpod-11ah</link>
      <guid>https://dev.to/hrittikhere/supercharge-your-remote-development-environment-with-devpod-11ah</guid>
      <description>&lt;p&gt;&lt;a href="https://devpod.sh/" rel="noopener noreferrer"&gt;DevPod&lt;/a&gt; is that new kid in town that works on the same standard of &lt;code&gt;devcontainers.json&lt;/code&gt; that Codespaces uses but is on the infrastructure of your choice and is open source. The project was just launched this May and has gathered more than 5.3K stars in this period. The advantage is the lower costs (around 5-10 times cheaper than cloud VMs) with auto-shutdown.&lt;/p&gt;

&lt;p&gt;This blog talks about the tool, how you can get started with DevPod quickly on GCP, and the benefits of cloud development environments in general. &lt;/p&gt;

&lt;h1&gt;
  
  
  About DevPod
&lt;/h1&gt;

&lt;p&gt;DevPod is a client-side tool, which means you can install either the Desktop application or the CLI locally and easily get started. The steps are simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install DevPod&lt;/li&gt;
&lt;li&gt;Configure your provider, i.e., your infrastructure connection, which can be your local Kubernetes cluster, Docker instance, or a cloud provider. In this tutorial, we’ll use GCP. &lt;/li&gt;
&lt;li&gt;Choose your IDE from options like VS Code, Jupyter Notebook, Fleet, or others from JetBrains.&lt;/li&gt;
&lt;li&gt;Click on Start and get your development environment ready.&lt;/li&gt;
&lt;/ol&gt;
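For CLI users, the four steps above map to a couple of commands. A rough sketch, assuming the DevPod CLI is installed; the repository URL is a placeholder:

```shell
# Step 2: add a provider (here GCP; DevPod ships a "gcloud" provider).
devpod provider add gcloud

# Steps 3-4: create a workspace from a repo and open it in your chosen IDE.
devpod up github.com/example/my-project --ide vscode
```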

&lt;p&gt;But again, there’s a simple question: why go through this hassle when you can do things directly on your local machine?&lt;/p&gt;

&lt;p&gt;Well, let me ask you a few questions:&lt;/p&gt;

&lt;p&gt;What’s the most complicated thing while onboarding a new project or team? Is it setting it up and ensuring everything, including the dependencies, works smoothly? &lt;/p&gt;

&lt;p&gt;Is managing a lot of dependencies and fixing broken dev environments a waste of time?&lt;/p&gt;

&lt;p&gt;Do you feel you need a powerful machine or more bandwidth for your day-to-day operations?&lt;/p&gt;

&lt;p&gt;If the answer to these questions is yes, then you understand there’s a real need for a system that combines the flexibility of bigger machines with easier dev environment creation. This is where the combination of devcontainers and DevPod comes in. &lt;/p&gt;

&lt;h1&gt;
  
  
  Features that you want
&lt;/h1&gt;

&lt;p&gt;The central problem that DevPod addresses is creating a smooth, consistent developer experience (DevEx) that helps developers get to work without fighting broken environments. Let’s look at some other features: &lt;/p&gt;

&lt;p&gt;Local development: You get the same developer experience locally, so you only need to rely on a cloud provider if you need the extra boost. &lt;/p&gt;

&lt;p&gt;Rich feature set: DevPod already supports prebuilds, auto inactivity shutdown, git &amp;amp; docker credentials sync, and many more features.&lt;/p&gt;

&lt;h1&gt;
  
  
  Time for a Tutorial?
&lt;/h1&gt;

&lt;p&gt;Now, with all the features covered and praise for Dev Containers, you might want to know how to get started as soon as possible. Remember the four steps above where we discussed the simplicity? That’s exactly how you get started. Before diving deeper, ensure you have a Google Cloud account, an unrestricted local machine, and an active internet connection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Install DevPod
&lt;/h2&gt;

&lt;p&gt;DevPod supports all major operating systems, so use the appropriate link below to install the Desktop application on your system. The CLI can be installed independently or via the app.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/loft-sh/devpod/releases/latest/download/DevPod_macos_aarch64.dmg" rel="noopener noreferrer"&gt;MacOS Silicon/ARM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/loft-sh/devpod/releases/latest/download/DevPod_macos_x64.dmg" rel="noopener noreferrer"&gt;MacOS Intel/AMD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/loft-sh/devpod/releases/latest/download/DevPod_windows_x64_en-US.msi" rel="noopener noreferrer"&gt;Windows&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/loft-sh/devpod/releases/latest/download/DevPod_linux_amd64.AppImage" rel="noopener noreferrer"&gt;Linux AppImage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/loft-sh/devpod/releases/latest/download/DevPod_linux_x86_64.tar.gz" rel="noopener noreferrer"&gt;Linux Targz&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you prefer just a CLI, here’s where you’ll find &lt;a href="https://devpod.sh/docs/getting-started/install#optional-install-devpod-cli" rel="noopener noreferrer"&gt;additional instructions.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Get your backend sorted
&lt;/h2&gt;

&lt;p&gt;Now, assuming you have Docker installed, you can use your local machine. But if you need extra compute power, you can use remote backends via SSH or the cloud. &lt;/p&gt;

&lt;p&gt;The steps are simple. Have a cloud account (here, GCP), and then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install and Configure &lt;code&gt;gcloud&lt;/code&gt; CLI locally using the &lt;a href="https://cloud.google.com/sdk/docs/install" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;code&gt;Add Provider&lt;/code&gt; in the Providers sidebar. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
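Configuring the `gcloud` CLI for step 1 typically looks like this (the project ID below is a placeholder):

```shell
# Authenticate your user account with Google Cloud.
gcloud auth login

# Point the CLI at the project DevPod should create resources in.
gcloud config set project my-gcp-project
```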

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa51eklakt27hbcrrgcwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa51eklakt27hbcrrgcwe.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Select your desired Provider (Here Google Cloud):&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza53vgytvzmijx3p28ka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza53vgytvzmijx3p28ka.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Select the Project ID and your desired region on the UI and click on &lt;code&gt;Add Provider&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcra4yff2x0ofb3prqjxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcra4yff2x0ofb3prqjxo.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Congrats 🎉 Your Provider is now available 👏&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4bsslj26qn952acf06d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4bsslj26qn952acf06d.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 3: Get Started with Your Workspaces
&lt;/h1&gt;

&lt;p&gt;Now, workspaces are where you will code and develop your application, i.e., your development environment. Let’s move to the Workspace tab on your side panel and then click on &lt;code&gt;Create Workspace&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl846lc7hjan81px0scbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl846lc7hjan81px0scbh.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add your Repository URL and choose your preferred IDE among the many options:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb8wl74hli1osgn7vpc3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb8wl74hli1osgn7vpc3.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, click &lt;code&gt;Create Workspace&lt;/code&gt; and the process starts. If you scroll down, you can also configure the workspace name and prebuilds, but we’ll keep those for a more advanced tutorial.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Wait for the daemon to be installed; in a few minutes, your workspace will be running in your browser.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm4ayjyshilsh3379yk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm4ayjyshilsh3379yk9.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Get started coding&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, you can install your required extensions and start developing quickly. Also, if you want to switch your default IDE mid-development, you can easily do that from the Workspaces tab:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4pq34gtq0bgugpk35ll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4pq34gtq0bgugpk35ll.png" alt=" " width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Step 4: Clean up your workspace
&lt;/h1&gt;

&lt;p&gt;Navigate back to your Workspaces tab to see all your running workspaces. Here you have options to Rebuild, Stop, and Delete.&lt;/p&gt;

&lt;p&gt;Choosing Delete will destroy your workspace:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvuhq1d5lbumn9oh8q2m5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvuhq1d5lbumn9oh8q2m5.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm, and it will remove the cloud resources as well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogtfvffgglk7ihqp1mzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fogtfvffgglk7ihqp1mzl.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Thoughts
&lt;/h1&gt;

&lt;p&gt;Having a reliable method to create and reproduce setups can greatly benefit teams. In this tutorial, you learned how effortless it is to create a workspace and use it for isolated development. For more complex scenarios, you can use prebuilds to reduce boot times and templates for more consistent workflows.&lt;/p&gt;

&lt;p&gt;Also, remember that data protection laws can prevent data from leaving your premises, so self-hostable solutions like DevPod are very helpful. For example, here’s a &lt;a href="https://devpod.sh/docs/other-topics/advanced-guides/minikube-vscode-browser" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; on a small-scale representation of on-premises Kubernetes and how you can use DevPod with VS Code in the browser.&lt;/p&gt;

&lt;p&gt;Join the &lt;a href="https://slack.loft.sh/" rel="noopener noreferrer"&gt;DevPod Slack&lt;/a&gt; if you face any issues while trying it out.&lt;/p&gt;

</description>
      <category>devpod</category>
      <category>cloud</category>
      <category>googlecloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Software Delivery Foundation: How you can crack it?</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Fri, 28 Oct 2022 17:29:39 +0000</pubDate>
      <link>https://dev.to/hrittikhere/software-delivery-foundation-how-you-can-crack-it-jfn</link>
      <guid>https://dev.to/hrittikhere/software-delivery-foundation-how-you-can-crack-it-jfn</guid>
      <description>&lt;p&gt;SDF, or Software Delivery Foundation, is the brand new certification from Harness that is intended for software delivery practitioners with at least 1-2 years of experience and about 2-3 months of Harness experience. The certification lets you demonstrate your knowledge of basic software delivery concepts.&lt;/p&gt;

&lt;p&gt;I passed the certification last month, and this blog should help you excel in your attempt as well!&lt;/p&gt;

&lt;h2&gt;
  
  
  Who needs the Certification?
&lt;/h2&gt;

&lt;p&gt;If you’re interviewing with Harness, this will be one of the rounds; it helps you understand the breadth of Harness products you’ll be working on, so the certification acts as an important knowledge base for anyone who wants to work on Harness products or use them.  &lt;/p&gt;

&lt;p&gt;Moreover, if you were already an employee before the certificate launched, this certification gives you a look into all the other awesome Harness products that can complement your team’s work and helps you understand the other modules Harness offers.&lt;/p&gt;

&lt;p&gt;Also, if you’re the curious kind and always love upskilling yourself, this certification will be very helpful for grading yourself and learning new technologies.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Harness Software Delivery Foundation?
&lt;/h2&gt;

&lt;p&gt;The certification program is designed to ensure that Harness’s customers, partners, and employees have the skills and knowledge necessary to be successful with Harness. These goals put the certification in a challenging category, and it will take you some time to prepare if you’re learning the product from scratch.&lt;/p&gt;

&lt;p&gt;The exam is designed for software delivery practitioners, costs $150 with a validity of one year, and consists of four domains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Domain 1: Software Delivery Fundamentals&lt;/li&gt;
&lt;li&gt;Domain 2: Harness Second Generation Architecture&lt;/li&gt;
&lt;li&gt;Domain 3: Security&lt;/li&gt;
&lt;li&gt;Domain 4: Account Level Information&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Passing Criteria
&lt;/h2&gt;

&lt;p&gt;To pass the exam, you need at least 42 of the 60 scored questions correct (70%), so covering all the domains is essential! There’s no negative marking, and you get 65 questions to complete in 90 minutes; 5 of the 65 are unscored, seeded questions.&lt;/p&gt;

&lt;p&gt;Seeded questions earn no marks whether you attempt them or not; they are added to maintain quality and to later decide whether to keep them. You, as a candidate, will have no idea which of the 65 are seeded. :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;p&gt;The prerequisites as mentioned on the website are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic computer skills&lt;/li&gt;
&lt;li&gt;Basic networking skills&lt;/li&gt;
&lt;li&gt;Basic computer security knowledge&lt;/li&gt;
&lt;li&gt;Fundamental understanding of the Linux Operating System and the capability to configure and deploy workloads using command-line interfaces&lt;/li&gt;
&lt;li&gt;Fundamental understanding of and experience configuring and deploying workloads&lt;/li&gt;
&lt;li&gt;Fundamental understanding of containers and the capability to configure and deploy workloads using command-line interfaces&lt;/li&gt;
&lt;li&gt;Fundamental understanding of container runtimes&lt;/li&gt;
&lt;li&gt;Fundamental understanding of DevOps practices and methodologies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my experience, having all of the prerequisites isn’t necessary, but learning them before starting your preparation can save you a lot of time and energy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparation Strategy
&lt;/h2&gt;

&lt;p&gt;Harness has a guide that you can use as a checklist to prepare for the exam! Covering all of the modules isn’t necessary if you have previous knowledge of the product, but if this is your first time, I highly recommend checking all of the content on the list:&lt;a href="https://university.harness.io/exam-prep-sdf/1191834" rel="noopener noreferrer"&gt; https://university.harness.io/exam-prep-sdf/1191834&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydl3eqy6xcl31d7velpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydl3eqy6xcl31d7velpt.png" alt="Recommended Checklist" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don’t miss the courses from Harness University as they are very important to clear your basics and some direct questions from the content can appear in your certification attempt.&lt;/p&gt;

&lt;p&gt;However, the list alone won’t be enough to excel; you also need to focus on a few specific areas:&lt;/p&gt;

&lt;p&gt;Understanding Pricing and what one plan offers on top of another &lt;strong&gt;&lt;em&gt;for all the different modules&lt;/em&gt;&lt;/strong&gt;:&lt;a href="https://harness.io/pricing?module=cd" rel="noopener noreferrer"&gt; https://harness.io/pricing?module=cd#&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Understanding the Drone License and its &lt;strong&gt;&lt;em&gt;limitations&lt;/em&gt;&lt;/strong&gt;: &lt;a href="https://docs.drone.io/enterprise/" rel="noopener noreferrer"&gt;https://docs.drone.io/enterprise/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Organizations and Projects Overview with &lt;strong&gt;&lt;em&gt;Resources Across Scopes&lt;/em&gt;&lt;/strong&gt;:&lt;a href="https://docs.harness.io/article/7fibxie636-projects-and-organizations" rel="noopener noreferrer"&gt; https://docs.harness.io/article/7fibxie636-projects-and-organizations&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Delegates in detail, with &lt;strong&gt;&lt;em&gt;architecture&lt;/em&gt;&lt;/strong&gt;:&lt;a href="https://docs.harness.io/article/h9tkwmkrm7-delegate-installation" rel="noopener noreferrer"&gt; https://docs.harness.io/article/h9tkwmkrm7-delegate-installation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.harness.io/article/2k7lnc7lvl-delegates-overview" rel="noopener noreferrer"&gt;https://docs.harness.io/article/2k7lnc7lvl-delegates-overview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.harness.io/article/lwynqsgxt9-delegate-requirements-and-limitations" rel="noopener noreferrer"&gt;https://docs.harness.io/article/lwynqsgxt9-delegate-requirements-and-limitations&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FAQs for &lt;strong&gt;&lt;em&gt;all&lt;/em&gt;&lt;/strong&gt; the modules:&lt;a href="https://docs.harness.io/article/320domdle1-harness-security-faqs" rel="noopener noreferrer"&gt; https://docs.harness.io/article/320domdle1-harness-security-faqs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SaaS authentication: &lt;a href="https://docs.harness.io/article/gdob5gvyco-authentication-overview" rel="noopener noreferrer"&gt;https://docs.harness.io/article/gdob5gvyco-authentication-overview&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The best way to prepare is to follow the tutorials and try out the steps yourself; this helps you remember the processes and options, since memorizing all the combinations is hard.&lt;/p&gt;

&lt;p&gt;If you get time, going through all of the documentation can be very handy: it helps you understand the bigger picture of the whole platform, which is what the certification aims at, and increases your chances of a good score.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shiny Credly Badge:
&lt;/h2&gt;

&lt;p&gt;Once you sit for the exam and pass, you get the &lt;a href="https://www.credly.com/badges/60fb0176-0c8f-40c6-b92e-fea590dd9046" rel="noopener noreferrer"&gt;shiny badge&lt;/a&gt; that you can brag about on social media! 🥳🥳🥳 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkwy1abupq0gqdzpfzhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkwy1abupq0gqdzpfzhg.png" alt="[Shinny Badge](https://www.credly.com/badges/60fb0176-0c8f-40c6-b92e-fea590dd9046)" width="680" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Some Tips:
&lt;/h2&gt;

&lt;p&gt;All the best for your certification, and remember the following tips to have a good experience:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Schedule your exam at least 72 hours in advance, giving yourself time to schedule a proctor. Read this Guide for more details on desk setup, scheduling and planning: &lt;a href="https://university.harness.io/live-sdf/1268412" rel="noopener noreferrer"&gt;https://university.harness.io/live-sdf/1268412&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Make sure that you are using the current version of Chrome and have downloaded the ProctorU Chrome extension available at &lt;a href="https://protect-us.mimecast.com/s/lXOVCW60Yqt6w1W1t1gPhq?domain=google.com" rel="noopener noreferrer"&gt;http://bit.ly/proctoruchrome&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Keep your ID handy and make sure your desk is clean. &lt;/li&gt;
&lt;li&gt;Use your personal laptop and not the one issued by your organization.&lt;/li&gt;
&lt;li&gt;Consider all the alternatives before settling on one. Some may be misleading.&lt;/li&gt;
&lt;li&gt;If there are any questions that are difficult for you, flag them and schedule time to review them at the end.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To learn more about the certification program, subscribe to the &lt;a href="https://www.youtube.com/@harnesscommunity" rel="noopener noreferrer"&gt;harness-community YouTube channel&lt;/a&gt; and follow along on &lt;a href="https://harness-community.github.io/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; for more updates on the exam! 🤩&lt;/p&gt;

</description>
      <category>certification</category>
      <category>sdf</category>
      <category>harness</category>
    </item>
    <item>
      <title>The DevOps Roadmap: Containerization, Containers and why do you need them?</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Wed, 11 May 2022 11:39:29 +0000</pubDate>
      <link>https://dev.to/hrittikhere/the-devops-roadmap-containerization-containers-and-why-do-you-need-them-350b</link>
      <guid>https://dev.to/hrittikhere/the-devops-roadmap-containerization-containers-and-why-do-you-need-them-350b</guid>
      <description>&lt;p&gt;Containers are a popular term in the industry and help developers develop and deploy apps a lot faster. While by using virtualization, you can run various operating systems on your hardware containerization to run multiple instances or deploy multiple applications using the same operating system on a single virtual machine or server.&lt;/p&gt;

&lt;p&gt;This capacity to run multiple applications on the same resources makes our development life cycle more efficient. In this post, we’ll cover everything you need to know about these unfamiliar terms and how exactly containerization makes our lives easy.&lt;/p&gt;

&lt;p&gt;Let’s dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Containerization?
&lt;/h2&gt;

&lt;p&gt;Containerization is simply packaging the required environment, libraries, frameworks, directories, and application code together to create a container. Citrix defines it as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Containerization is defined as a form of operating system virtualization, through which applications are run in isolated user spaces called containers, all using the same shared operating system (OS). A container is essentially a fully packaged and portable computing environment that already has all the necessary packages to run the application inside.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon0mmco7dgl9qwwhb4z0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon0mmco7dgl9qwwhb4z0.png" alt=" " width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Simply put, the process of creating containers is called containerization.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Container?
&lt;/h2&gt;

&lt;p&gt;The word container originates from contain, and that’s exactly what it does: it contains everything you need to run an application.&lt;/p&gt;

&lt;p&gt;A container packages code and all its dependencies so the application runs quickly, portably, and reliably from one computing environment to another.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does a Container work?
&lt;/h2&gt;

&lt;p&gt;Containers run via a containerization engine on a single host operating system, sharing the OS kernel with other containers. This sharing is achieved by limiting parts of the OS to read-only mode, and it makes containers extremely lightweight, as you don’t need to configure a new operating system for each new container.&lt;/p&gt;
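You can see the kernel sharing for yourself. A quick sketch, assuming Docker is installed on a Linux host:

```shell
# Print the host's kernel version...
uname -r

# ...then print the kernel version inside an Alpine container.
# Both commands report the same kernel, because the container
# shares it with the host instead of booting its own OS.
docker run --rm alpine uname -r
```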

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyepbm3cwca6jet1j1zgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyepbm3cwca6jet1j1zgi.png" alt=" " width="800" height="639"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Docker, a containerization engine, running multiple containers. Image credits: Docker&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you can see, containers differ a lot from &lt;a href="https://web.archive.org/web/20211206070613/https://www.p3r.one/the-devops-roadmap-virtualization/" rel="noopener noreferrer"&gt;virtual machines&lt;/a&gt; (you’d need a separate OS per VM there), and if you want to know more, we have a blog discussing the core differences in detail.&lt;/p&gt;

&lt;p&gt;We would also discuss why VMs are more secure than containers and other drawbacks in the same blog.🤯&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need Containers?
&lt;/h2&gt;

&lt;p&gt;Development is a complicated task, and as a developer, you need to tackle issues and keep many things in mind while you develop. Containers help you keep your focus on the code and worry less about other concerns, like the environment, during development.&lt;/p&gt;

&lt;p&gt;A few reasons you should use a container to make your dev life easy are:&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistent Environment
&lt;/h3&gt;

&lt;p&gt;Development is a critical task, and while deploying an application you must take into account many factors, ranging from libraries and frameworks to network configuration and directory management. Think about how differently Linux and Windows manage their directories.&lt;/p&gt;

&lt;p&gt;Problems arise when the supporting software environment is not identical, says Docker creator Solomon Hykes. “You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen. Or you’ll rely on the behavior of a certain version of an SSL library and another one will be installed. You’ll run your tests on Debian and production is on Red Hat and all sorts of weird things happen.”&lt;/p&gt;

&lt;p&gt;Containers let developers build predictable environments isolated from other applications. The application’s software dependencies, such as particular versions of programming language runtimes and other software libraries, can also be packaged in the container.&lt;/p&gt;

&lt;p&gt;This all works in the developer’s favor. The focus stays on quality code rather than on the bugs that might creep in while migrating code from a developer’s machine to the server.&lt;/p&gt;
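
&lt;p&gt;Here is a minimal sketch of how pinning the environment looks in practice. The Dockerfile below is illustrative (the base image tag and the file names &lt;code&gt;app.py&lt;/code&gt; and &lt;code&gt;requirements.txt&lt;/code&gt; are hypothetical), but it shows the idea: the exact Python version ships with the container, so dev, test, and production can’t drift apart.&lt;/p&gt;

```shell
# Write an illustrative Dockerfile that pins the exact runtime version
printf '%s\n' \
  'FROM python:3.10-slim' \
  'WORKDIR /app' \
  'COPY requirements.txt .' \
  'RUN pip install -r requirements.txt' \
  'COPY app.py .' \
  'CMD ["python", "app.py"]' > Dockerfile

# Build the image with: docker build -t myapp .
```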

&lt;h3&gt;
  
  
  Less overhead
&lt;/h3&gt;

&lt;p&gt;A container can be just tens of megabytes in size, while a virtual machine with its entire operating system may be several gigabytes. Because of this, a single server can host far more containers than virtual machines, which boils down to each individual container consuming fewer resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Just In Time
&lt;/h3&gt;

&lt;p&gt;It can take several minutes for a virtual machine to boot its operating system and begin running the application it hosts, while containerized applications can start almost instantly. This means containers can be instantiated “just in time”, when they are needed, and can disappear when they are no longer required, freeing up resources on their hosts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Modularity
&lt;/h3&gt;

&lt;p&gt;Containers can split applications into modules (such as the database, the application front end, and so on) instead of running an entire complex application within a single container.&lt;/p&gt;

&lt;p&gt;This is the approach of so-called microservices. Applications designed this way are simpler to manage because each module is relatively simple, and improvements can be made to a module without rebuilding the whole application. And since containers are so lightweight, individual modules (or microservices) can be instantiated only when required and are available almost immediately.&lt;/p&gt;
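
&lt;p&gt;A sketch of what this modular split can look like, using a Docker Compose file (the service names and images are illustrative, not from any real project): the web front end and the database each get their own container and can be updated or scaled independently.&lt;/p&gt;

```shell
# Write an illustrative docker-compose.yml: one container per module
printf '%s\n' \
  'services:' \
  '  web:' \
  '    image: myapp-frontend:1.0' \
  '    ports:' \
  '      - "8080:80"' \
  '    depends_on:' \
  '      - db' \
  '  db:' \
  '    image: postgres:15' \
  '    environment:' \
  '      POSTGRES_PASSWORD: example' > docker-compose.yml

# Start the modules together with: docker compose up -d
```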

&lt;h3&gt;
  
  
  Run Anywhere
&lt;/h3&gt;

&lt;p&gt;Containers can run almost anywhere, making development and deployment much easier: on Linux, Windows, and Mac operating systems; on virtual or bare metal machines; on the developer’s machine or on-site data centers; and, of course, in the public cloud.&lt;/p&gt;

&lt;p&gt;Container images, such as Docker images, help enhance portability because they are widespread and well supported.&lt;/p&gt;

&lt;p&gt;You can make use of containers anywhere you want to run your apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Container Image?
&lt;/h2&gt;

&lt;p&gt;To rebuild a container, you need a file or template that contains instructions for what the replica must include.&lt;/p&gt;

&lt;p&gt;A container image is exactly that template. It consists of immutable static files that can be shared, and building from a shared image yields an identical container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6zb9nyaxsxy69og9yja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6zb9nyaxsxy69og9yja.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A Docker image is an example of a container image used to build containers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;An image comes in handy when you need to recreate a consistent working environment within another container without dealing with traditional long and boring manual configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the popular Container Engines?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker’s &lt;/a&gt;open-source containerization engine is the first and still most popular container technology among various competitors. Docker works with most commercial/enterprise products, as well as many open-source tools.&lt;/p&gt;

&lt;p&gt;You can read more about Docker and Docker images in this curated beginner-friendly post.&lt;/p&gt;

&lt;h3&gt;
  
  
  CRI-O
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;CRI-O&lt;/a&gt;, a lightweight alternative to docker, allows you to run containers without any unnecessary code or configuration, directly from &lt;a href="//kubernetes.io"&gt;Kubernetes&lt;/a&gt;, a container management system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kata Containers
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://katacontainers.io/" rel="noopener noreferrer"&gt;Kata Containers&lt;/a&gt; is an open-source container runtime with lightweight virtual machines that feel and function like containers but use hardware virtualization technology as a second layer of protection to provide more robust workload isolation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microsoft Containers
&lt;/h3&gt;

&lt;p&gt;Positioned as an alternative to Linux containers, &lt;a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/" rel="noopener noreferrer"&gt;Microsoft Containers&lt;/a&gt; support Windows workloads in very specific conditions. Typically, they run in an actual virtual machine rather than under a cluster manager like Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts ⭐
&lt;/h2&gt;

&lt;p&gt;Containers are becoming essential as development moves towards cloud-native. The advantages of using containers, like flexibility and agility, are unmatched, and adoption is on the rise.&lt;/p&gt;

&lt;p&gt;I hope this blog helped you understand containerization and containers in depth. If you want to experiment with containers, it’s worth learning the best practices that companies of all sizes follow while adopting them.&lt;/p&gt;

&lt;p&gt;Happy Containerizing!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>linux</category>
    </item>
    <item>
      <title>Prometheus: As Simple As Possible</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Tue, 15 Mar 2022 04:34:31 +0000</pubDate>
      <link>https://dev.to/kcdchennai/prometheus-as-simple-as-possible-2aog</link>
      <guid>https://dev.to/kcdchennai/prometheus-as-simple-as-possible-2aog</guid>
      <description>&lt;p&gt;Distributed systems help an organisation absorb countless benefits but at the cost of complexity. With the rise of the adoption of container orchestrators like Kubernetes, a need for monitoring and alerting systems came.&lt;/p&gt;

&lt;p&gt;One such system is Prometheus, which is famous for being “the Kubernetes monitoring solution.”&lt;/p&gt;

&lt;p&gt;This post will explore Prometheus at a beginner level, without the intricate details that generally scare a novice.&lt;/p&gt;

&lt;p&gt;Let’s Start!&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Prometheus?
&lt;/h2&gt;

&lt;p&gt;SoundCloud first developed Prometheus in 2012 after determining that their existing monitoring tools were insufficient for their needs. The first public release, Prometheus v1, would not arrive until 2016, but it is now fully open source and a CNCF graduated project.&lt;br&gt;
Prometheus is a monitoring and alerting tool built on a time series database. It gathers data from apps and systems and lets you visualise it and set up alerts. We will go deeper into this in a minute, but why do we need monitoring again?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Story Behind Prometheus and the Cloud
&lt;/h2&gt;

&lt;p&gt;Long before you started reading this blog, there lived giant creatures called Monoliths. They were slow and complex; it was hard to understand them and harder to diagnose their problems. Then came an evolution, and micro creatures proved a better option: they had a lot of benefits and far less complexity each. But managing hundreds or thousands of creatures is more demanding than managing one.&lt;/p&gt;

&lt;p&gt;You’re correct!&lt;/p&gt;

&lt;p&gt;So, we need someone to orchestrate the herd and someone to look over or monitor the flock. The orchestrator is Kubernetes, and Prometheus manages the monitoring. Suppose your micro creatures had one telephone in their camp. Prometheus helps you look for how many hours they’re talking, whether they can connect to the telephone service and how many times they can’t call someone. All without going to every individual’s tent and enquiring!&lt;/p&gt;

&lt;p&gt;Your micro creatures are &lt;a href="https://www.p3r.one/the-full-stack-developers-roadmap-part-4-apis/" rel="noopener noreferrer"&gt;microservices &lt;/a&gt;in a container (or Virtual Machine) that performs a job.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prometheus Architecture
&lt;/h2&gt;

&lt;p&gt;The Prometheus Server consists of 3 components:&lt;/p&gt;

&lt;h3&gt;
  
  
  Time series database (TSDB)
&lt;/h3&gt;

&lt;p&gt;All of the metrics get stored in a time series database, which is optimised for time-stamped data and for efficiently querying how values change over time. Simply put, it holds every generated metric for you to reference later.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz97s1qb1rqbl8uiksx45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz97s1qb1rqbl8uiksx45.png" alt=" " width="788" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The architecture of Prometheus and its components Source: Prometheus&lt;/em&gt;&lt;/p&gt;
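
&lt;p&gt;To make the retrieval worker concrete, here is a minimal, illustrative &lt;code&gt;prometheus.yml&lt;/code&gt; (the job name and target port are hypothetical): it tells the server to scrape each listed target every 15 seconds.&lt;/p&gt;

```shell
# Write a minimal illustrative Prometheus scrape configuration
printf '%s\n' \
  'global:' \
  '  scrape_interval: 15s' \
  'scrape_configs:' \
  "  - job_name: 'my-app'" \
  '    static_configs:' \
  "      - targets: ['localhost:8000']" > prometheus.yml
```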

&lt;h3&gt;
  
  
  Data Retrieval Worker
&lt;/h3&gt;

&lt;p&gt;It collects (by pulling/scraping) metrics from external sources and stores them in the TSDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  Web UI
&lt;/h3&gt;

&lt;p&gt;It provides a simple web interface for configuring and querying the database, helping you visualise your queries. The server gives us centralised management and configuration, and keeps track of new data arriving from each unique target.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Prometheus work?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Querying and Scraping
&lt;/h3&gt;

&lt;p&gt;Prometheus regularly scrapes metrics over HTTP from endpoints exposed by target systems that have a client library installed. So, to collect metrics, you don’t need to install custom software or configure anything on your physical servers or container images.&lt;/p&gt;

&lt;p&gt;This pull-based approach used by Prometheus is better than the traditional push-based approach because:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"You can run your monitoring on your laptop when developing changes. You can more easily tell if a target is down.&lt;br&gt;
You can manually go to a target and inspect its health with a web browser." — Prometheus — FAQ&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Service discovery
&lt;/h3&gt;

&lt;p&gt;Prometheus was built from the ground up to function in dynamic environments like Kubernetes, and it requires very minimal configuration when first installed. It performs automatic discovery of running services to make a “best guess” at what it should be monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e7o8apfbcv6mrd0dwly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e7o8apfbcv6mrd0dwly.png" alt=" " width="800" height="476"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Prometheus Exporters Source: Devconnected&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshots and Querying
&lt;/h3&gt;

&lt;p&gt;Now that your data is scraped, what’s next? We want to store the metrics.&lt;/p&gt;

&lt;p&gt;Prometheus stores a snapshot of each scraped metric as a database record. You can use PromQL queries in the Prometheus web UI, or other tools like Grafana, to query and analyse these snapshots and explore how metric data evolves over time.&lt;/p&gt;

&lt;p&gt;You can also attach labels to your metrics to manage them. You no longer have to hunt for metric names manually after changes, because you can search by label. How cool is that?&lt;/p&gt;

&lt;h3&gt;
  
  
  Different Types of Metrics
&lt;/h3&gt;

&lt;p&gt;A metric is a specific set of data from an endpoint. It carries &lt;code&gt;TYPE&lt;/code&gt; and &lt;code&gt;HELP&lt;/code&gt; attributes, which are described below.&lt;/p&gt;

&lt;h3&gt;
  
  
  HELP
&lt;/h3&gt;

&lt;p&gt;A HELP token is expected to be followed by another token, the metric name; all of the remaining tokens form the docstring for that metric. Each HELP line should refer to a single metric name.&lt;/p&gt;

&lt;h3&gt;
  
  
  TYPE
&lt;/h3&gt;

&lt;p&gt;If the token is TYPE, two more tokens are required. The first is the metric name, and the second is a qualifier specifying what kind of metric it is (for example, a counter, gauge, histogram, summary, or unknown type). A single TYPE line is allowed for each metric name, and it must appear before the first sample of that metric. If a metric name has no TYPE line, its type is left as &lt;code&gt;untyped&lt;/code&gt;.&lt;/p&gt;
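
&lt;p&gt;To make HELP and TYPE concrete, here is a sketch of what a target’s &lt;code&gt;/metrics&lt;/code&gt; output looks like in the text exposition format (the metric name and labels are illustrative):&lt;/p&gt;

```shell
# Write a small sample of the Prometheus text exposition format
printf '%s\n' \
  '# HELP http_requests_total The total number of HTTP requests.' \
  '# TYPE http_requests_total counter' \
  'http_requests_total{method="get",code="200"} 1027' \
  'http_requests_total{method="post",code="200"} 3' > metrics.txt

# Note that the TYPE line appears before the first sample of the metric
grep '^# TYPE' metrics.txt
```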

&lt;h2&gt;
  
  
  Metric Types
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Counter
&lt;/h3&gt;

&lt;p&gt;A counter simply counts things, such as the number of errors or the number of requests. This type is recommended unless your metric value can fluctuate and go down.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gauge
&lt;/h3&gt;

&lt;p&gt;It is ideal for metrics that fluctuate, like CPU utilisation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Histogram
&lt;/h3&gt;

&lt;p&gt;A histogram counts observations into specific configured buckets. It also exposes the sum of all observed values, which makes it one of the more challenging metric types to read.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;For every observation, a summary takes a sample (usually things like request durations and response sizes). It likewise gives you the total number of observations and the sum of all observed values, and it can generate a configurable number of quantiles over a sliding time window.&lt;/p&gt;

&lt;p&gt;Only these four sorts of metrics (Counter, Gauge, Summary and Histogram) are available, so choose the one that best fits your purpose.&lt;/p&gt;
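
&lt;p&gt;As a side-by-side sketch, here is one illustrative exposition sample for each of the four types (all metric names and values are made up for the example):&lt;/p&gt;

```shell
# Write one illustrative sample per metric type
printf '%s\n' \
  '# TYPE http_requests_total counter' \
  'http_requests_total 1027' \
  '# TYPE cpu_usage_ratio gauge' \
  'cpu_usage_ratio 0.47' \
  '# TYPE request_duration_seconds histogram' \
  'request_duration_seconds_bucket{le="0.5"} 129' \
  'request_duration_seconds_sum 53.4' \
  'request_duration_seconds_count 144' \
  '# TYPE response_size_bytes summary' \
  'response_size_bytes{quantile="0.99"} 76656' \
  'response_size_bytes_sum 1759284' \
  'response_size_bytes_count 2693' > types.txt

# List each metric name with its declared type
awk '/^# TYPE/ {print $3, $4}' types.txt
```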

&lt;h3&gt;
  
  
  Fail Proofs
&lt;/h3&gt;

&lt;p&gt;When “pulling” metrics is not possible (for example, short-lived jobs that will not live long enough to be scraped), Prometheus provides a Pushgateway that allows applications to still push metric data if necessary. Essentially, we get the best of both worlds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Alerting
&lt;/h3&gt;

&lt;p&gt;You can use &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/" rel="noopener noreferrer"&gt;PromQL&lt;/a&gt;, the query language used by Prometheus, to retrieve metrics from the database, and visualise them with the Prometheus web UI, API clients, or Grafana. It is even possible to set up external alerting services like PagerDuty and email via Alertmanager.&lt;/p&gt;
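
&lt;p&gt;A minimal sketch of an alerting rule that Alertmanager can then route (the group name, alert name, and threshold are hypothetical): it fires when a scrape target has been unreachable for five minutes, using the built-in &lt;code&gt;up&lt;/code&gt; metric.&lt;/p&gt;

```shell
# Write an illustrative Prometheus alerting rule file
printf '%s\n' \
  'groups:' \
  '  - name: availability' \
  '    rules:' \
  '      - alert: TargetDown' \
  '        expr: up == 0' \
  '        for: 5m' \
  '        labels:' \
  '          severity: critical' \
  '        annotations:' \
  '          summary: "Instance {{ $labels.instance }} is down"' > alert-rules.yml
```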

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Prometheus is popular, and everyone in the CNCF landscape knows it. Its features make it one of the most promising monitoring systems, beating the likes of &lt;a href="https://aws.amazon.com/cloudwatch/" rel="noopener noreferrer"&gt;Amazon CloudWatch&lt;/a&gt;, &lt;a href="https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview" rel="noopener noreferrer"&gt;Application Insights&lt;/a&gt;, and New Relic, which are push-based platforms. That said, Prometheus itself states that pull versus push shouldn’t be the deciding factor when choosing a monitoring system. And even though Prometheus serves as Kubernetes’ best friend, it’s capable of much more!&lt;/p&gt;

&lt;p&gt;If you want to get started with Prometheus, no place is better than the &lt;a href="https://prometheus.io/docs/introduction/first_steps/" rel="noopener noreferrer"&gt;First Steps with Prometheus documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cncf</category>
      <category>prometheus</category>
    </item>
    <item>
      <title>Helm: Package Manager for k8s</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Sun, 13 Feb 2022 07:31:19 +0000</pubDate>
      <link>https://dev.to/hrittikhere/helm-package-manager-for-k8s-58hp</link>
      <guid>https://dev.to/hrittikhere/helm-package-manager-for-k8s-58hp</guid>
      <description>&lt;p&gt;Kubernetes was started inside Google to provide a layer of abstractions with containers for the modern infrastructure. Now, the technology is adopted by the masses and has become a de facto standard for any cloud native application. The open source system provides management, deployment, and scaling of your containers.&lt;/p&gt;

&lt;p&gt;Kubernetes is hard to beat in orchestration, but one of the most significant drawbacks is its lack of reproducibility. Here comes Helm: A package manager for Kubernetes and a CNCF Graduate Project.&lt;/p&gt;

&lt;p&gt;I was thrilled to speak at &lt;a href="https://www.meetup.com/Data-on-Kubernetes-community/events/283335251/" rel="noopener noreferrer"&gt;DoK Talks on the 114 Edition&lt;/a&gt; about Helm and how it tackles the reproducibility problem for Kubernetes. This post will be a summary of the beginner focused event which took place on 27th January 2022. &lt;/p&gt;

&lt;p&gt;You don't need to go through the recordings: this is a very extensive summary, and reading it will give you a basic understanding of Helm!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What’s the reproducibility problem?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After you have deployed your application with numerous objects: Deployments, Services, ConfigMaps, etc., how do you help your friend get to a similar state? Of course, you will share your YAML files with your friend. Correct?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yes and No!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Copying and pasting works for a small application, but what if your application is a full-stack web app with hundreds of configuration files? Can you still copy them? No, you can’t; at that scale, the process is far too error-prone.&lt;/p&gt;

&lt;p&gt;Even if you send the object manifests to your friend, one question remains: how would your friend adapt the configuration from your site to theirs, or change how many resources each application consumes? Will they go through the 100 manifests? No; that’s not scalable, it’s prone to errors, and it wastes a lot of time.&lt;/p&gt;

&lt;p&gt;Now suppose there’s a security flaw in one of your dependencies. How will you update it? Hunt down your YAML, or edit your live resources? Absolutely not!&lt;/p&gt;

&lt;p&gt;You need a saviour. You need Helm to the rescue.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Helm?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; is your saviour for the reproducibility problem, a package manager, and a CNCF graduate project. It was launched in 2016 and has seen massive adoption among organizations, individuals since then. Under the &lt;a href="https://www.cncf.io/" rel="noopener noreferrer"&gt;CNCF&lt;/a&gt; Umbrella, Helm has become the de facto package manager for your clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What does Helm do?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Helm helps you to achieve reproducibility in the following ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides an easy way to deploy complex applications&lt;/li&gt;
&lt;li&gt;Provides an easy way to update specific values for your deployments&lt;/li&gt;
&lt;li&gt;Provides a way to version a particular package&lt;/li&gt;
&lt;li&gt;Provides a way to share your templates across your organisation or the Internet&lt;/li&gt;
&lt;li&gt;Provides an easy way to manage dependencies&lt;/li&gt;
&lt;li&gt;Provides an easy way to roll back changes&lt;/li&gt;
&lt;/ul&gt;
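
&lt;p&gt;As a hedged sketch of what these points look like on the command line (assuming the Helm CLI is installed; the release name &lt;code&gt;my-web&lt;/code&gt;, the chart, and the overridden value are illustrative):&lt;/p&gt;

```shell
# Deploy a complex application in one command
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-web bitnami/nginx

# Update a specific value for the deployment
helm upgrade my-web bitnami/nginx --set replicaCount=3

# Inspect the versioned releases and roll back a change
helm history my-web
helm rollback my-web 1
```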

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture of Helm&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Helm repository contains all the charts (packages) created by you or others that you can use to reach the desired state. The &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Helm CLI&lt;/a&gt; pulls the package, unarchives it, and converts the charts to valid YAML, which is then pushed to the &lt;a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/" rel="noopener noreferrer"&gt;Kubernetes API&lt;/a&gt; server, creating a release.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcd8n5arytumko2zb9v7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcd8n5arytumko2zb9v7.png" alt="Helm Architecture" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Basic Components of Helm Charts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A basic Helm Chart has the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package-name/

charts/

templates/

Chart.yaml

values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;charts/&lt;/code&gt;: This directory can be used to store manually maintained chart dependencies.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;templates/&lt;/code&gt;: These contain the template files which would be used to create the final manifest after combining with &lt;code&gt;values.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Chart.yaml&lt;/code&gt;: This file contains information about the chart, such as the name and version of the chart, the maintainer, dependencies, a related website, and search terms.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;values.yaml&lt;/code&gt;: This contains the default configuration for your charts. You can edit it to update values, removing the complexity of hunting for specific editable items across the different manifests.&lt;/p&gt;
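
&lt;p&gt;A common pattern, sketched here with hypothetical keys (which values a chart accepts is chart-specific, so check its own &lt;code&gt;values.yaml&lt;/code&gt;), is to keep only your overrides in a separate file and layer it on top of the defaults at install time.&lt;/p&gt;

```shell
# Write a small override file containing only the values to change
printf '%s\n' \
  'replicaCount: 3' \
  'image:' \
  '  tag: "1.25"' > custom-values.yaml

# Layer the overrides on top of the chart defaults at install time:
#   helm install my-release bitnami/nginx -f custom-values.yaml
```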

&lt;p&gt;The example below shows a &lt;code&gt;deployment.yaml&lt;/code&gt; from &lt;code&gt;templates&lt;/code&gt; being rendered with the custom values from &lt;code&gt;values.yaml&lt;/code&gt; to produce the valid YAML.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp49vakkr1ffnsmv9081.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp49vakkr1ffnsmv9081.png" alt="Final Manifest Creation" width="800" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Helm Templates Creation with values.yaml&lt;/p&gt;

&lt;h2&gt;
  
  
  How to edit the default values?
&lt;/h2&gt;

&lt;p&gt;Manually pulling a chart and unzipping it to edit your &lt;code&gt;values.yaml&lt;/code&gt; is not that straightforward. &lt;a href="https://www.portainer.io/solutions/kubernetes-ui" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt; does all the heavy lifting for you and lets you get straight to editing the default configuration.&lt;/p&gt;

&lt;p&gt;First, navigate to Helm from the menu and add a repository. In a standard installation of Helm, you need to add a repository yourself, but Portainer includes &lt;a href="https://bitnami.com/stacks/helm" rel="noopener noreferrer"&gt;Bitnami&lt;/a&gt; by default and lets you add more when required. Select a namespace and an application name, then select the chart you want to deploy to your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bmph9fk2byd220b3w6u.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bmph9fk2byd220b3w6u.gif" alt="Portainer Overview" width="760" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once selected, you can navigate to a chart, and Portainer will load its values on your dashboard for you to edit. Editing values is straightforward: a simple GUI abstracts away the complexity you would face with manual installation and initialization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxntcwh352x0aljwneor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxntcwh352x0aljwneor.png" alt="Editing Default Values" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Install&lt;/code&gt; button installs your chart with the specified values to your cluster. If you want to go with the default values, click &lt;code&gt;Install&lt;/code&gt; without editing the values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uqdy32px2j25ylhhliz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uqdy32px2j25ylhhliz.png" alt="Exposed Ports and Secrets" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After installation, Portainer detects and shows you &lt;code&gt;Published URLs&lt;/code&gt; to access your applications and secrets for you to access default passwords. Forget digging through commands to get to your Services and Secrets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Helm abstracts the complexity of installing applications to your cluster, and Portainer abstracts the complexity of managing your cluster. This post went through how Portainer can help you simplify your Kubernetes workflows with Helm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.portainer.io/pricing/take5" rel="noopener noreferrer"&gt;Try Portainer now&lt;/a&gt; and learn more about the different ways to streamline managing Kubernetes with our &lt;a href="https://docs.portainer.io/v/ce-2.11/user/kubernetes" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Portainer Documentation for Helm here: &lt;a href="https://docs.portainer.io/v/ce-2.11/user/kubernetes/helm" rel="noopener noreferrer"&gt;https://docs.portainer.io/v/ce-2.11/user/kubernetes/helm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recordings are here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/3zXgLght57s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>helm</category>
      <category>cncf</category>
      <category>devops</category>
    </item>
    <item>
      <title>DVC (Git For Data): A Complete Intro</title>
      <dc:creator>Hrittik Roy</dc:creator>
      <pubDate>Thu, 09 Sep 2021 11:13:22 +0000</pubDate>
      <link>https://dev.to/hrittikhere/dvc-git-for-data-a-complete-intro-4626</link>
      <guid>https://dev.to/hrittikhere/dvc-git-for-data-a-complete-intro-4626</guid>
      <description>&lt;p&gt;As a data scientist or ML engineer, have you ever faced the inconvenience of experimenting with the model? When we train the model, the model file is generated. Now, if you want to experiment with some different parameters or data, generally people rename the existing model file and train the model again and this process goes on. &lt;/p&gt;

&lt;p&gt;The final directory looks something as shown in the below figure 😁. The same thing goes with data as well. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3js4h247xd7nmnmrn3x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3js4h247xd7nmnmrn3x2.png" alt="Alt Text" width="400" height="350"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 1: Typical models dir after experimentation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But we don’t have this problem with code, because we have Git to version it: I can create a separate branch to change some code and observe its behaviour without altering the original. What if we could get a Git-like version control system for data and models? Wouldn’t that be amazing? Here comes DVC (Data Version Control), which does exactly that and solves this problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xrwbu5t7z0waek0vozf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xrwbu5t7z0waek0vozf.png" alt="Alt Text" width="800" height="1121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That said, DVC is much more than data and model versioning: it also helps build end-to-end pipelines, capture pipeline metrics, and experiment with ML models. In this blog, however, we will focus on the data versioning side of DVC.&lt;/p&gt;
&lt;h2&gt;
  
  
  Installing DVC
&lt;/h2&gt;

&lt;p&gt;DVC is distributed as a typical Python package and can be installed using package managers like &lt;strong&gt;pip&lt;/strong&gt; or &lt;strong&gt;conda&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To install DVC using pip, run the command below in a terminal where Python is available.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;dvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To install DVC using conda, run the commands below in a conda-enabled terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;conda &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; conda-forge mamba &lt;span class="c"&gt;# installs much faster than conda&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;mamba &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; conda-forge dvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apart from these conventional techniques, DVC can be installed in several other ways, as described &lt;a href="https://dvc.org/doc/install" rel="noopener noreferrer"&gt;here&lt;/a&gt; on the official documentation website.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Code versioning is not really new to us because we use Git daily, but the problem with Git is that it becomes extremely slow when tracked files are huge, say 100 GB. That is where DVC comes in: Git tracks changes in code, whereas DVC tracks changes in data and models. &lt;/p&gt;

&lt;p&gt;The foundation of DVC consists of a few commands you can run along with Git to track large files, directories, or ML model files. In short, you can call DVC &lt;strong&gt;"Git for data"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let us understand data versioning with DVC using a simple demo project. Assume we want to create an end-to-end pipeline to prepare, train, and evaluate a model on the MNIST dataset, and that the code and data are structured as shown in the figure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjt8gex4hjj4873v2ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjt8gex4hjj4873v2ui.png" alt="MNIST pipeline file structure" width="492" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 2 MNIST pipeline file structure.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;By observing the file structure, we can see the MNIST dataset is stored in the &lt;strong&gt;data&lt;/strong&gt; directory as a &lt;strong&gt;data.xml&lt;/strong&gt; file and the code corresponding to every stage of the pipeline is stored in the &lt;strong&gt;src&lt;/strong&gt; directory. Now as we understand, the code tracking (src) is the responsibility of Git and the data tracking (data) is the responsibility of DVC.&lt;/p&gt;

&lt;p&gt;Before we can version the data, we need to initialise the repository as a DVC repository. Just as &lt;code&gt;git init&lt;/code&gt; initialises the current directory as a Git repository, the command below initialises it as a DVC repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a new &lt;strong&gt;.dvc&lt;/strong&gt; directory, just as &lt;code&gt;git init&lt;/code&gt; creates a &lt;strong&gt;.git&lt;/strong&gt; directory. Two files, &lt;strong&gt;.gitignore&lt;/strong&gt; and &lt;strong&gt;config&lt;/strong&gt;, are generated inside &lt;strong&gt;.dvc&lt;/strong&gt;. We have now successfully initialised the directory as a DVC repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start Tracking
&lt;/h2&gt;

&lt;p&gt;In Git, to have changes in code tracked, we run &lt;code&gt;git add&lt;/code&gt;, which stages those changes in the local repository. DVC has a similar command, shown below, to start tracking data files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc add data/data.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the &lt;code&gt;dvc add&lt;/code&gt; command accepts, as space-separated arguments, the names of the data or model files you want DVC to track.&lt;/p&gt;

&lt;p&gt;By executing this command, DVC will create two files &lt;strong&gt;data/data.xml.dvc&lt;/strong&gt; and &lt;strong&gt;data/.gitignore&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;File &lt;strong&gt;data/.gitignore&lt;/strong&gt; looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/data.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Git not to track the original &lt;strong&gt;data.xml&lt;/strong&gt; file, which is expected: tracking &lt;strong&gt;data.xml&lt;/strong&gt; is now the responsibility of DVC, not Git.&lt;/p&gt;

&lt;p&gt;Files ending with the .dvc extension are special files that contain the information needed to locate the actual data in storage and cache. If you take a look at data/data.xml.dvc, it essentially stores an MD5 hash, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;md5: a7cd139231cc35ed63541ce3829b96db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Looking at the complete picture: Git will not track &lt;strong&gt;data/data.xml&lt;/strong&gt; because it is listed in &lt;strong&gt;data/.gitignore&lt;/strong&gt;, but Git will track &lt;strong&gt;data/data.xml.dvc&lt;/strong&gt;, which records where the actual &lt;strong&gt;data.xml&lt;/strong&gt; is stored.&lt;/p&gt;

&lt;p&gt;The two Git-tracked files mentioned above will live in a Git remote repository on GitHub, Bitbucket, or GitLab, while the DVC-tracked data file will live in a DVC remote repository on any of the storage systems described in the next section.&lt;/p&gt;
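&lt;p&gt;To keep code and data versions in sync, a natural next step is to commit those two Git-tracked files. A minimal sketch, assuming the paths from the example above:&lt;/p&gt;

```shell
# Commit the DVC pointer file and the generated .gitignore so that
# this Git commit records exactly which version of data.xml it
# corresponds to. (Paths follow the example above.)
git add data/data.xml.dvc data/.gitignore
git commit -m "Start tracking data.xml with DVC"
```

&lt;p&gt;From here on, every Git commit pins a specific data version via the MD5 hash stored in the &lt;strong&gt;.dvc&lt;/strong&gt; file.&lt;/p&gt;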

&lt;p&gt;Once we do this, the directory structure for the MNIST pipeline project looks something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx5aapkziw3gliou0ds0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftx5aapkziw3gliou0ds0.png" alt="File structure after dvc add" width="520" height="796"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig 3 File structure after dvc add&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Storing &amp;amp; Sharing
&lt;/h2&gt;

&lt;p&gt;The actual data files (which Git does not track) are stored on a storage backend that acts as the DVC remote repository. This can be an AWS S3 bucket, Google Drive, a Google Cloud Storage bucket, Azure storage, Object Storage Service, or a custom file system. Depending on where you store the data, you will need to install an extra dependency such as &lt;code&gt;dvc-s3&lt;/code&gt;, &lt;code&gt;dvc-azure&lt;/code&gt;, &lt;code&gt;dvc-gdrive&lt;/code&gt;, &lt;code&gt;dvc-gs&lt;/code&gt;, &lt;code&gt;dvc-oss&lt;/code&gt;, or &lt;code&gt;dvc-ssh&lt;/code&gt;. You can learn more about installation &lt;a href="https://dvc.org/doc/install" rel="noopener noreferrer"&gt;here&lt;/a&gt; on the documentation website.&lt;/p&gt;

&lt;p&gt;To push data files to the DVC repository, we first need to configure the remote where the data will be stored. We can do that using the commands below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc remote add &lt;span class="nt"&gt;-d&lt;/span&gt; storage s3://mybucket/dvcstore
&lt;span class="nv"&gt;$ &lt;/span&gt;git add .dvc/config
&lt;span class="nv"&gt;$ &lt;/span&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Configure remote storage"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;dvc remote add&lt;/code&gt; command registers the remote where the data will be stored. It writes this remote's information to the &lt;strong&gt;.dvc/config&lt;/strong&gt; file, which Git tracks; that is why we also commit the changes made to &lt;strong&gt;.dvc/config&lt;/strong&gt;.&lt;/p&gt;
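&lt;p&gt;For reference, after the commands above, the &lt;strong&gt;.dvc/config&lt;/strong&gt; file would contain something along these lines (a sketch based on the bucket URL used above):&lt;/p&gt;

```ini
[core]
    remote = storage
['remote "storage"']
    url = s3://mybucket/dvcstore
```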

&lt;p&gt;Adding the remote doesn't automatically push the data to the DVC remote repository. We can push it using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pushes all the data and model files that have been added with &lt;code&gt;dvc add&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d7pvewpr30jjwke355m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d7pvewpr30jjwke355m.png" alt="DVC" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Retrieving
&lt;/h2&gt;

&lt;p&gt;There is a reason &lt;code&gt;dvc remote add&lt;/code&gt; stores the remote information in &lt;strong&gt;.dvc/config&lt;/strong&gt;: the next time somebody clones this Git repository, it will not contain any data. It will only have the MD5 hash of the data in the corresponding &lt;strong&gt;.dvc&lt;/strong&gt; file, plus the DVC remote repository location stored in the &lt;strong&gt;.dvc/config&lt;/strong&gt; file. The complete data can then be pulled using the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc pull
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pulls all the data and model files stored in the DVC remote repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  ML Pipeline &amp;amp; Versioning
&lt;/h2&gt;

&lt;p&gt;While working with DVC pipelines, we create two files: &lt;code&gt;dvc.yaml&lt;/code&gt;, which describes the stages of the ML pipeline, and &lt;code&gt;params.yaml&lt;/code&gt;, which holds the parameters those stages use. You can take a look at the structure of both files in the &lt;a href="https://github.com/iterative/get-started-experiments" rel="noopener noreferrer"&gt;example repository&lt;/a&gt;.  &lt;/p&gt;
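&lt;p&gt;As an illustration, here is a minimal sketch of what &lt;code&gt;dvc.yaml&lt;/code&gt; could look like for a prepare-and-train pipeline. The stage names, scripts, and dependency paths below are assumptions for this sketch, not taken from the example repository:&lt;/p&gt;

```yaml
# dvc.yaml -- declares the pipeline stages, their dependencies,
# the parameters they read, and the outputs they produce.
stages:
  prepare:
    cmd: python src/prepare.py
    deps:
      - src/prepare.py
      - data/data.xml
    params:
      - prepare.test_split
    outs:
      - data/prepared
  train:
    cmd: python src/train.py
    deps:
      - src/train.py
      - data/prepared
    params:
      - train.learning_rate
    outs:
      - model.pkl
```

&lt;p&gt;A matching &lt;code&gt;params.yaml&lt;/code&gt; would then simply hold the values: a &lt;code&gt;prepare&lt;/code&gt; section with &lt;code&gt;test_split&lt;/code&gt; and a &lt;code&gt;train&lt;/code&gt; section with &lt;code&gt;learning_rate&lt;/code&gt;.&lt;/p&gt;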

&lt;p&gt;DVC is flexible enough that &lt;code&gt;dvc exp run&lt;/code&gt; runs the complete pipeline with a single command, and the command can do more than that.&lt;/p&gt;

&lt;p&gt;If you want to run your experiments with different parameters, you can override the values of the parameters defined in &lt;code&gt;params.yaml&lt;/code&gt;, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc exp run &lt;span class="nt"&gt;-S&lt;/span&gt; prepare.test_split&lt;span class="o"&gt;=&lt;/span&gt;0.2 &lt;span class="nt"&gt;-S&lt;/span&gt; train.learning_rate&lt;span class="o"&gt;=&lt;/span&gt;0.0001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that whenever you run experiments using the above command, DVC caches the outputs so that only the stages affected by the changed parameters are re-run. Moreover, if one of the stages in the pipeline is very time-consuming and you want to checkpoint it, so that the program can resume from where it crashed, you can define &lt;code&gt;checkpoint: true&lt;/code&gt; on the &lt;strong&gt;output&lt;/strong&gt; of that stage in the &lt;code&gt;dvc.yaml&lt;/code&gt; file, and DVC will keep checkpointing your results for you.&lt;/p&gt;
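&lt;p&gt;For instance, a hypothetical time-consuming &lt;code&gt;train&lt;/code&gt; stage could mark its output for checkpointing like this (a sketch; the stage name and script path are assumptions):&lt;/p&gt;

```yaml
stages:
  train:
    cmd: python src/train.py
    deps:
      - src/train.py
    outs:
      # Marking the output as a checkpoint lets `dvc exp run` resume
      # from the last saved state instead of restarting the stage.
      - model.pkl:
          checkpoint: true
```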

&lt;p&gt;DVC keeps track of all the experiments you perform, and you can see the history of previously run experiments using &lt;code&gt;dvc exp show&lt;/code&gt;. You can filter this history as well: for example, &lt;code&gt;dvc exp show --include-params=train&lt;/code&gt; only shows past experiments in which a parameter of the &lt;code&gt;train&lt;/code&gt; stage was changed. You can also compare two experiments' parameters and resulting metrics using &lt;code&gt;dvc exp diff exp-1dad0 exp-1df77&lt;/code&gt;, where &lt;strong&gt;exp-1dad0&lt;/strong&gt; and &lt;strong&gt;exp-1df77&lt;/strong&gt; are experiment tags as listed by &lt;code&gt;dvc exp show&lt;/code&gt;. Moreover, you can return to any previously run experiment using &lt;code&gt;dvc exp apply&lt;/code&gt;: for example, running &lt;code&gt;dvc exp apply exp-1dad0&lt;/code&gt; changes the files according to the parameter changes of that particular experiment.&lt;/p&gt;

&lt;p&gt;DVC also gives us the flexibility to commit an experiment to a separate Git branch. As shown below, the &lt;code&gt;dvc exp branch&lt;/code&gt; command commits the experiment &lt;code&gt;exp-1dad0&lt;/code&gt; to a separate branch &lt;code&gt;my_branch&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dvc exp branch exp-1dad0 my_branch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can commit different experiments to different branches simply by switching branches, as described below. &lt;/p&gt;

&lt;p&gt;Just as we switch between versions of code by changing the Git branch, we can also switch between versions of the data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git checkout experiment_v2
&lt;span class="nv"&gt;$ &lt;/span&gt;dvc checkout data/data.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changing the version of the data is now as simple as changing the Git branch. Note that to switch to another version of the data, we first need to switch to the Git branch where that version resides. Also note that you must run &lt;code&gt;dvc checkout&lt;/code&gt; on the files whose data version you want to switch, since &lt;code&gt;git checkout&lt;/code&gt; only changes the code version.&lt;/p&gt;

&lt;h2&gt;
  
  
  How data versioning simplifies experimentation
&lt;/h2&gt;

&lt;p&gt;As described at the beginning of this blog, we no longer need to keep multiple renamed copies of model files. We can have a single model file that is versioned with DVC during experimentation, as the image below illustrates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvcs653tp3869ud6at99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvcs653tp3869ud6at99.png" alt="Model versioning for experimentation" width="428" height="376"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Fig 4 Model versioning for experimentation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Suppose we are working in Git branch &lt;strong&gt;v1&lt;/strong&gt;, where the hyper-parameter &lt;strong&gt;n_estimators&lt;/strong&gt; used to train the model has the value &lt;strong&gt;50&lt;/strong&gt;, and we have trained a model stored as &lt;strong&gt;model.pkl&lt;/strong&gt;. Now we want to experiment with the value &lt;strong&gt;100&lt;/strong&gt; for the same hyper-parameter. We no longer need to rename the old model file: we switch to another Git branch v2 using &lt;code&gt;git checkout v2&lt;/code&gt;, switch to the corresponding model file version using &lt;code&gt;dvc checkout model.pkl&lt;/code&gt;, change the value of &lt;strong&gt;n_estimators&lt;/strong&gt; to &lt;strong&gt;100&lt;/strong&gt; in the code on the &lt;strong&gt;v2&lt;/strong&gt; branch, and train the model again. The new &lt;strong&gt;model.pkl&lt;/strong&gt; file generated on the &lt;strong&gt;v2&lt;/strong&gt; branch corresponds to &lt;code&gt;n_estimators = 100&lt;/code&gt;, while the &lt;strong&gt;model.pkl&lt;/strong&gt; file on the &lt;strong&gt;v1&lt;/strong&gt; branch corresponds to &lt;code&gt;n_estimators = 50&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Although this is a very small experiment, the same approach applies to complex experimentation processes. With data versioning, experimentation becomes very flexible: we no longer need to worry about tracking data and model files ourselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Large datasets versioning
&lt;/h2&gt;

&lt;p&gt;When the dataset is too large, we need a mechanism that is efficient in both space and performance to share different versions of the data. That is why it is suggested to store the data on a shared volume or an external system. DVC supports both mechanisms, as described below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://dvc.org/doc/use-cases/fast-data-caching-hub#example-shared-development-server" rel="noopener noreferrer"&gt;shared cache&lt;/a&gt; can be set up to store, version, and access a lot of data on a large shared volume efficiently.&lt;/li&gt;
&lt;li&gt;As described earlier, the more advanced approach is to store and version the data directly in remote storage (e.g. an S3 bucket, Google Drive, a GCP storage bucket, etc.). See &lt;a href="https://dvc.org/doc/user-guide/managing-external-data" rel="noopener noreferrer"&gt;here&lt;/a&gt; to learn how to configure external data storage for DVC.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this, I hope you got a quick overview of how DVC does data versioning. DVC also has features to streamline ML reproducibility and to support provisioning through cloud providers. This comes in handy for the entire MLOps community, which can now collaborate without worrying about monolithic tools.&lt;/p&gt;

&lt;p&gt;If you would like to learn more and get your hands dirty, &lt;a href="https://dvc.org/doc/start" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt; is a great place to start.&lt;/p&gt;

&lt;p&gt;If you prefer videos, they also have a &lt;a href="https://www.youtube.com/channel/UC37rp97Go-xIX3aNFVHhXfQ" rel="noopener noreferrer"&gt;YouTube channel with detailed tutorials&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Happy Versioning!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>monitoring</category>
    </item>
  </channel>
</rss>
