<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yash Hegde</title>
    <description>The latest articles on DEV Community by Yash Hegde (@yh010).</description>
    <link>https://dev.to/yh010</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F891259%2F8be1ad1a-9a1b-4906-becf-4351e770641a.png</url>
      <title>DEV Community: Yash Hegde</title>
      <link>https://dev.to/yh010</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yh010"/>
    <language>en</language>
    <item>
      <title>🚀Navigating the Microservices Universe with Layer5: A Beginner's Guide</title>
      <dc:creator>Yash Hegde</dc:creator>
      <pubDate>Fri, 20 Jan 2023 07:06:21 +0000</pubDate>
      <link>https://dev.to/yh010/navigating-the-microservices-universe-with-layer5-a-beginners-guide-4959</link>
      <guid>https://dev.to/yh010/navigating-the-microservices-universe-with-layer5-a-beginners-guide-4959</guid>
      <description>&lt;p&gt;Microservices architecture has become the norm for building and operating modern applications. However, managing communication between these small, independent services can be daunting. This is where a service mesh like Layer5 comes in - it provides a configurable infrastructure layer that sits between the application code and the underlying network infrastructure. In this beginner's guide, we'll explore how Layer5 can help navigate the microservices universe and make communication between services more flexible, reliable, and fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Service Mesh?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xWh2ZyW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24p3c0s953vk3o6y348f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xWh2ZyW6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24p3c0s953vk3o6y348f.png" alt="Image description" width="880" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A service mesh is an infrastructure layer that sits between the application code and the underlying network infrastructure. It helps manage communication between microservices in a distributed system and provides features such as traffic management, service discovery, load balancing, and security. These features are crucial for building and operating microservices-based applications and can make communication between services more flexible, reliable, and fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;A service mesh works by inserting a proxy, called a sidecar proxy, into each service instance. These sidecar proxies handle all the communication between service instances and the service mesh control plane. The control plane is responsible for configuring the behavior of the sidecar proxies and provides features such as traffic management, service discovery, load balancing, and security.&lt;/p&gt;
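&lt;p&gt;As a rough sketch, the sidecar pattern places the proxy container next to the application container inside the same Kubernetes pod (the names and images below are illustrative, not specific to Layer5):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  containers:
  - name: app             # your service code
    image: example/orders:1.0
  - name: sidecar-proxy   # injected by the mesh; intercepts traffic in and out
    image: example/mesh-proxy:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In practice, the mesh usually injects this proxy automatically, so you rarely declare it in your manifests by hand.&lt;/p&gt;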

&lt;h2&gt;
  
  
  🚦 "Traffic Management in a Microservices World with Layer5"
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FDs2F3FT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg2jtfhzjhlsp0qk6uay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FDs2F3FT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg2jtfhzjhlsp0qk6uay.png" alt="Image description" width="880" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the key features of a service mesh like Layer5 is traffic management. It allows for precise control over how traffic flows between different service instances. This can be used for things like canary releases, A/B testing, and blue-green deployments. With Layer5, it is possible to route traffic to different versions of a service, or to different instances based on various criteria such as user identity or geographic location. &lt;/p&gt;
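&lt;p&gt;The exact resource for such a routing rule depends on the underlying mesh. As an illustration, a weighted canary split in Istio (one of the meshes commonly used this way) looks roughly like this, where the &lt;code&gt;v1&lt;/code&gt;/&lt;code&gt;v2&lt;/code&gt; subsets would be defined in a companion DestinationRule:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
  - orders
  http:
  - route:
    - destination:
        host: orders
        subset: v1
      weight: 90      # 90% of traffic stays on the stable version
    - destination:
        host: orders
        subset: v2
      weight: 10      # 10% goes to the canary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;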

&lt;h2&gt;
  
  
  🔍 "Discovering the Power of Service Discovery with Layer5"
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vrdI-B9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/soigf50acq0xofj94boi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vrdI-B9s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/soigf50acq0xofj94boi.jpg" alt="Image description" width="880" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Service discovery is another important feature of a service mesh like Layer5. In a microservices-based application, new instances of a service can come and go dynamically. Layer5 can automatically discover these new instances and update the routing information accordingly. This allows for automatic scaling of services and failover in the case of an instance failure. &lt;/p&gt;
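&lt;p&gt;Under the hood, discovery in Kubernetes-based meshes typically builds on label selectors: a Service tracks whichever pods match its labels, so new instances are picked up automatically. A minimal sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders       # any pod carrying this label becomes an endpoint
  ports:
  - port: 80
    targetPort: 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;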

&lt;h2&gt;
  
  
  🔒 "Securing Microservices Communication with Layer5"
&lt;/h2&gt;

&lt;p&gt;Security is also a crucial aspect of a service mesh like Layer5. It provides features such as mutual TLS and role-based access control to secure communication between service instances and control access to services based on the identity of the caller.&lt;/p&gt;
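&lt;p&gt;Again, the concrete policy resource depends on the mesh; in Istio, for instance, enforcing mutual TLS for a whole namespace is a small policy like the following (shown purely as an illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT      # only mutual-TLS traffic is accepted in this namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;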

&lt;h2&gt;
  
  
  💻 "Getting Started with Layer5"
&lt;/h2&gt;

&lt;p&gt;To use Layer5, you'll need to deploy the Layer5 control plane in your cluster and configure the sidecar proxies to communicate with it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Deploy the Layer5 control plane: You can deploy the Layer5 control plane using Kubernetes manifests or Helm charts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the sidecar proxies: Once the control plane is deployed, you'll need to configure the sidecar proxies in your services to communicate with it. This can typically be done by adding a few lines of configuration to your service's deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure traffic management: You can use Layer5's traffic management features to control how traffic flows between different service instances. This can be done using the Layer5 control plane's APIs or the Layer5 CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure service discovery: Layer5 can automatically discover new instances of a service and update the routing information accordingly. You can configure this feature using the Layer5 control plane's APIs or the Layer5 CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure security: Layer5 provides features such as mutual TLS and role-based access control to secure communication between service instances and control access to services based on the identity of the caller. You can configure these features using the Layer5 control plane's APIs or the Layer5 CLI.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In conclusion, a service mesh like Layer5 is a powerful tool for managing communication between microservices in a distributed system. With its traffic management, service discovery, load balancing, and security features, it makes communication between services more flexible, reliable, and fast. By following this tutorial, you should now have a brief overview of how to use Layer5 to manage communication between your microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outro:
&lt;/h2&gt;

&lt;p&gt;There’s no better way to test-drive Layer5 than by diving in and playing with it. The purpose of this blog is to create awareness about a service mesh like Layer5. To learn more, it is recommended to go through the official documentation of &lt;a href="https://layer5.io/"&gt;Layer5&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Have fun!&lt;br&gt;
Feel free to connect with me on&lt;br&gt;
 &lt;a href="https://www.linkedin.com/in/yash-hegde-927721201/"&gt;LinkedIn&lt;/a&gt;&lt;br&gt;
 &lt;a href="https://twitter.com/YashHegde7"&gt;Twitter&lt;/a&gt;&lt;br&gt;
 &lt;a href="https://github.com/Yh010"&gt;Github&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Continuous deployment: The ArgoCD way</title>
      <dc:creator>Yash Hegde</dc:creator>
      <pubDate>Tue, 18 Oct 2022 07:28:58 +0000</pubDate>
      <link>https://dev.to/yh010/continuous-deployment-the-argocd-way-2854</link>
      <guid>https://dev.to/yh010/continuous-deployment-the-argocd-way-2854</guid>
      <description>&lt;p&gt;If you have used GitHub for creating a pull-request, you might have come across this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ot9DKrrN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z45y1xmidvn85o4rx7n1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ot9DKrrN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z45y1xmidvn85o4rx7n1.png" alt="Image description" width="880" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever wondered what these “checks” actually mean? And how the project actually implements these tests? Whether you know or don’t know the answer to this question, delve into this article and get a glimpse of ArgoCD’s world :)&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what is ArgoCD?
&lt;/h2&gt;

&lt;p&gt;As the name suggests, ArgoCD is a continuous delivery tool. To understand how it works, let’s first look at how continuous delivery is implemented in most projects using common tools like Jenkins and GitLab. (Since Jenkins and GitLab are also CD tools, a natural question is, “Is ArgoCD a replacement for these established CD tools?” Let’s try to answer that question as well.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--37Vvsktd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbprzhda3bv30wihvxjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--37Vvsktd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rbprzhda3bv30wihvxjd.png" alt="Image description" width="880" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An example of a continuous deployment workflow without ArgoCD:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s say we have a microservices application running in a Kubernetes cluster. Now, when you make some changes to your application, like adding a new feature or fixing a bug, a CI tool like Jenkins will run the tests, and once the tests pass, Docker will build a new image and push it to the Docker repository.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But how does this new image get deployed to the Kubernetes cluster?&lt;/em&gt;&lt;br&gt;
This is done by updating the application’s deployment YAML file with the new image tag (this will be done using Jenkins), and then applying it to Kubernetes using tools like kubectl. &lt;/p&gt;

&lt;p&gt;Following is an example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--InEdodl9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uezju96x3b38w97o6cxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--InEdodl9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uezju96x3b38w97o6cxw.png" alt="Image description" width="880" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, there are some challenges to this approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  You need to set up tools like kubectl and Helm on the build automation tool (e.g. Jenkins) to access the Kubernetes cluster and execute changes, which in turn means configuring these tools on Jenkins.&lt;/li&gt;
&lt;li&gt;  You need to configure access to Kubernetes for these tools (when using cloud platforms like AWS, even more configuration is required). Apart from being a tiring configuration process, this is also a security challenge, as you have to share your cluster credentials with external services and tools.&lt;/li&gt;
&lt;li&gt;  Once Jenkins has changed the config file and applied the changes to the cluster, it bears no responsibility for checking whether your changes were actually deployed successfully. This can only be discovered through follow-up testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2018: ArgoCD arrives
&lt;/h2&gt;

&lt;p&gt;ArgoCD was created to improve the CD part of the application build-and-deploy process. It was purpose-built for Kubernetes and is based on GitOps principles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So how does ArgoCD make the process more efficient?&lt;/strong&gt;&lt;br&gt;
ArgoCD does this by using a pull workflow. It runs as part of the k8s cluster, and instead of having manifest changes pushed to the cluster, it pulls the changes and applies them itself.&lt;/p&gt;

&lt;p&gt;Thus, the workflow can be listed as follows:&lt;/p&gt;

&lt;p&gt;1) Deploy ArgoCD in the k8s cluster&lt;br&gt;
2) Configure ArgoCD to track the git repository&lt;br&gt;
3) ArgoCD monitors for any changes and applies them automatically&lt;br&gt;
So, when a developer commits changes to the application source code, a CI pipeline tool like Jenkins will first test the changes, build the image, push the image to the Docker repository, and finally update the k8s manifest file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bgR_NMgQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lnarw2pbsj5oy7ffqf9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bgR_NMgQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lnarw2pbsj5oy7ffqf9p.png" alt="Image description" width="433" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A POINT TO BE NOTED:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It is considered best practice to keep the application source code and the configuration code (i.e. the k8s manifest files) in separate git repositories. The benefit of doing this is that the application configuration is not just the deployment file but also the ConfigMap, Secret, Ingress, and everything else the application needs to run in the cluster, and these manifest files can change completely independently of the application’s source code. When you update, say, a Service YAML file for the application, which is pure configuration and not part of the code, you don’t want to run the whole CI pipeline when the app source code hasn’t even changed. You also don’t want complex logic in the pipeline to decide what actually changed.&lt;/p&gt;

&lt;p&gt;So now, Jenkins will update the manifest file in a separate git repo where the k8s manifest files live. And as soon as the configuration files in that repo change, ArgoCD, which we initially configured to monitor the repo, will pull the changes and apply them in the cluster automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4IR2bfle--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tpsdpwisn21kpcnzcng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4IR2bfle--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5tpsdpwisn21kpcnzcng.png" alt="Image description" width="880" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ArgoCD supports k8s YAML files, helm charts, Kustomize files, and all other template files that generate k8s manifests. The repo which is tracked by ArgoCD is sometimes also known as the Gitops repository.&lt;/p&gt;

&lt;p&gt;Thus, when the configuration files get changed by either Jenkins or by DevOps engineers, all of these changes will be tracked and applied in the cluster by ArgoCD. As a result, we have separate CI and CD, where the CI is completely owned by developers and configured on Jenkins for example, and the CD pipeline is owned by operations/ DevOps teams and configured using ArgoCD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of using ArgoCD (answering the question, “Why ArgoCD?”)
&lt;/h2&gt;

&lt;p&gt;a)  &lt;em&gt;Git as a single source of truth:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YEGw-YyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78dhq4tg7own2cmoxklf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YEGw-YyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78dhq4tg7own2cmoxklf.png" alt="Image description" width="485" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the whole k8s configuration is defined as code in the git repo, the config files don’t have to be manually applied from local laptops using helm or kubectl, as everyone will have the same interface for updating the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens if someone updates the cluster manually?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QkygcuV8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pwm4fobhxdi3eaff3ifw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QkygcuV8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pwm4fobhxdi3eaff3ifw.png" alt="Image description" width="880" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One point to be noted here is that ArgoCD tracks the changes in the config file repo, as well as in the whole cluster. It continuously tracks and compares the actual state of the cluster with the desired state. So, when someone manually does some changes, the actual state will be different from that defined in the configuration file, and thus ArgoCD will sync the changes, overwriting the manual change. Thus, this guarantees that the k8s manifests in git remain the single source of truth.&lt;/p&gt;

&lt;p&gt;But let’s say we do need a way to quickly manually update the cluster. In that case, we can configure ArgoCD to not sync manual cluster changes automatically, and send an alert in such a case.&lt;/p&gt;
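&lt;p&gt;This behaviour is controlled per Application via its &lt;code&gt;syncPolicy&lt;/code&gt;; for example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syncPolicy:
  automated:
    selfHeal: true   # revert manual changes made directly to the cluster
    prune: true      # delete resources that were removed from git
# leave out selfHeal (or the whole automated block) if manual changes
# should persist until the next manual sync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;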

&lt;p&gt;Also, the benefit of using git is that we can track all the changes made, as opposed to untraceable changes made directly to the cluster using kubectl or helm. This also enables better team collaboration.&lt;/p&gt;

&lt;p&gt;b)  Easy rollback:&lt;br&gt;
Since git tracks all the changes, we can easily revert to a previous commit if a new commit causes a failure in the application. This is especially useful when we have thousands of clusters pointing to the same git repository: we don’t have to update all the clusters manually; changing the configuration file does the work for us.&lt;/p&gt;

&lt;p&gt;c)  Cluster disaster recovery:&lt;br&gt;
Let’s say we have a cluster in region A, and for some reason it completely crashes. Using the configuration files, we can easily create a new cluster, recreating the exact state of the previous one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tkHsILx3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ge96rzuyvwahs1dpmdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tkHsILx3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ge96rzuyvwahs1dpmdi.png" alt="Image description" width="433" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;d)  k8s access control with git and ArgoCD:&lt;br&gt;
Since not all team members should have access to make changes to the config repo, especially in a production environment, we configure access rules in the git repositories.&lt;/p&gt;

&lt;p&gt;Thus, using permissions, all team members can propose changes to the cluster, but only a handful of senior engineers can approve and merge those requests. In this way, we can manage cluster access indirectly via git, without having to create ClusterRole and user resources in Kubernetes.&lt;/p&gt;

&lt;p&gt;Also, we only need to give access to the git repo, not the whole cluster. Nor do we need to give cluster access to non-human users like Jenkins, since ArgoCD runs inside the cluster and applies the changes itself. Thus, no cluster credentials live outside of k8s, resulting in better security for our cluster.&lt;/p&gt;

&lt;p&gt;e)  ArgoCD as a k8s extension:&lt;/p&gt;

&lt;p&gt;ArgoCD uses existing k8s functionality: &lt;br&gt;
e.g. 1) using etcd to store data&lt;br&gt;
e.g. 2) using k8s controllers to monitor and compare the actual and desired states.&lt;/p&gt;

&lt;p&gt;The major benefit is that we get visibility in the cluster as we can get real-time updates of the application state.&lt;/p&gt;

&lt;p&gt;Now that we have learned about ArgoCD, let’s try to learn how we configure it in our project:&lt;/p&gt;

&lt;p&gt;To do this, we have to follow these steps:&lt;br&gt;
1) Deploy ArgoCD into the k8s cluster&lt;br&gt;
2) Configure ArgoCD with a k8s-native YAML file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--04f1nIBW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/528ik2wbnkv2crc2gx9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--04f1nIBW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/528ik2wbnkv2crc2gx9l.png" alt="Image description" width="880" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main component of ArgoCD is the “Application”, a CRD (custom resource definition) that we can define in a k8s-native YAML file.&lt;/p&gt;

&lt;p&gt;While defining, we have to mention which git repository is to be synced with which k8s cluster.&lt;/p&gt;
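&lt;p&gt;A minimal Application definition looks like this (the repo URL, path, and names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git  # the GitOps repo to sync from
    targetRevision: HEAD
    path: k8s
  destination:
    server: https://kubernetes.default.svc   # the cluster to sync into
    namespace: my-app
  syncPolicy:
    automated: {}    # pull and apply changes automatically
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;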

&lt;p&gt;&lt;strong&gt;How do we work with Multiple clusters using ArgoCD?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since the same ArgoCD instance is able to sync a fleet of k8s clusters, we need to configure and manage ArgoCD only once.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finally, is ArgoCD a replacement for other CI/CD tools?
&lt;/h2&gt;

&lt;p&gt;Not really, as we still need a CI pipeline to test and build app code changes. ArgoCD, as the name suggests, handles the CD pipeline, along with its other functionalities. &lt;/p&gt;

&lt;p&gt;But there are also many alternatives to ArgoCD, such as Flux CD and Jenkins X, to name a few. Each has its own functionality, which can be discussed in another article :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Outro:
&lt;/h2&gt;

&lt;p&gt;There’s no better way to test-drive ArgoCD than by diving in and playing with it. The purpose of this blog is to create awareness about CD pipeline tools like ArgoCD. To learn more, it is recommended to go through ArgoCD’s &lt;a href="https://github.com/argoproj"&gt;GitHub&lt;/a&gt; and &lt;a href="https://argo-cd.readthedocs.io/en/stable/"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Have fun!&lt;/p&gt;

&lt;p&gt;Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/yash-hegde-927721201/"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://twitter.com/YashHegde7"&gt;Twitter&lt;/a&gt;,&lt;br&gt;
and &lt;a href="https://github.com/Yh010"&gt;GitHub&lt;/a&gt; &lt;/p&gt;

</description>
      <category>argocd</category>
      <category>devops</category>
      <category>cdtools</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>YAML: A short and crisp tutorial</title>
      <dc:creator>Yash Hegde</dc:creator>
      <pubDate>Sat, 20 Aug 2022 19:15:35 +0000</pubDate>
      <link>https://dev.to/yh010/yaml-a-short-and-crisp-tutorial-4ggf</link>
      <guid>https://dev.to/yh010/yaml-a-short-and-crisp-tutorial-4ggf</guid>
      <description>&lt;p&gt;Before going into the details of YAML, what do you think YAML could be? One of the guesses by some of my friends was “You Ate My Lunch”. Jokes apart, let’s get started with the article.&lt;/p&gt;

&lt;p&gt;Previously, YAML’s full form was “Yet Another Markup Language”. But now, it is “YAML Ain’t Markup Language”. By the way, what is a markup language? &lt;br&gt;
One common example of a markup language is HTML. But why is it a markup language? &lt;/p&gt;

&lt;p&gt;A markup language provides structure to a page. In HTML, for instance, you can give the page a definite structure by providing, say, a header; under that header you can provide lists, and under the lists, paragraphs, and so on.&lt;/p&gt;

&lt;p&gt;One question you would have in your mind is, “why ain’t YAML a markup language?”. First of all, YAML isn’t a programming language. It is a data format used to exchange data, similar to XML and JSON. So, it is used to store information related to some configuration. Or simply put, you can’t write commands in YAML; you can only store data (this storing of data in files is known as data serialization).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Ok, so what are Data serialization and Deserialization?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s take a hypothetical scenario:&lt;/p&gt;

&lt;p&gt;Let’s say, the police department of Mumbai has a list of all the most wanted criminals in Mumbai. One of the criminals is say, Abdul.  The news comes out that Abdul has escaped Mumbai. The cops suspect that he might have run to Delhi, and thus they want to provide all details related to Abdul to the Delhi Police.  Let’s take that our police use web apps and machine learning models to transfer data to each other, predict a crime’s outcomes, or guess where the criminal might have gone. &lt;/p&gt;

&lt;p&gt;So, the problem at hand is that data from the Mumbai police needs to be sent to a web app, and further to a machine learning model, to carry out the mission. So how is this data transfer done? Say you have a file containing the data: how will you convert it into a file readable by the web app and by the machine learning model? This process of file conversion is known as data serialization.&lt;/p&gt;

&lt;p&gt;So, the process of converting data objects in complex data structures into streams of data to provide it to some devices is known as data serialization. The opposite of this is known as data deserialization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y7Z7nh57--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45vtq12k90zh1u4y4dks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y7Z7nh57--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45vtq12k90zh1u4y4dks.png" alt="Image description" width="276" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The files that store these objects are known as data serialization files, and the languages used are known as data serialization languages. Examples of data serialization languages include YAML, JSON, and XML. Such a file can now be shared anywhere easily, with no conversion required at every step.&lt;/p&gt;
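&lt;p&gt;To make this concrete, here is a small record for the scenario above serialized in YAML, with the equivalent JSON shown in a comment for comparison (the field names are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the record as YAML:
criminal:
  name: Abdul
  lastSeen: Mumbai
  suspectedDestination: Delhi

# the same record as JSON:
# {"criminal": {"name": "Abdul", "lastSeen": "Mumbai", "suspectedDestination": "Delhi"}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;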

&lt;p&gt;An example: the Kubernetes way:&lt;/p&gt;

&lt;p&gt;Imagine you have made an app and you want to deploy it on ten servers. For that, you have to give Kubernetes data about your application so that it can create pods on the servers accordingly. This data is given to Kubernetes through what are known as Kubernetes configuration files, which are written in YAML.&lt;/p&gt;

&lt;p&gt;What are the benefits of YAML?&lt;/p&gt;

&lt;p&gt;Some of the benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple and easy to read&lt;/li&gt;
&lt;li&gt;Has a strict syntax in which indentation matters&lt;/li&gt;
&lt;li&gt;Easily convertible to JSON, XML, etc.&lt;/li&gt;
&lt;li&gt;Supported by most programming languages&lt;/li&gt;
&lt;li&gt;More powerful when representing complex data&lt;/li&gt;
&lt;li&gt;Various tools, like parsers, are available for YAML&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Background done, now let’s get started with some YAML coding:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some things to keep in mind:&lt;/p&gt;

&lt;p&gt;- Extension for YAML files: .yaml or .yml&lt;br&gt;
- Note: YAML is case-sensitive, so “small” and “Small” are two different things.&lt;br&gt;
- Strictly following spacing and block style is important.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To separate different documents (like lists, key-value pairs, etc.) within one file, put “---” between them.&lt;/li&gt;
&lt;li&gt;To end a document, put “...” at the end of it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned earlier, JSON, XML, and YAML are quite similar in function. And you might have to convert code from JSON/XML to YAML and vice versa, or any combination of the three. Here’s a cool tool which might come in handy for code conversion: &lt;a href="https://onlineyamltools.com/convert-yaml-to-json"&gt;Online YAML Tool&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A sample code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# storing key value pairs(these are called maps):
"apple" : "red fruit"
2: "my roll number"
---
# lists 
- apple
- mango
- banana
- Apple
---
cities :
- wekm
- wekdw
- wedo
... #for ending the doc
---
#for storing lists in a single line:
cities: [wefwf, efe, efqeq2]
---

#for key value pairs in a single line:

{apple : "red fruit" , roll number : 58} 
---

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Datatypes in YAML- The basics:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;-everything after “:” is the value of the key (called a variable here)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Yash #here Yash is a variable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Variables can be of many types like string, integer, etc :&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;String:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#string variables:
name: Yash #here Yash is a variable
address: "avgfwge"
gender: 'male'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note1: strings can be represented using no quotes/single quotes (‘ ‘ )/double quotes (“ ”)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note2: say you want to write multiple paragraphs/multiple lines of string. For that, you need to use “|” to preserve the line breaks of your paragraph.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;para: |
 the output
 will be in
 multiple lines.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note3: if your input spans multiple lines and you want to fold it into a single line in the value, use “&amp;gt;” (the folded block style)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input: &amp;gt;
 this will
 be in
 a single line.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Integer:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Integer data type:
number: 5461

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Float:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Float data type:
percentage: 96.4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Boolean:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Boolean:
booleanValue: No # or you can also write n/N/false/False/FALSE
#or
BooleanValue: Yes # or you can also write y/Y/true/True/TRUE

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Specifying the datatypes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Format:&lt;br&gt;
key: !!datatype value &lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#specifying the data type
number: !!int 1
hexa: !!int 0x45
floatingnumber: !!float 5464.168
infinity: !!float .inf
not a number: .nan
string: !!str this is a string
something: !!null Null #or null NULL ~
~: when your key is null
date: !!timestamp 2022-08-20
date with time: 2022-08-20T17:13:43.10 +5:30 # 17:13:43.10 is the UTC time by default, hence add 5:30 for IST
if no time zone: 2022-08-20T17:13:43.10
exponential number: 54851.648E161


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Now that we have learned the basic datatypes, let’s level up and learn some advanced ones:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#sequence data type:
cities: [wefwf,efe,efqeq2] #can also be written as:
---
Cities: !!seq
 - efwef
 - wefwe
---
#sparse sequence: when some items of the sequence are empty
sparse sequence:
 - wfwf
 - wefwef
 - wefweefw
 -
 - wefwewefwef
---
#nested sequence:
#(note: the dash must be followed by a space, otherwise YAML reads "-one" as plain text)
-
 - one
 - two
 - three
-
 - something
 - somethingmore
 - somethingonemore
---
#for maps, use !!map

#nested maps:
name: Yash
role:
 age: 58
 job: student

#this can also be represented as:
Name: Yash
Role: { age: 58, job: student}

#note: in older versions of YAML, key-value pairs could have multiple values, but this is not allowed anymore:
#pairs example: !!pairs
# -nickname: Rahul
# -nickname: Raj
#the opposite of this is set datatype

#Dictionary datatype:
people: !!omap 
 - Sahil:
     name: Sahil P
     age: 54
     height: 468
 - Supra: 
     name: Supra P
     age: 57
     height: 548
#Anchor tags: used for avoiding repetition. example:
roles: &amp;amp;role
 role1: student
 role2: software engineer

 #say 3 people have the same roles as above. so instead of writing the same code 3 times, we can use "&amp;lt;&amp;lt;: *role":

person1:
 name: yash 
 &amp;lt;&amp;lt;: *role

person2:
 name: supra
 &amp;lt;&amp;lt;: *role

person3:
 name: sahil 
 &amp;lt;&amp;lt;: *role

#say sahil has changed his role from software engineer to electronics engineer. we need to update that in code. for that we can do :

#person3:
# name: sahil
# &amp;lt;&amp;lt;: *role
# role2: electronics engineer
#writing role2 below &amp;lt;&amp;lt;: *role will override the data for role2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
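
&lt;p&gt;To make the merge behaviour concrete: once the parser resolves “&amp;lt;&amp;lt;: *role”, person3 above (with the role2 override) is equivalent to this plain map, since explicit keys win over merged-in anchor values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;person3:
 name: sahil
 role1: student
 role2: electronics engineer #the override wins over the anchored value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;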



&lt;p&gt;&lt;strong&gt;Some handy tools to install:&lt;/strong&gt;&lt;br&gt;
(Suggestion: watch tutorials on YouTube on how to use these tools effectively. For setup and installation, follow the links below.) &lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.datree.io/"&gt;Datree&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubeshop.github.io/monokle/"&gt;Monokle&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://k8slens.dev/?utm_source=CloudNativeHackathon&amp;amp;utm_medium=Youtube&amp;amp;utm_campaign=DevOpsBoot"&gt;Lens&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do check out &lt;a href="https://www.tutorialspoint.com/yaml/index.htm"&gt;Tutorialspoint&lt;/a&gt; for more info on YAML.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outro:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There’s no better way to test-drive YAML than by diving in and playing with it. The purpose of this blog is to create awareness about YAML. To learn further, it is recommended to go through &lt;a href="https://yaml.org/"&gt;YAML's website&lt;/a&gt; and &lt;a href="https://github.com/yaml/www.yaml.org/"&gt;GitHub&lt;/a&gt;.&lt;br&gt;
Have fun!&lt;br&gt;
Feel free to connect with me on &lt;br&gt;
&lt;a href="https://www.linkedin.com/in/yash-hegde-927721201/"&gt;LinkedIn&lt;/a&gt;&lt;br&gt;
&lt;a href="https://twitter.com/YashHegde7"&gt;Twitter&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/Yh010"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I would also recommend going through this &lt;a href="https://spacelift.io/blog/yaml"&gt;article&lt;/a&gt; to gain more perspective on the topic!&lt;/p&gt;

&lt;p&gt;All code can be found at: &lt;a href="https://github.com/Yh010/YAML"&gt;YAML Code&lt;/a&gt;&lt;br&gt;
 Enjoy!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Containers: Docker</title>
      <dc:creator>Yash Hegde</dc:creator>
      <pubDate>Mon, 01 Aug 2022 06:50:00 +0000</pubDate>
      <link>https://dev.to/yh010/understanding-containers-docker-54fj</link>
      <guid>https://dev.to/yh010/understanding-containers-docker-54fj</guid>
      <description>&lt;p&gt;Many companies are running their applications and stuff. But where are these apps run? The answer is servers. Some companies have their servers while some borrow from big providers (like cloud providers).&lt;/p&gt;

&lt;p&gt;Initially, only one application could be run on one server. As one might guess, there were problems with this approach. Firstly, the load on that single server grew as the number of people using the app increased. Secondly, since only one app could run per server, companies had to add a server for every additional app, significantly increasing costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Virtual Machines
&lt;/h2&gt;

&lt;p&gt;Virtual machines solved this problem. The concept of the virtual machine was invented by IBM as a method of time-sharing extremely expensive mainframe hardware. In simple terms, virtual machines let us run multiple apps on the same server. The catch is that each virtual machine requires its own operating system, and operating systems require RAM, CPU, and storage. (Compare this with wanting a second operating system, e.g., Ubuntu, on your own computer without virtualization: you would have to dual boot it, that is, install the other OS on another part of your hard disk.) Other flaws include reduced speed, the need for dedicated storage, dedicated CPU and RAM allocation, and separate dependency management for each virtual machine. Thus, virtual machines are better than one-app-one-server, but still not perfect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Could containers be a better solution?
&lt;/h2&gt;

&lt;p&gt;Won’t it be great if we could run multiple instances of applications on the same operating system? That’s where containers come into the picture. Docker (by Docker, Inc.) made containers popular with Linux containers. Before Docker, big companies like Google were already using container-like technology, and there were several other container initiatives as well; they just weren’t popular. (In 2015, the Open Container Initiative (OCI) was founded for the express purpose of creating open industry standards around container formats and runtimes.) &lt;/p&gt;

&lt;h2&gt;
  
  
  So, what are containers?
&lt;/h2&gt;

&lt;p&gt;Containers are just like virtual machines but don’t require multiple operating systems.&lt;/p&gt;

&lt;p&gt;The analogy for understanding containers is:&lt;/p&gt;

&lt;p&gt;Say you have built a website. Now you want feedback, so you share the website files with your friend. You may have encountered the situation where your friend is unable to run the website due to version incompatibility or similar reasons. To avoid this situation, you do this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfsqk0f7ablmx6zyhrm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfsqk0f7ablmx6zyhrm2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thus, you will have to share all the files (for example, including dependencies) with your friend for him to be able to run the website on his system. &lt;/p&gt;

&lt;p&gt;This is what is called a container. Now your friend can run your website on his system without any errors!&lt;/p&gt;

&lt;h2&gt;
  
  
  Virtual Machine vs Containers: Extended
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F837t6fjmibj85gdt3sbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F837t6fjmibj85gdt3sbp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above image for virtual machines vs containers, you can see a term called a hypervisor. The hypervisor is used to create multiple virtual machines on a host operating system, and it manages those virtual machines. Each virtual machine has its own operating system, and each operating system is allocated its own share of hardware resources (CPU, RAM, and so on). With containers, on the other hand, you have only one operating system, along with a container engine (the container engine consists of the parts covered in the Docker architecture section). Thus, you run multiple apps on the same operating system using the container engine. The container lets each app run in an isolated environment (that is, an app running in a container does not know what is happening outside that container), which also improves security. Docker (by Docker, Inc.) is a container tool that helps us create, manage &amp;amp; scale containers. &lt;/p&gt;

&lt;h2&gt;
  
  
  Next, how do you get docker on your system?
&lt;/h2&gt;

&lt;p&gt;Just follow the steps below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;go to &lt;a href="https://docs.docker.com/" rel="noopener noreferrer"&gt;https://docs.docker.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;head to the download and install section&lt;/li&gt;
&lt;li&gt;select Docker Desktop for Mac/Windows/Linux according to your operating system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Easy-peasy!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installation is done, but what setup is required to run docker on your system?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A container runs on your host operating system’s kernel. Thus, when a Windows app is containerized, it will not run on a Linux-based kernel and vice versa: a Windows-based container requires a Windows kernel, and a Linux-based container requires a Linux kernel. Docker Desktop can be run in two modes: 1) Windows containers and 2) Linux containers.&lt;/p&gt;

&lt;p&gt;For a Windows system, you need to install Docker Desktop and the Windows Subsystem for Linux (WSL).&lt;/p&gt;

&lt;p&gt;For a Mac system, just installing Docker Desktop will work.&lt;/p&gt;
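
&lt;p&gt;A quick way to check that the installation worked is to ask Docker for its version and run the tiny test image that Docker ships for exactly this purpose (output abridged):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker version   # prints client and server (daemon) versions
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;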

&lt;h2&gt;
  
  
  Wait, wait, wait! First, understand what Docker is in technical terms: Docker Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi1nd90qrx9nqttzmmhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzi1nd90qrx9nqttzmmhx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker consists of three parts:&lt;br&gt;
a) Docker runtime b) Docker engine c) Orchestration&lt;br&gt;
Its architecture is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvcv65cg8wk2xxnuv86j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvcv65cg8wk2xxnuv86j.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s have a look at these terms individually:&lt;/p&gt;

&lt;p&gt;a)  Docker runtime:&lt;br&gt;
It helps us start and stop containers. It comes in two layers:&lt;br&gt;
   i) the low-level runtime, known as runc:&lt;br&gt;
it works with the operating system and actually starts and stops containers&lt;br&gt;
   ii) the high-level runtime, known as containerd:&lt;br&gt;
-it is a CNCF project&lt;br&gt;
-it manages runc and the containers&lt;br&gt;
-it connects to the internet and pulls images for containers:&lt;br&gt;
  pulling an image means bringing its data from the internet (a registry) to your machine. Thus, containerd handles the interaction between containers and the outside world.&lt;/p&gt;

&lt;p&gt;b)  Docker engine:&lt;br&gt;
Used to interact with Docker. It is built around the Docker daemon:&lt;br&gt;
the daemon works with the Docker runtime to execute your commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation of diagram:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
The Docker CLI is used to write Docker commands.&lt;/p&gt;

&lt;p&gt;On the CLI:&lt;br&gt;
docker run ubuntu&lt;/p&gt;

&lt;p&gt;The CLI passes this command to the Docker daemon via the REST API. The daemon interacts with the Docker runtime and instructs it to run Ubuntu in a container.&lt;/p&gt;

&lt;p&gt;c)  Orchestration:&lt;br&gt;
Example:&lt;br&gt;
Let’s say there are 100 containers for an application, all running version 1, and a new version of the application is released. To update the containers, we can either update them manually, one by one, or update them all at once. Updating them all at once is one of the functions of orchestration engines. Examples include Docker Swarm and Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  How will you share your files with your friend using containers?
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is what you need.&lt;/p&gt;

&lt;p&gt;A Dockerfile is a set of instructions. It specifies the required operating system files and the dependencies needed to run the application.&lt;/p&gt;

&lt;p&gt;When we build from the Dockerfile, we get a Docker image. When the image is run, we get a container. So when we have an app that we want to containerize, we first write a Dockerfile; this gets built into an image that can be shared with other systems. Images are immutable: once built, the files making up an image do not change. Images can be stored locally or in remote locations like &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;https://hub.docker.com/&lt;/a&gt;, and a single image can be used to create multiple containers. Images are built in layers. Each layer is an immutable collection of files and directories; only the final (container) layer can be written to. Each layer has an ID, calculated via a SHA-256 hash of the layer contents. Thus, if the layer contents change, the SHA-256 hash changes as well. Note: the image ID listed by Docker commands (such as ‘docker images’) is the first 12 characters of the hash. These hash values can also be referred to by human-friendly ‘tag’ names.&lt;/p&gt;

&lt;p&gt;Some hands-on examples of Docker commands:&lt;/p&gt;

&lt;p&gt;a)Listing hash values of docker images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker images -q --no-trunc
sha256:3556258649b2ef23a41812be17377d32f568ed9f45150a26466d2ea26d926c32
sha256:9f38484d220fa527b1fb19747638497179500a1bed8bf0498eb788229229e6e1
sha256:fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the first 12 characters of the hash values given above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker images
REPOSITORY   TAG     IMAGE ID        
ubuntu       18.04   3556258649b2   
centos       latest  9f38484d220f    
hello-world  latest  fce289e99eb9

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now notice the IMAGE ID above. The first 12 characters of the hash values are equal to the IMAGE ID.&lt;br&gt;
A point to note:&lt;br&gt;
Two images may share the same layers. Thus, when we pull a new image, the CLI will show that layers already present locally are being reused rather than downloaded again. This makes the process fast by not downloading the same files twice. Common images are identified by the image ID.&lt;/p&gt;

&lt;p&gt;b)Pulling an image from the Docker registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker images
REPOSITORY      TAG      IMAGE ID     CREATED      SIZE
$ docker pull ubuntu:18.04
18.04: Pulling from library/ubuntu
7413c47ba209: Pull complete
0fe7e7cbb2e8: Pull complete
1d425c982345: Pull complete
344da5c95cec: Pull complete
Digest:sha256:c303f19cfe9ee92badbbbd7567bc1ca47789f79303ddcef56f77687d4744cd7a
Status: Downloaded newer image for ubuntu:18.04
$ docker images
REPOSITORY     TAG        IMAGE ID          CREATED         SIZE
ubuntu         18.04      3556258649b2      9 days ago      64.2MB

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c)Running image to create a container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -it ubuntu:18.04
root@4183618bcf17:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@4183618bcf17:/# exit
exit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d)creating your own Docker image:&lt;br&gt;
Create a file named ‘Dockerfile’.&lt;br&gt;
By default, when building, Docker searches for a file named ‘Dockerfile’: $ docker build -t myimage:1.0 .&lt;br&gt;
During the build of the image, the commands in the RUN instructions of the Dockerfile get executed. To start a container: $ docker run ImageID&lt;br&gt;
The command in the CMD instruction of the Dockerfile gets executed when you create a container out of the image.&lt;/p&gt;

&lt;p&gt;Dockerfile example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu
MAINTAINER Yash &amp;lt;yash@gmail.com&amp;gt;
RUN apt-get update
CMD ["echo", "Hello World"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
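
&lt;p&gt;Building this Dockerfile and running the resulting image would look roughly like the following (build output omitted; the image tag is the one from section d above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker build -t myimage:1.0 .   # executes the RUN instructions while building
$ docker run myimage:1.0          # executes the CMD instruction
Hello World
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;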



&lt;p&gt;e) some basic image-related commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker pull ubuntu:18.04 (18.04 is tag/version)
$ docker images (Lists Docker Images)
$ docker run image (creates a container out of an image)
$ docker rmi image (deletes a Docker Image if no container is using it)
$ docker rmi $(docker images -q) (deletes all Docker images)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;f) listing containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED         STATUS
4183618bcf17   ubuntu:18.04   “/bin/bash”   4 minutes ago   Exited

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  A simplified explanation of Docker’s working:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9gziafuv7dtcwsqairw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9gziafuv7dtcwsqairw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s say you type&lt;br&gt;
 $ docker run hello-world&lt;br&gt;
on your CLI (command line interface).&lt;br&gt;
This means that you want to run the hello-world image in a container.&lt;/p&gt;

&lt;p&gt;What happens inside is that this command is passed to the Docker daemon via the REST API. The Docker daemon then checks whether your system already has a hello-world image. If it does not, the daemon downloads the hello-world image from the Docker registry. The Docker registry is where Docker images are stored; Docker Hub is a public registry that anyone can use. When you pull an image, Docker by default looks for it in the public registry and saves the image on your local system on the DOCKER_HOST. You can also store images on your local machine or push them to the public registry. &lt;/p&gt;

&lt;p&gt;If your system has those images, it will directly run them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker hub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fun fact: you can now run MongoDB, MySQL, etc. without even installing them on your computer.&lt;/p&gt;

&lt;p&gt;Just run the container! &lt;/p&gt;
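
&lt;p&gt;For example, a throwaway MySQL server can be started with a single command (the container name and password below are placeholders; the official mysql image requires the MYSQL_ROOT_PASSWORD environment variable to be set):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql
# -d runs it in the background; -p exposes MySQL's default port 3306 on the host
$ docker stop mydb   # stop it when you are done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;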

&lt;h2&gt;
  
  
  Outro:
&lt;/h2&gt;

&lt;p&gt;There’s no better way to test-drive Docker than by diving in and playing with it. The purpose of this blog is to create awareness about containers. To learn further, it is recommended to go through Docker’s &lt;a href="https://github.com/docker" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and &lt;a href="https://www.docker.com/blog/" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Have fun!&lt;br&gt;
Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/yash-hegde-927721201/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>opensource</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
