<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: vmtechhub</title>
    <description>The latest articles on DEV Community by vmtechhub (@vmtechhub).</description>
    <link>https://dev.to/vmtechhub</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F727838%2F94b8b5b4-e371-4ebd-b100-db862968c3b3.jpeg</url>
      <title>DEV Community: vmtechhub</title>
      <link>https://dev.to/vmtechhub</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vmtechhub"/>
    <language>en</language>
    <item>
      <title>The theory of Kubernetes in 10 mins</title>
      <dc:creator>vmtechhub</dc:creator>
      <pubDate>Fri, 22 Oct 2021 14:50:36 +0000</pubDate>
      <link>https://dev.to/vmtechhub/the-theory-of-kubernetes-in-10-mins-2e10</link>
      <guid>https://dev.to/vmtechhub/the-theory-of-kubernetes-in-10-mins-2e10</guid>
<description>&lt;p&gt;Containerization of an application is not the end of the story.&lt;br&gt;
For any serious application, it’s the beginning of a new world of orchestration.&lt;br&gt;
We need to think about many things, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It shouldn’t be down. If it goes down or crashes, either restart it or start a new one as soon as possible.&lt;/li&gt;
&lt;li&gt;Is it performing as expected? How do we monitor resource consumption?&lt;/li&gt;
&lt;li&gt;How do we scale up/down with minimal effort?&lt;/li&gt;
&lt;li&gt;How do we move it to a different machine if the host encounters a problem?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s not as easy as it seems, even for a simple use case.&lt;br&gt;
So, what can we do about this? That’s where an orchestrator comes into the picture.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Kubernetes?
&lt;/h1&gt;

&lt;p&gt;The official &lt;a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; defines Kubernetes as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Running containers on a single host, or running a single container, is not sufficient for a large-scale application.&lt;br&gt;
We need a scalable solution like K8s, which can manage containers at scale and in a fault-tolerant manner.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubernetes can run virtually anywhere — laptop, on-prem, cloud, bare-metal, VMs, etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The biggest advantage of K8s is that if an application can run in a container, it will (most probably) run on K8s, irrespective of the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;K8s provides an abstraction over the underlying infrastructure, which makes this possible.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p11n1sgdfedvilmgbqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p11n1sgdfedvilmgbqd.png" alt="Kubernetes on top of various runtimes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;Here’s a brief list from the official documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service discovery and load balancing&lt;/li&gt;
&lt;li&gt;Rolling updates and rollbacks&lt;/li&gt;
&lt;li&gt;Self-healing — kills and restarts unresponsive containers&lt;/li&gt;
&lt;li&gt;Scaling&lt;/li&gt;
&lt;li&gt;Automatically mounting a wide variety of storage systems to store data&lt;/li&gt;
&lt;li&gt;Secret and config management — to manage sensitive data and config separately from the containers&lt;/li&gt;
&lt;li&gt;RBAC (Role-Based Access Control)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Kubernetes Cluster
&lt;/h1&gt;

&lt;p&gt;A K8s cluster has two main parts — master nodes and worker nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Control Plane with Master nodes
&lt;/h3&gt;

&lt;p&gt;The control plane, with the help of master nodes, manages the state of a K8s cluster.&lt;br&gt;
It is not recommended to run user applications on master nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Worker nodes
&lt;/h3&gt;

&lt;p&gt;Worker nodes run user/client applications.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhl4cenoq3mo17ynndz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhl4cenoq3mo17ynndz6.png" alt="Kubernetes components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Master Nodes
&lt;/h1&gt;

&lt;p&gt;In a K8s cluster, the control plane, with its master nodes, manages the worker nodes and the overall cluster.&lt;br&gt;
Let’s see what’s inside the control plane.&lt;/p&gt;

&lt;h3&gt;
  
  
  Components
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1) API Server(kube-apiserver)
&lt;/h4&gt;

&lt;p&gt;The API server is the face of a K8s cluster. It exposes a set of APIs that are used by all the components; all the components talk to each other via the API server.&lt;br&gt;
The main implementation of a Kubernetes API server is kube-apiserver, which is designed to scale horizontally.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) Scheduler(kube-scheduler)
&lt;/h4&gt;

&lt;p&gt;It watches for new tasks (newly created Pods) submitted to the K8s cluster, and selects a worker node that can run those Pods.&lt;br&gt;
To select a worker node, it considers the health of the worker nodes, their load, affinity rules, and any other software or hardware requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  3) Cluster store(etcd)
&lt;/h4&gt;

&lt;p&gt;K8s uses etcd to store the config and state of the cluster. etcd is a strongly consistent, reliable, distributed key-value store. Please note that etcd is not used to store the data of containers or user applications; it stores only cluster state.&lt;/p&gt;

&lt;h4&gt;
  
  
  4) Controller manager(kube-controller-manager)
&lt;/h4&gt;

&lt;p&gt;It manages and runs controllers, and responds to various events.&lt;br&gt;
A controller’s main job is to monitor the shared state of the cluster and, whenever the current state differs from the desired state, make changes to move the cluster toward the desired state — more on this later.&lt;/p&gt;

&lt;h4&gt;
  
  
  5) Cloud controller(cloud-controller-manager)
&lt;/h4&gt;

&lt;p&gt;This is specific to the cloud on which a K8s cluster is running. If you’re not running K8s on a cloud, there will be no cloud controller manager.&lt;br&gt;
From Kubernetes docs — the cloud controller manager lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Concept of Desired state and Current State
&lt;/h3&gt;

&lt;p&gt;When we onboard a workload/app on K8s, we tell K8s our expectations, e.g., we need 3 containers always up and running.&lt;/p&gt;

&lt;p&gt;So, this becomes the desired state of our cluster. Generally, it is part of payloads like your Pod deployment config, in which you may define the number of replicas.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;K8s continuously monitors the current state of the system, and if there’s a difference between the desired and current state, it tries to achieve the desired state — it scales up/down automatically, restarts/terminates containers automatically, etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, let’s say, we started with 3 replicas of our container.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Desired state = 3 replicas&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After time T, one replica crashed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Current state = 2 replicas&lt;/li&gt;
&lt;li&gt;The K8s controller manager noticed this event and found that the current state no longer matches the desired state.&lt;/li&gt;
&lt;li&gt;K8s will take corrective action and launch a new replica to match the desired state of 3 replicas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the good thing is — it will do it in an automated way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Worker Nodes
&lt;/h1&gt;

&lt;p&gt;A worker node is responsible for running a user application.&lt;br&gt;
On a very high level, a worker node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gets a new task when the scheduler (kube-scheduler) selects this node to “do something” via the API server (kube-apiserver)&lt;/li&gt;
&lt;li&gt;Executes the given task&lt;/li&gt;
&lt;li&gt;Reports back to the master via the API server once the task is finished&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A worker node is a combination of:&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubelet
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The kubelet runs on every worker node. It is the main agent that does many critical things, like registering the node with the cluster and reporting back to the scheduler whether or not the node can run a task.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Kube-proxy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kube-proxy is responsible for local cluster networking&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Container runtime
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The container runtime is responsible for creating and running containers. K8s can use any CRI-compliant runtime, e.g. Docker, containerd, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The diagram summarizes the flow that we’ve seen up till now.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmeu4xi4ziu6vs2fa7aqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmeu4xi4ziu6vs2fa7aqc.png" alt="K8s application flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  K8s Objects
&lt;/h1&gt;

&lt;p&gt;Let’s now look at some of the most critical K8s objects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pods
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Pod is a wrapper around a container. It is the smallest deployable unit in K8s.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, when we provide a Docker image and ask K8s to run and manage a container, it creates a Pod for that application. The containerized application runs inside the Pod.&lt;/p&gt;

&lt;p&gt;It’s recommended to have one container per Pod, but a Pod can have multiple related containers.&lt;/p&gt;

&lt;p&gt;A Pod has a template, generally defined in a YAML file, which tells K8s what kind of container the Pod should host.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When a Pod goes down, K8s creates a new Pod.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Each new Pod has a different IP.&lt;/p&gt;

&lt;p&gt;If a Pod has more than one container — those containers will share the same Pod IP and other network resources.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy288h0ooq28nfsrdkkd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy288h0ooq28nfsrdkkd4.png" alt="Pod sharing the same Pod IP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The container inside a Pod can be in one of three states:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Waiting&lt;/li&gt;
&lt;li&gt;Running&lt;/li&gt;
&lt;li&gt;Terminated&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Pod also has a restart policy that applies to its containers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Always(default)&lt;/li&gt;
&lt;li&gt;OnFailure&lt;/li&gt;
&lt;li&gt;Never&lt;/li&gt;
&lt;/ol&gt;
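&lt;p&gt;To make this concrete, here’s a minimal Pod manifest sketch (the names and image below are hypothetical placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
  labels:
    app: demo
spec:
  restartPolicy: Always     # default; OnFailure and Never are the other options
  containers:
  - name: app
    image: nginx:1.21       # the container image this Pod should host
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;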

&lt;h3&gt;
  
  
  ReplicaSet
&lt;/h3&gt;

&lt;p&gt;A ReplicaSet is another K8s object that acts as a wrapper around Pods. It is usually managed by a Deployment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The ReplicaSet knows from its config which Pod to run and how many replicas are needed. It then uses the Pod template to create that many replicas of the Pod.&lt;br&gt;
To scale the application, the ReplicaSet creates new Pods.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgi202m5ugnjr9lwbbkhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgi202m5ugnjr9lwbbkhi.png" alt="ReplicaSet"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployments
&lt;/h3&gt;

&lt;p&gt;The Kubernetes documentation defines Deployments as below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Deployment supports self-healing, scaling, Rolling updates, Rollbacks, etc.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw367l6tpdp5r29ow7qdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw367l6tpdp5r29ow7qdi.png" alt="A Deployment"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  StatefulSets
&lt;/h3&gt;

&lt;p&gt;When a Pod goes down, K8s creates a new Pod with a new identity but from the same Pod template, and the new Pod will have a new IP. So, basically, the old Pod is lost.&lt;/p&gt;

&lt;p&gt;StatefulSets can be used to manage the Pods when we need to retain the Pod identity.&lt;/p&gt;

&lt;p&gt;StatefulSets are useful for stateful applications. They’re similar to Deployments, but when a StatefulSet relaunches a container, it retains the Pod’s identity — also called a sticky identity.&lt;/p&gt;

&lt;h3&gt;
  
  
  DaemonSet
&lt;/h3&gt;

&lt;p&gt;This is straight from the official documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Some typical uses of a DaemonSet are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;running a cluster storage daemon on every node&lt;/li&gt;
&lt;li&gt;running a logs collection daemon on every node&lt;/li&gt;
&lt;li&gt;running a node monitoring daemon on every node&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, the flow looks something like below:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjutt4e1f9dk3wlsgzv3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjutt4e1f9dk3wlsgzv3b.png" alt="K8s application flow — expanded"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Deployments — Scaling, Rolling updates, and Rollbacks
&lt;/h1&gt;

&lt;p&gt;To run a user application, K8s uses Deployments to manage Pods. A Deployment manages ReplicaSets, and a ReplicaSet manages Pods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling and Self-healing
&lt;/h3&gt;

&lt;p&gt;The sole purpose of a ReplicaSet is to maintain a stable set of replica Pods at any given time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scaling —&lt;/strong&gt; Depending on the replicas defined in the YAML template, the ReplicaSet creates that many Pods to match the demand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing —&lt;/strong&gt; In the same way, when a Pod crashes, the ReplicaSet notices the change in the cluster state and launches a new Pod to replace the dead one.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Rolling Updates and Rollbacks
&lt;/h3&gt;

&lt;p&gt;Here’s how it works in K8s-&lt;br&gt;
Let’s suppose, we have a Java application.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We created a Docker image V1&lt;/li&gt;
&lt;li&gt;We created a Deployment using a YAML template which refers to the image — V1&lt;/li&gt;
&lt;li&gt;Deployment creates a new ReplicaSet RS1&lt;/li&gt;
&lt;li&gt;ReplicaSet creates a new set of Pods depending on the configured number of replicas&lt;/li&gt;
&lt;li&gt;Now, let’s say, we changed something in the application — a bug fix, an enhancement, etc.&lt;/li&gt;
&lt;li&gt;We create a new Docker image V2&lt;/li&gt;
&lt;li&gt;We update the YAML and change the image reference to V2&lt;/li&gt;
&lt;li&gt;The controller observes a change in the image i.e. V1 -&amp;gt; V2&lt;/li&gt;
&lt;li&gt;K8s creates a new ReplicaSet RS2(for V2) in parallel without touching old ReplicaSet RS1&lt;/li&gt;
&lt;li&gt;K8s starts creating new Pods in RS2 in parallel&lt;/li&gt;
&lt;li&gt;At this time, Deployment is running both the ReplicaSets — RS1 and RS2&lt;/li&gt;
&lt;li&gt;RS1 — with old image V1&lt;/li&gt;
&lt;li&gt;RS2 — with new image V2&lt;/li&gt;
&lt;li&gt;K8s does this incrementally — it starts a new Pod in the new ReplicaSet and drops a Pod in the old ReplicaSet&lt;/li&gt;
&lt;li&gt;Finally, the old ReplicaSet becomes empty and the new ReplicaSet becomes fully operational, with new Pods running image V2&lt;/li&gt;
&lt;li&gt;At this point, K8s doesn’t remove the old ReplicaSet, which is now empty&lt;/li&gt;
&lt;li&gt;The empty ReplicaSet is used in the rollback process&lt;/li&gt;
&lt;li&gt;To roll back, K8s just switches to the empty ReplicaSet and starts the same process in the opposite direction — RS2 to RS1&lt;/li&gt;
&lt;/ol&gt;
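&lt;p&gt;In a Deployment’s YAML, steps 6–8 boil down to changing just the image reference (fragment sketch; the image names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Deployment fragment — changing the image triggers the rolling update
spec:
  template:
    spec:
      containers:
      - name: app
        image: myrepo/demo:v2   # was myrepo/demo:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Applying the change (e.g. kubectl apply -f deployment.yaml) starts the rollout from the old ReplicaSet to the new one, and kubectl rollout undo deployment/demo-deployment switches back to the previous ReplicaSet.&lt;/p&gt;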

&lt;blockquote&gt;
&lt;p&gt;During this whole process, K8s keeps the application running — there’s no downtime.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Services
&lt;/h1&gt;

&lt;p&gt;The K8s documentation defines a Service like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Service is an abstraction which defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is usually determined by a selector.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A Service is a logical grouping of a set of Pods, and it also acts as a load balancer for that set.&lt;/p&gt;

&lt;p&gt;Pods can talk to each other via Service.&lt;/p&gt;

&lt;p&gt;A call from outside the K8s cluster is intercepted by the Service, which then forwards the request to a certain Pod from its set.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So, Service also acts as a network abstraction which hides all the networking complexities.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Main components of a Service
&lt;/h3&gt;

&lt;p&gt;There are two main components of a Service:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Selector —&lt;/strong&gt; The selector is used to select the Pods which form the logical group represented by the Service&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Endpoints —&lt;/strong&gt; A list of healthy Pods. The Service keeps this list up to date by monitoring changes in the Pods, e.g., a Pod crashes, a new Pod joins the cluster, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How does a Service form a group of Pods?
&lt;/h3&gt;

&lt;p&gt;The service uses a label selector to select the Pods.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Labels are simply a set of key-value pairs that we can attach to certain K8s objects like Pod.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s suppose, we have three Pods &lt;strong&gt;P1, P2, and P3&lt;/strong&gt;. And, we have two labels — &lt;strong&gt;env&lt;/strong&gt; and &lt;strong&gt;version&lt;/strong&gt;&lt;br&gt;
Here’s how these Pods are tagged with these labels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;P1 — env = prod and version = 1.0&lt;/li&gt;
&lt;li&gt;P2 — env = prod and version = 1.1&lt;/li&gt;
&lt;li&gt;P3 — env = prod and version = 1.0&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s also suppose that we have a Service with a label selector defined as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;env = prod and version = 1.0&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, based on this, the Service will have two Pods in its set — &lt;strong&gt;P1 and P3&lt;/strong&gt;.&lt;br&gt;
P2 won’t be selected by the Service because although its env label matches (prod), its version is different — the Service expects version=1.0 and P2 has version=1.1.&lt;/p&gt;
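&lt;p&gt;A Service for this example could be sketched like this (the name and ports are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: demo-service        # hypothetical name
spec:
  selector:                 # selects P1 and P3, but not P2
    env: prod
    version: "1.0"
  ports:
  - port: 80                # port exposed by the Service
    targetPort: 8080        # port the Pods listen on
&lt;/code&gt;&lt;/pre&gt;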

&lt;h3&gt;
  
  
  Service Types
&lt;/h3&gt;

&lt;p&gt;There are mainly three types of services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClusterIP —&lt;/strong&gt; To access Pods from inside the cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NodePort —&lt;/strong&gt; To access Pods from outside the cluster, via a port opened on each node&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LoadBalancer —&lt;/strong&gt; To integrate a cloud-specific load balancer, e.g. Azure and AWS would provision different load balancers&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Volumes and ConfigMaps
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Volumes
&lt;/h3&gt;

&lt;p&gt;Volumes are used as a storage solution for a K8s cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We use or mount volumes to store application data permanently so that if a Pod crashes and restarts, it does not lose its data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;K8s uses a plugin layer to handle the volumes so it is capable of working with different types of storage solutions.&lt;/p&gt;

&lt;p&gt;For instance, we can use an Azure Disk, EBS, etc. as a storage solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  Persistent Volume
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;It acts as a storage abstraction which provides APIs to access and manage persistent storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A PersistentVolume represents a piece of storage; an application links to a PersistentVolume via a plugin.&lt;/p&gt;

&lt;p&gt;For instance, Azure storage has its own plugin that can be used to link Azure storage as a PersistentVolume.&lt;/p&gt;

&lt;h4&gt;
  
  
  PersistentVolumeClaims
&lt;/h4&gt;

&lt;p&gt;These are storage requests made by the user. Users request a PersistentVolume based on certain criteria (e.g. size, access mode), and if a matching PersistentVolume is found, it gets bound to the PersistentVolumeClaim.&lt;/p&gt;
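&lt;p&gt;A PersistentVolumeClaim can be sketched like this (the name and size are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc            # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce           # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi          # requested capacity
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If a PersistentVolume matching these criteria exists (or can be provisioned), K8s binds it to the claim, and a Pod can then mount the claim as a volume.&lt;/p&gt;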

&lt;h1&gt;
  
  
  ConfigMaps
&lt;/h1&gt;

&lt;p&gt;ConfigMap is a K8s object which is used to store non-sensitive Pod configuration. It stores data as key-value pairs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For sensitive data, it is recommended to use Secrets rather than ConfigMap.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;ConfigMap is a great way to separate configs from the code.&lt;/p&gt;

&lt;p&gt;ConfigMap can be injected into a container in three ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment variables&lt;/li&gt;
&lt;li&gt;Command line args&lt;/li&gt;
&lt;li&gt;As a file in volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only problem with the first two is that they are static.&lt;/p&gt;

&lt;p&gt;So, once an app has started, any change in the ConfigMap won’t be reflected until we restart the app, because there’s no way to reload a ConfigMap that was injected via environment variables or the startup command.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The third option, using volumes, is the most flexible: we create a volume for the ConfigMap. Any change in the ConfigMap will be reflected in the volume and can be picked up by the application.&lt;/p&gt;
&lt;/blockquote&gt;
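&lt;p&gt;Here’s a sketch of a ConfigMap and the two most common injection styles — environment variables and a mounted volume (all names and the image are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config         # hypothetical name
data:
  LOG_LEVEL: info
---
# Pod spec fragment showing both injection styles
containers:
- name: app
  image: myrepo/demo:v1     # hypothetical image
  envFrom:
  - configMapRef:
      name: demo-config     # keys become environment variables (static)
  volumeMounts:
  - name: config-volume
    mountPath: /etc/config  # keys appear as files here (updated on change)
volumes:
- name: config-volume
  configMap:
    name: demo-config
&lt;/code&gt;&lt;/pre&gt;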

&lt;p&gt;Feel free to check out my page &lt;a href="https://www.vmtechblog.com/search/label/kubernetes" rel="noopener noreferrer"&gt;https://www.vmtechblog.com/search/label/kubernetes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>programming</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Docker Quick Start</title>
      <dc:creator>vmtechhub</dc:creator>
      <pubDate>Sun, 17 Oct 2021 14:14:50 +0000</pubDate>
      <link>https://dev.to/vmtechhub/docker-quick-start-1lb5</link>
      <guid>https://dev.to/vmtechhub/docker-quick-start-1lb5</guid>
      <description>&lt;p&gt;Before talking about Docker, let’s first talk about Virtualization in general.&lt;/p&gt;

&lt;h1&gt;
  
  
  Virtualization
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Virtualization is a way to utilize the underlying hardware more efficiently.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Do we really need virtualization?
&lt;/h2&gt;

&lt;p&gt;In general, yes. If every single application requires a separate machine, chances are that most of the time that machine will be underutilized and we will be wasting the hardware resources like CPU, memory, etc.&lt;/p&gt;

&lt;p&gt;With virtualization, many applications can share the same physical hardware.&lt;/p&gt;

&lt;p&gt;A virtualization solution acts as a layer on top of actual hardware.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AaX1Q4N8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qp1j1khycymsoeq81abi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AaX1Q4N8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qp1j1khycymsoeq81abi.png" alt="Virtualization Layer on top of Hardware"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does a system achieve virtualization?
&lt;/h2&gt;

&lt;p&gt;It does so via a Hypervisor.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A hypervisor is the software layer through which we achieve the desired virtualization.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are two types of hypervisors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Type-1, aka bare-metal, runs directly on the hardware.&lt;/li&gt;
&lt;li&gt;Type-2 runs as an application on the host operating system.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Virtual Machines(VMs)
&lt;/h1&gt;

&lt;p&gt;A virtual machine, as the name suggests, is a machine with its own OS. A VM emulates physical hardware.&lt;/p&gt;

&lt;p&gt;An application running inside a VM doesn’t know if it’s running on the real machine or not, hence the name Virtual Machine.&lt;/p&gt;

&lt;p&gt;Many VMs can run on the same machine without knowing each other.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A virtual machine runs on the Hypervisor and accesses the underlying hardware through the Hypervisor.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this diagram, we see two different VMs running on the same machine using a Hypervisor.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VM1 has Linux and is running an application named App1.&lt;/li&gt;
&lt;li&gt;VM2 has Windows and is running two applications — App2 and App3.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EXGikg58--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3g4wxq5qg852ecemal9r.png" alt="Virtual Machines on the same host"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Containerization
&lt;/h1&gt;

&lt;p&gt;Containerization, or OS-level virtualization, is an OS feature in which the kernel allows multiple isolated user-space instances; these instances are called containers.&lt;/p&gt;

&lt;p&gt;A Container is like a box in which an application runs. The container provides virtualization to that application.&lt;/p&gt;

&lt;p&gt;The application doesn’t know which OS it is actually running on; for that application, the container is everything.&lt;/p&gt;

&lt;p&gt;It is the container’s job to provide the libraries, dependencies, and everything else that an application needs for successful execution.&lt;/p&gt;

&lt;p&gt;The application runs within the boundaries of the container, hence it is less dependent on the host OS.&lt;br&gt;
And for the same reason, it is completely isolated from other containers running on the same machine.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yMac2UEI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2r1jfyvpt8k315m4bzih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yMac2UEI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2r1jfyvpt8k315m4bzih.png" alt="A container engine running multiple containers"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What problems does a container solve?
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Portability
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uUwgynIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15ku3izzevj3n5vkp1nx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uUwgynIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15ku3izzevj3n5vkp1nx.png" alt="Moving a container from one machine to another"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s suppose, we’re building an application from scratch.&lt;br&gt;
So,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We find a machine.&lt;/li&gt;
&lt;li&gt;We download all the required software, their dependencies.&lt;/li&gt;
&lt;li&gt;Then we install and configure everything.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Configuring a software package or tool generally involves changing some config file on the OS, or maybe changing some network port, etc.&lt;br&gt;
Finally, everything is up &amp;amp; running on your dev machine. Cool stuff!&lt;/p&gt;

&lt;p&gt;Now, how would you move this application along with its configurations to another machine?&lt;/p&gt;

&lt;p&gt;Without any containerization tool or technology, we would need to repeat everything that we did on our dev machine.&lt;br&gt;
And, if everything is scattered and not properly documented, and managed, we may easily miss an important config on the new machine.&lt;/p&gt;

&lt;p&gt;In that case, either the machine itself or some other application is screwed.&lt;/p&gt;

&lt;p&gt;With the help of containers or VMs, we can just copy the VM or the container to the new machine.&lt;/p&gt;

&lt;p&gt;Because apps are configured to run inside a container, and nothing is actually installed on the host machine directly, you can move them as a package.&lt;/p&gt;
&lt;h3&gt;
  
  
  Packaging and Deployment
&lt;/h3&gt;

&lt;p&gt;In extension of the previous point, as soon as you containerize your application, it is ready to be moved to any machine.&lt;/p&gt;

&lt;p&gt;To create a container for your app, you tell the container everything about your app — configs, ports, file system dependencies, other s/w dependencies, etc.&lt;/p&gt;

&lt;p&gt;The container becomes the runnable and deployable package.&lt;br&gt;
Any machine which has the right container infrastructure will be able to run your container.&lt;/p&gt;

&lt;p&gt;It is similar to Java classes: once compiled, the same class file runs on any machine that has the right JVM.&lt;/p&gt;
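&lt;p&gt;As a sketch, the packaging step above might look like this minimal Dockerfile for a Java service (the base image, paths, and port here are illustrative assumptions, not a prescription):&lt;/p&gt;

```dockerfile
# Illustrative only: the base image, paths, and port are assumptions.
# Start from an image that provides the right JVM (the s/w dependency).
FROM eclipse-temurin:17-jre
WORKDIR /app
# The build artifact is the file system dependency we copy in.
COPY target/app.jar app.jar
# The port the service listens on.
EXPOSE 8080
# How to start the app.
CMD ["java", "-jar", "app.jar"]
```

&lt;p&gt;Everything the app needs is declared in one file, so any machine with the container runtime can build and run the same package.&lt;/p&gt;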
&lt;h3&gt;
  
  
  Scaling
&lt;/h3&gt;

&lt;p&gt;Containers promote the microservices pattern: one service per container.&lt;br&gt;
Under heavy load, if you need more instances to share the load, you just add more containers, and your service scales immediately.&lt;/p&gt;
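&lt;p&gt;As an illustrative sketch, with Docker Compose this kind of scaling is a one-line change (the service and image names below are assumptions):&lt;/p&gt;

```yaml
# docker-compose.yml -- illustrative; the image name is an assumption
services:
  web:
    image: my-service:1.0
    ports:
      - "8080"   # let Docker assign host ports so replicas don't collide
```

&lt;p&gt;Running &lt;code&gt;docker compose up --scale web=3&lt;/code&gt; would start three containers of the same service from this one definition.&lt;/p&gt;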
&lt;h3&gt;
  
  
  Dependency conflicts and clean removal of applications
&lt;/h3&gt;

&lt;p&gt;Without containers, dependency conflicts may arise because multiple apps may require different versions of the same dependency.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let’s suppose app A needs version x of Sybase while app B needs version y.&lt;/li&gt;
&lt;li&gt;Consider another scenario: when you uninstall or remove an app from your system, chances are you forget to remove its dependencies, which stay behind and eat the system’s disk space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dependencies like these not only make day-to-day work difficult but also pollute the machine once you no longer need them.&lt;/p&gt;

&lt;p&gt;With containers, an app and its dependencies are packaged as one bundle, so there is no dependency conflict.&lt;/p&gt;

&lt;p&gt;Also, instead of uninstalling an app, you remove its container, so everything is removed as a package and the machine stays clean.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---V0-Dm1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpn3ik3m1hvs0udmktd4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---V0-Dm1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpn3ik3m1hvs0udmktd4.png" alt="Two containers running on different Java version without any conflict"&gt;&lt;/a&gt;&lt;/p&gt;
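&lt;p&gt;For example, the two conflicting apps above could each pin their own Java version inside their own image. A sketch (the file names, paths, and tags are illustrative; these are two separate Dockerfiles):&lt;/p&gt;

```dockerfile
# app-a/Dockerfile -- app A needs Java 8
FROM eclipse-temurin:8-jre
COPY a.jar /app/a.jar
CMD ["java", "-jar", "/app/a.jar"]

# app-b/Dockerfile -- app B needs Java 17; no conflict, it lives in its own image
FROM eclipse-temurin:17-jre
COPY b.jar /app/b.jar
CMD ["java", "-jar", "/app/b.jar"]
```

&lt;p&gt;Each container carries its own JVM, so both versions coexist on the same host, and removing either container removes its Java along with it.&lt;/p&gt;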
&lt;h1&gt;
  
  
  Docker
&lt;/h1&gt;

&lt;p&gt;Let’s finally talk about Docker.&lt;br&gt;
&lt;a href="https://docs.docker.com/get-started/overview/"&gt;Docker&lt;/a&gt; website defines Docker as an open platform for developing, shipping, and running applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Docker is an application container which enables us to bundle an application and all its dependencies as a package so that it can be shipped anywhere and run on any machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Docker is written in Go.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Under the hood, a Docker container is simply a Linux process that uses Linux kernel features such as namespaces, cgroups, and seccomp.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Docker terminology
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker client —&lt;/strong&gt; Installed on the client machine; it is the command-line interface through which we run docker commands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker daemon —&lt;/strong&gt; Generally installed on the same machine; it listens for Docker API requests and does everything required to run a docker container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker registry —&lt;/strong&gt; A place where docker images are stored. Just as GitHub is where people manage their codebases, a docker registry is where docker images live. A registry can be public, like Docker Hub, or private.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker image —&lt;/strong&gt; An image is a read-only template with instructions for creating a Docker container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker container —&lt;/strong&gt; A container is a runnable instance of an image.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Difference between a Docker image and Docker container
&lt;/h2&gt;

&lt;p&gt;Let’s take the example of a Java class. A class is code stored as a .java file in a local directory or in a GitHub project. On its own, a class doesn’t do anything.&lt;/p&gt;

&lt;p&gt;But when we compile it and run the resulting .class file, the same read-only class becomes a runtime object: an executable piece of code that does what it was coded for.&lt;/p&gt;

&lt;p&gt;A docker image is like that compiled class: a read-only template holding the instructions for running a container.&lt;br&gt;
And when we run a docker image via the docker client, docker creates a running container.&lt;/p&gt;

&lt;p&gt;Just as we can create multiple objects of a single class, we can create multiple running containers from a single docker image.&lt;/p&gt;
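&lt;p&gt;The analogy can be sketched in a few lines of shell (all the names here are made up): the variable plays the read-only template, and each call is an independent running instance of it.&lt;/p&gt;

```shell
#!/bin/sh
# The "image": a fixed, read-only template.
image="nginx:latest"

# "Running" the image: each call creates a new, independent instance.
run_container() {
  echo "container $1 running from $image"
}

# Many containers from one image, like many objects from one class.
run_container web1
run_container web2
```

&lt;p&gt;Both instances come from the same unchanged template, just as every object of a class shares the same compiled code.&lt;/p&gt;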
&lt;h1&gt;
  
  
  High Level Working
&lt;/h1&gt;

&lt;p&gt;There’s a nice diagram on Docker site which depicts the high level flow.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sqptu5jX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/700/0%2A69pWrQj8GsDMIM3Z" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sqptu5jX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/700/0%2A69pWrQj8GsDMIM3Z" alt="Docker flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we run a command to run a container from an image:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker checks if the image already exists on the system.&lt;/li&gt;
&lt;li&gt;If yes, it uses the image to run a container.&lt;/li&gt;
&lt;li&gt;If no, it goes to a registry to find and pull the image.&lt;/li&gt;
&lt;li&gt;Docker downloads the image onto the system.&lt;/li&gt;
&lt;li&gt;Docker runs the image and starts the container.&lt;/li&gt;
&lt;/ul&gt;
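&lt;p&gt;This look-locally-then-pull flow is essentially a cache-aside lookup. A minimal shell sketch that simulates the registry with a plain directory (all names and paths are made up for illustration):&lt;/p&gt;

```shell
#!/bin/sh
# Simulated registry and local image store.
mkdir -p registry local
echo "nginx layers" > registry/nginx

# Mimic the flow of `docker run`: use the local copy if present,
# otherwise pull it from the registry first, then start the container.
run_image() {
  if [ -f "local/$1" ]; then
    echo "using local image: $1"
  else
    echo "pulling $1 from registry"
    cp "registry/$1" "local/$1"
  fi
  echo "starting container from $1"
}

run_image nginx   # first run: the image is pulled
run_image nginx   # second run: the local image is reused
```

&lt;p&gt;The first call pulls and caches the image; every later call skips the pull, which is exactly why repeated &lt;code&gt;docker run&lt;/code&gt;s of the same image start much faster than the first.&lt;/p&gt;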

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/F2xhGpdmjCQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h1&gt;
  
  
  Useful Docker commands
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_6EPeK-k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdgx76uj331fnoms7ayx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_6EPeK-k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdgx76uj331fnoms7ayx.png" alt="Frequently used Docker commands"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Docker architecture and key components
&lt;/h1&gt;

&lt;p&gt;Docker has a layered architecture. Some of the key components, ordered from low level to high level, are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;runc / libcontainer —&lt;/strong&gt; &lt;a href="https://github.com/opencontainers/runc/tree/master/libcontainer"&gt;https://github.com/opencontainers/runc/tree/master/libcontainer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;containerd —&lt;/strong&gt; &lt;a href="https://github.com/containerd/containerd"&gt;https://github.com/containerd/containerd&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker daemon —&lt;/strong&gt; &lt;a href="https://docs.docker.com/engine/reference/commandline/dockerd/"&gt;https://docs.docker.com/engine/reference/commandline/dockerd/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration —&lt;/strong&gt; &lt;a href="https://docs.docker.com/get-started/orchestration/"&gt;https://docs.docker.com/get-started/orchestration/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a diagram showing these components with a short description of what each one is and does.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FkPg2v9R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gci9r4iez8ym0s1q1udf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FkPg2v9R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gci9r4iez8ym0s1q1udf.png" alt="Key components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What happens when we run a container?&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lMuFMVwh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tocn3tqstoh78efik6fb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lMuFMVwh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tocn3tqstoh78efik6fb.png" alt="typical flow of a docker run command"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to check out my blog: &lt;a href="https://www.vmtechblog.com/search/label/docker"&gt;https://www.vmtechblog.com/search/label/docker&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>programming</category>
      <category>devops</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
