<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Madhavam Saxena</title>
    <description>The latest articles on DEV Community by Madhavam Saxena (@msaxena14).</description>
    <link>https://dev.to/msaxena14</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F999446%2F901cf05f-0565-4998-9620-51c9246214bb.jpg</url>
      <title>DEV Community: Madhavam Saxena</title>
      <link>https://dev.to/msaxena14</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/msaxena14"/>
    <language>en</language>
    <item>
      <title>Persistent Volumes and Persistent Volumes Claims</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Thu, 19 Jan 2023 18:12:44 +0000</pubDate>
      <link>https://dev.to/msaxena14/persistent-volumes-and-persistent-volumes-claims-1fea</link>
      <guid>https://dev.to/msaxena14/persistent-volumes-and-persistent-volumes-claims-1fea</guid>
      <description>&lt;p&gt;&lt;strong&gt;Persistent Volumes:&lt;/strong&gt;&lt;br&gt;
Persistent volumes come into play when the disadvantages of hostPath are observed. Unlike hostPath and emptyDir, persistent volumes are independent of pods and of the slave / worker node, hence data stored in a persistent volume remains available even after a pod, or even a slave / worker node, crashes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent Volume Claim:&lt;/strong&gt;&lt;br&gt;
A PVC is present on the slave / worker node and is connected to a persistent volume. With the help of this connection, worker / slave nodes and pods are able to write into the persistent volume.&lt;br&gt;
A claim binds to a single persistent volume, but you can create different claims for different volumes and attach them to different pods on different nodes, i.e. you have full flexibility with the volumes and the data remains available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Defining Persistent Volume:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
    name: host-pv
spec:
    capacity:
        storage: 4Gi
    volumeMode: Filesystem
    accessModes:
        - ReadWriteOnce
        # - ReadOnlyMany
        # - ReadWriteMany
    hostPath:
        path: /data
        type: DirectoryOrCreate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;spec.capacity&lt;/strong&gt;: Defines the capacity of the volume.&lt;br&gt;
&lt;strong&gt;spec.capacity.storage&lt;/strong&gt;: Here we define the size of the persistent volume; Gi stands for gibibytes, roughly a GB.&lt;br&gt;
&lt;strong&gt;spec.volumeMode&lt;/strong&gt;: It is of two types: &lt;br&gt;
Filesystem&lt;br&gt;
Block&lt;br&gt;
&lt;strong&gt;spec.accessModes&lt;/strong&gt;: As the name suggests, it defines the mode of access. Since I’ve taken the example of hostPath, I can only use the ReadWriteOnce access mode: hostPath allows different pods within the same slave / worker node to write into the volume, and ReadWriteOnce means the volume can be mounted read-write by a single node, so any number of pods on that same slave / worker node can access it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent Volumes Claims:&lt;/strong&gt;&lt;br&gt;
Persistent volume claims map pods to persistent volumes; the claim must be referenced in every pod that needs to access the persistent volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: host-pvc
spec:
    volumeName: host-pv   # name of the volume that is to be mapped
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 1Gi   # counterpart of the capacity defined in the volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;spec.resources&lt;/strong&gt;: defines the resources the claim asks for&lt;br&gt;
&lt;strong&gt;spec.resources.requests&lt;/strong&gt;: requests the resources&lt;br&gt;
&lt;strong&gt;spec.resources.requests.storage&lt;/strong&gt;: the amount of storage requested; at most the capacity defined for the persistent volume&lt;/p&gt;
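&lt;p&gt;To create these objects and verify that the claim binds to the volume (the file names here are simply whatever you saved the manifests as):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f host-pv.yml
kubectl apply -f host-pvc.yml
kubectl get pv    # STATUS should show Bound once the claim attaches
kubectl get pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;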

&lt;p&gt;Adding this Persistent Volume claim in the pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
    name: story-deployment
spec:
    replicas: 1
    selector:
        matchLabels:
            app: story
    template:
        metadata:
            labels:
                app: story
        spec:
            containers:
                - name: story
                  image: docker_repo/&amp;lt;image_name&amp;gt;
                  volumeMounts:
                    - mountPath: /app/story
                      name: story-volume
            volumes:
                - name: story-volume
                  persistentVolumeClaim:
                    claimName: &amp;lt;name_of_the_persistent_volume_claim&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Volumes in Kubernetes - Part 2 (hostPath)</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Tue, 17 Jan 2023 17:50:06 +0000</pubDate>
      <link>https://dev.to/msaxena14/volumes-in-kubernetes-part-2-hostpath-1h3n</link>
      <guid>https://dev.to/msaxena14/volumes-in-kubernetes-part-2-hostpath-1h3n</guid>
      <description>&lt;p&gt;&lt;strong&gt;hostPath:&lt;/strong&gt;&lt;br&gt;
The emptyDir volume is a very basic volume and has a few downsides. Suppose more than one pod is running and data is stored in their emptyDir volumes; if a pod crashes, the data stored in its volume is lost too.&lt;br&gt;
This is where the hostPath driver comes into action.&lt;br&gt;
The hostPath driver allows us to map a path on the worker / slave node on which the pods are running to a specific path inside the pods, i.e. the data inside this path is exposed to multiple pods, so multiple pods can use the same path on the host (worker) machine to store data instead of a pod-specific path.&lt;br&gt;
hostPath is very much like a bind mount in Docker; as in Docker, we can perform read / write actions on this hostPath.&lt;/p&gt;

&lt;p&gt;Syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
    name: &amp;lt;name_of_deployment&amp;gt;
spec:
    replicas: 1
    selector:
        matchLabels:
            a: b
    template:
        metadata:
            labels:
                a: b
        spec:
            containers:
                - name: &amp;lt;name_of_container&amp;gt;
                  image: docker_repo/&amp;lt;image_name&amp;gt;
                  volumeMounts:
                    - mountPath: &amp;lt;path_inside_the_container&amp;gt;
                      name: &amp;lt;name_of_volume&amp;gt;
            volumes:
                - name: &amp;lt;name_of_volume&amp;gt;
                  hostPath:
                      path: &amp;lt;path_on_the_host_machine_where_data_should_be_stored&amp;gt;
                      type: DirectoryOrCreate  # declares how the above path is to be handled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;type:&lt;/strong&gt; Directory means the path must already exist on the node.&lt;br&gt;
        DirectoryOrCreate means the path is created if it does not already exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disadvantage of hostPath:&lt;/strong&gt;&lt;br&gt;
Suppose there are multiple pods running on multiple slave / worker nodes. hostPath lets you map a volume on one slave / worker node to the N pods running on that particular node, i.e. all pods running on other slave / worker nodes will not be able to access the volume defined on this node.&lt;/p&gt;

</description>
      <category>welcome</category>
      <category>developer</category>
      <category>serverless</category>
      <category>ai</category>
    </item>
    <item>
      <title>Volumes in Kubernetes- Part 1 (emptyDir)</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Mon, 16 Jan 2023 18:04:04 +0000</pubDate>
      <link>https://dev.to/msaxena14/volumes-in-kubernetes-2lkj</link>
      <guid>https://dev.to/msaxena14/volumes-in-kubernetes-2lkj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Volumes:&lt;/strong&gt;&lt;br&gt;
State in kubernetes:&lt;br&gt;
Data created and used by your application which must not be lost.&lt;br&gt;
Types of data&lt;br&gt;
    1. User generated data&lt;br&gt;
    2. Intermediate data (either memory or temp. database)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volumes&lt;/strong&gt; in kubernetes are used to store the data of running pods. In live production environments, pods sometimes get terminated while the data inside them is important; volumes are used in kubernetes to avoid losing this data.&lt;br&gt;
Kubernetes can mount volumes into containers. We can set an instruction in the pod template, which is part of &lt;strong&gt;deployment.yml&lt;/strong&gt;, that mounts a volume into the container created with the pod that came up via deployment.yml.&lt;br&gt;
Kubernetes provides various types of volumes: &lt;br&gt;
Local Node: A directory on the worker node where the pod is running.&lt;br&gt;
Cloud Provider Specific&lt;/p&gt;

&lt;p&gt;Lifetime of a volume by default depends on the lifetime of a pod as volumes are part of the pod.&lt;/p&gt;

&lt;p&gt;In the deployment.yml file: &lt;br&gt;
We need to add the volumes key inside &lt;strong&gt;spec.template.spec&lt;/strong&gt;. It is a list of volumes. We can give a name to every volume we create, and these volumes will be accessible by all the containers inside that pod.&lt;br&gt;
Under the name, we have to set up the volume type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
    name: &amp;lt;name_of_deployment&amp;gt;
spec:
    replicas: 1
    selector:
        matchLabels:
            a: b
    template:
        metadata:
            labels:
                a: b
        spec:
            containers:
                - name: &amp;lt;name_of_container&amp;gt;
                  image: docker_repo/&amp;lt;image_name&amp;gt;
            volumes:
                - name: &amp;lt;name_of_volume&amp;gt;
                  emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;emptyDir: {}&lt;/strong&gt;&lt;br&gt;
emptyDir is a volume type used in kubernetes to store data.&lt;br&gt;
The empty braces mean we want to use all the default values; no configuration is declared and the defaults are used as is.&lt;br&gt;
It simply creates a new empty directory whenever the pod starts and keeps the directory, and the data written to it, for the lifespan of the pod. Containers can easily write to it, but if the pod dies, the emptyDir dies with it, i.e. the volume is also terminated, and when the pod is created again a new empty directory is created.&lt;/p&gt;

&lt;p&gt;Now, after creating a volume we need to bind it to the container, so we make it available inside the container configuration with the volumeMounts keyword.&lt;br&gt;
It takes a list. mountPath is the path inside the container where the volume is mounted.&lt;br&gt;
The name keyword refers to the volume we created above, mounting it at the mountPath, i.e. the path internal to the container.&lt;/p&gt;
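&lt;p&gt;A minimal sketch of the container section with volumeMounts added (the container name, volume name and mount path are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;containers:
    - name: &amp;lt;name_of_container&amp;gt;
      image: docker_repo/&amp;lt;image_name&amp;gt;
      volumeMounts:
        - mountPath: &amp;lt;path_inside_the_container&amp;gt;
          name: &amp;lt;name_of_volume&amp;gt;
volumes:
    - name: &amp;lt;name_of_volume&amp;gt;
      emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;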

</description>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Liveness Probes in kubernetes</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Tue, 10 Jan 2023 16:06:49 +0000</pubDate>
      <link>https://dev.to/msaxena14/liveness-probes-in-kubernetes-5ffb</link>
      <guid>https://dev.to/msaxena14/liveness-probes-in-kubernetes-5ffb</guid>
      <description>&lt;p&gt;Suppose you’ve created a service and a deployment with the replica count as ‘n’, this means that there will be at least ‘n’ number of pods running which will be part of the service we create. Now you need to run your application on these pods, but are you sure if the pods and containers are healthy or not?&lt;br&gt;
The answer is   NO and to get sure for the same we use livenessprobe. Now this livenessprobe is defined inside the deployment.yml file.&lt;/p&gt;

&lt;p&gt;Syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;livenessProbe:
    httpGet:
        path: /status
        port: 8080
    periodSeconds: 10
    initialDelaySeconds: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
    name: first-dep
    labels:
        app: hello-world
spec:
    replicas: 3
    selector:
        matchLabels:
            app: hello-world
    template:
        metadata:
            name: first-pod
            labels:
                app: hello-world
        spec:
            containers:
                - name: &amp;lt;name_of_container&amp;gt;
                  image: &amp;lt;image&amp;gt;
                  livenessProbe:
                      httpGet:
                          path: /status
                          port: 8080
                      periodSeconds: 10
                      initialDelaySeconds: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;path&lt;/strong&gt;: the path that is checked to see whether the desired response is received&lt;br&gt;
&lt;strong&gt;port&lt;/strong&gt;: the port number of the container, i.e. the port on which the container is exposed&lt;br&gt;
&lt;strong&gt;periodSeconds&lt;/strong&gt;: defines how often the health check is performed&lt;br&gt;
&lt;strong&gt;initialDelaySeconds&lt;/strong&gt;: the delay before the first health check runs after the container starts&lt;/p&gt;
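&lt;p&gt;Besides httpGet, Kubernetes also supports tcpSocket and exec handlers for liveness probes; a minimal sketch (the port and command here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;livenessProbe:
    tcpSocket:        # probe succeeds if a TCP connection can be opened
        port: 8080
    periodSeconds: 10
    initialDelaySeconds: 30
---
livenessProbe:
    exec:             # probe succeeds if the command exits with code 0
        command:
            - cat
            - /tmp/healthy
    periodSeconds: 10
    initialDelaySeconds: 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;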

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>livenessprobe</category>
    </item>
    <item>
      <title>Rollback and Updating Deployment</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Mon, 09 Jan 2023 17:29:08 +0000</pubDate>
      <link>https://dev.to/msaxena14/rollback-and-updating-deployment-3b71</link>
      <guid>https://dev.to/msaxena14/rollback-and-updating-deployment-3b71</guid>
      <description>&lt;p&gt;&lt;strong&gt;Scaling a deployment:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale &amp;lt;object_type&amp;gt; &amp;lt;object_name&amp;gt; --replicas=&amp;lt;count_of_objects_needed&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Updating a deployment:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployments
kubectl set image deployment &amp;lt;name_of_deployment&amp;gt; &amp;lt;name_of_new_image_that_you_want_to_give / old_image_name&amp;gt;=&amp;lt;actual_name_of_image(full path of repository)&amp;gt;:&amp;lt;tag&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this command does is look for the actual_name_of_image in the docker repository and download it from there. After downloading, it updates the deployment’s current image with the downloaded image, and that's how a deployment is updated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Images are differentiated on the basis of tags, i.e. K8s re-downloads the image if and only if a new tag is passed to it; if the tag is not mentioned while updating a deployment, it will not update the image. Hence, passing a tag while executing a set image command is necessary.&lt;/p&gt;
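&lt;p&gt;For example (the deployment, container and image names here are made up for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl set image deployment story-dep story=docker_repo/story-app:2.0
kubectl rollout status deployment/story-dep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;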

&lt;p&gt;To check the &lt;strong&gt;status of the update&lt;/strong&gt;, one can execute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout status deployment &amp;lt;name_of_deployment&amp;gt; 
                          OR
kubectl rollout status deployment/&amp;lt;name_of_deployment&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, what if the tag passed doesn’t actually exist? I.e.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl set image deployment/&amp;lt;name_of_deployment&amp;gt; &amp;lt;old_image_name&amp;gt;=&amp;lt;repo_name/image_name&amp;gt;:&amp;lt;some_random_value&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will still show that the deployment is updated, although it has not actually been updated. To check whether the deployment really updated, one can test it via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout status deployment/&amp;lt;name_of_deployment&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will hang indefinitely: k8s follows a rolling update strategy, but since the passed tag doesn’t exist, the image cannot be downloaded, so the new pod trying to spin up never becomes live, the old pod does not terminate, and the rollout stays stuck. &lt;br&gt;
In such cases, we need to &lt;strong&gt;rollback to the previous deployment&lt;/strong&gt; by the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout undo deployment/&amp;lt;name_of_deployment&amp;gt;
kubectl rollout status deployment/&amp;lt;name_of_deployment&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can check the &lt;strong&gt;history of deployment&lt;/strong&gt; with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout history deployment/&amp;lt;name_of_deployment&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will list out all the rollouts of the deployment in the form of revisions.&lt;/p&gt;
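&lt;p&gt;The output looks roughly like this (revision numbers depend on your rollout history; CHANGE-CAUSE is &amp;lt;none&amp;gt; unless a change cause was recorded):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REVISION  CHANGE-CAUSE
1         &amp;lt;none&amp;gt;
2         &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;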

&lt;p&gt;To particularly &lt;strong&gt;inspect a deployment&lt;/strong&gt;, we can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout history deployment/&amp;lt;name_of_deployment&amp;gt; --revision=&amp;lt;nth_deployment&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This also prints the image which was used in that deployment.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Declarative approach for creating a service</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Sun, 08 Jan 2023 16:21:44 +0000</pubDate>
      <link>https://dev.to/msaxena14/declarative-approach-for-creating-a-service-5ec0</link>
      <guid>https://dev.to/msaxena14/declarative-approach-for-creating-a-service-5ec0</guid>
      <description>&lt;p&gt;Service.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: &amp;lt;name_of_service&amp;gt;
spec:
  selector:
    &amp;lt;labelkey_of_pod_that’s_supposed_to_be_part_of_service&amp;gt;: &amp;lt;labelvalue_of_pod_that’s_supposed_to_be_part_of_service&amp;gt;
  ports:              # value entered is a list
    - protocol: 'TCP' # by default the value is TCP
      port:           # port at which the service will be exposed, i.e. the outside-world port
      targetPort:     # port at which the container will be exposed, i.e. the port inside the container
  type: &amp;lt;ClusterIP / NodePort / LoadBalancer&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Refer to the “Exposing a deployment in kubernetes” blog for a detailed idea of type.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;.kind: Defines the type of k8s object being created&lt;br&gt;
.metadata.name: Assigns the name to the service object.&lt;br&gt;
.spec: Defines the specification of the service object&lt;br&gt;
.spec.selector: Unlike in a deployment, a service selector is always a direct match on labels, so we don't write a matchLabels key here.&lt;br&gt;
What it does is look for the pods whose labels contain the key and value pair defined under the selector, and then it controls those pods under this service.&lt;br&gt;
So, for e.g. if the key value pair is,&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  selector:
    app: backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, it will look for all the pods having this key, value pair as a label. Now, backtracking to the origin of the pods, we find that pods are created via a deployment. Joining the dots, we can conclude that the value in the service selector should be the same as:&lt;br&gt;
.spec.selector.matchLabels of the deployment&lt;/p&gt;
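&lt;p&gt;A minimal sketch of how the two files line up (only the relevant keys are shown):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deployment.yml
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend

# service.yml
spec:
  selector:
    app: backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;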

&lt;blockquote&gt;
&lt;p&gt;.spec.ports: The value accepted here is in list format, as we can enter more than one value; hence it is defined as portS.&lt;br&gt;
.spec.ports.protocol: By default ‘TCP’&lt;br&gt;
.spec.ports.port:&lt;br&gt;
 the port number at which the service will be exposed&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To apply this service, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f service.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>Exposing a deployment in kubernetes</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Sat, 07 Jan 2023 18:35:35 +0000</pubDate>
      <link>https://dev.to/msaxena14/exposing-a-deployment-in-kubernetes-nn</link>
      <guid>https://dev.to/msaxena14/exposing-a-deployment-in-kubernetes-nn</guid>
      <description>&lt;p&gt;Exposing a deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose deployment &amp;lt;name_of_deployment&amp;gt; --type=LoadBalancer/NodePort/ClusterIP(Default) --port=&amp;lt;port_no_at_which_deployment_needs_to_be_exposed&amp;gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;type=LoadBalancer is only available in environments that support a load balancer, like AWS. &lt;br&gt;
Via the above command we are creating a service that will be exposed on the assigned port number.&lt;br&gt;
Now, what is a service?&lt;/p&gt;

&lt;p&gt;Service:&lt;br&gt;
A service helps us reach the pods and the containers running in a pod. It is another k8s object, responsible for exposing pods within the cluster and outside it.&lt;br&gt;
Pods do have an IP address, but we can’t access them from outside the cluster via this IP; also, whenever a pod gets terminated and another pod spins up, the IP address of the pod changes.&lt;br&gt;
A service groups these pods together and assigns a stable IP address to that group. The benefit is that pods may get terminated but the service never does, so the pods inside the service stay reachable. Also, by exposing the service, we make the pods reachable from outside the cluster. Without a service it becomes very difficult to reach these pods, and hence we need a service.&lt;/p&gt;

&lt;p&gt;--type=NodePort :&lt;br&gt;
This means the deployment is exposed with the help of the worker / slave node on which it is running, which makes the deployment accessible from outside; a better approach, though, is LoadBalancer.&lt;/p&gt;

&lt;p&gt;--type=LoadBalancer :&lt;br&gt;
This not only helps to assign IP to the service, but also, routes the load in the balanced way to the service.&lt;/p&gt;

&lt;p&gt;--type=ClusterIP is the default type and lets you expose the deployment via a cluster IP, reachable only from within the cluster.&lt;/p&gt;

&lt;p&gt;To see the running services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is basically the imperative approach to creating a service.&lt;br&gt;
For the declarative approach, stay tuned.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deployment in Kubernetes</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Fri, 06 Jan 2023 17:49:26 +0000</pubDate>
      <link>https://dev.to/msaxena14/creating-a-deployment-in-kubernetes-3a69</link>
      <guid>https://dev.to/msaxena14/creating-a-deployment-in-kubernetes-3a69</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Deployment in Kubernetes?&lt;/strong&gt;&lt;br&gt;
Deployment basically is a higher level Kubernetes object which is used to manage POD's and their version. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a deployment&lt;/strong&gt;&lt;br&gt;
There are basically two methods to create a deployment: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Imperative&lt;/li&gt;
&lt;li&gt;Declarative
We are going to learn about the Declarative method.
Attached below is the snippet for creating a deployment in k8s.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: &amp;lt;Name Of Deployment that you want to assign&amp;gt;
  labels: &amp;lt;Labels that you want to attach with deployment&amp;gt;
spec:
  replicas: &amp;lt;Defines the total number of pods expected to run&amp;gt;
  selector: &amp;lt;Selects the pods to control on the basis of the
             selector. matchLabels is the most used type&amp;gt;
    matchLabels:
      &amp;lt;labelkey&amp;gt;: &amp;lt;labelvalue&amp;gt;
  template: &amp;lt;Template is basically a defined structure for 
             containers running inside the pod&amp;gt;
    metadata:
      labels: &amp;lt;if needed&amp;gt;
      name: &amp;lt;if needed&amp;gt;
    spec:
      containers: &amp;lt;Values assigned is in the form of list&amp;gt;
        - name: &amp;lt;Name of container that you want to give&amp;gt;
          image: &amp;lt;Image of container&amp;gt;
          ports: &amp;lt;port num. on which its expected to listen
                  This value is also given in form of list&amp;gt;
            - containerPort:    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;So what is happening above? &lt;br&gt;
We have created a kubernetes object whose kind is Deployment. The name of this deployment is defined at .metadata.name, and labels can be attached to it at .metadata.labels.&lt;br&gt;
Once the deployment is created, it generates a ReplicaSet (rs), which is responsible for creating the declared number of pods. Now, after defining replicas, we have declared selectors. So, what's the use of a selector?&lt;br&gt;&lt;br&gt;
A selector selects the pods that need to be controlled by the deployment we are creating. The selection is done on the basis of labels, i.e. the labels mentioned in matchLabels and in .spec.template.metadata.labels should be the same, otherwise it can cause misbehavior or an error.&lt;br&gt;
.spec.template is the defined structure for the pod, following which the pod(s) will spin up. &lt;br&gt;
.spec.template.spec defines the container(s). The value of containers is given in the form of a list.&lt;/p&gt;

&lt;p&gt;How to run this file?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yml      
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How to see deployments?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How to delete a specific deployment?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete deployment &amp;lt;name_of_deployment&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>discuss</category>
      <category>showdev</category>
      <category>ui</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Basic flow / working of k8s</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Wed, 04 Jan 2023 17:50:28 +0000</pubDate>
      <link>https://dev.to/msaxena14/basic-flow-working-of-k8s-29e0</link>
      <guid>https://dev.to/msaxena14/basic-flow-working-of-k8s-29e0</guid>
      <description>&lt;p&gt;*&lt;em&gt;Master Node *&lt;/em&gt;:&lt;br&gt;
When a slave node needs some action to be performed it directly communicates with API server.&lt;br&gt;
Now API server forwards the request of slave node to Controller Manager which then makes a check via API server to etcd Cluster of present scenario of requesting slave node.&lt;br&gt;
Now after this, the Controller Manager checks if the present state is the same as the desired state or not.&lt;br&gt;
If a match is not established, the Controller Manager instructs the Kube scheduler to do the needful. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slave Node&lt;/strong&gt;:&lt;br&gt;
First of all, a container is created with the help of a container engine inside a pod. &lt;br&gt;
For the creation of a pod, kubelet makes a request to the API server, and then the Master node plays its role.&lt;br&gt;
After the creation of the pod, kube-proxy assigns it an IP address so that the pod can be uniquely identified.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Brief Overview of Higher Level Kubernetes Objects</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Tue, 03 Jan 2023 17:24:30 +0000</pubDate>
      <link>https://dev.to/msaxena14/brief-overview-of-higher-level-kubernetes-objects-3bkn</link>
      <guid>https://dev.to/msaxena14/brief-overview-of-higher-level-kubernetes-objects-3bkn</guid>
      <description>&lt;p&gt;They can be understood as Kubernetes plugins that allow us to maintain the versions of PODS and help us to rollback to the previous version in case of POD crash.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;ReplicaSet :&lt;br&gt;
Provides us with the facility of auto scaling and auto healing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment :&lt;br&gt;
Another K8s higher level object used for maintaining the versions of PODs which in long term helps in the process of rollback and recovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Services :&lt;br&gt;
This helps in assigning static IP addresses to PODs and perform other networking actions over them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Volume :&lt;br&gt;
This provides ephemeral storage to PODS which even persist after crashing of PODS.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
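
&lt;p&gt;As a sketch of how Deployments and ReplicaSets work together (the names and image tag here are placeholders, not from the article):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # the underlying ReplicaSet keeps 3 Pods alive (auto-healing)
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

&lt;p&gt;Each change applied with kubectl apply records a new revision, and kubectl rollout undo deployment/web returns to the previous one.&lt;/p&gt;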

&lt;p&gt;NOTE:&lt;br&gt;
For managing a single cluster (e.g. on a cloud like AWS), commands generally start with kubectl.&lt;br&gt;
For bootstrapping a cluster on premises, commands start with kubeadm.&lt;br&gt;
For hybrid clouds, kubefed is used, which stands for Kube Federation.&lt;br&gt;
A container can be declared in both imperative and declarative form.&lt;br&gt;
Declarative means you describe the requirements of a container in a YAML file, whereas imperative means you create a container manually using CLI commands.&lt;/p&gt;
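
&lt;p&gt;The two forms mentioned above can be sketched as follows (the resource names are illustrative):&lt;/p&gt;

```shell
# Imperative: create the resource directly with a CLI command.
kubectl create deployment web --image=nginx

# Declarative: describe the desired state in a YAML file, then apply it.
kubectl apply -f deployment.yaml
```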

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>security</category>
      <category>books</category>
    </item>
    <item>
      <title>Architecture of Kubernetes(Slave Node)</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Mon, 02 Jan 2023 18:13:45 +0000</pubDate>
      <link>https://dev.to/msaxena14/architecture-of-kubernetesslave-node-7of</link>
      <guid>https://dev.to/msaxena14/architecture-of-kubernetesslave-node-7of</guid>
      <description>&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Kubelet:&lt;br&gt;
The kubelet is responsible for controlling the Pods, i.e. it conveys messages to the API server, which forwards them to the Controller Manager to check whether the desired state is being maintained.&lt;br&gt;
It always listens to the master node, by default on port 10255 (the read-only port), which can be changed according to need.&lt;br&gt;
It provides feedback on success or failure to the master.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kube-proxy:&lt;br&gt;
Kube-proxy is responsible for assigning dynamic IP addresses to Pods and for establishing communication between Pods.&lt;br&gt;
Pods are not allowed to communicate with each other directly, which is why kube-proxy is needed.&lt;br&gt;
It runs on each slave node and makes sure that each Pod gets its own IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Container Engine:&lt;br&gt;
The container engine is not an internal part of K8s itself; its major role is to create the containers inside a Pod.&lt;br&gt;
Docker is generally used as the container engine, but containerd and rkt (Rocket) are container engines that can be used as well.&lt;br&gt;
It is responsible for pulling container images and exposing containers on the ports mentioned in the manifest.&lt;br&gt;
It also looks after the initialization and termination of containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pod:&lt;br&gt;
The Pod is supposed to be the atomic / control unit of K8s.&lt;br&gt;
Pods cannot communicate with each other directly.&lt;br&gt;
Pods have containers in them.&lt;br&gt;
Ideally a Pod has one container, but it can hold more than one; in that case all the containers are tightly coupled, i.e. if any container starts malfunctioning, the other containers connected with it are also affected, and in the end that Pod is deleted.&lt;br&gt;
A Pod is always created as a whole: if one of several containers in a Pod cannot be created, the Controller Manager creates a whole new Pod, which is assigned a whole new IP address.&lt;br&gt;
Either the entire request for creating n Pods is fulfilled, or not even a single container associated with that request is created.&lt;br&gt;
The containers of a multi-container Pod share access to the same volumes and memory space.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
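
&lt;p&gt;A multi-container Pod sharing one volume can be sketched as follows (the names are illustrative; the emptyDir volume lives as long as the Pod does):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
    - name: shared
      emptyDir: {}                     # one volume, visible to every container in the Pod
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "date | tee /shared/out; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 5; cat /shared/out; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared
```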

&lt;p&gt;Limitations of Pods:&lt;br&gt;
By default there is no facility for auto-healing or auto-scaling; higher-level objects of K8s are added for that.&lt;br&gt;
In case a Pod crashes, there is a dependency on higher-level Kubernetes objects for rollback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xahrfxyh1j6ed120vdzm.png" rel="noopener noreferrer"&gt;Architecture of Kubernetes(Slave Node)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>emptystring</category>
    </item>
    <item>
      <title>Architecture of Kubernetes(Master Node)</title>
      <dc:creator>Madhavam Saxena</dc:creator>
      <pubDate>Sun, 01 Jan 2023 08:57:24 +0000</pubDate>
      <link>https://dev.to/msaxena14/architecture-of-kubernetesmaster-node-39go</link>
      <guid>https://dev.to/msaxena14/architecture-of-kubernetesmaster-node-39go</guid>
      <description>&lt;h2&gt;Components of Master Node&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;API Server:&lt;br&gt;
The API server is supposed to be the front face of the control plane: it communicates with the nodes and with the other control-plane components. Every node provides its requirements (i.e. a manifest file) to the API server, which then forwards them to the Controller Manager.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Controller Manager:&lt;br&gt;
The Controller Manager checks whether all the requirements requested by the slave node are being fulfilled, i.e. whether the actual state is equivalent to the desired state. It is responsible for maintaining the balance between the two states.&lt;br&gt;
If K8s is running in the cloud, the Cloud Controller Manager plays this role; if K8s is on premises, the Kube Controller Manager is found in action.&lt;br&gt;
Here comes the role of the etcd cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;etcd Cluster:&lt;br&gt;
etcd is responsible for storing the metadata and status of the cluster in key-value format.&lt;br&gt;
It is consistent and highly available.&lt;br&gt;
It is the source of truth for the cluster, i.e. you can find all the information about the cluster in etcd.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scheduler:&lt;br&gt;
The Kube Scheduler responds to the requests generated by the user for the creation and management of Pods.&lt;br&gt;
If the user has not mentioned in the manifest the node on which the Pod should be created, the Kube Scheduler finds the best slave node for the Pod and creates it there.&lt;br&gt;
It gets hardware-configuration information from the configuration files and schedules Pods onto the slave nodes accordingly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
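
&lt;p&gt;When the manifest does name a node, the Kube Scheduler is bypassed for that Pod; a minimal sketch (the node name is a placeholder):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned
spec:
  nodeName: worker-1                   # placeholder; with this field set, the scheduler is skipped
  containers:
    - name: app
      image: nginx
```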

&lt;p&gt;Architecture Diagram of Kubernetes&lt;br&gt;
(&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/frqk1mrp4d81dyn41v2h.png" rel="noopener noreferrer"&gt;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/frqk1mrp4d81dyn41v2h.png&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>themes</category>
    </item>
  </channel>
</rss>
