<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jensen Jose</title>
    <description>The latest articles on DEV Community by Jensen Jose (@jensen1806).</description>
    <link>https://dev.to/jensen1806</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1455657%2Fd0950151-4eec-4460-af59-d8e2629503b8.png</url>
      <title>DEV Community: Jensen Jose</title>
      <link>https://dev.to/jensen1806</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jensen1806"/>
    <language>en</language>
    <item>
      <title>Understanding Storage in Kubernetes: A Deep Dive into Persistent Volumes and Persistent Volume Claims</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Wed, 07 Aug 2024 08:38:31 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-storage-in-kubernetes-a-deep-dive-into-persistent-volumes-and-persistent-volume-claims-e6k</link>
      <guid>https://dev.to/jensen1806/understanding-storage-in-kubernetes-a-deep-dive-into-persistent-volumes-and-persistent-volume-claims-e6k</guid>
      <description>&lt;p&gt;Welcome to the 29th installment of our CK2024 blog series! In this article, we'll be exploring the crucial topic of storage within Kubernetes, focusing on Persistent Volumes (PV), Persistent Volume Claims (PVC), and Storage Classes. If you’re familiar with Docker storage concepts, you’ll find this discussion particularly relevant as we bridge the gap between Docker and Kubernetes storage mechanisms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to Storage in Kubernetes
&lt;/h3&gt;

&lt;p&gt;Storage in Kubernetes is fundamental to ensuring that your applications can manage and persist data effectively. Unlike Docker, where storage is tied to individual containers, Kubernetes abstracts storage through the concepts of Persistent Volumes (PV) and Persistent Volume Claims (PVC). Understanding these components is key to managing stateful applications within Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Persistent Volumes (PV) and Persistent Volume Claims (PVC)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume (PV)&lt;/strong&gt;: A PV is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It represents a storage resource that can be used by Pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume Claim (PVC)&lt;/strong&gt;: A PVC is a request for storage by a user. It specifies the amount of storage, access modes (e.g., read-write or read-only), and other requirements. The Kubernetes control plane binds each PVC to a matching PV based on these requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How PV and PVC Work Together
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Provisioning a PV&lt;/strong&gt;:&lt;br&gt;
An administrator creates a PV with a defined storage capacity, access modes, and other attributes. For example, a PV could have 100 GiB of storage with read-write permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Creating a PVC&lt;/strong&gt;:&lt;br&gt;
A user creates a PVC specifying their storage needs, such as 10 GiB of storage. Kubernetes matches this PVC with a suitable PV based on the requested capacity and access mode.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Binding PVC to PV&lt;/strong&gt;:&lt;br&gt;
Once a match is found, the PVC is bound to the PV. The binding is exclusive and one-to-one: the entire PV is reserved for that claim, even if the PVC requested less than the PV offers. For instance, a PVC requesting 10 GiB can bind to a 100 GiB PV, but the remaining 90 GiB is not then available to other claims.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Usage by Pods&lt;/strong&gt;:&lt;br&gt;
The PVC is then used by Pods to mount the storage, allowing applications to read from and write to the mounted volume.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Practical Example: Using PV and PVC in a Pod
&lt;/h3&gt;

&lt;p&gt;Let’s walk through a practical example of how to use PV and PVC in a Kubernetes Pod:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define the Persistent Volume (PV):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create the Persistent Volume Claim (PVC):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Use the PVC in a Pod:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: my-storage
  volumes:
  - name: my-storage
    persistentVolumeClaim:
      claimName: my-pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the my-pod Pod mounts storage from the my-pvc PVC, which is bound to the my-pv PV. This allows the Pod’s container to use the storage defined by the PVC.&lt;/p&gt;
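&lt;p&gt;Assuming the three manifests above are saved as pv.yaml, pvc.yaml, and pod.yaml (the filenames are illustrative), you can apply them and confirm the binding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl get pv,pvc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both my-pv and my-pvc should report a STATUS of Bound once the claim has matched the volume.&lt;/p&gt;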

&lt;h3&gt;
  
  
  Storage Classes
&lt;/h3&gt;

&lt;p&gt;Storage Classes provide a way to define different types of storage with varying performance and cost characteristics. They abstract the provisioning of storage and enable dynamic volume provisioning.&lt;/p&gt;

&lt;p&gt;A Storage Class defines the storage types and parameters used for provisioning PVs. It allows users to request specific types of storage based on their application requirements.&lt;/p&gt;

&lt;p&gt;Example of a Storage Class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the fast-storage Storage Class provisions AWS EBS (Elastic Block Store) volumes of the gp2 (general-purpose SSD) type. Note that kubernetes.io/aws-ebs is the legacy in-tree provisioner; newer clusters typically use the EBS CSI driver (ebs.csi.aws.com) instead.&lt;/p&gt;
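&lt;p&gt;To use such a class, a PVC simply references it by name, and Kubernetes provisions a matching EBS-backed PV on demand. A minimal sketch (the PVC name here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-pvc
spec:
  storageClassName: fast-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;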

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Understanding and properly implementing PVs, PVCs, and Storage Classes are essential for managing stateful applications in Kubernetes. These components help ensure that your applications can handle persistent data across Pod restarts and scaling events.&lt;/p&gt;

&lt;p&gt;Stay tuned for the next article in our CK2024 series, where we’ll dive into more advanced Kubernetes topics.&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/2NzYX8_lX_0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>aws</category>
      <category>docker</category>
    </item>
    <item>
      <title>CK 2024 Blog Series: Understanding Docker Storage</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Thu, 01 Aug 2024 10:53:42 +0000</pubDate>
      <link>https://dev.to/jensen1806/ck-2024-blog-series-understanding-docker-storage-48gl</link>
      <guid>https://dev.to/jensen1806/ck-2024-blog-series-understanding-docker-storage-48gl</guid>
<description>&lt;p&gt;Welcome back to the CK 2024 blog series! Today we're diving into Docker storage. This topic is essential for understanding how storage works in Docker and how to make it persistent, which is crucial groundwork before we move on to Kubernetes storage in the next post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to Docker Storage
&lt;/h3&gt;

&lt;p&gt;Docker storage is fundamental for managing data within containers. In this post, we will cover the basics of Docker storage, how to use it, and how to make storage persistent. Understanding these concepts will set the stage for our upcoming discussion on Kubernetes storage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloning a Repository and Creating a Docker File
&lt;/h3&gt;

&lt;p&gt;To start, let's clone a GitHub repository. You can use any project, but for this example, we'll use a simple to-do app that we used in our Day 2 video. Here are the steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repository:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/your-repo/todo-app.git
cd todo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a Dockerfile with the following instructions:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:18-alpine
WORKDIR /app
COPY . .
RUN yarn install
EXPOSE 3000
CMD ["yarn", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Build the Docker image:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t todo-app .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Understanding Docker Image Layers
&lt;/h3&gt;

&lt;p&gt;When you build a Docker image, it is composed of multiple layers. Each instruction in the Dockerfile creates a new layer. These layers are read-only and form the base of your container. Changes to the container are made in a writable layer on top of these read-only layers.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;node:18-alpine creates a base layer of about 6.48 MB.&lt;/li&gt;
&lt;li&gt;Additional layers are created for the WORKDIR, COPY, and RUN instructions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you change the Dockerfile and rebuild the image, Docker rebuilds only the layers that changed, reusing its cache for the unchanged layers. This efficiency is a key benefit of Docker's layered architecture.&lt;/p&gt;
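&lt;p&gt;You can inspect the layers of the image built above, along with the size each instruction contributed, using docker history:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker history todo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;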

&lt;h3&gt;
  
  
  Making Data Persistent with Volumes
&lt;/h3&gt;

&lt;p&gt;By default, data within a Docker container is ephemeral. Once the container stops or is removed, any data written to it is lost. To make data persistent, we use Docker volumes. Volumes store data outside the container's writable layer, allowing it to persist across container restarts and removals.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a volume:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume create data-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run a container with the volume:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 3000:3000 --name todo-app -v data-volume:/app todo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Verify the volume:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
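&lt;p&gt;To see where the volume's data actually lives on the host, inspect it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker volume inspect data-volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Mountpoint field in the output shows the host directory backing the volume.&lt;/p&gt;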



&lt;h3&gt;
  
  
  Storage Drivers
&lt;/h3&gt;

&lt;p&gt;Docker uses storage drivers to manage how image layers and container data are stored. On modern Linux systems the default and recommended driver is overlay2; older drivers such as aufs and devicemapper are deprecated. These drivers manage the read-only image layers and the writable container layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Persistent Data with Bind Mounts
&lt;/h3&gt;

&lt;p&gt;Another way to persist data is by using bind mounts, which map a directory on the host machine to a directory in the container.&lt;/p&gt;

&lt;p&gt;Run a container with a bind mount:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 3000:3000 --name todo-app -v /path/on/host:/app todo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This binds the host directory /path/on/host to the container directory /app, ensuring data is stored on the host machine. (If the todo-app container from the volume example is still running, remove it first or pick a different --name to avoid a name conflict.)&lt;/p&gt;
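&lt;p&gt;The same bind mount can also be written with the more explicit --mount flag, which fails with an error if the host path does not exist instead of silently creating it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 3000:3000 --name todo-app --mount type=bind,source=/path/on/host,target=/app todo-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;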

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Understanding Docker storage and making data persistent are critical skills for managing containerized applications. By using Docker volumes and bind mounts, you can ensure your data is safe and available even if containers are stopped or removed.&lt;/p&gt;

&lt;p&gt;In the next post, we'll dive into Kubernetes storage, including persistent volumes and persistent volume claims. Stay tuned and happy coding!&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZAPX21TMkkQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>docker</category>
    </item>
    <item>
      <title>CK 2024 Blog Series: Understanding and Implementing Network Policies</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Tue, 30 Jul 2024 14:19:07 +0000</pubDate>
      <link>https://dev.to/jensen1806/ck-2024-blog-series-understanding-and-implementing-network-policies-22j2</link>
      <guid>https://dev.to/jensen1806/ck-2024-blog-series-understanding-and-implementing-network-policies-22j2</guid>
<description>&lt;p&gt;Hello everyone, and welcome back to my CK 2024 blog series. Today we are diving into an essential topic in Kubernetes: Network Policies. This is the 26th entry in the series, and I truly appreciate your support so far. In this post, we'll explore what Network Policies are, how to implement them, and why they are critical for securing your Kubernetes clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Network Policies?
&lt;/h3&gt;

&lt;p&gt;Before we delve into the specifics of Network Policies, let's understand the network flow and why we need such policies. Imagine a three-tier web application with a web tier, application tier, and database tier. The web tier handles user requests over ports 80 (HTTP) and 443 (HTTPS). The application tier processes business logic, and the database tier runs a database server, such as MySQL, on port 3306.&lt;/p&gt;

&lt;p&gt;In Kubernetes, by default, all pods can communicate with each other. This open communication can lead to security vulnerabilities. For instance, the front-end application should not directly access the database tier; only the back-end should have this privilege. To enforce these rules and restrict unnecessary access, we implement Network Policies.&lt;/p&gt;
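&lt;p&gt;A common first step, following the standard Kubernetes pattern, is a default-deny policy: the empty podSelector below selects every pod in the namespace, and because the policy lists Ingress but defines no ingress rules, all incoming traffic to those pods is blocked until more specific policies allow it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;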

&lt;h3&gt;
  
  
  Network Policies in Kubernetes
&lt;/h3&gt;

&lt;p&gt;Network Policies in Kubernetes are rules that define how pods communicate with each other and other network endpoints. They provide a way to control traffic flow at the IP address or port level. These policies are crucial for securing your application by ensuring that only authorized pods can communicate with each other.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing Network Policies
&lt;/h3&gt;

&lt;p&gt;Let's consider a scenario where we have a Kubernetes cluster with three deployments: front-end, back-end, and database, each exposed through services. Here’s a step-by-step guide to implementing Network Policies:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set Up Your Cluster&lt;/strong&gt;
First, ensure your cluster runs a CNI (Container Network Interface) plugin that supports Network Policies. For this example we'll use Weave; the kind configuration below disables the default CNI so that Weave can be installed in its place.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the cluster with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind create cluster --config kind-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Weave CNI (if this URL is no longer available, the Weave Net manifest is also published on the project's GitHub releases page):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f "https://cloud.weave.works/k8s/v1.8/net.yaml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deploy Applications&lt;/strong&gt;
Deploy the front-end, back-end, and database applications along with their services.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
    - port: 80
  selector:
    app: frontend
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  ports:
    - port: 80
  selector:
    app: backend
---
apiVersion: v1
kind: Pod
metadata:
  name: database
  labels:
    app: database
spec:
  containers:
    - name: database
      image: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
---
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  ports:
    - port: 3306
  selector:
    app: database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply these manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f manifests.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create Network Policies&lt;/strong&gt;
Now, let's create a Network Policy to restrict access so that only the back-end can communicate with the database.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-to-database
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the Network Policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f network-policy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verifying Network Policies
&lt;/h3&gt;

&lt;p&gt;To verify the Network Policies, you can exec into the front-end pod and attempt to connect to the database. This connection should be denied:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it frontend -- /bin/bash
curl database:3306
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, the back-end should be able to connect to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it backend -- /bin/bash
curl database:3306
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this blog post, we discussed the importance of network policies in Kubernetes and how they help secure your applications. We also provided a step-by-step guide to implementing them. Network Policies are vital for maintaining a secure and controlled communication flow within your Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;Stay tuned for our next topic on storage. Until next time, happy coding!&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/eVtnevr3Rao?start=12"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>networking</category>
    </item>
    <item>
      <title>Understanding Deployments and Replica Sets in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Tue, 23 Jul 2024 14:56:15 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-deployments-and-replica-sets-in-kubernetes-54gl</link>
      <guid>https://dev.to/jensen1806/understanding-deployments-and-replica-sets-in-kubernetes-54gl</guid>
      <description>&lt;p&gt;Welcome to the next instalment of our CK 2024 blog series, where we dive deep into Kubernetes concepts and practices. In this post, we'll be focusing on Deployments and Replica Sets, which are fundamental to managing applications in a Kubernetes cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Deployments?
&lt;/h3&gt;

&lt;p&gt;A Deployment in Kubernetes is a higher-level concept that manages Replica Sets and provides declarative updates to Pods and Replica Sets. It's a powerful mechanism that ensures your application is always up and running, even in the face of failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of Deployments:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Declarative Updates&lt;/strong&gt;: Define the desired state of your application, and Kubernetes will ensure it matches this state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback Support&lt;/strong&gt;: Easily revert to a previous state in case of a faulty update.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling&lt;/strong&gt;: Automatically adjust the number of replicas to handle varying loads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing&lt;/strong&gt;: Automatically replace failed or unresponsive Pods.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating a Deployment
&lt;/h3&gt;

&lt;p&gt;To create a Deployment, you need to define a Deployment manifest file in YAML format. Here’s an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this manifest:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;apiVersion&lt;/strong&gt; and &lt;strong&gt;kind&lt;/strong&gt; specify the type of Kubernetes object.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;metadata&lt;/strong&gt; provides metadata about the Deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;spec&lt;/strong&gt; defines the desired state, including the number of replicas, selector criteria, and the Pod template.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Understanding Replica Sets
&lt;/h3&gt;

&lt;p&gt;Replica Sets are responsible for maintaining a stable set of replica Pods running at any given time. They ensure that the specified number of replicas is always running.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of Replica Sets:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Self-healing&lt;/strong&gt;: Automatically replaces failed Pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selector-based Pod Matching&lt;/strong&gt;: Uses selectors to identify the Pods it should manage.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Creating a Replica Set
&lt;/h3&gt;

&lt;p&gt;Here's an example of a Replica Set manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployments vs. Replica Sets
&lt;/h3&gt;

&lt;p&gt;While Replica Sets manage the number of Pod replicas, Deployments provide additional functionality, such as rolling updates and rollbacks. Deployments use Replica Sets under the hood to manage Pods.&lt;/p&gt;
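&lt;p&gt;You can observe this relationship on a live cluster: creating the Deployment above also creates a ReplicaSet whose name is the Deployment name plus a template hash (the manifest filename here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f nginx-deployment.yaml
kubectl get deployments
kubectl get rs
kubectl get pods --show-labels
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;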

&lt;h3&gt;
  
  
  Rolling Updates with Deployments
&lt;/h3&gt;

&lt;p&gt;One of the significant advantages of using Deployments is the ability to perform rolling updates. This ensures that updates are applied gradually, with minimal impact on the application’s availability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the strategy section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;type&lt;/strong&gt;: RollingUpdate specifies the update strategy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;maxUnavailable&lt;/strong&gt; indicates the maximum number of Pods that can be unavailable during the update.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;maxSurge&lt;/strong&gt; specifies the maximum number of Pods that can be created above the desired number of replicas during the update.&lt;/li&gt;
&lt;/ul&gt;
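&lt;p&gt;Rolling updates and rollbacks are driven with the kubectl rollout family of commands. For example, to update the image and then watch, review, or undo the rollout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
kubectl rollout status deployment/nginx-deployment
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;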

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Deployments and Replica Sets are essential components in Kubernetes that help you manage and scale your applications effectively. By understanding and utilizing these resources, you can ensure high availability, reliability, and seamless updates for your applications.&lt;/p&gt;

&lt;p&gt;Stay tuned for the next post in our CK 2024 blog series, where we will explore Services in Kubernetes and how they enable communication between different parts of your application.&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/P0bogYEyfeI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Understanding SSL/TLS in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Mon, 22 Jul 2024 15:06:24 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-ssltls-in-kubernetes-1o1n</link>
      <guid>https://dev.to/jensen1806/understanding-ssltls-in-kubernetes-1o1n</guid>
      <description>&lt;p&gt;Welcome back to the CK2024 series! In this 21st instalment, we delve into the crucial topic of SSL/TLS within Kubernetes. Building on our previous discussion about SSL/TLS basics, this blog will explore how these security protocols are implemented in Kubernetes environments, focusing on certificate creation, signing requests, and overall security mechanisms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recap of SSL/TLS Basics
&lt;/h3&gt;

&lt;p&gt;Before diving into Kubernetes specifics, let's briefly revisit the fundamental concepts of SSL/TLS. SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are cryptographic protocols designed to secure communications over a network. They use a combination of symmetric and asymmetric encryption to ensure the confidentiality and integrity of data.&lt;/p&gt;

&lt;p&gt;In a typical SSL/TLS setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client Certificates&lt;/strong&gt;: Issued to clients so they can authenticate themselves to the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server Certificates&lt;/strong&gt;: Issued to servers to encrypt communication and authenticate themselves to clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Certificate Authority (CA)&lt;/strong&gt;: The entity that issues and signs certificates. It validates the identity of the certificate requester before issuing a certificate.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  SSL/TLS in Kubernetes
&lt;/h3&gt;

&lt;p&gt;Kubernetes, as a container orchestration platform, also relies on SSL/TLS for securing communications between its various components. Here’s a breakdown of how SSL/TLS operates within a Kubernetes cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Components Involved&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Master Node: Manages the Kubernetes cluster and contains components like the API server, controller manager, and scheduler.&lt;/li&gt;
&lt;li&gt;Worker Nodes: Host the containerized applications.&lt;/li&gt;
&lt;li&gt;Clients: Users or tools like kubectl that interact with the Kubernetes API server.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Certificate Types&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Client Certificates: Used by users or clients to authenticate with the Kubernetes API server.&lt;/li&gt;
&lt;li&gt;Server Certificates: Used by the API server and other components to secure communication.&lt;/li&gt;
&lt;li&gt;Root Certificates: Issued by the CA and used to verify the authenticity of certificates issued to clients and servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Certificate Workflow&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Client to API Server: When a client (like kubectl) communicates with the API server, both the client and the server need certificates to establish a secure connection.&lt;/li&gt;
&lt;li&gt;Master Node to Worker Node: Communication between the master node and worker nodes also needs to be encrypted, requiring certificates for both ends.&lt;/li&gt;
&lt;li&gt;Component-to-Component Communication: Internal communications, such as between the API server and etcd (the key-value store), or between various controllers and schedulers, must also be secured with appropriate certificates.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating and Using Certificates in Kubernetes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Generating Certificates&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Use tools like OpenSSL to generate private keys and certificate signing requests (CSRs).&lt;/li&gt;
&lt;li&gt;Example command to generate a private key:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out adam.key 2048
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Example command to create a CSR:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl req -new -key adam.key -out adam.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Creating a Certificate Signing Request (CSR) in Kubernetes&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Define a CSR in YAML format to submit to the Kubernetes API server. In the certificates.k8s.io/v1 API, spec.signerName is required; for a user certificate, the kubernetes.io/kube-apiserver-client signer with the client auth usage is appropriate.&lt;/li&gt;
&lt;li&gt;Example YAML for a CSR:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: adam
spec:
  request: &amp;lt;base64-encoded-csr&amp;gt;
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - digital signature
    - key encipherment
    - client auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
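&lt;p&gt;The request field must contain the CSR file base64-encoded on a single line; one way to produce that value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat adam.csr | base64 | tr -d "\n"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;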



&lt;ul&gt;
&lt;li&gt;Apply the CSR using kubectl:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f csr.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Approving the CSR:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;As an administrator, approve the CSR using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl certificate approve adam
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Distributing Certificates&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once approved, you can retrieve the issued certificate and share it with the user. Decode the certificate if needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get csr adam -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
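&lt;p&gt;The issued certificate appears base64-encoded in the CSR's status.certificate field; one way to extract and decode it into a usable .crt file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get csr adam -o jsonpath='{.status.certificate}' | base64 -d &amp;gt; adam.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;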



&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In this blog, we’ve covered the essentials of SSL/TLS in Kubernetes, including how to generate and manage certificates for securing communications between various components of a Kubernetes cluster. Understanding these concepts is crucial for maintaining the security of your Kubernetes environments.&lt;/p&gt;

&lt;p&gt;Thank you for following along with Day 21 of CK2024. Stay tuned for more in-depth coverage of Kubernetes concepts and practices. Happy learning, and see you in the next post!&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/LvPA-z8Xg4s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>networking</category>
      <category>security</category>
    </item>
    <item>
      <title>Understanding SSL/TLS - How It Works End-to-End</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Thu, 18 Jul 2024 14:32:56 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-ssltls-how-it-works-end-to-end-2hjl</link>
      <guid>https://dev.to/jensen1806/understanding-ssltls-how-it-works-end-to-end-2hjl</guid>
<description>&lt;p&gt;Hello everyone, welcome back to the CK 2024 blog series! This is the 20th entry in our series. Before diving into our next topic on certificates in Kubernetes, I wanted to ensure we have a solid understanding of how SSL/TLS works. If you're already familiar with this topic, feel free to skip ahead to the next blog in the series; if not, let's get started!&lt;/p&gt;

&lt;h3&gt;
  
  
  What is SSL/TLS?
&lt;/h3&gt;

&lt;p&gt;SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are protocols that provide a secure communication channel between a client (user) and a server over the internet. They are essential for protecting data transmitted over the web, ensuring that sensitive information such as usernames, passwords, and credit card details is encrypted and secure from eavesdropping and tampering.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Basics: HTTP vs. HTTPS
&lt;/h4&gt;

&lt;p&gt;When a user sends a request to a server (for example, accessing a website), this communication can happen over HTTP (HyperText Transfer Protocol) or HTTPS (HTTP Secure). HTTP is not secure, meaning data sent over it can be intercepted and read by anyone who has access to the data flow. HTTPS, on the other hand, encrypts the data using SSL/TLS, making it secure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding SSL/TLS with an Example
&lt;/h3&gt;

&lt;p&gt;Let's break down how SSL/TLS works with a simple example. Imagine a user trying to access a web server. Here are the steps involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Client Requests Access&lt;/strong&gt;: The user sends an HTTP request to the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server Requests Authentication&lt;/strong&gt;: The server asks for the user's authentication details (username and password).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Sends Credentials&lt;/strong&gt;: The user sends their credentials to the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server Authenticates and Responds&lt;/strong&gt;: The server authenticates the user and sends back the requested data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;However, this process over HTTP is vulnerable to attacks. A hacker can intercept the data (credentials) and misuse it. This is where SSL/TLS comes into play.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introducing Encryption
&lt;/h3&gt;

&lt;p&gt;To secure this communication, we use encryption. There are two main types of encryption:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Symmetric Encryption&lt;/strong&gt;: The same key is used for both encryption and decryption. While simple, it has a significant vulnerability: if a hacker intercepts the key, they can decrypt all data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asymmetric Encryption&lt;/strong&gt;: This uses a pair of keys - a public key for encryption and a private key for decryption. This method enhances security as the private key is never shared.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  How SSL/TLS Uses Asymmetric Encryption
&lt;/h4&gt;

&lt;p&gt;Here’s how SSL/TLS uses asymmetric encryption to secure communication:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Server Generates Keys&lt;/strong&gt;: The server generates a public and private key pair using tools like OpenSSL.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Requests Access&lt;/strong&gt;: The user sends an HTTP request to the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server Sends Public Key&lt;/strong&gt;: The server responds with its public key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Encrypts Data&lt;/strong&gt;: The client encrypts their data (e.g., a symmetric key for further communication) using the server’s public key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Sent to Server&lt;/strong&gt;: The encrypted data is sent to the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Server Decrypts Data&lt;/strong&gt;: The server uses its private key to decrypt the data.&lt;/li&gt;
&lt;/ul&gt;
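&lt;p&gt;The key-pair mechanics in the flow above can be tried locally with OpenSSL. The following minimal sketch (file names are illustrative) generates a private key, derives the public key that would be shared with clients, and round-trips a small secret through the pair:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Server side: generate the key pair
openssl genrsa -out server.key 2048
openssl rsa -in server.key -pubout -out server.pub

# Client side: encrypt a secret with the server's public key
echo "session-secret" &amp;gt; secret.txt
openssl pkeyutl -encrypt -pubin -inkey server.pub -in secret.txt -out msg.enc

# Server side: only the private key can decrypt it
openssl pkeyutl -decrypt -inkey server.key -in msg.enc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;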

&lt;h3&gt;
  
  
  Certificates and Certificate Authorities (CA)
&lt;/h3&gt;

&lt;p&gt;To further enhance security, SSL/TLS uses certificates issued by Certificate Authorities (CA). These certificates validate that the public key truly belongs to the server and not an imposter. Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Server Creates a CSR&lt;/strong&gt;: The server generates a Certificate Signing Request (CSR).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CA Validates CSR&lt;/strong&gt;: The CA validates the CSR, ensuring the server's identity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CA Issues Certificate&lt;/strong&gt;: The CA issues a certificate containing the server’s public key and other identity details.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Validates Certificate&lt;/strong&gt;: When the client receives the server’s certificate, it validates the certificate against the CA’s public certificates stored in the client’s browser.&lt;/li&gt;
&lt;/ol&gt;
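&lt;p&gt;You can observe this certificate exchange against any HTTPS site. The following OpenSSL command (using example.com as a stand-in) performs the TLS handshake and prints the certificate chain the server presents, which your browser would validate against its trusted CA certificates:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl s_client -connect example.com:443 -servername example.com -showcerts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;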

&lt;h3&gt;
  
  
  The Importance of SSL/TLS
&lt;/h3&gt;

&lt;p&gt;Using SSL/TLS ensures that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data integrity is maintained.&lt;/li&gt;
&lt;li&gt;Communication is secure and encrypted.&lt;/li&gt;
&lt;li&gt;Users can trust that they are communicating with the intended server.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Understanding SSL/TLS is crucial for ensuring secure communication over the internet. In our next post, we'll dive deeper into how certificates are used specifically in Kubernetes, how to create a certificate signing request, and more.&lt;/p&gt;

&lt;p&gt;Stay tuned, and if you have any questions or need further clarification, feel free to reach out. See you in the next post!&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/njT5ECuwCTo"&gt;
&lt;/iframe&gt;
&lt;br&gt;
Happy Learning!!&lt;/p&gt;

</description>
      <category>network</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>ConfigMaps and Secrets in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Wed, 17 Jul 2024 15:08:50 +0000</pubDate>
      <link>https://dev.to/jensen1806/configmaps-and-secrets-in-kubernetes-45ma</link>
      <guid>https://dev.to/jensen1806/configmaps-and-secrets-in-kubernetes-45ma</guid>
      <description>&lt;p&gt;Hello everyone, welcome back to my blog series CK 2024! Today we’ll be diving into the concepts of ConfigMaps and Secrets in Kubernetes. Although we touched on these topics briefly in earlier posts, I realized we haven’t given them the full attention they deserve. So, let's rectify that today with an in-depth look and a hands-on demo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding ConfigMaps and Secrets
&lt;/h3&gt;

&lt;p&gt;In Kubernetes, ConfigMaps and Secrets allow you to decouple configuration artifacts from image content to keep your containerized applications portable. ConfigMaps are used to store non-confidential data in key-value pairs, while Secrets are intended for confidential data such as passwords, OAuth tokens, and SSH keys.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use ConfigMaps?
&lt;/h3&gt;

&lt;p&gt;In one of our earlier discussions (Day 11), we saw how environment variables could be directly defined in the Pod's YAML file. However, as the number of environment variables grows, maintaining them directly in the Pod definition becomes impractical, especially if these variables are shared across multiple Pods. ConfigMaps help by centralizing this configuration data, making it easier to manage and reuse.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating a ConfigMap
&lt;/h3&gt;

&lt;p&gt;Let's walk through creating a ConfigMap. We’ll start with an example where we need to define an environment variable in a Pod. Here’s a basic Pod definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: busybox
      command: ['sh', '-c', 'echo The app is running! &amp;amp;&amp;amp; sleep 3600']
      env:
        - name: MY_VAR
          value: "my_value"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of defining MY_VAR directly in the Pod, we’ll use a ConfigMap.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Imperative Approach:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create configmap myconfigmap --from-literal=MY_VAR=my_value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Declarative Approach:
&lt;/h4&gt;

&lt;p&gt;Create a configmap.yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap
data:
  MY_VAR: my_value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
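&lt;p&gt;Whichever approach you use, you can verify that the ConfigMap and its key-value data were created:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get configmap myconfigmap -o yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;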



&lt;h3&gt;
  
  
  Injecting ConfigMap into a Pod
&lt;/h3&gt;

&lt;p&gt;Now, let's modify our Pod to use the ConfigMap:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: busybox
      command: ['sh', '-c', 'echo The app is running! &amp;amp;&amp;amp; sleep 3600']
      envFrom:
        - configMapRef:
            name: myconfigmap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, envFrom is used to import all key-value pairs from the ConfigMap into the Pod’s environment.&lt;/p&gt;
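&lt;p&gt;Secrets follow the same pattern. As a quick sketch (the secret name and key here are illustrative), you can create a Secret imperatively and inject it with secretRef alongside, or instead of, configMapRef:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic mysecret --from-literal=DB_PASSWORD=s3cr3t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      envFrom:
        - configMapRef:
            name: myconfigmap
        - secretRef:
            name: mysecret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Keep in mind that Secret values are only base64-encoded at rest in etcd, not encrypted, so production clusters should consider enabling encryption at rest and restricting access via RBAC.&lt;/p&gt;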

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;ConfigMaps and Secrets are essential tools in Kubernetes for managing application configuration and sensitive data. They help maintain clean and efficient Pod definitions and enhance security practices.&lt;/p&gt;

&lt;p&gt;Feel free to reach out in the comments section if you have any questions or need further assistance.&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Q9fHJLSyd7Q"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Understanding Health Probes in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Tue, 16 Jul 2024 15:52:35 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-health-probes-in-kubernetes-1b13</link>
      <guid>https://dev.to/jensen1806/understanding-health-probes-in-kubernetes-1b13</guid>
<description>&lt;p&gt;Hello everyone, welcome back to the CK2024 blog series! This is blog number 18, and we’ll dive into health probes in Kubernetes: liveness probes, readiness probes, and startup probes. We’ll explore these concepts in detail with hands-on practice.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are Health Probes?
&lt;/h3&gt;

&lt;p&gt;Before we get into the demo, let’s understand what health probes are in Kubernetes. Health probes are mechanisms used to monitor and manage the health of your applications running in Kubernetes. There are three main types of probes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Liveness Probes&lt;/strong&gt;: Ensure your application is running. If the liveness probe fails, Kubernetes will restart the container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Readiness Probes&lt;/strong&gt;: Ensure your application is ready to serve traffic. If the readiness probe fails, Kubernetes will stop sending traffic to the container.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Startup Probes&lt;/strong&gt;: Used for slow-starting applications. They ensure the application has started successfully before liveness or readiness probes begin running.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These probes help maintain the health and availability of your applications by automatically recovering from failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Liveness Probes
&lt;/h3&gt;

&lt;p&gt;Liveness probes monitor your application and restart the container if it fails. This is useful when your application crashes due to intermittent issues that can be resolved with a restart.&lt;/p&gt;

&lt;p&gt;Here's an example of a liveness probe using a command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the liveness probe runs a command to check if a file exists. If the file doesn’t exist, the probe fails, and Kubernetes restarts the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Readiness Probes
&lt;/h3&gt;

&lt;p&gt;Readiness probes ensure your application is ready to serve traffic. If the readiness probe fails, the container is removed from the service’s endpoints, stopping it from receiving traffic until it is ready again.&lt;/p&gt;

&lt;p&gt;Here's an example of a readiness probe using an HTTP GET request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: readiness-http
spec:
  containers:
  - name: readiness
    image: my-app
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the readiness probe sends an HTTP GET request to /healthz. If the response is not successful, the probe fails, and the container stops receiving traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Startup Probes
&lt;/h3&gt;

&lt;p&gt;Startup probes are used for applications that take a long time to start. This probe ensures the application starts successfully before the liveness and readiness probes are activated.&lt;/p&gt;

&lt;p&gt;Here’s an example of a startup probe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: startup-probe
spec:
  containers:
  - name: startup
    image: my-app
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the startup probe sends an HTTP GET request to /healthz. It will wait for the initial delay before performing the first check and continue at specified intervals. If the probe fails, the container will be restarted.&lt;/p&gt;
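&lt;p&gt;In practice, the key field for a startup probe is failureThreshold: Kubernetes allows up to failureThreshold x periodSeconds for the application to start before restarting the container, and liveness/readiness probes only begin once the startup probe succeeds. A sketch for an app that may need up to five minutes to start:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 10   # 30 x 10s = up to 300s allowed for startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;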

&lt;h3&gt;
  
  
  Hands-On Practice
&lt;/h3&gt;

&lt;p&gt;To reinforce your understanding, we’ll do a demo and configure these probes in a Kubernetes cluster. Follow along in the video to see the implementation in action.&lt;/p&gt;

&lt;h4&gt;
  
  
  Liveness Probe Demo
&lt;/h4&gt;

&lt;p&gt;We’ll create a pod with a liveness probe that checks if a file exists:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a YAML file for the pod configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Apply the configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f liveness-exec.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Observe the pod behavior:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll see that the pod restarts when the file is removed, indicating the liveness probe failure.&lt;/p&gt;
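&lt;p&gt;To see why the container was restarted, inspect the pod's events; once the probe starts failing, you should see "Liveness probe failed" entries followed by the container being killed and recreated:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pod liveness-exec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;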

&lt;h3&gt;
  
  
  Readiness Probe Demo
&lt;/h3&gt;

&lt;p&gt;We’ll create a pod with a readiness probe that checks an HTTP endpoint:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a YAML file for the pod configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: readiness-http
spec:
  containers:
  - name: readiness
    image: my-app
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Apply the configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f readiness-http.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Observe the pod behavior:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod will only start receiving traffic once the readiness probe passes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Startup Probe Demo
&lt;/h3&gt;

&lt;p&gt;We’ll create a pod with a startup probe:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a YAML file for the pod configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: startup-probe
spec:
  containers:
  - name: startup
    image: my-app
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Apply the configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f startup-probe.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Observe the pod behavior:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The startup probe ensures the application is fully started before the liveness and readiness probes are activated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this blog, we’ve explored health probes in Kubernetes, including liveness, readiness, and startup probes. These probes help maintain the health and availability of your applications by automatically recovering from failures.&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/x2e6pIBLKzw"&gt;
&lt;/iframe&gt;
&lt;br&gt;
Happy learning!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Understanding Auto Scaling in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Mon, 15 Jul 2024 14:29:49 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-auto-scaling-in-kubernetes-5gl1</link>
      <guid>https://dev.to/jensen1806/understanding-auto-scaling-in-kubernetes-5gl1</guid>
<description>&lt;p&gt;Welcome back to the CK2024 blog series! I'm excited to dive into the concept of auto-scaling in Kubernetes, a crucial aspect of managing Kubernetes clusters efficiently and a valuable topic both for beginners and for those looking to deepen their understanding.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Scaling?
&lt;/h3&gt;

&lt;p&gt;Scaling refers to adjusting your servers or workloads to meet demand. This adjustment can be done manually or automatically. Scaling ensures that your applications can handle increased traffic or resource utilization without manual intervention.&lt;/p&gt;

&lt;p&gt;In Kubernetes, we often talk about scaling in terms of Deployments and ReplicaSets. Deployments allow us to manage multiple replicas of a single pod, ensuring that our applications can handle varying loads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manual vs. Automatic Scaling
&lt;/h3&gt;

&lt;p&gt;In a traditional setup, scaling might involve manually updating the number of replicas in a Deployment or ReplicaSet. This approach can be inefficient and impractical for large-scale applications running in production environments. Automatic scaling, on the other hand, adjusts the number of pods based on current demand and resource utilization, ensuring optimal performance and resource usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Types of Auto Scaling in Kubernetes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Horizontal Pod Autoscaling (HPA)&lt;/strong&gt;&lt;br&gt;
Horizontal Pod Autoscaling automatically adds or removes pod replicas based on CPU and memory utilization. For example, if the average CPU utilization exceeds a specified threshold, HPA will add more pods to handle the increased load.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vertical Pod Autoscaling (VPA)&lt;/strong&gt;&lt;br&gt;
Vertical Pod Autoscaling adjusts the resource requests and limits of a pod, effectively resizing it to meet the demand. This approach can result in pod restarts, so it's suitable for non-mission-critical applications that can tolerate downtime.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Practical Example: Horizontal Pod Autoscaling
&lt;/h3&gt;

&lt;p&gt;Let's walk through a practical example to illustrate how HPA works.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prerequisites&lt;/strong&gt;: Ensure that the metrics server is running in your cluster. The metrics server provides the necessary metrics for HPA to make scaling decisions.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n kube-system
# Ensure metrics-server is running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create Deployment and Service&lt;/strong&gt;: We'll create a deployment and expose it via a service.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      app: php-apache
  replicas: 1
  template:
    metadata:
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  selector:
    app: php-apache
  ports:
  - port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create HPA&lt;/strong&gt;: Now, we create an HPA object to scale our deployment based on CPU utilization.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
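&lt;p&gt;The same autoscaler can also be defined declaratively. A sketch of the equivalent autoscaling/v2 manifest:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;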



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Generate Load&lt;/strong&gt;: To see HPA in action, we'll generate load on the deployment.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run -i --tty load-generator --image=busybox /bin/sh
# Inside the pod, run the following command to generate load
while true; do wget -q -O- http://php-apache; done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Monitor HPA&lt;/strong&gt;: Monitor the HPA to see how it scales the deployment.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get hpa -w
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the load increases, HPA will add more replicas to handle the demand. Once the load decreases, HPA will scale down the replicas to the minimum specified.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Understanding and implementing auto-scaling is essential for managing Kubernetes clusters efficiently. Horizontal and vertical scaling ensures that your applications can handle varying loads while optimizing resource usage. While HPA is built into Kubernetes, VPA and other advanced scaling features may require additional setup or managed cloud services.&lt;/p&gt;

&lt;p&gt;In the next post, we'll explore liveness and readiness probes in Kubernetes, which are crucial for ensuring that your applications are running smoothly and are available to serve requests. Happy learning!&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/afUL5jGoLx0"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
    <item>
      <title>Understanding Resource Requests and Limits in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Thu, 11 Jul 2024 16:42:38 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-resource-requests-and-limits-in-kubernetes-3lhh</link>
      <guid>https://dev.to/jensen1806/understanding-resource-requests-and-limits-in-kubernetes-3lhh</guid>
      <description>&lt;p&gt;Welcome back to the CK 2024 blog series! In this post, we’ll delve into the critical concept of resource requests and limits in Kubernetes, a mechanism the scheduler uses to allocate pods to nodes efficiently. If you missed any of the previous posts in this series, I recommend checking those out to build a strong foundation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recap
&lt;/h3&gt;

&lt;p&gt;In the previous blog in this series, we explored how Kubernetes uses resource requests and limits to manage pod scheduling and ensure optimal resource utilization across nodes. Let’s break down this concept and see it in action through examples and exercises.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Resource Requests and Limits Matter
&lt;/h3&gt;

&lt;p&gt;In Kubernetes, each pod requires a certain amount of CPU and memory to run. Without proper resource allocation, a pod can monopolize node resources, leading to performance issues and potential crashes. This is where resource requests and limits come into play:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource Requests&lt;/strong&gt;: The minimum amount of CPU and memory guaranteed to the pod.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Limits&lt;/strong&gt;: The maximum amount of CPU and memory the pod can use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Node Specifications&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Node 1: 4 CPUs, 4 GB of memory.&lt;/li&gt;
&lt;li&gt;Node 2: 4 CPUs, 4 GB of memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pod Scheduling&lt;/strong&gt;:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Each pod requests 1 CPU and 1 GB of memory.&lt;/li&gt;
&lt;li&gt;The scheduler checks if the node has sufficient resources.&lt;/li&gt;
&lt;li&gt;If the node has enough resources, the pod is scheduled.&lt;/li&gt;
&lt;li&gt;Once the node is full, the scheduler moves to the next node.&lt;/li&gt;
&lt;li&gt;If no nodes have sufficient resources, the pod remains unscheduled.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Example Scenario
&lt;/h4&gt;

&lt;p&gt;Let’s consider a node with 4 CPUs and 4 GB of memory. We have a pod that requires 1 CPU and 1 GB of memory. Initially, the pod is allocated the requested resources. However, if the load increases, the pod might try to consume all available resources, leading to potential crashes. To prevent this, we set resource limits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Resource Requests and Limits
&lt;/h3&gt;

&lt;p&gt;Here’s a YAML configuration for a pod with resource requests and limits:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "200Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Requests: 100 Mi of memory.&lt;/li&gt;
&lt;li&gt;Limits: 200 Mi of memory.&lt;/li&gt;
&lt;li&gt;Command: the stress tool allocates 150 Mi of memory, which stays above the request but within the 200 Mi limit.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Demonstration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Apply the YAML:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f memory-demo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Check the Pod:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the pod is running within the specified limits.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stress Testing:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Increase the pod’s memory usage beyond the limit to observe behavior.&lt;/li&gt;
&lt;li&gt;If the container exceeds its limit, it is terminated with an Out of Memory (OOMKilled) error.&lt;/li&gt;
&lt;/ul&gt;
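&lt;p&gt;As an illustrative variation (not part of the original demo), you can trigger the OOM kill by asking the stress tool for more memory than the 200 Mi limit in memory-demo.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After re-applying the manifest, kubectl get pods should eventually show the container in an OOMKilled state.&lt;/p&gt;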

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Resource requests and limits are essential for maintaining the stability and performance of your Kubernetes cluster. By defining these boundaries, you ensure that pods do not consume more resources than allowed, preventing potential node failures and ensuring a smooth operation.&lt;/p&gt;

&lt;p&gt;Remember to practice these configurations and refer to the Kubernetes documentation for further details. In the next post, we’ll explore autoscaling and how it leverages these metrics for efficient resource management.&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Q-mk6EZVX_Q"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Understanding Node Affinity in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Wed, 10 Jul 2024 16:39:34 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-node-affinity-in-kubernetes-3l5j</link>
      <guid>https://dev.to/jensen1806/understanding-node-affinity-in-kubernetes-3l5j</guid>
      <description>&lt;p&gt;Welcome back to our Kubernetes series! In this instalment, we delve into an essential scheduling concept: Node Affinity. Node Affinity allows Kubernetes to schedule pods based on node labels, ensuring specific workloads run on designated nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recap: Taints and Tolerations
&lt;/h3&gt;

&lt;p&gt;Previously, we explored Taints and Tolerations, which allow nodes to repel or accept pods based on certain conditions like hardware constraints or other node attributes. However, Taints and Tolerations have limitations when it comes to specifying multiple conditions or ensuring pods are scheduled on specific nodes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introducing Node Affinity
&lt;/h3&gt;

&lt;p&gt;Node Affinity addresses these limitations by enabling Kubernetes to schedule pods onto nodes that match specified labels. Let's break down how Node Affinity works:&lt;/p&gt;

&lt;h4&gt;
  
  
  Matching Pods with Node Labels
&lt;/h4&gt;

&lt;p&gt;Consider a scenario with three nodes labeled based on their disk types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node 1: disk=HDD&lt;/li&gt;
&lt;li&gt;Node 2: disk=SSD&lt;/li&gt;
&lt;li&gt;Node 3: disk=SSD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We want to ensure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HDD-intensive workloads run only on Node 1.&lt;/li&gt;
&lt;li&gt;SSD-intensive workloads are scheduled on Node 2 or Node 3.&lt;/li&gt;
&lt;/ul&gt;
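&lt;p&gt;Assuming the three nodes are named worker1 through worker3 (placeholder names), the labels above would be applied like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label nodes worker1 disk=HDD
kubectl label nodes worker2 disk=SSD
kubectl label nodes worker3 disk=SSD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;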

&lt;h4&gt;
  
  
  Affinity Rules
&lt;/h4&gt;

&lt;p&gt;Node Affinity uses rules like requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution to define scheduling behaviors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/strong&gt;: Ensures a pod is only scheduled if a matching node label is found during scheduling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/strong&gt;: Prefers to schedule a pod on a node with matching labels but can schedule on other nodes if no match is found.&lt;/li&gt;
&lt;/ul&gt;
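&lt;p&gt;For comparison, here is a minimal sketch of the preferred form (the weight value is arbitrary for illustration). The scheduler favors nodes labeled disk=SSD but will still place the pod elsewhere if none are available:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: disk
              operator: In
              values:
                - SSD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;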

&lt;h3&gt;
  
  
  Example: YAML Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disk
                operator: In
                values:
                  - SSD
                  - HDD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the pod example-pod is configured to run on nodes labeled with disk=SSD or disk=HDD.&lt;/p&gt;

&lt;h4&gt;
  
  
  Practical Demo
&lt;/h4&gt;

&lt;p&gt;We applied this configuration and observed how Kubernetes scheduled pods based on node labels. Even when labels were modified post-scheduling, existing pods remained unaffected: that is the "IgnoredDuringExecution" part of the rule at work, which preserves a pod's placement once it is running.&lt;/p&gt;
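&lt;p&gt;To try this yourself (the node name is assumed), overwrite a label after the pod is running and confirm the pod stays where it was scheduled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label nodes worker2 disk=HDD --overwrite
kubectl get pods -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;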

&lt;h3&gt;
  
  
  Key Differences from Taints and Tolerations
&lt;/h3&gt;

&lt;p&gt;While Taints and Tolerations focus on node acceptance or rejection based on predefined conditions, Node Affinity ensures pods are scheduled on nodes that specifically match given criteria. This distinction is crucial for workload optimization and resource allocation in Kubernetes clusters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Node Affinity enhances Kubernetes scheduling capabilities by allowing fine-grained control over pod placement based on node attributes. Understanding and effectively utilizing Node Affinity can significantly improve workload performance and cluster efficiency.&lt;/p&gt;

&lt;p&gt;Stay tuned for our next installment, where we'll explore Kubernetes resource requests and limits—a critical aspect of optimizing resource utilization in your Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/5vimzBRnoDk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Understanding Taints and Tolerations in Kubernetes</title>
      <dc:creator>Jensen Jose</dc:creator>
      <pubDate>Tue, 09 Jul 2024 17:12:18 +0000</pubDate>
      <link>https://dev.to/jensen1806/understanding-taints-and-tolerations-in-kubernetes-7oj</link>
      <guid>https://dev.to/jensen1806/understanding-taints-and-tolerations-in-kubernetes-7oj</guid>
      <description>&lt;p&gt;Welcome back to my blog series on Kubernetes! Today we will be taking a dive into a crucial yet confusing topic: Taints and Tolerations. Understanding this concept is vital for anyone working with Kubernetes, as it helps manage workloads more effectively. By the end of this post, you'll have a clear understanding of how to use taints and tolerations, and you'll be able to apply these concepts confidently in your own projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are Taints and Tolerations?
&lt;/h3&gt;

&lt;p&gt;Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. Taints are applied to nodes, and they repel pods that do not have the corresponding toleration. This mechanism is essential for managing workloads that have specific requirements, such as running AI workloads on nodes with GPUs.&lt;/p&gt;

&lt;h4&gt;
  
  
  How Taints Work
&lt;/h4&gt;

&lt;p&gt;A taint is a key-value pair that you apply to a node. For instance, you might have a node dedicated to AI workloads, which requires GPUs. You can taint this node with key=value, such as GPU=true. This taint will prevent pods that do not tolerate this taint from being scheduled on the node.&lt;/p&gt;

&lt;h4&gt;
  
  
  How Tolerations Work
&lt;/h4&gt;

&lt;p&gt;To allow a pod to be scheduled on a node with a taint, you need to add a toleration to the pod. A toleration has to match the taint's key-value pair. For example, if your node has a taint GPU=true, your pod must have a toleration GPU=true to be scheduled on that node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Taints and Tolerations in Action
&lt;/h3&gt;

&lt;p&gt;Let's break down a practical example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Tainting a Node&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl taint nodes &amp;lt;node-name&amp;gt; GPU=true:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command applies a taint to a node, ensuring that only pods with the toleration &lt;strong&gt;GPU=true&lt;/strong&gt; can be scheduled on it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Adding a Toleration to a Pod&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: ai-container
    image: ai-image
  tolerations:
  - key: "GPU"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This YAML file defines a pod with a toleration that matches the node taint.&lt;/p&gt;

&lt;p&gt;When you create this pod, Kubernetes will check the taint on the node and the toleration on the pod. If they match, the pod will be scheduled on the tainted node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Effects of Taints
&lt;/h3&gt;

&lt;p&gt;There are three main effects that you can specify with taints:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;NoSchedule&lt;/strong&gt;: Pods that do not tolerate the taint will not be scheduled on the node.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PreferNoSchedule&lt;/strong&gt;: Kubernetes will try to avoid scheduling pods that do not tolerate the taint on the node, but it is not guaranteed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NoExecute&lt;/strong&gt;: Pods that do not tolerate the taint will be evicted from the node if they are already running.&lt;/li&gt;
&lt;/ol&gt;
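&lt;p&gt;With NoExecute, a pod can also declare how long it will tolerate the taint before being evicted. A minimal sketch (the 60-second window is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tolerations:
- key: "GPU"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
  tolerationSeconds: 60
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;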

&lt;h3&gt;
  
  
  Node Selectors
&lt;/h3&gt;

&lt;p&gt;While taints and tolerations control which pods can be scheduled on which nodes, node selectors are another way to control pod placement. Node selectors work by adding labels to nodes and specifying those labels in pod specifications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: ai-container
    image: ai-image
  nodeSelector:
    GPU: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration ensures that the pod is only scheduled on nodes with the label GPU=true.&lt;/p&gt;
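&lt;p&gt;Note that nodeSelector matches labels, not taints, so the node must actually carry the label first. Assuming worker1 as the node name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl label nodes worker1 GPU=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;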

&lt;h3&gt;
  
  
  Example: Scheduling Pods with Taints and Tolerations
&lt;/h3&gt;

&lt;p&gt;Let's see how this works in practice. First, we'll taint a node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl taint nodes worker1 GPU=true:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll create a pod with a matching toleration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: ai-pod
spec:
  containers:
  - name: ai-container
    image: ai-image
  tolerations:
  - key: "GPU"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this pod configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f ai-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod will be scheduled on the tainted node because it has the appropriate toleration.&lt;/p&gt;
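&lt;p&gt;To clean up afterwards, remove the taint by repeating the same command with a trailing minus sign:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl taint nodes worker1 GPU=true:NoSchedule-
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;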

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Taints and tolerations are powerful tools in Kubernetes that help you manage where pods are scheduled. By using taints, you can prevent certain workloads from running on specific nodes, while tolerations allow pods to be scheduled on nodes with matching taints. Node selectors provide additional control over pod placement by matching pod labels to node labels.&lt;/p&gt;

&lt;p&gt;I hope this post has clarified the concept of taints and tolerations for you. In the next blog post, we'll explore node affinity and anti-affinity, which provide even more control over pod scheduling. &lt;/p&gt;

&lt;p&gt;Happy coding, and stay tuned for the next post in this series!&lt;/p&gt;

&lt;p&gt;For further reference, check out the detailed YouTube video here:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/nwoS2tK2s6Q"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cicd</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
