<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Samuel Ogunmola</title>
    <description>The latest articles on DEV Community by Samuel Ogunmola (@developermide).</description>
    <link>https://dev.to/developermide</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F994075%2F7f3c364d-f627-4b6b-855d-d9ec4a287920.jpeg</url>
      <title>DEV Community: Samuel Ogunmola</title>
      <link>https://dev.to/developermide</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/developermide"/>
    <language>en</language>
    <item>
      <title>Kubernetes: From Zero to Hero (Part 13) - Quality of Service 🔥💥</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Mon, 02 Jan 2023 20:18:37 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-13-quality-of-service-4006</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-13-quality-of-service-4006</guid>
      <description>&lt;h2&gt;
  
  
  What is Quality of Service (QoS)?
&lt;/h2&gt;

&lt;p&gt;Kubernetes Quality of Service (QoS) is a feature that helps you manage the resources that are allocated to your applications in a cluster. It allows you to specify the relative priority of different applications (called "pods"), which can be useful in a number of different scenarios.&lt;/p&gt;

&lt;p&gt;Pods are assigned to one of three QoS classes: "Guaranteed," "Burstable," and "Best Effort." The QoS class of a pod determines how it is treated when a node runs short of resources. For example, pods in the Guaranteed class have the highest priority and are guaranteed the resources they request, while pods in the Best Effort class have the lowest priority and are the first to be starved of resources if the cluster is under heavy load.&lt;/p&gt;

&lt;p&gt;Using QoS can be helpful in a number of different scenarios. For example, if you have a set of applications that are critical to your business and need a consistent level of resources, you might assign them to the Guaranteed class. On the other hand, if you have batch jobs or other non-critical workloads that can tolerate some level of resource contention, you might assign them to the Best Effort class.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is Quality of Service important?
&lt;/h2&gt;

&lt;p&gt;Quality of Service (QoS) is important in Kubernetes because, when a node runs out of resources, Kubernetes must decide which pods to throttle or evict first. QoS classes let you influence that decision by expressing the relative priority of your applications (pods). This can be useful in a number of different scenarios.&lt;/p&gt;

&lt;p&gt;For example, if you have a set of applications that are critical to your business and need a consistent level of resources, you might want to give them a higher priority by assigning them to the Guaranteed QoS class. This will ensure that they are allocated the resources they need to run effectively, and will prevent them from being starved of resources.&lt;/p&gt;

&lt;p&gt;On the other hand, if you have batch jobs or other non-critical workloads that can tolerate some level of resource contention, you might want to give them a lower priority by assigning them to the Best Effort QoS class. This will allow them to use resources that are available, but they may be starved of resources if the cluster is under heavy load.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types Of Quality of Service
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, there are three Quality of Service (QoS) classes that can be assigned to pods: Guaranteed, Burstable, and Best Effort. Each QoS class has a specific set of characteristics and is intended for use in different scenarios. Here's a brief overview of each QoS class:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guaranteed&lt;/strong&gt;: Pods in the Guaranteed class have the highest priority: every container's requests equal its limits, so the pod is allocated exactly the resources it asks for, and it is the last to be evicted when a node comes under resource pressure. Here's an example of how to specify the Guaranteed QoS class in a pod specification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500Mi"&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500Mi"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: For a Pod to be given a QoS class of Guaranteed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every Container in the Pod must have a memory limit and a memory request.&lt;/li&gt;
&lt;li&gt;For every Container in the Pod, the memory limit must equal the memory request.&lt;/li&gt;
&lt;li&gt;Every Container in the Pod must have a CPU limit and a CPU request.&lt;/li&gt;
&lt;li&gt;For every Container in the Pod, the CPU limit must equal the CPU request.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Burstable&lt;/strong&gt;: Pods in the Burstable class have a lower priority than Guaranteed pods. They are guaranteed the resources they request, and can burst above their requests up to their limits when spare capacity is available, but they are evicted before Guaranteed pods when a node runs short of resources. Here's an example of how to specify the Burstable QoS class in a pod specification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500Mi"&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.5"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200Mi"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: A Pod is given a QoS class of Burstable if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Pod does not meet the criteria for QoS class Guaranteed.&lt;/li&gt;
&lt;li&gt;At least one Container in the Pod has a memory or CPU request or limit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Effort&lt;/strong&gt;: Pods in the Best Effort class are given the lowest priority and may be starved of resources if the cluster is under heavy load. They are typically used for batch jobs or other non-critical workloads that can tolerate some level of resource contention. Here's an example of how to specify the Best Effort QoS class in a pod specification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NOTE: For a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests.&lt;/p&gt;

&lt;p&gt;In general, it's a good idea to use the Guaranteed QoS class for critical applications that need a consistent level of resources, the Burstable class for applications that need a certain amount of resources but can tolerate some level of resource contention, and the Best Effort class for batch jobs or other non-critical workloads that can tolerate resource contention.&lt;/p&gt;
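
&lt;p&gt;Once a pod is running, the class Kubernetes assigned is recorded in the pod's status. Using the my-pod example above, you can check it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod my-pod -o jsonpath='{.status.qosClass}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This prints Guaranteed, Burstable, or BestEffort.&lt;/p&gt;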

</description>
      <category>jokes</category>
      <category>discuss</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>Kubernetes: From Zero to Hero (Part 12) - Requests and Limits</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Sun, 01 Jan 2023 17:10:36 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-12-requests-and-limirts-1o9l</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-12-requests-and-limirts-1o9l</guid>
      <description>&lt;p&gt;In Kubernetes, requests and limits are used to specify the minimum and maximum amount of resources (such as CPU and memory) that a pod or container should be allocated.&lt;/p&gt;

&lt;p&gt;Requests are the minimum amount of resources that a pod or container should receive. If a pod or container has a request for a particular resource, the scheduler will ensure that the pod or container is allocated at least that much of the resource. This is useful for ensuring that your applications have the resources they need to run effectively.&lt;/p&gt;

&lt;p&gt;Limits, on the other hand, are the maximum amount of resources that a pod or container is allowed to consume. If a pod or container exceeds its limit for a particular resource, it may be terminated or throttled to prevent it from consuming too many resources. This can be useful for preventing resource contention and overutilization in your cluster.&lt;/p&gt;

&lt;p&gt;Here's an example of how to specify requests and limits in a pod specification:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500Mi"&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.5"&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200Mi"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we are specifying that the container should be allocated at least 0.5 CPU and 200 MiB of memory, and should not be allowed to consume more than 1 CPU and 500 MiB of memory.&lt;/p&gt;

&lt;p&gt;NOTE: CPU is specified in cores or millicores (m), and memory in bytes with suffixes such as Mi (mebibytes). The value "1" for CPU represents one full core (equivalent to "1000m"), "0.5" is half a core ("500m"), and "500Mi" for memory represents 500 mebibytes.&lt;/p&gt;

&lt;p&gt;It's important to note that requests and limits are not guaranteed to be the exact amount of resources that a pod or container will receive. They are used by the scheduler to make decisions about resource allocation, but the actual allocation may vary based on the available resources in the cluster and the demands of other pods and containers.&lt;/p&gt;

&lt;p&gt;Using requests and limits can be a powerful tool for managing resources in your Kubernetes cluster. By specifying the minimum and maximum amounts of resources that your applications need, you can help ensure that they are allocated the resources they need to run effectively, and can also prevent resource contention and overutilization in your cluster.&lt;/p&gt;


&lt;p&gt;One key use case for requests and limits is to ensure that your applications have the resources they need to run effectively. For example, if you know that your application requires a certain amount of CPU and memory to function properly, you can specify those requirements using requests. This will ensure that the scheduler allocates those resources to your application, and will prevent the application from being starved of resources.&lt;/p&gt;

&lt;p&gt;Limits, on the other hand, can be useful for preventing resource contention and overutilization in your cluster. By specifying limits for your pods and containers, you can ensure that they don't consume too many resources, which can help to prevent other pods and containers from being starved of resources.&lt;/p&gt;

&lt;p&gt;It's also worth noting that requests and limits are specified per container, not per pod; a pod's effective request is the sum of the requests of its containers. If you want default requests and limits applied across a whole namespace, you can define them with a LimitRange object.&lt;/p&gt;
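
&lt;p&gt;If you want default requests and limits applied automatically to containers in a namespace, Kubernetes provides the LimitRange object. Here's a minimal sketch (the name and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:            # applied as limits when a container sets none
      cpu: "1"
      memory: "500Mi"
    defaultRequest:     # applied as requests when a container sets none
      cpu: "0.5"
      memory: "200Mi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;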

&lt;p&gt;In general, it's a good idea to specify both requests and limits for your pods and containers. This will help to ensure that your applications have the resources they need to run effectively, while also preventing resource contention and overutilization in your cluster.&lt;/p&gt;

&lt;p&gt;To summarize: set requests so the scheduler can reserve the resources your applications need, and set limits so that no single workload can monopolize a node. Together they keep your applications running effectively while preventing resource contention and overutilization in your cluster.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa" rel="noopener noreferrer"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>minecraft</category>
      <category>opensource</category>
      <category>github</category>
    </item>
    <item>
      <title>Kubernetes: From Zero to Hero (Part 11) - Namespaces</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Sat, 31 Dec 2022 11:23:13 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-11-namespaces-1c2b</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-11-namespaces-1c2b</guid>
      <description>&lt;p&gt;Kubernetes namespaces are a way to partition resources in a cluster and control access to those resources. They allow you to create multiple virtual clusters within a single physical cluster, which can be useful for a number of different purposes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are they useful?
&lt;/h2&gt;

&lt;p&gt;One common use case for namespaces is to isolate resources and control access. For example, you might create a namespace for each team in your organization, and then grant each team access only to the resources within their own namespace. This can help to enforce separation of duties and prevent accidental or unauthorized access to resources.&lt;/p&gt;

&lt;p&gt;Another use case for namespaces is to manage multiple environments, such as development, staging, and production. By using namespaces, you can easily deploy and manage different versions of your applications in different environments, while still sharing common resources such as storage and networking.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create a namespace
&lt;/h2&gt;

&lt;p&gt;To create a namespace in Kubernetes, you can use the kubectl create namespace command. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a namespace called "my-namespace."&lt;/p&gt;
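
&lt;p&gt;You can also create the same namespace declaratively from a manifest (saved to a file such as namespace.yaml), which works well with version-controlled configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply it with kubectl apply -f namespace.yaml.&lt;/p&gt;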

&lt;h2&gt;
  
  
  How to use namespaces to isolate resources and control access
&lt;/h2&gt;

&lt;p&gt;To use a namespace to isolate resources and control access, you can specify the namespace when creating resources in Kubernetes. For example, to create a deployment in the "my-namespace" namespace, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment my-deployment &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use namespace labels and selectors to control access to resources. For example, you can label a namespace with the team that should have access to it, and then use a namespace selector to grant access to that team.&lt;/p&gt;
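
&lt;p&gt;For example, you might label a namespace with the team that owns it (the team name here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl label namespace my-namespace team=frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;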

&lt;h2&gt;
  
  
  Best practices for using namespaces
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use namespaces to reflect your organization's structure or separation of duties.&lt;/li&gt;
&lt;li&gt;Consider using namespaces to isolate resources for different environments (e.g. development, staging, production).&lt;/li&gt;
&lt;li&gt;Use namespaces to manage access to shared resources, such as storage and networking.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tips for naming your namespaces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use descriptive and meaningful names for your namespaces.&lt;/li&gt;
&lt;li&gt;Avoid using names that are too general or could be confused with other resources.&lt;/li&gt;
&lt;li&gt;Use a consistent naming convention for your namespaces to make them easier to manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using namespaces to manage multiple environments (e.g. dev, staging, production)
&lt;/h2&gt;

&lt;p&gt;You can use namespaces to deploy and manage different versions of your applications in different environments. For example, you might create a "development" namespace for development versions of your applications, and a "production" namespace for the versions that are live in production.&lt;/p&gt;
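
&lt;p&gt;While working in one environment, you can set the default namespace for your current kubectl context so you don't have to pass --namespace on every command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config set-context &lt;span class="nt"&gt;--current&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;development
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;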

&lt;h2&gt;
  
  
  Using namespaces to provide multi-tenancy
&lt;/h2&gt;

&lt;p&gt;You can use namespaces to provide multi-tenancy, which is the ability to host multiple isolated groups of users (tenants) on a shared cluster. This can be useful in a number of scenarios, such as hosting applications for multiple customers on a shared cluster.&lt;/p&gt;
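
&lt;p&gt;A common building block for multi-tenancy is a ResourceQuota, which caps the total resources all pods in a tenant's namespace can consume. Here's a minimal sketch (the name and values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;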

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, Kubernetes namespaces are a useful tool for partitioning resources and controlling access in your cluster, and can be used for a variety of different purposes. Some best practices for using namespaces include organizing them based on your organization's structure or separation of duties, and using descriptive and meaningful names. In addition to providing multi-tenancy, namespaces can also be used to manage multiple environments.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes: From Zero to Hero (Part 10) - Helm</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Fri, 30 Dec 2022 12:15:10 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-10-helm-494k</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-10-helm-494k</guid>
      <description>&lt;h2&gt;
  
  
  Introduction to Helm and its role in Kubernetes
&lt;/h2&gt;

&lt;p&gt;Helm is an open source package manager for Kubernetes that simplifies the deployment and management of applications on the Kubernetes platform. It helps you define, install, and upgrade complex Kubernetes applications, and it provides a way to package all of the resources required for an application into a single, easy-to-use package called a chart.&lt;/p&gt;

&lt;p&gt;Imagine that you have a web application that you want to deploy to a Kubernetes cluster. Without Helm, you would have to manually create all of the required Kubernetes resources, such as pods, services, and deployment objects, and then use the kubectl command-line tool to deploy them to your cluster. This process can be time-consuming and error-prone, especially if you have multiple microservices or dependencies that need to be deployed.&lt;/p&gt;

&lt;p&gt;With Helm, you can create a chart that defines all of the resources required for your application, and then use the Helm CLI to deploy the chart to your cluster with a single command. This saves time and reduces the risk of mistakes, making it easier to manage and update your applications over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Helm
&lt;/h2&gt;

&lt;p&gt;To use Helm, you will need to have a Kubernetes cluster set up and the kubectl command-line tool installed on your local machine. If you don't already have a cluster, you can follow the instructions on the Kubernetes website to set one up.&lt;/p&gt;

&lt;p&gt;Once you have a cluster and kubectl installed, you can install Helm by downloading the latest release from the Helm website and following the installation instructions. On macOS or Linux, you can use the following command to install Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will download the Helm binary and add it to your PATH, so you can use the Helm CLI from any directory.&lt;/p&gt;
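
&lt;p&gt;You can verify the installation with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;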

&lt;p&gt;NOTE: Older tutorials mention installing a server-side component called Tiller on the cluster with the helm init command. Tiller was part of Helm 2 and was removed in Helm 3, which is what the script above installs, so there is no server-side component to set up; the Helm CLI talks to your cluster directly using your kubeconfig.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Helm
&lt;/h2&gt;

&lt;p&gt;Once Helm is installed, you can use the Helm CLI to search for, install, and manage charts.&lt;/p&gt;

&lt;p&gt;To search for charts in a chart repository that you have added (for example with the helm repo add command), use the search command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm search repo &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return a list of charts that match the search term. You can then use the install command to install a chart from the repository, giving the release a name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;release-name&amp;gt; &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create all of the required resources for the chart on your cluster. You can use the list command to see the releases (i.e. deployed instances) of your charts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To update a release, you can use the upgrade command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &amp;lt;release-name&amp;gt; &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will update the resources for the release to the latest version of the chart.&lt;/p&gt;
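
&lt;p&gt;You can also override chart values when upgrading (or installing). The release name, chart name, and value below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade my-release my-chart &lt;span class="nt"&gt;--set&lt;/span&gt; replicaCount=5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, pass a values file with the -f flag.&lt;/p&gt;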

&lt;p&gt;You can also create your own charts by using the create command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a starter chart with the necessary files and directories. You can then modify the chart files to define the resources that you want to deploy and customize the behavior of the chart.&lt;/p&gt;
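
&lt;p&gt;For reference, the starter chart generated by helm create contains roughly this layout:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-chart/
  Chart.yaml    # chart metadata (name, version, description)
  values.yaml   # default configuration values
  charts/       # chart dependencies
  templates/    # Kubernetes manifest templates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;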

&lt;p&gt;To view a chart's metadata, you can use the helm show command (to render the Kubernetes manifests a chart would produce, use helm template):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm show chart &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;To roll back a deployment to a previous version, use the helm rollback command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm rollback &amp;lt;release-name&amp;gt; &amp;lt;revision&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Best practices for using Helm in a production environment
&lt;/h2&gt;

&lt;p&gt;When using Helm in a production environment, it's important to follow some best practices to ensure the stability and reliability of your deployments. Here are a few tips:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use version control for your charts to track changes and roll back if necessary&lt;/li&gt;
&lt;li&gt;Test your charts thoroughly before deploying them to production&lt;/li&gt;
&lt;li&gt;Use Helm's built-in security features, such as chart signing and repository authentication, to protect your deployments&lt;/li&gt;
&lt;li&gt;Use Helm's dependency management features to ensure that all required charts are installed and upgraded in the correct order&lt;/li&gt;
&lt;li&gt;Use Helm's release management features, such as the ability to roll back to previous versions, to recover from failures or errors in your deployments.&lt;/li&gt;
&lt;/ul&gt;
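
&lt;p&gt;As part of testing your charts before deploying them to production, you can run Helm's built-in linter, which checks a chart for common problems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm lint &amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;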

&lt;h2&gt;
  
  
  Advanced Helm features
&lt;/h2&gt;

&lt;p&gt;In addition to the basic Helm commands described above, there are several advanced features that can be useful in a production environment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Using Helm with continuous integration/continuous deployment (CI/CD) pipelines:&lt;/strong&gt; Helm can be used in combination with a CI/CD tool, such as Jenkins or CircleCI, to automate the deployment of applications to a Kubernetes cluster. For example, you could set up a pipeline that automatically builds and deploys your application whenever you push changes to a Git repository. To use Helm with a CI/CD tool, you can set up a script that uses the Helm CLI to install or upgrade charts as part of the deployment process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customizing chart values and using templates for deployment configuration:&lt;/strong&gt; Chart values are variables that can be used to customize the behavior of a chart when it is installed. For example, you might use chart values to specify the number of replicas for a deployment or the image tag for a container. Helm charts can use templates to define how these values should be applied to the resources in the chart. This allows you to define reusable templates that can be used across multiple charts, and it makes it easier to manage and maintain your deployments over time.
Here's an example of how to use chart values and templates in a chart:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# values.yaml&lt;/span&gt;
&lt;span class="na"&gt;replicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;

&lt;span class="c1"&gt;# templates/deployment.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.replicaCount&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.image.repository&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;:{{ .Values.image.tag }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the &lt;code&gt;replicaCount&lt;/code&gt; and &lt;code&gt;image&lt;/code&gt; values can be customized when the chart is installed using the &lt;code&gt;--set&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;release-name&amp;gt; &amp;lt;chart&amp;gt; &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;replicaCount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5,image.tag&lt;span class="o"&gt;=&lt;/span&gt;stable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
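
&lt;p&gt;The CI/CD integration described earlier usually reduces to one idempotent command in the pipeline script. A minimal sketch (the chart path, release name, and &lt;code&gt;GIT_COMMIT&lt;/code&gt; variable are assumptions about your pipeline):&lt;/p&gt;

```shell
# Deploy step in a CI pipeline: install the release if absent, upgrade it if present
helm upgrade --install my-app ./charts/my-app \
  --set image.tag="$GIT_COMMIT"
```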



&lt;p&gt;&lt;strong&gt;Using Helm hooks to execute tasks before or after chart installation or upgrade:&lt;/strong&gt; Helm hooks let you run tasks at specific points in a release's lifecycle, such as before or after installation or upgrade. Hooks are defined in the chart as templates annotated with &lt;code&gt;helm.sh/hook&lt;/code&gt;, and can be used to perform tasks such as creating a database, migrating data, or sending notifications. This is especially useful when deploying complex applications that depend on other services or resources.&lt;/p&gt;

&lt;p&gt;Here's an example of how to use a hook in a chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# templates/post-install-job.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Job&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Release.Name&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-post-install&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s2"&gt;"helm.sh/hook"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;post-install&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;post-install&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Post-install&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hook&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ran"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the hook is a Kubernetes Job that runs a simple command after the chart is installed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Helm to manage dependencies between charts:&lt;/strong&gt; Helm provides a way to manage dependencies between charts, which is useful when multiple charts together form a complete application. For example, if you have a chart for a web server and a chart for a database, you can declare the database chart as a dependency to ensure that it is installed before the web server chart.&lt;/p&gt;
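
&lt;p&gt;Dependencies are declared in the chart's &lt;code&gt;Chart.yaml&lt;/code&gt; and fetched with &lt;code&gt;helm dependency update&lt;/code&gt;. A minimal sketch (the chart names and version are illustrative):&lt;/p&gt;

```yaml
# Chart.yaml
apiVersion: v2
name: my-web-app
version: 0.1.0
dependencies:
  - name: postgresql
    repository: https://charts.bitnami.com/bitnami
    version: 12.x.x
```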

&lt;p&gt;In conclusion, Helm is a powerful tool for managing applications on Kubernetes. It simplifies the deployment and management of complex applications by providing a way to package all of the resources needed for an application into a single chart. Helm also offers advanced features such as the ability to use templates to customize deployment configurations, use hooks to execute tasks before or after chart installation or upgrade, and manage dependencies between charts.&lt;/p&gt;

&lt;p&gt;Some of the main benefits and features of Helm include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplified deployment and management of complex applications on Kubernetes&lt;/li&gt;
&lt;li&gt;Ability to search for and install charts from repositories&lt;/li&gt;
&lt;li&gt;Ability to create and manage custom charts&lt;/li&gt;
&lt;li&gt;Ability to upgrade and roll back deployments&lt;/li&gt;
&lt;li&gt;Advanced features such as chart values, templates, hooks, and dependency management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are using Kubernetes and want an easier way to manage your applications, Helm is a great tool to consider. To learn more about Helm and how to use it, you can check out the Helm documentation (&lt;a href="https://helm.sh/docs/" rel="noopener noreferrer"&gt;https://helm.sh/docs/&lt;/a&gt;) and the Kubernetes documentation. There are also many online resources and tutorials available for learning more about Helm and Kubernetes.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa" rel="noopener noreferrer"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>KUBERNETES: FROM ZERO TO HERO (PART 9) - NODE SELECTOR, TAINTS AND TOLERATIONS, POD AND NODE AFFINITY AND ANTI-AFFINITY</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Thu, 29 Dec 2022 13:45:49 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-9-node-selector-taints-and-tolerations-pod-and-node-affinity-and-anti-affinity-2g99</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-9-node-selector-taints-and-tolerations-pod-and-node-affinity-and-anti-affinity-2g99</guid>
      <description>&lt;p&gt;In Kubernetes, node taints, node affinity, node selector and pod affinity are concepts that allow you to control the placement of pods on nodes in the cluster, based on the characteristics and the capabilities of the nodes and the pods. These concepts are useful when you need to enforce certain rules or constraints on the placement of the pods, or when you want to optimize the resource usage of the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Node Selector
&lt;/h2&gt;

&lt;p&gt;Node selector is a property of pods that allows you to specify the nodes that the pods should be scheduled on, based on the labels of the nodes. You can use node selector to optimize the resource usage of the cluster, or to ensure that the pods are placed on the nodes that are most suitable for their workloads.&lt;/p&gt;

&lt;p&gt;To specify node selector, you can use the &lt;code&gt;nodeSelector&lt;/code&gt; field in the pod &lt;code&gt;spec&lt;/code&gt;, and set the labels of the nodes that you want to target. The labels of the nodes are key-value pairs that are assigned to the nodes by the cluster administrator.&lt;/p&gt;
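
&lt;p&gt;Node labels are typically applied with &lt;code&gt;kubectl label&lt;/code&gt;; for example, to give a node a label that the node selector below can match:&lt;/p&gt;

```shell
# Attach the label key=value to node-1
kubectl label nodes node-1 key=value
```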

&lt;p&gt;Here is an example of how you can specify the node selector of a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mypod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycontainer&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myimage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Node selector is the simplest way to constrain scheduling. For more expressive placement rules, it can be combined with node affinity and with pod affinity and anti-affinity, which let you specify whether certain pods should be placed on the same node or spread across multiple nodes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Node Taints
&lt;/h2&gt;

&lt;p&gt;Node taints and tolerations are features in Kubernetes that let you keep certain pods off certain nodes. Adding a "taint" to a node marks it as unsuitable: pods that do not have a matching "toleration" for the taint will not be scheduled on the tainted node.&lt;/p&gt;

&lt;p&gt;On the other hand, node tolerations allow you to specify that a pod can tolerate running on a tainted node. This can be useful in situations where you need to temporarily schedule pods on a tainted node, or where you have a pod that is able to tolerate the taint.&lt;/p&gt;

&lt;p&gt;To add a taint to a node, you can use the kubectl taint command. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl taint nodes node-1 &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;value:NoSchedule
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This adds a taint with the key "key" and the value "value" to the node "node-1", with the effect "NoSchedule", which means that pods without a matching toleration will not be scheduled on the node.&lt;/p&gt;
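
&lt;p&gt;A taint can be removed again by repeating the same specification with a trailing dash:&lt;/p&gt;

```shell
# Remove the taint that was added above
kubectl taint nodes node-1 key=value:NoSchedule-
```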

&lt;p&gt;To specify a toleration for a pod, you can use the &lt;code&gt;tolerations&lt;/code&gt; field in the pod specification. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
  &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;key&lt;/span&gt;
    &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Equal&lt;/span&gt;
    &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value&lt;/span&gt;
    &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the pod will be able to tolerate a taint with the key "key" and the value "value" with the effect "NoSchedule".&lt;/p&gt;

&lt;p&gt;Node taints and tolerations are useful for keeping pods off dedicated or problematic nodes, while still allowing specific pods that declare a matching toleration to run there.&lt;/p&gt;
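
&lt;p&gt;To check which taints are currently set on a node, you can inspect it with &lt;code&gt;kubectl describe&lt;/code&gt;:&lt;/p&gt;

```shell
# The Taints field in the output lists key=value:effect entries
kubectl describe node node-1
```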

&lt;h2&gt;
  
  
  Node Affinity and Anti-Affinity
&lt;/h2&gt;

&lt;p&gt;Node affinity and anti-affinity are features in Kubernetes that allow you to specify the preferred or required nodes for a pod to run on, or to specify nodes that a pod should not run on. These features build on node labels: key-value pairs applied to nodes in a cluster that describe the characteristics of each node.&lt;/p&gt;

&lt;p&gt;Node affinity means that a pod should be scheduled on nodes with the specified labels, while anti-affinity means that a pod should not be scheduled on nodes with the specified labels. Both node affinity and anti-affinity can be used to specify either preferred or required nodes for a pod.&lt;/p&gt;

&lt;p&gt;To specify node affinity or anti-affinity for a pod, you can use the affinity field in the pod specification. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;environment&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the pod will only be scheduled on nodes that have the label "environment" set to "production".&lt;/p&gt;
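
&lt;p&gt;The preferred (soft) rules mentioned above use &lt;code&gt;preferredDuringSchedulingIgnoredDuringExecution&lt;/code&gt; instead, where each rule carries a weight and the scheduler favors, but does not require, matching nodes. A sketch:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80          # higher weight = stronger preference
        preference:
          matchExpressions:
          - key: environment
            operator: In
            values:
            - production
```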

&lt;p&gt;Kubernetes has no separate &lt;code&gt;nodeAntiAffinity&lt;/code&gt; field; node anti-affinity is expressed with the same &lt;code&gt;nodeAffinity&lt;/code&gt; field, using the &lt;code&gt;NotIn&lt;/code&gt; or &lt;code&gt;DoesNotExist&lt;/code&gt; operator. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-image&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nodeAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeSelectorTerms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;environment&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NotIn&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;staging&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the pod will not be scheduled on nodes that have the label "environment" set to "staging".&lt;/p&gt;

&lt;p&gt;Node affinity and anti-affinity are useful tools for ensuring that your applications are deployed to the appropriate nodes in your cluster. They allow you to specify the preferred or required characteristics of the nodes that a particular pod should run on, or to specify nodes that a pod should not run on. This can help you to optimize the performance and availability of your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pod Affinity and Anti-Affinity
&lt;/h2&gt;

&lt;p&gt;Pod affinity and anti-affinity are properties of pods that allow you to specify the pods that should be co-located or separated from each other, based on the labels and the topology of the nodes. You can use pod affinity and anti-affinity to optimize the resource usage of the cluster, or to ensure that the pods are placed on the nodes that are most suitable for their workloads.&lt;/p&gt;

&lt;p&gt;Pod affinity allows you to specify the pods that should be co-located with each other, based on the labels and the topology of the nodes. You can use pod affinity to group the pods together, if you want to take advantage of the proximity of the pods to each other, or if you want to minimize the network latency between the pods.&lt;/p&gt;

&lt;p&gt;Pod anti-affinity allows you to specify the pods that should not be co-located with each other, based on the labels and the topology of the nodes. You can use pod anti-affinity to spread the pods across the nodes, if you want to distribute the workloads evenly, or if you want to avoid overloading a single node.&lt;/p&gt;

&lt;p&gt;To specify pod affinity or anti-affinity, you can use the &lt;code&gt;affinity&lt;/code&gt; field in the pod &lt;code&gt;spec&lt;/code&gt;, and set the &lt;code&gt;podAffinity&lt;/code&gt; or &lt;code&gt;podAntiAffinity&lt;/code&gt; field, depending on your needs. You can then specify the labels and the topology of the pods that you want to target, using the &lt;code&gt;labelSelector&lt;/code&gt; and the &lt;code&gt;topologyKey&lt;/code&gt; fields.&lt;/p&gt;

&lt;p&gt;Here is an example of how you can specify the pod affinity of a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mypod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;podAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;key&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;value1&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;value2&lt;/span&gt;
        &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycontainer&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myimage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This specifies that the pod should be co-located with pods that have the label "key" set to "value1" or "value2"; because the topology key is &lt;code&gt;kubernetes.io/hostname&lt;/code&gt;, "co-located" here means on the same node.&lt;/p&gt;

&lt;p&gt;Here is an example of how you can specify the pod anti-affinity of a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mypod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;podAntiAffinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requiredDuringSchedulingIgnoredDuringExecution&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;labelSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;matchExpressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;key&lt;/span&gt;
            &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;In&lt;/span&gt;
            &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;value1&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;value2&lt;/span&gt;
        &lt;span class="na"&gt;topologyKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/hostname&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycontainer&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myimage&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this example, the pod will not be scheduled onto a node that is already running a pod whose label "key" is set to "value1" or "value2".&lt;/p&gt;



&lt;p&gt;In conclusion, node selector, node affinity, anti-affinity, pod affinity, anti-affinity, and node taints and tolerations are all features in Kubernetes that allow you to control where your applications are deployed within a cluster. Node selector allows you to specify the characteristics of the nodes that a particular pod should run on, while node affinity and anti-affinity allow you to specify the preferred or required nodes for a pod to run on, or to specify nodes that a pod should not run on. Pod affinity and anti-affinity allow you to specify whether certain pods should be placed on the same node, or whether they should be spread across multiple nodes. Node taints and tolerations allow you to mark certain nodes as being suitable or unsuitable for certain pods, and to specify whether a pod can tolerate running on a tainted node. These features are powerful tools that can help you to optimize the performance and availability of your applications within a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>KUBERNETES: FROM ZERO TO HERO (PART 8) - CONFIG MAPS AND SECRETS</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Wed, 28 Dec 2022 09:21:54 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-8-config-maps-and-secrets-3aap</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-8-config-maps-and-secrets-3aap</guid>
      <description>&lt;p&gt;In Kubernetes, Config Maps and Secrets are two types of resources that allow you to store and manage configuration data and sensitive information, respectively. Both types of resources are used to inject data into the containers in a pod, and they can be used to decouple the configuration of your applications from the code itself.&lt;/p&gt;

&lt;p&gt;In this article, we will cover the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What are Config Maps and Secrets, and how are they different&lt;/li&gt;
&lt;li&gt;How to create and use Config Maps and Secrets in your pods&lt;/li&gt;
&lt;li&gt;The different types of Config Maps and Secrets, and their use cases&lt;/li&gt;
&lt;li&gt;Best practices for managing Config Maps and Secrets in your cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are Config Maps and Secrets, and how are they different
&lt;/h2&gt;

&lt;p&gt;Config Maps and Secrets are both types of resources in Kubernetes that allow you to store and manage configuration data and sensitive information, respectively. However, they have some key differences:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Config Maps&lt;/strong&gt; are used to store non-sensitive configuration data, such as environment variables, command-line arguments, and configuration files. They are stored in plain text, and they can be easily accessed and modified by anyone who has access to the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secrets&lt;/strong&gt; are used to store sensitive information, such as passwords, tokens, and certificates. Their values are base64-encoded and, by default, stored unencrypted in etcd; you can restrict who can read them with RBAC, and optionally enable encryption at rest.&lt;/p&gt;

&lt;p&gt;Both Config Maps and Secrets are stored as key-value pairs, and they can be used to inject data into the containers in a pod. However, Config Maps are intended for non-sensitive data, while Secrets are intended for sensitive data.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create and use Config Maps and Secrets in your pods
&lt;/h2&gt;

&lt;p&gt;To use Config Maps and Secrets in your pods, you need to create them first, and then reference them in the pod's configuration file.&lt;/p&gt;

&lt;p&gt;Here is an example configuration file that creates a Config Map and a Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config-map&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;my-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-value&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;my-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bXktdmFsdWU=&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a Config Map named &lt;code&gt;my-config-map&lt;/code&gt; with a single key-value pair, and a Secret named &lt;code&gt;my-secret&lt;/code&gt; with the same key-value pair. The value of the Secret is encoded using &lt;code&gt;base64&lt;/code&gt;. Note that base64 is an encoding, not encryption: anyone who can read the Secret can decode its value.&lt;/p&gt;
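&lt;p&gt;Since Secret values are only base64-encoded, you can produce and inspect them from any shell. A minimal sketch:&lt;/p&gt;

```shell
# Encode a value for the "data" field of a Secret
printf 'my-value' | base64
# → bXktdmFsdWU=

# Decode it back to plain text
printf 'bXktdmFsdWU=' | base64 --decode
# → my-value
```

&lt;p&gt;In practice, &lt;code&gt;kubectl create secret&lt;/code&gt; performs this encoding for you, and the &lt;code&gt;stringData&lt;/code&gt; field of a Secret accepts plain-text values directly.&lt;/p&gt;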

&lt;p&gt;To create the Config Map and the Secret in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; config-map-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the Config Map and the Secret are created, you can reference them in the pod's configuration file. Here is an example configuration file that creates a pod that uses the Config Map and the Secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MY_ENV_VAR&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-config-map&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-key&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MY_SECRET_ENV_VAR&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
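&lt;p&gt;As a variation (not shown above), Kubernetes also supports loading every key of a Config Map or Secret at once with &lt;code&gt;envFrom&lt;/code&gt;. A sketch reusing the names from the example above:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    envFrom:                  # injects every key as an environment variable
    - configMapRef:
        name: my-config-map   # all keys from the Config Map
    - secretRef:
        name: my-secret       # all keys from the Secret
```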



&lt;h2&gt;
  
  
  Types of Kubernetes Secret
&lt;/h2&gt;

&lt;p&gt;There are several types of Secrets that you can use in Kubernetes, depending on your needs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generic Secrets&lt;/strong&gt;: These are the most basic type of Secrets, and they can be used to store any kind of sensitive data. You can create Generic Secrets from a file or a directory on the host, or from literal key-value pairs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLS Secrets&lt;/strong&gt;: These Secrets are used to store TLS certificates and keys, and they are used to secure the communication between pods and services. You can create TLS Secrets from a certificate file and its private key file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Registry Secrets&lt;/strong&gt;: These Secrets are used to authenticate with a Docker registry, and they are used to pull images from a private registry. You can create Docker Registry Secrets from your registry credentials (server, username, and password), or from an existing Docker config file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service Account Tokens&lt;/strong&gt;: These Secrets are used to authenticate with the Kubernetes API server, and they are automatically created and managed by Kubernetes. You don't need to create Service Account Tokens manually, but you can reference them in your pods and services.&lt;/p&gt;

&lt;p&gt;In addition to these types of Secrets, Kubernetes also supports custom Secret types, which you can define and use in your cluster.&lt;/p&gt;
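&lt;p&gt;These Secret types map to subcommands of &lt;code&gt;kubectl create secret&lt;/code&gt;. A sketch (the names, file paths, and credentials are placeholders):&lt;/p&gt;

```shell
# Generic Secret from a literal key-value pair
kubectl create secret generic my-secret --from-literal=my-key=my-value

# TLS Secret from a certificate and its private key
kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key

# Docker registry Secret for pulling from a private registry
kubectl create secret docker-registry my-registry-secret \
  --docker-server=registry.example.com \
  --docker-username=my-user \
  --docker-password=my-password
```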

&lt;h2&gt;
  
  
  Best practices for managing Config Maps and Secrets in your cluster
&lt;/h2&gt;

&lt;p&gt;Here are some best practices for managing Config Maps and Secrets in your cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Config Maps for non-sensitive data, and Secrets for sensitive data&lt;/strong&gt;: As we saw in the beginning of this article, Config Maps are intended for non-sensitive data, while Secrets are intended for sensitive data. Make sure to use the appropriate type of resource for your data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Config Maps and Secrets sparingly&lt;/strong&gt;: Config Maps and Secrets can be useful for injecting data into the containers in a pod, but they should not be used as a replacement for proper configuration management. Avoid storing large amounts of data in Config Maps and Secrets, and consider using a configuration management tool such as Ansible, Chef, or Puppet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the &lt;code&gt;--dry-run&lt;/code&gt; flag to test your Config Maps and Secrets&lt;/strong&gt;: Before creating a Config Map or a Secret in the cluster, you can use the &lt;code&gt;--dry-run&lt;/code&gt; flag to test the resource without actually creating it. This can be useful for validating the resource and for debugging any issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the &lt;code&gt;--output=yaml&lt;/code&gt; flag to view the generated Config Map or Secret&lt;/strong&gt;: After creating a Config Map or a Secret in the cluster, you can use the &lt;code&gt;--output=yaml&lt;/code&gt; flag to view the generated resource. This can be useful for verifying the content of the resource and for debugging any issues.&lt;/li&gt;
&lt;/ul&gt;
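&lt;p&gt;The two flags above can be combined. A sketch (note that recent kubectl versions spell the first flag &lt;code&gt;--dry-run=client&lt;/code&gt;):&lt;/p&gt;

```shell
# Render the Config Map that would be created, without creating it
kubectl create configmap my-config-map \
  --from-literal=my-key=my-value \
  --dry-run=client --output=yaml

# Inspect a Secret that already exists in the cluster
kubectl get secret my-secret --output=yaml
```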

&lt;p&gt;Overall, Config Maps and Secrets are useful resources for injecting data into the containers in a pod, and they can be used to decouple the configuration of your applications from the code itself. By following the best practices outlined in this article, you can effectively manage the configuration data and sensitive information in your cluster.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>KUBERNETES: FROM ZERO TO HERO (PART 7) - KUBERNETES VOLUMES💥🔥</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Tue, 27 Dec 2022 09:50:57 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-7-kubernetes-volumes-3018</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-7-kubernetes-volumes-3018</guid>
      <description>&lt;p&gt;In Kubernetes, a volume is a directory that is accessible to the containers in a pod. It allows you to store and share data among the containers in the pod, and it can be used to persist data beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;Imagine 😎 that you have a pod with two containers: a web server and a database. The web server needs to store its files in a directory that the database can access. You can use a volume to create a shared directory that both containers can access. This way, the web server can store its files in the shared directory, and the database can access them.&lt;/p&gt;

&lt;p&gt;There are several types of volumes that you can use in Kubernetes, each with its own characteristics and use cases. For example, you can use an &lt;code&gt;EmptyDir&lt;/code&gt; volume to store temporary data that does not need to be persisted beyond the lifecycle of the pod. Or you can use a &lt;code&gt;GCEPersistentDisk&lt;/code&gt; volume to store data that needs to be persisted beyond the lifecycle of the pod 🌈.&lt;/p&gt;

&lt;p&gt;To use a volume in a pod, you need to specify it in the pod's configuration file. This is done by adding a &lt;code&gt;volumes&lt;/code&gt; field to the configuration file, and specifying the type of volume and its parameters. You also need to add a &lt;code&gt;volumeMounts&lt;/code&gt; field to the container's configuration, and specify the name of the volume and the mount path. The mount path is the directory in the container where the volume will be mounted.&lt;/p&gt;

&lt;p&gt;For example, here is a configuration file that creates a pod with an &lt;code&gt;EmptyDir&lt;/code&gt; volume and a container that mounts the volume at the &lt;code&gt;/var/www/html&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-volume&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/www/html&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-volume&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a pod with an &lt;code&gt;EmptyDir&lt;/code&gt; volume named &lt;code&gt;my-volume&lt;/code&gt;, and a container named &lt;code&gt;my-container&lt;/code&gt; that runs the &lt;code&gt;nginx&lt;/code&gt; image. The container mounts the volume at the &lt;code&gt;/var/www/html&lt;/code&gt; directory. You can use the &lt;code&gt;kubectl create&lt;/code&gt; command to create the pod in the cluster.&lt;/p&gt;
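&lt;p&gt;Once the pod is running, you can confirm the mount from inside the container. A sketch, assuming the pod spec above was saved as &lt;code&gt;pod.yaml&lt;/code&gt;:&lt;/p&gt;

```shell
kubectl create -f pod.yaml
# List the contents of the mounted volume (empty at first)
kubectl exec my-pod -c my-container -- ls /var/www/html
```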

&lt;h2&gt;
  
  
  Types Of Volumes
&lt;/h2&gt;

&lt;p&gt;There are several types of volumes that you can use in Kubernetes, each with its own characteristics and use cases. In addition to the types of volumes mentioned earlier (such as &lt;code&gt;EmptyDir&lt;/code&gt; and &lt;code&gt;GCEPersistentDisk&lt;/code&gt;), Kubernetes also provides the following types of volumes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PersistentVolume&lt;/strong&gt;: This type of volume represents a piece of storage in the cluster that has been provisioned by the administrator. It can be used to store data that needs to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PersistentVolumeClaim&lt;/strong&gt;: This type of volume represents a request for storage by a user. It allows a pod to claim a specific PersistentVolume in the cluster, and to use it to store data that needs to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;StorageClass&lt;/strong&gt;: This type of resource defines the parameters for creating a PersistentVolume. It allows the administrator to specify the type and the size of the PersistentVolume, as well as other parameters such as the access mode (e.g. &lt;code&gt;ReadWriteOnce&lt;/code&gt; or &lt;code&gt;ReadOnlyMany&lt;/code&gt;), the provisioner (e.g. GCE, AWS, Azure), and the reclaim policy (e.g. &lt;code&gt;Delete&lt;/code&gt; or &lt;code&gt;Retain&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;To use a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; volume in a pod, you need to create a &lt;code&gt;PersistentVolume&lt;/code&gt; and a &lt;code&gt;StorageClass&lt;/code&gt; first, and then create the PersistentVolumeClaim. Here is an example configuration file that creates a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pv&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;persistentVolumeReclaimPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;
  &lt;span class="na"&gt;gcePersistentDisk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pdName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-disk&lt;/span&gt;
    &lt;span class="na"&gt;fsType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ext4&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;storage.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;StorageClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standard&lt;/span&gt;
&lt;span class="na"&gt;provisioner&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/gce-pd&lt;/span&gt;
&lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pd-standard&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standard&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a &lt;code&gt;PersistentVolume&lt;/code&gt; named &lt;code&gt;my-pv&lt;/code&gt; that uses a GCE persistent disk as the backing store. It also creates a StorageClass named &lt;code&gt;standard&lt;/code&gt; that specifies the parameters for creating the &lt;code&gt;PersistentVolume&lt;/code&gt;. Finally, it creates a &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; named &lt;code&gt;my-pvc&lt;/code&gt; that claims the &lt;code&gt;PersistentVolume&lt;/code&gt; and requests a storage capacity of &lt;code&gt;1Gi&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To create the PersistentVolume, the StorageClass, and the PersistentVolumeClaim in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pvc.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use the PersistentVolumeClaim volume in a pod, you need to specify it in the pod's configuration file, using the persistentVolumeClaim field. Here is an example configuration file that creates a pod with a PersistentVolumeClaim volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-volume&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/www/html&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-volume&lt;/span&gt;
    &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pvc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a pod with a PersistentVolumeClaim volume named &lt;code&gt;my-volume&lt;/code&gt;, and a container named &lt;code&gt;my-container&lt;/code&gt; that runs the &lt;code&gt;nginx&lt;/code&gt; image. The container mounts the volume at the &lt;code&gt;/var/www/html&lt;/code&gt; directory. The volume is claimed using the &lt;code&gt;my-pvc&lt;/code&gt; PersistentVolumeClaim, which was created in the previous step.&lt;/p&gt;

&lt;p&gt;To create the pod in the cluster, you can use the &lt;code&gt;kubectl create&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are several other types of volumes that are commonly used in Kubernetes: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EmptyDir&lt;/strong&gt;: This type of volume is created when the pod is created, and it is deleted when the pod is deleted. It is useful for storing temporary data that does not need to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HostPath&lt;/strong&gt;: This type of volume mounts a directory from the host's file system into the pod. It is useful for accessing data on the host from within the pod, or for debugging purposes.&lt;/p&gt;
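&lt;p&gt;A HostPath volume is declared in the pod's &lt;code&gt;volumes&lt;/code&gt; section. A minimal sketch (the name and the path &lt;code&gt;/var/log&lt;/code&gt; are arbitrary examples):&lt;/p&gt;

```yaml
volumes:
- name: host-logs          # hypothetical volume name
  hostPath:
    path: /var/log         # directory on the node's filesystem
    type: Directory        # fail if the path does not already exist
```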

&lt;p&gt;&lt;strong&gt;GCEPersistentDisk&lt;/strong&gt;: This type of volume mounts a Google Compute Engine (GCE) persistent disk into the pod. It is useful for storing data that needs to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWSElasticBlockStore&lt;/strong&gt;: This type of volume mounts an Amazon Web Services (AWS) Elastic Block Store (EBS) volume into the pod. It is useful for storing data that needs to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AzureDisk&lt;/strong&gt;: This type of volume mounts an Azure Disk into the pod. It is useful for storing data that needs to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NFS&lt;/strong&gt;: This type of volume mounts a Network File System (NFS) share into the pod. It is useful for storing data that needs to be shared among multiple pods, or that needs to be persisted beyond the lifecycle of the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConfigMap&lt;/strong&gt;: This type of volume mounts a ConfigMap as a set of files into the pod. It is useful for injecting configuration data into the containers in the pod.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secret&lt;/strong&gt;: This type of volume mounts a Secret as a set of files into the pod. It is useful for injecting sensitive data into the containers in the pod.&lt;/p&gt;
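&lt;p&gt;Mounting a Config Map or Secret as files only requires an entry in the pod's &lt;code&gt;volumes&lt;/code&gt; section. A sketch, using hypothetical resource names &lt;code&gt;my-config-map&lt;/code&gt; and &lt;code&gt;my-secret&lt;/code&gt;:&lt;/p&gt;

```yaml
volumes:
- name: config-volume
  configMap:
    name: my-config-map    # each key becomes a file in the mount directory
- name: secret-volume
  secret:
    secretName: my-secret  # note: the field is secretName, not name
```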

&lt;p&gt;Overall, volumes are an important part of the Kubernetes storage model, and they allow you to store and share data among the containers in a pod. PersistentVolume, PersistentVolumeClaim, and StorageClass are advanced resources that let you persist data beyond the lifecycle of the pod, and they provide a flexible and scalable way to manage the storage needs of your applications in the cluster. Although the in-tree &lt;code&gt;GCEPersistentDisk&lt;/code&gt;, &lt;code&gt;AWSElasticBlockStore&lt;/code&gt;, and &lt;code&gt;AzureDisk&lt;/code&gt; volume plugins have been deprecated in favor of CSI drivers, they are included here because they are still worth knowing.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>KUBERNETES FROM ZERO TO HERO (PART 6) - KUBERNETES SERVICES 💥🔥</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Mon, 26 Dec 2022 10:48:28 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-6-kubernetes-services-4cko</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-6-kubernetes-services-4cko</guid>
<description>&lt;p&gt;Pods are the basic units of deployment. Each pod is a group of one or more containers that runs on the cluster, and pods can be created and destroyed at any time. When you deploy an application on Kubernetes, you usually create a group of pods to run the application.&lt;/p&gt;

&lt;p&gt;One important thing to understand about pods is that they are ephemeral: when a pod is restarted or rescheduled, it is replaced with a new pod that has a different IP address. This matters if you are using the IP address of a pod to communicate with it.&lt;/p&gt;

&lt;p&gt;For example, let's say you have a pod running a database, and you are using the IP address of the pod to connect to the database from your application. If you restart the pod, it will be replaced with a new pod, which will have a different IP address. This means that you would need to update the IP address in your application every time the pod is replaced, which is tedious and error-prone. Clearly, we need something more stable, and that is where Services come into play.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Service?
&lt;/h2&gt;

&lt;p&gt;A service is an abstract way to expose an application running on a set of pods as a network endpoint. When you create a service, you specify which pods you want to expose and how you want to expose them.&lt;/p&gt;

&lt;p&gt;One important role of a service is to provide a fixed IP address for the application. This means that you can access the application using a fixed address, even if the underlying pods are changing or moving. For example, if you have a group of pods running a web server, you can create a service that exposes the web server on a fixed IP address. This way, you can access the web server using the fixed IP address, regardless of which pods are running the web server at any given time.&lt;/p&gt;

&lt;p&gt;This is useful because it allows you to access the application consistently, regardless of the underlying infrastructure. For example, if you are using the IP address of a pod to connect to a database running in the pod, you will need to update the IP address in your application when the pod is restarted. By using a service, you can avoid this problem because the service provides a stable network endpoint for the application, regardless of the underlying pods.&lt;/p&gt;

&lt;p&gt;Another role of a service is to act as a load balancer. This means that it distributes incoming traffic among the underlying pods in a way that is appropriate for the application. For example, if you have a group of pods running a web server, you can create a service that exposes the web server on a fixed IP address. The service will then act as a load balancer, distributing incoming requests to the web server among the underlying pods.&lt;/p&gt;

&lt;p&gt;This is useful because it allows you to scale your application horizontally by adding more pods. For example, if you have a group of pods running a web server, and the traffic to the web server increases, you can simply add more pods to the group. The service will automatically distribute the traffic among the new pods, and the application will continue to be available without any disruption.&lt;/p&gt;
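&lt;p&gt;For example, if the pods behind the service are managed by a Deployment (say, a hypothetical &lt;code&gt;my-deployment&lt;/code&gt;), scaling out is a single command, and the service picks up the new pods automatically:&lt;/p&gt;

```shell
# Add more replicas; the service load-balances across all of them
kubectl scale deployment my-deployment --replicas=5
```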

&lt;h2&gt;
  
  
  &lt;u&gt;Types of services&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;ClusterIP:&lt;/strong&gt;&lt;/u&gt; This is the default type of service in Kubernetes. It exposes the application on a cluster-internal IP address, which is only reachable from within the cluster. This type of service is useful when you want to access the application from within the cluster, but you don't want to expose it to the external network.&lt;br&gt;
Here is an example configuration file for creating a ClusterIP service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a ClusterIP service named my-service that exposes the pods with the &lt;code&gt;app: my-app&lt;/code&gt; label on &lt;code&gt;port 80&lt;/code&gt;. The pods must be running a container that listens on &lt;code&gt;port 80&lt;/code&gt;. The service exposes the application on a cluster-internal IP address, which is only reachable from within the cluster.&lt;/p&gt;

&lt;p&gt;To create the service in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the service in the cluster, and you can verify it by running the &lt;code&gt;kubectl get services&lt;/code&gt; command. The cluster IP address of the service can be obtained from the CLUSTER-IP column.&lt;/p&gt;
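&lt;p&gt;Since a ClusterIP service is only reachable from inside the cluster, one way to test it is from a temporary pod. A minimal sketch, assuming the my-service example above and a busybox test image (both names illustrative):&lt;/p&gt;

```shell
# Launch a throwaway pod and request the service by its DNS name;
# cluster DNS resolves my-service to the service's cluster IP.
kubectl run tmp-test --rm -it --image=busybox --restart=Never \
  -- wget -qO- http://my-service:80
```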

&lt;p&gt;&lt;u&gt;&lt;strong&gt;LoadBalancer&lt;/strong&gt;&lt;/u&gt;: This type of service exposes the application on a public IP address, using an external load balancer provided by the cloud provider. The load balancer distributes the traffic among the underlying pods. This type of service is useful when you want to expose the application to the external network.&lt;/p&gt;

&lt;p&gt;Here is an example configuration file for creating a LoadBalancer service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a LoadBalancer service named my-service that exposes the pods with the &lt;code&gt;app: my-app&lt;/code&gt; label on &lt;code&gt;port 80&lt;/code&gt;. The pods must be running a container that listens on &lt;code&gt;port 80&lt;/code&gt;. The service exposes the application on a public IP address, using an external load balancer provided by the cloud provider. The load balancer distributes the traffic among the underlying pods.&lt;/p&gt;

&lt;p&gt;To create the service in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the service in the cluster, and you can verify it by running the &lt;code&gt;kubectl get services&lt;/code&gt; command. The public IP address of the service can be obtained from the EXTERNAL-IP column.&lt;/p&gt;

&lt;p&gt;Keep in mind that the exact process for creating a LoadBalancer service may vary depending on your cloud provider and the type of load balancer that you are using. Some cloud providers may require you to create the load balancer manually, and then specify the load balancer's IP address or DNS name in the service configuration file. Other cloud providers like AWS will automatically create the load balancer when you create the service. Consult the documentation of your cloud provider for more information.&lt;/p&gt;
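&lt;p&gt;Because cloud load balancers are provisioned asynchronously, the EXTERNAL-IP column may show &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt; at first. A sketch of waiting for the address (the curl line is illustrative):&lt;/p&gt;

```shell
# Watch the service until the cloud provider assigns an external IP
kubectl get service my-service --watch

# Once EXTERNAL-IP is populated, test from outside the cluster:
# curl http://<external-ip>/
```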

&lt;p&gt;&lt;u&gt;&lt;strong&gt;NodePort&lt;/strong&gt;&lt;/u&gt;: This type of service exposes the application on a fixed port on each node in the cluster. This allows you to access the application from the external network using the IP address of any node in the cluster, while kube-proxy distributes the traffic among the underlying pods. This type of service is useful when you want to access the application from the external network but don't have a load balancer available.&lt;br&gt;
Here is an example configuration file for creating a NodePort service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a NodePort service named my-service that exposes the pods with the &lt;code&gt;app: my-app&lt;/code&gt; label on &lt;code&gt;port 80&lt;/code&gt;. The pods must be running a container that listens on &lt;code&gt;port 80&lt;/code&gt;. The service exposes the application on a fixed port (30080 in this example) of each node in the cluster. You can access the application from the external network using the IP address of any node in the cluster and the fixed port number.&lt;/p&gt;

&lt;p&gt;To create the service in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the service in the cluster, and you can verify it by running the &lt;code&gt;kubectl get services&lt;/code&gt; command. The node port number appears in the PORT(S) column (for example, &lt;code&gt;80:30080/TCP&lt;/code&gt;).&lt;/p&gt;
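&lt;p&gt;To try the NodePort service, you need the IP address of any node plus the fixed port. A sketch, assuming the 30080 port from the example:&lt;/p&gt;

```shell
# The INTERNAL-IP / EXTERNAL-IP columns list node addresses
kubectl get nodes -o wide

# From outside the cluster, any node's IP plus the node port works:
# curl http://<node-ip>:30080/
```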

&lt;p&gt;Overall, services are an important part of the Kubernetes network model, and they provide a stable network endpoint for applications running on the cluster. They allow you to expose your applications to the external network in a flexible and scalable way, and they provide load balancing and traffic management capabilities.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa" rel="noopener noreferrer"&gt;here &lt;/a&gt;for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>welcome</category>
    </item>
    <item>
      <title>KUBERNETES: FROM ZERO TO HERO (PART 5) - DEPLOYMENT AND PODS 💥🔥</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Sun, 25 Dec 2022 08:11:14 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-5-deployment-and-pods-15ng</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-5-deployment-and-pods-15ng</guid>
      <description>&lt;p&gt;In part 2 of this series, we learnt that there are worker nodes and master nodes or control plane. We learnt that nodes are the physical machines or virtual machines that kubernetes runs on. And a kubernetes cluster is only a group of two or more nodes. Today we are going to learn more about the resources we discussed about in part 1 of this series. We are going to learn more about pods and deployment. So are you with me? Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  DEPLOYMENT
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, a deployment is a resource that represents a set of replicas of a pod. It provides a way to update the pods in a rolling fashion, ensuring that the desired number of replicas is available at all times.&lt;/p&gt;

&lt;p&gt;A deployment consists of a set of desired properties, including the number of replicas, the labels and selectors used to identify the pods, and the container image and command to run in the pods. It also has a set of actual properties, including the current number of replicas and the status of the pods.&lt;/p&gt;

&lt;p&gt;When you create a deployment, Kubernetes will create the desired number of replicas of the pod and ensure that they are running on the worker nodes. If a pod goes down, the deployment will create a new one to replace it. You can also update the deployment by changing the desired properties, such as the container image or the number of replicas. Kubernetes will then update the pods in a rolling fashion, ensuring that the desired number of replicas is available at all times.&lt;/p&gt;

&lt;p&gt;Here is an example deployment configuration file for Kubernetes in YAML format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container-image:latest&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/bash"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;world"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a deployment called &lt;code&gt;my-deployment&lt;/code&gt; with three replicas of a pod. The pod contains a single container, &lt;code&gt;my-container&lt;/code&gt;, which is based on the &lt;code&gt;my-container-image:latest&lt;/code&gt; image and runs the &lt;code&gt;echo hello world&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;The replicas field specifies the desired number of replicas of the pod, and the selector field specifies the labels and selectors used to identify the pods. The template field specifies the template for the pods, including the labels, the container image and command, and any other desired properties.&lt;/p&gt;

&lt;p&gt;Overall, deployments are a useful resource in Kubernetes for managing the replicas of a pod and ensuring that the desired number of replicas is available at all times. They provide a way to update the pods in a rolling fashion, making it easy to deploy and manage applications in a cluster.&lt;/p&gt;
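&lt;p&gt;The rolling update described above can be triggered from the command line. A sketch, assuming the my-deployment example (the v2 image tag is hypothetical):&lt;/p&gt;

```shell
# Change the container image; pods are replaced gradually
kubectl set image deployment/my-deployment my-container=my-container-image:v2

# Follow the rollout; roll back if something goes wrong
kubectl rollout status deployment/my-deployment
kubectl rollout undo deployment/my-deployment
```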

&lt;h2&gt;
  
  
  Pods
&lt;/h2&gt;

&lt;p&gt;A pod is the basic unit of deployment. It is a logical host for one or more containers, and it represents the smallest deployable unit in the cluster. Pods are usually managed by higher-level abstractions such as deployments or stateful sets, which handle the details of scaling and self-healing. &lt;/p&gt;

&lt;p&gt;In very simple language, pods are like little houses for containers. Containers are like really small computers that can run different types of programs. Pods are like houses because they have a place for the containers to live and they keep the containers safe. Pods can have one or more containers inside them, depending on how big the program is that the containers are running.&lt;/p&gt;

&lt;p&gt;A pod configuration file is a file that describes the desired state of a pod or set of pods. It specifies the details of the pod or pods, such as the containers that should be run, the resources they should consume, and the volumes they should mount.&lt;/p&gt;

&lt;p&gt;Here is an example configuration file for creating a pod in Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:latest&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-volume&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a pod named &lt;code&gt;my-pod&lt;/code&gt; with a single container running the nginx image. The container exposes port 80, and the pod defines an emptyDir volume named my-volume (note that no container mounts it in this example; the container would also need a volumeMounts entry to use it).&lt;/p&gt;

&lt;p&gt;To create the pod in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the pod in the cluster, and you can verify it by running the &lt;code&gt;kubectl get pods&lt;/code&gt; command.&lt;/p&gt;
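&lt;p&gt;If the pod does not reach the Running state, two commands are usually enough to see why. A sketch using the names from the example above:&lt;/p&gt;

```shell
# Detailed state and recent events (image pull errors, scheduling, etc.)
kubectl describe pod my-pod

# Logs from the container inside the pod
kubectl logs my-pod -c my-container
```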

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa" rel="noopener noreferrer"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>welcome</category>
    </item>
    <item>
      <title>KUBERNETES: FROM ZERO TO HERO (PART 4) - KUBERNETES CONFIGURATION FILES💥🔥</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Sat, 24 Dec 2022 08:08:27 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-4-kubernetes-configuration-files-4gme</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-4-kubernetes-configuration-files-4gme</guid>
      <description>&lt;p&gt;Kubernetes configuration files are used to define the desired state of a cluster and the resources that run on it. They are written in YAML or JSON format and are used to create, update, and delete resources in the cluster. A kubernetes config file consists of about 5 parts, the:&lt;/p&gt;

&lt;p&gt;💎&lt;strong&gt;apiVersion&lt;/strong&gt;: The version of the Kubernetes API that the configuration file is using. This field is used to ensure that the configuration file is compatible with the version of the API server.&lt;/p&gt;

&lt;p&gt;💎&lt;strong&gt;kind&lt;/strong&gt;: The type of resource that the configuration file is defining. This can be a pod, service, deployment, or any other resource type supported by Kubernetes.&lt;/p&gt;

&lt;p&gt;💎&lt;strong&gt;metadata&lt;/strong&gt;: Metadata about the resource, such as its name, labels, and annotations. This information is used to identify and classify the resource.&lt;/p&gt;

&lt;p&gt;💎&lt;strong&gt;spec&lt;/strong&gt;: The specification of the resource, including its properties and behavior. The exact contents of this field depend on the type of resource being defined.&lt;/p&gt;

&lt;p&gt;💎&lt;strong&gt;status&lt;/strong&gt;: The current status of the resource, including its conditions and any other relevant information. This field is usually generated by the API server and is not typically specified in the configuration file.&lt;/p&gt;

&lt;p&gt;There are several types of resources that can be defined in a Kubernetes configuration file, including pods, services, and deployments. Each resource type has its own set of properties that can be specified in the configuration file.&lt;/p&gt;

&lt;p&gt;Here are some examples of Kubernetes configuration files for different resource types:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pods&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container-image:latest&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/bash"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;world"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a pod called my-pod with a single container, my-container, based on the my-container-image:latest image. The container runs the echo hello world command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a service called my-service that exposes the pods with the app: my-app label on port 80. The service routes traffic to the pods on port 8080.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-deployment&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-container-image:latest&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/bash"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;world"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a deployment called my-deployment with three replicas of a pod. The pod contains a single container, my-container, which is based on the my-container-image:latest image and runs the echo hello world command.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConfigMap&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-configmap&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Add key-value pairs here&lt;/span&gt;
  &lt;span class="na"&gt;key1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value1&lt;/span&gt;
  &lt;span class="na"&gt;key2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value2&lt;/span&gt;
  &lt;span class="na"&gt;key3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;value3&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a configmap named my-configmap with three key-value pairs: &lt;code&gt;key1: value1, key2: value2, and key3: value3&lt;/code&gt;.&lt;/p&gt;
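&lt;p&gt;The same ConfigMap can also be created imperatively, without a file; a sketch equivalent to the YAML above:&lt;/p&gt;

```shell
# Each --from-literal flag adds one key-value pair
kubectl create configmap my-configmap \
  --from-literal=key1=value1 \
  --from-literal=key2=value2 \
  --from-literal=key3=value3
```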

&lt;p&gt;&lt;strong&gt;Secrets&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-secret&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Add key-value pairs here, with the value in base64 encoding&lt;/span&gt;
  &lt;span class="na"&gt;key1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dmFsdWUx&lt;/span&gt;
  &lt;span class="na"&gt;key2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dmFsdWUy&lt;/span&gt;
  &lt;span class="na"&gt;key3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dmFsdWUz&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration file creates a secret named my-secret with three key-value pairs: &lt;code&gt;key1: value1, key2: value2, and key3: value3&lt;/code&gt;. The values are encoded in base64.&lt;/p&gt;
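&lt;p&gt;The base64 values can be produced (and checked) in a shell. For example, encoding the string value1 yields the dmFsdWUx shown above:&lt;/p&gt;

```shell
# -n prevents a trailing newline from being encoded as well
echo -n value1 | base64          # prints dmFsdWUx
echo dmFsdWUx | base64 --decode  # prints value1
```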

&lt;p&gt;To create any resource in the cluster, you can use the &lt;code&gt;kubectl create -f&lt;/code&gt; command, passing the configuration file as an argument, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Overall, Kubernetes configuration files are a powerful tool for defining the desired state of a cluster and the resources that run on it. They can be used to create, update, and delete resources in the cluster and allow you to easily deploy and manage applications in a cluster.&lt;/p&gt;
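&lt;p&gt;For updating resources that already exist, &lt;code&gt;kubectl apply -f&lt;/code&gt; is the usual companion to &lt;code&gt;kubectl create -f&lt;/code&gt;: it creates the resource if it is missing and updates it in place otherwise. A sketch:&lt;/p&gt;

```shell
# Create or update resources from a file
kubectl apply -f configmap.yaml

# Remove the resources defined in the same file
kubectl delete -f configmap.yaml
```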

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa" rel="noopener noreferrer"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>webdev</category>
    </item>
    <item>
      <title>KUBERNETES FROM ZERO TO HERO (PART 3) - INSTALLING MINIKUBE, KUBECTL AND KUBECTL COMMANDS</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Fri, 23 Dec 2022 12:01:30 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-3-installing-minikube-kubectl-and-kubectl-commands-9bm</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-3-installing-minikube-kubectl-and-kubectl-commands-9bm</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6YE-9saT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ft80xfcx4wa98nj9ghq2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6YE-9saT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ft80xfcx4wa98nj9ghq2.jpeg" alt="Image description" width="422" height="119"&gt;&lt;/a&gt;&lt;br&gt;
Minikube is a tool that allows you to run a single-node Kubernetes cluster locally, on your own machine. It is designed to be used for testing, development, and learning purposes, as it allows you to run a full-featured Kubernetes cluster on your laptop or desktop.&lt;/p&gt;

&lt;p&gt;Minikube can use a hypervisor such as the popular open-source VirtualBox to create a virtual machine (VM) on which it runs the Kubernetes control plane and other components. It also uses the kubectl command-line interface (CLI) to manage the cluster and deploy applications.&lt;/p&gt;

&lt;p&gt;To install Minikube, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install a hypervisor on your machine, such as VirtualBox or Hyper-V on Windows, or HyperKit on macOS.&lt;/li&gt;
&lt;li&gt;Download and install the minikube CLI. This can be done using a package manager, such as apt, yum, or brew, depending on your operating system.&lt;/li&gt;
&lt;li&gt;Run the minikube start command to create and start the Minikube cluster. By default, this creates a cluster with one node running in a VM on your machine.&lt;/li&gt;
&lt;li&gt;Run the kubectl get nodes command to verify that the cluster is running and the node is ready.&lt;/li&gt;
&lt;li&gt;Use the kubectl CLI to deploy applications to the cluster and manage its resources.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;There are several ways to communicate with the Kubernetes control plane or master node, and one of them is using a tool called &lt;strong&gt;kubectl&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;u&gt;KUBECTL&lt;/u&gt;
&lt;/h2&gt;

&lt;p&gt;kubectl is the command-line interface (CLI) for managing Kubernetes clusters. It is a powerful tool that allows you to control and manage the resources of a cluster, such as pods, services, and deployments.&lt;/p&gt;

&lt;p&gt;kubectl communicates with the Kubernetes API server to perform various operations, such as creating, updating, and deleting resources in the cluster. It can be used to view the status of the cluster, debug issues, and perform other tasks related to the management of the cluster.&lt;/p&gt;

&lt;p&gt;Here are some of the most useful kubectl commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl get&lt;/strong&gt;: Used to list the resources in a cluster, such as pods, services, and deployments.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="c"&gt;#Lists all the pods in the current namespace.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl describe&lt;/strong&gt;: Used to view detailed information about a resource, such as its status, labels, and events.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe pod &amp;lt;pod-name&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl create&lt;/strong&gt;: Used to create a new resource in the cluster, such as a pod, service, or deployment.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;config-file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
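&lt;p&gt;For illustration, here is a minimal example of what such a config file could contain. This is a sketch; the pod name, labels, and image are placeholders, not values from this article:&lt;/p&gt;

```yaml
# pod.yaml - a minimal Pod manifest that could be passed to
# "kubectl create -f pod.yaml" (or "kubectl apply -f pod.yaml").
# The name, labels, and image below are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
    - name: nginx
      image: nginx:1.23      # any container image works here
      ports:
        - containerPort: 80  # port the container listens on
```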



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl apply:&lt;/strong&gt; Used to apply a configuration file to create or update a resource in the cluster.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;config-file&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl delete&lt;/strong&gt;: Used to delete a resource from the cluster.
Example
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl exec&lt;/strong&gt;: Used to execute a command in a running container.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;--&lt;/span&gt; &amp;lt;&lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl logs&lt;/strong&gt;: Used to view the logs of a container.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl scale&lt;/strong&gt;: Used to scale the number of replicas of a deployment or replicaset.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl expose deployment &amp;lt;deployment-name&amp;gt; &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;LoadBalancer &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
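&lt;p&gt;Under the hood, scaling a deployment changes the replicas field of its spec. As a sketch (the names and image are placeholders), the relevant part of a Deployment manifest looks like this:&lt;/p&gt;

```yaml
# Fragment of a Deployment manifest; "kubectl scale" effectively
# rewrites spec.replicas to the requested count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # the field that scaling modifies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.23   # illustrative image
```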



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl rollout&lt;/strong&gt;: Used to manage the rollout of a deployment, including pausing, resuming, and rolling back a rollout.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl edit&lt;/strong&gt;: Used to edit a resource in-place, allowing you to make changes to the resource directly from the command line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl top&lt;/strong&gt;: Used to view resource usage of pods or nodes in the cluster.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl top pod &lt;span class="c"&gt;#Shows resource usage of pods in the cluster.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl cp&lt;/strong&gt;: Used to copy files to and from containers in a pod.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;cp&lt;/span&gt; &amp;lt;local-file-path&amp;gt; &amp;lt;pod-name&amp;gt;:&amp;lt;remote-file-path&amp;gt; &lt;span class="c"&gt;#Copies a file from the local machine to a pod.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl cluster-info&lt;/strong&gt;: Used to view information about the cluster, such as the version of Kubernetes and the list of nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl config&lt;/strong&gt;: Used to manage the kubectl configuration, including setting the current context and namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl patch&lt;/strong&gt;: Used to apply a patch to a resource, allowing you to make changes to the resource without replacing it.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch deployment &amp;lt;deployment-name&amp;gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec":{"template":{"spec":{"containers":[{"name":"&amp;lt;container-name&amp;gt;","image":"&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;"}]}}}}'&lt;/span&gt; &lt;span class="c"&gt;#Applies a patch to a deployment to change the image of a specific container.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl drain&lt;/strong&gt;: Used to drain a node in a cluster, evacuating all pods from the node and marking it as unavailable for scheduling.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl drain &amp;lt;node-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl cordon&lt;/strong&gt;: Used to mark a node as unschedulable, preventing new pods from being scheduled on the node.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl cordon &amp;lt;node-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl uncordon&lt;/strong&gt;: Used to mark a node as schedulable, allowing new pods to be scheduled on the node.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl uncordon &amp;lt;node-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl run&lt;/strong&gt;: Used to run a container image in a new pod (in older versions of kubectl it could also create deployments and replicasets, but in current versions it creates a single pod; use kubectl create deployment for deployments).
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run &amp;lt;deployment-name&amp;gt; --image=&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt; --replicas=2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl expose&lt;/strong&gt;: Used to expose a deployment, replicaset, or pod as a service, giving it a stable address inside the cluster (or outside it, depending on the service type).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl expose pod &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;service-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl explain&lt;/strong&gt;: Used to view the documentation and detailed information about a resource, including its fields and their descriptions.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl explain pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl get componentstatuses&lt;/strong&gt;: Used to view the health of the control-plane components (scheduler, controller-manager, etcd) in a cluster; deprecated in recent Kubernetes versions, but still useful for a quick check.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl cluster-health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl taint&lt;/strong&gt;: Used to add, modify, or remove a taint on a node, allowing you to control which pods can be scheduled on the node.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kubectl label&lt;/strong&gt;: Used to add, modify, or remove labels on a resource, allowing you to classify and organize resources in the cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kubectl port-forward&lt;/strong&gt;: Forwards a local port to a port on a pod, allowing you to access the pod from your local machine.&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &amp;lt;pod-name&amp;gt; 8080:80 &lt;span class="c"&gt;#Forwards the local port 8080 to the port 80 of a specific pod.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl set&lt;/strong&gt;: Used to set or unset various options, such as the image of a container or the environment variables of a pod.
Examples:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;set &lt;/span&gt;image deployment &amp;lt;deployment-name&amp;gt; &amp;lt;container-name&amp;gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;image&amp;gt;:&amp;lt;tag&amp;gt;

kubectl &lt;span class="nb"&gt;set env &lt;/span&gt;deployment &amp;lt;deployment-name&amp;gt; &lt;span class="nv"&gt;KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl api-resources&lt;/strong&gt;: Used to list the available API resources in the cluster, including their names, short names, and categories.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl api-resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl api-versions&lt;/strong&gt;: Used to list the available API versions in the cluster.
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl api-versions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl convert&lt;/strong&gt;: Used to convert configuration files between different Kubernetes API versions (in recent releases this ships as the separate kubectl-convert plugin rather than a built-in command).
Example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl convert &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;config-file&amp;gt; &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl certificate&lt;/strong&gt;: Used to manage the certificate authorities (CAs) and certificates used by the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, kubectl is a powerful and flexible tool for managing Kubernetes clusters. It can be used to perform a wide range of tasks, from listing and describing resources to creating, updating, and deleting them. It is an essential tool for anyone working with Kubernetes.&lt;/p&gt;

&lt;p&gt;🌟 🔥 If you want to switch your career into tech and you are considering DevOps, you can join our online community &lt;a href="https://chat.whatsapp.com/LARpsS1xhYi75TdJYh8TMa"&gt;here&lt;/a&gt; for live classes and FREE tutorial videos.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>KUBERNETES FROM ZERO TO HERO (PART 2) - KUBERNETES ARCHITECTURE 💥🔥</title>
      <dc:creator>Samuel Ogunmola</dc:creator>
      <pubDate>Thu, 22 Dec 2022 08:37:55 +0000</pubDate>
      <link>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-2-kubernetes-architecture-dm</link>
      <guid>https://dev.to/developermide/kubernetes-from-zero-to-hero-part-2-kubernetes-architecture-dm</guid>
      <description>&lt;p&gt;In the last part of this series, we learnt that Kubernetes is a powerful open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It provides a set of APIs and tools that can be used to deploy, scale, and manage containerized applications across a cluster of servers. Today we are going to be learning how different parts of kubernetes works together. We are going to see the architecture of kubernetes and how it's components works together to help kubernetes do it's work. Are you with me? Let's get started right away 👍.&lt;/p&gt;

&lt;p&gt;At a high level, the architecture of Kubernetes consists of a set of master nodes and worker nodes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcld4lm4202kwxzi02e6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcld4lm4202kwxzi02e6p.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The master nodes or control planes are responsible for managing the worker nodes and the resources that run on them. The worker nodes are responsible for running the containers of an application.&lt;/p&gt;

&lt;p&gt;Here is a more detailed breakdown of the components of the Kubernetes architecture ✨⭐️:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t71qmf8vza2xrymoslt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t71qmf8vza2xrymoslt.png" alt="Image description" width="800" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The worker node consists of three components or, put a better way, three processes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Container Runtime&lt;/li&gt;
&lt;li&gt;Kubelet and,&lt;/li&gt;
&lt;li&gt;Kube-proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;while the master node has four processes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Server&lt;/li&gt;
&lt;li&gt;Scheduler&lt;/li&gt;
&lt;li&gt;Controller-Manager&lt;/li&gt;
&lt;li&gt;etcd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we know these, let's look at each of them in detail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✨🎖️. WORKER NODE PROCESSES&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container Runtime:&lt;/strong&gt; the container runtime is responsible for running the containers of an application. It is the software that is responsible for creating and managing the containers, as well as communicating with the operating system to allocate resources and perform other tasks.&lt;br&gt;
Kubernetes supports a number of container runtimes, including containerd, CRI-O, and (historically) Docker. Docker has been the most widely used container engine, but recent versions of Kubernetes have removed the built-in Docker integration (dockershim) in favor of runtimes that implement the Container Runtime Interface (CRI), such as containerd.&lt;br&gt;
Containerd is an open-source container runtime that is designed to be lightweight and modular. It provides a stable and consistent runtime for containers and is used by many cloud providers and container orchestrators, including Kubernetes.&lt;br&gt;
CRI-O is an open-source container runtime that is designed specifically for Kubernetes. It is an independent implementation of the Kubernetes Container Runtime Interface (CRI) built on OCI-compatible runtimes such as runc, providing a lightweight and stable runtime for containers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Kubelet:&lt;/strong&gt; The kubelet is a daemon that runs on each worker node and is responsible for managing the pods and containers on that node. It communicates with the Kubernetes API server to receive instructions and report the status of the pods and containers.&lt;br&gt;
The kubelet is responsible for several key tasks, including:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Running and maintaining the desired number of replicas of a pod: The kubelet ensures that the desired number of replicas of a pod are running at any given time. If a pod goes down, it will create a new one to replace it.&lt;/li&gt;
&lt;li&gt;Communicating the status of the pods and containers to the API server: The kubelet reports the status of the pods and containers to the API server, including their resource usage and health status.&lt;/li&gt;
&lt;li&gt;Mounting volumes and secrets: The kubelet is responsible for mounting volumes and secrets onto the pods as specified in the pod configuration.&lt;/li&gt;
&lt;li&gt;Managing the containers of a pod: The kubelet is responsible for starting, stopping, and restarting the containers of a pod as needed. It also monitors the health of the containers and restarts them if necessary.&lt;/li&gt;
&lt;li&gt;Managing the network namespace of a pod: The kubelet is responsible for setting up the network namespace of a pod and configuring the network interfaces, routes, and firewall rules as needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
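&lt;p&gt;To make the volume and secret mounting concrete, here is a sketch of a pod spec that the kubelet would act on; the names, paths, and image are illustrative, and it assumes a Secret named api-creds already exists:&lt;/p&gt;

```yaml
# Illustrative Pod spec: the kubelet on the scheduled node mounts
# the declared secret volume into the container at the given path.
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo              # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.23          # illustrative image
      volumeMounts:
        - name: api-creds
          mountPath: /etc/creds  # where the kubelet mounts the secret
          readOnly: true
  volumes:
    - name: api-creds
      secret:
        secretName: api-creds    # assumes this Secret already exists
```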

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kube-proxy:&lt;/strong&gt; The kube-proxy is a daemon that runs on each worker node and is responsible for implementing the network proxy and load balancer functions for the node. It is used to forward traffic to the appropriate pods and services based on the destination IP and port. In simple terms, it helps services do the work of load balancing.
The kube-proxy works in conjunction with the kube-apiserver to ensure that the desired state of the cluster is maintained. It listens for changes to the services and endpoints in the cluster and updates the iptables rules on the node accordingly. This allows it to load balance traffic to the correct pods and services and ensure that traffic is routed correctly within the cluster.
The kube-proxy supports a number of different modes of operation, including userspace, iptables, and ipvs. The default mode is iptables, which uses the Linux kernel's built-in packet filtering system to forward traffic. The userspace mode forwards traffic through a proxy process in userspace (and is largely obsolete), while the ipvs mode uses the kernel's IP Virtual Server and offers additional features, such as more load-balancing algorithms and better performance in large clusters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's look at the master node processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✨🏆 MASTER NODE PROCESSES&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Server:&lt;/strong&gt; The API server is the central point of communication between the master nodes and the worker nodes. It exposes a RESTful API that can be used to manage the resources of the cluster, such as pods, services, and deployments.
The API server is responsible for several key tasks, including:&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Receiving and processing API requests: The API server receives API requests from clients, such as kubectl or Kubernetes client libraries, and processes them to create, update, or delete resources in the cluster.&lt;/li&gt;
&lt;li&gt;Storing the desired state of the cluster: The API server stores the desired state of the cluster in etcd, a distributed key-value store. It ensures that the actual state of the cluster matches the desired state by reconciling any discrepancies and making the necessary changes.&lt;/li&gt;
&lt;li&gt;Validating and authorizing API requests: The API server validates and authorizes API requests to ensure that only authorized clients can make changes to the cluster. It uses a combination of built-in and pluggable authentication and authorization modules to enforce access controls.&lt;/li&gt;
&lt;li&gt;Serving the API: The API server exposes a number of endpoints that allow clients to interact with the resources of the cluster. It serves the API over HTTPS and can be configured to use TLS certificates for secure communication.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduler:&lt;/strong&gt;  The scheduler is a component that is responsible for scheduling pods to run on the worker nodes of a cluster. It takes into account the available resources of the worker nodes and the resource requirements of the pods to determine the best place to run them.&lt;br&gt;
The scheduler works in conjunction with the API server and the kubelets to ensure that the desired state of the cluster is maintained. It receives pod scheduling requests from the API server and assigns them to a worker node based on the available resources and the resource requirements of the pod. It also communicates with the kubelets to ensure that the pods are actually running on the assigned worker nodes.&lt;br&gt;
The scheduler can be configured to use various scheduling algorithms and policies to make scheduling decisions. For example, you can specify constraints on where a pod can be scheduled, such as on a particular node or in a particular region. You can also specify resource requirements for a pod, such as the amount of CPU and memory it needs, and the scheduler will try to find a worker node that can accommodate those requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Controller-Manager:&lt;/strong&gt; In Kubernetes, the controller manager is a daemon that runs on the master nodes and is responsible for managing the controllers in the cluster. Controllers are responsible for maintaining the desired state of the cluster by reconciling any discrepancies between the actual state and the desired state.&lt;br&gt;
The controller manager is responsible for starting and stopping the controllers, as well as monitoring their health and restarting them if necessary. It also communicates with the API server to receive updates about the resources in the cluster and takes action to ensure that the desired state is maintained.&lt;br&gt;
There are several types of controllers in Kubernetes, including:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;ReplicationController: Ensures that the desired number of replicas of a pod are running at any given time.&lt;/li&gt;
&lt;li&gt;Deployment: Provides a way to update the pods in a rolling fashion, ensuring that the desired number of replicas are available at all times.&lt;/li&gt;
&lt;li&gt;StatefulSet: Manages the deployment and scaling of a set of pods that have persistent storage.&lt;/li&gt;
&lt;li&gt;DaemonSet: Ensures that a copy of a pod is running on all or a subset of the worker nodes.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
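&lt;p&gt;As a concrete sketch of one of these controllers, a DaemonSet that runs one copy of a pod on every worker node could be declared like this (the names and image are placeholders):&lt;/p&gt;

```yaml
# Minimal DaemonSet sketch: the DaemonSet controller ensures one
# pod from this template runs on each schedulable node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent             # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: busybox:1.36  # illustrative image
          command: ["sh", "-c", "sleep 86400"]  # placeholder workload
```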

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;etcd:&lt;/strong&gt; etcd is a distributed key-value store that is used to store the configuration data of a Kubernetes cluster. It is used to store the desired state of the cluster and is used by the API server to ensure that the actual state of the cluster matches the desired state.
etcd is a highly available and distributed system that can be deployed on a cluster of machines. It stores the data in a replicated log, which allows it to maintain consistency across the cluster and recover from failures. It also provides a number of features to support distributed systems, such as leader election and distributed locks.
In Kubernetes, etcd is used to store a wide range of data, including the desired state of the pods, services, and deployments in the cluster. It is also used to store the configuration of the master nodes and the worker nodes, as well as the network and security policies of the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, the architecture of Kubernetes consists of a set of master nodes and worker nodes that are responsible for managing and running the containers of an application. The API server, etcd, scheduler, and kubelet are key components that work together to ensure that the desired state of the cluster is maintained and that the containers of an application are running as intended. Understanding the architecture of Kubernetes is essential for effectively deploying and managing applications in a cluster 😋.&lt;/p&gt;

&lt;p&gt;💎 Note: This is the part 2 of the "Kubernetes: From Zero to Hero" Series. If you want to become a DevOps engineer then you should join our online community &lt;a href="http://bit.ly/Devops_training" rel="noopener noreferrer"&gt;here&lt;/a&gt; for more content like this.  Merry Christmas 🎅 &lt;/p&gt;

</description>
      <category>welcome</category>
    </item>
  </channel>
</rss>
