<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manikanta</title>
    <description>The latest articles on DEV Community by Manikanta (@devopsbymani).</description>
    <link>https://dev.to/devopsbymani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2345546%2Ff777b7e3-cc4a-4d69-99b3-ecb6d0642e85.jpeg</url>
      <title>DEV Community: Manikanta</title>
      <link>https://dev.to/devopsbymani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devopsbymani"/>
    <language>en</language>
    <item>
      <title>Kubernetes Resource Quotas and Limit Ranges with Namespaces</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Fri, 08 Nov 2024 11:08:44 +0000</pubDate>
      <link>https://dev.to/devopsbymani/kubernetes-resource-quotas-and-limit-ranges-with-namespaces-3lhh</link>
      <guid>https://dev.to/devopsbymani/kubernetes-resource-quotas-and-limit-ranges-with-namespaces-3lhh</guid>
      <description>&lt;p&gt;Kubernetes is a powerful container orchestration platform that allows you to manage and scale containerized applications. When managing workloads in a Kubernetes cluster, it’s important to control resource usage to ensure fair sharing of resources, prevent any single workload from over-consuming cluster resources, and ensure that your applications run reliably. &lt;/p&gt;

&lt;p&gt;Two key concepts in Kubernetes help with resource management: &lt;strong&gt;Resource Quotas&lt;/strong&gt; and &lt;strong&gt;Limit Ranges&lt;/strong&gt;. These are applied within a &lt;strong&gt;Namespace&lt;/strong&gt;, which is a way to organize and isolate resources within a cluster. In this blog, we will explain these concepts in simple terms and show you how to configure them using Kubernetes manifests.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Namespace in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;Namespace&lt;/strong&gt; in Kubernetes is a way to partition cluster resources into logically named groups. It provides a mechanism for isolating resources (like pods, services, deployments, etc.) so that they don’t interfere with each other. Namespaces help you manage resources for different teams, projects, or environments (e.g., &lt;code&gt;dev&lt;/code&gt;, &lt;code&gt;staging&lt;/code&gt;, &lt;code&gt;prod&lt;/code&gt;) within the same Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;For example, a namespace could be created for each department in an organization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;dev&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;staging&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;production&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By grouping resources under namespaces, it becomes easier to manage permissions, policies, and quotas.&lt;/p&gt;
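&lt;p&gt;For instance, you can create and list namespaces with &lt;code&gt;kubectl&lt;/code&gt; (the names below are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the example namespaces
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production

# List all namespaces in the cluster
kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;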




&lt;h2&gt;
  
  
  What is a Resource Quota?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;ResourceQuota&lt;/strong&gt; in Kubernetes is a policy that limits the amount of resources (such as CPU, memory, number of pods, etc.) that can be consumed within a specific namespace. It helps prevent a single user or application from consuming all the resources in the cluster, thus ensuring fair resource allocation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Properties of a ResourceQuota:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPU&lt;/strong&gt;: Limits the total CPU usage within a namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory&lt;/strong&gt;: Limits the total memory usage within a namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pods&lt;/strong&gt;: Limits the number of Pods that can be created in a namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Services, Deployments, etc.&lt;/strong&gt;: Limits the number of specific types of resources that can be created.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example: Resource Quota Manifest
&lt;/h3&gt;

&lt;p&gt;Let’s say we want to limit a namespace (&lt;code&gt;dev-namespace&lt;/code&gt;) so that it can use at most 4 CPUs, 16Gi of memory, and can create a maximum of 10 Pods. Below is an example &lt;code&gt;ResourceQuota&lt;/code&gt; manifest for this use case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ResourceQuota&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace-quota&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;
    &lt;span class="na"&gt;requests.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;16Gi&lt;/span&gt;
    &lt;span class="na"&gt;limits.cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;
    &lt;span class="na"&gt;limits.memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;16Gi&lt;/span&gt;
    &lt;span class="na"&gt;pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;requests.cpu&lt;/code&gt; and &lt;code&gt;limits.cpu&lt;/code&gt; cap the total CPU requests and limits, summed across all Pods in the namespace.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;requests.memory&lt;/code&gt; and &lt;code&gt;limits.memory&lt;/code&gt; cap the total memory requests and limits in the same way.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;pods&lt;/code&gt; field caps the number of Pods that can exist in the &lt;code&gt;dev-namespace&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If creating a new resource would push the namespace past any of these limits, the Kubernetes API server rejects the request until resources within the namespace are freed up.&lt;/p&gt;
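&lt;p&gt;Assuming the manifest above is saved as &lt;code&gt;quota.yaml&lt;/code&gt;, you can apply it and then check current usage against the quota:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f quota.yaml

# Shows used vs. hard limits for each tracked resource
kubectl describe resourcequota dev-namespace-quota -n dev-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;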




&lt;h2&gt;
  
  
  What is a LimitRange?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;LimitRange&lt;/strong&gt; is a policy that defines the minimum and maximum resource limits for individual containers running within a namespace. Unlike ResourceQuota, which applies to the entire namespace, a LimitRange applies to individual containers within that namespace. This ensures that each container is constrained to using a defined range of resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Properties of a LimitRange:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Max&lt;/strong&gt;: The largest resource limit a single container may declare.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Min&lt;/strong&gt;: The smallest resource request a single container may declare.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default and DefaultRequest&lt;/strong&gt;: The default limits and requests applied to containers that don’t specify them explicitly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example: LimitRange Manifest
&lt;/h3&gt;

&lt;p&gt;Let’s say we want to define a &lt;code&gt;LimitRange&lt;/code&gt; in the &lt;code&gt;dev-namespace&lt;/code&gt; so that each container must request at least 128Mi of memory and 100m (0.1 CPU), and may use at most 1Gi of memory and 1 CPU. We also want to set defaults for containers that don’t specify values themselves.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LimitRange&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace-limits&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Container&lt;/span&gt;
    &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
    &lt;span class="na"&gt;min&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;128Mi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512Mi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
    &lt;span class="na"&gt;defaultRequest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;256Mi&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200m"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;max&lt;/code&gt; field specifies the maximum resource limits a container may declare.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;min&lt;/code&gt; field sets the minimum resource requests a container must declare.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;default&lt;/code&gt; field sets the default resource limits for containers that don’t specify them explicitly.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;defaultRequest&lt;/code&gt; field sets the default resource requests for such containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a container is created without explicitly defining resource limits or requests, Kubernetes will automatically apply the defaults defined in the &lt;code&gt;LimitRange&lt;/code&gt;.&lt;/p&gt;
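&lt;p&gt;Assuming the manifest above is saved as &lt;code&gt;limits.yaml&lt;/code&gt;, you can apply it and review the effective ranges and defaults:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply -f limits.yaml

# Shows min, max, default limit, and default request per resource
kubectl describe limitrange dev-namespace-limits -n dev-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;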




&lt;h2&gt;
  
  
  How ResourceQuota and LimitRange Work Together
&lt;/h2&gt;

&lt;p&gt;ResourceQuotas and LimitRanges complement each other:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ResourceQuota&lt;/strong&gt; limits the total usage of resources within a namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LimitRange&lt;/strong&gt; defines the constraints for individual containers within that namespace.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;ResourceQuota&lt;/strong&gt; can prevent a namespace from consuming too many resources overall (e.g., 4 CPUs and 16Gi of memory).&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;LimitRange&lt;/strong&gt; can enforce individual container-level limits (e.g., containers in the namespace can only use up to 1Gi of memory).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, they ensure that resources are used efficiently, and prevent users or applications from over-consuming resources in a shared cluster environment.&lt;/p&gt;
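&lt;p&gt;For instance, a pod like the following sketch satisfies the per-container LimitRange above while counting toward the namespace ResourceQuota totals (the pod name and image are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: sample-app
  namespace: dev-namespace
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # within min (100m / 128Mi)
        cpu: "200m"
        memory: 256Mi
      limits:            # within max (1 CPU / 1Gi)
        cpu: "500m"
        memory: 512Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;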




&lt;h2&gt;
  
  
  Practical Use Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Preventing Resource Overuse&lt;/strong&gt;: If you have multiple teams or applications running in the same cluster, you can use &lt;strong&gt;ResourceQuota&lt;/strong&gt; to make sure no team can consume all available resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistent Resource Allocation&lt;/strong&gt;: Using &lt;strong&gt;LimitRange&lt;/strong&gt;, you can ensure that each team’s containers have a reasonable amount of resources, avoiding cases where containers are too resource-hungry and causing performance degradation for others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Defaults&lt;/strong&gt;: By setting defaults in the &lt;strong&gt;LimitRange&lt;/strong&gt;, you can ensure that all containers have consistent resource requests and limits, even if users forget to specify them.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, &lt;strong&gt;Resource Quotas&lt;/strong&gt; and &lt;strong&gt;Limit Ranges&lt;/strong&gt; are powerful tools for managing resource usage and ensuring that applications run efficiently and fairly within a namespace. By setting resource limits at both the namespace and container levels, you can prevent one application from consuming too many resources, while also ensuring that containers have the resources they need to run smoothly.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding LimitRange in Kubernetes: A Guide with Examples</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Fri, 08 Nov 2024 10:52:00 +0000</pubDate>
      <link>https://dev.to/devopsbymani/understanding-limitrange-in-kubernetes-a-guide-with-examples-5cac</link>
      <guid>https://dev.to/devopsbymani/understanding-limitrange-in-kubernetes-a-guide-with-examples-5cac</guid>
      <description>&lt;p&gt;Kubernetes provides powerful resource management tools to help you control how resources are allocated to containers within your clusters. One such tool is &lt;strong&gt;LimitRange&lt;/strong&gt;. LimitRange helps administrators set limits on the resource usage (CPU, memory) and enforce constraints on a per-namespace basis. This is particularly useful in multi-tenant environments where you want to prevent a single pod from consuming excessive resources that could affect other workloads running in the same namespace.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll walk you through what &lt;strong&gt;LimitRange&lt;/strong&gt; is, why it's useful, and provide a detailed example of how to create a namespace and apply a LimitRange with Kubernetes manifest files.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LimitRange?
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, a &lt;strong&gt;LimitRange&lt;/strong&gt; is a resource that sets default resource requests and limits for containers in a specific namespace. It allows cluster administrators to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set default resource requests and limits for containers if they are not specified by the user.&lt;/li&gt;
&lt;li&gt;Prevent users from creating containers that exceed certain resource limits.&lt;/li&gt;
&lt;li&gt;Ensure fair resource distribution within a namespace and prevent resource starvation or hogging.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Cases for LimitRange
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prevent resource overuse&lt;/strong&gt;: By setting CPU and memory limits, you can prevent runaway containers that consume excessive resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforce resource policies&lt;/strong&gt;: Set upper and lower bounds for resources to align with your organization's resource allocation policies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define defaults&lt;/strong&gt;: If a container doesn't specify resource requests or limits, Kubernetes will use the default values from LimitRange.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Concepts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Resource Request&lt;/strong&gt;: The amount of CPU and memory a container requests from the scheduler.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Limit&lt;/strong&gt;: The maximum amount of CPU and memory a container can consume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default Request&lt;/strong&gt;: A default resource request applied to containers that do not specify one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default Limit&lt;/strong&gt;: A default resource limit applied to containers that do not specify one.&lt;/li&gt;
&lt;/ul&gt;
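&lt;p&gt;These concepts map directly onto the &lt;code&gt;resources&lt;/code&gt; section of a container spec; the values below are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;resources:
  requests:          # reserved by the scheduler when placing the pod
    cpu: "100m"
    memory: 128Mi
  limits:            # hard cap enforced at runtime
    cpu: "250m"
    memory: 256Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;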

&lt;h2&gt;
  
  
  Example: Creating a Namespace and Applying LimitRange
&lt;/h2&gt;

&lt;p&gt;Let’s go through a practical example of how to create a &lt;strong&gt;Namespace&lt;/strong&gt; and apply a &lt;strong&gt;LimitRange&lt;/strong&gt; to that namespace.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Create a Namespace
&lt;/h3&gt;

&lt;p&gt;Before applying a LimitRange, we need to define the namespace where it will be applied. You can create a namespace using a manifest file or with the &lt;code&gt;kubectl&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Here's a simple YAML manifest to create a namespace named &lt;code&gt;dev-namespace&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To apply this manifest, save it as &lt;code&gt;namespace.yaml&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; namespace.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Define the LimitRange
&lt;/h3&gt;

&lt;p&gt;Now that we have the namespace, we can create a LimitRange that sets default CPU and memory requests and limits for any container running in the &lt;code&gt;dev-namespace&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here’s an example of a &lt;code&gt;LimitRange&lt;/code&gt; manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LimitRange&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;resource-limits&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1Gi"&lt;/span&gt;
    &lt;span class="na"&gt;min&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;512Mi"&lt;/span&gt;
    &lt;span class="na"&gt;defaultRequest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200m"&lt;/span&gt;
      &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;256Mi"&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Container&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explanation:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;max&lt;/strong&gt;: The maximum CPU (&lt;code&gt;500m&lt;/code&gt;, or 0.5 CPU) and memory (&lt;code&gt;1Gi&lt;/code&gt;, 1 GiB) that any single container in the namespace may set as its limit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;min&lt;/strong&gt;: The minimum CPU (&lt;code&gt;100m&lt;/code&gt;) and memory (&lt;code&gt;128Mi&lt;/code&gt;) that each container must request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;default&lt;/strong&gt;: The default CPU (&lt;code&gt;200m&lt;/code&gt;) and memory (&lt;code&gt;512Mi&lt;/code&gt;) limits applied to containers that do not specify limits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;defaultRequest&lt;/strong&gt;: The default CPU (&lt;code&gt;200m&lt;/code&gt;) and memory (&lt;code&gt;256Mi&lt;/code&gt;) requests applied to containers that do not specify requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;type&lt;/strong&gt;: Specifies that these constraints apply to individual containers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To apply this LimitRange, save the YAML file as &lt;code&gt;limitrange.yaml&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; limitrange.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Verify the LimitRange
&lt;/h3&gt;

&lt;p&gt;After applying the &lt;code&gt;LimitRange&lt;/code&gt;, you can verify that it has been successfully created in the &lt;code&gt;dev-namespace&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get limitrange &lt;span class="nt"&gt;-n&lt;/span&gt; dev-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should show the &lt;code&gt;resource-limits&lt;/code&gt; LimitRange applied to the namespace.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Create a Pod that Inherits the Defaults
&lt;/h3&gt;

&lt;p&gt;Now let’s create a pod within the &lt;code&gt;dev-namespace&lt;/code&gt; that does not specify resource requests or limits. This pod will automatically inherit the defaults defined in the LimitRange.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;pod.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-pod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply this file with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pod will be created, and Kubernetes will inject the defaults from the LimitRange into the container spec: requests of 200m CPU and 256Mi memory, and limits of 200m CPU and 512Mi memory.&lt;/p&gt;
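&lt;p&gt;You can confirm the injected values by reading them back from the pod spec, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pod example-pod -n dev-namespace \
  -o jsonpath='{.spec.containers[0].resources}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;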

&lt;h3&gt;
  
  
  5. Create a Pod that Violates the Limits
&lt;/h3&gt;

&lt;p&gt;Let’s now create a pod that violates the defined limits. For example, we might try to create a pod that requests more memory than the maximum allowed by the LimitRange.&lt;/p&gt;

&lt;p&gt;Here’s an example &lt;code&gt;pod.yaml&lt;/code&gt; that tries to allocate more memory than the defined &lt;code&gt;max&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;invalid-pod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev-namespace&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;invalid-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2Gi"&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2Gi"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you apply this manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes will reject the pod creation because the memory request and limit exceed the maximum memory defined in the LimitRange (which is 1Gi).&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Kubernetes &lt;strong&gt;LimitRange&lt;/strong&gt; is an important feature for controlling resource usage at the namespace level. It helps maintain a balance between resource efficiency and fairness in multi-tenant environments. By setting default requests and limits, you can ensure that containers don’t consume too many resources or go underutilized, leading to more predictable and manageable workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;LimitRange helps you set &lt;strong&gt;default resource requests and limits&lt;/strong&gt; for containers in a namespace.&lt;/li&gt;
&lt;li&gt;It enables you to define &lt;strong&gt;maximum&lt;/strong&gt; and &lt;strong&gt;minimum&lt;/strong&gt; resource boundaries for containers.&lt;/li&gt;
&lt;li&gt;By using LimitRange, you can ensure that containers &lt;strong&gt;do not exceed specified resource limits&lt;/strong&gt; and can manage resources efficiently within your cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re managing Kubernetes at scale, understanding and using LimitRange will help you enforce resource policies and maintain a healthy cluster environment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>Understanding Namespaces in Kubernetes: Imperative vs Declarative Approaches &amp; Resources Not Specific to Namespaces</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Fri, 08 Nov 2024 07:17:32 +0000</pubDate>
      <link>https://dev.to/devopsbymani/understanding-namespaces-in-kubernetes-imperative-vs-declarative-approaches-resources-not-specific-to-namespaces-2762</link>
      <guid>https://dev.to/devopsbymani/understanding-namespaces-in-kubernetes-imperative-vs-declarative-approaches-resources-not-specific-to-namespaces-2762</guid>
      <description>&lt;p&gt;In the world of Kubernetes (K8s), &lt;strong&gt;namespaces&lt;/strong&gt; are an essential concept for managing and organizing resources within a cluster. Whether you're managing a small development environment or a large-scale production system, namespaces provide a way to isolate and group related objects for better organization, access control, and resource management.&lt;/p&gt;

&lt;p&gt;In this blog, we'll take a deep dive into &lt;strong&gt;Kubernetes namespaces&lt;/strong&gt;, exploring their use cases, comparing imperative vs. declarative approaches for managing resources, and discussing which resources are not bound to a specific namespace.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Are Namespaces in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, a &lt;strong&gt;namespace&lt;/strong&gt; is a logical partition of a cluster that allows you to group resources. Namespaces help you organize objects like pods, services, deployments, and configmaps, providing a way to manage resources effectively in multi-tenant environments. They offer isolation between different applications, teams, or environments within the same cluster.&lt;/p&gt;

&lt;p&gt;For example, you can use namespaces to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolate production and development environments.&lt;/li&gt;
&lt;li&gt;Segment resources by team or project.&lt;/li&gt;
&lt;li&gt;Manage resource quotas and access control on a per-namespace basis.&lt;/li&gt;
&lt;li&gt;Simplify security policies and networking between isolated environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, Kubernetes includes several built-in namespaces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;default&lt;/code&gt;: The default namespace for resources that are not assigned a specific namespace.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kube-system&lt;/code&gt;: The namespace for Kubernetes system components like controllers, services, and DNS.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kube-public&lt;/code&gt;: A namespace that is readable by all users (including unauthenticated users), commonly used for public resources like cluster information.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kube-node-lease&lt;/code&gt;: Holds per-node Lease objects used for node heartbeats, which makes node health detection cheaper and more scalable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Namespace Usage Example
&lt;/h3&gt;

&lt;p&gt;Imagine you have a Kubernetes cluster used by multiple teams. You might have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;dev&lt;/code&gt; namespace for development workloads.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;prod&lt;/code&gt; namespace for production services.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;qa&lt;/code&gt; namespace for testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each namespace can have its own set of resources (e.g., deployments, pods, services) without interfering with other namespaces.&lt;/p&gt;
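&lt;p&gt;As a sketch of how the team split above could be declared, all three namespaces can live in a single manifest (the names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Applying this file once with &lt;code&gt;kubectl apply -f&lt;/code&gt; creates all three namespaces in a single, repeatable step.&lt;/p&gt;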




&lt;h2&gt;
  
  
  Imperative vs Declarative Approaches in Kubernetes
&lt;/h2&gt;

&lt;p&gt;When working with Kubernetes resources (including namespaces), you generally have two approaches to managing your resources: &lt;strong&gt;imperative&lt;/strong&gt; and &lt;strong&gt;declarative&lt;/strong&gt;. Each approach has its own benefits and trade-offs, and the choice depends on your operational needs and environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Imperative Approach
&lt;/h3&gt;

&lt;p&gt;In the &lt;strong&gt;imperative approach&lt;/strong&gt;, you directly issue commands to Kubernetes to create, update, or delete resources. This approach is often quick and simple for tasks that need immediate effect but can become cumbersome when managing large, complex configurations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example of Imperative Commands
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a namespace using an imperative command&lt;/span&gt;
kubectl create namespace dev

&lt;span class="c"&gt;# Create a deployment in the "dev" namespace&lt;/span&gt;
kubectl run myapp &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this approach, you're instructing Kubernetes directly to perform a specific action. While it’s useful for quick tasks or one-off changes, it doesn’t provide an easy way to track or manage configurations over time.&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Quick and easy for single, ad-hoc changes.&lt;/li&gt;
&lt;li&gt;Best suited for one-off tasks or during experimentation.&lt;/li&gt;
&lt;li&gt;Ideal for quick troubleshooting and debugging.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Disadvantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Less reproducible: Hard to track changes or manage configurations in the long run.&lt;/li&gt;
&lt;li&gt;Lacks version control: Difficult to know what exactly was changed over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Declarative Approach
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;declarative approach&lt;/strong&gt; involves defining your desired state of the system in configuration files (typically YAML or JSON), and then applying these configurations to Kubernetes using commands like &lt;code&gt;kubectl apply&lt;/code&gt;. With declarative management, you describe what you want to happen (desired state), and Kubernetes ensures that the actual state matches that description.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example of Declarative Commands
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# namespace.yaml: Define the desired namespace in a YAML file&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply the declarative configuration to create resources&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; namespace.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Advantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Declarative configurations are version-controlled and easy to maintain.&lt;/li&gt;
&lt;li&gt;Supports infrastructure-as-code principles, allowing reproducibility and consistency.&lt;/li&gt;
&lt;li&gt;Easier to track changes, roll back, and collaborate with others.&lt;/li&gt;
&lt;li&gt;Kubernetes takes care of managing the resources, ensuring they match the defined state.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Disadvantages:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;May take more time to set up than imperative commands.&lt;/li&gt;
&lt;li&gt;Requires maintaining YAML or JSON files for configurations.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Which Kubernetes Resources Are Not Specific to Namespaces?
&lt;/h2&gt;

&lt;p&gt;While many resources in Kubernetes are namespace-scoped, there are certain resources that exist &lt;strong&gt;outside of any namespace&lt;/strong&gt;. These resources are considered &lt;strong&gt;cluster-wide&lt;/strong&gt; and are not confined to a specific namespace.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster-Wide Resources in Kubernetes
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes are the physical or virtual machines that run your workloads in Kubernetes. They are part of the overall cluster and are not tied to any specific namespace.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Persistent Volumes (PV)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent Volumes represent physical storage in the cluster and are managed at the cluster level. While &lt;strong&gt;Persistent Volume Claims (PVCs)&lt;/strong&gt; are namespace-scoped, the Persistent Volume (PV) itself is not.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get pv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
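&lt;p&gt;To make the PV/PVC split concrete, here is a minimal sketch: the PersistentVolume carries no &lt;code&gt;metadata.namespace&lt;/code&gt;, while the claim that binds to it does (names, sizes, and the hostPath are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume          # cluster-scoped: no namespace field
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/example
---
apiVersion: v1
kind: PersistentVolumeClaim     # namespace-scoped
metadata:
  name: example-pvc
  namespace: dev
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;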



&lt;p&gt;&lt;strong&gt;3. ClusterRole &amp;amp; ClusterRoleBinding&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ClusterRoles&lt;/strong&gt; define a set of permissions that apply to the whole cluster, regardless of namespaces. Similarly, &lt;strong&gt;ClusterRoleBindings&lt;/strong&gt; grant these permissions to users or service accounts.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get clusterrole
   kubectl get clusterrolebinding
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
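&lt;p&gt;As a minimal sketch (all names are illustrative), a ClusterRole granting read access to pods in every namespace, and a ClusterRoleBinding attaching it to a user:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader              # cluster-scoped: no namespace field
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-globally
subjects:
- kind: User
  name: jane                    # illustrative user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;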



&lt;p&gt;&lt;strong&gt;4. Namespaces (themselves)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Interestingly, the &lt;strong&gt;Namespace&lt;/strong&gt; resource is not contained within another namespace. It is a cluster-wide resource that defines a logical partitioning of the cluster.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Custom Resource Definitions (CRDs)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CRDs are used to define custom resources in Kubernetes, but the CRD itself is a cluster-wide resource. Instances of the custom resource can be either namespace-scoped or cluster-scoped, depending on the &lt;code&gt;scope&lt;/code&gt; field declared in the CRD.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get crd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. API Aggregator Resources (API Services)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These are part of the Kubernetes API aggregation layer, which allows external APIs to be exposed to the Kubernetes cluster. They are not tied to a specific namespace.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl get apiservices
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;7. Service Accounts (a common point of confusion)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Service Accounts&lt;/strong&gt; themselves are always namespace-scoped. They appear here only as a caveat: a namespaced service account can still be granted cluster-wide permissions by binding it to a &lt;strong&gt;ClusterRole&lt;/strong&gt; with a &lt;strong&gt;ClusterRoleBinding&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
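&lt;p&gt;For example, the following sketch (all names illustrative) grants a service account in the &lt;code&gt;dev&lt;/code&gt; namespace read access across the whole cluster by binding it to the built-in &lt;code&gt;view&lt;/code&gt; ClusterRole:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-reader
  namespace: dev                # the service account itself is namespaced
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-reader-view
subjects:
- kind: ServiceAccount
  name: ci-reader
  namespace: dev
roleRef:
  kind: ClusterRole
  name: view                    # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;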




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Namespaces in Kubernetes provide an essential mechanism for organizing and managing cluster resources, particularly in multi-tenant environments. They offer isolation and effective resource management, allowing you to segment workloads based on teams, applications, or environments.&lt;/p&gt;

&lt;p&gt;When managing resources in Kubernetes, you can choose between an &lt;strong&gt;imperative&lt;/strong&gt; approach for quick, ad-hoc changes or a &lt;strong&gt;declarative&lt;/strong&gt; approach for long-term, consistent management. While imperative commands are helpful for troubleshooting or experimenting, the declarative approach provides a more scalable and reproducible method for infrastructure management.&lt;/p&gt;

&lt;p&gt;Finally, while most Kubernetes resources are tied to namespaces, certain key resources (like nodes, persistent volumes, and cluster roles) exist at the cluster level, not bound to any namespace.&lt;/p&gt;

&lt;p&gt;By understanding how namespaces work and the difference between imperative and declarative management, you can better organize and control your Kubernetes environment to meet your operational needs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Understanding Kubernetes Requests and Limits: A Simple Guide</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Fri, 08 Nov 2024 06:46:40 +0000</pubDate>
      <link>https://dev.to/devopsbymani/understanding-kubernetes-requests-and-limits-a-simple-guide-1phb</link>
      <guid>https://dev.to/devopsbymani/understanding-kubernetes-requests-and-limits-a-simple-guide-1phb</guid>
      <description>&lt;p&gt;Kubernetes is a powerful container orchestration platform, widely used for deploying and managing applications. One of the key features that makes Kubernetes flexible and efficient is the ability to set &lt;strong&gt;resource requests&lt;/strong&gt; and &lt;strong&gt;limits&lt;/strong&gt; for containers. But what do these terms mean, and why are they important?&lt;/p&gt;

&lt;p&gt;In this blog post, we'll break down what &lt;strong&gt;requests&lt;/strong&gt; and &lt;strong&gt;limits&lt;/strong&gt; are, why they matter, and how you can use them to optimize your Kubernetes deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are Kubernetes Requests and Limits?
&lt;/h3&gt;

&lt;p&gt;When you deploy a containerized application in Kubernetes, each container needs access to system resources like CPU and memory (RAM). Kubernetes lets you define the &lt;strong&gt;amount of CPU&lt;/strong&gt; and &lt;strong&gt;memory&lt;/strong&gt; a container can use. This is done through &lt;strong&gt;requests&lt;/strong&gt; and &lt;strong&gt;limits&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Requests&lt;/strong&gt;: This is the amount of resources that Kubernetes will guarantee for a container. When you set a request, you are telling Kubernetes, "I need at least this amount of CPU or memory to run the container." The scheduler uses the request values to decide which node in the cluster should run the container.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limits&lt;/strong&gt;: This is the maximum amount of resources that the container can use. If the container tries to use more resources than the limit, Kubernetes will intervene. For CPU, it might throttle the container's CPU usage, and for memory, it could terminate the container (kill it) and possibly restart it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why Do Requests and Limits Matter?
&lt;/h3&gt;

&lt;p&gt;Setting requests and limits correctly helps ensure that your applications run smoothly and efficiently. Here are some of the key reasons why they matter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: By setting both requests and limits, you ensure that your containers don't use more resources than they need, which can help optimize the usage of cluster resources and prevent bottlenecks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fair Resource Distribution&lt;/strong&gt;: If you don’t set resource limits, one container could consume all available CPU or memory on a node, starving other containers of the resources they need. With limits, Kubernetes ensures that no container can monopolize the node’s resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Preventing Resource Exhaustion&lt;/strong&gt;: Without limits, you could accidentally over-allocate resources to a container, causing other workloads to suffer. If you set proper limits, Kubernetes can protect your application from consuming excessive resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoiding Unexpected Container Failures&lt;/strong&gt;: Setting memory limits is especially important because a container that exceeds its memory limit is terminated by the out-of-memory (OOM) killer and reported as &lt;code&gt;OOMKilled&lt;/code&gt;. This contains a memory leak or runaway process before it can destabilize the entire node.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  How to Set Requests and Limits
&lt;/h3&gt;

&lt;p&gt;To define requests and limits for CPU and memory, you specify them in your &lt;strong&gt;Pod's configuration YAML file&lt;/strong&gt;. Here’s an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;example-container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;  &lt;span class="c1"&gt;# 64 MiB of memory guaranteed&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;     &lt;span class="c1"&gt;# 250 milliCPU (0.25 of a CPU core) guaranteed&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt; &lt;span class="c1"&gt;# 128 MiB of memory maximum&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;     &lt;span class="c1"&gt;# 500 milliCPU (0.5 of a CPU core) maximum&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Key Points:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Requests&lt;/strong&gt; are defined under &lt;code&gt;requests&lt;/code&gt;, and they specify the minimum amount of resources the container needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limits&lt;/strong&gt; are defined under &lt;code&gt;limits&lt;/code&gt;, and they specify the maximum amount of resources the container can use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Understanding CPU and Memory Units
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CPU&lt;/strong&gt; is measured in &lt;strong&gt;millicores (m)&lt;/strong&gt;. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;1 CPU&lt;/code&gt; is represented as &lt;code&gt;1000m&lt;/code&gt; or &lt;code&gt;1&lt;/code&gt;, which means 1 full CPU core.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;500m&lt;/code&gt; means half of a CPU core.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;250m&lt;/code&gt; means a quarter of a CPU core.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Memory&lt;/strong&gt; is measured in units like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mi&lt;/strong&gt; (mebibytes), which is 1024 * 1024 bytes. Kubernetes also accepts decimal suffixes (M, G), but the binary suffixes (Mi, Gi) are the usual convention in manifests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gi&lt;/strong&gt; (gibibytes), which is 1024 MiB or approximately 1.07 GB.&lt;/li&gt;
&lt;li&gt;For example: &lt;code&gt;64Mi&lt;/code&gt; means 64 mebibytes of memory.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
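&lt;p&gt;These units can be written in several equivalent notations; the fragment below (values illustrative) shows the interchangeable forms side by side:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;resources:
  requests:
    cpu: "500m"        # same as cpu: "0.5" - half a core
    memory: "512Mi"    # 512 * 1024 * 1024 bytes
  limits:
    cpu: "1"           # same as cpu: "1000m" - one full core
    memory: "1Gi"      # 1024 MiB, roughly 1.07 GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;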

&lt;h3&gt;
  
  
  What Happens When a Container Exceeds Its Limits?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CPU&lt;/strong&gt;: If the container uses more CPU than its limit, Kubernetes will &lt;strong&gt;throttle&lt;/strong&gt; the container’s CPU usage. This means that it will not be allowed to exceed the allocated CPU time, which could slow down the container but won’t kill it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory&lt;/strong&gt;: If the container exceeds its memory limit, Kubernetes will &lt;strong&gt;terminate the container&lt;/strong&gt; and possibly restart it. This is because excessive memory usage is often a sign of a memory leak or a misbehaving process, and Kubernetes tries to protect the node’s stability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Best Practices for Requests and Limits
&lt;/h3&gt;

&lt;p&gt;To make the most of Kubernetes' resource management features, here are some best practices to follow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set Requests and Limits for Every Container&lt;/strong&gt;: It's a good practice to always set both requests and limits for your containers. Without them, Kubernetes may not be able to schedule your pods efficiently or might overcommit the resources, causing instability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Realistic Values&lt;/strong&gt;: Don’t set requests and limits too high or too low. If you set them too low, your container may not have enough resources to run properly. If you set them too high, you might waste cluster resources, leaving less for other workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor and Adjust&lt;/strong&gt;: Kubernetes doesn’t provide a one-size-fits-all solution. It's important to monitor the performance of your pods and adjust the resource values accordingly. Over time, you’ll get a better sense of the resources your application really needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Resource Requests to Control Pod Scheduling&lt;/strong&gt;: The scheduler only places a pod on a node whose unreserved capacity covers the pod’s requests, so setting accurate requests ensures your application lands on a node that can actually handle it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consider Vertical Pod Autoscaling (VPA)&lt;/strong&gt;: If you are unsure about the right values for requests and limits, you can use Kubernetes Vertical Pod Autoscaler (VPA), which adjusts resource requests and limits for your containers based on historical usage.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
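&lt;p&gt;If you want to try the VPA mentioned above, a minimal object looks roughly like the sketch below. It assumes the Vertical Pod Autoscaler CRDs and controllers are installed in the cluster (they are not part of core Kubernetes), and the target name is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment   # illustrative target
  updatePolicy:
    updateMode: "Auto"         # VPA may evict pods to apply new requests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;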

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Kubernetes requests and limits are powerful tools to ensure that your containerized applications are both resource-efficient and resilient. By setting both minimum (requests) and maximum (limits) resource values, you can ensure that your containers get the resources they need while preventing them from consuming too much and affecting other workloads in your cluster.&lt;/p&gt;

&lt;p&gt;Remember to always monitor your containers’ resource usage and adjust these values as needed to keep your Kubernetes cluster running smoothly.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Liveness, Readiness, and Startup Probes in Kubernetes: What You Need to Know</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Fri, 08 Nov 2024 06:29:22 +0000</pubDate>
      <link>https://dev.to/devopsbymani/liveness-readiness-and-startup-probes-in-kubernetes-what-you-need-to-know-2h8d</link>
      <guid>https://dev.to/devopsbymani/liveness-readiness-and-startup-probes-in-kubernetes-what-you-need-to-know-2h8d</guid>
      <description>&lt;p&gt;In &lt;strong&gt;Kubernetes&lt;/strong&gt;, probes are used to check the health and readiness of containers running in a pod. Probes allow Kubernetes to manage the lifecycle of a pod by ensuring that containers are running properly, and if not, take appropriate actions such as restarting the container or removing it from service until it’s ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are three main types of probes in Kubernetes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Liveness Probe&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/em&gt; Determines if a container is still running and healthy.&lt;br&gt;
If the liveness probe fails, Kubernetes will restart the container. This is useful for detecting situations where the application has hung or become unresponsive.&lt;br&gt;
Example scenario: a web server whose process is still running but can no longer handle new requests due to a deadlock or resource exhaustion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Readiness Probe&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Purpose:&lt;/em&gt;&lt;/strong&gt; Determines if the container is ready to accept traffic.&lt;br&gt;
If the readiness probe fails, the container will not receive traffic from the Kubernetes Service (it is removed from the Service endpoint list until it becomes ready again).&lt;br&gt;
This is helpful when a container needs time to initialize or warm up before it starts serving requests (e.g., loading large datasets, performing migrations).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Startup Probe&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Purpose:&lt;/em&gt;&lt;/strong&gt; Determines whether a container has started successfully.&lt;br&gt;
If the startup probe fails, Kubernetes will kill the container and try to restart it. While a startup probe is configured, liveness and readiness checks are held off, so it is typically used for applications that need more startup time than usual before those checks should begin.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Each of these probes is highly configurable and can be defined using three main options:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. HTTP GET Probe&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;2. TCP Socket Probe&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;3. Exec Probe&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s dive into each of these options and how they can be configured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. HTTP GET Probe&lt;/strong&gt;&lt;br&gt;
The HTTP GET Probe sends an HTTP GET request to a specific path on a container’s port to check if the container is healthy. This is the most common type of probe for web-based applications or services.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;How It Works:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes will make an HTTP request to the specified path and port inside the container.&lt;/li&gt;
&lt;li&gt;If the server responds with a 2xx or 3xx HTTP status code, the probe is considered successful.&lt;/li&gt;
&lt;li&gt;If the server responds with a non-successful status code (e.g., 4xx, 5xx), the probe is considered failed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example :&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;path&lt;/strong&gt;: The URL path that the probe will try to access (e.g., /healthz).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;port&lt;/strong&gt;: The port to reach inside the container (e.g., 8080).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;initialDelaySeconds&lt;/strong&gt;: The initial wait time before performing the first probe, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;periodSeconds&lt;/strong&gt;: The interval between each probe, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;timeoutSeconds&lt;/strong&gt;: How long to wait for a response before timing out, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;failureThreshold&lt;/strong&gt;: The number of consecutive failures before marking the container as unhealthy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suitable for web servers, APIs, or any container running an HTTP-based service.&lt;/li&gt;
&lt;li&gt;Ideal for containers that expose health or readiness endpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. TCP Socket Probe&lt;/strong&gt;&lt;br&gt;
The TCP Socket Probe checks whether a TCP socket is open on a given port in the container. This is particularly useful for non-HTTP-based services like databases, message brokers, or custom network services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;How It Works:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes attempts to open a TCP connection to the specified port inside the container.&lt;/li&gt;
&lt;li&gt;If the connection is successful (i.e., the port is open), the probe passes.&lt;/li&gt;
&lt;li&gt;If the connection cannot be established (i.e., the port is closed), the probe fails.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example :&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;readinessProbe:
  tcpSocket:
    port: 3306
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;port&lt;/strong&gt;: The TCP port that Kubernetes should attempt to connect to (e.g., 3306 for MySQL).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;initialDelaySeconds&lt;/strong&gt;: The number of seconds to wait after the container starts before performing the first probe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;periodSeconds&lt;/strong&gt;: How often to check the socket, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;timeoutSeconds&lt;/strong&gt;: How long to wait for a response before timing out, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;failureThreshold&lt;/strong&gt;: The number of failed probes before the container is marked unhealthy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suitable for services that do not use HTTP, like databases (MySQL, PostgreSQL) or message queues (Kafka, Redis).&lt;/li&gt;
&lt;li&gt;Useful for containers that expose raw network services without HTTP endpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Exec Probe&lt;/strong&gt;&lt;br&gt;
The Exec Probe runs a command inside the container and checks its exit status to determine if the probe is successful. If the command returns a 0 exit status, the probe passes. If it returns a non-zero exit status, the probe fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;How It Works:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Kubernetes runs the specified command inside the container.&lt;br&gt;
If the command’s exit code is 0, the probe is successful.&lt;br&gt;
If the exit code is non-zero, the probe fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;startupProbe:
  exec:
    command:
      - "cat"
      - "/tmp/healthy"
  initialDelaySeconds: 10
  periodSeconds: 15
  failureThreshold: 3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;command&lt;/strong&gt;: The command to run inside the container, specified as a list of strings (e.g., cat /tmp/healthy).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;initialDelaySeconds&lt;/strong&gt;: The number of seconds to wait after the container starts before performing the first probe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;periodSeconds&lt;/strong&gt;: How often to run the command, in seconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;failureThreshold&lt;/strong&gt;: The number of consecutive failures before marking the container as unhealthy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to Use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Suitable for cases where you need to check internal states or files in the container.&lt;/li&gt;
&lt;li&gt;Useful for debugging or complex health checks that require running specific scripts or commands inside the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Additional Probe Configuration Options&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to defining the probe type (httpGet, tcpSocket, or exec), you can also configure several timing-related parameters to fine-tune how Kubernetes performs the probes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;initialDelaySeconds: The time to wait after the container starts before performing the first probe. This is helpful when your application needs time to initialize.&lt;/li&gt;
&lt;li&gt;periodSeconds: The frequency at which Kubernetes will perform the probe. This controls how often the system checks the container's health or readiness.&lt;/li&gt;
&lt;li&gt;timeoutSeconds: The amount of time Kubernetes should wait for a probe to respond before considering it a failure.&lt;/li&gt;
&lt;li&gt;failureThreshold: The number of consecutive probe failures that Kubernetes will tolerate before considering the container unhealthy.&lt;/li&gt;
&lt;li&gt;successThreshold: The number of consecutive successful probes required before the container is considered healthy again after a failure. The default is 1, and it must be 1 for liveness and startup probes; values greater than 1 are only valid for readiness probes.&lt;/li&gt;
&lt;/ul&gt;
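&lt;p&gt;These parameters combine into a simple worst-case budget: a container has roughly &lt;code&gt;initialDelaySeconds + periodSeconds × failureThreshold&lt;/code&gt; seconds before the probe gives up. With the startupProbe values from the earlier example:&lt;/p&gt;

```shell
# Rough worst-case time before the probe is considered failed,
# using the startupProbe values shown earlier.
initialDelaySeconds=10
periodSeconds=15
failureThreshold=3
echo "$(( initialDelaySeconds + periodSeconds * failureThreshold ))s"
```

&lt;p&gt;Tuning any of the three values therefore changes how long a slow-starting application is given before Kubernetes intervenes.&lt;/p&gt;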

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Dockerfile Creation with Explanation of Key Commands</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Thu, 07 Nov 2024 14:28:06 +0000</pubDate>
      <link>https://dev.to/devopsbymani/a-comprehensive-guide-to-dockerfile-creation-with-explanation-of-key-commands-421n</link>
      <guid>https://dev.to/devopsbymani/a-comprehensive-guide-to-dockerfile-creation-with-explanation-of-key-commands-421n</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Dockerfiles&lt;/em&gt;&lt;/strong&gt; are essential for creating Docker images, which serve as the foundation for containers. These files contain a series of instructions that define the environment in which your application runs. By using a Dockerfile, you ensure your application behaves the same way regardless of where it’s deployed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Dockerfile Instructions and Their Usage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. &lt;code&gt;FROM&lt;/code&gt; – The Base Image&lt;/strong&gt;&lt;br&gt;
The &lt;code&gt;FROM&lt;/code&gt; instruction sets the base image for your Docker image. Every Dockerfile starts with &lt;code&gt;FROM&lt;/code&gt;, as it specifies the starting point for your image. &lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:14&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells Docker to use the official &lt;code&gt;node:14&lt;/code&gt; image as the base for your image.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;code&gt;RUN&lt;/code&gt; – Execute Commands
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;RUN&lt;/code&gt; command is used to execute commands inside the image during the build process. This is typically used to install dependencies or update the image.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; curl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will update the package lists and install &lt;code&gt;curl&lt;/code&gt; in your image.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;code&gt;MAINTAINER&lt;/code&gt; – Image Maintainer Information (Deprecated)
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;MAINTAINER&lt;/code&gt; instruction was originally used to define the author of the image. However, it’s now deprecated, and it’s recommended to use the &lt;code&gt;LABEL&lt;/code&gt; instruction instead for this purpose.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;MAINTAINER&lt;/span&gt;&lt;span class="s"&gt; John Doe &amp;lt;johndoe@example.com&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. &lt;code&gt;LABEL&lt;/code&gt; – Metadata
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;LABEL&lt;/code&gt; instruction adds metadata to an image in the form of key-value pairs. This is useful for providing details about the image, like its version, description, or maintainer.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; maintainer="John Doe &amp;lt;johndoe@example.com&amp;gt;"&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; version="1.0"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. &lt;code&gt;ADD&lt;/code&gt; – Add Files with Extra Features
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;ADD&lt;/code&gt; command is used to copy files from your host machine into the container. It also has extra features such as the ability to unpack &lt;code&gt;.tar&lt;/code&gt; files or download files from a URL.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ADD&lt;/span&gt;&lt;span class="s"&gt; source.tar.gz /app/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will copy &lt;code&gt;source.tar.gz&lt;/code&gt; from the host and extract it into the &lt;code&gt;/app/&lt;/code&gt; directory in the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. &lt;code&gt;COPY&lt;/code&gt; – Copy Files or Directories
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;COPY&lt;/code&gt; works similarly to &lt;code&gt;ADD&lt;/code&gt;, but it doesn't have the extra functionality (like automatic extraction of &lt;code&gt;.tar&lt;/code&gt; files) and is considered more predictable. You use &lt;code&gt;COPY&lt;/code&gt; to copy files or directories into the image.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . /app/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will copy everything from the current directory on your host machine into the &lt;code&gt;/app/&lt;/code&gt; directory in the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. &lt;code&gt;VOLUME&lt;/code&gt; – Create Mount Points
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;VOLUME&lt;/code&gt; command creates a mount point for volumes, which can be used to persist data in a container or share data between containers.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;VOLUME&lt;/span&gt;&lt;span class="s"&gt; ["/data"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a mount point &lt;code&gt;/data&lt;/code&gt;, allowing data to persist even if the container is stopped or deleted.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. &lt;code&gt;EXPOSE&lt;/code&gt; – Expose Ports
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;EXPOSE&lt;/code&gt; command informs Docker that the container will listen on specific ports at runtime. It doesn’t actually publish the port, but it serves as a hint for those who want to map ports when running the container.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will expose port 8080 on the container, signaling that the container is ready to communicate over this port.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. &lt;code&gt;WORKDIR&lt;/code&gt; – Set Working Directory
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;WORKDIR&lt;/code&gt; sets the working directory for subsequent instructions like &lt;code&gt;RUN&lt;/code&gt;, &lt;code&gt;CMD&lt;/code&gt;, &lt;code&gt;ENTRYPOINT&lt;/code&gt;, &lt;code&gt;COPY&lt;/code&gt;, etc. If the directory doesn’t exist, it will be created.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets &lt;code&gt;/app&lt;/code&gt; as the working directory for the following commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  10. &lt;code&gt;USER&lt;/code&gt; – Set User for Container
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;USER&lt;/code&gt; command sets the user and group to use when running commands inside the container. By default, Docker runs commands as the root user, but it's good practice to specify a non-root user for security.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets the user to &lt;code&gt;node&lt;/code&gt; when running subsequent commands in the Dockerfile.&lt;/p&gt;

&lt;h3&gt;
  
  
  11. &lt;code&gt;STOPSIGNAL&lt;/code&gt; – Define Stop Signal
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;STOPSIGNAL&lt;/code&gt; instruction sets the system call signal that will be used to stop the container. By default, Docker uses &lt;code&gt;SIGTERM&lt;/code&gt;, but you can specify another signal.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;STOPSIGNAL&lt;/span&gt;&lt;span class="s"&gt; SIGINT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will use the &lt;code&gt;SIGINT&lt;/code&gt; signal to stop the container instead of the default &lt;code&gt;SIGTERM&lt;/code&gt;.&lt;/p&gt;
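&lt;p&gt;The signal only helps if the main process actually handles it. A shell sketch of the graceful-shutdown pattern that &lt;code&gt;STOPSIGNAL SIGINT&lt;/code&gt; enables (illustrative only; a real entrypoint would close connections and flush state in the handler):&lt;/p&gt;

```shell
# Trap SIGINT so the process can clean up instead of dying abruptly.
trap 'echo "graceful shutdown"' INT
kill -INT $$          # the signal `docker stop` would deliver with STOPSIGNAL SIGINT
echo "cleanup finished, exiting"
```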

&lt;h3&gt;
  
  
  12. &lt;code&gt;ENTRYPOINT&lt;/code&gt; – Set the Entry Point
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;ENTRYPOINT&lt;/code&gt; command defines the default application that gets executed when a container starts. It’s often used in combination with &lt;code&gt;CMD&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will run &lt;code&gt;node server.js&lt;/code&gt; as the entry point when the container starts.&lt;/p&gt;

&lt;h3&gt;
  
  
  13. &lt;code&gt;CMD&lt;/code&gt; – Provide Default Arguments
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;CMD&lt;/code&gt; specifies the default command to run when the container is started. It can be overridden when running the container.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will run &lt;code&gt;npm start&lt;/code&gt; by default, unless the command is overridden.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If both &lt;code&gt;ENTRYPOINT&lt;/code&gt; and &lt;code&gt;CMD&lt;/code&gt; are used together, &lt;code&gt;CMD&lt;/code&gt; provides the default arguments for the &lt;code&gt;ENTRYPOINT&lt;/code&gt; command.&lt;/p&gt;
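&lt;p&gt;Concretely, Docker concatenates the two arrays: the container's final command is the &lt;code&gt;ENTRYPOINT&lt;/code&gt; list followed by the &lt;code&gt;CMD&lt;/code&gt; list. A sketch using the values from the examples above:&lt;/p&gt;

```shell
# ENTRYPOINT ["node", "server.js"] plus CMD ["start"] yields:
entrypoint="node server.js"
cmd="start"
echo "container runs: $entrypoint $cmd"
# Arguments passed to `docker run <image> ...` replace CMD, not ENTRYPOINT:
#   docker run myimage debug  ->  node server.js debug
```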

&lt;h3&gt;
  
  
  14. &lt;code&gt;ENV&lt;/code&gt; – Set Environment Variables
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;ENV&lt;/code&gt; instruction is used to set environment variables that will be available to the running container.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV production&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets the &lt;code&gt;NODE_ENV&lt;/code&gt; environment variable to &lt;code&gt;production&lt;/code&gt; inside the container.&lt;/p&gt;
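&lt;p&gt;Variables declared with &lt;code&gt;ENV&lt;/code&gt; behave like exported shell variables: they are visible in every later build step and in the running container. A rough local equivalent:&lt;/p&gt;

```shell
# The child process sees NODE_ENV the same way a container sees an ENV value.
NODE_ENV=production sh -c 'echo "running in $NODE_ENV mode"'
```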




&lt;h2&gt;
  
  
  Putting It All Together: A Complete Dockerfile Example
&lt;/h2&gt;

&lt;p&gt;Now that we’ve covered the essential instructions, let’s create a Dockerfile for a basic Node.js application using many of the instructions discussed:&lt;/p&gt;

&lt;h3&gt;
  
  
  Example Dockerfile:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Step 1: Set the base image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:14&lt;/span&gt;

&lt;span class="c"&gt;# Step 2: Set the maintainer and other metadata&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; maintainer="John Doe &amp;lt;johndoe@example.com&amp;gt;"&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; version="1.0"&lt;/span&gt;
&lt;span class="k"&gt;LABEL&lt;/span&gt;&lt;span class="s"&gt; description="A simple Node.js app container"&lt;/span&gt;

&lt;span class="c"&gt;# Step 3: Set the working directory inside the container&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Step 4: Copy package.json and package-lock.json&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;

&lt;span class="c"&gt;# Step 5: Install dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Step 6: Copy the rest of the application code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Step 7: Expose the port the app will run on&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;

&lt;span class="c"&gt;# Step 8: Set the environment variable&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV production&lt;/span&gt;

&lt;span class="c"&gt;# Step 9: Set the default command to run the app&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;

&lt;span class="c"&gt;# Step 10: Optionally, set the default command arguments (overridden by docker run arguments)&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["start"]&lt;/span&gt;

&lt;span class="c"&gt;# Step 11: Create a volume to persist data&lt;/span&gt;
&lt;span class="k"&gt;VOLUME&lt;/span&gt;&lt;span class="s"&gt; ["/data"]&lt;/span&gt;

&lt;span class="c"&gt;# Step 12: Set the user to run as non-root&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;

&lt;span class="c"&gt;# Step 13: Define the stop signal for the container&lt;/span&gt;
&lt;span class="k"&gt;STOPSIGNAL&lt;/span&gt;&lt;span class="s"&gt; SIGINT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explanation of the Example:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;FROM&lt;/strong&gt;: We're using the official &lt;code&gt;node:14&lt;/code&gt; image as the base.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LABEL&lt;/strong&gt;: Metadata is added for versioning and maintainer information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WORKDIR&lt;/strong&gt;: All subsequent commands will execute in the &lt;code&gt;/app&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;COPY&lt;/strong&gt;: First, we copy &lt;code&gt;package.json&lt;/code&gt; to install dependencies, then copy the application code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RUN&lt;/strong&gt;: Installs Node.js dependencies using &lt;code&gt;npm install&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EXPOSE&lt;/strong&gt;: Exposes port 3000 to allow communication with the outside world.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENV&lt;/strong&gt;: Sets the &lt;code&gt;NODE_ENV&lt;/code&gt; to &lt;code&gt;production&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENTRYPOINT&lt;/strong&gt;: Defines the main application start command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CMD&lt;/strong&gt;: Provides default arguments to the entrypoint command (in this case, &lt;code&gt;start&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VOLUME&lt;/strong&gt;: Creates a volume at &lt;code&gt;/data&lt;/code&gt; for data persistence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;USER&lt;/strong&gt;: Switches to the &lt;code&gt;node&lt;/code&gt; user for better security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;STOPSIGNAL&lt;/strong&gt;: Sets &lt;code&gt;SIGINT&lt;/code&gt; as the stop signal, which is often used for graceful shutdowns.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Dockerfiles provide a powerful and flexible way to automate the creation of Docker images, ensuring that your applications are portable, reproducible, and easy to deploy. By understanding each Dockerfile instruction like &lt;code&gt;FROM&lt;/code&gt;, &lt;code&gt;RUN&lt;/code&gt;, &lt;code&gt;LABEL&lt;/code&gt;, &lt;code&gt;COPY&lt;/code&gt;, &lt;code&gt;ENTRYPOINT&lt;/code&gt;, &lt;code&gt;CMD&lt;/code&gt;, and others, you can build optimized, secure, and scalable images for your applications.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>aws</category>
      <category>azure</category>
    </item>
    <item>
      <title>Top 10 Docker Commands Every Developer Should Know [Chapter 1]</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Tue, 05 Nov 2024 03:50:26 +0000</pubDate>
      <link>https://dev.to/devopsbymani/top-10-docker-commands-every-developer-should-know-chapter-1-54ad</link>
      <guid>https://dev.to/devopsbymani/top-10-docker-commands-every-developer-should-know-chapter-1-54ad</guid>
      <description>&lt;p&gt;&lt;strong&gt;Docker is a powerful tool that allows developers to automate the deployment of applications inside lightweight, portable containers. These containers package the application and its dependencies into a single unit that can run consistently across different environments. To make the most of Docker, you need to be familiar with a set of core commands that help you manage containers, images, and other resources. In this blog, we’ll explore the Top 10 Docker commands you should know to become more proficient with Docker.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;1. docker --version&lt;/strong&gt;&lt;br&gt;
The first step when you're getting started with Docker is verifying its installation. You can use this command to check the installed version of Docker on your machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. docker pull&lt;/strong&gt;&lt;br&gt;
The docker pull command allows you to download Docker images from Docker Hub or other image repositories. This command pulls the specified image (in this case, ubuntu) from the default Docker registry (Docker Hub). You can pull any image, such as a specific version (ubuntu:20.04), or any public image available on Docker Hub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker pull ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. docker build&lt;/strong&gt;&lt;br&gt;
The docker build command is used to create a Docker image from a Dockerfile. Here, myimage is the tag you're giving to the image, and . represents the current directory (where your Dockerfile is located). Docker reads the Dockerfile in the directory and builds the image based on the instructions inside it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t myimage .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. docker run&lt;/strong&gt;&lt;br&gt;
The docker run command is used to create and start a container from a specified image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;-d runs the container in detached mode (in the background).&lt;/li&gt;
&lt;li&gt;-p 8080:80 maps port 8080 on your local machine to port 80 inside the container (useful for web apps).&lt;/li&gt;
&lt;li&gt;nginx is the image you're running (in this case, the Nginx web server).&lt;/li&gt;
&lt;li&gt;--name assigns a name to the container.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 8080:80 --name &amp;lt;name-of-container&amp;gt; nginx 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;5. docker ps&lt;/strong&gt;&lt;br&gt;
The docker ps command shows the running containers on your system. This command lists all the containers that are currently running, showing information such as container ID, image name, status, and the ports that are exposed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to see all containers (including stopped ones), you can use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. docker stop&lt;/strong&gt;&lt;br&gt;
The docker stop command stops a running container. You provide the container ID or name (e.g., docker stop mycontainer), and Docker stops it gracefully by sending the container's stop signal (SIGTERM by default).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker stop &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to forcefully stop it, you can use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker kill &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;7. docker rm&lt;/strong&gt;&lt;br&gt;
After stopping a container, you may want to remove it. The docker rm command allows you to delete stopped containers. This removes the container from your system. You can remove multiple containers at once by listing their IDs separated by spaces. If you want to remove a running container, you'll need to stop it first using docker stop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm &amp;lt;container_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rm &amp;lt;container_1&amp;gt; &amp;lt;container_2&amp;gt; &amp;lt;container_3&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;8. docker images&lt;/strong&gt;&lt;br&gt;
The docker images command lists all the images stored locally on your system. This shows information about the available images, such as repository, tag, image ID, creation date, and size. This command helps you keep track of the images you have locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;9. docker rmi&lt;/strong&gt;&lt;br&gt;
The docker rmi command is used to remove Docker images. This command removes the specified image from your local system. If an image is being used by a container, you'll need to stop and remove the container before you can delete the image. You can also remove multiple images by providing their IDs in the same command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker rmi &amp;lt;image_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;10. docker exec&lt;/strong&gt;&lt;br&gt;
The docker exec command runs a command inside a running container. For example, it can start an interactive Bash shell in the container. The -it flags stand for interactive (-i) and pseudo-TTY (-t), which together let you interact with the shell. This is especially useful for debugging or managing containers in real time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it &amp;lt;container_id&amp;gt; bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Mastering these 10 essential Docker commands will help you become more comfortable working with Docker containers and images. As you continue to explore Docker, you'll likely come across more advanced commands and features, but understanding the basics is crucial for building a solid foundation.&lt;/p&gt;

&lt;p&gt;By using these commands, you can manage images, containers, and other Docker resources, helping you streamline development workflows, automate testing, and deploy applications more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;See you in Chapter 2&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>cloud</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Top 20 Kubernetes Commands You Should Know [Chapter 1]</title>
      <dc:creator>Manikanta</dc:creator>
      <pubDate>Mon, 04 Nov 2024 16:43:20 +0000</pubDate>
      <link>https://dev.to/devopsbymani/top-20-kubernetes-commands-you-should-know-chapter-1-30bg</link>
      <guid>https://dev.to/devopsbymani/top-20-kubernetes-commands-you-should-know-chapter-1-30bg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipnti0n2yn8pd3b0abrz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipnti0n2yn8pd3b0abrz.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Kubernetes (K8s) is a powerful platform for managing containerized applications. Familiarity with essential Kubernetes commands can significantly improve your productivity and help you troubleshoot issues more effectively. Below, we’ll cover 20 of the most important Kubernetes commands, along with simple explanations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. kubectl get&lt;/strong&gt;&lt;br&gt;
Use this command to list resources. You can specify the resource type (like pods, services, deployments) to get a concise view of what's running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. kubectl describe&lt;/strong&gt;&lt;br&gt;
This command provides detailed information about a specific resource, including events, status, and configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. kubectl create&lt;/strong&gt;&lt;br&gt;
Use this command to create a resource from a file or directly in the command line.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f &amp;lt;file.yaml&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. kubectl apply&lt;/strong&gt;&lt;br&gt;
This command updates a resource by applying changes from a file. It can create the resource if it doesn't exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f &amp;lt;file.yaml&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. kubectl delete&lt;/strong&gt;&lt;br&gt;
Use this command to delete a resource. You can specify the resource type and name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete pod &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. kubectl logs&lt;/strong&gt;&lt;br&gt;
This command retrieves logs from a specified pod, which is useful for debugging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs &amp;lt;pod-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;7. kubectl exec&lt;/strong&gt;&lt;br&gt;
Use this command to execute a command inside a running container within a pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it &amp;lt;pod-name&amp;gt; -- /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;8. kubectl port-forward&lt;/strong&gt;&lt;br&gt;
This command allows you to forward a local port to a port on a pod, which is useful for accessing services locally.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward &amp;lt;pod-name&amp;gt; &amp;lt;local-port&amp;gt;:&amp;lt;pod-port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;9. kubectl scale&lt;/strong&gt;&lt;br&gt;
Use this command to scale a deployment up or down by specifying the desired number of replicas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale deployment &amp;lt;deployment-name&amp;gt; --replicas=&amp;lt;number&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;10. kubectl rollout&lt;/strong&gt;&lt;br&gt;
This command manages the rollout of a resource, such as deploying a new version or rolling back to a previous version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl rollout status deployment &amp;lt;deployment-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;11. kubectl get nodes&lt;/strong&gt;&lt;br&gt;
This command lists all the nodes in your Kubernetes cluster, showing their status and roles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;12. kubectl get services&lt;/strong&gt;&lt;br&gt;
Use this command to list all services in the cluster, which helps you understand how different parts of your application communicate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;13. kubectl get deployments&lt;/strong&gt;&lt;br&gt;
This command lists all deployments, providing an overview of your application’s desired state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get deployments
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;14.kubectl top&lt;/strong&gt;&lt;br&gt;
This command shows resource usage (CPU and memory) of nodes or pods, which is helpful for monitoring performance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl top pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
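&lt;p&gt;The same command works at the node level. Note that kubectl top requires the metrics-server add-on to be installed in the cluster:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl top nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;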



&lt;p&gt;&lt;strong&gt;15.kubectl config&lt;/strong&gt;&lt;br&gt;
This command allows you to modify kubeconfig settings, such as changing the current context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
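&lt;p&gt;To list the available contexts and switch to a different one:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl config get-contexts
kubectl config use-context &amp;lt;context-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;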



&lt;p&gt;&lt;strong&gt;16.kubectl create namespace&lt;/strong&gt;&lt;br&gt;
Use this command to create a new namespace in your Kubernetes cluster, which helps you organize and isolate resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace &amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
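&lt;p&gt;To list existing namespaces, or make a namespace the default for your current context so you don’t have to pass -n on every command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get namespaces
kubectl config set-context --current --namespace=&amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;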



&lt;p&gt;&lt;strong&gt;17.kubectl cp&lt;/strong&gt;&lt;br&gt;
This command copies files between your local filesystem and a pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp &amp;lt;local-file-path&amp;gt; &amp;lt;pod-name&amp;gt;:&amp;lt;remote-file-path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
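&lt;p&gt;Copying works in both directions; to copy a file from a pod back to your local filesystem, reverse the arguments:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp &amp;lt;pod-name&amp;gt;:&amp;lt;remote-file-path&amp;gt; &amp;lt;local-file-path&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;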



&lt;p&gt;&lt;strong&gt;18.kubectl get events&lt;/strong&gt;&lt;br&gt;
This command lists events in your cluster, which can help you diagnose issues by showing what’s happening behind the scenes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
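&lt;p&gt;Events are not guaranteed to be listed chronologically by default; sorting by creation timestamp usually makes troubleshooting easier:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get events --sort-by=.metadata.creationTimestamp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;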



&lt;p&gt;&lt;strong&gt;19.kubectl edit&lt;/strong&gt;&lt;br&gt;
Use this command to edit a resource in your default editor, allowing for quick changes without creating a new file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit deployment &amp;lt;deployment-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
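&lt;p&gt;kubectl edit opens the resource in the editor set by the KUBE_EDITOR (or EDITOR) environment variable, so you can choose your editor per invocation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;KUBE_EDITOR="nano" kubectl edit deployment &amp;lt;deployment-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;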



&lt;p&gt;&lt;strong&gt;20.kubectl api-resources&lt;/strong&gt;&lt;br&gt;
This command lists all API resources available in your Kubernetes cluster, helping you discover new types you might want to use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl api-resources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
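&lt;p&gt;Useful variations include filtering to namespaced resources only, or using wide output to see the verbs each resource supports:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl api-resources --namespaced=true
kubectl api-resources -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;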



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Understanding these Kubernetes commands will enhance your ability to manage and troubleshoot your applications in a Kubernetes environment. Whether you’re a beginner or an experienced user, these commands are vital for navigating the complexities of container orchestration. Happy K8s-ing!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
