<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: B.R.O.L.Y</title>
    <description>The latest articles on DEV Community by B.R.O.L.Y (@ridwaneelfilali).</description>
    <link>https://dev.to/ridwaneelfilali</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F976726%2Ff63e2401-4752-4702-84e4-b33117059b01.png</url>
      <title>DEV Community: B.R.O.L.Y</title>
      <link>https://dev.to/ridwaneelfilali</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ridwaneelfilali"/>
    <language>en</language>
    <item>
      <title>Mastering Kubernetes Step by Step Part 2 Pods and Containers Explained</title>
      <dc:creator>B.R.O.L.Y</dc:creator>
      <pubDate>Fri, 24 Oct 2025 13:05:25 +0000</pubDate>
      <link>https://dev.to/ridwaneelfilali/mastering-kubernetes-step-by-step-part-2-pods-and-containers-explained-1pj1</link>
      <guid>https://dev.to/ridwaneelfilali/mastering-kubernetes-step-by-step-part-2-pods-and-containers-explained-1pj1</guid>
      <description>&lt;h2&gt;
  
  
  Hands On: Getting Started
&lt;/h2&gt;

&lt;p&gt;Let's dive right in and get practical experience before explaining the theory. First, I recommend installing &lt;strong&gt;Docker Desktop&lt;/strong&gt; so we can run example clusters locally on a single node.&lt;/p&gt;

&lt;p&gt;Once installed, make sure to check the settings and enable the Kubernetes cluster option. After that, we'll use &lt;strong&gt;kubectl&lt;/strong&gt;, the command-line interface for Kubernetes (we'll cover this in detail later).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-8xqmk   1/1     Running   0          2d
nginx-deployment-66b6c48dd5-k9pzx   1/1     Running   0          2d
redis-master-f46ff57fd-7jq8w        1/1     Running   0          5d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You now have a Kubernetes cluster running on your laptop, and &lt;code&gt;kubectl&lt;/code&gt; can list the Pods running in it (add &lt;code&gt;--all-namespaces&lt;/code&gt; to see the system Pods in other namespaces too).&lt;/p&gt;
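&lt;p&gt;If you want to double-check the cluster itself before going further, the commands below are a quick sanity check (they assume a running Docker Desktop cluster; names and versions will vary by installation):&lt;/p&gt;

```shell
# List cluster nodes - Docker Desktop runs a single node
kubectl get nodes

# Confirm kubectl is pointed at the right cluster
kubectl config current-context
```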




&lt;h2&gt;
  
  
  Understanding Pods
&lt;/h2&gt;

&lt;p&gt;Kubernetes uses pods as its fundamental deployment unit for several important reasons: they provide an abstraction layer over different workload types, enable resource sharing between containers, augment apps with features like health probes and restart policies, and unlock advanced scheduling capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every application runs inside a pod on Kubernetes.&lt;/strong&gt; Here's what that means in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you deploy an app, you deploy it in a Pod&lt;/li&gt;
&lt;li&gt;When you terminate an app, you terminate its Pod&lt;/li&gt;
&lt;li&gt;When you scale an app up, you add more Pods&lt;/li&gt;
&lt;li&gt;When you scale an app down, you remove Pods&lt;/li&gt;
&lt;li&gt;When you update an app, you replace Pods with new ones&lt;/li&gt;
&lt;/ul&gt;
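&lt;p&gt;To make that concrete, here's a minimal single-container Pod manifest (the name, labels, and image are illustrative, not from this article):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.21      # any container image works here
    ports:
    - containerPort: 80
```

&lt;p&gt;Everything you do to the app - deploying, scaling, updating - happens by creating, deleting, or replacing Pods like this one.&lt;/p&gt;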




&lt;h2&gt;
  
  
  Pods as an Abstraction Layer
&lt;/h2&gt;

&lt;p&gt;Pods abstract away the complexity of different workload types. This powerful design means you can run containers, VMs, serverless functions, and WebAssembly apps inside pods, and Kubernetes treats them all the same way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The benefits of this abstraction:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes can focus on deploying and managing Pods without needing to understand what's running inside them&lt;/li&gt;
&lt;li&gt;Different types of workloads can run side-by-side on the same cluster&lt;/li&gt;
&lt;li&gt;All workloads leverage the full power of the declarative Kubernetes API&lt;/li&gt;
&lt;li&gt;Every workload benefits from Pod features like health checks, restart policies, and resource limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How different workloads use Pods:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Containers and WebAssembly apps work directly with standard Pods, standard workload controllers, and standard runtimes. However, serverless functions and VMs need additional components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Serverless functions&lt;/strong&gt; run in standard Pods but require frameworks like Knative to extend the Kubernetes API with custom resources and controllers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual machines&lt;/strong&gt; need tools like KubeVirt to extend the API and run VMs as pod-like resources (called VirtualMachineInstances or VMIs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0mbdeoiy7zcbewo5nq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0mbdeoiy7zcbewo5nq3.png" alt="pod workloads" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above shows four different workload types running on the same cluster. Each workload is wrapped in a Pod (or VMI), managed by a controller, and uses a standard runtime interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Pods add to your workloads:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource sharing between containers&lt;/li&gt;
&lt;li&gt;Advanced scheduling capabilities&lt;/li&gt;
&lt;li&gt;Application health probes&lt;/li&gt;
&lt;li&gt;Restart policies&lt;/li&gt;
&lt;li&gt;Security policies&lt;/li&gt;
&lt;li&gt;Termination control&lt;/li&gt;
&lt;li&gt;Volume management&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How Pods Enable Resource Sharing
&lt;/h2&gt;

&lt;p&gt;A Pod can run one or more containers, and all containers within the same Pod share the Pod's execution environment. This shared environment includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared volumes&lt;/strong&gt; (mount namespace) - each container keeps its own root filesystem, but Pod volumes can be mounted into any of its containers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared network stack&lt;/strong&gt; (network namespace) - all containers share the same IP address and port space&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared memory&lt;/strong&gt; (IPC namespace)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared process tree&lt;/strong&gt; (PID namespace) - optional, enabled with &lt;code&gt;shareProcessNamespace: true&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared hostname&lt;/strong&gt; (UTS namespace)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj30xz3mra8r5q4pajop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj30xz3mra8r5q4pajop.png" alt="graph" width="792" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-world example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine a Pod running at IP address 10.0.10.15 with two containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A main application container listening on port 8080&lt;/li&gt;
&lt;li&gt;A sidecar container listening on port 5005&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;External clients access both containers using the Pod's single IP address (10.0.10.15), but on different ports. Inside the Pod, the containers can communicate with each other using &lt;code&gt;localhost&lt;/code&gt; since they share the same network namespace.&lt;/p&gt;

&lt;p&gt;Both containers can also mount the same volume to share data. For example, the sidecar might sync static content from a Git repository and store it in a shared volume, while the main container reads that content and serves it as a web page.&lt;/p&gt;
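&lt;p&gt;A hedged sketch of the Pod described above - one main container on port 8080 and a sidecar on 5005 sharing the Pod's network namespace (all names and images are hypothetical):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar        # hypothetical name
spec:
  containers:
  - name: main-app
    image: example/web:1.0      # placeholder image
    ports:
    - containerPort: 8080       # reachable at pod-ip:8080
  - name: sidecar
    image: example/helper:1.0   # placeholder image
    ports:
    - containerPort: 5005       # same Pod IP, different port
```

&lt;p&gt;Inside this Pod, main-app can reach the sidecar at &lt;code&gt;localhost:5005&lt;/code&gt; and vice versa.&lt;/p&gt;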




&lt;h2&gt;
  
  
  Pods and Scheduling
&lt;/h2&gt;

&lt;p&gt;Kubernetes guarantees that all containers in the same Pod will be scheduled to the same cluster node. However, you should only group containers in the same Pod if they truly need to share resources like memory, volumes, or networking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important principle:&lt;/strong&gt; If you just want two applications to run on the same node (without resource sharing), place them in separate Pods and use scheduling features to co-locate them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advanced Scheduling Features
&lt;/h3&gt;

&lt;p&gt;Pods provide powerful scheduling capabilities:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. &lt;strong&gt;Node Selectors&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The simplest way to control Pod placement. You provide a list of node labels, and the scheduler only assigns the Pod to nodes that have all those labels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;disktype&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ssd&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. &lt;strong&gt;Affinity and Anti-Affinity&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;More powerful than node selectors, these rules give you fine-grained control over Pod placement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The basics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Affinity rules&lt;/strong&gt; attract Pods to specific nodes or other Pods&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anti-affinity rules&lt;/strong&gt; repel Pods away from specific nodes or other Pods&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hard rules&lt;/strong&gt; (requiredDuringScheduling) must be satisfied - the Pod won't schedule if they can't be met&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft rules&lt;/strong&gt; (preferredDuringScheduling) are preferences - the scheduler tries to honor them but will still schedule the Pod if it can't&lt;/li&gt;
&lt;/ul&gt;
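&lt;p&gt;As a hedged sketch, here's what a hard node-affinity rule and a soft Pod anti-affinity rule look like together in a Pod spec (labels and values are illustrative):&lt;/p&gt;

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:  # soft rule
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web                                  # illustrative Pod label
          topologyKey: kubernetes.io/hostname           # spread away from Pods on the same node
```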

&lt;p&gt;&lt;strong&gt;Node affinity example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9v1elfslrdvqgfm4gxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9v1elfslrdvqgfm4gxa.png" alt="graph" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This works like a node selector - you provide labels, and the scheduler assigns the Pod only to nodes with those labels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod affinity example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0e45u5lhm5k1s6tn0hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0e45u5lhm5k1s6tn0hf.png" alt="graph" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Pod affinity, you provide labels of other Pods, and the scheduler places your Pod on nodes that are already running Pods with those labels. This is useful when you want related services to run close together for better performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anti-affinity example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy1uzwd4xzmevitf1xa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy1uzwd4xzmevitf1xa0.png" alt="graph" width="800" height="341"&gt;&lt;/a&gt;&lt;br&gt;
Anti-affinity ensures your Pods spread out. For example, you might use anti-affinity to ensure database replicas run on different nodes for high availability.&lt;/p&gt;
&lt;h4&gt;
  
  
  3. &lt;strong&gt;Topology Spread Constraints&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;These rules distribute Pods evenly across failure domains such as zones, regions, or nodes, so that losing one domain doesn't take down all replicas at once.&lt;/p&gt;
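&lt;p&gt;A hedged sketch of a topology spread constraint that balances Pods across availability zones (the &lt;code&gt;app: web&lt;/code&gt; label is illustrative):&lt;/p&gt;

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                  # max Pod-count difference between zones
    topologyKey: topology.kubernetes.io/zone    # the failure domain to spread across
    whenUnsatisfiable: DoNotSchedule            # hard rule; use ScheduleAnyway for a soft rule
    labelSelector:
      matchLabels:
        app: web                                # illustrative label
```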
&lt;h4&gt;
  
  
  4. &lt;strong&gt;Resource Requests and Limits&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;You can specify how much CPU and memory your Pod needs (requests) and the maximum it can use (limits). The scheduler uses this information to place Pods on nodes with sufficient resources.&lt;/p&gt;
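&lt;p&gt;For example, a container section with requests and limits might look like this (values are illustrative):&lt;/p&gt;

```yaml
spec:
  containers:
  - name: web
    image: nginx:1.21
    resources:
      requests:          # the scheduler uses these to pick a node
        cpu: 250m        # a quarter of a CPU core
        memory: 128Mi
      limits:            # the container is throttled or OOM-killed beyond these
        cpu: 500m
        memory: 256Mi
```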
&lt;h2&gt;
  
  
  Deploying Pods
&lt;/h2&gt;

&lt;p&gt;Deploying a Pod involves a carefully orchestrated series of steps across multiple Kubernetes components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Define the Pod&lt;/strong&gt; in a YAML manifest file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post the manifest&lt;/strong&gt; to the API server&lt;/li&gt;
&lt;li&gt;The request is &lt;strong&gt;authenticated and authorized&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;Pod spec is validated&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;scheduler filters nodes&lt;/strong&gt; based on nodeSelectors, affinity and anti-affinity rules, topology spread constraints, resource requirements and limits, and more&lt;/li&gt;
&lt;li&gt;The Pod is &lt;strong&gt;assigned to a healthy node&lt;/strong&gt; meeting all requirements&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;kubelet on the node&lt;/strong&gt; watches the API server and notices the Pod assignment&lt;/li&gt;
&lt;li&gt;The kubelet &lt;strong&gt;downloads the Pod spec&lt;/strong&gt; and asks the local container runtime to start it&lt;/li&gt;
&lt;li&gt;The kubelet &lt;strong&gt;monitors the Pod status&lt;/strong&gt; and reports status changes back to the API server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the scheduler can't find a suitable node, it marks the Pod as &lt;strong&gt;pending&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Deploying a Pod is an &lt;strong&gt;atomic operation&lt;/strong&gt; - it either completes fully or not at all. A Pod only starts servicing requests once all of its containers are up and running; it's never considered deployed in a partial state.&lt;/p&gt;
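&lt;p&gt;The steps above can be followed along from the command line (the manifest filename and Pod name are hypothetical, and these commands assume a running cluster):&lt;/p&gt;

```shell
# Post the manifest to the API server
kubectl apply -f pod.yml

# Watch the Pod move from Pending to Running
kubectl get pods --watch

# Inspect scheduling decisions and events for a specific Pod
kubectl describe pod hello-pod
```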
&lt;h3&gt;
  
  
  Pod Deployment Flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feskwqecluiloxikblulo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feskwqecluiloxikblulo.png" alt="deployment process" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram shows the complete journey from running a &lt;code&gt;kubectl&lt;/code&gt; command to having a Pod running on a node. Each component plays a specific role in ensuring the Pod is deployed correctly and securely.&lt;/p&gt;


&lt;h2&gt;
  
  
  Pod Lifecycle
&lt;/h2&gt;

&lt;p&gt;Pods are designed to be &lt;strong&gt;mortal&lt;/strong&gt; and &lt;strong&gt;immutable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mortal&lt;/strong&gt; means a Pod has a one-way lifecycle: you create it, it runs, and eventually it terminates. Whether it completes its task successfully or fails, the terminated Pod is deleted and cannot be brought back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Immutable&lt;/strong&gt; means you cannot modify them after they're deployed. This can be a huge mindset change if you're from a traditional background where you regularly patched live servers and logged on to them to make fixes and configuration changes. If you need to change a Pod, you create a new one with the changes, delete the old one, and replace it with the new one. If a Pod needs to store data, you should attach a volume and store the data in the volume so it's not lost when the Pod is deleted.&lt;/p&gt;
&lt;h3&gt;
  
  
  A Typical Pod Lifecycle
&lt;/h3&gt;

&lt;p&gt;You define a Pod in a declarative YAML object that you post to the API server. It goes into the &lt;strong&gt;pending&lt;/strong&gt; phase while the scheduler finds a node to run it on. Assuming it finds a node, the Pod gets scheduled, and the local kubelet instructs the container runtime to start its containers. Once all of its containers are running, the Pod enters the &lt;strong&gt;running&lt;/strong&gt; phase. It remains in the running phase indefinitely if it's a long-lived Pod, such as a web server.&lt;/p&gt;

&lt;p&gt;If it's a short-lived Pod, such as a batch job, it enters the &lt;strong&gt;succeeded&lt;/strong&gt; phase as soon as all of its containers complete their tasks successfully.&lt;/p&gt;
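&lt;p&gt;You can inspect a Pod's current phase directly (the Pod name is hypothetical; requires a running cluster):&lt;/p&gt;

```shell
# Prints Pending, Running, Succeeded, or Failed
kubectl get pod hello-pod -o jsonpath='{.status.phase}'
```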

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhduhtgoydbipkohz8e5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhduhtgoydbipkohz8e5i.png" alt="popo" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiycppvnit3ir9xzdns69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiycppvnit3ir9xzdns69.png" alt="pod deployment" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A note on running VMs on Kubernetes:&lt;/strong&gt; VMs are designed to be mutable and long-lived. For example, you can restart them, change their configurations, and even migrate them. This is very different from the design goals of Pods, which is why KubeVirt wraps VMs in a modified Pod-like resource called a VirtualMachineInstance (VMI) and manages them using custom workload controllers.&lt;/p&gt;


&lt;h3&gt;
  
  
  Restart Policies
&lt;/h3&gt;

&lt;p&gt;Earlier, we said Pods augment apps with restart policies. However, these policies apply to &lt;strong&gt;individual containers&lt;/strong&gt;, not the actual Pod.&lt;/p&gt;

&lt;p&gt;Let's consider some scenarios:&lt;/p&gt;

&lt;p&gt;You use a Deployment controller to schedule a Pod to a node, and the node fails. When this happens, the Deployment controller notices the failed node, deletes the Pod, and replaces it with a &lt;strong&gt;new one&lt;/strong&gt; on a surviving node. Even though the new Pod is based on the same Pod spec, it has a new UID, a new IP address, and no state from the previous Pod.&lt;/p&gt;

&lt;p&gt;The same thing happens when nodes evict Pods during node maintenance or due to resource constraints — the evicted Pod is deleted and replaced with a new one on another node.&lt;/p&gt;

&lt;p&gt;This pattern even applies during scaling operations, updates, and rollbacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scaling down &lt;strong&gt;deletes&lt;/strong&gt; Pods&lt;/li&gt;
&lt;li&gt;Scaling up &lt;strong&gt;creates new&lt;/strong&gt; Pods&lt;/li&gt;
&lt;li&gt;Updates &lt;strong&gt;replace&lt;/strong&gt; old Pods with new ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The key takeaway:&lt;/strong&gt; Anytime we say we're "updating" or "restarting" Pods, we really mean &lt;strong&gt;replacing them with new ones&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Although Kubernetes can't restart Pods, it can definitely restart &lt;strong&gt;containers&lt;/strong&gt;. This is always done by the local kubelet and governed by the &lt;code&gt;spec.restartPolicy&lt;/code&gt; field, which can be set to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Always&lt;/strong&gt; - Restart the container whenever it exits, even if it exited successfully&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never&lt;/strong&gt; - Never attempt to restart a container&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OnFailure&lt;/strong&gt; - Only restart if the container fails (not if it completes successfully)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The policy is Pod-wide, meaning it applies to all containers in the Pod (init containers are the exception: they treat &lt;code&gt;Always&lt;/code&gt; as &lt;code&gt;OnFailure&lt;/code&gt; - more on those later).&lt;/p&gt;
&lt;h4&gt;
  
  
  Choosing the Right Restart Policy
&lt;/h4&gt;

&lt;p&gt;The restart policy you choose depends on the nature of your application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-lived containers&lt;/strong&gt; host apps such as web servers, databases, and message queues that run indefinitely. If they fail, you want to restart them, so you'll typically use the &lt;strong&gt;Always&lt;/strong&gt; restart policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-server&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Always&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.21&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Short-lived containers&lt;/strong&gt; typically run batch-style workloads that execute a task through to completion. Most of the time, you're happy when they complete successfully, and you only want to restart them if they fail. As such, you'll probably use the &lt;strong&gt;OnFailure&lt;/strong&gt; restart policy. If you don't care whether they fail or succeed, use the &lt;strong&gt;Never&lt;/strong&gt; policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch-job&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnFailure&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data-processor&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch-processor:1.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remember:&lt;/strong&gt; Kubernetes never restarts Pods — when they fail, get scaled, or get updated, Kubernetes always deletes old Pods and creates new ones. However, Kubernetes can restart individual containers within a Pod on the same node.&lt;/p&gt;




&lt;h2&gt;
  
  
  Static Pods vs Controllers
&lt;/h2&gt;

&lt;p&gt;There are two ways to deploy Pods:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Directly via a Pod manifest&lt;/strong&gt; (rare)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Indirectly via a workload resource and controller&lt;/strong&gt; (most common)&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Static Pods
&lt;/h3&gt;

&lt;p&gt;Deploying directly from a Pod manifest creates a &lt;strong&gt;static Pod&lt;/strong&gt; (or an unmanaged singleton Pod, if you post the manifest to the API server) that cannot self-heal, scale, or perform rolling updates. Static Pods are only managed by the kubelet on the node they're running on, and kubelets are limited to restarting containers on the same node. If the node fails, the kubelet fails with it and cannot do anything to help the Pod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e72754mzhwhli8zkn7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e72754mzhwhli8zkn7k.png" alt="popo" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Controller-Managed Pods
&lt;/h3&gt;

&lt;p&gt;On the flip side, Pods deployed via &lt;strong&gt;workload resources&lt;/strong&gt; (like Deployments, StatefulSets, or DaemonSets) get all the benefits of being managed by a highly available controller that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace failed Pods with new ones on other nodes if a node fails&lt;/li&gt;
&lt;li&gt;Scale Pods when demand changes&lt;/li&gt;
&lt;li&gt;Perform advanced operations such as rolling updates and versioned rollbacks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The local kubelet can still attempt to restart failed containers, but if the node fails or the Pod gets evicted, the controller replaces the Pod with a new one on a different node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F794yylovwcig4j3xcbpp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F794yylovwcig4j3xcbpp.png" alt="popo" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The controller runs in the Kubernetes control plane and constantly watches the state of your Pods. If reality doesn't match your desired state, it takes action to fix it.&lt;/p&gt;
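&lt;p&gt;As a hedged sketch, here's a minimal Deployment that asks a controller to keep three replicas of a Pod running (names and image are illustrative):&lt;/p&gt;

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy            # illustrative name
spec:
  replicas: 3                 # desired state: three Pods at all times
  selector:
    matchLabels:
      app: web
  template:                   # the Pod template the controller stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```

&lt;p&gt;If a node dies and takes a Pod with it, the Deployment controller notices only two replicas remain and creates a new Pod on a surviving node.&lt;/p&gt;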




&lt;h2&gt;
  
  
  The Pod Network
&lt;/h2&gt;

&lt;p&gt;Every Kubernetes cluster runs a &lt;strong&gt;pod network&lt;/strong&gt; and automatically connects all Pods to it. It's usually a flat overlay network that spans every cluster node and allows every Pod to talk directly to every other Pod, even if the remote Pod is on a different cluster node.&lt;/p&gt;

&lt;p&gt;The pod network is implemented by a third-party plugin that interfaces with Kubernetes and configures the network via the &lt;strong&gt;Container Network Interface (CNI)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You choose a network plugin at cluster build time, and it configures the Pod network for the entire cluster. Many plugins exist, each with its own pros and cons. However, at the time of writing, &lt;strong&gt;Cilium&lt;/strong&gt; is the most popular and implements advanced features such as security policies and observability.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Pod Network Works
&lt;/h3&gt;

&lt;p&gt;The Pod network creates a unified network space where every Pod gets its own unique IP address, and all Pods can communicate directly without NAT (Network Address Translation).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkechsysxv6a2r5j9f98o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkechsysxv6a2r5j9f98o.png" alt="popo" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key characteristics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each Pod gets a unique IP from the pod network CIDR range (e.g., 10.244.0.0/16)&lt;/li&gt;
&lt;li&gt;Pods can communicate with any other Pod using its IP address&lt;/li&gt;
&lt;li&gt;The pod network spans all nodes in the cluster&lt;/li&gt;
&lt;li&gt;Node IPs (192.168.x.x) are separate from Pod IPs (10.244.x.x)&lt;/li&gt;
&lt;/ul&gt;
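&lt;p&gt;You can see these Pod IPs for yourself with the &lt;code&gt;-o wide&lt;/code&gt; flag (requires a running cluster; the addresses you see depend on your CNI plugin's configuration):&lt;/p&gt;

```shell
# Shows each Pod's IP address and the node it's scheduled on
kubectl get pods -o wide
```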

&lt;h3&gt;
  
  
  Network Configuration Example
&lt;/h3&gt;

&lt;p&gt;When a Pod is created, the CNI plugin performs these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Allocates an IP address&lt;/strong&gt; from the pod network range&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creates a virtual ethernet pair&lt;/strong&gt; (veth pair) - one end in the Pod's network namespace, one end on the node&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configures routing&lt;/strong&gt; so the Pod can reach other Pods and external networks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sets up network policies&lt;/strong&gt; if defined (firewall rules for Pod-to-Pod traffic)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vhx7qdtq3w57rhs32ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vhx7qdtq3w57rhs32ua.png" alt="popo" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-world example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Imagine you have a three-tier application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend Pods&lt;/strong&gt; (10.244.1.x) on Node 1&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend API Pods&lt;/strong&gt; (10.244.2.x) on Node 2
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Pods&lt;/strong&gt; (10.244.3.x) on Node 3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The frontend Pods can directly call the backend API using its Pod IP (10.244.2.8:8080) even though they're on different physical nodes. The CNI plugin handles all the routing transparently using overlay networking (typically VXLAN or similar encapsulation).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4w6hp4uxstflm443vb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy4w6hp4uxstflm443vb8.png" alt="popo" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above shows three nodes running five Pods. All five Pods are connected to the pod network and can communicate with each other. You can also see the Pod network spanning all three nodes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important distinction:&lt;/strong&gt; The network is only for Pods, not nodes. As shown in the diagram, you can connect nodes to multiple different networks (management network, storage network, etc.), but the Pod network spans them all, creating a unified communication layer for your applications.&lt;/p&gt;


&lt;h2&gt;
  
  
  Multi-container Pods
&lt;/h2&gt;

&lt;p&gt;Multi-container Pods are a powerful pattern and very popular in real-world deployments.&lt;/p&gt;

&lt;p&gt;According to microservices design patterns, every container should have a single clearly defined responsibility. For example, an application that syncs content from a repository and serves it as a web page has two distinct responsibilities:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sync the content&lt;/li&gt;
&lt;li&gt;Serve the web page&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should design this app with two microservices and give each one its own container — one container responsible for syncing the content and the other responsible for serving it. We call this &lt;strong&gt;separation of concerns&lt;/strong&gt;, or the &lt;strong&gt;single responsibility principle&lt;/strong&gt;, and it keeps containers small and simple, encourages reuse, and makes troubleshooting easier.&lt;/p&gt;

&lt;p&gt;Most of the time, you'll put application containers in their own Pods and they'll communicate over the network. However, sometimes putting them in the same Pod is beneficial. Sticking with the sync and serve example, putting the containers in the same Pod allows the sync container to pull content from a remote system and store it in a shared volume where the web container can read it and serve it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw9dcnmsiqr4hjtq1alm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw9dcnmsiqr4hjtq1alm.png" alt="popo" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes has two main patterns for multi-container Pods: &lt;strong&gt;init containers&lt;/strong&gt; and &lt;strong&gt;sidecar containers&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Multi-container Pods: Init Containers
&lt;/h2&gt;

&lt;p&gt;Init containers are a special type of container defined in the Kubernetes API. You run them in the same Pod as application containers, but Kubernetes guarantees they'll start and complete &lt;strong&gt;before&lt;/strong&gt; the main app container starts. It also guarantees they'll only run once.&lt;/p&gt;

&lt;p&gt;The purpose of init containers is to prepare and initialize the environment so it's ready for application containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-world Examples
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example 1: Waiting for a Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have an application that should only start when a remote API is accepting connections. Instead of complicating the main application with the logic to check the remote API, you run that logic in an init container in the same Pod. When you deploy the Pod, the init container comes up first and sends requests to the remote API waiting for it to respond. While this is happening, the main app container cannot start. However, as soon as the remote API accepts a request, the init container completes, and the main app container starts.&lt;/p&gt;
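&lt;p&gt;A minimal sketch of this pattern (the Service name &lt;code&gt;remote-api&lt;/code&gt;, the Pod name, and the app image are placeholders, not from a real deployment):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wait-demo                  # hypothetical Pod name
spec:
  initContainers:
  - name: wait-for-api             # must run to completion before the app container starts
    image: busybox:1.36
    command: ['sh', '-c', 'until nslookup remote-api; do echo waiting; sleep 2; done']
  containers:
  - name: app
    image: image-name              # placeholder for your main application image
```

&lt;p&gt;The kubelet runs &lt;code&gt;wait-for-api&lt;/code&gt; first; only after it exits successfully does the &lt;code&gt;app&lt;/code&gt; container start.&lt;/p&gt;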

&lt;p&gt;&lt;strong&gt;Example 2: Content Preparation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have another application that needs a one-time clone of a remote repository before starting. Again, instead of bloating and complicating the main application with the code to clone and prepare the content (knowledge of the remote server address, certificates, auth, file sync protocol, checksum verifications, etc.), you implement that in an init container that is guaranteed to complete the task before the main application container starts.&lt;/p&gt;
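&lt;p&gt;A sketch of the clone example, using a shared volume to hand the content over to the main container (the repository URL, names, and images are placeholders):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: clone-demo                 # hypothetical Pod name
spec:
  initContainers:
  - name: clone-repo               # one-time clone before the app starts
    image: alpine/git
    args: ['clone', 'https://github.com/example/content.git', '/data']
    volumeMounts:
    - name: content
      mountPath: /data
  containers:
  - name: app
    image: image-name              # placeholder for your main application image
    volumeMounts:
    - name: content
      mountPath: /usr/share/content
  volumes:
  - name: content
    emptyDir: {}                   # shared scratch volume, lives as long as the Pod
```

&lt;p&gt;The init container writes into the shared &lt;code&gt;emptyDir&lt;/code&gt; volume, and the app container reads the cloned content from its own mount point.&lt;/p&gt;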

&lt;h3&gt;
  
  
  Init Container Lifecycle
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe30mpg4an9fq0txgg25h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe30mpg4an9fq0txgg25h.png" alt="popo" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A drawback of init containers&lt;/strong&gt; is that they're limited to running tasks before the main app container starts. For something that runs alongside the main app container, you need a sidecar container.&lt;/p&gt;




&lt;h2&gt;
  
  
  Multi-container Pods: Sidecars
&lt;/h2&gt;

&lt;p&gt;Sidecar containers are regular containers that run at the same time as application containers for the &lt;strong&gt;entire lifecycle&lt;/strong&gt; of the Pod.&lt;/p&gt;

&lt;p&gt;Unlike init containers, sidecars were not originally a special resource in the Kubernetes API; the pattern was implemented with regular containers. Kubernetes has since formalized it: native sidecars are written as init containers with &lt;code&gt;restartPolicy: Always&lt;/code&gt;, introduced as alpha in v1.28, enabled by default in v1.29, and stable since v1.33.&lt;/p&gt;

&lt;p&gt;The job of a sidecar container is to add functionality to an app without having to implement it in the actual app. Common examples include sidecars that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scrape and ship logs&lt;/li&gt;
&lt;li&gt;Sync remote content&lt;/li&gt;
&lt;li&gt;Broker connections&lt;/li&gt;
&lt;li&gt;Transform or munge data&lt;/li&gt;
&lt;li&gt;Provide encryption and decryption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They're also heavily used by service meshes where the sidecar intercepts network traffic and provides traffic encryption and telemetry.&lt;/p&gt;
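&lt;p&gt;As a sketch, here's a log-shipping sidecar implemented as a regular container sharing a volume with the app (all names and images are placeholders):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo               # hypothetical Pod name
spec:
  containers:
  - name: app                      # main application container, writes logs to the volume
    image: image-name
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper              # sidecar: runs for the Pod's entire lifecycle
    image: log-shipper-image       # placeholder for a real log agent image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true               # the sidecar only reads what the app writes
  volumes:
  - name: logs
    emptyDir: {}
```

&lt;p&gt;Both containers mount the same &lt;code&gt;emptyDir&lt;/code&gt; volume: the app writes its logs there, and the sidecar reads and ships them for as long as the Pod runs.&lt;/p&gt;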

&lt;p&gt;The figure below shows a multi-container Pod with a main app container and a service mesh sidecar. The sidecar intercepts all network traffic and provides encryption and decryption. It also sends telemetry data to the service mesh control plane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3ld3p18k7vumsq887q8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3ld3p18k7vumsq887q8.png" alt="popo" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Experimentation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Basic Pod Manifest
&lt;/h3&gt;

&lt;p&gt;A typical Pod manifest file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-pod&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-ctr&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nigelpoulton/k8sbook:1.0&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;128Mi&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down each section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;kind&lt;/strong&gt;: Tells Kubernetes what type of object you're defining. This one defines a Pod, but if you were defining a Deployment, the kind field would say &lt;code&gt;Deployment&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;apiVersion&lt;/strong&gt;: Tells Kubernetes what version of the API to use when creating the object. This manifest uses the &lt;code&gt;v1&lt;/code&gt; version of the API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;metadata&lt;/strong&gt;: Names the Pod &lt;code&gt;hello-pod&lt;/code&gt; and gives it two labels. You'll use labels in future chapters to connect the Pod to a Service for networking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;spec&lt;/strong&gt;: Where most of the action happens. This example defines a single-container Pod with an application container called &lt;code&gt;hello-ctr&lt;/code&gt;. The container is based on the &lt;code&gt;nigelpoulton/k8sbook:1.0&lt;/code&gt; image, listens on port 8080, and is limited to a maximum of 128Mi of memory and half a CPU.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make it a multi-container Pod, you simply add more entries to the &lt;code&gt;spec.containers&lt;/code&gt; list.&lt;/p&gt;




&lt;h2&gt;
  
  
  Deploying Pods from a Manifest File
&lt;/h2&gt;

&lt;p&gt;Run the following &lt;code&gt;kubectl apply&lt;/code&gt; command to deploy the Pod. The command sends the &lt;code&gt;pod.yml&lt;/code&gt; file to the API server defined in the current context of your kubeconfig file. It also attaches credentials from your kubeconfig file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod.yml
pod/hello-pod created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although the output says the Pod is created, it might still be pulling the image and starting the container.&lt;/p&gt;

&lt;p&gt;Run a &lt;code&gt;kubectl get pods&lt;/code&gt; to check the status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME        READY   STATUS              RESTARTS   AGE
hello-pod   0/1     ContainerCreating   0          9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Pod in the example isn't fully created yet — the &lt;code&gt;READY&lt;/code&gt; column shows zero containers ready, and the &lt;code&gt;STATUS&lt;/code&gt; column shows why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Kubernetes automatically pulls (downloads) images from Docker Hub. To use another registry, just add the registry's URL before the image name in the YAML file.&lt;/p&gt;

&lt;p&gt;Once the &lt;code&gt;READY&lt;/code&gt; column shows &lt;code&gt;1/1&lt;/code&gt; and the &lt;code&gt;STATUS&lt;/code&gt; column shows &lt;code&gt;Running&lt;/code&gt;, your Pod will be running on a healthy cluster node and monitored by the node's kubelet.&lt;/p&gt;

&lt;p&gt;You'll see how to connect to the app and test it in future chapters.&lt;/p&gt;




&lt;h2&gt;
  
  
  Inspecting Pods
&lt;/h2&gt;

&lt;p&gt;You've already run a &lt;code&gt;kubectl get pods&lt;/code&gt; command and seen that it returns a single line of basic info. However, the following flags provide much more information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;-o wide&lt;/strong&gt;: Gives a few more columns but is still a single line of output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-o yaml&lt;/strong&gt;: Gets you everything Kubernetes knows about the object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following example shows the output of &lt;code&gt;kubectl get pods&lt;/code&gt; with the &lt;code&gt;-o yaml&lt;/code&gt; flag. The output is truncated, but notice how it's divided into two main parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;spec&lt;/strong&gt;: Shows the desired state of the object&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;status&lt;/strong&gt;: Shows the observed state
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods hello-pod &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;:&lt;span class="s2"&gt;"v1"&lt;/span&gt;,&lt;span class="s2"&gt;"kind"&lt;/span&gt;:&lt;span class="s2"&gt;"Pod"&lt;/span&gt;...&lt;span class="o"&gt;}&lt;/span&gt;
  name: hello-pod
  namespace: default
spec:                           &lt;span class="c"&gt;# Desired state&lt;/span&gt;
  containers:
  - image: image-name
    imagePullPolicy: IfNotPresent
    name: hello-ctr
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
  restartPolicy: Always
status:                         &lt;span class="c"&gt;# Observed state&lt;/span&gt;
  conditions:
  - lastProbeTime: null
    lastTransitionTime: &lt;span class="s2"&gt;"2024-01-03T18:21:51Z"&lt;/span&gt;
    status: &lt;span class="s2"&gt;"True"&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt;: Initialized
  - lastProbeTime: null
    lastTransitionTime: &lt;span class="s2"&gt;"2024-01-03T18:22:05Z"&lt;/span&gt;
    status: &lt;span class="s2"&gt;"True"&lt;/span&gt;
    &lt;span class="nb"&gt;type&lt;/span&gt;: Ready
  containerStatuses:
  - containerID: containerd://abc123...
    image: image-name
    name: hello-ctr
    ready: &lt;span class="nb"&gt;true
    &lt;/span&gt;state:
      running:
        startedAt: &lt;span class="s2"&gt;"2024-01-03T18:22:04Z"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  kubectl describe
&lt;/h2&gt;

&lt;p&gt;Another great command is &lt;code&gt;kubectl describe&lt;/code&gt;. This gives you a nicely formatted overview of an object, including lifecycle events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod hello-pod
Name:         hello-pod
Namespace:    default
Labels:       &lt;span class="nv"&gt;version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1
              &lt;span class="nv"&gt;zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod
Status:       Running
IP:           10.1.0.103
Containers:
  hello-ctr:
    Container ID:   containerd://ec0c3e...
    Image:          image-name
    Port:           8080/TCP
    State:          Running
      Started:      Wed, 03 Jan 2024 18:22:04 +0000
    Ready:          True
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Events:
  Type    Reason     Age    From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;    &lt;span class="nt"&gt;------&lt;/span&gt;     &lt;span class="nt"&gt;----&lt;/span&gt;   &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal  Scheduled  5m30s  default-scheduler  Successfully assigned default/hello-pod to node-1
  Normal  Pulling    5m30s  kubelet            Pulling image &lt;span class="s2"&gt;"nigelpoulton/k8sbook:1.0"&lt;/span&gt;
  Normal  Pulled     5m8s   kubelet            Successfully pulled image &lt;span class="s2"&gt;"nigelpoulton/k8sbook:1.0"&lt;/span&gt;
  Normal  Created    5m8s   kubelet            Created container hello-ctr
  Normal  Started    5m8s   kubelet            Started container hello-ctr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  kubectl logs
&lt;/h2&gt;

&lt;p&gt;You can use the &lt;code&gt;kubectl logs&lt;/code&gt; command to pull the logs from any container in a Pod. The basic format of the command is &lt;code&gt;kubectl logs &amp;lt;pod&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you run the command against a multi-container Pod, you automatically get the logs from the first container in the Pod. However, you can override this by using the &lt;code&gt;--container&lt;/code&gt; flag and specifying the name of the container you want the logs from. If you're unsure of the names of containers or the order they appear in a multi-container Pod, just run a &lt;code&gt;kubectl describe pod &amp;lt;pod&amp;gt;&lt;/code&gt; command. You can get the same info from the Pod's YAML file.&lt;/p&gt;

&lt;p&gt;The following YAML shows a multi-container Pod with two containers. The first container is called &lt;code&gt;app&lt;/code&gt;, and the second is called &lt;code&gt;syncer&lt;/code&gt;. Running &lt;code&gt;kubectl logs&lt;/code&gt; against this Pod without specifying the &lt;code&gt;--container&lt;/code&gt; flag will get you the logs from the &lt;code&gt;app&lt;/code&gt; container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logtest&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;                    &lt;span class="c1"&gt;# First container (default)&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;syncer&lt;/span&gt;                 &lt;span class="c1"&gt;# Second container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-name&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;html&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/git&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;html&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'd run the following command if you wanted the logs from the &lt;code&gt;syncer&lt;/code&gt; container. Don't run this command, as you haven't deployed this Pod yet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl logs logtest &lt;span class="nt"&gt;--container&lt;/span&gt; syncer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  kubectl exec
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;kubectl exec&lt;/code&gt; command is a great way to execute commands inside running containers.&lt;/p&gt;

&lt;p&gt;You can use &lt;code&gt;kubectl exec&lt;/code&gt; in two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Remote command execution&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Exec session&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remote command execution lets you send commands to a container from your local shell. The container executes the command and returns the output to your shell.&lt;/p&gt;

&lt;p&gt;An exec session connects your local shell to the container's shell and is the same as being logged on to the container.&lt;/p&gt;

&lt;p&gt;Let's look at both, starting with remote command execution.&lt;/p&gt;

&lt;p&gt;Run the following command from your local shell. It's asking the first container in the &lt;code&gt;hello-pod&lt;/code&gt; Pod to run a &lt;code&gt;ps&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;hello-pod &lt;span class="nt"&gt;--&lt;/span&gt; ps
PID   USER     TIME  COMMAND
  1   root     0:00  node ./app.js
 17   root     0:00  ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The container executed the &lt;code&gt;ps&lt;/code&gt; command and displayed the result in your local terminal.&lt;/p&gt;

&lt;p&gt;The format of the command is &lt;code&gt;kubectl exec &amp;lt;pod&amp;gt; -- &amp;lt;command&amp;gt;&lt;/code&gt;, and you can execute any command installed in the container. By default, commands execute in the first container in a Pod, but you can override this with the &lt;code&gt;--container&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;Try running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;hello-pod &lt;span class="nt"&gt;--&lt;/span&gt; curl localhost:8080
OCI runtime &lt;span class="nb"&gt;exec &lt;/span&gt;failed: &lt;span class="nb"&gt;exec &lt;/span&gt;failed: unable to start container process:
&lt;span class="nb"&gt;exec&lt;/span&gt;: &lt;span class="s2"&gt;"curl"&lt;/span&gt;: executable file not found &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$PATH&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one failed because the &lt;code&gt;curl&lt;/code&gt; command isn't installed in the container.&lt;/p&gt;

&lt;p&gt;Let's use &lt;code&gt;kubectl exec&lt;/code&gt; to get an interactive exec session to the same container. This works by connecting your terminal to the container's terminal, and it feels like you're logged on to the container.&lt;/p&gt;

&lt;p&gt;Run the following command to create an exec session to the first container in the &lt;code&gt;hello-pod&lt;/code&gt; Pod. Your shell prompt will change to indicate you're connected to the container's shell.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; hello-pod &lt;span class="nt"&gt;--&lt;/span&gt; sh
/#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-it&lt;/code&gt; flag tells &lt;code&gt;kubectl exec&lt;/code&gt; to make the session interactive by connecting your shell's STDIN and STDOUT streams to the STDIN and STDOUT of the first container in the Pod. The &lt;code&gt;sh&lt;/code&gt; command starts a new shell process in the session, and your prompt will change to indicate you're now inside the container.&lt;/p&gt;

&lt;p&gt;Run the following commands from within the exec session to install the &lt;code&gt;curl&lt;/code&gt; binary and then execute a &lt;code&gt;curl&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/# apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/community/x86_64/APKINDEX.tar.gz
&lt;span class="o"&gt;(&lt;/span&gt;1/5&lt;span class="o"&gt;)&lt;/span&gt; Installing ca-certificates &lt;span class="o"&gt;(&lt;/span&gt;20230506-r0&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;2/5&lt;span class="o"&gt;)&lt;/span&gt; Installing brotli-libs &lt;span class="o"&gt;(&lt;/span&gt;1.0.9-r14&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;3/5&lt;span class="o"&gt;)&lt;/span&gt; Installing libunistring &lt;span class="o"&gt;(&lt;/span&gt;1.1-r1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;4/5&lt;span class="o"&gt;)&lt;/span&gt; Installing libidn2 &lt;span class="o"&gt;(&lt;/span&gt;2.3.4-r1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;(&lt;/span&gt;5/5&lt;span class="o"&gt;)&lt;/span&gt; Installing curl &lt;span class="o"&gt;(&lt;/span&gt;8.1.2-r0&lt;span class="o"&gt;)&lt;/span&gt;
OK: 12 MiB &lt;span class="k"&gt;in &lt;/span&gt;20 packages

/# curl localhost:8080
&amp;lt;html&amp;gt;
  &amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &amp;lt;title&amp;gt;Hello from Kubernetes!&amp;lt;/title&amp;gt;
  &amp;lt;/head&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;Hello from Kubernetes Storage!&amp;lt;/h1&amp;gt;
  &amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Making changes like this to live Pods is an anti-pattern, as Pods are designed as immutable objects. However, it's OK for demonstration purposes like this.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pod Hostnames
&lt;/h2&gt;

&lt;p&gt;Pods get their names from their YAML file's &lt;code&gt;metadata.name&lt;/code&gt; field, and Kubernetes uses this as the hostname for every container in the Pod.&lt;/p&gt;

&lt;p&gt;If you're following along, you'll have a single Pod deployed called &lt;code&gt;hello-pod&lt;/code&gt;. You deployed it from the following YAML file that sets the Pod name as &lt;code&gt;hello-pod&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello-pod&lt;/span&gt;    &lt;span class="c1"&gt;# Pod hostname - inherited by all containers&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;zone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;prod&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command from inside your existing exec session to check the container's hostname. The command is case-sensitive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/# &lt;span class="nb"&gt;env&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;HOSTNAME
&lt;span class="nv"&gt;HOSTNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hello-pod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the container's hostname matches the name of the Pod. If it was a multi-container Pod, all containers would have the same hostname.&lt;/p&gt;

&lt;p&gt;Because of this, you should ensure Pod names are valid DNS subdomain names (lowercase a-z, 0-9, hyphens, and dots).&lt;/p&gt;

&lt;p&gt;Type &lt;code&gt;exit&lt;/code&gt; to quit your exec session and return to your local terminal.&lt;/p&gt;




&lt;h2&gt;
  
  
  Check Pod Immutability
&lt;/h2&gt;

&lt;p&gt;Pods are designed as immutable objects, meaning you shouldn't change them after deployment.&lt;/p&gt;

&lt;p&gt;Immutability applies at two levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Object immutability&lt;/strong&gt; (the Pod)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App immutability&lt;/strong&gt; (containers)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kubernetes handles object immutability by preventing changes to a running Pod's configuration. However, Kubernetes can't always prevent you from changing the app and filesystem in containers. You're responsible for ensuring containers and their apps are stateless and immutable.&lt;/p&gt;

&lt;p&gt;You can test this with &lt;code&gt;kubectl edit pod hello-pod&lt;/code&gt;, which opens the live Pod object in your default editor. Try changing any of these attributes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pod name&lt;/li&gt;
&lt;li&gt;Container name&lt;/li&gt;
&lt;li&gt;Container port&lt;/li&gt;
&lt;li&gt;Resource requests and limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll find that Kubernetes prevents most changes to running Pods, enforcing immutability at the object level.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resource Requests and Resource Limits
&lt;/h2&gt;

&lt;p&gt;Kubernetes lets you specify resource requests and resource limits for each container in a Pod.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Requests&lt;/strong&gt; are minimum values&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limits&lt;/strong&gt; are maximum values&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consider the following snippet from a Pod YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.5&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;256Mi&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1.0&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;512Mi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This container needs a minimum of 256Mi of memory and half a CPU. The scheduler reads this and assigns the Pod to a node with enough resources. If it can't find a suitable node, it marks the Pod as pending and, if the cluster has autoscaling configured, the cluster autoscaler will attempt to provision a new node.&lt;/p&gt;

&lt;p&gt;Assuming the scheduler finds a suitable node, it assigns the Pod to the node, and the kubelet downloads the Pod spec and asks the local runtime to start it. As part of the process, the kubelet reserves the requested CPU and memory, guaranteeing the resources will be there when needed. It also sets a cap on resource usage based on each container's resource limits. In this example, it sets a cap of one CPU and 512Mi of memory. Most runtimes will also enforce resource limits, but how each runtime implements this can vary.&lt;/p&gt;

&lt;p&gt;While a container executes, it is guaranteed its minimum requirements (requests). However, it's allowed to use more if the node has additional available resources, but it's never allowed to use more than what you specify in its limits.&lt;/p&gt;

&lt;p&gt;For multi-container Pods, the scheduler combines the requests for all containers and finds a node with enough resources to satisfy the full Pod.&lt;/p&gt;
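For instance, the scheduler would treat the following sketch (container names and images are illustrative, not from the original post) as a single request for 0.75 CPU and 384Mi of memory:

```yaml
# Hypothetical two-container Pod. The scheduler sums the requests
# (0.5 + 0.25 CPU, 256Mi + 128Mi memory) and looks for a node that
# can satisfy the combined 0.75 CPU / 384Mi.
apiVersion: v1
kind: Pod
metadata:
  name: two-ctr-pod
spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        cpu: 0.5
        memory: 256Mi
  - name: helper
    image: busybox:1.28.4
    command: ['sh', '-c', 'sleep 3600']
    resources:
      requests:
        cpu: 0.25
        memory: 128Mi
```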

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you've been following the examples closely, you'll have noticed that the &lt;code&gt;pod.yml&lt;/code&gt; you used to deploy the &lt;code&gt;hello-pod&lt;/code&gt; only specified resource limits — it didn't specify resource requests. However, some command outputs have shown both limits and requests. This is because Kubernetes automatically sets requests to match limits if you only specify limits.&lt;/p&gt;
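To illustrate that defaulting behaviour, here is a sketch of a limits-only resources block (not the book's actual pod.yml):

```yaml
# Sketch: a limits-only resources block. If requests are omitted,
# Kubernetes copies the limits into requests at admission time, so
# this container ends up requesting a full CPU and 512Mi as well.
resources:
  limits:
    cpu: 1.0
    memory: 512Mi
```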




&lt;h2&gt;
  
  
  Multi-container Pod Example – Init Container
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;initpod&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;initializer&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;init-ctr&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:1.28.4&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;until&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;nslookup&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;k8sbook;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;do&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;waiting&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;for&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;k8sbook&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;service;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;done;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Service&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;found!'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;web-ctr&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nigelpoulton/k8sbook:1.0&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Defining a container under the &lt;code&gt;spec.initContainers&lt;/code&gt; block makes it an init container that Kubernetes guarantees will run and complete before regular containers.&lt;/p&gt;

&lt;p&gt;Regular app containers are defined under the &lt;code&gt;spec.containers&lt;/code&gt; block and will not start until all init containers successfully complete.&lt;/p&gt;

&lt;p&gt;This example has a single init container called &lt;code&gt;init-ctr&lt;/code&gt; and a single app container called &lt;code&gt;web-ctr&lt;/code&gt;. The init container runs a loop looking for a Kubernetes Service called &lt;code&gt;k8sbook&lt;/code&gt;. As soon as you create the Service, the init container will get a response and exit. This allows the main container to start. You'll learn about Services in a future chapter.&lt;/p&gt;

&lt;p&gt;Deploy the multi-container Pod with the following command and then run a &lt;code&gt;kubectl get pods&lt;/code&gt; with the &lt;code&gt;--watch&lt;/code&gt; flag to see if it comes up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; initpod.yml
pod/initpod created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAME      READY   STATUS     RESTARTS   AGE
initpod   0/1     Init:0/1   0          6s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Init:0/1&lt;/code&gt; status tells you that the init container is still running, meaning the main container hasn't started yet. If you run a &lt;code&gt;kubectl describe&lt;/code&gt; command, you'll see the overall Pod status is &lt;code&gt;Pending&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl describe pod initpod
Name:         initpod
Namespace:    default
Status:       Pending
Init Containers:
  init-ctr:
    State:          Running
      Started:      Thu, 04 Jan 2024 10:15:32 +0000
    Ready:          False
Containers:
  web-ctr:
    State:          Waiting
      Reason:       PodInitializing
Events:
  Type    Reason     Age   From               Message
  &lt;span class="nt"&gt;----&lt;/span&gt;    &lt;span class="nt"&gt;------&lt;/span&gt;     &lt;span class="nt"&gt;----&lt;/span&gt;  &lt;span class="nt"&gt;----&lt;/span&gt;               &lt;span class="nt"&gt;-------&lt;/span&gt;
  Normal  Scheduled  45s   default-scheduler  Successfully assigned default/initpod to node-1
  Normal  Pulling    44s   kubelet            Pulling image &lt;span class="s2"&gt;"busybox:1.28.4"&lt;/span&gt;
  Normal  Pulled     42s   kubelet            Successfully pulled image &lt;span class="s2"&gt;"busybox:1.28.4"&lt;/span&gt;
  Normal  Created    42s   kubelet            Created container init-ctr
  Normal  Started    42s   kubelet            Started container init-ctr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Pod will remain in this phase until you create a Service called &lt;code&gt;k8sbook&lt;/code&gt;. Run the following commands to create the Service and re-check the Pod status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; initsvc.yml
service/k8sbook created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--watch&lt;/span&gt;
NAME      READY   STATUS              RESTARTS   AGE
initpod   0/1     Init:0/1            0          15s
initpod   0/1     PodInitializing     0          3m39s
initpod   1/1     Running             0          3m57s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The init container completes as soon as the Service appears, and the main application container starts. Give it a few seconds to fully start.&lt;/p&gt;

&lt;p&gt;If you run another &lt;code&gt;kubectl describe&lt;/code&gt; against the &lt;code&gt;initpod&lt;/code&gt; Pod, you'll see the init container is in the terminated state because it completed successfully (exit code 0).&lt;/p&gt;
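The contents of initsvc.yml aren't shown in this post; a minimal Service that would satisfy the init container's nslookup loop might look like this sketch (the selector and ports are assumptions — only the Service's DNS name matters to the init container):

```yaml
# Hypothetical initsvc.yml. The init container only waits for the
# DNS name "k8sbook" to resolve, so any Service with that name
# unblocks it; the selector matches the Pod's app=initializer label.
apiVersion: v1
kind: Service
metadata:
  name: k8sbook
spec:
  selector:
    app: initializer
  ports:
  - port: 80
    targetPort: 8080
```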




&lt;h2&gt;
  
  
  Multi-container Pod Example – Sidecar Container
&lt;/h2&gt;

&lt;p&gt;Sidecar containers run alongside the main application container for the entire lifecycle of the Pod. We currently define them as regular containers under the &lt;code&gt;spec.containers&lt;/code&gt; section of the Pod YAML, and their job is to augment the main application container or provide a secondary support service.&lt;/p&gt;

&lt;p&gt;The following YAML file defines a multi-container Pod with both containers mounting the same shared volume. It's conventional to list the main app container as the first container and sidecars after it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sidecar-pod&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;webserver&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ctr-web&lt;/span&gt;                              &lt;span class="c1"&gt;# Main application container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx:1.21&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;html&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/share/nginx/html&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ctr-sync&lt;/span&gt;                             &lt;span class="c1"&gt;# Sidecar container&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-name&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;html&lt;/span&gt;
      &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/git&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GIT_SYNC_REPO&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...."&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GIT_SYNC_BRANCH&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GIT_SYNC_DEST&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;html"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GIT_SYNC_WAIT&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;60"&lt;/span&gt;
  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;html&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main app container is called &lt;code&gt;ctr-web&lt;/code&gt;. It's based on an NGINX image and serves a static web page loaded from the shared &lt;code&gt;html&lt;/code&gt; volume.&lt;/p&gt;

&lt;p&gt;The second container is called &lt;code&gt;ctr-sync&lt;/code&gt; and is the sidecar. It watches a GitHub repo and syncs changes into the same shared &lt;code&gt;html&lt;/code&gt; volume.&lt;/p&gt;

&lt;p&gt;When the contents of the GitHub repo change, the sidecar copies the updates to the shared volume, where the app container notices and serves an updated version of the web page.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Mastering Kubernetes Step by Step Part 1: Introduction to Kubernetes Architecture</title>
      <dc:creator>B.R.O.L.Y</dc:creator>
      <pubDate>Tue, 14 Oct 2025 23:52:19 +0000</pubDate>
      <link>https://dev.to/ridwaneelfilali/mastering-kubernetes-step-by-step-part-1-introduction-to-kubernetes-architecture-40l9</link>
      <guid>https://dev.to/ridwaneelfilali/mastering-kubernetes-step-by-step-part-1-introduction-to-kubernetes-architecture-40l9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kubernetes is an orchestrator for containerized, cloud-native, microservices applications. But what does that mean exactly? A bunch of jumbled words, right? Let's break it down step by step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Orchestration
&lt;/h3&gt;

&lt;p&gt;An orchestrator is like an operating system: it has to dynamically respond to changes. In our case, Kubernetes reacts to deployed applications, scales them up or down as needed, self-heals when things break, and performs rolling updates and rollbacks with zero downtime — meaning the app should never go offline and users should never be frustrated. All of this happens automatically, without human intervention (except for the initial setup, of course).&lt;/p&gt;

&lt;h3&gt;
  
  
  Containerization
&lt;/h3&gt;

&lt;p&gt;Containerization is packaging your application into an image along with its dependencies so it can run anywhere. If you are familiar with Docker, this will come naturally. I recommend taking a moment to read a couple of blogs on Docker and Docker Compose, including some lower-level concepts, because these fundamentals are crucial for this course.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://medium.com/@ridwaneelfilali/a-journey-into-process-isolation-kernel-namespaces-control-groups-b262a94d7ec5" rel="noopener noreferrer"&gt;A Journey into Process Isolation: kernel namespaces, control groups&lt;/a&gt;*&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@ridwaneelfilali/docker-explained-86987249ad25" rel="noopener noreferrer"&gt;UNDERSTANDING DOCKER&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cloud Native
&lt;/h2&gt;

&lt;p&gt;We call an application cloud-native if it leverages cloud features like auto-scaling, self-healing, automated updates, and rollbacks — basically, the things Kubernetes is capable of managing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microservices
&lt;/h3&gt;

&lt;p&gt;Microservices is an architectural approach that composes your application from independent services. This means if one service is down, the others continue working. This approach is excellent because it helps developers manage projects better — teams can work on individual services independently and track them properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Bit of History
&lt;/h3&gt;

&lt;p&gt;Kubernetes was developed by a group of engineers at Google in response to AWS's growing popularity. Google had its own internal tools for orchestrating containers due to the massive scale of the applications they managed. They released Kubernetes in 2014 and donated it to the &lt;strong&gt;Cloud Native Computing Foundation (CNCF)&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and Docker
&lt;/h2&gt;

&lt;p&gt;Early versions of Kubernetes shipped with Docker as the default runtime, handling tasks like creating, starting, and stopping containers. Over time, Docker became bloated, and many alternatives emerged. To address this, Kubernetes introduced the Container Runtime Interface (CRI), allowing users to choose the runtime that best fits their needs. As of Kubernetes 1.24, Docker is no longer supported as a runtime (the dockershim was removed from the kubelet), and most clusters now use &lt;strong&gt;containerd&lt;/strong&gt;, a lightweight runtime optimized for Kubernetes that fully supports Docker container images. Multiple runtimes can run on the same cluster, offering flexibility for performance and isolation requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes: the operating system of the cloud
&lt;/h2&gt;

&lt;p&gt;Kubernetes is often called the operating system of the cloud because it transforms the chaos of distributed infrastructure into a seamless platform for developers. Just as Linux or Windows hides the complexity of CPUs, memory, and storage from application processes, Kubernetes abstracts the sprawling resources of clouds and datacenters, letting you deploy microservices without worrying about which node, storage volume, or failure zone they run on. Whether your cluster lives on AWS, Azure, GCP, or a mix of clouds, Kubernetes schedules, scales, and heals your applications automatically, making hybrid deployments, multi-cloud setups, and cloud migrations effortless. From a developer’s perspective, you simply declare what your application needs — replicas, resources, dependencies — and Kubernetes handles the rest, turning the cloud into a reliable, self-managing operating environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes: cluster
&lt;/h2&gt;

&lt;p&gt;A cluster is exactly what the word suggests: a grouping of one or more nodes that provide CPU, memory, and other resources for an app.&lt;br&gt;
Kubernetes clusters have two types of nodes: &lt;strong&gt;control plane&lt;/strong&gt; nodes and &lt;strong&gt;worker nodes&lt;/strong&gt;. One caveat: a control plane node has to be Linux, unlike worker nodes, which can be Windows or Linux. In a good setup you'll run multiple control planes for HA (High Availability). The control planes mainly manage the worker nodes, which in turn run the applications. People sometimes run applications on control planes for testing, but this is prohibited in production so that the control planes can focus on managing the worker nodes properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Control Plane
&lt;/h3&gt;

&lt;p&gt;As we mentioned, a Kubernetes cluster is a combination of a &lt;strong&gt;control plane&lt;/strong&gt; and &lt;strong&gt;worker nodes&lt;/strong&gt;. The control plane is the brain of the cluster — a collection of services that manage the cluster’s state, schedule tasks, handle auto-scaling, and orchestrate updates. It also exposes the &lt;strong&gt;API server&lt;/strong&gt;, which is the entry point for all interactions with the cluster. A simple setup might involve one control plane managing multiple worker nodes, but what really makes the control plane tick? Let’s take a closer look.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2q8xu4gengq4elygjqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2q8xu4gengq4elygjqh.png" alt="Control Plane" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  The API Server
&lt;/h4&gt;

&lt;p&gt;The API server acts as the front door for Kubernetes, handling all requests and communication within the cluster. Every action — from deploying an application to communicating between controllers and the cluster store — goes through the API server. It exposes a &lt;strong&gt;RESTful API over HTTPS&lt;/strong&gt;, and every request must be authenticated and authorized.&lt;/p&gt;

&lt;p&gt;For example, deploying an application involves these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define the desired state of the application in a YAML configuration file.&lt;/li&gt;
&lt;li&gt;Submit the file to the API server.&lt;/li&gt;
&lt;li&gt;The API server authenticates and authorizes the request.&lt;/li&gt;
&lt;li&gt;The desired state is stored in the cluster database.&lt;/li&gt;
&lt;li&gt;The control plane schedules and executes the necessary changes on the worker nodes.&lt;/li&gt;
&lt;/ol&gt;
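As a concrete example of step 1, a desired state can be as small as this sketch (the names and image are hypothetical, not from the post):

```yaml
# Minimal record of intent (hypothetical names and image): "run three
# replicas of this image". Submitting it with kubectl apply sends it to
# the API server, which authenticates, authorizes, and stores it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```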

&lt;h4&gt;
  
  
  The Cluster Store
&lt;/h4&gt;

&lt;p&gt;The cluster store is essentially a database that records the &lt;strong&gt;desired state&lt;/strong&gt; of the cluster and all object definitions. Kubernetes uses &lt;strong&gt;etcd&lt;/strong&gt;, a distributed key-value store, as its backend. Each control plane node contains an etcd replica to ensure high availability (HA). In larger setups, architects may run a separate etcd cluster connected to all control planes to handle high-demand workloads.&lt;/p&gt;

&lt;p&gt;One challenge in distributed databases like etcd is the &lt;strong&gt;split-brain scenario&lt;/strong&gt;, which occurs when network partitions isolate some nodes from the rest, potentially causing multiple nodes to believe they are the leader and accept conflicting writes. For example, imagine a three-node etcd cluster: if one node gets disconnected from the other two, it might try to process updates independently. Etcd prevents this by requiring a &lt;strong&gt;majority quorum&lt;/strong&gt; for any write operation — only the majority of nodes can commit a change, ensuring consistency.&lt;/p&gt;

&lt;p&gt;Etcd also handles &lt;strong&gt;concurrent writes to the same key&lt;/strong&gt; using version numbers. If two clients try to update the same value at the same time, etcd will accept the first write and reject the second with a conflict error. The client must then retry using the latest version of the key, ensuring that no updates are accidentally overwritten.&lt;/p&gt;

&lt;h4&gt;
  
  
  Controllers and the controller manager
&lt;/h4&gt;

&lt;p&gt;Kubernetes relies on &lt;strong&gt;controllers&lt;/strong&gt; to handle much of the cluster’s “intelligence” and automation. These controllers run on the control plane and continuously monitor the cluster, comparing the &lt;strong&gt;observed state&lt;/strong&gt; with the &lt;strong&gt;desired state&lt;/strong&gt; you define. Common examples include the Deployment controller, StatefulSet controller, and ReplicaSet controller, each responsible for different types of workloads. Essentially, controllers act like caretakers: if you ask for three replicas of an application, the controller ensures that exactly three healthy instances are running and will create, delete, or restart pods as needed to maintain that state. To keep everything organized, Kubernetes runs a &lt;strong&gt;controller manager&lt;/strong&gt;, which spawns and supervises these individual controllers, ensuring they operate reliably and maintain the overall health and consistency of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmjfgdz80bhz4xf8as5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmjfgdz80bhz4xf8as5y.png" alt="Controller manager" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  The scheduler
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;Kubernetes scheduler&lt;/strong&gt; is responsible for assigning new workloads to healthy worker nodes. It continuously watches the API server for new tasks and evaluates which nodes are capable of running them. This involves filtering nodes based on factors like taints, affinity and anti-affinity rules, network port availability, and available CPU and memory. The scheduler then ranks the suitable nodes using a scoring system, considering criteria such as whether the required container image is already present, how much CPU and memory are free, and how many tasks the node is currently running. The nodes with the highest scores are chosen to execute the tasks. If no suitable node is available, the task is marked as pending. In clusters configured with autoscaling, a pending task can trigger a node autoscaling event, adding a new node to the cluster so the task can be scheduled and run.&lt;/p&gt;

&lt;h4&gt;
  
  
  The cloud controller manager
&lt;/h4&gt;

&lt;p&gt;If your Kubernetes cluster runs on a public cloud like AWS, Azure, GCP, or Civo Cloud, it uses a &lt;strong&gt;cloud controller manager&lt;/strong&gt; to integrate with cloud services. This component handles tasks such as provisioning instances, storage, and load balancers. For example, if an application requests a load balancer, the cloud controller manager automatically creates one in the cloud and connects it to your app, making cloud resources seamless to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Worker nodes
&lt;/h3&gt;

&lt;p&gt;The architecture of a worker node looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5y6nwdns5jmuyhi08cxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5y6nwdns5jmuyhi08cxm.png" alt="Worker nodes" width="800" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  kubelet
&lt;/h4&gt;

&lt;p&gt;The &lt;strong&gt;kubelet&lt;/strong&gt; is the main Kubernetes agent running on every worker node, acting as the bridge between the node and the control plane. It watches the API server for new tasks, instructs the appropriate container runtime to execute them, and continuously reports the status of these tasks back to the API server. If a task fails to run, the kubelet reports the problem so the control plane can take the necessary actions to maintain the desired state of the cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Runtime
&lt;/h4&gt;

&lt;p&gt;Each worker node also has one or more &lt;strong&gt;container runtimes&lt;/strong&gt; responsible for executing tasks. Most modern Kubernetes clusters use &lt;strong&gt;containerd&lt;/strong&gt;, which handles pulling container images and managing container lifecycle operations like starting and stopping containers. Older clusters shipped with Docker, which is now deprecated as a runtime, while platforms like Red Hat OpenShift often use &lt;strong&gt;CRI-O&lt;/strong&gt;. Each runtime has its strengths and trade-offs, and Kubernetes can work with any runtime that implements the Container Runtime Interface (CRI).&lt;/p&gt;

&lt;h4&gt;
  
  
  Kube-proxy
&lt;/h4&gt;

&lt;p&gt;Finally, every worker node runs a &lt;strong&gt;kube-proxy&lt;/strong&gt; service that manages cluster networking. Kube-proxy ensures that network traffic is correctly routed to the tasks running on the node and handles load balancing, making communication between services and pods seamless. With the kubelet, container runtime, and kube-proxy in place, each worker node becomes a self-managing unit capable of running applications reliably within the Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Packaging apps for kubernetes
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, all workloads — whether containers, VMs, or Wasm apps — must be wrapped in &lt;strong&gt;Pods&lt;/strong&gt; to run on the cluster. Think of Pods like standardized packages for a courier service: just as couriers can ship books, clothes, or electronics only if they’re properly packaged and labeled, Kubernetes can run any workload only when it’s packaged in a Pod. Once wrapped, Kubernetes handles the logistics of running the app — choosing the right nodes, connecting networks, attaching storage volumes, and monitoring its health. Typically, Pods are managed by higher-level &lt;strong&gt;controllers&lt;/strong&gt; like Deployments, which add extra value, similar to courier services offering insurance, express delivery, or tracking. Controllers ensure your applications stay healthy, scale automatically when needed, and maintain the desired state, letting you focus on building apps rather than managing the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6sz6pbkhzy3cffstw2i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6sz6pbkhzy3cffstw2i.png" alt="Breakdown of wraping in kubernetes" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The important thing to understand is that each layer of wrapping adds something:&lt;br&gt;
• The container wraps the app and provides dependencies&lt;br&gt;
• The Pod wraps the container so it can run on Kubernetes&lt;br&gt;
• The Deployment wraps the Pod and adds self-healing, scaling, and more&lt;/p&gt;

&lt;h2&gt;
  
  
  The declarative model and desired state
&lt;/h2&gt;

&lt;p&gt;At the heart of Kubernetes lies the &lt;strong&gt;declarative model&lt;/strong&gt;, a powerful system that revolves around three key concepts: &lt;strong&gt;observed state, desired state, and reconciliation&lt;/strong&gt;. The observed state represents the current reality of your cluster, while the desired state defines how you want it to look or behave. The reconciliation process acts as the bridge between the two, continuously ensuring that what exists matches what you’ve declared.&lt;/p&gt;

&lt;p&gt;In practice, this model starts when you define your application’s configuration in a YAML manifest (specifying details like container images, replicas, and ports) and submit it to the Kubernetes API server. Once authenticated and stored in the cluster’s key-value store, this configuration becomes a “record of intent.” From there, controllers constantly monitor the system, detecting any drift between the observed and desired states. If a discrepancy appears, say a pod crashes or a node fails, the controller automatically takes corrective action, such as rescheduling pods or pulling new images, to restore the desired state. This self-correcting loop keeps your system stable and resilient.&lt;/p&gt;

&lt;p&gt;Unlike the &lt;strong&gt;imperative model&lt;/strong&gt;, which relies on manually executed, platform-specific commands and scripts, the declarative model is clean, consistent, and version-controllable, making it easier to roll out changes, recover from failures, and scale seamlessly. For example, if you declare ten replicas of an app and two nodes fail, Kubernetes automatically creates two new replicas to maintain the declared count. Similarly, updating a deployment is as simple as changing one line in your YAML file; Kubernetes handles the rollout, monitoring, and recovery automatically. This model’s elegance lies in its simplicity and automation: you tell Kubernetes what you want, and it continuously works to make that true.&lt;/p&gt;
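&lt;p&gt;The reconciliation loop described above can be sketched in a few lines of code. The following is an illustrative simulation of the idea, not Kubernetes source code: the integer replica counts stand in for the states a real controller reads from the cluster store.&lt;/p&gt;

```cpp
#include <iostream>

// Illustrative simulation of one reconciliation pass: compare the desired
// state (declared in a manifest) with the observed state (cluster reality)
// and take corrective action to close any gap. A sketch of the concept only.
int reconcile(int desired, int observed) {
    if (observed < desired)
        std::cout << "creating " << (desired - observed) << " replica(s)" << std::endl;
    else if (observed > desired)
        std::cout << "deleting " << (observed - desired) << " replica(s)" << std::endl;
    else
        std::cout << "in sync" << std::endl;
    return desired; // the observed state converges to the desired state
}
```

&lt;p&gt;With ten replicas declared and two lost to node failures, &lt;code&gt;reconcile(10, 8)&lt;/code&gt; reports two replicas to create; a real controller runs this comparison continuously, so any later drift is corrected the same way.&lt;/p&gt;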

</description>
      <category>kubernetes</category>
      <category>webdev</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>Internet Relay Chat</title>
      <dc:creator>B.R.O.L.Y</dc:creator>
      <pubDate>Wed, 21 Aug 2024 08:40:00 +0000</pubDate>
      <link>https://dev.to/ridwaneelfilali/internet-relay-chat-37ll</link>
      <guid>https://dev.to/ridwaneelfilali/internet-relay-chat-37ll</guid>
      <description>&lt;h2&gt;
  
  
  1 — Introduction to IRC Servers:
&lt;/h2&gt;

&lt;p&gt;An Internet Relay Chat (IRC) server is a crucial component in the infrastructure of IRC networks, facilitating real-time communication among users globally. IRC servers act as hubs where users connect to exchange messages in chat rooms (channels) or directly with each other. This decentralized model allows for a robust and flexible communication platform that has persisted since its inception in the late 1980s.&lt;/p&gt;

&lt;h3&gt;
  
  
  How IRC Servers Work
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Connection Establishment: Users connect to an IRC server using client software that implements the IRC protocol. These clients initiate a connection to the server on a specific port (typically 6667, or 6697 for SSL/TLS).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Authentication and Identification: Upon connecting, users may need to authenticate themselves using a nickname (nick) and optional credentials (like passwords). This process ensures that users can be identified within the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Channel and Private Messaging: Users can join various channels based on topics or interests. Channels are virtual spaces where multiple users can exchange messages simultaneously. Additionally, users can communicate privately with each other via direct messaging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server Communication: IRC servers communicate among themselves to synchronize channels and user information across the network. This synchronization ensures that users can see the same list of channels and users, regardless of which server they are connected to within the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commands and Services: IRC networks often provide additional services through bots and automated systems. These services can include channel management (like creating or maintaining channels), user authentication, and network-wide messaging.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56loh8wjt7i7kt9e5slk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56loh8wjt7i7kt9e5slk.png" alt="server-to-server-image" width="800" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples of IRC Servers
&lt;/h3&gt;

&lt;p&gt;Several software implementations serve as IRC servers, each with its own features and configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ircd-hybrid: A popular open-source IRC server known for its stability and scalability.&lt;/li&gt;
&lt;li&gt;InspIRCd: Another widely used open-source IRC server with extensive customization options.&lt;/li&gt;
&lt;li&gt;ircd-seven: freenode's IRC server, developed as a fork of charybdis with improvements and added features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These servers, along with others, form the backbone of various IRC networks, catering to different communities and needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  2 — Process of creating an IRC server:
&lt;/h2&gt;

&lt;p&gt;Creating an IRC server involves setting up a platform that facilitates real-time communication between clients in a chat-based environment. Unlike traditional IRC setups that include server-to-server connections for network federation, my IRC project focuses solely on client-server interactions. This chapter will guide you through the process of setting up and configuring your IRC server from scratch.&lt;/p&gt;

&lt;h3&gt;
  
  
  1-Planning and Requirements
&lt;/h3&gt;

&lt;p&gt;Before diving into implementation, it’s crucial to define your server’s requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Functionality: Determine the core features your IRC server will support, such as user authentication, channel management, and message broadcasting.&lt;/li&gt;
&lt;li&gt;Scalability: Consider how your server will handle multiple concurrent connections and optimize for performance.&lt;/li&gt;
&lt;li&gt;Security: Plan for user authentication mechanisms and ensure data transmission is secure, especially if handling sensitive information.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2-Setting Up the Server Infrastructure
&lt;/h3&gt;

&lt;p&gt;Begin by setting up the foundational infrastructure for your IRC server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network Socket: Implement socket programming to handle incoming client connections.&lt;/li&gt;
&lt;li&gt;Protocol Implementation: Develop support for IRC protocol commands such as PASS, NICK, USER, JOIN, PRIVMSG, etc.&lt;/li&gt;
&lt;li&gt;Data Persistence: Consider how user data (nicknames, channels) will be stored and accessed, either through in-memory data structures or a database.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3-Implementing IRC Server Features
&lt;/h3&gt;

&lt;p&gt;Focus on implementing essential IRC server features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User Authentication: Implement mechanisms for users to register nicknames (NICK) and authenticate (PASS) to the server.&lt;/li&gt;
&lt;li&gt;Channel Management: Allow users to create (JOIN) and manage channels (PART, MODE, TOPIC).&lt;/li&gt;
&lt;li&gt;Message Handling: Support for broadcasting messages (PRIVMSG), including private messages and channel communication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoebc63hmu8qzesulri7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuoebc63hmu8qzesulri7.png" alt="project requests" width="800" height="734"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3 — Starting the server socket:
&lt;/h2&gt;

&lt;p&gt;To begin, the program expects user input specifying the socket port and password in the following format:&lt;br&gt;
&lt;code&gt;./ircserv &amp;lt;port&amp;gt; &amp;lt;password&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;port:&lt;/strong&gt; The port number on which your IRC server will listen for incoming IRC connections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;password:&lt;/strong&gt; The connection password. It will be needed by any IRC client that tries to connect to your server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For proper operation, ensure the port number falls within a range that is not reserved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-------------------------+-------------------------------------------------------------+
| Range                   | Use Cases                                                   |
+-------------------------+-------------------------------------------------------------+
| Well-Known Ports        | Reserved for standard services (HTTP, FTP, SSH)             |
|                         | Range: 0-1023                                               |
+-------------------------+-------------------------------------------------------------+
| Registered Ports        | Used by registered applications and services                |
|                         | Range: 1024-49151                                           |
+-------------------------+-------------------------------------------------------------+
| Dynamic or Private Ports| Ephemeral ports for temporary use                           |
|                         | Range: 49152-65535                                          |
+-------------------------+-------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
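&lt;p&gt;Based on the table above, the server can reject reserved or out-of-range ports before creating the socket. The helper below is an illustrative addition, not part of the original project code:&lt;/p&gt;

```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

// Illustrative helper: accept only ports outside the well-known range
// (0-1023), which is reserved for standard services such as HTTP and SSH.
int parsePort(const std::string &arg) {
    char *end = NULL;
    long port = std::strtol(arg.c_str(), &end, 10);
    if (end == arg.c_str() || *end != '\0')
        throw std::invalid_argument("port is not a number");
    if (port < 1024 || port > 65535)
        throw std::out_of_range("port must be in the range 1024-65535");
    return static_cast<int>(port);
}
```

&lt;p&gt;Calling a check like this on &lt;code&gt;argv[1]&lt;/code&gt; turns a bad &lt;code&gt;./ircserv&lt;/code&gt; invocation into a clear error message instead of a failed &lt;code&gt;bind()&lt;/code&gt; later on.&lt;/p&gt;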



&lt;p&gt;Next, we will proceed to create the server class and initialize the server socket. Refer to the project structure outlined in the image below for guidance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovyid700bs1radzhautt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovyid700bs1radzhautt.png" alt="server structures" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1-Creating Server socket:
&lt;/h3&gt;

&lt;p&gt;To establish communication with IRC clients, the server initializes a socket using the following code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void Server::createSocket(std::string port)
{
    // Create a TCP socket
    this-&amp;gt;sock_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (this-&amp;gt;sock_fd == -1)
        throw std::runtime_error("Can't create socket");

    // Define server address structure
    sockaddr_in ServerAddress = {};
    ServerAddress.sin_family = AF_INET;
    ServerAddress.sin_addr.s_addr = INADDR_ANY;
    ServerAddress.sin_port = htons(std::atoi(port.c_str()));

    // Set socket to non-blocking mode
    if (fcntl(sock_fd, F_SETFL, O_NONBLOCK) == -1)
    {
        close(sock_fd);
        throw std::runtime_error("Can't set non-blocking");
    }

    // Bind socket to the specified port
    if (bind(sock_fd, (struct sockaddr*)&amp;amp;ServerAddress, sizeof(ServerAddress)) == -1){
        close(this-&amp;gt;sock_fd);
        throw std::runtime_error("Can't bind socket");
    }

    // Listen for incoming connections with a queue size of 10
    if (listen(sock_fd, 10) == -1)
    {
        close(sock_fd);
        throw std::runtime_error("Error listening");
    }

    // Server socket created successfully
    std::cout &amp;lt;&amp;lt; "Server socket created and listening on port " &amp;lt;&amp;lt; port &amp;lt;&amp;lt; std::endl;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
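&lt;p&gt;One common refinement, not shown in the snippet above, is enabling &lt;code&gt;SO_REUSEADDR&lt;/code&gt; between &lt;code&gt;socket()&lt;/code&gt; and &lt;code&gt;bind()&lt;/code&gt;. Without it, restarting the server quickly after a shutdown can fail with "Can't bind socket" while the old socket lingers in the TIME_WAIT state:&lt;/p&gt;

```cpp
#include <sys/socket.h>
#include <stdexcept>
#include <unistd.h>

// Illustrative fragment: allow the listening port to be rebound
// immediately after a restart. Would be called in Server::createSocket()
// after socket() succeeds and before bind().
void enableAddressReuse(int sock_fd) {
    int opt = 1;
    if (setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) == -1) {
        close(sock_fd);
        throw std::runtime_error("Can't set SO_REUSEADDR");
    }
}
```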



&lt;h3&gt;
  
  
  2-Starting the server:
&lt;/h3&gt;

&lt;p&gt;To initiate the IRC server and handle incoming client connections and messages, the following &lt;code&gt;Server::start()&lt;/code&gt; function is used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void Server::start()
{
    // Add server socket to pollfds array for monitoring
    pollfd serverfd = {sock_fd, POLLIN, 0};
    _pollfds.push_back(serverfd);

    // Inform that the server is running on the specified port
    std::cout &amp;lt;&amp;lt; "Server is running on port " &amp;lt;&amp;lt; this-&amp;gt;port &amp;lt;&amp;lt; std::endl;

    // Main server loop for handling events
    while(true)
    {
        // Wait indefinitely for events on monitored file descriptors
        if (poll(_pollfds.data(), _pollfds.size(), -1) == -1) {
            throw std::runtime_error("Poll error");
        }

        // Iterate through all monitored file descriptors
        for(auto it = _pollfds.begin(); it != _pollfds.end(); ++it)
        {
            // Check if the file descriptor has events to process
            if (it-&amp;gt;revents == 0)
                continue;

            // Handle incoming connection request on the server socket
            if (it-&amp;gt;revents &amp;amp; POLLIN)
            {
                if(it-&amp;gt;fd == sock_fd) {
                    addClient(sock_fd); // Accept new client connection
                    break;
                }
                else {
                    handleMessage(it-&amp;gt;fd); // Handle message from client
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;Adding Server Socket to Pollfds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pollfd serverfd = {sock_fd, POLLIN, 0};&lt;/code&gt; initializes a &lt;code&gt;pollfd&lt;/code&gt; structure for the server socket (&lt;code&gt;sock_fd&lt;/code&gt;) to monitor for input readiness (&lt;code&gt;POLLIN&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_pollfds.push_back(serverfd);&lt;/code&gt; adds the server socket to the &lt;code&gt;_pollfds&lt;/code&gt; vector for polling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Polling for Events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;if (poll(_pollfds.data(), _pollfds.size(), -1) == -1)&lt;/code&gt; polls all file descriptors in &lt;code&gt;_pollfds&lt;/code&gt; indefinitely (&lt;code&gt;-1&lt;/code&gt; timeout) for events.&lt;/li&gt;
&lt;li&gt;Throws an exception if &lt;code&gt;poll&lt;/code&gt; encounters an error.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Processing Events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Iterates through each file descriptor in &lt;code&gt;_pollfds&lt;/code&gt; to check for events (&lt;code&gt;revents&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Handling Server Socket Event:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;if (it-&amp;gt;fd == sock_fd)&lt;/code&gt; checks if the event is on the server socket:&lt;/li&gt;
&lt;li&gt;Calls &lt;code&gt;addClient(sock_fd);&lt;/code&gt; to accept and add a new client connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Handling Client Messages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the &lt;code&gt;POLLIN&lt;/code&gt; event is on any other file descriptor, it belongs to a connected client.&lt;/li&gt;
&lt;li&gt;Calls &lt;code&gt;handleMessage(it-&amp;gt;fd);&lt;/code&gt; to handle the message from the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3-Adding a client:
&lt;/h3&gt;

&lt;p&gt;To integrate a new client into the IRC server, the &lt;code&gt;Server::addClient(int sock_fd)&lt;/code&gt; function is employed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void Server::addClient(int sock_fd)
{
    // Accept incoming client connection
    int client_fd;
    sockaddr_in clientAddress = {};
    socklen_t clientAddressSize = sizeof(clientAddress);
    client_fd = accept(sock_fd, (struct sockaddr*)&amp;amp;clientAddress, &amp;amp;clientAddressSize);
    if (client_fd == -1)
        throw std::runtime_error("Can't accept client");

    // Add client socket to pollfds array for monitoring
    pollfd client_poll = {client_fd, POLLIN, 0};
    _pollfds.push_back(client_poll);

    // Retrieve client hostname
    char hostname[NI_MAXHOST];
    int result = getnameinfo((struct sockaddr*)&amp;amp;clientAddress, sizeof(clientAddress), hostname, NI_MAXHOST, NULL, 0, NI_NUMERICSERV);

    // Handle error if unable to retrieve hostname
    if (result != 0) {
        close(client_fd);
        throw std::runtime_error("Can't get hostname");
    }

    // Create a new Client object and store in clients map
    Client *client = new Client(hostname, ntohs(clientAddress.sin_port), client_fd);
    _clients.insert(std::make_pair(client_fd, client));

    // TODO: Log client connection
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Accepting Client Connection: &lt;code&gt;accept()&lt;/code&gt; is used to accept a new client connection on the server socket (&lt;code&gt;sock_fd&lt;/code&gt;). If unsuccessful (&lt;code&gt;client_fd == -1&lt;/code&gt;), an exception is thrown.&lt;/li&gt;
&lt;li&gt;Adding Client to Pollfds: A &lt;code&gt;pollfd&lt;/code&gt; structure is initialized for the client socket (&lt;code&gt;client_fd&lt;/code&gt;) with &lt;code&gt;POLLIN&lt;/code&gt; flag indicating readiness for reading.&lt;/li&gt;
&lt;li&gt;Retrieving Client Hostname: &lt;code&gt;getnameinfo()&lt;/code&gt; retrieves the hostname of the connecting client from &lt;code&gt;clientAddress&lt;/code&gt;. If unsuccessful (&lt;code&gt;result != 0&lt;/code&gt;), the client socket is closed and an exception is thrown.&lt;/li&gt;
&lt;li&gt;Creating Client Object: A new &lt;code&gt;Client&lt;/code&gt; object is instantiated with the retrieved hostname, client port (converted from network to host byte order), and client socket file descriptor (&lt;code&gt;client_fd&lt;/code&gt;). This client object is stored in &lt;code&gt;_clients&lt;/code&gt; map for future management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4-Handling message and commands:
&lt;/h3&gt;

&lt;p&gt;To manage incoming messages and execute commands from clients within the IRC server, the following functions are employed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void Server::handleMessage(int fd)
{
    try
    {
        // Retrieve client associated with the file descriptor
        Client* client = _clients.at(fd);

        // Read incoming message from client
        std::string message = readMessage(fd);

        // Pass message to command handler for processing
        _commandHandler-&amp;gt;handleCommand(message, client);
    }
    catch(const std::exception&amp;amp; e)
    {
        std::cout &amp;lt;&amp;lt; e.what() &amp;lt;&amp;lt; std::endl;
        throw std::runtime_error("Error handling message");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve Client:&lt;/strong&gt; &lt;code&gt;_clients.at(fd)&lt;/code&gt; retrieves the client object associated with the provided file descriptor (&lt;code&gt;fd&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read Message:&lt;/strong&gt; &lt;code&gt;readMessage(fd)&lt;/code&gt; reads the incoming message from the client socket (&lt;code&gt;fd&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handle Command:&lt;/strong&gt; &lt;code&gt;_commandHandler-&amp;gt;handleCommand(message, client)&lt;/code&gt; delegates message handling and command execution to &lt;code&gt;_commandHandler&lt;/code&gt;, passing the received message and client object.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;std::string Server::readMessage(int fd)
{
    char buffer[1024];
    std::string message;

    // Receive message from client
    int bytes = recv(fd, buffer, 1024, 0);

    // Handle receive errors and client disconnects
    // (a complete server would also remove the client on disconnect)
    if (bytes == -1)
        throw std::runtime_error("Can't read message");
    else if (bytes == 0)
        throw std::runtime_error("Client disconnected");
    else
        message = std::string(buffer, bytes);

    return message;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Receive Message:&lt;/strong&gt; &lt;code&gt;recv(fd, buffer, 1024, 0)&lt;/code&gt; reads up to 1024 bytes from the client socket (&lt;code&gt;fd&lt;/code&gt;) into &lt;code&gt;buffer&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handle Receive Errors:&lt;/strong&gt; Throws an exception if &lt;code&gt;recv&lt;/code&gt; returns &lt;code&gt;-1&lt;/code&gt;, indicating an error occurred during message reception.
&lt;/li&gt;
&lt;/ul&gt;
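&lt;p&gt;One caveat: TCP is a byte stream, so a single &lt;code&gt;recv()&lt;/code&gt; may return half an IRC line or several lines at once, while &lt;code&gt;readMessage()&lt;/code&gt; above assumes one call yields one complete message. A more robust approach, sketched here as an illustrative addition, accumulates bytes in a per-client buffer and only releases complete lines terminated by &lt;code&gt;\r\n&lt;/code&gt;:&lt;/p&gt;

```cpp
#include <string>
#include <vector>

// Illustrative sketch: append newly received bytes to a per-client buffer
// and extract only complete lines terminated by "\r\n". Any trailing
// partial line stays in the buffer until the next recv() completes it.
std::vector<std::string> extractLines(std::string &buffer, const std::string &chunk) {
    std::vector<std::string> lines;
    buffer += chunk;
    std::string::size_type pos;
    while ((pos = buffer.find("\r\n")) != std::string::npos) {
        lines.push_back(buffer.substr(0, pos)); // complete line, delimiter stripped
        buffer.erase(0, pos + 2);               // keep any remaining bytes
    }
    return lines;
}
```

&lt;p&gt;Feeding each &lt;code&gt;recv()&lt;/code&gt; result through a function like this guarantees the command handler only ever sees whole commands, regardless of how the network fragments them.&lt;/p&gt;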

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void CommandHandler::handleCommand(std::string command, Client *client)
{
    // Remove leading colon from command if present
    if (command[0] == ':')
        command = command.substr(1);

    // Process each command line by line
    std::stringstream ss(command);
    std::string cmd;
    while(std::getline(ss, cmd))
    {
        // Trim any trailing carriage return characters and skip empty lines
        if (!cmd.empty() &amp;amp;&amp;amp; cmd.back() == '\r')
            cmd.pop_back();
        if (cmd.empty())
            continue;

        // Retrieve command name
        std::string commandName = cmd.substr(0, cmd.find(" "));

        try
        {
            // Retrieve corresponding command object
            Command *c = _commands.at(commandName);

            // Extract command arguments
            std::string argsBuffer = cmd.substr(cmd.find(" ") + 1);
            std::istringstream argsStream(argsBuffer);
            std::string arg;
            std::list&amp;lt;std::string&amp;gt; args;
            while(std::getline(argsStream, arg, ' '))
            {
                arg.erase(std::remove_if(arg.begin(), arg.end(), ::isspace), arg.end());
                args.push_back(arg);
            }

            // Execute command with client and arguments
            c-&amp;gt;run(client, args);
        }
        catch (const std::out_of_range &amp;amp;e)
        {
            // Handle unknown command error
            client-&amp;gt;reply(Replies::ERR_UNKNOWNCOMMAND(commandName));
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Command Processing: Splits the incoming command into individual lines (&lt;code&gt;cmd&lt;/code&gt;), trims any trailing carriage return characters, and retrieves the command name.&lt;/li&gt;
&lt;li&gt;Command Execution: Attempts to locate and execute the corresponding command handler (&lt;code&gt;Command *c = _commands.at(commandName);&lt;/code&gt;) using &lt;code&gt;_commands&lt;/code&gt; map.&lt;/li&gt;
&lt;li&gt;Argument Parsing: Extracts command arguments from &lt;code&gt;cmd&lt;/code&gt;, parses them, and executes the command with the client and parsed arguments (&lt;code&gt;c-&amp;gt;run(client, args);&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Error Handling: Catches the &lt;code&gt;std::out_of_range&lt;/code&gt; exception to handle unknown commands (&lt;code&gt;client-&amp;gt;reply(Replies::ERR_UNKNOWNCOMMAND(commandName));&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
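&lt;p&gt;The &lt;code&gt;_commands&lt;/code&gt; map that &lt;code&gt;handleCommand()&lt;/code&gt; consults is essentially a dispatch table from command names to handler objects. A minimal sketch of that pattern is shown below; the class names and reply strings are illustrative, not the project's exact types:&lt;/p&gt;

```cpp
#include <list>
#include <map>
#include <stdexcept>
#include <string>

// Illustrative dispatch table in the spirit of _commands: each command
// name maps to a handler object whose run() executes the command.
struct Command {
    virtual ~Command() {}
    virtual std::string run(const std::list<std::string> &args) = 0;
};

struct NickCommand : Command {
    std::string run(const std::list<std::string> &args) {
        return args.empty() ? "no nickname given" : "nick set to " + args.front();
    }
};

// Look up a command by name. std::map::at throws std::out_of_range for
// unknown names, which is the case handleCommand() catches to reply
// with ERR_UNKNOWNCOMMAND.
std::string dispatch(std::map<std::string, Command*> &commands,
                     const std::string &name,
                     const std::list<std::string> &args) {
    return commands.at(name)->run(args);
}
```

&lt;p&gt;Registering each handler once in the map keeps &lt;code&gt;handleCommand()&lt;/code&gt; free of long if/else chains: adding a new IRC command only requires adding one entry.&lt;/p&gt;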

&lt;p&gt;The diagram of the algorithm is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi88kymvtiqhr65qxkhmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi88kymvtiqhr65qxkhmn.png" alt="algorithm flow" width="800" height="721"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4 — Creating a bot:
&lt;/h2&gt;

&lt;p&gt;In this section, you have the discretion to select the type of bot to develop, ensuring its relevance within the project framework. For this purpose, I have chosen to develop a weather bot, which functions as a client connecting to the server. Other clients interact with this bot using the command format &lt;code&gt;WEATHER &amp;lt;city&amp;gt;&lt;/code&gt;. When the server receives such a command, it relays it to the bot and awaits the bot's response. Upon receiving the weather information from the bot, the server directs this response back to the specific client who initiated the query.&lt;/p&gt;

&lt;p&gt;Below is an overview of the logical structure underlying this process:&lt;/p&gt;

&lt;p&gt;1. Clients send a request in the form &lt;code&gt;WEATHER &amp;lt;city&amp;gt;&lt;/code&gt; to the server.&lt;br&gt;
2. The server receives the request and forwards it to the weather bot.&lt;br&gt;
3. The bot processes the request, querying the relevant weather data.&lt;br&gt;
4. The bot sends the weather information back to the server.&lt;br&gt;
5. The server, upon receiving the bot’s response, forwards this information to the client who made the initial request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9n4vspxyarb4n32a3o1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9n4vspxyarb4n32a3o1.png" alt="bot handling flow in server" width="800" height="1370"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  1 — Inside the weather bot's brain:
&lt;/h4&gt;

&lt;p&gt;The code below checks the arguments and the &lt;code&gt;WEATHER_API_KEY&lt;/code&gt; environment variable, attempts to connect and register with the server, and, on success, starts listening for incoming messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;int main(int argc, char **argv) {
    if (argc != 3) {
        std::cerr &amp;lt;&amp;lt; GREEN &amp;lt;&amp;lt; "Usage: " &amp;lt;&amp;lt; argv[0] &amp;lt;&amp;lt; " &amp;lt;port&amp;gt; &amp;lt;password&amp;gt;" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        return 1;
    }

    std::string server = "localhost";
    int port = std::stoi(argv[1]);
    std::string password = argv[2];
    int sockfd;
    char *apikey = getenv("WEATHER_API_KEY");
    std::string apiKey;
    if (apikey)
    {
        apiKey = apikey;
        std::cout &amp;lt;&amp;lt; GREEN &amp;lt;&amp;lt; "WEATHER_API_KEY environment variable set ✅" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
    }
    else
    {
        std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "WEATHER_API_KEY environment variable not set ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        exit(1);
    }

    if (connectToServer(server, port, sockfd)) {

        std::cout &amp;lt;&amp;lt; GREEN &amp;lt;&amp;lt; "Connected to server ✅" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;

        // Send registration messages
        if (!sendToServer(sockfd, "PASS " + password + "\r\n") ||
            !sendToServer(sockfd, "NICK bot\r\n") ||
            !sendToServer(sockfd, "USER botname 0 * :bot\r\n")) {
            std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Failed to register with the server ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
            close(sockfd);
            return 1;
        }
        std::cout &amp;lt;&amp;lt; GREEN &amp;lt;&amp;lt; "Registered with the server ✅" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        std::cout &amp;lt;&amp;lt; GREEN &amp;lt;&amp;lt; "Listening for messages..." &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        listenForMessages(sockfd, apiKey);
        close(sockfd);
    } else {
        std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Failed to connect to server ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
    }

    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we examine the &lt;code&gt;connectToServer&lt;/code&gt; function, which attempts to connect to the server using a socket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bool connectToServer(const std::string &amp;amp;server, int port, int &amp;amp;sockfd) {
    struct sockaddr_in server_addr;
    struct hostent *host;

    if ((host = gethostbyname(server.c_str())) == NULL) {
        std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Failed to get host by name ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        return false;
    }

    if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
        std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Failed to create socket ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        return false;
    }

    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(port);
    server_addr.sin_addr = *((struct in_addr *)host-&amp;gt;h_addr);
    memset(&amp;amp;(server_addr.sin_zero), '\0', 8);

    if (connect(sockfd, (struct sockaddr *)&amp;amp;server_addr, sizeof(struct sockaddr)) == -1) {
        std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Failed to connect to server ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
        close(sockfd);
        return false;
    }
    return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Hostname Resolution (&lt;code&gt;gethostbyname&lt;/code&gt;):
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The function first attempts to resolve the hostname (&lt;code&gt;server&lt;/code&gt;) to an IP address using &lt;code&gt;gethostbyname&lt;/code&gt;. If unsuccessful (returns &lt;code&gt;NULL&lt;/code&gt;), it prints an error message and returns &lt;code&gt;false&lt;/code&gt;, indicating failure to resolve the host.&lt;/li&gt;
&lt;/ul&gt;
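&lt;p&gt;Note that &lt;code&gt;gethostbyname&lt;/code&gt; is obsolete (removed from POSIX.1-2008), is not thread-safe, and handles only IPv4. An equivalent lookup using the modern &lt;code&gt;getaddrinfo&lt;/code&gt; is shown below as an illustrative alternative, not as the project's actual code:&lt;/p&gt;

```cpp
#include <arpa/inet.h>
#include <cstring>
#include <netdb.h>
#include <stdexcept>
#include <string>

// Illustrative replacement for the gethostbyname call: resolve a host name
// (or numeric address) into a sockaddr_in ready to pass to connect().
sockaddr_in resolveIPv4(const std::string &host, int port) {
    addrinfo hints, *res;
    std::memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;        // IPv4 only, matching the original code
    hints.ai_socktype = SOCK_STREAM;  // TCP
    if (getaddrinfo(host.c_str(), NULL, &hints, &res) != 0)
        throw std::runtime_error("Failed to resolve host");
    sockaddr_in addr = *reinterpret_cast<sockaddr_in*>(res->ai_addr);
    addr.sin_port = htons(port);
    freeaddrinfo(res);
    return addr;
}
```

&lt;p&gt;Beyond thread safety, &lt;code&gt;getaddrinfo&lt;/code&gt; also supports IPv6 by setting &lt;code&gt;ai_family&lt;/code&gt; to &lt;code&gt;AF_UNSPEC&lt;/code&gt;, which &lt;code&gt;gethostbyname&lt;/code&gt; cannot do.&lt;/p&gt;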

&lt;h5&gt;
  
  
  Socket Creation (&lt;code&gt;socket&lt;/code&gt;):
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;If hostname resolution succeeds, the function creates a TCP socket (&lt;code&gt;SOCK_STREAM&lt;/code&gt;) using &lt;code&gt;socket(AF_INET, SOCK_STREAM, 0)&lt;/code&gt;. If socket creation fails (returns &lt;code&gt;-1&lt;/code&gt;), it prints an error message and returns &lt;code&gt;false&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Server Address Configuration (&lt;code&gt;server_addr&lt;/code&gt;):
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The function initializes a &lt;code&gt;sockaddr_in&lt;/code&gt; structure (&lt;code&gt;server_addr&lt;/code&gt;) to hold the server's address information:&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sin_family&lt;/code&gt; is set to &lt;code&gt;AF_INET&lt;/code&gt; indicating IPv4.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sin_port&lt;/code&gt; is set to the specified port number (&lt;code&gt;port&lt;/code&gt;) in network byte order (&lt;code&gt;htons&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sin_addr&lt;/code&gt; is set to the resolved IP address obtained from &lt;code&gt;gethostbyname&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;memset&lt;/code&gt; initializes the remaining bytes of &lt;code&gt;sin_zero&lt;/code&gt; to &lt;code&gt;0&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  Connecting to the Server (&lt;code&gt;connect&lt;/code&gt;):
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;The function attempts to establish a connection to the server using &lt;code&gt;connect(sockfd, (struct sockaddr *)&amp;amp;server_addr, sizeof(struct sockaddr))&lt;/code&gt;. If the connection fails (returns &lt;code&gt;-1&lt;/code&gt;), it prints an error message, closes the socket (&lt;code&gt;sockfd&lt;/code&gt;), and returns &lt;code&gt;false&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void listenForMessages(int sockfd, std::string&amp;amp; apiKey) {
    char buffer[512];
    int numBytes;
    while (true) {
        if ((numBytes = recv(sockfd, buffer, sizeof(buffer) - 1, 0)) &amp;gt; 0) {
            buffer[numBytes] = '\0';
            std::string message(buffer);
            std::istringstream iss(message);
            std::string command, user, city;

            iss &amp;gt;&amp;gt; command &amp;gt;&amp;gt; user &amp;gt;&amp;gt; city;

            if (command == "WEATHER") {
                std::string weatherJson = fetchWeatherData(city, apiKey);

                std::string formattedMessage = formatWeatherResponse(weatherJson);
                std::cout &amp;lt;&amp;lt; "Client: " &amp;lt;&amp;lt; user &amp;lt;&amp;lt; " requested weather for " &amp;lt;&amp;lt; city &amp;lt;&amp;lt; std::endl;
                // Separate stream name to avoid shadowing the outer iss
                std::istringstream respStream(formattedMessage);
                std::string line;
                while (std::getline(respStream, line)) {
                    std::string response = "PRIVMSG " + user + " :" + line + "\r\n";
                    sendToServer(sockfd, response);
                }
            }

        } else if (numBytes == 0) {
            std::cout &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Server closed the connection" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
            break;
        } else {
            std::cerr &amp;lt;&amp;lt; RED &amp;lt;&amp;lt; "Failed to receive message ❌" &amp;lt;&amp;lt; RESET &amp;lt;&amp;lt; std::endl;
            break;
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;std::string fetchWeatherData(const std::string&amp;amp; city, const std::string&amp;amp; apiKey) {
    // Hardcoding an API key in the source is a vulnerability, so we read it
    // from an environment variable instead of embedding it like this:
    // std::string apiKey = "be44eb7afe554d9890b210909240806";

    std::string url = "http://api.weatherapi.com/v1/current.json?key=" + apiKey + "&amp;amp;q=" + city;
    std::string command = "curl -s \"" + url + "\" -o weather.json";

    // Execute the curl command
    system(command.c_str());

    // Read the content of the file into a string
    std::ifstream file("weather.json");
    std::string response((std::istreambuf_iterator&amp;lt;char&amp;gt;(file)), std::istreambuf_iterator&amp;lt;char&amp;gt;());
    file.close();

    // Remove the temporary file
    remove("weather.json");

    return response;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
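The temporary `weather.json` round-trip could also be avoided by reading curl's stdout directly through a pipe. A minimal sketch, assuming the command string is built the same way as in `fetchWeatherData` above:

```cpp
#include <cstdio>
#include <string>

// Run a shell command and capture its stdout, replacing the
// system() call plus temporary-file read with a single pipe.
std::string runCommand(const std::string& command) {
    std::string output;
    FILE* pipe = popen(command.c_str(), "r");
    if (pipe == NULL)
        return output; // empty string signals failure
    char buffer[256];
    while (fgets(buffer, sizeof(buffer), pipe) != NULL)
        output += buffer;
    pclose(pipe);
    return output;
}
```

Either way, shelling out to curl keeps the code simple at the cost of spawning a process per request; a library such as libcurl would avoid that.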





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  std::string formatWeatherResponse(const std::string&amp;amp; weatherJson) {
    std::string location = extractValue(weatherJson, "name");
    std::string country = extractValue(weatherJson, "country");
    std::string condition = extractValue(weatherJson, "text");
    std::string temp_c = extractValue(weatherJson, "temp_c");
    std::string wind_kph = extractValue(weatherJson, "wind_kph");
    std::string humidity = extractValue(weatherJson, "humidity");
    std::string uv_index = extractValue(weatherJson, "uv");

    std::ostringstream oss;
    oss &amp;lt;&amp;lt; "------------------------\n";
    oss &amp;lt;&amp;lt; "Weather Information:\n";
    oss &amp;lt;&amp;lt; "Location: " &amp;lt;&amp;lt; location &amp;lt;&amp;lt; ", " &amp;lt;&amp;lt; country &amp;lt;&amp;lt; "\n";
    oss &amp;lt;&amp;lt; "Condition: " &amp;lt;&amp;lt; condition &amp;lt;&amp;lt; "\n";
    oss &amp;lt;&amp;lt; "Temperature: " &amp;lt;&amp;lt; temp_c &amp;lt;&amp;lt; "°C\n";
    oss &amp;lt;&amp;lt; "Wind Speed: " &amp;lt;&amp;lt; wind_kph &amp;lt;&amp;lt; " kph\n";
    oss &amp;lt;&amp;lt; "Humidity: " &amp;lt;&amp;lt; humidity &amp;lt;&amp;lt; "%\n";
    oss &amp;lt;&amp;lt; "UV Index: " &amp;lt;&amp;lt; uv_index &amp;lt;&amp;lt; "\n";
    oss &amp;lt;&amp;lt; "------------------------\n";

    return oss.str();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;std::string extractValue(const std::string&amp;amp; json, const std::string&amp;amp; key) {
    std::string searchKey = "\"" + key + "\":";
    std::size_t startPos = json.find(searchKey);
    if (startPos == std::string::npos) {
        return "";
    }

    startPos += searchKey.length();
    while (json[startPos] == ' ' || json[startPos] == '\"' || json[startPos] == '{') {
        startPos++;
    }

    std::size_t endPos = json.find_first_of(",}", startPos);
    std::string value = json.substr(startPos, endPos - startPos);
    value.erase(std::remove(value.begin(), value.end(), '\"'), value.end());
    return value;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
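To see how this naive extractor behaves, here is a self-contained check against a minimal WeatherAPI-style payload (the function body is copied verbatim from above; the sample JSON is illustrative, not a real API response):

```cpp
#include <algorithm>
#include <string>

// extractValue as defined above: scan for "key":, skip quotes/braces,
// and cut the value at the next ',' or '}'.
std::string extractValue(const std::string& json, const std::string& key) {
    std::string searchKey = "\"" + key + "\":";
    std::size_t startPos = json.find(searchKey);
    if (startPos == std::string::npos)
        return "";
    startPos += searchKey.length();
    while (json[startPos] == ' ' || json[startPos] == '\"' || json[startPos] == '{')
        startPos++;
    std::size_t endPos = json.find_first_of(",}", startPos);
    std::string value = json.substr(startPos, endPos - startPos);
    value.erase(std::remove(value.begin(), value.end(), '\"'), value.end());
    return value;
}
```

This works for the flat, predictable fields the bot needs, but it is not a real JSON parser; nested objects or strings containing commas would confuse it.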



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv5p3dneghkmb1zxv328.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv5p3dneghkmb1zxv328.png" alt="bot algorithm flow" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  NOTE:
&lt;/h2&gt;

&lt;p&gt;When working with APIs that require authentication via API keys, it’s crucial not to hardcode or embed these keys directly into your source code. Here’s why:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Security Risk: Hardcoding API keys exposes them to potential theft if the code repository is compromised or accessed by unauthorized parties.&lt;/li&gt;
&lt;li&gt;Best Practices: Instead of embedding API keys in code, use environment variables or configuration files that are not included in your version control system (e.g., .gitignore for Git repositories).&lt;/li&gt;
&lt;li&gt;Environment Variables: Store sensitive information like API keys in environment variables during development and deployment. This approach keeps them secure and separate from your application codebase.&lt;/li&gt;
&lt;li&gt;Avoiding Accidental Exposure: Inadvertently exposing API keys in public repositories can lead to unauthorized usage, potential costs, or compromised data.&lt;/li&gt;
&lt;li&gt;Security Hygiene: Regularly audit your codebase for any hardcoded sensitive information and implement secure practices to handle such credentials.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In my case I used an environment variable, but other approaches, such as a configuration file excluded from version control, work just as well.&lt;/p&gt;
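Reading the key from the environment is a few lines; a sketch, where `WEATHER_API_KEY` is a hypothetical variable name you would set before launching the bot (e.g. `export WEATHER_API_KEY=your-key`):

```cpp
#include <cstdlib>
#include <string>

// Load the API key from the environment instead of the source code;
// returns an empty string when the variable is unset.
std::string loadApiKey() {
    const char* key = std::getenv("WEATHER_API_KEY");
    return key != NULL ? std::string(key) : std::string();
}
```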

</description>
      <category>cpp</category>
      <category>network</category>
      <category>api</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
