<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Xavier Alexander</title>
    <description>The latest articles on DEV Community by Xavier Alexander (@xalexander).</description>
    <link>https://dev.to/xalexander</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F145324%2F1851d918-c629-4426-9a8d-6e60615a9529.jpeg</url>
      <title>DEV Community: Xavier Alexander</title>
      <link>https://dev.to/xalexander</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/xalexander"/>
    <language>en</language>
    <item>
      <title>Breaking Things (Slightly) with a Nexus Upgrade: A Retrospective</title>
      <dc:creator>Xavier Alexander</dc:creator>
      <pubDate>Mon, 03 Mar 2025 17:20:40 +0000</pubDate>
      <link>https://dev.to/xalexander/breaking-things-slightly-with-a-nexus-upgrade-a-retrospective-355n</link>
      <guid>https://dev.to/xalexander/breaking-things-slightly-with-a-nexus-upgrade-a-retrospective-355n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmtic8wnxkl1ojwbmx5m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmtic8wnxkl1ojwbmx5m.jpg" alt="8bit-computer" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-Upgrade
&lt;/h3&gt;

&lt;p&gt;A few weeks ago, I upgraded Nexus Repository Manager at my job. Nexus acts as a central hub for storing and distributing binaries, serving as a proxy for remote repositories, and hosting private artifacts. It sits behind Traefik, and both are defined in the same Docker Compose file. Since Traefik also needed an update, I figured I’d kill two birds with one stone 😅. I wasn’t nervous about the Nexus upgrade, since it was only a minor version bump. But Traefik? That one made me nervous, because it was a big jump.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Upgrade
&lt;/h3&gt;

&lt;p&gt;After reviewing the release notes, I felt confident. I bumped the image tags and restarted the containers. Everything seemed to be going well. I could access the Nexus UI and successfully push and pull images. I went to bed feeling extremely accomplished, as this was my first-ever production upgrade.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Aftermath
&lt;/h3&gt;

&lt;p&gt;The next morning, I logged in and went through my normal tasks—checking emails, pull requests, and double-checking that I could still push and pull images from Nexus. Everything seemed fine… until I got a message from someone in IT:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Hey man. Sorry to be the bearer of bad news, but Nexus is mega-bricked.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My heart sank. This issue affected a high-priority team—we’ll call them Team Alpha. Minutes later, a ticket came in with more details: some of their Jenkins jobs were returning a 502 error when attempting to push artifacts to Nexus. Oddly, other jobs were successfully pushing artifacts to Nexus.&lt;/p&gt;

&lt;p&gt;After a few hours of digging and troubleshooting, I got another ticket—this time from Team Alpha’s team lead. Again, my heart sank. Now they were requesting an elevated priority for the ticket. I reached out to their team lead, who mentioned that this same issue happened a few years ago, before I joined the company:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“I can’t remember exactly how you guys fixed it, but it was something with the proxy.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To me, this was good news—if it happened before, that meant it had already been fixed once. Though I didn’t know how, just knowing there was a solution was a step in the right direction.&lt;/p&gt;

&lt;p&gt;He also mentioned that the 502 timeout only happened when pushing really large images—the kind that take some time to upload. And it was failing at exactly 60 seconds. That immediately made me think of Traefik. I went back through the release notes, and then it hit me right in the face:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting with v2.11.2, the entryPoints.readTimeout option default value changed to 60 seconds.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The pushes of the large files were timing out because they were exceeding 60 seconds, the default value for &lt;code&gt;readTimeout&lt;/code&gt;. This also explained why the other jobs, which were pushing smaller files to Nexus, were succeeding. I let Team Alpha’s lead know, and he gave me the go-ahead to update the Docker Compose file and restart Traefik.&lt;/p&gt;
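
&lt;p&gt;The fix itself was a one-line change to Traefik’s static configuration. As a rough sketch (the service name, image tag, and entry point name below are placeholders, not our actual setup), raising the read timeout looks something like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker-compose.yml (sketch)
services:
  traefik:
    image: traefik:v2.11        # placeholder tag
    command:
      # Restore the pre-v2.11.2 behavior: 0 disables the read timeout
      # on this entry point. "websecure" is a placeholder name.
      - "--entryPoints.websecure.transport.respondingTimeouts.readTimeout=0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A duration like &lt;code&gt;300s&lt;/code&gt; also works if you’d rather keep a ceiling instead of disabling the timeout entirely.&lt;/p&gt;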

&lt;p&gt;After what felt like hours of waiting, he responded:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Issue fixed!”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;🎉🎉🎉&lt;/p&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Don’t wait until the night of an upgrade to review release notes. If I had started going over the release notes a week earlier, I would’ve caught the &lt;code&gt;readTimeout&lt;/code&gt; change.&lt;/li&gt;
&lt;li&gt;Avoid big gaps between upgrades if possible. Jumping from version 2.0.0 → 2.0.5 is much easier than 2.0.0 → 5.1.0.&lt;/li&gt;
&lt;li&gt;Expand the scope of post-upgrade testing and validation. OK, it works for my team, but what about the other teams who use it?&lt;/li&gt;
&lt;li&gt;Really, really read every change in the release notes. (Yes, every change.)&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Setting Up A 3-Node Kubernetes Cluster</title>
      <dc:creator>Xavier Alexander</dc:creator>
      <pubDate>Thu, 06 Feb 2025 22:46:45 +0000</pubDate>
      <link>https://dev.to/xalexander/setting-up-a-3-node-kubernetes-cluster-71o</link>
      <guid>https://dev.to/xalexander/setting-up-a-3-node-kubernetes-cluster-71o</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pbbb24fa0lketbm1rk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pbbb24fa0lketbm1rk0.png" alt="A tiny computer" width="100" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Six months ago, I started a new job as a DevSecOps engineer. During the interview process, the team gave me a heads up about what tools they used. I was excited to learn Kubernetes was one of them. I thought to myself, "Cool, I'll be ready. I've done a few tutorials on this." But after the first few days, I quickly realized I was wrong—I was not ready 😂. I wasn’t just dealing with single-container pods like in the tutorials. I was now faced with terms like &lt;code&gt;Ingress&lt;/code&gt;, &lt;code&gt;CRDs&lt;/code&gt;, and &lt;code&gt;PVCs&lt;/code&gt;… none of it made sense. The days of simple walkthroughs were long gone. I needed a homelab to start building for real. I ordered the following &lt;a href="https://www.amazon.com/dp/B0D5Y4BKZD?th=1" rel="noopener noreferrer"&gt;Beelink&lt;/a&gt;, installed Proxmox, and hit the ground running. &lt;/p&gt;

&lt;p&gt;In this post, I'll walk through how I set up a 3-node Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;3 VMs&lt;/li&gt;
&lt;li&gt;2 GB or more of RAM per machine&lt;/li&gt;
&lt;li&gt;2 CPUs or more for the control plane machine&lt;/li&gt;
&lt;li&gt;An OS on each VM (I'm using Rocky Linux)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  On Every Node
&lt;/h2&gt;

&lt;p&gt;The following should be completed on every node. &lt;/p&gt;

&lt;h3&gt;
  
  
  Disable Swap
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;The default behavior of the kubelet is to fail to start if swap memory is detected on a node. This means that swap should either be disabled or explicitly tolerated by the kubelet.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;/etc/fstab&lt;/code&gt; comment out the line that declares swap.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#/dev/mapper/rl-swap     none                    swap    defaults        0 0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
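
&lt;p&gt;Commenting out the &lt;code&gt;fstab&lt;/code&gt; entry only keeps swap from coming back on the next boot. To turn it off for the current session as well, so you don't need a reboot, you can also run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Disable all active swap devices immediately
sudo swapoff -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;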



&lt;p&gt;Confirm by running the &lt;code&gt;free&lt;/code&gt; command. You should see 0s for the Swap row.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;xavier@lab-cp ~]&lt;span class="nv"&gt;$ &lt;/span&gt;free &lt;span class="nt"&gt;-m&lt;/span&gt;
               total        used        free      shared  buff/cache   available
Mem:            3562        1725         769          17        1316        1836
Swap:              0           0           0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install Containerd
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;To run containers in pods, we need a container runtime. I went with containerd.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# This command adds the docker repoistory &lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf config-manager &lt;span class="nt"&gt;--add-repo&lt;/span&gt; https://download.docker.com/linux/centos/docker-ce.repo

&lt;span class="c"&gt;# This command installs containerd&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install &lt;/span&gt;containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
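
&lt;p&gt;On Rocky Linux, the service isn't necessarily started after installation, so enable and start it before checking its status:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start containerd now and on every boot
sudo systemctl enable --now containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;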



&lt;p&gt;Ensure containerd is healthy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;xavier@lab-cp ~]&lt;span class="nv"&gt;$ &lt;/span&gt;systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded &lt;span class="o"&gt;(&lt;/span&gt;/usr/lib/systemd/system/containerd.service&lt;span class="p"&gt;;&lt;/span&gt; enabled&lt;span class="p"&gt;;&lt;/span&gt; preset: disabled&lt;span class="o"&gt;)&lt;/span&gt;
     Active: active &lt;span class="o"&gt;(&lt;/span&gt;running&lt;span class="o"&gt;)&lt;/span&gt; since Tue 2025-01-21 18:32:57 EST&lt;span class="p"&gt;;&lt;/span&gt; 2 weeks 1 day ago
       Docs: https://containerd.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install kubelet, kubeadm, and kubectl
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation
&lt;/h4&gt;

&lt;p&gt;kubelet: The primary node agent that runs on each node, ensuring containers are running.&lt;br&gt;
kubeadm: A tool that helps bootstrap and manage Kubernetes clusters.&lt;br&gt;
kubectl: The command-line tool used to interact with the Kubernetes API and manage cluster resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;#--disableexcludes=kubernetes ensures that no repository exclusions prevent the installation of these packages.&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; kubelet kubeadm kubectl &lt;span class="nt"&gt;--disableexcludes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
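
&lt;p&gt;The kubelet should also be enabled so it starts on boot. It will restart in a loop until &lt;code&gt;kubeadm&lt;/code&gt; hands it a configuration later on, which is expected:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start the kubelet now and on every boot
sudo systemctl enable --now kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;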



&lt;h3&gt;
  
  
  Configure cgroup Driver
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;The kubelet and the underlying container runtime need to interface with control groups (cgroups) to enforce resource management for pods and containers, which includes CPU/memory requests and limits for containerized workloads. The cgroup driver is responsible for managing and allocating system resources such as CPU and memory to containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# This command moves (renames) the existing config.toml file to config.toml.bak as a backup.&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mv&lt;/span&gt; /etc/containerd/config.toml /etc/containerd/config.toml.bak

&lt;span class="c"&gt;# This command generates the default configuration for containerd, a container runtime, and writes it to a new config.toml file.&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;containerd config default &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; config.toml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the config.toml file, set &lt;code&gt;SystemdCgroup&lt;/code&gt; to true&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2F0%2A9xY_Av2HZiL6FD_b" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2F0%2A9xY_Av2HZiL6FD_b" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Network Configuration
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;When setting up a Kubernetes cluster, we need to adjust certain kernel parameters to ensure that networking functions correctly. These settings control how the Linux kernel handles network traffic, particularly when dealing with bridged network interfaces and packet forwarding.&lt;/p&gt;

&lt;p&gt;Edit the &lt;code&gt;/etc/sysctl.d/k8s.conf&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;net.ipv4.ip_forward &lt;span class="o"&gt;=&lt;/span&gt; 1
net.bridge.bridge-nf-call-ip6tables &lt;span class="o"&gt;=&lt;/span&gt; 1
net.bridge.bridge-nf-call-iptables &lt;span class="o"&gt;=&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
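
&lt;p&gt;Two follow-ups here: the &lt;code&gt;bridge-nf-call&lt;/code&gt; keys only exist once the &lt;code&gt;br_netfilter&lt;/code&gt; kernel module is loaded, and sysctl files aren't re-read until you apply them. Something like this covers both:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Load the modules the container runtime and bridged networking rely on
sudo modprobe overlay
sudo modprobe br_netfilter

# Make the modules load on every boot
printf "overlay\nbr_netfilter\n" | sudo tee /etc/modules-load.d/k8s.conf

# Apply all sysctl files without rebooting
sudo sysctl --system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;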






&lt;h2&gt;
  
  
  On The Control Plane Only
&lt;/h2&gt;

&lt;p&gt;The following should be completed on the control plane only.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open The Required Ports
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;th&gt;Port Range&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Used By&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;6443&lt;/td&gt;
&lt;td&gt;Kubernetes API server&lt;/td&gt;
&lt;td&gt;All&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;2379-2380&lt;/td&gt;
&lt;td&gt;etcd server client API&lt;/td&gt;
&lt;td&gt;kube-apiserver, etcd&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;10250&lt;/td&gt;
&lt;td&gt;Kubelet API&lt;/td&gt;
&lt;td&gt;Self, Control plane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;10259&lt;/td&gt;
&lt;td&gt;kube-scheduler&lt;/td&gt;
&lt;td&gt;Self&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;10257&lt;/td&gt;
&lt;td&gt;kube-controller-manager&lt;/td&gt;
&lt;td&gt;Self&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
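
&lt;p&gt;Since Rocky Linux ships with firewalld, opening the ports from the table can be done like this (a sketch; adjust the zone if you're not using the default):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Open the control plane ports listed above
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp

# Reload so the permanent rules take effect now
sudo firewall-cmd --reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;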

&lt;h3&gt;
  
  
  Initialize The Cluster
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; sets up the control plane, configuring essential components like the API server, scheduler, and controller manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init &lt;span class="nt"&gt;--apiserver-advertise-address&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;HOST IP&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--pod-network-cidr&lt;/span&gt; 10.244.0.0/16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're successful, you'll get something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
  &lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
  &lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config

Alternatively, &lt;span class="k"&gt;if &lt;/span&gt;you are the root user, you can run:

  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run &lt;span class="s2"&gt;"kubectl apply -f [podnetwork].yaml"&lt;/span&gt; with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can &lt;span class="nb"&gt;join &lt;/span&gt;any number of worker nodes by running the following on each as root:

kubeadm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;HOST IP&lt;span class="o"&gt;}&lt;/span&gt;:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;TOKEN&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&lt;span class="o"&gt;{&lt;/span&gt;HASH&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this output somewhere. You'll need it shortly. &lt;/p&gt;

&lt;h3&gt;
  
  
  Set Up KUBECONFIG
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;A kubeconfig file stores information about clusters, users, namespaces, and authentication mechanisms. It provides the credentials and API server details that &lt;code&gt;kubectl&lt;/code&gt; needs to connect to and interact with a cluster, and it can hold access details for multiple clusters in a single place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy A CNI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Explanation:
&lt;/h4&gt;

&lt;p&gt;In order for your pods to communicate with each other, you need to install a Container Network Interface (CNI) plugin.&lt;/p&gt;

&lt;p&gt;There are a ton of network plugins to choose from. I ended up going with Cilium. Installation steps can be found &lt;a href="https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
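
&lt;p&gt;Assuming you've installed the &lt;code&gt;cilium&lt;/code&gt; CLI per those docs (flags and versions omitted here), bringing Cilium up is roughly two commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Install Cilium into the cluster targeted by the current kubeconfig
cilium install

# Wait until all Cilium components report ready
cilium status --wait
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;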




&lt;h2&gt;
  
  
  On The Worker Nodes Only
&lt;/h2&gt;

&lt;p&gt;The following should be completed on the worker nodes only.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open The Required Ports
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;th&gt;Port Range&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Used By&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;10250&lt;/td&gt;
&lt;td&gt;Kubelet API&lt;/td&gt;
&lt;td&gt;Self, Control plane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;10256&lt;/td&gt;
&lt;td&gt;kube-proxy&lt;/td&gt;
&lt;td&gt;Self, Load balancers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TCP&lt;/td&gt;
&lt;td&gt;Inbound&lt;/td&gt;
&lt;td&gt;30000-32767&lt;/td&gt;
&lt;td&gt;NodePort Services&lt;/td&gt;
&lt;td&gt;All&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
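
&lt;p&gt;Same idea as on the control plane, again assuming firewalld:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Open the worker node ports listed above
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10256/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;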

&lt;p&gt;Now you can join the worker nodes to the control plane. From the output you saved earlier, run the &lt;code&gt;kubeadm join&lt;/code&gt; command as root (or with sudo).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm &lt;span class="nb"&gt;join&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;HOST IP&lt;span class="o"&gt;}&lt;/span&gt;:6443 &lt;span class="nt"&gt;--token&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;TOKEN&lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&lt;span class="o"&gt;{&lt;/span&gt;HASH&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will take a few minutes, but if it was successful, you'll get a message saying &lt;code&gt;This node has joined the cluster&lt;/code&gt;.&lt;/p&gt;
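
&lt;p&gt;Back on the control plane, you can confirm that all three nodes registered. The node names, ages, and versions below are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[xavier@lab-cp ~]$ kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
lab-cp      Ready    control-plane   20m   v1.32.1
lab-node1   Ready    &amp;lt;none&amp;gt;          5m    v1.32.1
lab-node2   Ready    &amp;lt;none&amp;gt;          4m    v1.32.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;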

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If all went well, you should now have a 3-node cluster! To verify, create a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;xavier@lab-cp ~]&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl run mypod &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx:latest
pod/mypod created
&lt;span class="o"&gt;[&lt;/span&gt;xavier@lab-cp ~]&lt;span class="nv"&gt;$ &lt;/span&gt;k get pods
NAME            READY   STATUS    RESTARTS   AGE
mypod           1/1     Running   0          36s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are much easier ways to set up a cluster, but using kubeadm gives you a deeper understanding of how Kubernetes works under the hood. &lt;/p&gt;

&lt;p&gt;This method also provides more flexibility, allowing you to customize your setup based on your needs. Whether you’re setting up a test environment or preparing for a production deployment, mastering the fundamentals will make troubleshooting and scaling your cluster much easier.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
