<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Codewired</title>
    <description>The latest articles on DEV Community by Codewired (@codewired).</description>
    <link>https://dev.to/codewired</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F30373%2Ff9f5c0ce-d4dc-46bf-9656-b37cc752f325.jpg</url>
      <title>DEV Community: Codewired</title>
      <link>https://dev.to/codewired</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/codewired"/>
    <language>en</language>
    <item>
      <title>PART 1: Deploy modern applications on a production grade, local K8s Cluster, layered with Istio Service Mesh and Observability.</title>
      <dc:creator>Codewired</dc:creator>
      <pubDate>Tue, 28 May 2024 21:09:34 +0000</pubDate>
      <link>https://dev.to/codewired/part-1-deploy-modern-applications-on-a-production-grade-local-kubernetes-cluster-with-istio-service-mesh-and-observability-1ifp</link>
      <guid>https://dev.to/codewired/part-1-deploy-modern-applications-on-a-production-grade-local-kubernetes-cluster-with-istio-service-mesh-and-observability-1ifp</guid>
      <description>&lt;p&gt;This first part of the three part series will guide you through, how to setup a Hackathon Starter ready, production grade, local development k8s cluster and a service mesh using Rancher's k3s light weight clusters and Istio service mesh. We will then deploy a sample application, add observability and ramp up traffic to see the service mesh in action. The next two parts of this series which will be released next month, will focus on full stack application development with Next.js and FastAPI, effectively showing intermediate and advanced developers how to scaffold production grade dashboard applications and powerful, scalable Fast REST APIs for all purposes,  and finally deploying them to the infrastructure that we will setup in this part. &lt;/p&gt;

&lt;p&gt;INSTALL DOCKER AND k3d ON LINUX (Debian) &amp;amp; macOS&lt;/p&gt;

&lt;p&gt;Install Docker Engine and k3d on macOS&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;brew update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;brew install --cask docker&lt;/code&gt; (Recommended if you don't have docker desktop installed)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;brew install k3d&lt;/code&gt; (Install k3d tool)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On Linux, you will need to uninstall any older version of Docker Engine first. There are two ways to do it.&lt;/p&gt;

&lt;p&gt;The first option is to uninstall the Docker Engine, CLI, containerd, and Compose packages:&lt;br&gt;
&lt;code&gt;sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To delete all images, containers, and volumes, run:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;sudo rm -rf /var/lib/docker&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo rm -rf /var/lib/containerd&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The second option is to run the command below to remove all conflicting packages:&lt;br&gt;
&lt;code&gt;for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install the latest Docker Engine using the apt repository.&lt;br&gt;
Add Docker's official GPG key:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo apt-get install ca-certificates curl&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo install -m 0755 -d /etc/apt/keyrings&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo chmod a+r /etc/apt/keyrings/docker.asc&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Add the repository to Apt sources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release &amp;amp;&amp;amp; echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you use an Ubuntu derivative distro, such as Linux Mint, you may need to use &lt;code&gt;UBUNTU_CODENAME&lt;/code&gt; instead of &lt;code&gt;VERSION_CODENAME&lt;/code&gt;.&lt;/p&gt;
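&lt;p&gt;To see what that codename substitution in the repository line actually expands to, here is a minimal sketch against a sample os-release file (the values shown assume Ubuntu 24.04; your real &lt;code&gt;/etc/os-release&lt;/code&gt; will differ):&lt;/p&gt;

```shell
# Sample os-release file (assumption: Ubuntu 24.04 values; the real file lives at /etc/os-release)
cat > /tmp/os-release-sample <<'EOF'
VERSION_CODENAME=noble
UBUNTU_CODENAME=noble
EOF
# Source the file in a subshell and print the codename, exactly as the apt source line does
codename=$(. /tmp/os-release-sample && echo "$VERSION_CODENAME")
echo "$codename"
```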

&lt;p&gt;To install the latest version run:&lt;br&gt;
&lt;code&gt;sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To install a specific version for your distro release, first list the available versions:&lt;br&gt;
&lt;code&gt;apt-cache madison docker-ce | awk '{ print $3 }'&lt;/code&gt;&lt;br&gt;
Then select and install the desired version&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;VERSION_STRING=5:26.1.0-1~ubuntu.24.04~noble&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
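&lt;p&gt;For clarity, this is roughly what &lt;code&gt;apt-cache madison&lt;/code&gt; output looks like and why &lt;code&gt;awk&lt;/code&gt; picks column 3; a sketch against sample output (the version strings below are illustrative):&lt;/p&gt;

```shell
# Sample `apt-cache madison docker-ce` output (illustrative versions; real output varies by distro)
cat > /tmp/madison-sample <<'EOF'
docker-ce | 5:26.1.0-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:26.0.2-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
EOF
# Fields split on whitespace: $1=package, $2="|", $3=version string
awk '{ print $3 }' /tmp/madison-sample
```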

&lt;p&gt;Create a docker group and add the current user (log out and back in, or run &lt;code&gt;newgrp docker&lt;/code&gt;, for the new membership to take effect)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;sudo groupadd -f docker&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo usermod -aG docker $USER&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verify that the Docker Engine installation is successful by running the hello-world image.&lt;br&gt;
&lt;code&gt;sudo docker run hello-world&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install k3d, a lightweight wrapper for running Rancher's lightweight k3s clusters.&lt;br&gt;
Latest Release &lt;code&gt;curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash&lt;/code&gt;&lt;br&gt;
Specific Release &lt;code&gt;curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | TAG=v5.0.0 bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Show docker version&lt;br&gt;
&lt;code&gt;docker version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Show k3d version&lt;br&gt;
&lt;code&gt;k3d version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;INSTALL ISTIO (As of this writing, the latest release was 1.22.0, which works with Kubernetes 1.30.1)&lt;/p&gt;

&lt;p&gt;Install from Istio Download site&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;curl -L https://istio.io/downloadIstio | sh -&lt;/code&gt;  (Install the Latest Release Version)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.22.0 TARGET_ARCH=x86_64 sh -&lt;/code&gt; (Install a specific version OR override processor architecture)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Install from the GitHub repo&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;ISTIO_VERSION=1.22.0&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ISTIO_URL=https://github.com/istio/istio/releases/download/$ISTIO_VERSION/istio-$ISTIO_VERSION-linux-amd64.tar.gz&lt;/code&gt; (For Linux processor ARCH, change as needed)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;curl -L $ISTIO_URL | tar xz&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Move into the Istio folder and add Istio's &lt;code&gt;bin&lt;/code&gt; directory to your PATH so you can run istioctl from anywhere&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;cd istio-1.22.0&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;export PATH=$PWD/bin:$PATH&lt;/code&gt; (You should add this line to your shell config file, .zshrc or .bashrc. Replace the $PWD value with the actual value in your shell config)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Show Istio CTL version&lt;br&gt;
&lt;code&gt;istioctl version&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Inspect profiles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;istioctl profile list&lt;/code&gt; (Will list available profiles)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;istioctl profile dump default&lt;/code&gt; (Will dump the default profile config)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Install Istio with the demo profile&lt;br&gt;
&lt;code&gt;istioctl install --set profile=demo -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Deploy a multi-node k3s Kubernetes cluster (v1.30.1) with a local registry, disabling Traefik in favor of Istio (3 nodes, including the control plane)&lt;br&gt;
The incantation below creates a three-node Kubernetes cluster (1 control plane and 2 workers), uses load-balancer ports to expose internal applications via the nginx load balancer, and sets up an internal registry for pushing local images&lt;/p&gt;

&lt;p&gt;&lt;code&gt;k3d cluster create svc-mesh-poc --agents 2 --port 7443:443@loadbalancer --port 3070:80@loadbalancer --api-port 6443 --registry-create svc-mesh-registry --image rancher/k3s:v1.30.1-k3s1 --k3s-arg '--disable=traefik@server:*'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Probe the local k3d containers and the newly created cluster&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;docker ps --format 'table {{.ID}}\t{{.Image}}\t{{.Names}}\t{{.Ports}}'&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl get nodes&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl get ns&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl get pods -A&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl get services -A&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Create a namespace for the demo application so we can see Istio in action&lt;br&gt;
&lt;code&gt;kubectl create namespace istio-demo-app&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To enable automatic injection of Envoy sidecar proxies in the demo app namespace, run the following (otherwise you will need to inject sidecars manually when you deploy your applications):&lt;br&gt;
&lt;code&gt;kubectl label namespace istio-demo-app istio-injection=enabled&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Deploy Istio's demo application using the manifests in the samples folder of the Istio installation, which point to Istio's public registries (examine the manifests before applying, and make sure you are in the Istio version folder)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;cd istio-1.22.0&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl -n istio-demo-app apply -f samples/bookinfo/platform/kube/bookinfo.yaml&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When installation is complete verify pods and services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kubectl -n istio-demo-app get services&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl -n istio-demo-app get pods&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Open outside traffic to the pods so we can browse the app locally and on the internal network:&lt;br&gt;
&lt;code&gt;kubectl -n istio-demo-app apply -f samples/bookinfo/networking/bookinfo-gateway.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Analyze the namespace for errors&lt;br&gt;
&lt;code&gt;istioctl -n istio-demo-app analyze&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open in browser&lt;br&gt;
&lt;code&gt;http://localhost:3070/productpage&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install Metrics and Tracing Utilities&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kubectl apply -f samples/addons&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl rollout status deployment/kiali -n istio-system&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the addons fail to install, run the command again; transient timing issues are usually resolved on a second attempt.&lt;/p&gt;

&lt;p&gt;Access the Kiali dashboard&lt;br&gt;
&lt;code&gt;istioctl dashboard kiali&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Ramp up hits on the demo application to see the Istio mesh in action in Kiali. Run the command&lt;br&gt;
&lt;code&gt;for i in $(seq 1 100); &lt;br&gt;
 do curl -so /dev/null http://localhost:3070/productpage; &lt;br&gt;
 done&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You may need to clean up all installations in the cluster at some point:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kubectl delete -f samples/addons&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl -n istio-demo-app delete -f samples/bookinfo/networking/bookinfo-gateway.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl -n istio-demo-app delete -f samples/bookinfo/platform/kube/bookinfo.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl label namespace istio-demo-app istio-injection-&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;istioctl uninstall --purge&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl delete namespace istio-system&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl delete namespace istio-demo-app&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You may need to delete the k3d/k3s cluster:&lt;br&gt;
&lt;code&gt;k3d cluster delete svc-mesh-poc&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You may want a richer graphical user interface to see your cluster in full swing. To do that, deploy the Kubernetes Dashboard by running the following commands:&lt;/p&gt;

&lt;p&gt;Add kubernetes-dashboard repository&lt;br&gt;
&lt;code&gt;helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart&lt;br&gt;
&lt;code&gt;helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Verify that Dashboard is deployed and running.&lt;br&gt;
&lt;code&gt;kubectl get pod -n kubernetes-dashboard&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Create a ServiceAccount and ClusterRoleBinding to provide admin access to the newly created cluster.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kubectl create serviceaccount -n kubernetes-dashboard admin-user&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl create clusterrolebinding -n kubernetes-dashboard admin-user --clusterrole cluster-admin --serviceaccount=kubernetes-dashboard:admin-user&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To log in to your Dashboard, you need a Bearer Token. Use the following command to store the token in a variable.&lt;br&gt;
&lt;code&gt;token=$(kubectl -n kubernetes-dashboard create token admin-user)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Display the token using the echo command and copy it to use for logging into your Dashboard.&lt;br&gt;
&lt;code&gt;echo $token&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can access your Dashboard using the kubectl command-line tool and port forwarding by running the following commands and pasting the Bearer token on the text box:&lt;br&gt;
&lt;code&gt;kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Browse to &lt;a href="https://localhost:8443"&gt;https://localhost:8443&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clean Up Admin Service Account and Cluster Role Binding for Kubernetes Dashboard user.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;kubectl -n kubernetes-dashboard delete serviceaccount admin-user&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now that you have a lightweight multi-node cluster running locally with Istio configured, you can try out all of these Istio features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Request Routing&lt;/li&gt;
&lt;li&gt;Fault Injection&lt;/li&gt;
&lt;li&gt;Traffic Shifting&lt;/li&gt;
&lt;li&gt;Querying metrics&lt;/li&gt;
&lt;li&gt;Visualizing metrics&lt;/li&gt;
&lt;li&gt;Accessing external services&lt;/li&gt;
&lt;li&gt;Visualizing your mesh.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Stay tuned for more!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Build a multi node Kubernetes Cluster on Google Cloud VMs using Kubeadm, from the ground up!</title>
      <dc:creator>Codewired</dc:creator>
      <pubDate>Sat, 25 May 2024 19:00:42 +0000</pubDate>
      <link>https://dev.to/codewired/running-a-5-node-kubernetes-on-google-cloud-vms-using-kubeadm-3811</link>
      <guid>https://dev.to/codewired/running-a-5-node-kubernetes-on-google-cloud-vms-using-kubeadm-3811</guid>
      <description>&lt;p&gt;Bringing in and updating an initial post written two years ago on my other profile here. These are the defined and authentic steps you can use to run a Three, Five, Seven...etc node cluster on GCP using Linux VMs.&lt;/p&gt;

&lt;p&gt;Choose your Linux flavor; Ubuntu 22.04 (Jammy) is recommended. (This will work on local dev boxes and on cloud compute VMs running Jammy.)&lt;/p&gt;

&lt;p&gt;In the Google Cloud web console, pick a project that has billing enabled, set up the Google Cloud CLI, and create a VPC, subnet, and firewall rules to allow traffic.&lt;/p&gt;

&lt;p&gt;(Replace resource names in square brackets, without the brackets):&lt;br&gt;
 Create a Virtual Private Cloud network&lt;br&gt;
 &lt;code&gt;gcloud compute networks create [vpc name] --subnet-mode custom&lt;/code&gt;&lt;br&gt;
 Create a subnet with a specific range (10.0.96.0/24)&lt;br&gt;
 &lt;code&gt;gcloud compute networks subnets create [subnet name] --network [vpc name] --range 10.0.96.0/24&lt;/code&gt;&lt;br&gt;
 Create a firewall rule that allows internal communication across all protocols (10.0.96.0/24, 10.0.92.0/22)&lt;br&gt;
 &lt;code&gt;gcloud compute firewall-rules create [internal network name] --allow tcp,udp,icmp --network [vpc name] --source-ranges 10.0.96.0/24,10.0.92.0/22&lt;/code&gt;&lt;br&gt;
 Create a firewall rule that allows external TCP (including SSH and HTTPS) and ICMP:&lt;br&gt;
 &lt;code&gt;gcloud compute firewall-rules create [external network name] --allow tcp,icmp --network [vpc name] --source-ranges 0.0.0.0/0&lt;/code&gt;&lt;br&gt;
 List the firewall rules in the VPC network:&lt;br&gt;
 &lt;code&gt;gcloud compute firewall-rules list --filter="network:[vpc name]"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Provision nodes:&lt;br&gt;
Create 3, 5, or 7 compute instances to host the Kubernetes proxy, control plane, and worker nodes respectively (a proxy is recommended if you are creating 5 nodes or more):&lt;/p&gt;

&lt;p&gt;Proxy node (optional):&lt;br&gt;
&lt;code&gt;gcloud compute instances create proxynode --async --boot-disk-size 50GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.10 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubeadm-node,proxy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Master Control Plane Node:&lt;br&gt;
&lt;code&gt;gcloud compute instances create masternode --async --boot-disk-size 200GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.11 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubeadm-node,controller&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Worker Nodes: (10.0.96.21+ for the other worker nodes)&lt;br&gt;
&lt;code&gt;gcloud compute instances create workernode1 --async --boot-disk-size 100GB --can-ip-forward --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud --machine-type n2-standard-2 --private-network-ip 10.0.96.20 --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring --subnet [subnet name] --tags kubeadm-node,worker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Print the Internal IP address and Pod CIDR range for each worker node&lt;br&gt;
&lt;code&gt;gcloud compute instances describe workernode1 --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;List the compute instances in your default compute zone:&lt;br&gt;
&lt;code&gt;gcloud compute instances list --filter="tags.items=kubeadm-node"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Test SSH Into Google Cloud VM Instance (You will need to SSH into all the VMs/Nodes to install software)&lt;br&gt;
&lt;code&gt;gcloud compute ssh [compute instance name]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;RUN THESE INSTALLATIONS ON ALL NODES&lt;br&gt;
a. &lt;code&gt;sudo -i&lt;/code&gt;&lt;br&gt;
b. &lt;code&gt;apt-get update &amp;amp;&amp;amp; apt-get upgrade -y&lt;/code&gt;&lt;br&gt;
c. &lt;code&gt;apt install curl apt-transport-https vim git wget gnupg2 software-properties-common apt-transport-https ca-certificates uidmap lsb-release -y&lt;/code&gt;&lt;br&gt;
d. &lt;code&gt;swapoff -a&lt;/code&gt;&lt;/p&gt;
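&lt;p&gt;Note that &lt;code&gt;swapoff -a&lt;/code&gt; only lasts until the next reboot; to disable swap permanently, comment out the swap entry in &lt;code&gt;/etc/fstab&lt;/code&gt;. A minimal sketch on a sample file (the swap line shown is an assumption; edit the real &lt;code&gt;/etc/fstab&lt;/code&gt; with care):&lt;/p&gt;

```shell
# Sample fstab (assumption: a typical Ubuntu swap entry; the real file is /etc/fstab)
cat > /tmp/fstab-sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
# Comment out any line whose filesystem type is swap
sed -i '/ swap /s/^/#/' /tmp/fstab-sample
grep swap /tmp/fstab-sample
```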

&lt;p&gt;INSTALL AND CONFIGURE CONTAINER RUNTIME PREREQUISITES ON ALL NODES&lt;br&gt;
Verify that the br_netfilter module is loaded by running &lt;br&gt;
&lt;code&gt;lsmod | grep br_netfilter&lt;/code&gt;&lt;br&gt;
In order for a Linux node's iptables to correctly view bridged traffic, verify that &lt;code&gt;net.bridge.bridge-nf-call-iptables&lt;/code&gt; is set to 1 in your sysctl by running the following commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
 overlay&lt;br&gt;
 br_netfilter&lt;br&gt;
 EOF&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo modprobe overlay&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo modprobe br_netfilter&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Set the sysctl params required by Kubernetes networking; these persist across reboots&lt;br&gt;
&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
 net.bridge.bridge-nf-call-iptables  = 1&lt;br&gt;
 net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
 net.ipv4.ip_forward                 = 1&lt;br&gt;
 EOF&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Apply sysctl params without reboot&lt;br&gt;
&lt;code&gt;sudo sysctl --system&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;INSTALL CONTAINER RUNTIME ON ALL NODES&lt;br&gt;
a. &lt;code&gt;mkdir -p /etc/apt/keyrings&lt;/code&gt;&lt;br&gt;
b. &lt;code&gt;curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg&lt;/code&gt;&lt;br&gt;
c. &lt;code&gt;echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null&lt;/code&gt;&lt;br&gt;
d. &lt;code&gt;apt-get update&lt;/code&gt;&lt;br&gt;
e. &lt;code&gt;apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;CONFIGURE CGROUP DRIVER FOR CONTAINERD ON ALL NODES (We will use the systemd cgroup driver that ships with Ubuntu 22.04)&lt;br&gt;
a. &lt;code&gt;stat -fc %T /sys/fs/cgroup/&lt;/code&gt; (Check that you are using the supported cgroup v2)&lt;br&gt;
b. &lt;code&gt;sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;/code&gt; (Make sure that config.toml is present with defaults)&lt;br&gt;
c. In config.toml, set &lt;code&gt;SystemdCgroup = true&lt;/code&gt; under the runc options so containerd uses the systemd cgroup driver:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;d. &lt;code&gt;sudo systemctl restart containerd&lt;/code&gt; (Restart containerd)&lt;/p&gt;
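&lt;p&gt;The cgroup-driver edit in step c can also be scripted with &lt;code&gt;sed&lt;/code&gt; instead of hand-editing; a minimal sketch, demonstrated on a sample file (on a real node the target is &lt;code&gt;/etc/containerd/config.toml&lt;/code&gt; and the edit needs sudo):&lt;/p&gt;

```shell
# Sample of the relevant default section (on a real node: /etc/containerd/config.toml)
cat > /tmp/containerd-config-sample.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
# Flip the flag so containerd uses the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config-sample.toml
grep SystemdCgroup /tmp/containerd-config-sample.toml
```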

&lt;p&gt;INSTALL KUBEADM, KUBELET &amp;amp; KUBECTL ON ALL NODES&lt;br&gt;
Download the public signing key for the Kubernetes package repositories (the legacy packages.cloud.google.com / apt.kubernetes.io repositories have been deprecated and frozen; use the community-owned pkgs.k8s.io, matching the minor version you want, here v1.30):&lt;br&gt;
a. &lt;code&gt;curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the Kubernetes apt repository:&lt;br&gt;
b. &lt;code&gt;echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;/code&gt;&lt;br&gt;
c. &lt;code&gt;sudo apt-get update&lt;/code&gt;&lt;br&gt;
   &lt;code&gt;sudo apt-get install -y kubelet kubeadm kubectl&lt;/code&gt;&lt;br&gt;
   &lt;code&gt;sudo apt-mark hold kubelet kubeadm kubectl&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;CONFIGURE CGROUP DRIVER FOR MASTER NODE (Add this section to kubeadm-config.yaml if you are using an OS with systemd support, and change kubernetesVersion to the actual version installed by kubeadm)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: 1.30.0
    controlPlaneEndpoint: "masternode:6443"
    networking:
      podSubnet: 10.200.0.0/16

    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;CONFIGURE HOSTNAME FOR MASTER NODE&lt;br&gt;
Open the hosts file: &lt;code&gt;sudo nano /etc/hosts&lt;/code&gt;&lt;br&gt;
Add the master node's static IP and preferred hostname: &lt;code&gt;10.0.96.11 masternode&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;INITIALIZE KUBEADM ON MASTER NODE (Remember to save the token hash)&lt;br&gt;
&lt;code&gt;kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.out&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Log out from root if you are still root&lt;/p&gt;

&lt;p&gt;To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:&lt;br&gt;
&lt;code&gt;mkdir -p $HOME/.kube&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;INSTALL A POD NETWORKING INTERFACE ON MASTER NODE&lt;br&gt;
Download and Install the Tigera Calico operator and custom resource definitions.&lt;br&gt;
&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Download and install Calico by creating the necessary custom resource. &lt;br&gt;
Before applying, change the default IP pool CIDR (&lt;code&gt;spec.calicoNetwork.ipPools[0].cidr&lt;/code&gt; in custom-resources.yaml) to the pod CIDR (10.200.0.0/16); download the file first if you need to edit it, then apply your local copy&lt;br&gt;
&lt;code&gt;kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;JOIN WORKER NODES TO THE CONTROL PLANE (MASTER NODE)&lt;br&gt;
Run the command below on each worker node, using the token and certificate hash from the kubeadm init output on the master node (the values shown are examples):&lt;br&gt;
&lt;code&gt;kubeadm join masternode:6443 --token n0smf1.ixdasx8uy109cuf8 --discovery-token-ca-cert-hash sha256:f6bce2764268ece50e6f9ecb7b933258eac95b525217b8debb647ef41d49a898&lt;/code&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
