<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MohammedBanabila</title>
    <description>The latest articles on DEV Community by MohammedBanabila (@mohammed_banabila_3bc9e49).</description>
    <link>https://dev.to/mohammed_banabila_3bc9e49</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2582850%2Fe2e6c979-8735-4fa3-94aa-314b3ac4044c.png</url>
      <title>DEV Community: MohammedBanabila</title>
      <link>https://dev.to/mohammed_banabila_3bc9e49</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mohammed_banabila_3bc9e49"/>
    <language>en</language>
    <item>
      <title>Stacked-Topology</title>
      <dc:creator>MohammedBanabila</dc:creator>
      <pubDate>Fri, 18 Apr 2025 01:53:24 +0000</pubDate>
      <link>https://dev.to/mohammed_banabila_3bc9e49/stacked-topology-34pf</link>
      <guid>https://dev.to/mohammed_banabila_3bc9e49/stacked-topology-34pf</guid>
      <description>&lt;p&gt;Setup Stacked Topology High availability clusters using kubeadm&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a1e11s4tacz037yrs70.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a1e11s4tacz037yrs70.jpg" alt="Image description" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps to build the infrastructure for the high-availability cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy the AWS infrastructure for a self-hosted Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Create a VPC (77.132.0.0/16) and attach an internet gateway.&lt;/li&gt;
&lt;li&gt;Create 3 public subnets, one per AZ, with CIDR blocks 77.132.100.0/24, 77.132.101.0/24, and 77.132.102.0/24.&lt;/li&gt;
&lt;li&gt;Create 3 private subnets, one per AZ, with CIDR blocks 77.132.103.0/24, 77.132.104.0/24, and 77.132.105.0/24.&lt;/li&gt;
&lt;li&gt;Associate a route table with the public subnets.&lt;/li&gt;
&lt;li&gt;Associate each private subnet with a route table that routes outbound traffic through a NAT gateway.&lt;/li&gt;
&lt;li&gt;Deploy a network load balancer with targets registered in the public subnets.&lt;/li&gt;
&lt;li&gt;Deploy 3 security groups: one each for the load balancer, the control-plane nodes, and the worker nodes.&lt;/li&gt;
&lt;li&gt;Use Session Manager to access all nodes by attaching an IAM role with the required permissions.&lt;/li&gt;
&lt;li&gt;Deploy EC2 Auto Scaling groups for the control-plane and worker nodes.&lt;/li&gt;
&lt;/ol&gt;
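&lt;p&gt;The subnet plan above can be sanity-checked before deploying, e.g. with Python's ipaddress module (a minimal sketch; the VPC and subnet CIDRs are the values listed above):&lt;/p&gt;

```python
import ipaddress

# VPC and the six /24 subnets from the plan above.
vpc = ipaddress.ip_network("77.132.0.0/16")
public_subnets = [ipaddress.ip_network(c) for c in
                  ("77.132.100.0/24", "77.132.101.0/24", "77.132.102.0/24")]
private_subnets = [ipaddress.ip_network(c) for c in
                   ("77.132.103.0/24", "77.132.104.0/24", "77.132.105.0/24")]
subnets = public_subnets + private_subnets

# Every subnet must fall inside the VPC block...
for s in subnets:
    assert s.subnet_of(vpc), f"{s} is outside the VPC"

# ...and no two subnets may overlap.
for i, a in enumerate(subnets):
    for b in subnets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

print("subnet plan is consistent")
```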

&lt;p&gt;To deploy the high-availability cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Use 3 control-plane nodes; this gives good performance and lets leader election decide which node acts as master.&lt;br&gt;
Quorum follows the formula Quorum = N/2 + 1 (integer division), where N is the number of control-plane nodes.&lt;br&gt;
For example, 3/2 + 1 = 2, meaning the cluster is considered healthy as long as at least 2 control-plane nodes are ready for leader election and preemption.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To set up the high-availability control plane, on each node:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disable swap.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Update kernel parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the container runtime, containerd.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install runc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the CNI plugins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install kubeadm, kubelet, and kubectl.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally, check the versions of kubeadm, kubelet, and kubectl.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Initialize the first control-plane node with kubeadm:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
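&lt;p&gt;The quorum formula above is just integer division, and can be expressed as a one-line helper (a minimal sketch):&lt;/p&gt;

```python
def quorum(n: int) -> int:
    """Minimum number of healthy control-plane/etcd members: floor(n/2) + 1."""
    return n // 2 + 1

# With 3 control-plane nodes, 2 must stay healthy (tolerates one failure):
assert quorum(3) == 2
# The same rule scales to larger stacked clusters:
assert quorum(5) == 3
```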

&lt;p&gt;sudo kubeadm init --control-plane-endpoint "Load-Balancer-DNS:6443" --pod-network-cidr 192.168.0.0/16 --upload-certs (optionally add --node-name master1)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install the Calico CNI plugin, v3.29.3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To join the other control-plane nodes, use:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;sudo kubeadm join mylb-b12f4818ac02ac54.elb.us-east-1.amazonaws.com:6443 --token wme2w0.ft4sn3x0tc2035bu \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:a4e516c840d20093ef2262c57b3afb171be07cf0ec0514de8198b91f46418b36 \&lt;br&gt;
--control-plane --certificate-key 403d1068384eebe16c1a50fc25e81b24ac7ccef26423244af9e1966e80531f9b  --node-name&lt;/p&gt;

&lt;p&gt;Note: assign each node a name with --node-name.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up the worker nodes. On each node:&lt;/li&gt;
&lt;li&gt;Disable swap.&lt;/li&gt;
&lt;li&gt;Update kernel parameters.&lt;/li&gt;
&lt;li&gt;Install the container runtime, containerd.&lt;/li&gt;
&lt;li&gt;Install runc.&lt;/li&gt;
&lt;li&gt;Install the CNI plugins.&lt;/li&gt;
&lt;li&gt;Install kubeadm, kubelet, and kubectl.&lt;/li&gt;
&lt;li&gt;Optionally, check the versions of kubeadm, kubelet, and kubectl.&lt;/li&gt;
&lt;li&gt;Copy the kubeconfig from a control-plane node to the worker node, then join:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;sudo kubeadm join mylb-b12f4818ac02ac54.elb.us-east-1.amazonaws.com:6443 --token wme2w0.ft4sn3x0tc2035bu \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:a4e516c840d20093ef2262c57b3afb171be07cf0ec0514de8198b91f46418b36  --node-name&lt;/p&gt;

&lt;p&gt;If you lose the join command printed when the kubeadm bootstrap finishes, regenerate it with:&lt;/p&gt;

&lt;p&gt;sudo kubeadm token create --print-join-command&lt;/p&gt;
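&lt;p&gt;A join command has three fixed-format pieces: the load-balancer endpoint, a bootstrap token of the form [a-z0-9]{6}.[a-z0-9]{16}, and a "sha256:" discovery hash of 64 hex characters. A small, hypothetical validation helper (the function name and its structure are illustrative, not part of kubeadm):&lt;/p&gt;

```python
import re

# Documented bootstrap-token format and CA-cert discovery-hash format.
TOKEN_RE = re.compile(r"^[a-z0-9]{6}\.[a-z0-9]{16}$")
HASH_RE = re.compile(r"^sha256:[0-9a-f]{64}$")

def check_join_args(endpoint: str, token: str, ca_cert_hash: str) -> bool:
    """Sanity-check the pieces of a `kubeadm join` command before running it."""
    host, _, port = endpoint.rpartition(":")
    return (bool(host) and port.isdigit()
            and bool(TOKEN_RE.match(token))
            and bool(HASH_RE.match(ca_cert_hash)))

# Values from the join command shown above:
assert check_join_args(
    "mylb-b12f4818ac02ac54.elb.us-east-1.amazonaws.com:6443",
    "wme2w0.ft4sn3x0tc2035bu",
    "sha256:a4e516c840d20093ef2262c57b3afb171be07cf0ec0514de8198b91f46418b36",
)
```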

&lt;p&gt;master.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;` #!/bin/bash 


`#disable swap 
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params 
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf 
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# install container runtime ,containerd 

curl -LO https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now


# install runc 

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc 

# install cni 

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version 
kubelet --version
kubectl version --client

# initialize kubeadm for the HA cluster

sudo kubeadm init --control-plane-endpoint mylb-030c3a1c6c4fae6a.elb.us-east-1.amazonaws.com:6443 --pod-network-cidr 192.168.0.0/16 --upload-certs --node-name master1

# configure crictl to work with containerd

sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
sudo chown $(id -u):$(id -g) /var/run/containerd


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# This command changes the ownership of the Kubernetes admin configuration file
# - $(id -u) gets the current user's ID number
# - $(id -g) gets the current user's group ID number
# - /etc/kubernetes/admin.conf is the Kubernetes admin config file containing cluster credentials
# - chown changes the owner and group of the file to match the current user
# This allows the current user to access the Kubernetes cluster without needing sudo
sudo chown $(id -u):$(id -g) /etc/kubernetes/admin.conf

export KUBECONFIG=/etc/kubernetes/admin.conf


# install calico cni plugin 3.29.3 

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml

curl -o custom-resources.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/custom-resources.yaml

kubectl create -f custom-resources.yaml


# alias kubectl utility 

echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc


# master-ha1.sh


#!/bin/bash 
# disable swap 
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params 
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf 
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# install container runtime ,containerd 

curl -LO https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now


# install runc 

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc 

# install cni 

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version 
kubelet --version
kubectl version --client

# join the HA cluster as an additional control-plane node

sudo kubeadm join mylb-030c3a1c6c4fae6a.elb.us-east-1.amazonaws.com:6443 --token 236pfl.0f9p8r06gcmfmj7p \
--discovery-token-ca-cert-hash sha256:992eb054da8690fbfc3ccbf2b0b30e5ce323c6b3444b3f11724dafa072e5f45a \
--control-plane --certificate-key 3fb4811956c257fd5407fb6aafd099077eb66a2da357683447b1b186d4dd890c --node-name master2

# sudo kubeadm join mylb-b12f4818ac02ac54.elb.us-east-1.amazonaws.com:6443 --token wme2w0.ft4sn3x0tc2035bu \
# --discovery-token-ca-cert-hash sha256:a4e516c840d20093ef2262c57b3afb171be07cf0ec0514de8198b91f46418b36 \
# --control-plane --certificate-key 403d1068384eebe16c1a50fc25e81b24ac7ccef26423244af9e1966e80531f9b --node-name master2

# configure crictl to work with containerd

sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
sudo chown $(id -u):$(id -g) /var/run/containerd


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/admin.conf

export KUBECONFIG=/etc/kubernetes/admin.conf

# Note: when joining another control-plane node, there is no need to install
# the Calico CNI (v3.29.3) again; it is already installed cluster-wide

# alias kubectl utility 

echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;master-ha2.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# install container runtime, containerd
curl -LO https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni
curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# join the HA cluster as an additional control-plane node
sudo kubeadm join mylb-030c3a1c6c4fae6a.elb.us-east-1.amazonaws.com:6443 --token 236pfl.0f9p8r06gcmfmj7p \
--discovery-token-ca-cert-hash sha256:992eb054da8690fbfc3ccbf2b0b30e5ce323c6b3444b3f11724dafa072e5f45a \
--control-plane --certificate-key 3fb4811956c257fd5407fb6aafd099077eb66a2da357683447b1b186d4dd890c --node-name master3

# sudo kubeadm join mylb-b12f4818ac02ac54.elb.us-east-1.amazonaws.com:6443 --token wme2w0.ft4sn3x0tc2035bu \
# --discovery-token-ca-cert-hash sha256:a4e516c840d20093ef2262c57b3afb171be07cf0ec0514de8198b91f46418b36 \
# --control-plane --certificate-key 403d1068384eebe16c1a50fc25e81b24ac7ccef26423244af9e1966e80531f9b --node-name master3

# configure crictl to work with containerd
sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
sudo chown $(id -u):$(id -g) /var/run/containerd

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/admin.conf

export KUBECONFIG=/etc/kubernetes/admin.conf

# alias kubectl utility
echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For the worker nodes:&lt;/p&gt;

&lt;p&gt;worker1.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# install container runtime, containerd
curl -LO https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni
curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# join the HA cluster as a worker node
# sudo kubeadm token create --print-join-command
# sudo kubeadm join mylb-7ede46b278348924.elb.us-east-1.amazonaws.com:6443 --token s1pzb2.e49unzo701usj6er \
# --discovery-token-ca-cert-hash sha256:3d5d6836796b31f8d638feef97711973cbb62388cd549fb7676016e4d11d3689 --node-name worker1

sudo kubeadm join mylb-030c3a1c6c4fae6a.elb.us-east-1.amazonaws.com:6443 --token 236pfl.0f9p8r06gcmfmj7p \
--discovery-token-ca-cert-hash sha256:992eb054da8690fbfc3ccbf2b0b30e5ce323c6b3444b3f11724dafa072e5f45a --node-name worker1

# configure crictl to work with containerd
sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
sudo chown $(id -u):$(id -g) /run/containerd/containerd.sock

# copy kubeconfig from a control-plane node to the worker node first, then:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/admin.conf

export KUBECONFIG=/etc/kubernetes/admin.conf

# alias kubectl utility
echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;worker2.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

# install container runtime, containerd
curl -LO https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni
curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# join the HA cluster as a worker node
# sudo kubeadm token create --print-join-command
# sudo kubeadm join mylb-7ede46b278348924.elb.us-east-1.amazonaws.com:6443 --token s1pzb2.e49unzo701usj6er \
# --discovery-token-ca-cert-hash sha256:3d5d6836796b31f8d638feef97711973cbb62388cd549fb7676016e4d11d3689 --node-name worker2

sudo kubeadm join mylb-7966c35b34cfe6a2.elb.us-east-1.amazonaws.com:6443 --token wedeoe.3dktwv0s0a1tfudi \
--discovery-token-ca-cert-hash sha256:1104fe6342e316036524c87a994e47035f091f6869a4f84292cecc8dd4f279bd --node-name worker2

# configure crictl to work with containerd
sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
sudo chown $(id -u):$(id -g) /run/containerd/containerd.sock

# copy kubeconfig from a control-plane node to the worker node first, then:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo chown $(id -u):$(id -g) /etc/kubernetes/admin.conf

export KUBECONFIG=/etc/kubernetes/admin.conf

# alias kubectl utility
echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;worker3.sh  &lt;/p&gt;

&lt;p&gt;#!/bin/bash &lt;/p&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^(.*)$/#\1/g' /etc/fstab&lt;/p&gt;

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf &lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;br&gt;
sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;br&gt;
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward = 1&lt;br&gt;
EOF&lt;br&gt;
sudo sysctl --system&lt;/p&gt;

&lt;h1&gt;
  
  
  install container runtime ,containerd
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-2.0.4-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.32/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.32/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;

&lt;h1&gt;
  
  
  initial kubeadm for Ha cluster
&lt;/h1&gt;

&lt;p&gt;sudo kubeadm token create --print-join-command&lt;/p&gt;

&lt;p&gt;sudo kubeadm join mylb-7ede46b278348924.elb.us-east-1.amazonaws.com:6443 --token s1pzb2.e49unzo701usj6er \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:3d5d6836796b31f8d638feef97711973cbb62388cd549fb7676016e4d11d3689 --node-name worker2&lt;/p&gt;

&lt;p&gt;sudo kubeadm join mylb-7966c35b34cfe6a2.elb.us-east-1.amazonaws.com:6443 --token wedeoe.3dktwv0s0a1tfudi \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:1104fe6342e316036524c87a994e47035f091f6869a4f84292cecc8dd4f279bd --node-name worker3&lt;/p&gt;

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chown $(id -u):$(id -g) /run/containerd/containerd.sock&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) /etc/kubernetes/admin.conf&lt;/p&gt;

&lt;p&gt;export KUBECONFIG=/etc/kubernetes/admin.conf&lt;/p&gt;

&lt;h1&gt;
  
  
  alias kubectl utility
&lt;/h1&gt;

&lt;p&gt;echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc &amp;amp;&amp;amp; source ~/.bashrc&lt;/p&gt;

&lt;p&gt;Note: remember to copy the kubeconfig from the control plane to each worker node's ~/.kube/config&lt;/p&gt;

&lt;p&gt;to deploy the infrastructure, I used pulumi python, with pulumi esc storing the configuration key/value pairs&lt;/p&gt;
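&lt;p&gt;the program below reads these keys from the stack config; an example stack file follows — only the key names come from the code, while the project name "k8s-ha" and every value are illustrative placeholders:&lt;/p&gt;

```yaml
# Pulumi.dev.yaml -- example only; project name "k8s-ha" and all values are placeholders
config:
  aws:region: us-east-1
  k8s-ha:block1: 10.0.0.0/16          # vpc cidr
  k8s-ha:cidr1: 10.0.1.0/24           # public subnets
  k8s-ha:cidr2: 10.0.2.0/24
  k8s-ha:cidr3: 10.0.3.0/24
  k8s-ha:cidr4: 10.0.4.0/24           # private subnets
  k8s-ha:cidr5: 10.0.5.0/24
  k8s-ha:cidr6: 10.0.6.0/24
  k8s-ha:any_ipv4_traffic: 0.0.0.0/0
  k8s-ha:myips: 203.0.113.10/32       # your own /32
  k8s-ha:ami: ami-0abcdef1234567890   # replace with an ubuntu ami in us-east-1
  k8s-ha:instance-type: t3.medium     # kubeadm needs at least 2 vcpu / 2 gb
```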

&lt;p&gt;"""An AWS Python Pulumi program"""&lt;br&gt;
import pulumi , pulumi_aws as aws , json &lt;br&gt;
import pulumi&lt;/p&gt;

&lt;p&gt;cfg1=pulumi.Config()&lt;/p&gt;

&lt;p&gt;vpc1=aws.ec2.Vpc(&lt;br&gt;
"vpc1",&lt;br&gt;
aws.ec2.VpcArgs(&lt;br&gt;
cidr_block=cfg1.require(key="block1"),&lt;br&gt;
tags={&lt;br&gt;
"Name" : "vpc1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;intgw1=aws.ec2.InternetGateway(&lt;br&gt;
"intgw1",&lt;br&gt;
aws.ec2.InternetGatewayArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "intgw1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;zones=["us-east-1a", "us-east-1b" , "us-east-1c"]&lt;br&gt;
cidr1=cfg1.require(key="cidr1")&lt;br&gt;
cidr2=cfg1.require(key="cidr2")&lt;br&gt;
cidr3=cfg1.require(key="cidr3")&lt;br&gt;
cidr4=cfg1.require(key="cidr4")&lt;br&gt;
cidr5=cfg1.require(key="cidr5")&lt;br&gt;
cidr6=cfg1.require(key="cidr6")&lt;br&gt;
pubsubnets=[cidr1,cidr2,cidr3]&lt;br&gt;
privsubnets=[cidr4,cidr5,cidr6]&lt;/p&gt;

&lt;p&gt;pubnames=["pub1" , "pub2" , "pub3" ]&lt;br&gt;
privnames=[ "priv1" , "priv2" , "priv3" ]&lt;/p&gt;

&lt;p&gt;for allpub in range(len(pubnames)):&lt;br&gt;
pubnames[allpub]=aws.ec2.Subnet(&lt;br&gt;
pubnames[allpub],&lt;br&gt;
aws.ec2.SubnetArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
cidr_block=pubsubnets[allpub],&lt;br&gt;
availability_zone=zones[allpub],&lt;br&gt;
map_public_ip_on_launch=True,&lt;br&gt;
tags={&lt;br&gt;
"Name" : pubnames[allpub]&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;table1=aws.ec2.RouteTable(&lt;br&gt;
"table1",&lt;br&gt;
aws.ec2.RouteTableArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
routes=[&lt;br&gt;
aws.ec2.RouteTableRouteArgs(&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
gateway_id=intgw1.id&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;br&gt;
"Name" : "table1"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pub_associate1=aws.ec2.RouteTableAssociation(&lt;br&gt;
"pub_associate1",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=pubnames[0].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pub_associate2=aws.ec2.RouteTableAssociation(&lt;br&gt;
"pub_associate2",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=pubnames[1].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pub_associate3=aws.ec2.RouteTableAssociation(&lt;br&gt;
"pub_associate3",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=pubnames[2].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;for allpriv in range(len(privnames)):&lt;br&gt;
privnames[allpriv]=aws.ec2.Subnet(&lt;br&gt;
privnames[allpriv],&lt;br&gt;
aws.ec2.SubnetArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
cidr_block=privsubnets[allpriv],&lt;br&gt;
availability_zone=zones[allpriv],&lt;br&gt;
tags={&lt;br&gt;
"Name" : privnames[allpriv]&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eips=[ "eip1" , "eip2" , "eip3" ]&lt;/p&gt;

&lt;p&gt;for alleips in range(len(eips)):&lt;br&gt;
eips[alleips]=aws.ec2.Eip(&lt;br&gt;
eips[alleips],&lt;br&gt;
aws.ec2.EipArgs(&lt;br&gt;
domain="vpc",&lt;br&gt;
tags={&lt;br&gt;
"Name" : eips[alleips]&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nats=[ "natgw1" , "natgw2" , "natgw3" ]&lt;/p&gt;

&lt;p&gt;mynats= {&lt;/p&gt;

&lt;p&gt;"nat1" : aws.ec2.NatGateway(&lt;br&gt;
"nat1",&lt;br&gt;
connectivity_type="public",&lt;br&gt;
subnet_id=pubnames[0].id,&lt;br&gt;
allocation_id=eips[0].allocation_id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "nat1"&lt;br&gt;
}&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;"nat2" : aws.ec2.NatGateway(&lt;br&gt;
"nat2",&lt;br&gt;
connectivity_type="public",&lt;br&gt;
subnet_id=pubnames[1].id,&lt;br&gt;
allocation_id=eips[1].allocation_id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "nat2"&lt;br&gt;
}&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;"nat3" : aws.ec2.NatGateway(&lt;br&gt;
"nat3",&lt;br&gt;
connectivity_type="public",&lt;br&gt;
subnet_id=pubnames[2].id,&lt;br&gt;
allocation_id=eips[2].allocation_id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "nat3"&lt;br&gt;
}&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;priv2table2=aws.ec2.RouteTable(&lt;br&gt;
"priv2table2",&lt;br&gt;
aws.ec2.RouteTableArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
routes=[&lt;br&gt;
aws.ec2.RouteTableRouteArgs(&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
nat_gateway_id=mynats["nat1"].id&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;/p&gt;

&lt;p&gt;"Name" : "priv2table2"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;priv3table3=aws.ec2.RouteTable(&lt;br&gt;
"priv3table3",&lt;br&gt;
aws.ec2.RouteTableArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
routes=[&lt;br&gt;
aws.ec2.RouteTableRouteArgs(&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
nat_gateway_id=mynats["nat2"].id&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;/p&gt;

&lt;p&gt;"Name" : "priv3table3"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;priv4table4=aws.ec2.RouteTable(&lt;br&gt;
"priv4table4",&lt;br&gt;
aws.ec2.RouteTableArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
routes=[&lt;br&gt;
aws.ec2.RouteTableRouteArgs(&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
nat_gateway_id=mynats["nat3"].id&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;/p&gt;

&lt;p&gt;"Name" : "priv4table4"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;privassociate1=aws.ec2.RouteTableAssociation(&lt;br&gt;
"privassociate1",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=privnames[0].id,&lt;br&gt;
route_table_id=priv2table2.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;privassociate2=aws.ec2.RouteTableAssociation(&lt;br&gt;
"privassociate2",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=privnames[1].id,&lt;br&gt;
route_table_id=priv3table3.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;privassociate3=aws.ec2.RouteTableAssociation(&lt;br&gt;
"privassociate3",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=privnames[2].id,&lt;br&gt;
route_table_id=priv4table4.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;
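&lt;p&gt;pub_associate1–3 and privassociate1–3 above are near-identical copy-paste blocks; the same associations can be generated in one loop. a plain-python sketch of the pattern — dicts stand in for aws.ec2.RouteTableAssociation so it runs anywhere:&lt;/p&gt;

```python
# Stand-ins for the Pulumi resource ids, to show only the looping pattern.
priv_subnet_ids = ["priv1-id", "priv2-id", "priv3-id"]    # privnames[i].id
priv_table_ids = ["table2-id", "table3-id", "table4-id"]  # one route table per AZ

# One association per (subnet, route table) pair, built in a single loop.
associations = [
    {"name": f"privassociate{i + 1}", "subnet_id": s, "route_table_id": t}
    for i, (s, t) in enumerate(zip(priv_subnet_ids, priv_table_ids))
]

for a in associations:
    print(a["name"], "->", a["route_table_id"])
```

&lt;p&gt;in the real program, the dict constructor would be replaced by aws.ec2.RouteTableAssociation with the same arguments.&lt;/p&gt;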

&lt;p&gt;nacl_ingress_trafic=[&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="myips"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=100&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="deny",&lt;br&gt;
rule_no=101&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=80,&lt;br&gt;
to_port=80,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=200&lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=443,&lt;br&gt;
to_port=443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=300&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=1024,&lt;br&gt;
to_port=65535,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=400&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=500&lt;br&gt;
)&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;nacl_egress_trafic=[&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="myips"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=100&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="deny",&lt;br&gt;
rule_no=101&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=80,&lt;br&gt;
to_port=80,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=200&lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=443,&lt;br&gt;
to_port=443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=300&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=1024,&lt;br&gt;
to_port=65535,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=400&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=500&lt;br&gt;
)&lt;br&gt;
]&lt;/p&gt;
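&lt;p&gt;the ingress and egress NACL lists repeat the same seven fields per rule; a small builder can derive both from one compact table. plain dicts stand in for NetworkAclIngressArgs / NetworkAclEgressArgs here, and the two CIDR values are illustrative stand-ins for the config keys:&lt;/p&gt;

```python
ANY = "0.0.0.0/0"          # cfg1.require("any_ipv4_traffic")
MY_IP = "203.0.113.10/32"  # cfg1.require("myips"), illustrative

# (from_port, to_port, protocol, cidr, action, rule_no)
RULES = [
    (22, 22, "tcp", MY_IP, "allow", 100),     # SSH from my IP only
    (22, 22, "tcp", ANY, "deny", 101),        # SSH from anywhere else: deny
    (80, 80, "tcp", ANY, "allow", 200),
    (443, 443, "tcp", ANY, "allow", 300),
    (1024, 65535, "tcp", ANY, "allow", 400),  # ephemeral return ports
    (0, 0, "-1", ANY, "allow", 500),          # catch-all
]

def nacl_rules(table):
    """Expand the compact table into NACL rule keyword dicts."""
    return [
        dict(from_port=f, to_port=t, protocol=p, cidr_block=c,
             icmp_code=0, icmp_type=0, action=a, rule_no=n)
        for f, t, p, c, a, n in table
    ]

ingress = nacl_rules(RULES)  # same shape for ingress and egress
egress = nacl_rules(RULES)
```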

&lt;p&gt;mynacls=aws.ec2.NetworkAcl(&lt;br&gt;
"mynacls",&lt;br&gt;
aws.ec2.NetworkAclArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
ingress=nacl_ingress_trafic,&lt;br&gt;
egress=nacl_egress_trafic,&lt;br&gt;
tags={&lt;br&gt;
"Name": "mynacls"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacl1=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacl1",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=mynacls.id,&lt;br&gt;
subnet_id=pubnames[0].id&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
nacl2=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacl2",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=mynacls.id,&lt;br&gt;
subnet_id=pubnames[1].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacl3=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacl3",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=mynacls.id,&lt;br&gt;
subnet_id=pubnames[2].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacl4=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacl4",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=mynacls.id,&lt;br&gt;
subnet_id=privnames[0].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacl5=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacl5",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=mynacls.id,&lt;br&gt;
subnet_id=privnames[1].id&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
nacl6=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacl6",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=mynacls.id,&lt;br&gt;
subnet_id=privnames[2].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;master_ingress_rule={&lt;br&gt;
"rule2":aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[ cfg1.require(key="any_ipv4_traffic") ]&lt;br&gt;
),&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;master_egress_rule={&lt;br&gt;
"rule5": aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[ cfg1.require(key="any_ipv4_traffic") ]&lt;br&gt;
)&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;mastersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"mastersecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
name="mastersecurity",&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "mastersecurity"&lt;br&gt;
},&lt;br&gt;
ingress=[ &lt;br&gt;
master_ingress_rule["rule2"],&lt;br&gt;
],&lt;br&gt;
egress=[master_egress_rule["rule5"]]&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;
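&lt;p&gt;the allow-all ingress/egress rules keep this lab simple; a production control plane group would open only the documented component ports. a sketch of that tighter rule set — plain dicts stand in for SecurityGroupIngressArgs, and the VPC CIDR is a placeholder:&lt;/p&gt;

```python
VPC_CIDR = "10.0.0.0/16"  # illustrative; allow traffic only from inside the VPC

# port range -> component, per the kubeadm port requirements
CONTROL_PLANE_PORTS = {
    (6443, 6443): "kube-apiserver",
    (2379, 2380): "etcd",
    (10250, 10250): "kubelet",
    (10257, 10257): "kube-controller-manager",
    (10259, 10259): "kube-scheduler",
}

# One ingress rule per component instead of a single allow-all rule.
ingress = [
    dict(from_port=lo, to_port=hi, protocol="tcp",
         cidr_blocks=[VPC_CIDR], description=component)
    for (lo, hi), component in CONTROL_PLANE_PORTS.items()
]
```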

&lt;p&gt;worker_ingress_rule={&lt;br&gt;
"rule2": aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]&lt;br&gt;
),&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;worker_egress_rule={&lt;br&gt;
"rule5": aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[ cfg1.require(key="any_ipv4_traffic") ]&lt;br&gt;
)&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;workersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"workersecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
name="workersecurity",&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "workersecurity"&lt;br&gt;
},&lt;br&gt;
ingress=[ &lt;br&gt;
worker_ingress_rule["rule2"]&lt;br&gt;
],&lt;br&gt;
egress=[worker_egress_rule["rule5"]]&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;lbsecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"lbsecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
name="lbsecurity",&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "lbsecurity"&lt;br&gt;
},&lt;br&gt;
ingress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=6443,&lt;br&gt;
to_port=6443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] # for kube-apiserver &lt;br&gt;
),&lt;br&gt;
],&lt;br&gt;
egress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]&lt;br&gt;
)&lt;br&gt;
]&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;mylb=aws.lb.LoadBalancer(&lt;br&gt;
"mylb",&lt;br&gt;
aws.lb.LoadBalancerArgs(&lt;br&gt;
name="mylb",&lt;br&gt;
load_balancer_type="network",&lt;br&gt;
ip_address_type="ipv4",&lt;br&gt;
security_groups=[lbsecurity.id],&lt;br&gt;
subnets=[&lt;br&gt;
pubnames[0].id,&lt;br&gt;
pubnames[1].id,&lt;br&gt;
pubnames[2].id&lt;br&gt;
],&lt;br&gt;
tags={&lt;br&gt;
"Name" : "mylb"&lt;br&gt;
},&lt;br&gt;
internal=False,&lt;br&gt;
enable_cross_zone_load_balancing=True&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;mytargets2=aws.lb.TargetGroup(&lt;br&gt;
"mytargets2",&lt;br&gt;
aws.lb.TargetGroupArgs(&lt;br&gt;
name="mytargets2",&lt;br&gt;
port=6443,&lt;br&gt;
protocol="TCP",&lt;br&gt;
target_type="instance",&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "mytargets2"&lt;br&gt;
},&lt;br&gt;
health_check=aws.lb.TargetGroupHealthCheckArgs(&lt;br&gt;
enabled=True,&lt;br&gt;
interval=60,&lt;br&gt;
port="traffic-port",&lt;br&gt;
healthy_threshold=3,&lt;br&gt;
unhealthy_threshold=3,&lt;br&gt;
timeout=30&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;listener1=aws.lb.Listener(&lt;br&gt;
"listener1",&lt;br&gt;
aws.lb.ListenerArgs(&lt;br&gt;
load_balancer_arn=mylb.arn,&lt;br&gt;
port=6443,&lt;br&gt;
protocol="TCP",&lt;br&gt;
default_actions=[&lt;br&gt;
aws.lb.ListenerDefaultActionArgs(&lt;br&gt;
type="forward",&lt;br&gt;
target_group_arn=mytargets2.arn&lt;br&gt;
),&lt;br&gt;
]&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;clusters_role=aws.iam.Role(&lt;br&gt;
"clusters_role",&lt;br&gt;
aws.iam.RoleArgs(&lt;br&gt;
name="clustersroles",&lt;br&gt;
assume_role_policy=json.dumps({&lt;br&gt;
"Version" : "2012-10-17",&lt;br&gt;
"Statement" : [{&lt;br&gt;
"Effect" : "Allow",&lt;br&gt;
"Action": "sts:AssumeRole",&lt;br&gt;
"Principal": {&lt;br&gt;
"Service" : "ec2.amazonaws.com"&lt;br&gt;
}&lt;br&gt;
}]&lt;/p&gt;

&lt;p&gt;})&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;ssmattach1=aws.iam.RolePolicyAttachment(&lt;br&gt;
"ssmattach1",&lt;br&gt;
aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
role=clusters_role.name,&lt;br&gt;
policy_arn=aws.iam.ManagedPolicy.AMAZON_SSM_MANAGED_EC2_INSTANCE_DEFAULT_POLICY&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;myprofile=aws.iam.InstanceProfile(&lt;br&gt;
"myprofile",&lt;br&gt;
aws.iam.InstanceProfileArgs(&lt;br&gt;
name="myprofile",&lt;br&gt;
role=clusters_role.name,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "myprofile"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;mastertemps=aws.ec2.LaunchTemplate(&lt;br&gt;
"mastertemps",&lt;br&gt;
aws.ec2.LaunchTemplateArgs(&lt;br&gt;
image_id=cfg1.require(key="ami"),&lt;br&gt;
name="mastertemps",&lt;br&gt;
instance_type=cfg1.require(key="instance-type"),&lt;br&gt;
vpc_security_group_ids=[mastersecurity.id],&lt;br&gt;
block_device_mappings=[&lt;br&gt;
aws.ec2.LaunchTemplateBlockDeviceMappingArgs(&lt;br&gt;
device_name="/dev/sdb",&lt;br&gt;
ebs=aws.ec2.LaunchTemplateBlockDeviceMappingEbsArgs(&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3",&lt;br&gt;
delete_on_termination=True,&lt;br&gt;
encrypted=False&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;br&gt;
"Name" : "mastertemps" &lt;br&gt;
},&lt;br&gt;
iam_instance_profile= aws.ec2.LaunchTemplateIamInstanceProfileArgs(&lt;br&gt;
arn=myprofile.arn&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;controlplane=aws.autoscaling.Group(&lt;br&gt;
"controlplane",&lt;br&gt;
aws.autoscaling.GroupArgs(&lt;br&gt;
name="controlplane",&lt;br&gt;
vpc_zone_identifiers=[&lt;br&gt;
pubnames[0].id,&lt;br&gt;
pubnames[1].id,&lt;br&gt;
pubnames[2].id&lt;br&gt;
],&lt;br&gt;
desired_capacity=3,&lt;br&gt;
max_size=9,&lt;br&gt;
min_size=1,&lt;br&gt;
launch_template=aws.autoscaling.GroupLaunchTemplateArgs(&lt;br&gt;
id=mastertemps.id,&lt;br&gt;
version="$Latest"&lt;br&gt;
),&lt;br&gt;
default_cooldown=600,&lt;br&gt;
tags=[&lt;br&gt;
aws.autoscaling.GroupTagArgs(&lt;br&gt;
key="Name",&lt;br&gt;
value="controlplane",&lt;br&gt;
propagate_at_launch=True&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
health_check_grace_period=600,&lt;br&gt;
health_check_type="EC2",&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;lbattach=aws.autoscaling.Attachment(&lt;br&gt;
"lbattach",&lt;br&gt;
aws.autoscaling.AttachmentArgs(&lt;br&gt;
autoscaling_group_name=controlplane.name ,&lt;br&gt;
lb_target_group_arn=mytargets2.arn&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;workertemps=aws.ec2.LaunchTemplate(&lt;br&gt;
"workertemps",&lt;br&gt;
aws.ec2.LaunchTemplateArgs(&lt;br&gt;
image_id=cfg1.require(key="ami"),&lt;br&gt;
name="workertemps",&lt;br&gt;
instance_type=cfg1.require(key="instance-type"),&lt;br&gt;
vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
block_device_mappings=[&lt;br&gt;
aws.ec2.LaunchTemplateBlockDeviceMappingArgs(&lt;br&gt;
device_name="/dev/sdr",&lt;br&gt;
ebs=aws.ec2.LaunchTemplateBlockDeviceMappingEbsArgs(&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3",&lt;br&gt;
delete_on_termination=True,&lt;br&gt;
encrypted=False&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;br&gt;
"Name" : "workertemps" &lt;br&gt;
},&lt;br&gt;
iam_instance_profile=aws.ec2.LaunchTemplateIamInstanceProfileArgs(&lt;br&gt;
arn=myprofile.arn&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker=aws.autoscaling.Group(&lt;br&gt;
"worker",&lt;br&gt;
aws.autoscaling.GroupArgs(&lt;br&gt;
name="worker",&lt;br&gt;
vpc_zone_identifiers=[&lt;br&gt;
privnames[0].id,&lt;br&gt;
privnames[1].id,&lt;br&gt;
privnames[2].id&lt;br&gt;
],&lt;br&gt;
desired_capacity=3,&lt;br&gt;
max_size=9,&lt;br&gt;
min_size=1,&lt;br&gt;
launch_template=aws.autoscaling.GroupLaunchTemplateArgs(&lt;br&gt;
id=workertemps.id,&lt;br&gt;
version="$Latest"&lt;br&gt;
),&lt;br&gt;
default_cooldown=600,&lt;br&gt;
tags=[&lt;br&gt;
aws.autoscaling.GroupTagArgs(&lt;br&gt;
key="Name",&lt;br&gt;
value="worker",&lt;br&gt;
propagate_at_launch=True&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
health_check_grace_period=600,&lt;br&gt;
health_check_type="EC2",&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;myssmdoc=aws.ssm.Document(&lt;br&gt;
"myssmdoc",&lt;br&gt;
aws.ssm.DocumentArgs(&lt;br&gt;
name="mydoc",&lt;br&gt;
document_format="YAML",&lt;br&gt;
document_type="Command",&lt;br&gt;
version_name="1.1.0",&lt;br&gt;
content="""schemaVersion: '1.2'&lt;br&gt;
description: Check ip configuration of a Linux instance.&lt;br&gt;
parameters: {}&lt;br&gt;
runtimeConfig:&lt;br&gt;
  'aws:runShellScript':&lt;br&gt;
    properties:&lt;br&gt;
      - id: '0.aws:runShellScript'&lt;br&gt;
        runCommand:&lt;br&gt;
          - sudo apt update -y &amp;amp;&amp;amp; sudo apt upgrade -y&lt;br&gt;
          - sudo apt install -y net-tools&lt;br&gt;
          - sudo systemctl enable amazon-ssm-agent --now&lt;br&gt;
"""&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;myssmassociates=aws.ssm.Association(&lt;br&gt;
"myssmassociates",&lt;br&gt;
aws.ssm.AssociationArgs(&lt;br&gt;
name=myssmdoc.name,&lt;br&gt;
association_name="myssmassociates",&lt;br&gt;
targets=[&lt;br&gt;
aws.ssm.AssociationTargetArgs(&lt;br&gt;
key="InstanceIds",&lt;br&gt;
values=["*"]&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;controlplanes=aws.ec2.get_instances(&lt;br&gt;
instance_tags={&lt;br&gt;
"Name": "controlplane"&lt;br&gt;
},&lt;br&gt;
instance_state_names=["running"],&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;workers=aws.ec2.get_instances(&lt;br&gt;
instance_tags={&lt;br&gt;
"Name": "worker"&lt;br&gt;
},&lt;br&gt;
instance_state_names=["running"],&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;get_controlplanepubips=controlplanes.public_ips&lt;br&gt;
get_controlplaneprivips=controlplanes.private_ips&lt;br&gt;
get_workerprivips=workers.private_ips&lt;br&gt;
get_loadbalancer=mylb.dns_name &lt;br&gt;
pulumi.export("get_controlplanepubips", get_controlplanepubips)&lt;br&gt;
pulumi.export("get_controlplaneprivips", get_controlplaneprivips)&lt;br&gt;
pulumi.export("get_workerprivips", get_workerprivips)&lt;br&gt;
pulumi.export("get_loadbalancer", get_loadbalancer)&lt;/p&gt;
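&lt;p&gt;after pulumi up, the stack outputs above can be consumed as JSON (pulumi stack output --json) to drive the node setup. a sketch of turning that JSON into SSH targets and the API endpoint — the sample payload below is made up, not real output from this lab:&lt;/p&gt;

```python
import json

# Hypothetical output of `pulumi stack output --json` after `pulumi up`.
raw = '''{
  "get_controlplanepubips": ["54.0.0.1", "54.0.0.2", "54.0.0.3"],
  "get_workerprivips": ["10.0.4.10", "10.0.5.10", "10.0.6.10"],
  "get_loadbalancer": "mylb-example.elb.us-east-1.amazonaws.com"
}'''

outputs = json.loads(raw)
# SSH targets for the control plane nodes (ubuntu is the default AMI user here).
ssh_targets = [f"ubuntu@{ip}" for ip in outputs["get_controlplanepubips"]]
# The kube-apiserver endpoint that kubeadm init/join should point at.
api_endpoint = f'{outputs["get_loadbalancer"]}:6443'
print(api_endpoint)
```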

&lt;p&gt;Note: &lt;/p&gt;

&lt;p&gt;for the control plane security group&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   allow   all  traffic   in  inbound and  outbound  rules  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;for the worker security group&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  allow  all  traffic   in  inbound   and outbound  rules  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;for  network  load balancer  security group  &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; allow tcp 6443 for apiserver  which  allow  registered  target control plane nodes 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;outcomes: from the control plane and worker nodes&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r1l6tgs9drggg551cmd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r1l6tgs9drggg551cmd.jpeg" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfxzqynztxmg8jqy4fbs.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfxzqynztxmg8jqy4fbs.jpeg" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi23v63ewzev72h52sfcq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi23v63ewzev72h52sfcq.jpeg" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa61rroructfllrm7qb7w.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa61rroructfllrm7qb7w.jpeg" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;references:&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=FHXucrzqcZM&amp;amp;t=1944s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=FHXucrzqcZM&amp;amp;t=1944s&lt;/a&gt;  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>backup and restore etcd</title>
      <dc:creator>MohammedBanabila</dc:creator>
      <pubDate>Sun, 30 Mar 2025 01:41:05 +0000</pubDate>
      <link>https://dev.to/mohammed_banabila_3bc9e49/backup-and-restore-etcd-4e8o</link>
      <guid>https://dev.to/mohammed_banabila_3bc9e49/backup-and-restore-etcd-4e8o</guid>
      <description>&lt;p&gt;.setup  self  managed k8s  cluster  using  kubeadm  and  those steps are :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;create a vpc and subnets; in this lab the vpc uses cidr block 77.132.0.0/16 with 3 public subnets of block size /24&lt;/li&gt;
&lt;li&gt;deploy 3 ec2 instances, one per node, and define which is the control plane and which are the worker nodes&lt;/li&gt;
&lt;li&gt;create 2 security groups, one for the control plane node and one for the worker nodes&lt;/li&gt;
&lt;li&gt;set up the network access control list&lt;/li&gt;
&lt;li&gt;associate elastic ips with those instances&lt;/li&gt;
&lt;li&gt;attach an internet gateway to the vpc&lt;/li&gt;
&lt;li&gt;create a route table and associate it with the subnets&lt;/li&gt;
&lt;li&gt;confirm the infrastructure is up and running, e.g. the ec2 instances&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;note:&lt;br&gt;
the control plane node exposes the ports required by its components:&lt;/p&gt;

&lt;p&gt;apiserver: port 6443&lt;br&gt;
etcd: ports 2379-2380&lt;br&gt;
scheduler: port 10257&lt;br&gt;
controller-manager: port 10259&lt;br&gt;
ssh access to the control plane node: port 22, restricted to my ip address (/32)&lt;/p&gt;

&lt;p&gt;and the worker nodes expose these ports:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubelet    port 10250 
kube-proxy   port 10256 
nodePort      30000-32767 
access to worker node    port 22   only to  my ip  address /32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;we add those ports to the control plane and worker security groups&lt;/p&gt;
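&lt;p&gt;the two port tables above map directly onto security group rules; a sketch of generating them — plain dicts stand in for the security group rule arguments, and the /32 address is a placeholder:&lt;/p&gt;

```python
# Ports from the tables above, grouped per security group.
CONTROL_PLANE = {"kube-apiserver": (6443, 6443), "etcd": (2379, 2380),
                 "kube-scheduler": (10257, 10257),
                 "kube-controller-manager": (10259, 10259)}
WORKER = {"kubelet": (10250, 10250), "kube-proxy": (10256, 10256),
          "NodePort services": (30000, 32767)}
MY_IP = "203.0.113.10/32"  # placeholder for "my ip address /32"

def rules(ports, ssh_cidr):
    """One tcp rule per component, plus SSH restricted to ssh_cidr."""
    out = [dict(component=c, from_port=lo, to_port=hi, protocol="tcp")
           for c, (lo, hi) in ports.items()]
    out.append(dict(component="SSH", from_port=22, to_port=22,
                    protocol="tcp", cidr=ssh_cidr))
    return out

cp_rules = rules(CONTROL_PLANE, MY_IP)
worker_rules = rules(WORKER, MY_IP)
```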

&lt;p&gt;before starting the setup, update and upgrade packages with sudo apt update -y &amp;amp;&amp;amp; sudo apt upgrade -y.&lt;br&gt;
steps to set up the control plane node as the master node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;disable swap&lt;/li&gt;
&lt;li&gt;update kernel params&lt;/li&gt;
&lt;li&gt;install a container runtime&lt;/li&gt;
&lt;li&gt;install runc&lt;/li&gt;
&lt;li&gt;install the cni plugins&lt;/li&gt;
&lt;li&gt;install kubeadm, kubelet and kubectl, and check their versions&lt;/li&gt;
&lt;li&gt;initialize the control plane node using kubeadm; the output prints the token that lets you join worker nodes&lt;/li&gt;
&lt;li&gt;install the calico cni plugin&lt;/li&gt;
&lt;li&gt;use kubeadm join to join the master&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;optional: you can change the hostnames of the nodes, e.g. worker1 and worker2 for the worker nodes&lt;/p&gt;

&lt;p&gt;steps to set up the worker nodes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;disable swap&lt;/li&gt;
&lt;li&gt;update kernel params&lt;/li&gt;
&lt;li&gt;install a container runtime&lt;/li&gt;
&lt;li&gt;install runc&lt;/li&gt;
&lt;li&gt;install the cni plugins&lt;/li&gt;
&lt;li&gt;install kubeadm, kubelet and kubectl, and check their versions&lt;/li&gt;
&lt;li&gt;use kubeadm join to join the master&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: copy the kubeconfig from the control plane to each worker node at .kube/config&lt;/p&gt;

&lt;p&gt;on the control plane node:&lt;br&gt;
cd .kube/&lt;br&gt;
cat the config file, then copy its contents to the worker nodes&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
you can deploy this through the console or as infrastructure as code;&lt;br&gt;
in this lab, I used pulumi python for provisioning the resources.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"""An AWS Python Pulumi program"""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;import pulumi&lt;br&gt;
import pulumi_aws as aws&lt;br&gt;
import json&lt;/p&gt;

&lt;p&gt;cfg1=pulumi.Config() &lt;/p&gt;

&lt;p&gt;vpc1=aws.ec2.Vpc(&lt;br&gt;
"vpc1",&lt;br&gt;
aws.ec2.VpcArgs(&lt;br&gt;
cidr_block=cfg1.require(key='block1'),&lt;br&gt;
tags={&lt;br&gt;
"Name": "vpc1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;intgw1=aws.ec2.InternetGateway(&lt;br&gt;
"intgw1",&lt;br&gt;
aws.ec2.InternetGatewayArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name": "intgw1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;zones="us-east-1a"&lt;br&gt;
publicsubnets=["subnet1","subnet2","subnet3"]&lt;br&gt;
cidr1=cfg1.require(key="cidr1")&lt;br&gt;
cidr2=cfg1.require(key="cidr2")&lt;br&gt;
cidr3=cfg1.require(key="cidr3")&lt;/p&gt;

&lt;p&gt;cidrs=[ cidr1 , cidr2 , cidr3 ]&lt;/p&gt;

&lt;p&gt;for allsubnets in range(len(publicsubnets)):&lt;br&gt;
publicsubnets[allsubnets]=aws.ec2.Subnet(&lt;br&gt;
publicsubnets[allsubnets],&lt;br&gt;
aws.ec2.SubnetArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
cidr_block=cidrs[allsubnets],&lt;br&gt;
map_public_ip_on_launch=False,&lt;br&gt;
availability_zone=zones,&lt;br&gt;
tags={&lt;br&gt;
"Name" : publicsubnets[allsubnets]&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;table1=aws.ec2.RouteTable(&lt;br&gt;
"table1",&lt;br&gt;
aws.ec2.RouteTableArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
routes=[&lt;br&gt;
aws.ec2.RouteTableRouteArgs(&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
gateway_id=intgw1.id&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;br&gt;
"Name" : "table1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;associate1=aws.ec2.RouteTableAssociation(&lt;br&gt;
"associate1",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=publicsubnets[0].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
associate2=aws.ec2.RouteTableAssociation(&lt;br&gt;
"associate2",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=publicsubnets[1].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;associate3=aws.ec2.RouteTableAssociation(&lt;br&gt;
"associate3",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=publicsubnets[2].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;ingress_traffic=[&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="myips"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=100&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="deny",&lt;br&gt;
rule_no=101&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=80,&lt;br&gt;
to_port=80,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=200&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=443,&lt;br&gt;
to_port=443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=300&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=400&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;]&lt;/p&gt;

&lt;p&gt;egress_traffic=[&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="myips"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=100&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="deny",&lt;br&gt;
rule_no=101&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=80,&lt;br&gt;
to_port=80,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=200&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=443,&lt;br&gt;
to_port=443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=300&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=400&lt;br&gt;
)&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;nacls1=aws.ec2.NetworkAcl(&lt;br&gt;
"nacls1",&lt;br&gt;
aws.ec2.NetworkAclArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
ingress=ingress_traffic,&lt;br&gt;
egress=egress_traffic,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "nacls1"&lt;br&gt;
},&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink1=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacllink1",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=nacls1.id,&lt;br&gt;
subnet_id=publicsubnets[0].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink2=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacllink2",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=nacls1.id,&lt;br&gt;
subnet_id=publicsubnets[1].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink3=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacllink3",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=nacls1.id,&lt;br&gt;
subnet_id=publicsubnets[2].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;masteringress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="myips")],&lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=6443,&lt;br&gt;
to_port=6443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=2379,&lt;br&gt;
to_port=2380,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;/p&gt;

&lt;p&gt;),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=10249,&lt;br&gt;
to_port=10260,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
)&lt;br&gt;
]&lt;br&gt;
masteregress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]&lt;br&gt;
),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;mastersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"mastersecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
name="mastersecurity",&lt;br&gt;
ingress=masteringress,&lt;br&gt;
egress=masteregress,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "mastersecurity"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;workeringress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="myips")]&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=10250,&lt;br&gt;
to_port=10250,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=10256,&lt;br&gt;
to_port=10256,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=30000,&lt;br&gt;
to_port=32767,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=443,&lt;br&gt;
to_port=443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=80,&lt;br&gt;
to_port=80,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;]&lt;br&gt;
workeregress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;workersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"workersecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
name="workersecurity",&lt;br&gt;
ingress=workeringress,&lt;br&gt;
egress=workeregress,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "workersecurity"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;master=aws.ec2.Instance(&lt;br&gt;
"master",&lt;br&gt;
aws.ec2.InstanceArgs(&lt;br&gt;
ami=cfg1.require(key='ami'),&lt;br&gt;
instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
vpc_security_group_ids=[mastersecurity.id],&lt;br&gt;
subnet_id=publicsubnets[0].id,&lt;br&gt;
availability_zone=zones,&lt;br&gt;
key_name="mykey1",&lt;br&gt;
tags={&lt;br&gt;
"Name" : "master"&lt;br&gt;
},&lt;br&gt;
ebs_block_devices=[&lt;br&gt;
aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
device_name="/dev/sdm",&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3"&lt;br&gt;
)&lt;br&gt;
]&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker1=aws.ec2.Instance(&lt;br&gt;
"worker1",&lt;br&gt;
aws.ec2.InstanceArgs(&lt;br&gt;
ami=cfg1.require(key='ami'),&lt;br&gt;
instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
subnet_id=publicsubnets[1].id,&lt;br&gt;
availability_zone=zones,&lt;br&gt;
key_name="mykey2",&lt;br&gt;
tags={&lt;br&gt;
"Name" : "worker1"&lt;br&gt;
},&lt;br&gt;
ebs_block_devices=[&lt;br&gt;
aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
device_name="/dev/sdb",&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3"&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker2=aws.ec2.Instance(&lt;br&gt;
"worker2",&lt;br&gt;
aws.ec2.InstanceArgs(&lt;br&gt;
ami=cfg1.require(key='ami'),&lt;br&gt;
instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
subnet_id=publicsubnets[2].id,&lt;br&gt;
key_name="mykey2",&lt;br&gt;
tags={&lt;br&gt;
"Name" : "worker2"&lt;br&gt;
},&lt;br&gt;
ebs_block_devices=[&lt;br&gt;
aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
device_name="/dev/sdc",&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3"&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eips=[ "eip1" , "eip2" , "eip3" ]&lt;br&gt;
for alleips in range(len(eips)):&lt;br&gt;
eips[alleips]=aws.ec2.Eip(&lt;br&gt;
eips[alleips],&lt;br&gt;
aws.ec2.EipArgs(&lt;br&gt;
domain="vpc",&lt;br&gt;
tags={&lt;br&gt;
"Name" : eips[alleips]&lt;br&gt;
} &lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink1=aws.ec2.EipAssociation(&lt;br&gt;
"eiplink1",&lt;br&gt;
aws.ec2.EipAssociationArgs(&lt;br&gt;
allocation_id=eips[0].id,&lt;br&gt;
instance_id=master.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink2=aws.ec2.EipAssociation(&lt;br&gt;
"eiplink2",&lt;br&gt;
aws.ec2.EipAssociationArgs(&lt;br&gt;
allocation_id=eips[1].id,&lt;br&gt;
instance_id=worker1.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink3=aws.ec2.EipAssociation(&lt;br&gt;
"eiplink3",&lt;br&gt;
aws.ec2.EipAssociationArgs(&lt;br&gt;
allocation_id=eips[2].id,&lt;br&gt;
instance_id=worker2.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pulumi.export("master_eip" , value=eips[0].public_ip )&lt;/p&gt;

&lt;p&gt;pulumi.export("worker1_eip", value=eips[1].public_ip )&lt;/p&gt;

&lt;p&gt;pulumi.export("worker2_eip", value=eips[2].public_ip )&lt;/p&gt;

&lt;p&gt;pulumi.export("master_private_ip", value=master.private_ip)&lt;/p&gt;

&lt;p&gt;pulumi.export("worker1_private_ip" , value=worker1.private_ip)&lt;/p&gt;

&lt;p&gt;pulumi.export( "worker2_private_ip" , value=worker2.private_ip )&lt;/p&gt;

&lt;p&gt;master.sh script, for the master node:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27
curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins
curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd
sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

# initialize the control plane node using kubeadm
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=77.132.100.111 --node-name master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

export KUBECONFIG=/etc/kubernetes/admin.conf

# install cni calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml
curl -o custom-resources.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml
kubectl apply -f custom-resources.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice: after kubeadm init finishes, it prints a join command like the following:&lt;/p&gt;

&lt;p&gt;sudo kubeadm join 77.132.100.111:6443 --token uwic8i.4btsato9aj46v7pz \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:084b7c1384239e35a6d18d5b6034cfc621193a2d4dd206fa3dfc405dd6976335 &lt;/p&gt;

&lt;p&gt;This is the output from kubeadm after the installation finished; use it to join each worker node. If you lost the token, you can create a new one that worker nodes can use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubeadm token create --print-join-command &amp;gt;&amp;gt; join-master.sh
sudo chmod +x join-master.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
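&lt;p&gt;As a side illustration (not part of the lab), the endpoint, token, and CA-cert hash embedded in the join command above can be pulled apart with a short Python snippet; the values are the ones printed by kubeadm in this lab:&lt;/p&gt;

```python
import re

# Join command as printed by kubeadm init (copied from above).
join_cmd = (
    "kubeadm join 77.132.100.111:6443 --token uwic8i.4btsato9aj46v7pz "
    "--discovery-token-ca-cert-hash "
    "sha256:084b7c1384239e35a6d18d5b6034cfc621193a2d4dd206fa3dfc405dd6976335"
)

endpoint = re.search(r"join\s+(\S+)", join_cmd).group(1)
token = re.search(r"--token\s+(\S+)", join_cmd).group(1)
ca_hash = re.search(r"--discovery-token-ca-cert-hash\s+(\S+)", join_cmd).group(1)

# kubeadm bootstrap tokens have the form [a-z0-9]{6}.[a-z0-9]{16}
assert re.fullmatch(r"[a-z0-9]{6}\.[a-z0-9]{16}", token)
print(endpoint, token, ca_hash[:7])
```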

&lt;p&gt;worker1.sh script, for worker node 1:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27
curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins
curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd
sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

sudo hostname worker1

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

sudo kubeadm join 77.132.100.111:6443 --token uwic8i.4btsato9aj46v7pz \
  --discovery-token-ca-cert-hash sha256:084b7c1384239e35a6d18d5b6034cfc621193a2d4dd206fa3dfc405dd6976335
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;worker2.sh script, for worker node 2:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params
cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27
curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz
curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc
curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins
curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd
sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

sudo hostname worker2

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

sudo kubeadm join 77.132.100.111:6443 --token uwic8i.4btsato9aj46v7pz \
  --discovery-token-ca-cert-hash sha256:084b7c1384239e35a6d18d5b6034cfc621193a2d4dd206fa3dfc405dd6976335
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After finishing the setup for the Kubernetes cluster and worker nodes, create an alias for the kubectl command on all nodes.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;echo 'alias k=kubectl' &amp;gt;&amp;gt; ~/.bashrc on all nodes&lt;/p&gt;

&lt;p&gt;Create deployment nginx1 with image nginx:1.27.3, replicas=4, and --port=8000,&lt;br&gt;&lt;br&gt;
then expose deployment nginx1 with type=NodePort, --port=80, --target-port=8000, and --name=nginx-svc:&lt;/p&gt;

&lt;p&gt;k create deployment nginx1 --image=nginx:1.27.3 --replicas=4 --port=8000 &lt;br&gt;
k expose deployment/nginx1 --type=NodePort --port=80 --target-port=8000 --name=nginx-svc&lt;br&gt;
k get pods&lt;br&gt;&lt;br&gt;
k get svc&lt;br&gt;
k get deploy&lt;br&gt;
k describe pod pod-name &lt;br&gt;
k describe deploy deployment-name &lt;br&gt;
k describe svc service-name &lt;/p&gt;

&lt;p&gt;Install etcdctl on the master node:&lt;br&gt;
  sudo apt install -y etcd-client &lt;/p&gt;
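&lt;p&gt;For reference, the Deployment those imperative commands create can also be written as a manifest. A minimal Python sketch follows, with field values taken from the flags above; this is an illustration, not the exact object kubectl generates:&lt;/p&gt;

```python
import json

# Mirror of: k create deployment nginx1 --image=nginx:1.27.3 --replicas=4 --port=8000
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "nginx1"},
    "spec": {
        "replicas": 4,
        "selector": {"matchLabels": {"app": "nginx1"}},
        "template": {
            "metadata": {"labels": {"app": "nginx1"}},
            "spec": {
                "containers": [{
                    "name": "nginx",
                    "image": "nginx:1.27.3",
                    # --port=8000 becomes the containerPort
                    "ports": [{"containerPort": 8000}],
                }]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```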

&lt;p&gt;Set ETCDCTL_API=3 so you can use the etcdctl command:&lt;/p&gt;

&lt;p&gt;export ETCDCTL_API=3&lt;/p&gt;

&lt;p&gt;type etcdctl&lt;/p&gt;

&lt;p&gt;cd /etc/kubernetes/manifests&lt;/p&gt;

&lt;p&gt;cat etcd.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;     notice     --data-dir  |  --cacert=  |--cert=  |  --trust-ca-file  | --key 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The default data directory for etcd is /var/lib/etcd. &lt;/p&gt;

&lt;p&gt;1. Back up all data into /opt/etcd-backup.db:&lt;br&gt;
sudo  etcdctl   --endpoints=    \   --cacert=    \  --cert=     \  --key=      snapshot save  /opt/etcd-backup.db  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Restore all data:
sudo  etcdctl   --endpoints=    \   --cacert=    \  --cert=     \  --key=      snapshot restore /opt/etcd-backup.db   --data-dir=/var/lib/etcd-backup-and-restore&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Use the nano or vim editor to change the path to the new --data-dir=/var/lib/etcd-backup-and-restore in etcd.yaml, along with the volume mountPath. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Restart all static-pod components by moving their manifests out of the directory and back in:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-apiserver.yaml 
kube-scheduler.yaml 
kube-controller-manager.yaml  
etcd.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;sudo mv *.yaml /tmp&lt;br&gt;&lt;br&gt;
sudo mv /tmp/*.yaml . &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is also recommended to restart the kubelet and reload the systemd daemon: &lt;/p&gt;

&lt;p&gt;sudo systemctl restart kubelet  &amp;amp;&amp;amp; sudo systemctl daemon-reload &lt;/p&gt;

&lt;p&gt;Check on the etcd-master pod using: k describe pod etcd-master &lt;/p&gt;

&lt;p&gt;Examples for etcdctl: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;etcdctl  snapshot save&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;sudo etcdctl --endpoints=&lt;a href="https://127.0.0.1:2379" rel="noopener noreferrer"&gt;https://127.0.0.1:2379&lt;/a&gt; \&lt;br&gt;
--cacert=/etc/kubernetes/pki/etcd/ca.crt \&lt;br&gt;
--cert=/etc/kubernetes/pki/etcd/server.crt \&lt;br&gt;
--key=/etc/kubernetes/pki/etcd/server.key \&lt;br&gt;
snapshot save /opt/etcd-backup.db&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;etcdctl  snapshot restore&lt;br&gt;
sudo etcdctl --endpoints=&lt;a href="https://127.0.0.1:2379" rel="noopener noreferrer"&gt;https://127.0.0.1:2379&lt;/a&gt; \&lt;br&gt;
--cacert=/etc/kubernetes/pki/etcd/ca.crt \&lt;br&gt;
--cert=/etc/kubernetes/pki/etcd/server.crt \&lt;br&gt;
--key=/etc/kubernetes/pki/etcd/server.key \&lt;br&gt;
snapshot restore /opt/etcd-backup.db \&lt;br&gt;
--data-dir /var/lib/etcd-backup-and-restore&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;output the status table of etcd-backup.db with etcdctl&lt;br&gt;&lt;br&gt;
sudo etcdctl --write-out=table --endpoints=&lt;a href="https://127.0.0.1:2379" rel="noopener noreferrer"&gt;https://127.0.0.1:2379&lt;/a&gt; \&lt;br&gt;
--cacert=/etc/kubernetes/pki/etcd/ca.crt \&lt;br&gt;
--cert=/etc/kubernetes/pki/etcd/server.crt \&lt;br&gt;
--key=/etc/kubernetes/pki/etcd/server.key \&lt;br&gt;
snapshot status /opt/etcd-backup.db&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;References: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Options for Highly Available Topology (etcd):
&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Operating etcd clusters for Kubernetes:
&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;CKA series 2024-2025, Backup and Restore etcd:
&lt;a href="https://www.youtube.com/watch?v=R2wuFCYgnm4&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=36" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=R2wuFCYgnm4&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=36&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Setup and upgrade kubernetes version for control plane and worker nodes</title>
      <dc:creator>MohammedBanabila</dc:creator>
      <pubDate>Sat, 22 Mar 2025 02:23:43 +0000</pubDate>
      <link>https://dev.to/mohammed_banabila_3bc9e49/setup-and-upgrade-kubernetes-version-for-control-plane-and-worker-nodes-30cp</link>
      <guid>https://dev.to/mohammed_banabila_3bc9e49/setup-and-upgrade-kubernetes-version-for-control-plane-and-worker-nodes-30cp</guid>
      <description>&lt;p&gt;in this lab , has two  stages: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;setup kubernetes version 1.31 for control plane  and  worker nodes  using kubeadm&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note:&lt;/p&gt;

&lt;p&gt;We deploy the control plane and worker nodes with any cloud provider, for example AWS,&lt;br&gt;&lt;br&gt;
using EC2 instances running Ubuntu. &lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;create a VPC and subnets; in this lab the VPC uses CIDR block 77.132.0.0/16 with 3 public subnets of block size /24&lt;/li&gt;
&lt;li&gt;deploy 3 EC2 instances, one as the control plane node and the others as worker nodes&lt;/li&gt;
&lt;li&gt;create 2 security groups, one for the control plane node and one for the worker nodes&lt;/li&gt;
&lt;li&gt;set up a network access control list&lt;/li&gt;
&lt;li&gt;associate Elastic IPs with those instances&lt;/li&gt;
&lt;li&gt;attach an internet gateway to the VPC&lt;/li&gt;
&lt;li&gt;create a route table and associate it with the subnets&lt;/li&gt;
&lt;li&gt;confirm the infrastructure, e.g. the EC2 instances, is up and running&lt;/li&gt;
&lt;/ol&gt;
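&lt;p&gt;The subnetting in step 1 can be sketched with Python's ipaddress module: a /16 VPC block carved into /24 subnets, taking the first three as the public subnets. The exact subnet CIDRs are my assumption (taken in order from the block); the VPC CIDR is the one used in this lab.&lt;/p&gt;

```python
import ipaddress

# VPC block used in this lab
vpc = ipaddress.ip_network("77.132.0.0/16")

# First three /24 public subnets carved out of the VPC block (assumed order)
public_subnets = list(vpc.subnets(new_prefix=24))[:3]
for net in public_subnets:
    print(net)
```

&lt;p&gt;These derived blocks would then be fed to the stack config as cidr1, cidr2, and cidr3.&lt;/p&gt;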

&lt;p&gt;note:&lt;br&gt;
 open the ports required by the control plane components:&lt;/p&gt;

&lt;p&gt;apiserver    port 6443&lt;br&gt;
   etcd          ports 2379-2380&lt;br&gt;
   scheduler  port 10259&lt;br&gt;
   controller-manager   port 10257&lt;br&gt;
   access to the control plane node    port 22, restricted to my IP address (/32)&lt;/p&gt;

&lt;p&gt;and for the worker nodes: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubelet    port 10250 
kube-proxy   port 10256 
nodePort      30000-32767 
access to worker node    port 22   only to  my ip  address /32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add these ports to the control plane security group and the worker security group respectively.&lt;/p&gt;
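&lt;p&gt;Once the security groups are in place, reachability of these ports can be spot-checked from another node inside the VPC with a small sketch; the address 77.132.100.19 is the master address used later in this lab, and the port list simply mirrors the table above:&lt;/p&gt;

```python
import socket

# Control plane ports from the table above
CONTROL_PLANE_PORTS = [6443, 2379, 2380, 10259, 10257]

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (using this lab's master address):
# for p in CONTROL_PLANE_PORTS:
#     print(p, port_open("77.132.100.19", p))
```

&lt;p&gt;A closed or filtered port simply returns False, which is often faster to interpret than reading raw security-group rules.&lt;/p&gt;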

&lt;p&gt;Before starting the setup, update and upgrade the packages with sudo apt update -y &amp;amp;&amp;amp; sudo apt upgrade -y. &lt;br&gt;
Steps to set up the control plane (master) node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;disable swap&lt;/li&gt;
&lt;li&gt;update kernel params&lt;/li&gt;
&lt;li&gt;install a container runtime (containerd)&lt;/li&gt;
&lt;li&gt;install runc&lt;/li&gt;
&lt;li&gt;install the CNI plugins&lt;/li&gt;
&lt;li&gt;install kubeadm, kubelet, and kubectl, and check their versions&lt;/li&gt;
&lt;li&gt;initialize the control plane node using kubeadm; the output prints the join token that lets you join the worker nodes&lt;/li&gt;
&lt;li&gt;install the Cilium CNI plugin&lt;/li&gt;
&lt;li&gt;use kubeadm join on each worker to join the master&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Optional: you can change the hostname on each node, e.g. worker1 and worker2 for the worker nodes.&lt;/p&gt;
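&lt;p&gt;The ordered setup steps above can be captured as a dry-runnable command list, which makes it easy to review what will run before running it. This is only a sketch: the commands are abbreviated versions of the full scripts shown later in the post.&lt;/p&gt;

```python
import subprocess

# Ordered node-preparation commands (abbreviated; see the full scripts later)
SETUP_STEPS = [
    ("disable swap", "sudo swapoff -a"),
    ("load kernel modules", "sudo modprobe overlay br_netfilter"),
    ("apply sysctl params", "sudo sysctl --system"),
    ("enable containerd", "sudo systemctl enable containerd --now"),
]

def run_steps(steps, dry_run=True):
    """Print each step; execute it only when dry_run is False."""
    for name, cmd in steps:
        print(f"[{name}] {cmd}")
        if not dry_run:
            subprocess.run(cmd, shell=True, check=True)

run_steps(SETUP_STEPS)  # dry run: just lists the commands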

&lt;p&gt;Steps to set up the worker nodes: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;disable swap&lt;/li&gt;
&lt;li&gt;update kernel params&lt;/li&gt;
&lt;li&gt;install a container runtime (containerd)&lt;/li&gt;
&lt;li&gt;install runc&lt;/li&gt;
&lt;li&gt;install the CNI plugins&lt;/li&gt;
&lt;li&gt;install kubeadm, kubelet, and kubectl, and check their versions&lt;/li&gt;
&lt;li&gt;use kubeadm join to join the master&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: copy the kubeconfig from the control plane to the worker nodes at .kube/config. &lt;/p&gt;

&lt;p&gt;On the control plane node: &lt;br&gt;
     cd .kube/ &lt;br&gt;
     cat config   and copy its entire contents &lt;/p&gt;

&lt;p&gt;Note: &lt;br&gt;
 you can deploy the infrastructure from the console or as infrastructure as code; &lt;br&gt;
 in this lab, I used Pulumi with Python to provision the resources.&lt;/p&gt;

&lt;p&gt;"""An AWS Python Pulumi program"""&lt;br&gt;
import pulumi , pulumi_aws as aws , json&lt;/p&gt;

&lt;p&gt;cfg1=pulumi.Config() &lt;/p&gt;

&lt;p&gt;vpc1=aws.ec2.Vpc(&lt;br&gt;
"vpc1",&lt;br&gt;
aws.ec2.VpcArgs(&lt;br&gt;
cidr_block=cfg1.require(key='block1'),&lt;br&gt;
tags={&lt;br&gt;
"Name": "vpc1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;intgw1=aws.ec2.InternetGateway(&lt;br&gt;
"intgw1",&lt;br&gt;
aws.ec2.InternetGatewayArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
tags={&lt;br&gt;
"Name": "intgw1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;zones="us-east-1a"&lt;br&gt;
publicsubnets=["subnet1","subnet2","subnet3"]&lt;br&gt;
cidr1=cfg1.require(key="cidr1")&lt;br&gt;
cidr2=cfg1.require(key="cidr2")&lt;br&gt;
cidr3=cfg1.require(key="cidr3")&lt;/p&gt;

&lt;p&gt;cidrs=[ cidr1 , cidr2 , cidr3 ]&lt;/p&gt;

&lt;p&gt;for allsubnets in range(len(publicsubnets)):&lt;br&gt;
publicsubnets[allsubnets]=aws.ec2.Subnet(&lt;br&gt;
publicsubnets[allsubnets],&lt;br&gt;
aws.ec2.SubnetArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
cidr_block=cidrs[allsubnets],&lt;br&gt;
map_public_ip_on_launch=False,&lt;br&gt;
availability_zone=zones,&lt;br&gt;
tags={&lt;br&gt;
"Name" : publicsubnets[allsubnets]&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;table1=aws.ec2.RouteTable(&lt;br&gt;
"table1",&lt;br&gt;
aws.ec2.RouteTableArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
routes=[&lt;br&gt;
aws.ec2.RouteTableRouteArgs(&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
gateway_id=intgw1.id&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
tags={&lt;br&gt;
"Name" : "table1"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;associate1=aws.ec2.RouteTableAssociation(&lt;br&gt;
"associate1",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=publicsubnets[0].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;br&gt;
)&lt;br&gt;
associate2=aws.ec2.RouteTableAssociation(&lt;br&gt;
"associate2",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=publicsubnets[1].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;associate3=aws.ec2.RouteTableAssociation(&lt;br&gt;
"associate3",&lt;br&gt;
aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
subnet_id=publicsubnets[2].id,&lt;br&gt;
route_table_id=table1.id&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;ingress_traffic=[&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="myips"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=100&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="deny",&lt;br&gt;
rule_no=101&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=200&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;]&lt;/p&gt;

&lt;p&gt;egress_traffic=[&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="myips"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=100&lt;br&gt;
),&lt;br&gt;
aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="deny",&lt;br&gt;
rule_no=101&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
icmp_code=0,&lt;br&gt;
icmp_type=0,&lt;br&gt;
action="allow",&lt;br&gt;
rule_no=200&lt;br&gt;
)&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;nacls1=aws.ec2.NetworkAcl(&lt;br&gt;
"nacls1",&lt;br&gt;
aws.ec2.NetworkAclArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
ingress=ingress_traffic,&lt;br&gt;
egress=egress_traffic,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "nacls1"&lt;br&gt;
},&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink1=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacllink1",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=nacls1.id,&lt;br&gt;
subnet_id=publicsubnets[0].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink2=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacllink2",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=nacls1.id,&lt;br&gt;
subnet_id=publicsubnets[1].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink3=aws.ec2.NetworkAclAssociation(&lt;br&gt;
"nacllink3",&lt;br&gt;
aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
network_acl_id=nacls1.id,&lt;br&gt;
subnet_id=publicsubnets[2].id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;masteringress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="myips")],&lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=6443,&lt;br&gt;
to_port=6443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=2379,&lt;br&gt;
to_port=2380,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;/p&gt;

&lt;p&gt;),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=10249,&lt;br&gt;
to_port=10260,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
)&lt;br&gt;
]&lt;br&gt;
masteregress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]&lt;br&gt;
),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;mastersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"mastersecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
name="mastersecurity",&lt;br&gt;
ingress=masteringress,&lt;br&gt;
egress=masteregress,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "mastersecurity"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;workeringress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=22,&lt;br&gt;
to_port=22,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="myips")]&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=10250,&lt;br&gt;
to_port=10250,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=10256,&lt;br&gt;
to_port=10256,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=30000,&lt;br&gt;
to_port=32767,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=80,&lt;br&gt;
to_port=80,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
from_port=443,&lt;br&gt;
to_port=443,&lt;br&gt;
protocol="tcp",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;]&lt;br&gt;
workeregress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
from_port=0,&lt;br&gt;
to_port=0,&lt;br&gt;
protocol="-1",&lt;br&gt;
cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;workersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
"workersecurity",&lt;br&gt;
aws.ec2.SecurityGroupArgs(&lt;br&gt;
vpc_id=vpc1.id,&lt;br&gt;
name="workersecurity",&lt;br&gt;
ingress=workeringress,&lt;br&gt;
egress=workeregress,&lt;br&gt;
tags={&lt;br&gt;
"Name" : "workersecurity"&lt;br&gt;
}&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;master=aws.ec2.Instance(&lt;br&gt;
"master",&lt;br&gt;
aws.ec2.InstanceArgs(&lt;br&gt;
ami=cfg1.require(key='ami'),&lt;br&gt;
instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
vpc_security_group_ids=[mastersecurity.id],&lt;br&gt;
subnet_id=publicsubnets[0].id,&lt;br&gt;
availability_zone=zones,&lt;br&gt;
key_name="mykey1",&lt;br&gt;
tags={&lt;br&gt;
"Name" : "master"&lt;br&gt;
},&lt;br&gt;
ebs_block_devices=[&lt;br&gt;
aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
device_name="/dev/sdm",&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3"&lt;br&gt;
)&lt;br&gt;
]&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker1=aws.ec2.Instance(&lt;br&gt;
"worker1",&lt;br&gt;
aws.ec2.InstanceArgs(&lt;br&gt;
ami=cfg1.require(key='ami'),&lt;br&gt;
instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
subnet_id=publicsubnets[1].id,&lt;br&gt;
availability_zone=zones,&lt;br&gt;
key_name="mykey2",&lt;br&gt;
tags={&lt;br&gt;
"Name" : "worker1"&lt;br&gt;
},&lt;br&gt;
ebs_block_devices=[&lt;br&gt;
aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
device_name="/dev/sdb",&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3"&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker2=aws.ec2.Instance(&lt;br&gt;
"worker2",&lt;br&gt;
aws.ec2.InstanceArgs(&lt;br&gt;
ami=cfg1.require(key='ami'),&lt;br&gt;
instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
subnet_id=publicsubnets[2].id,&lt;br&gt;
key_name="mykey2",&lt;br&gt;
tags={&lt;br&gt;
"Name" : "worker2"&lt;br&gt;
},&lt;br&gt;
ebs_block_devices=[&lt;br&gt;
aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
device_name="/dev/sdc",&lt;br&gt;
volume_size=8,&lt;br&gt;
volume_type="gp3"&lt;br&gt;
)&lt;br&gt;
],&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eips=[ "eip1" , "eip2" , "eip3" ]&lt;br&gt;
for alleips in range(len(eips)):&lt;br&gt;
eips[alleips]=aws.ec2.Eip(&lt;br&gt;
eips[alleips],&lt;br&gt;
aws.ec2.EipArgs(&lt;br&gt;
domain="vpc",&lt;br&gt;
tags={&lt;br&gt;
"Name" : eips[alleips]&lt;br&gt;
} &lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink1=aws.ec2.EipAssociation(&lt;br&gt;
"eiplink1",&lt;br&gt;
aws.ec2.EipAssociationArgs(&lt;br&gt;
allocation_id=eips[0].id,&lt;br&gt;
instance_id=master.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink2=aws.ec2.EipAssociation(&lt;br&gt;
"eiplink2",&lt;br&gt;
aws.ec2.EipAssociationArgs(&lt;br&gt;
allocation_id=eips[1].id,&lt;br&gt;
instance_id=worker1.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink3=aws.ec2.EipAssociation(&lt;br&gt;
"eiplink3",&lt;br&gt;
aws.ec2.EipAssociationArgs(&lt;br&gt;
allocation_id=eips[2].id,&lt;br&gt;
instance_id=worker2.id&lt;br&gt;
)&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pulumi.export("master_eip" , value=eips[0].public_ip )&lt;/p&gt;

&lt;p&gt;pulumi.export("worker1_eip", value=eips[1].public_ip )&lt;/p&gt;

&lt;p&gt;pulumi.export("worker2_eip", value=eips[2].public_ip )&lt;/p&gt;

&lt;p&gt;pulumi.export("master_private_ip", value=master.private_ip)&lt;/p&gt;

&lt;p&gt;pulumi.export("worker1_private_ip" , value=worker1.private_ip)&lt;/p&gt;

&lt;p&gt;pulumi.export( "worker2_private_ip" , value=worker2.private_ip )&lt;/p&gt;

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;remember that not all CNI plugins support network policies;&lt;br&gt;
you can use the Calico or Cilium plugins &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubeadm, kubelet, and kubectl should be the same package version;&lt;br&gt;
in this lab I used the v1.31 repository, which installs 1.31.7&lt;br&gt;
references:&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/reference/networking/ports-and-protocols/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/reference/networking/ports-and-protocols/&lt;/a&gt; &lt;br&gt;
&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/opencontainers/runc/releases/" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/&lt;/a&gt; &lt;br&gt;
&lt;a href="https://github.com/containerd/containerd/releases" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/opencontainers/runc" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CKA 2024-2025 series:&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=6_gMoe7Ik8k&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=6_gMoe7Ik8k&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Shell scripts for the control plane, worker1, and worker2 nodes:&lt;/p&gt;

&lt;p&gt;On the control plane node, use master.sh: &lt;/p&gt;

&lt;p&gt;#!/bin/bash&lt;/p&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;
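&lt;p&gt;After sudo sysctl --system, the values can be verified against /proc/sys. A sketch of a parser for the key = value format used in /etc/sysctl.d/k8s.conf (the file contents are the ones written above):&lt;/p&gt;

```python
def parse_sysctl(text):
    """Parse 'key = value' lines into a dict, skipping blanks and comments."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

k8s_conf = """
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
"""

expected = parse_sysctl(k8s_conf)
print(expected)

# On a node, each key maps to a /proc/sys path, e.g. the value read from
# /proc/sys/net/ipv4/ip_forward should equal "1"
```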

&lt;h1&gt;
  
  
  install container runtime
&lt;/h1&gt;

&lt;h1&gt;
  
  
  install containerd 1.7.27
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;
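&lt;p&gt;The sed above flips SystemdCgroup in /etc/containerd/config.toml so containerd uses the systemd cgroup driver that kubelet expects. The same edit as a Python sketch, operating on a minimal sample string rather than the real config file:&lt;/p&gt;

```python
def enable_systemd_cgroup(toml_text):
    """Replicate: sed 's/SystemdCgroup = false/SystemdCgroup = true/g'."""
    return toml_text.replace("SystemdCgroup = false", "SystemdCgroup = true")

# Minimal sample of the relevant config.toml line
sample = "  SystemdCgroup = false\n"
print(enable_systemd_cgroup(sample))
```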

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni plugins
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;
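&lt;p&gt;Since kubeadm, kubelet, and kubectl should share the same minor version, the outputs of the three version checks above can be compared programmatically. A sketch; the version strings shown are the ones from this lab:&lt;/p&gt;

```python
def minor(version):
    """Extract (major, minor) from a version like 'v1.31.7' or '1.31.7'."""
    parts = version.lstrip("v").split(".")
    return int(parts[0]), int(parts[1])

def same_minor(*versions):
    """True when every version shares one (major, minor) pair."""
    return len({minor(v) for v in versions}) == 1

# Example with the versions used in this lab
print(same_minor("v1.31.7", "v1.31.7", "v1.31.7"))  # True
```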

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chmod 777 -R /var/run/containerd/&lt;/p&gt;

&lt;h1&gt;
  
  
  initialize the control plane node using kubeadm
&lt;/h1&gt;

&lt;p&gt;sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=77.132.100.19 --node-name master&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;sudo chmod 777 .kube/ &lt;br&gt;
sudo chmod 777 -R /etc/kubernetes/&lt;/p&gt;

&lt;p&gt;export KUBECONFIG=/etc/kubernetes/admin.conf&lt;/p&gt;

&lt;h1&gt;
  
  
  install helm v3
&lt;/h1&gt;

&lt;p&gt;curl -fsSL -o get_helm.sh &lt;a href="https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3&lt;/a&gt;&lt;br&gt;
sudo chmod 777 get_helm.sh&lt;br&gt;
./get_helm.sh&lt;/p&gt;

&lt;h1&gt;
  
  
  install cni cilium cli
&lt;/h1&gt;

&lt;p&gt;CILIUM_CLI_VERSION=$(curl -s &lt;a href="https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt&lt;/a&gt;)&lt;br&gt;
CLI_ARCH=amd64&lt;br&gt;
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi&lt;br&gt;
curl -L --fail --remote-name-all &lt;a href="https://github.com/cilium/cilium-cli/releases/download/$%7BCILIUM_CLI_VERSION%7D/cilium-linux-$%7BCLI_ARCH%7D.tar.gz%7B,.sha256sum%7D" rel="noopener noreferrer"&gt;https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}&lt;/a&gt;&lt;br&gt;
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum&lt;br&gt;
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin&lt;br&gt;
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}&lt;/p&gt;

&lt;p&gt;cilium status --wait&lt;/p&gt;

&lt;h1&gt;
  
  
  install cni cilium using helm
&lt;/h1&gt;

&lt;p&gt;helm repo add cilium &lt;a href="https://helm.cilium.io/" rel="noopener noreferrer"&gt;https://helm.cilium.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;helm install cilium cilium/cilium --version 1.17.2 --namespace kube-system&lt;/p&gt;

&lt;p&gt;On the worker1 node, use worker1.sh: &lt;/p&gt;

&lt;p&gt;#!/bin/bash&lt;/p&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;

&lt;h1&gt;
  
  
  install container runtime
&lt;/h1&gt;

&lt;h1&gt;
  
  
  install containerd 1.7.27
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni plugins
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chmod 777 -R /var/run/containerd/&lt;/p&gt;

&lt;h1&gt;
  
  
  join the worker node to the cluster using kubeadm
&lt;/h1&gt;

&lt;p&gt;sudo hostname worker1&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;sudo chmod 777 .kube/ &lt;br&gt;
sudo chmod 777 -R /etc/kubernetes/&lt;/p&gt;

&lt;p&gt;sudo kubeadm join 77.132.100.19:6443 --token h0oyqy.f26d28ikx6acev65 \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:3551b3a9363969229557eabb5a51a9d3851e335b49f0f2f2ededc535c2fdf77d &lt;/p&gt;
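&lt;p&gt;The --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA's DER-encoded public key. A hedged sketch of the hashing step with hashlib; extracting the DER bytes from /etc/kubernetes/pki/ca.crt is left aside, so pub_key_der here is a placeholder argument:&lt;/p&gt;

```python
import hashlib

def ca_cert_hash(pub_key_der):
    """Return the kubeadm-style 'sha256:' + hex digest of DER public-key bytes."""
    return "sha256:" + hashlib.sha256(pub_key_der).hexdigest()

# Placeholder bytes for illustration only; on a real cluster you would pass
# the DER-encoded public key extracted from the CA certificate
print(ca_cert_hash(b""))
```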

&lt;p&gt;On the worker2 node, use worker2.sh: &lt;/p&gt;

&lt;p&gt;#!/bin/bash&lt;/p&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;

&lt;h1&gt;
  
  
  install container runtime
&lt;/h1&gt;

&lt;h1&gt;
  
  
  install containerd 1.7.27
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni plugins
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chmod 777 -R /var/run/containerd/&lt;/p&gt;

&lt;h1&gt;
  
  
  join the worker node to the cluster using kubeadm
&lt;/h1&gt;

&lt;p&gt;sudo hostname worker2&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;sudo chmod 777 .kube/ &lt;br&gt;
sudo chmod 777 -R /etc/kubernetes/&lt;/p&gt;

&lt;p&gt;sudo kubeadm join 77.132.100.19:6443 --token h0oyqy.f26d28ikx6acev65 \&lt;br&gt;
--discovery-token-ca-cert-hash sha256:3551b3a9363969229557eabb5a51a9d3851e335b49f0f2f2ededc535c2fdf77d &lt;/p&gt;

&lt;p&gt;Second stage: &lt;/p&gt;

&lt;p&gt;once the control plane and worker nodes are all up and running on Kubernetes v1.31, we upgrade the cluster to Kubernetes v1.32, which resolves to version 1.32.3.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  use    kubectl get  nodes   to   see  version  1.31.7   for kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
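&lt;p&gt;kubeadm upgrades should move one minor version at a time (v1.31 to v1.32 here, never skipping a release). A sketch of that skew check on version strings:&lt;/p&gt;

```python
def upgrade_path_ok(current, target):
    """Allow staying on the same minor or moving up by exactly one minor."""
    cur = tuple(int(x) for x in current.lstrip("v").split(".")[:2])
    tgt = tuple(int(x) for x in target.lstrip("v").split(".")[:2])
    return cur[0] == tgt[0] and tgt[1] - cur[1] in (0, 1)

print(upgrade_path_ok("v1.31.7", "v1.32.3"))  # True: one minor step
print(upgrade_path_ok("v1.31.7", "v1.33.0"))  # False: skips v1.32
```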

&lt;ol&gt;
&lt;li&gt;upgrade the control plane and its components:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check the configured Kubernetes package repository: &lt;/p&gt;

&lt;p&gt;pager /etc/apt/sources.list.d/kubernetes.list    &lt;/p&gt;

&lt;p&gt;output:&lt;/p&gt;

&lt;p&gt;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /&lt;/p&gt;

&lt;h1&gt;
  
  
  change permission  for /etc/apt/sources.list.d/kubernetes.list
&lt;/h1&gt;

&lt;p&gt;sudo  chmod 777  /etc/apt/sources.list.d/kubernetes.list &lt;/p&gt;

&lt;p&gt;Change v1.31 to v1.32 by editing /etc/apt/sources.list.d/kubernetes.list with nano:&lt;/p&gt;

&lt;h1&gt;
  
  
  deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.32/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.32/deb/&lt;/a&gt; / change pkg 1.32
&lt;/h1&gt;
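&lt;p&gt;As an alternative to editing the file by hand in nano, a sed one-liner can make the same change. A minimal sketch, demonstrated on a scratch copy; on a real node the target is /etc/apt/sources.list.d/kubernetes.list and the sed needs sudo:&lt;/p&gt;

```shell
# Scratch copy standing in for /etc/apt/sources.list.d/kubernetes.list.
repo_file=$(mktemp)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' > "$repo_file"
# Bump the minor version in the repository URL from v1.31 to v1.32.
sed -i 's|/v1\.31/|/v1.32/|' "$repo_file"
cat "$repo_file"
```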

&lt;p&gt;Determine which version you want to upgrade to:&lt;/p&gt;

&lt;p&gt;sudo apt update&lt;br&gt;
sudo apt-cache madison kubeadm &lt;/p&gt;
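&lt;p&gt;A minimal sketch of picking the newest package revision out of the madison listing, assuming the output format shown below. The sample lines are hypothetical; on a node you would pipe the real apt-cache madison kubeadm output instead:&lt;/p&gt;

```shell
# Hypothetical `apt-cache madison kubeadm` output, newest first.
madison='kubeadm | 1.32.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages
kubeadm | 1.32.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.32/deb  Packages'
# The second pipe-separated field of the first line is the newest version.
target=$(printf '%s\n' "$madison" | awk -F'|' 'NR==1 {gsub(/ /,"",$2); print $2}')
echo "$target"
```

The resulting string is exactly what the apt-get install pin below expects, e.g. kubeadm='1.32.3-1.1'.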

&lt;p&gt;sudo apt-mark unhold kubeadm &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubeadm='1.32.3-1.1' &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-mark hold kubeadm&lt;/p&gt;

&lt;p&gt;sudo kubeadm upgrade plan &lt;/p&gt;

&lt;p&gt;sudo kubeadm upgrade apply v1.32.3&lt;/p&gt;

&lt;p&gt;kubectl drain master --ignore-daemonsets&lt;/p&gt;

&lt;p&gt;sudo apt-mark unhold kubelet kubectl &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubelet='1.32.3-1.1' kubectl='1.32.3-1.1' &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-mark hold kubelet kubectl&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl restart kubelet&lt;/p&gt;

&lt;p&gt;kubectl uncordon master &lt;/p&gt;

&lt;p&gt;Upgrade the worker1 node:&lt;/p&gt;

&lt;p&gt;pager /etc/apt/sources.list.d/kubernetes.list &lt;/p&gt;

&lt;p&gt;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /&lt;/p&gt;

&lt;h1&gt;
  
  
  deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.32/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.32/deb/&lt;/a&gt; / change pkg 1.32
&lt;/h1&gt;

&lt;p&gt;sudo chmod 777 /etc/apt/sources.list.d/kubernetes.list &lt;/p&gt;

&lt;p&gt;sudo apt update&lt;br&gt;
sudo apt-cache madison kubeadm&lt;/p&gt;

&lt;p&gt;sudo apt-mark unhold kubeadm &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubeadm='1.32.3-1.1' &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-mark hold kubeadm&lt;/p&gt;

&lt;p&gt;sudo kubeadm upgrade node&lt;/p&gt;

&lt;p&gt;kubectl drain worker1 --ignore-daemonsets&lt;/p&gt;

&lt;p&gt;sudo apt-mark unhold kubelet kubectl &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubelet='1.32.3-1.1' kubectl='1.32.3-1.1' &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-mark hold kubelet kubectl&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl restart kubelet&lt;/p&gt;

&lt;p&gt;kubectl uncordon worker1&lt;/p&gt;

&lt;p&gt;Upgrade the worker2 node:&lt;/p&gt;

&lt;p&gt;pager /etc/apt/sources.list.d/kubernetes.list &lt;/p&gt;

&lt;p&gt;deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /&lt;/p&gt;

&lt;p&gt;sudo chmod 777 /etc/apt/sources.list.d/kubernetes.list &lt;/p&gt;

&lt;h1&gt;
  
  
  deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.32/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.32/deb/&lt;/a&gt; / change pkg 1.32
&lt;/h1&gt;

&lt;p&gt;sudo apt update&lt;br&gt;
sudo apt-cache madison kubeadm&lt;/p&gt;

&lt;p&gt;sudo apt-mark unhold kubeadm &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubeadm='1.32.3-1.1' &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-mark hold kubeadm&lt;/p&gt;

&lt;p&gt;sudo kubeadm upgrade node&lt;/p&gt;

&lt;p&gt;kubectl drain worker2 --ignore-daemonsets&lt;/p&gt;

&lt;p&gt;sudo apt-mark unhold kubelet kubectl &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y kubelet='1.32.3-1.1' kubectl='1.32.3-1.1' &amp;amp;&amp;amp; \&lt;br&gt;
sudo apt-mark hold kubelet kubectl&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl restart kubelet&lt;/p&gt;

&lt;p&gt;kubectl uncordon worker2&lt;/p&gt;

&lt;p&gt;Use kubectl get nodes to see that each node's version has changed to 1.32.3.&lt;br&gt;
Also check:&lt;br&gt;
    kubeadm version&lt;br&gt;
    kubelet --version&lt;br&gt;
    kubectl version --client&lt;/p&gt;

&lt;p&gt;Note :&lt;/p&gt;

&lt;p&gt;kubectl drain [node] evicts the pods and marks the node unschedulable.&lt;br&gt;
   kubectl uncordon [node] marks the node schedulable again, so pods can be placed on it.&lt;/p&gt;

&lt;p&gt;References: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.cilium.io/en/stable/installation/k8s-install-kubeadm/#installation-using-kubeadm" rel="noopener noreferrer"&gt;https://docs.cilium.io/en/stable/installation/k8s-install-kubeadm/#installation-using-kubeadm&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;CKA series 2024-2025:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=NtX75Ze47EU&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=35" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=NtX75Ze47EU&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&amp;amp;index=35&lt;/a&gt; &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setup multi node kubernetes cluster using kubeadm</title>
      <dc:creator>MohammedBanabila</dc:creator>
      <pubDate>Wed, 19 Mar 2025 17:13:02 +0000</pubDate>
      <link>https://dev.to/mohammed_banabila_3bc9e49/setup-multi-node-kubernetes-cluster-using-kubeadm-55e5</link>
      <guid>https://dev.to/mohammed_banabila_3bc9e49/setup-multi-node-kubernetes-cluster-using-kubeadm-55e5</guid>
      <description>&lt;p&gt;In this lab we deploy a control plane and worker nodes on a cloud provider, AWS in this example,&lt;br&gt;&lt;br&gt;
using EC2 instances running Ubuntu.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;create a VPC and subnets; in this lab the VPC uses CIDR block 77.132.0.0/16 with 3 public subnets of block size /24&lt;/li&gt;
&lt;li&gt;deploy 3 EC2 instances, one per node, and decide which will be the control plane and which will be worker nodes&lt;/li&gt;
&lt;li&gt;create 2 security groups, one for the control plane node and one for the worker nodes&lt;/li&gt;
&lt;li&gt;set up a network access control list&lt;/li&gt;
&lt;li&gt;associate Elastic IPs with those instances&lt;/li&gt;
&lt;li&gt;attach an internet gateway to the VPC&lt;/li&gt;
&lt;li&gt;create a route table and associate it with the subnets&lt;/li&gt;
&lt;li&gt;verify the infrastructure, e.g. the EC2 instances, is up and running&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;note:&lt;br&gt;
 on the control plane node we open the ports its components need:&lt;/p&gt;

&lt;p&gt;apiserver    port 6443&lt;br&gt;
   etcd         port 2379-2380&lt;br&gt;
   scheduler    port 10257&lt;br&gt;
   controller-manager   port 10259&lt;br&gt;
   access to the control plane node   port 22, only from my IP address /32&lt;/p&gt;

&lt;p&gt;and for the worker nodes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubelet    port 10250 
kube-proxy   port 10256 
nodePort      30000-32767 
access to worker node    port 22   only to  my ip  address /32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We add those ports to the control plane security group and the worker security group respectively.&lt;/p&gt;

&lt;p&gt;Before starting the setup, first update and upgrade packages using sudo apt update -y &amp;amp;&amp;amp; sudo apt upgrade -y.&lt;br&gt;
Steps to set up the control plane node as the master node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;disable swap&lt;/li&gt;
&lt;li&gt;update kernel params&lt;/li&gt;
&lt;li&gt;install the container runtime&lt;/li&gt;
&lt;li&gt;install runc&lt;/li&gt;
&lt;li&gt;install the CNI plugin&lt;/li&gt;
&lt;li&gt;install kubeadm, kubelet, kubectl and check the versions&lt;/li&gt;
&lt;li&gt;initialize the control plane node using kubeadm; the output prints the token that lets you join worker nodes&lt;/li&gt;
&lt;li&gt;install the Calico CNI plugin&lt;/li&gt;
&lt;li&gt;use kubeadm join on the workers to join the master&lt;br&gt;
optional: you can change the hostname on the nodes, e.g. to worker1 and worker2 for the worker nodes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;step  for setup  worker nodes : &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;disable swap&lt;/li&gt;
&lt;li&gt;update kernel params&lt;/li&gt;
&lt;li&gt;install the container runtime&lt;/li&gt;
&lt;li&gt;install runc&lt;/li&gt;
&lt;li&gt;install the CNI plugin&lt;/li&gt;
&lt;li&gt;install kubeadm, kubelet, kubectl and check the versions&lt;/li&gt;
&lt;li&gt;use kubeadm join to join the master&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: copy the kubeconfig from the control plane to the worker nodes at .kube/config.&lt;/p&gt;

&lt;p&gt;at the control plane node:&lt;br&gt;
     cd .kube/&lt;br&gt;
     cat config   and copy all of it&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
 you can deploy this from the console or as infrastructure as code.&lt;br&gt;
 In this lab, I used Pulumi with Python to provision the resources.&lt;/p&gt;
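&lt;p&gt;The program below reads every value from Pulumi config via cfg1.require. A hypothetical Pulumi.dev.yaml supplying those keys might look like this; the k8s-lab project-name prefix, the workstation IP, the AMI ID, and the instance type are placeholders to replace with your own:&lt;/p&gt;

```yaml
config:
  k8s-lab:block1: 77.132.0.0/16          # VPC CIDR
  k8s-lab:cidr1: 77.132.100.0/24         # public subnet 1
  k8s-lab:cidr2: 77.132.101.0/24         # public subnet 2
  k8s-lab:cidr3: 77.132.102.0/24         # public subnet 3
  k8s-lab:any_ipv4_traffic: 0.0.0.0/0
  k8s-lab:myips: 203.0.113.10/32         # your workstation IP
  k8s-lab:ami: ami-0aaaaaaaaaaaaaaaa     # Ubuntu AMI for your region
  k8s-lab:instance-type: t3.medium
```

With these set, pulumi up provisions the VPC, subnets, NACLs, security groups, instances, and Elastic IPs defined below.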

&lt;p&gt;"""An AWS Python Pulumi program"""&lt;br&gt;
import pulumi , pulumi_aws as aws , json&lt;/p&gt;

&lt;p&gt;cfg1=pulumi.Config()    &lt;/p&gt;

&lt;p&gt;vpc1=aws.ec2.Vpc(&lt;br&gt;
  "vpc1",&lt;br&gt;
  aws.ec2.VpcArgs(&lt;br&gt;
    cidr_block=cfg1.require(key='block1'),&lt;br&gt;
    tags={&lt;br&gt;
      "Name": "vpc1"&lt;br&gt;
    }&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;intgw1=aws.ec2.InternetGateway(&lt;br&gt;
  "intgw1",&lt;br&gt;
  aws.ec2.InternetGatewayArgs(&lt;br&gt;
    vpc_id=vpc1.id,&lt;br&gt;
    tags={&lt;br&gt;
      "Name": "intgw1"&lt;br&gt;
    }&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;zones="us-east-1a"&lt;br&gt;
publicsubnets=["subnet1","subnet2","subnet3"]&lt;br&gt;
cidr1=cfg1.require(key="cidr1")&lt;br&gt;
cidr2=cfg1.require(key="cidr2")&lt;br&gt;
cidr3=cfg1.require(key="cidr3")&lt;/p&gt;

&lt;p&gt;cidrs=[   cidr1 , cidr2 , cidr3  ]&lt;/p&gt;

&lt;p&gt;for allsubnets in range(len(publicsubnets)):&lt;br&gt;
    publicsubnets[allsubnets]=aws.ec2.Subnet(&lt;br&gt;
              publicsubnets[allsubnets],&lt;br&gt;
              aws.ec2.SubnetArgs(&lt;br&gt;
                vpc_id=vpc1.id,&lt;br&gt;
                cidr_block=cidrs[allsubnets],&lt;br&gt;
                map_public_ip_on_launch=False,&lt;br&gt;
                availability_zone=zones,&lt;br&gt;
                tags={&lt;br&gt;
                  "Name" : publicsubnets[allsubnets]&lt;br&gt;
                }&lt;br&gt;
              )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;table1=aws.ec2.RouteTable(&lt;br&gt;
  "table1",&lt;br&gt;
   aws.ec2.RouteTableArgs(&lt;br&gt;
       vpc_id=vpc1.id,&lt;br&gt;
       routes=[&lt;br&gt;
        aws.ec2.RouteTableRouteArgs(&lt;br&gt;
          cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
          gateway_id=intgw1.id&lt;br&gt;
        )&lt;br&gt;
       ],&lt;br&gt;
       tags={&lt;br&gt;
        "Name" :  "table1"&lt;br&gt;
       }&lt;br&gt;
   )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;associate1=aws.ec2.RouteTableAssociation(&lt;br&gt;
  "associate1",&lt;br&gt;
  aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
    subnet_id=publicsubnets[0].id,&lt;br&gt;
    route_table_id=table1.id&lt;br&gt;
  )&lt;br&gt;
)&lt;br&gt;
associate2=aws.ec2.RouteTableAssociation(&lt;br&gt;
  "associate2",&lt;br&gt;
  aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
    subnet_id=publicsubnets[1].id,&lt;br&gt;
    route_table_id=table1.id&lt;br&gt;
  )&lt;/p&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;associate3=aws.ec2.RouteTableAssociation(&lt;br&gt;
  "associate3",&lt;br&gt;
  aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
    subnet_id=publicsubnets[2].id,&lt;br&gt;
    route_table_id=table1.id&lt;br&gt;
  )&lt;/p&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;ingress_traffic=[&lt;br&gt;
    aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
      from_port=22,&lt;br&gt;
      to_port=22,&lt;br&gt;
      protocol="tcp",&lt;br&gt;
      cidr_block=cfg1.require(key="myips"),&lt;br&gt;
      icmp_code=0,&lt;br&gt;
      icmp_type=0,&lt;br&gt;
      action="allow",&lt;br&gt;
      rule_no=100&lt;br&gt;
    ),&lt;br&gt;
    aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
      from_port=22,&lt;br&gt;
      to_port=22,&lt;br&gt;
      protocol="tcp",&lt;br&gt;
      cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
      icmp_code=0,&lt;br&gt;
      icmp_type=0,&lt;br&gt;
      action="deny",&lt;br&gt;
      rule_no=101&lt;br&gt;
    ),&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws.ec2.NetworkAclIngressArgs(
  from_port=0,
  to_port=0,
  protocol="-1",
  cidr_block=cfg1.require(key="any_ipv4_traffic"),
  icmp_code=0,
  icmp_type=0,
  action="allow",
  rule_no=200
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;]&lt;/p&gt;

&lt;p&gt;egress_traffic=[&lt;br&gt;
     aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
      from_port=22,&lt;br&gt;
      to_port=22,&lt;br&gt;
      protocol="tcp",&lt;br&gt;
      cidr_block=cfg1.require(key="myips"),&lt;br&gt;
      icmp_code=0,&lt;br&gt;
      icmp_type=0,&lt;br&gt;
      action="allow",&lt;br&gt;
      rule_no=100&lt;br&gt;
    ),&lt;br&gt;
    aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
      from_port=22,&lt;br&gt;
      to_port=22,&lt;br&gt;
      protocol="tcp",&lt;br&gt;
      cidr_block=cfg1.require(key="any_ipv4_traffic"),&lt;br&gt;
      icmp_code=0,&lt;br&gt;
      icmp_type=0,&lt;br&gt;
      action="deny",&lt;br&gt;
      rule_no=101&lt;br&gt;
    ),&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws.ec2.NetworkAclEgressArgs(
  from_port=0,
  to_port=0,
  protocol="-1",
  cidr_block=cfg1.require(key="any_ipv4_traffic"),
  icmp_code=0,
  icmp_type=0,
  action="allow",
  rule_no=200
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;]&lt;/p&gt;

&lt;p&gt;nacls1=aws.ec2.NetworkAcl(&lt;br&gt;
    "nacls1",&lt;br&gt;
    aws.ec2.NetworkAclArgs(&lt;br&gt;
      vpc_id=vpc1.id,&lt;br&gt;
      ingress=ingress_traffic,&lt;br&gt;
      egress=egress_traffic,&lt;br&gt;
      tags={&lt;br&gt;
        "Name" : "nacls1"&lt;br&gt;
      },&lt;br&gt;
    )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink1=aws.ec2.NetworkAclAssociation(&lt;br&gt;
  "nacllink1",&lt;br&gt;
  aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
    network_acl_id=nacls1.id,&lt;br&gt;
    subnet_id=publicsubnets[0].id&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink2=aws.ec2.NetworkAclAssociation(&lt;br&gt;
  "nacllink2",&lt;br&gt;
  aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
    network_acl_id=nacls1.id,&lt;br&gt;
    subnet_id=publicsubnets[1].id&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;nacllink3=aws.ec2.NetworkAclAssociation(&lt;br&gt;
  "nacllink3",&lt;br&gt;
  aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
    network_acl_id=nacls1.id,&lt;br&gt;
    subnet_id=publicsubnets[2].id&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;masteringress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
 from_port=22,&lt;br&gt;
 to_port=22,&lt;br&gt;
 protocol="tcp",&lt;br&gt;
 cidr_blocks=[cfg1.require(key="myips")],&lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
 from_port=6443,&lt;br&gt;
 to_port=6443,&lt;br&gt;
 protocol="tcp",&lt;br&gt;
 cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
 from_port=2379,&lt;br&gt;
 to_port=2380,&lt;br&gt;
 protocol="tcp",&lt;br&gt;
 cidr_blocks=[vpc1.cidr_block]&lt;/p&gt;

&lt;p&gt;),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
 from_port=10249,&lt;br&gt;
 to_port=10260,&lt;br&gt;
 protocol="tcp",&lt;br&gt;
 cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
)&lt;br&gt;
]&lt;br&gt;
masteregress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
    from_port=0,&lt;br&gt;
    to_port=0,&lt;br&gt;
    protocol="-1",&lt;br&gt;
    cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]&lt;br&gt;
),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;mastersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
    "mastersecurity",&lt;br&gt;
    aws.ec2.SecurityGroupArgs(&lt;br&gt;
      vpc_id=vpc1.id,&lt;br&gt;
      name="mastersecurity",&lt;br&gt;
      ingress=masteringress,&lt;br&gt;
      egress=masteregress,&lt;br&gt;
      tags={&lt;br&gt;
        "Name" : "mastersecurity"&lt;br&gt;
      }&lt;br&gt;
    )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;workeringress=[&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
  from_port=22,&lt;br&gt;
  to_port=22,&lt;br&gt;
  protocol="tcp",&lt;br&gt;
  cidr_blocks=[cfg1.require(key="myips")]&lt;br&gt;
),&lt;/p&gt;

&lt;p&gt;aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
  from_port=10250,&lt;br&gt;
  to_port=10250,&lt;br&gt;
  protocol="tcp",&lt;br&gt;
  cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
  from_port=10256,&lt;br&gt;
  to_port=10256,&lt;br&gt;
  protocol="tcp",&lt;br&gt;
  cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
aws.ec2.SecurityGroupIngressArgs(&lt;br&gt;
  from_port=30000,&lt;br&gt;
  to_port=32767,&lt;br&gt;
  protocol="tcp",&lt;br&gt;
  cidr_blocks=[vpc1.cidr_block]&lt;br&gt;
),&lt;br&gt;
]&lt;br&gt;
workeregress=[&lt;br&gt;
aws.ec2.SecurityGroupEgressArgs(&lt;br&gt;
  from_port=0,&lt;br&gt;
  to_port=0,&lt;br&gt;
  protocol="-1",&lt;br&gt;
  cidr_blocks=[cfg1.require(key="any_ipv4_traffic")] &lt;/p&gt;

&lt;p&gt;),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;workersecurity=aws.ec2.SecurityGroup(&lt;br&gt;
    "workersecurity",&lt;br&gt;
    aws.ec2.SecurityGroupArgs(&lt;br&gt;
      vpc_id=vpc1.id,&lt;br&gt;
      name="workersecurity",&lt;br&gt;
      ingress=workeringress,&lt;br&gt;
      egress=workeregress,&lt;br&gt;
      tags={&lt;br&gt;
        "Name" : "workersecurity"&lt;br&gt;
      }&lt;br&gt;
    )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;master=aws.ec2.Instance(&lt;br&gt;
  "master",&lt;br&gt;
  aws.ec2.InstanceArgs(&lt;br&gt;
    ami=cfg1.require(key='ami'),&lt;br&gt;
    instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
    vpc_security_group_ids=[mastersecurity.id],&lt;br&gt;
    subnet_id=publicsubnets[0].id,&lt;br&gt;
    availability_zone=zones,&lt;br&gt;
    key_name="mykey1",&lt;br&gt;
    tags={&lt;br&gt;
      "Name" : "master"&lt;br&gt;
    },&lt;br&gt;
    ebs_block_devices=[&lt;br&gt;
      aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
        device_name="/dev/sdm",&lt;br&gt;
        volume_size=8,&lt;br&gt;
        volume_type="gp3"&lt;br&gt;
      )&lt;br&gt;
    ],&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker1=aws.ec2.Instance(&lt;br&gt;
  "worker1",&lt;br&gt;
  aws.ec2.InstanceArgs(&lt;br&gt;
    ami=cfg1.require(key='ami'),&lt;br&gt;
    instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
    vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
    subnet_id=publicsubnets[1].id,&lt;br&gt;
    availability_zone=zones,&lt;br&gt;
    key_name="mykey2",&lt;br&gt;
    tags={&lt;br&gt;
      "Name" : "worker1"&lt;br&gt;
    },&lt;br&gt;
    ebs_block_devices=[&lt;br&gt;
      aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
        device_name="/dev/sdb",&lt;br&gt;
        volume_size=8,&lt;br&gt;
        volume_type="gp3"&lt;br&gt;
      )&lt;br&gt;
    ],&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;worker2=aws.ec2.Instance(&lt;br&gt;
  "worker2",&lt;br&gt;
  aws.ec2.InstanceArgs(&lt;br&gt;
    ami=cfg1.require(key='ami'),&lt;br&gt;
    instance_type=cfg1.require(key='instance-type'),&lt;br&gt;
    vpc_security_group_ids=[workersecurity.id],&lt;br&gt;
    subnet_id=publicsubnets[2].id,&lt;br&gt;
    key_name="mykey2",&lt;br&gt;
    tags={&lt;br&gt;
      "Name" : "worker2"&lt;br&gt;
    },&lt;br&gt;
    ebs_block_devices=[&lt;br&gt;
      aws.ec2.InstanceEbsBlockDeviceArgs(&lt;br&gt;
        device_name="/dev/sdc",&lt;br&gt;
        volume_size=8,&lt;br&gt;
        volume_type="gp3"&lt;br&gt;
      )&lt;br&gt;
    ],&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eips=[  "eip1" , "eip2" , "eip3" ]&lt;br&gt;
for alleips in range(len(eips)):&lt;br&gt;
    eips[alleips]=aws.ec2.Eip(&lt;br&gt;
      eips[alleips],&lt;br&gt;
      aws.ec2.EipArgs(&lt;br&gt;
        domain="vpc",&lt;br&gt;
        tags={&lt;br&gt;
          "Name" : eips[alleips]&lt;br&gt;
        } &lt;br&gt;
      )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;eiplink1=aws.ec2.EipAssociation(&lt;br&gt;
  "eiplink1",&lt;br&gt;
  aws.ec2.EipAssociationArgs(&lt;br&gt;
    allocation_id=eips[0].id,&lt;br&gt;
    instance_id=master.id&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink2=aws.ec2.EipAssociation(&lt;br&gt;
  "eiplink2",&lt;br&gt;
  aws.ec2.EipAssociationArgs(&lt;br&gt;
    allocation_id=eips[1].id,&lt;br&gt;
    instance_id=worker1.id&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;eiplink3=aws.ec2.EipAssociation(&lt;br&gt;
  "eiplink3",&lt;br&gt;
  aws.ec2.EipAssociationArgs(&lt;br&gt;
    allocation_id=eips[2].id,&lt;br&gt;
    instance_id=worker2.id&lt;br&gt;
  )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pulumi.export("master_eip" , value=eips[0].public_ip  )&lt;/p&gt;

&lt;p&gt;pulumi.export("worker1_eip", value=eips[1].public_ip  )&lt;/p&gt;

&lt;p&gt;pulumi.export("worker2_eip", value=eips[2].public_ip  )&lt;/p&gt;

&lt;p&gt;pulumi.export("master_private_ip", value=master.private_ip)&lt;/p&gt;

&lt;p&gt;pulumi.export("worker1_private_ip" , value=worker1.private_ip)&lt;/p&gt;

&lt;p&gt;pulumi.export( "worker2_private_ip" , value=worker2.private_ip )&lt;/p&gt;

&lt;p&gt;================================================================&lt;/p&gt;

&lt;p&gt;Notes:&lt;br&gt;
 create 3 shell scripts, one for the control plane node and one for each worker node&lt;/p&gt;

&lt;p&gt;example:&lt;br&gt;
  master.sh   for   control plane node &lt;br&gt;
  worker1.sh  for  worker node 1 &lt;br&gt;
  worker2.sh  for worker node  2&lt;/p&gt;

&lt;p&gt;give permission to execute those scripts:&lt;br&gt;&lt;br&gt;
 sudo chmod +x master.sh&lt;br&gt;&lt;br&gt;
 sudo chmod +x worker1.sh&lt;br&gt;
 sudo chmod +x worker2.sh&lt;/p&gt;

&lt;p&gt;Under  master.sh :&lt;/p&gt;

&lt;h1&gt;
  
  
  !/bin/bash
&lt;/h1&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;
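&lt;p&gt;The sed above comments out any fstab line containing " swap " so swap stays off after a reboot. A minimal demo on a scratch file with a typical hypothetical fstab; the real script targets /etc/fstab and needs sudo:&lt;/p&gt;

```shell
# Scratch file standing in for /etc/fstab.
fstab_copy=$(mktemp)
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swapfile none swap sw 0 0' > "$fstab_copy"
# Comment out only the swap line; other mounts are left untouched.
sed -i '/ swap / s/^\(.*\)$/#\1/' "$fstab_copy"
cat "$fstab_copy"
```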

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables  = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward                 = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;

&lt;h1&gt;
  
  
  install container runtime
&lt;/h1&gt;

&lt;h1&gt;
  
  
  install containerd 1.7.27
&lt;/h1&gt;

&lt;p&gt;curl -LO  &lt;a href="https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO  &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;
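&lt;p&gt;The last sed above flips containerd's runc cgroup driver to systemd, which must match the kubelet's cgroup driver. A minimal demo of the same substitution on a scratch file; the real script edits /etc/containerd/config.toml, which the containerd config default step just generated:&lt;/p&gt;

```shell
# Scratch file with the default (cgroupfs) setting, as generated by
# `containerd config default`.
toml_copy=$(mktemp)
echo '            SystemdCgroup = false' > "$toml_copy"
# Switch the runc runtime to the systemd cgroup driver.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$toml_copy"
grep 'SystemdCgroup' "$toml_copy"
```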

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc  &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni plugins
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config  runtime-endpoint unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chmod 777  -R /var/run/containerd/&lt;/p&gt;

&lt;h1&gt;
  
  
  initialize the control plane node using kubeadm
&lt;/h1&gt;

&lt;p&gt;sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=77.132.100.6 --node-name master&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;sudo chmod  777 .kube/ &lt;br&gt;
sudo chmod  777 -R /etc/kubernetes/&lt;/p&gt;

&lt;p&gt;export KUBECONFIG=/etc/kubernetes/admin.conf&lt;/p&gt;

&lt;h1&gt;
  
  
  install cni calico
&lt;/h1&gt;

&lt;p&gt;kubectl create -f &lt;a href="https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;curl -o custom-resources.yaml &lt;a href="https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;kubectl apply -f custom-resources.yaml&lt;/p&gt;

&lt;h1&gt;
  
  
  On control plane, get join command:
&lt;/h1&gt;

&lt;p&gt;sudo kubeadm token create --print-join-command &amp;gt;&amp;gt; join-master.sh&lt;br&gt;
sudo chmod +x join-master.sh&lt;/p&gt;

&lt;p&gt;sudo kubeadm join 77.132.100.6:6443 --token 26qu35.wtps7hdhutk22nt8 \&lt;br&gt;
    --discovery-token-ca-cert-hash sha256:4c0cf2ce4c52d2b1e767056ffd7e2196358ef95c508f1a1abbe83e706a273538 &lt;/p&gt;

&lt;p&gt;Under worker1.sh : &lt;/p&gt;

&lt;h1&gt;
  
  
  !/bin/bash
&lt;/h1&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables  = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward                 = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;

&lt;h1&gt;
  
  
  install container runtime
&lt;/h1&gt;

&lt;h1&gt;
  
  
  install containerd 1.7.27
&lt;/h1&gt;

&lt;p&gt;curl -LO  &lt;a href="https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO  &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc  &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni plugins
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config  runtime-endpoint unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chmod 777  -R /var/run/containerd/&lt;/p&gt;

&lt;p&gt;sudo  hostname worker1&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;sudo chmod  777 .kube/ &lt;br&gt;
sudo chmod  777 -R /etc/kubernetes/&lt;/p&gt;

&lt;p&gt;sudo kubeadm join 77.132.100.6:6443 --token 26qu35.wtps7hdhutk22nt8 \&lt;br&gt;
    --discovery-token-ca-cert-hash sha256:4c0cf2ce4c52d2b1e767056ffd7e2196358ef95c508f1a1abbe83e706a273538 &lt;/p&gt;

&lt;p&gt;=======================&lt;br&gt;
Under worker2.sh  &lt;/p&gt;

&lt;h1&gt;
  
  
  #!/bin/bash
&lt;/h1&gt;

&lt;h1&gt;
  
  
  disable swap
&lt;/h1&gt;

&lt;p&gt;sudo swapoff -a&lt;br&gt;
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab&lt;/p&gt;
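Note the sed group syntax: basic regular expressions require escaped capture groups, so the working form is `sed -i '/ swap / s/^\(.*\)$/#\1/' /etc/fstab`. It can be rehearsed on a throwaway copy first (sample path and UUIDs are invented):

```shell
# Sample fstab with one swap entry.
printf '%s\n' \
  'UUID=aaaa / ext4 defaults 0 1' \
  'UUID=bbbb none swap sw 0 0' > /tmp/fstab.sample

# Comment out any line containing " swap " (escaped \( \) groups for basic sed).
sed -i '/ swap / s/^\(.*\)$/#\1/' /tmp/fstab.sample
cat /tmp/fstab.sample
```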

&lt;h1&gt;
  
  
  update kernel params
&lt;/h1&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/modules-load.d/k8s.conf&lt;br&gt;
overlay&lt;br&gt;
br_netfilter&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo modprobe overlay&lt;br&gt;
sudo modprobe br_netfilter&lt;/p&gt;

&lt;p&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/sysctl.d/k8s.conf&lt;br&gt;
net.bridge.bridge-nf-call-iptables  = 1&lt;br&gt;
net.bridge.bridge-nf-call-ip6tables = 1&lt;br&gt;
net.ipv4.ip_forward                 = 1&lt;br&gt;
EOF&lt;/p&gt;

&lt;p&gt;sudo sysctl --system&lt;/p&gt;

&lt;h1&gt;
  
  
  install container runtime
&lt;/h1&gt;

&lt;h1&gt;
  
  
  install containerd 1.7.27
&lt;/h1&gt;

&lt;p&gt;curl -LO  &lt;a href="https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz&lt;/p&gt;

&lt;p&gt;curl -LO  &lt;a href="https://raw.githubusercontent.com/containerd/containerd/main/containerd.service" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/containerd/containerd/main/containerd.service&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /usr/local/lib/systemd/system/&lt;br&gt;
sudo mv containerd.service /usr/local/lib/systemd/system/&lt;br&gt;
sudo mkdir -p /etc/containerd&lt;br&gt;
sudo containerd config default | sudo tee /etc/containerd/config.toml&lt;br&gt;
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml&lt;/p&gt;

&lt;p&gt;sudo systemctl daemon-reload&lt;br&gt;
sudo systemctl enable containerd --now&lt;/p&gt;

&lt;h1&gt;
  
  
  install runc
&lt;/h1&gt;

&lt;p&gt;curl -LO &lt;a href="https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;sudo install -m 755 runc.amd64 /usr/local/sbin/runc  &lt;/p&gt;

&lt;h1&gt;
  
  
  install cni plugins
&lt;/h1&gt;

&lt;p&gt;curl -LO "&lt;a href="https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz" rel="noopener noreferrer"&gt;https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;sudo mkdir -p /opt/cni/bin&lt;/p&gt;

&lt;p&gt;sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz&lt;/p&gt;

&lt;h1&gt;
  
  
  install kubeadm, kubelet, kubectl
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y apt-transport-https ca-certificates curl gpg&lt;br&gt;
curl -fsSL &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key&lt;/a&gt; | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg&lt;br&gt;
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] &lt;a href="https://pkgs.k8s.io/core:/stable:/v1.31/deb/" rel="noopener noreferrer"&gt;https://pkgs.k8s.io/core:/stable:/v1.31/deb/&lt;/a&gt; /' | sudo tee /etc/apt/sources.list.d/kubernetes.list&lt;br&gt;
sudo apt-get update -y&lt;br&gt;
sudo apt-get install -y kubelet kubeadm kubectl&lt;br&gt;
sudo apt-mark hold kubelet kubeadm kubectl&lt;/p&gt;

&lt;p&gt;kubeadm version &lt;br&gt;
kubelet --version&lt;br&gt;
kubectl version --client&lt;/p&gt;

&lt;h1&gt;
  
  
  configure crictl to work with containerd
&lt;/h1&gt;

&lt;p&gt;sudo crictl config  runtime-endpoint unix:///run/containerd/containerd.sock&lt;br&gt;
sudo chmod 777  -R /var/run/containerd/&lt;/p&gt;

&lt;p&gt;sudo  hostname worker2&lt;/p&gt;

&lt;p&gt;mkdir -p $HOME/.kube&lt;br&gt;
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config&lt;br&gt;
sudo chown $(id -u):$(id -g) $HOME/.kube/config&lt;/p&gt;

&lt;p&gt;sudo chmod  777 .kube/ &lt;br&gt;
sudo chmod  777 -R /etc/kubernetes/&lt;/p&gt;

&lt;p&gt;sudo kubeadm join 77.132.100.6:6443 --token 26qu35.wtps7hdhutk22nt8 \&lt;br&gt;
    --discovery-token-ca-cert-hash sha256:4c0cf2ce4c52d2b1e767056ffd7e2196358ef95c508f1a1abbe83e706a273538 &lt;/p&gt;
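If the `--discovery-token-ca-cert-hash` value was not saved from `kubeadm init`, it can be recomputed from the cluster CA certificate (`/etc/kubernetes/pki/ca.crt` on a control-plane node) with the standard openssl pipeline from the kubeadm docs. The sketch below generates a throwaway certificate only so the pipeline has input:

```shell
# Throwaway self-signed cert standing in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# SHA-256 of the DER-encoded public key, the format kubeadm join expects.
HASH=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```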

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Remember that not all CNI plugins support network policies;&lt;br&gt;
you can use the Calico or Cilium plugins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;kubeadm, kubelet, and kubectl should be the same package version;&lt;br&gt;
 in this lab I used the v1.31 repository, which installed 1.31.7.&lt;br&gt;
references:&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/reference/networking/ports-and-protocols/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/reference/networking/ports-and-protocols/&lt;/a&gt; &lt;br&gt;
&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/opencontainers/runc/releases/" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc/releases/&lt;/a&gt; &lt;br&gt;
&lt;a href="https://github.com/containerd/containerd/releases" rel="noopener noreferrer"&gt;https://github.com/containerd/containerd/releases&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/opencontainers/runc" rel="noopener noreferrer"&gt;https://github.com/opencontainers/runc&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CKA 2024-2025 series:&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=6_gMoe7Ik8k&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=6_gMoe7Ik8k&amp;amp;list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Note: for --apiserver-advertise-address, use the private IP address of the control-plane node, for example 77.132.100.6.&lt;/p&gt;
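To find the private address to advertise, you can list the host's IPv4 addresses; a minimal sketch (on an EC2 node the first global IPv4 is normally the VPC-private address, but verify it falls inside your subnet):

```shell
# First IPv4 address assigned to this host (loopback is excluded by hostname -I).
PRIVATE_IP=$(hostname -I | tr ' ' '\n' | grep -Em1 '^[0-9]+(\.[0-9]+){3}$')
echo "$PRIVATE_IP"
```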

</description>
    </item>
    <item>
      <title>pulumi stack reference and Kubernetes provider to be centralized stack cluster for Eks auto mode and use Kubernetes provider</title>
      <dc:creator>MohammedBanabila</dc:creator>
      <pubDate>Tue, 28 Jan 2025 15:14:41 +0000</pubDate>
      <link>https://dev.to/mohammed_banabila_3bc9e49/pulumi-stack-reference-and-kubernetes-provider-to-be-centralized-stack-cluster-for-eks-auto-mode-45ng</link>
      <guid>https://dev.to/mohammed_banabila_3bc9e49/pulumi-stack-reference-and-kubernetes-provider-to-be-centralized-stack-cluster-for-eks-auto-mode-45ng</guid>
<description>&lt;p&gt;EKS Auto Mode&lt;/p&gt;

&lt;p&gt;Before this feature, we had to manage compute, networking, storage, observability, authentication, and authorization ourselves alongside the EKS cluster in order to deploy and access workloads.&lt;br&gt;
With Auto Mode, EKS handles compute, networking, storage, observability, authentication, and authorization, leaving you to simply deploy workloads&lt;br&gt;
and manage them according to your requirements.&lt;br&gt;
To enable Auto Mode on a cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Enable access with the authentication mode set to either API or API_AND_CONFIG_MAP&lt;/li&gt;
&lt;li&gt; Enable compute, which manages and operates node pools
&lt;/li&gt;
&lt;li&gt; Enable storage, which lets you use EBS volumes to store data
&lt;/li&gt;
&lt;li&gt; Enable VPC configuration to manage subnets, endpoint public access, and endpoint private access&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable the Kubernetes network configuration, which lets you deploy Elastic Load Balancing&lt;br&gt;
without installing the AWS Load Balancer Controller; optionally set the Kubernetes service CIDR (IPv4 or IPv6), or one will be created on your behalf (either 10.100.0.0/16 or 172.20.0.0/16)&lt;/p&gt;

&lt;p&gt;Note: You can enable EKS Auto Mode on an existing cluster by enabling all of these components.&lt;br&gt;
You can deploy EKS Auto Mode with:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eksctl&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure as code, for example Terraform, Pulumi, or CloudFormation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Management Console&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My studies references:&lt;br&gt;
1. AWS re:Invent 2024 - Automate your entire Kubernetes cluster with Amazon EKS Auto Mode (KUB204-NEW)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=a_aDPo9oTMo&amp;amp;t=919s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=a_aDPo9oTMo&amp;amp;t=919s&lt;/a&gt;&lt;br&gt;
2. Dive into Amazon EKS Auto Mode&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=qxPP6zb_3mM&amp;amp;t=2092s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=qxPP6zb_3mM&amp;amp;t=2092s&lt;/a&gt;&lt;br&gt;
3. AWS re:Invent 2024 - Simplify Kubernetes workloads with Karpenter &amp;amp; Amazon EKS Auto Mode (KUB312)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=JwzP8I8tdaY" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=JwzP8I8tdaY&lt;/a&gt;&lt;br&gt;
4. Simplify Amazon EKS Cluster Access Management using EKS Access Entry API&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=NvfOulAqy8w&amp;amp;t=48s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=NvfOulAqy8w&amp;amp;t=48s&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. EKS Auto - Theory and Live Demo and Q/A (From Principal SA at AWS)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=qxVUMyW3a_g" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=qxVUMyW3a_g&lt;/a&gt;&lt;br&gt;
6. Simplified Amazon EKS Access - NEW Cluster Access Management Controls&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=ae25cbV5Lxo" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=ae25cbV5Lxo&lt;/a&gt;&lt;br&gt;
7. Automate cluster infrastructure with EKS Auto Mode&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/automode.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/automode.html&lt;/a&gt;&lt;br&gt;
8. A deep dive into simplified Amazon EKS access management controls&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;             example of deploy eks auto mode with pulumi at python  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;a"""An AWS Python Pulumi program"""&lt;br&gt;
import pulumi , pulumi_aws as aws ,json &lt;/p&gt;

&lt;p&gt;cfg1=pulumi.Config()&lt;/p&gt;

&lt;p&gt;eksvpc1=aws.ec2.Vpc(&lt;br&gt;
    "eksvpc1",&lt;br&gt;
    aws.ec2.VpcArgs(&lt;br&gt;
        cidr_block=cfg1.require_secret(key="block1"),&lt;br&gt;
        tags={&lt;br&gt;
            "Name": "eksvpc1",&lt;br&gt;
        },&lt;br&gt;
        enable_dns_hostnames=True,&lt;br&gt;
        enable_dns_support=True&lt;br&gt;
    )&lt;br&gt;
)&lt;br&gt;
intgw1=aws.ec2.InternetGateway(&lt;br&gt;
    "intgw1" , &lt;br&gt;
    aws.ec2.InternetGatewayArgs(&lt;br&gt;
        vpc_id=eksvpc1.id,&lt;br&gt;
        tags={&lt;br&gt;
            "Name": "intgw1",&lt;br&gt;
        },&lt;br&gt;
    )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;pbsubs=["public1","public2"]&lt;br&gt;
zones=["us-east-1a","us-east-1b"]&lt;br&gt;
pbcidr1=cfg1.require_secret("cidr1")&lt;br&gt;
pbcidr2=cfg1.require_secret("cidr2")&lt;br&gt;
pbcidrs=[pbcidr1,pbcidr2]&lt;br&gt;
for allpbsub in range(len(pbsubs)):&lt;br&gt;
    pbsubs[allpbsub]=aws.ec2.Subnet(&lt;br&gt;
        pbsubs[allpbsub],&lt;br&gt;
        aws.ec2.SubnetArgs(&lt;br&gt;
            vpc_id=eksvpc1.id,&lt;br&gt;
            cidr_block=pbcidrs[allpbsub],&lt;br&gt;
            availability_zone=zones[allpbsub],&lt;br&gt;
            map_public_ip_on_launch=True,&lt;br&gt;
            tags={&lt;br&gt;
                "Name" : pbsubs[allpbsub],&lt;br&gt;
                "kubernetes.io/role/elb": "1",&lt;br&gt;
            }&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;ndsubs=["node1","node2"]&lt;br&gt;
ndcidr1=cfg1.require_secret("cidr3")&lt;br&gt;
ndcidr2=cfg1.require_secret("cidr4")&lt;br&gt;
ndcidrs=[ndcidr1,ndcidr2]&lt;br&gt;
for allndsub in range(len(ndsubs)):&lt;br&gt;
    ndsubs[allndsub]=aws.ec2.Subnet(&lt;br&gt;
        ndsubs[allndsub],&lt;br&gt;
        aws.ec2.SubnetArgs(&lt;br&gt;
            vpc_id=eksvpc1.id,&lt;br&gt;
            cidr_block=ndcidrs[allndsub],&lt;br&gt;
            availability_zone=zones[allndsub],&lt;br&gt;
            tags={&lt;br&gt;
                "Name" : ndsubs[allndsub],&lt;br&gt;
                "kubernetes.io/role/internal-elb": "1",&lt;br&gt;
            }&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;publictable=aws.ec2.RouteTable(&lt;br&gt;
        "publictable",&lt;br&gt;
        aws.ec2.RouteTableArgs(&lt;br&gt;
            vpc_id=eksvpc1.id,&lt;br&gt;
            routes=[&lt;br&gt;
                aws.ec2.RouteTableRouteArgs(&lt;br&gt;
                    cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),&lt;br&gt;
                    gateway_id=intgw1.id,&lt;br&gt;
                ),&lt;br&gt;
            ],&lt;br&gt;
            tags={&lt;br&gt;
                "Name": "publictable",&lt;br&gt;
            },&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;tblink1=aws.ec2.RouteTableAssociation(&lt;br&gt;
        "tblink1",&lt;br&gt;
        aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
            subnet_id=pbsubs[0].id,&lt;br&gt;
            route_table_id=publictable.id,&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;tblink2=aws.ec2.RouteTableAssociation(&lt;br&gt;
        "tblink2",&lt;br&gt;
        aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
            subnet_id=pbsubs[1].id,&lt;br&gt;
            route_table_id=publictable.id,&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;eips=["eip1", "eip2"]&lt;br&gt;
for alleip in range(len(eips)):&lt;br&gt;
    eips[alleip]=aws.ec2.Eip(&lt;br&gt;
        eips[alleip],&lt;br&gt;
        aws.ec2.EipArgs(&lt;br&gt;
            domain="vpc",&lt;br&gt;
            tags={&lt;br&gt;
                "Name": eips[alleip],&lt;br&gt;
            },&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;natgws=["natgw1", "natgw2"]&lt;br&gt;
allocates=[eips[0].id , eips[1].id]&lt;br&gt;
for allnat in range(len(natgws)):&lt;br&gt;
    natgws[allnat]=aws.ec2.NatGateway(&lt;br&gt;
        natgws[allnat],&lt;br&gt;
        aws.ec2.NatGatewayArgs(&lt;br&gt;
            subnet_id=pbsubs[allnat].id,&lt;br&gt;
            allocation_id=allocates[allnat],&lt;br&gt;
            tags={&lt;br&gt;
                "Name": natgws[allnat],&lt;br&gt;
            },&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;privatetables=["privatetable1" , "privatetable2"]&lt;br&gt;
for allprivtable in range(len(privatetables)):&lt;br&gt;
    privatetables[allprivtable]=aws.ec2.RouteTable(&lt;br&gt;
        privatetables[allprivtable],&lt;br&gt;
        aws.ec2.RouteTableArgs(&lt;br&gt;
            vpc_id=eksvpc1.id,&lt;br&gt;
            routes=[&lt;br&gt;
              aws.ec2.RouteTableRouteArgs(&lt;br&gt;
                  cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),&lt;br&gt;
                  # one default route per table, via that AZ's NAT gateway&lt;br&gt;
                  nat_gateway_id=natgws[allprivtable].id&lt;br&gt;
                  ),&lt;br&gt;
            ],&lt;br&gt;
            tags={&lt;br&gt;
                "Name": privatetables[allprivtable],&lt;br&gt;
            },&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;privatetablelink1=aws.ec2.RouteTableAssociation(&lt;br&gt;
    "privatetablelink1",&lt;br&gt;
    aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
        subnet_id=ndsubs[0].id,&lt;br&gt;
        route_table_id=privatetables[0].id,&lt;br&gt;
    )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;privatetablelink2=aws.ec2.RouteTableAssociation(&lt;br&gt;
    "privatetablelink2",&lt;br&gt;
    aws.ec2.RouteTableAssociationArgs(&lt;br&gt;
        subnet_id=ndsubs[1].id,&lt;br&gt;
        route_table_id=privatetables[1].id,&lt;br&gt;
    )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;inbound_traffic=[&lt;br&gt;
    aws.ec2.NetworkAclIngressArgs(&lt;br&gt;
        from_port=0,&lt;br&gt;
        to_port=0,&lt;br&gt;
        rule_no=100,&lt;br&gt;
        action="allow",&lt;br&gt;
        protocol="-1",&lt;br&gt;
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),&lt;br&gt;
        icmp_code=0,&lt;br&gt;
        icmp_type=0&lt;br&gt;
        ),&lt;br&gt;
]&lt;br&gt;
outbound_traffic=[&lt;br&gt;
    aws.ec2.NetworkAclEgressArgs(&lt;br&gt;
        from_port=0,&lt;br&gt;
        to_port=0,&lt;br&gt;
        rule_no=100,&lt;br&gt;
        action="allow",&lt;br&gt;
        protocol="-1",&lt;br&gt;
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),&lt;br&gt;
        icmp_code=0,&lt;br&gt;
        icmp_type=0&lt;br&gt;
        ),&lt;br&gt;
]&lt;/p&gt;

&lt;p&gt;nacllists=["mynacls1" , "mynacls2"]&lt;br&gt;
for allnacls in range(len(nacllists)):&lt;br&gt;
    nacllists[allnacls]=aws.ec2.NetworkAcl(&lt;br&gt;
        nacllists[allnacls],&lt;br&gt;
        aws.ec2.NetworkAclArgs(&lt;br&gt;
            vpc_id=eksvpc1.id,&lt;br&gt;
            ingress=inbound_traffic,&lt;br&gt;
            egress=outbound_traffic,&lt;br&gt;
            tags={&lt;br&gt;
                "Name": nacllists[allnacls],&lt;br&gt;
            }&lt;br&gt;
        )&lt;br&gt;
    )&lt;br&gt;
nacls30=aws.ec2.NetworkAclAssociation(&lt;br&gt;
        "nacls30",&lt;br&gt;
        aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
            network_acl_id=nacllists[0].id,&lt;br&gt;
            subnet_id=pbsubs[0].id&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;nacls31=aws.ec2.NetworkAclAssociation(&lt;br&gt;
        "nacls31",&lt;br&gt;
        aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
            network_acl_id=nacllists[0].id,&lt;br&gt;
            subnet_id=pbsubs[1].id&lt;br&gt;
        )&lt;br&gt;
    )   &lt;/p&gt;

&lt;p&gt;nacls10=aws.ec2.NetworkAclAssociation(&lt;br&gt;
        "nacls10",&lt;br&gt;
        aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
            network_acl_id=nacllists[1].id,&lt;br&gt;
            subnet_id=ndsubs[0].id&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;nacls11=aws.ec2.NetworkAclAssociation(&lt;br&gt;
        "nacls11",&lt;br&gt;
        aws.ec2.NetworkAclAssociationArgs(&lt;br&gt;
            network_acl_id=nacllists[1].id,&lt;br&gt;
            subnet_id=ndsubs[1].id&lt;br&gt;
        )&lt;br&gt;
    ) &lt;/p&gt;

&lt;p&gt;eksrole=aws.iam.Role(&lt;br&gt;
    "eksrole",&lt;br&gt;
    aws.iam.RoleArgs(&lt;br&gt;
        assume_role_policy=json.dumps({&lt;br&gt;
            "Version":"2012-10-17",&lt;br&gt;
            "Statement":[&lt;br&gt;
                {&lt;br&gt;
                    "Effect":"Allow",&lt;br&gt;
                    "Principal":{&lt;br&gt;
                        "Service":"eks.amazonaws.com"&lt;br&gt;
                    },&lt;br&gt;
                    "Action": [&lt;br&gt;
                         "sts:AssumeRole",&lt;br&gt;
                         "sts:TagSession",&lt;br&gt;
                        ]&lt;br&gt;
                }&lt;br&gt;
            ]  })&lt;br&gt;&lt;br&gt;
))&lt;/p&gt;

&lt;p&gt;nodesrole=aws.iam.Role(&lt;br&gt;
    "nodesrole",&lt;br&gt;
    aws.iam.RoleArgs(&lt;br&gt;
        assume_role_policy=json.dumps({&lt;br&gt;
            "Version": "2012-10-17",&lt;br&gt;
            "Statement":[&lt;br&gt;
                {&lt;br&gt;
                    "Effect": "Allow",&lt;br&gt;
                    "Principal":{&lt;br&gt;
                        "Service":"ec2.amazonaws.com"&lt;br&gt;
                    },&lt;br&gt;
                    "Action": "sts:AssumeRole"&lt;br&gt;
                }&lt;br&gt;&lt;br&gt;
            ]&lt;br&gt;
        })&lt;br&gt;
))&lt;/p&gt;

&lt;p&gt;clusterattach1=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "clusterattach1",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=eksrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;clusterattach2=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "clusterattach2",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=eksrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEKSComputePolicy",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;clusterattach3=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "clusterattach3",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=eksrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;clusterattach4=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "clusterattach4",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=eksrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;clusterattach5=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "clusterattach5",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=eksrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;nodesattach1=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "nodesattach1",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=nodesrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;nodesattach2=aws.iam.RolePolicyAttachment(&lt;br&gt;
        "nodesattach2",&lt;br&gt;
        aws.iam.RolePolicyAttachmentArgs(&lt;br&gt;
            role=nodesrole.name,&lt;br&gt;
            policy_arn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly",&lt;br&gt;
        )&lt;br&gt;
    )&lt;/p&gt;

&lt;p&gt;automode=aws.eks.Cluster(&lt;br&gt;
    "automode",&lt;br&gt;
    aws.eks.ClusterArgs(&lt;br&gt;
        name="automode",&lt;br&gt;
        bootstrap_self_managed_addons=False,&lt;br&gt;
        role_arn=eksrole.arn,&lt;br&gt;
        version="1.31",&lt;br&gt;
        compute_config={&lt;br&gt;
          "enabled": True,&lt;br&gt;
          "node_pools": ["general-purpose"],&lt;br&gt;
          "node_role_arn": nodesrole.arn,&lt;br&gt;
        },&lt;br&gt;
        access_config={&lt;br&gt;
         "authentication_mode": "API",&lt;br&gt;
        },&lt;br&gt;
        storage_config={&lt;br&gt;
           "block_storage": {&lt;br&gt;
            "enabled": True,&lt;br&gt;
        },&lt;br&gt;&lt;br&gt;
          },&lt;br&gt;
        kubernetes_network_config={&lt;br&gt;
            "elastic_load_balancing": {&lt;br&gt;
            "enabled": True,&lt;br&gt;
            },&lt;br&gt;
        },&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    tags={
      "Name" : "automode"    
    },
    vpc_config={
    "endpoint_private_access": True,
    "endpoint_public_access": True,
    "public_access_cidrs": [
        cfg1.require_secret(key="myips"),
    ],
    "subnet_ids": [
        ndsubs[0].id,
        ndsubs[1].id,
    ],
    }
),
opts=pulumi.ResourceOptions(
        depends_on=[
          clusterattach1,
          clusterattach2,
          clusterattach3,
          clusterattach4,
          clusterattach5,
        ]
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;)&lt;/p&gt;

&lt;p&gt;myentry1=aws.eks.AccessEntry(&lt;br&gt;
    "myentry1",&lt;br&gt;
     aws.eks.AccessEntryArgs(&lt;br&gt;
        cluster_name=automode.name,&lt;br&gt;
        principal_arn=cfg1.require_secret(key="principal"),&lt;br&gt;
        type="STANDARD"&lt;br&gt;
     ),&lt;br&gt;
     opts=pulumi.ResourceOptions(&lt;br&gt;
            depends_on=[&lt;br&gt;
              automode&lt;br&gt;
            ]&lt;br&gt;
        )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;entrypolicy1=aws.eks.AccessPolicyAssociation(&lt;br&gt;
    "entrypolicy1",&lt;br&gt;
    aws.eks.AccessPolicyAssociationArgs(&lt;br&gt;
        cluster_name=automode.name,&lt;br&gt;
        principal_arn=myentry1.principal_arn,&lt;br&gt;
        policy_arn="arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",&lt;br&gt;
        access_scope={&lt;br&gt;
            "type" : "cluster",&lt;br&gt;
        }&lt;br&gt;
    ),&lt;br&gt;
    opts=pulumi.ResourceOptions(&lt;br&gt;
            depends_on=[&lt;br&gt;
              automode&lt;br&gt;
            ]&lt;br&gt;
        )&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;Note:&lt;br&gt;
   In case you can't access the EKS cluster from your local PC or laptop,&lt;br&gt;
  follow this video: Fixing 'The Server Has Asked for Credentials' in Kubernetes Cluster API - EKS AWS Production&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=4aLwQASVHAE" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=4aLwQASVHAE&lt;/a&gt;&lt;br&gt;
Create deploy.yaml for the Deployment and NodePort Service:&lt;br&gt;
apiVersion: apps/v1&lt;br&gt;
kind: Deployment&lt;br&gt;
metadata:&lt;br&gt;
  name:  nginx-deploy&lt;br&gt;
  namespace: default&lt;br&gt;
  labels:&lt;br&gt;
    app: nginx&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: nginx&lt;br&gt;
  replicas: 4&lt;br&gt;
  strategy:&lt;br&gt;
    rollingUpdate:&lt;br&gt;
      maxSurge: 25%&lt;br&gt;
      maxUnavailable: 25%&lt;br&gt;
    type: RollingUpdate&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app:  nginx&lt;br&gt;
    spec:&lt;br&gt;
      # initContainers:&lt;br&gt;
        # Init containers are exactly like regular containers, except:&lt;br&gt;
          # - Init containers always run to completion.&lt;br&gt;
          # - Each init container must complete successfully before the next one starts.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  containers:
  - name:  nginx
    image:  nginx:1.27.3
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      timeoutSeconds: 2
      successThreshold: 1
      failureThreshold: 3
      periodSeconds: 10
    ports:
    - containerPort:  80
      name:  http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: nginx-svc&lt;br&gt;
  namespace: default&lt;br&gt;
spec:&lt;br&gt;
  selector:&lt;br&gt;
    app: nginx&lt;br&gt;
  type: NodePort&lt;br&gt;
  ports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;name: nginx-svc
protocol: TCP
port: 80
targetPort: 80
# If you set the &lt;code&gt;spec.type&lt;/code&gt; field to &lt;code&gt;NodePort&lt;/code&gt; and you want a specific port number,
# you can specify a value in the &lt;code&gt;spec.ports[*].nodePort&lt;/code&gt; field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create the Ingress:&lt;/p&gt;




&lt;p&gt;apiVersion: networking.k8s.io/v1&lt;br&gt;
kind: IngressClass&lt;br&gt;
metadata:&lt;br&gt;
  namespace: default&lt;br&gt;
  labels:&lt;br&gt;
    app.kubernetes.io/name: myingress&lt;br&gt;
  name: myingress&lt;br&gt;
spec:&lt;br&gt;
  controller: eks.amazonaws.com/alb&lt;/p&gt;




&lt;p&gt;apiVersion: networking.k8s.io/v1&lt;br&gt;
kind: Ingress&lt;br&gt;
metadata:&lt;br&gt;
  namespace: default&lt;br&gt;
  name: myalblab&lt;br&gt;
  annotations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;spec:&lt;br&gt;
  ingressClassName: myingress&lt;br&gt;
  rules:&lt;br&gt;
    - http:&lt;br&gt;
        paths:&lt;br&gt;
          - path: /&lt;br&gt;
            pathType: Prefix&lt;br&gt;
            backend:&lt;br&gt;
              service:&lt;br&gt;
                name: nginx-svc&lt;br&gt;
                port:&lt;br&gt;
                  number: 80&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an NLB with a LoadBalancer Service
&lt;/h2&gt;

&lt;p&gt;apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: lbsvc&lt;br&gt;
  namespace: default&lt;br&gt;
  annotations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;spec:&lt;br&gt;
  selector:&lt;br&gt;
    app: lbsvc&lt;br&gt;
  type: LoadBalancer&lt;br&gt;
  ports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;name: lbsvc
protocol: TCP
port: 80
targetPort: 80
# If you set the &lt;code&gt;spec.type&lt;/code&gt; field to &lt;code&gt;NodePort&lt;/code&gt; and you want a specific port number,
# you can specify a value in the &lt;code&gt;spec.ports[*].nodePort&lt;/code&gt; field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Other option: using a stack reference and the Pulumi Kubernetes provider.&lt;br&gt;
Main goal:&lt;br&gt;
To have a centralized stack that other stacks can use to deploy their resources; the outputs of stack A become reference values (variables) in stack B, which eases managing and controlling the provisioned resources.&lt;br&gt;
A stack reference is addressed as:&lt;br&gt;
      organization/project-name/stack-name&lt;br&gt;&lt;br&gt;
       Note: organization = your Pulumi account name&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
    Stack name: eksdev, which deploys EKS Auto Mode&lt;br&gt;
    Stack name: kubedev, which deploys a Pod, Service, and Ingress with the Pulumi Kubernetes provider&lt;/p&gt;

&lt;p&gt;Stack name  kubedev &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; """A k8s Python Pulumi program"""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;import pulumi , pulumi_kubernetes as k8s&lt;/p&gt;

&lt;p&gt;cluster_stack="MohammedBanabila/lab-eks/eksdev"&lt;/p&gt;

&lt;p&gt;stackref1=pulumi.StackReference(name="stackref1", stack_name=cluster_stack)&lt;/p&gt;

&lt;p&gt;cfg1=pulumi.Config()&lt;/p&gt;

&lt;p&gt;mycluster=pulumi.export("mycluster", value=stackref1.get_output("cluster"))&lt;/p&gt;

&lt;p&gt;provider=k8s.Provider("provider", cluster=mycluster)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy = k8s.apps.v1.Deployment(
    "deploy",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name="nginx-deploy",
        namespace="default",
        labels={"app": "nginx"},
    ),
    spec=k8s.apps.v1.DeploymentSpecArgs(
        replicas=4,
        selector=k8s.meta.v1.LabelSelectorArgs(
            match_labels={"app": "nginx"},
        ),
        template=k8s.core.v1.PodTemplateSpecArgs(
            metadata=k8s.meta.v1.ObjectMetaArgs(
                labels={"app": "nginx"},
            ),
            spec=k8s.core.v1.PodSpecArgs(
                containers=[
                    k8s.core.v1.ContainerArgs(
                        name="nginx",
                        image="nginx",
                        ports=[
                            k8s.core.v1.ContainerPortArgs(container_port=80),
                        ],
                    ),
                ],
            ),
        ),
    ),
    opts=pulumi.ResourceOptions(provider=provider),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;svc1 = k8s.core.v1.Service(
    "svc1",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name="nginx-svc",
        namespace="default",
    ),
    spec=k8s.core.v1.ServiceSpecArgs(
        selector={"app": "nginx"},
        ports=[
            k8s.core.v1.ServicePortArgs(
                port=80,
                target_port=80,
            ),
        ],
        type="NodePort",
    ),
    opts=pulumi.ResourceOptions(provider=provider),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ingressclass = k8s.networking.v1.IngressClass(
    "ingressclass",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        # IngressClass is cluster-scoped, so no namespace is set.
        name="nginx-ingress",
        labels={"app.kubernetes.io/name": "nginx-ingress"},
    ),
    spec=k8s.networking.v1.IngressClassSpecArgs(
        controller="eks.amazonaws.com/alb",
    ),
    opts=pulumi.ResourceOptions(provider=provider),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myingress = k8s.networking.v1.Ingress(
    "myingress",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        name="nginx-ingress",
        namespace="default",
        labels={"app": "nginx"},
        annotations={
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",
        },
    ),
    spec=k8s.networking.v1.IngressSpecArgs(
        ingress_class_name="nginx-ingress",
        rules=[
            k8s.networking.v1.IngressRuleArgs(
                http=k8s.networking.v1.HTTPIngressRuleValueArgs(
                    paths=[
                        k8s.networking.v1.HTTPIngressPathArgs(
                            path="/",
                            path_type="Prefix",
                            backend=k8s.networking.v1.IngressBackendArgs(
                                service=k8s.networking.v1.IngressServiceBackendArgs(
                                    name="nginx-svc",
                                    port=k8s.networking.v1.ServiceBackendPortArgs(
                                        number=80,
                                    ),
                                ),
                            ),
                        ),
                    ],
                ),
            ),
        ],
    ),
    opts=pulumi.ResourceOptions(provider=provider),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Eks auto mode using pulumi</title>
      <dc:creator>MohammedBanabila</dc:creator>
      <pubDate>Fri, 24 Jan 2025 15:34:41 +0000</pubDate>
      <link>https://dev.to/mohammed_banabila_3bc9e49/eks-auto-mode-using-pulumi-22ed</link>
      <guid>https://dev.to/mohammed_banabila_3bc9e49/eks-auto-mode-using-pulumi-22ed</guid>
      <description>

&lt;p&gt;Before this feature, we had to manage compute, networking, storage, observability, authentication, and authorization ourselves alongside the EKS cluster in order to deploy and access workloads.&lt;br&gt;
With EKS Auto Mode, the cluster handles compute, networking, storage, observability, authentication, and authorization for you, leaving you to simply deploy workloads&lt;br&gt;
and manage them according to your requirements.&lt;br&gt;
To enable Auto Mode on a cluster:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable access with authentication mode API or API_AND_CONFIGMAP.&lt;/li&gt;
&lt;li&gt;Enable compute, which manages and operates node pools.&lt;/li&gt;
&lt;li&gt;Enable storage, which lets you use EBS or a file system to store data.&lt;/li&gt;
&lt;li&gt;Enable the VPC configuration to manage subnets, public endpoint access, and private endpoint access.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enable the Kubernetes configuration, which lets you deploy Elastic Load Balancing&lt;br&gt;
without installing the AWS Load Balancer Controller. Optionally add the Kubernetes service network CIDR (IPv4 or IPv6); otherwise one is created on your behalf, either 10.100.0.0/16 or 172.31.0.0/16.&lt;/p&gt;

&lt;p&gt;Note: you can enable EKS Auto Mode on an existing cluster by enabling all of these components.&lt;br&gt;
You can deploy EKS Auto Mode with:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS CLI&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eksctl&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure as code, for example Terraform, Pulumi, or CloudFormation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS Management Console&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
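&lt;p&gt;The components listed above correspond one-to-one to the cluster arguments used in the Pulumi program later in this post. As a plain-data sketch (the key names mirror the pulumi_aws eks.Cluster inputs; treat the exact shapes as assumptions and check the provider docs):&lt;/p&gt;

```python
# EKS Auto Mode is on only when every component below is enabled.
auto_mode_settings = {
    "access_config": {"authentication_mode": "API"},            # step 1
    "compute_config": {"enabled": True},                        # step 2
    "storage_config": {"block_storage": {"enabled": True}},     # step 3
    "vpc_config": {"endpoint_public_access": True,
                   "endpoint_private_access": True},            # step 4
    "kubernetes_network_config": {
        "elastic_load_balancing": {"enabled": True},            # step 5
    },
}

def auto_mode_fully_enabled(settings):
    """True when compute, block storage, and load balancing are all enabled."""
    return (
        settings["compute_config"]["enabled"]
        and settings["storage_config"]["block_storage"]["enabled"]
        and settings["kubernetes_network_config"]["elastic_load_balancing"]["enabled"]
    )
```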

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"""An AWS Python Pulumi program"""

import json

import pulumi
import pulumi_aws as aws

cfg1 = pulumi.Config()

eksvpc1 = aws.ec2.Vpc(
    "eksvpc1",
    aws.ec2.VpcArgs(
        cidr_block=cfg1.require_secret(key="block1"),
        tags={"Name": "eksvpc1"},
        enable_dns_hostnames=True,
        enable_dns_support=True,
    ),
)

intgw1 = aws.ec2.InternetGateway(
    "intgw1",
    aws.ec2.InternetGatewayArgs(
        vpc_id=eksvpc1.id,
        tags={"Name": "intgw1"},
    ),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pbsubs = ["public1", "public2"]
zones = ["us-east-1a", "us-east-1b"]
pbcidr1 = cfg1.require_secret("cidr1")
pbcidr2 = cfg1.require_secret("cidr2")
pbcidrs = [pbcidr1, pbcidr2]
for allpbsub in range(len(pbsubs)):
    pbsubs[allpbsub] = aws.ec2.Subnet(
        pbsubs[allpbsub],
        aws.ec2.SubnetArgs(
            vpc_id=eksvpc1.id,
            cidr_block=pbcidrs[allpbsub],
            availability_zone=zones[allpbsub],
            map_public_ip_on_launch=True,
            tags={
                "Name": pbsubs[allpbsub],
                "kubernetes.io/role/elb": "1",
            },
        ),
    )

ndsubs = ["node1", "node2"]
ndcidr1 = cfg1.require_secret("cidr3")
ndcidr2 = cfg1.require_secret("cidr4")
ndcidrs = [ndcidr1, ndcidr2]
for allndsub in range(len(ndsubs)):
    ndsubs[allndsub] = aws.ec2.Subnet(
        ndsubs[allndsub],
        aws.ec2.SubnetArgs(
            vpc_id=eksvpc1.id,
            cidr_block=ndcidrs[allndsub],
            availability_zone=zones[allndsub],
            tags={
                "Name": ndsubs[allndsub],
                "kubernetes.io/role/internal-elb": "1",
            },
        ),
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publictable = aws.ec2.RouteTable(
    "publictable",
    aws.ec2.RouteTableArgs(
        vpc_id=eksvpc1.id,
        routes=[
            aws.ec2.RouteTableRouteArgs(
                cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
                gateway_id=intgw1.id,
            ),
        ],
        tags={"Name": "publictable"},
    ),
)

tblink1 = aws.ec2.RouteTableAssociation(
    "tblink1",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=pbsubs[0].id,
        route_table_id=publictable.id,
    ),
)

tblink2 = aws.ec2.RouteTableAssociation(
    "tblink2",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=pbsubs[1].id,
        route_table_id=publictable.id,
    ),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eips = ["eip1", "eip2"]
for alleip in range(len(eips)):
    eips[alleip] = aws.ec2.Eip(
        eips[alleip],
        aws.ec2.EipArgs(
            domain="vpc",
            tags={"Name": eips[alleip]},
        ),
    )

natgws = ["natgw1", "natgw2"]
allocates = [eips[0].id, eips[1].id]
for allnat in range(len(natgws)):
    natgws[allnat] = aws.ec2.NatGateway(
        natgws[allnat],
        aws.ec2.NatGatewayArgs(
            # Place each NAT gateway in its own public subnet, one per AZ.
            subnet_id=pbsubs[allnat].id,
            allocation_id=allocates[allnat],
            tags={"Name": natgws[allnat]},
        ),
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;privatetables = ["privatetable1", "privatetable2"]
for allprivtable in range(len(privatetables)):
    privatetables[allprivtable] = aws.ec2.RouteTable(
        privatetables[allprivtable],
        aws.ec2.RouteTableArgs(
            vpc_id=eksvpc1.id,
            routes=[
                # A route table cannot hold two routes with the same
                # destination CIDR, so each private table gets a single
                # default route via the NAT gateway in its own AZ.
                aws.ec2.RouteTableRouteArgs(
                    cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
                    nat_gateway_id=natgws[allprivtable].id,
                ),
            ],
            tags={"Name": privatetables[allprivtable]},
        ),
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;privatetablelink1 = aws.ec2.RouteTableAssociation(
    "privatetablelink1",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=ndsubs[0].id,
        route_table_id=privatetables[0].id,
    ),
)

privatetablelink2 = aws.ec2.RouteTableAssociation(
    "privatetablelink2",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=ndsubs[1].id,
        route_table_id=privatetables[1].id,
    ),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;inbound_traffic = [
    # NACL rules are evaluated in ascending rule-number order and the first
    # match wins, so the narrow SSH allow must come before the broad SSH deny.
    aws.ec2.NetworkAclIngressArgs(
        from_port=22,
        to_port=22,
        rule_no=100,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="myips"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=22,
        to_port=22,
        rule_no=101,
        action="deny",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=80,
        to_port=80,
        rule_no=200,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=443,
        to_port=443,
        rule_no=300,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=30000,
        to_port=65535,
        rule_no=400,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=0,
        to_port=0,
        rule_no=500,
        action="allow",
        protocol="-1",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;outbound_traffic = [
    # Same ordering rule as ingress: the allow from trusted IPs (rule 100)
    # must precede the broad deny (rule 101) or it would never match.
    aws.ec2.NetworkAclEgressArgs(
        from_port=22,
        to_port=22,
        rule_no=100,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="myips"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=22,
        to_port=22,
        rule_no=101,
        action="deny",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=80,
        to_port=80,
        rule_no=200,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=443,
        to_port=443,
        rule_no=300,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=30000,
        to_port=65535,
        rule_no=400,
        action="allow",
        protocol="tcp",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=0,
        to_port=0,
        rule_no=500,
        action="allow",
        protocol="-1",
        cidr_block=cfg1.require_secret(key="any-traffic-ipv4"),
        icmp_code=0,
        icmp_type=0,
    ),
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
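&lt;p&gt;One detail worth keeping in mind about the ACL entries above: network ACL rules are evaluated in ascending rule-number order and the first matching rule wins, so a narrow allow only takes effect if its rule number is lower than any broader deny covering the same traffic. A minimal, self-contained sketch of that evaluation (hypothetical helper, not AWS or Pulumi code):&lt;/p&gt;

```python
def evaluate_nacl(rules, port, source_trusted):
    """Return 'allow' or 'deny' for TCP traffic on the given port."""
    for rule in sorted(rules, key=lambda r: r["rule_no"]):
        port_match = rule["protocol"] == "-1" or port in range(
            rule["from_port"], rule["to_port"] + 1
        )
        cidr_match = rule["cidr"] == "0.0.0.0/0" or (
            rule["cidr"] == "trusted" and source_trusted
        )
        if port_match and cidr_match:
            return rule["action"]  # first match wins
    return "deny"  # implicit default deny at the end of every NACL

# Simplified stand-ins for the ingress entries defined above.
rules = [
    {"rule_no": 100, "from_port": 22, "to_port": 22, "protocol": "tcp",
     "cidr": "trusted", "action": "allow"},
    {"rule_no": 101, "from_port": 22, "to_port": 22, "protocol": "tcp",
     "cidr": "0.0.0.0/0", "action": "deny"},
    {"rule_no": 200, "from_port": 80, "to_port": 80, "protocol": "tcp",
     "cidr": "0.0.0.0/0", "action": "allow"},
]
```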

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nacllists = ["mynacls1", "mynacls2"]
for allnacls in range(len(nacllists)):
    nacllists[allnacls] = aws.ec2.NetworkAcl(
        nacllists[allnacls],
        aws.ec2.NetworkAclArgs(
            vpc_id=eksvpc1.id,
            ingress=inbound_traffic,
            egress=outbound_traffic,
            tags={"Name": nacllists[allnacls]},
        ),
    )

nacls30 = aws.ec2.NetworkAclAssociation(
    "nacls30",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacllists[0].id,
        subnet_id=pbsubs[0].id,
    ),
)

nacls31 = aws.ec2.NetworkAclAssociation(
    "nacls31",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacllists[0].id,
        subnet_id=pbsubs[1].id,
    ),
)

nacls10 = aws.ec2.NetworkAclAssociation(
    "nacls10",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacllists[1].id,
        subnet_id=ndsubs[0].id,
    ),
)

nacls11 = aws.ec2.NetworkAclAssociation(
    "nacls11",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacllists[1].id,
        subnet_id=ndsubs[1].id,
    ),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksrole = aws.iam.Role(
    "eksrole",
    aws.iam.RoleArgs(
        assume_role_policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {"Service": "eks.amazonaws.com"},
                    "Action": [
                        "sts:AssumeRole",
                        "sts:TagSession",
                    ],
                }
            ],
        })
    )
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nodesrole = aws.iam.Role(
    "nodesrole",
    aws.iam.RoleArgs(
        assume_role_policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {"Service": "ec2.amazonaws.com"},
                    "Action": "sts:AssumeRole",
                }
            ],
        })
    )
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterattach1 = aws.iam.RolePolicyAttachment(
    "clusterattach1",
    aws.iam.RolePolicyAttachmentArgs(
        role=eksrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
    ),
)

clusterattach2 = aws.iam.RolePolicyAttachment(
    "clusterattach2",
    aws.iam.RolePolicyAttachmentArgs(
        role=eksrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEKSComputePolicy",
    ),
)

clusterattach3 = aws.iam.RolePolicyAttachment(
    "clusterattach3",
    aws.iam.RolePolicyAttachmentArgs(
        role=eksrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy",
    ),
)

clusterattach4 = aws.iam.RolePolicyAttachment(
    "clusterattach4",
    aws.iam.RolePolicyAttachmentArgs(
        role=eksrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy",
    ),
)

clusterattach5 = aws.iam.RolePolicyAttachment(
    "clusterattach5",
    aws.iam.RolePolicyAttachmentArgs(
        role=eksrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy",
    ),
)

nodesattach1 = aws.iam.RolePolicyAttachment(
    "nodesattach1",
    aws.iam.RolePolicyAttachmentArgs(
        role=nodesrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy",
    ),
)

nodesattach2 = aws.iam.RolePolicyAttachment(
    "nodesattach2",
    aws.iam.RolePolicyAttachmentArgs(
        role=nodesrole.name,
        policy_arn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly",
    ),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;automode = aws.eks.Cluster(
    "automode",
    aws.eks.ClusterArgs(
        name="automode",
        bootstrap_self_managed_addons=False,
        role_arn=eksrole.arn,
        version="1.31",
        compute_config={
            "enabled": True,
            "node_pools": ["general-purpose"],
            "node_role_arn": nodesrole.arn,
        },
        access_config={
            "authentication_mode": "API",
        },
        storage_config={
            "block_storage": {
                "enabled": True,
            },
        },
        kubernetes_network_config={
            "elastic_load_balancing": {
                "enabled": True,
            },
        },
        tags={"Name": "automode"},
        vpc_config={
            "endpoint_private_access": True,
            "endpoint_public_access": True,
            "public_access_cidrs": [
                cfg1.require_secret(key="myips"),
            ],
            "subnet_ids": [
                ndsubs[0].id,
                ndsubs[1].id,
            ],
        },
    ),
    opts=pulumi.ResourceOptions(
        depends_on=[
            clusterattach1,
            clusterattach2,
            clusterattach3,
            clusterattach4,
            clusterattach5,
        ]
    ),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;myentry1 = aws.eks.AccessEntry(
    "myentry1",
    aws.eks.AccessEntryArgs(
        cluster_name=automode.name,
        principal_arn=cfg1.require_secret(key="principal"),
        type="STANDARD",
    ),
    opts=pulumi.ResourceOptions(depends_on=[automode]),
)

entrypolicy1 = aws.eks.AccessPolicyAssociation(
    "entrypolicy1",
    aws.eks.AccessPolicyAssociationArgs(
        cluster_name=automode.name,
        principal_arn=myentry1.principal_arn,
        policy_arn="arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
        access_scope={"type": "cluster"},
    ),
    opts=pulumi.ResourceOptions(depends_on=[automode]),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note:&lt;br&gt;
   If you can’t access the EKS cluster from your local PC or laptop,&lt;br&gt;
   follow this video: Fixing 'The Server Has Asked for Credentials' in Kubernetes Cluster API - EKS AWS Production&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=4aLwQASVHAE" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=4aLwQASVHAE&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deploy
  namespace: default
  labels:
    app: httpd
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 4
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: httpd
    spec:
      # initContainers:
      #   Init containers are exactly like regular containers, except:
      #   - Init containers always run to completion.
      #   - Each init container must complete successfully before the next one starts.
      containers:
      - name: httpd
        image: httpd:2.4.41-alpine
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 2
          successThreshold: 1
          failureThreshold: 3
          periodSeconds: 10
        ports:
        - containerPort: 80
          name: http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: httpd-svc
  namespace: default
spec:
  selector:
    app: httpd
  type: NodePort
  ports:
  - name: httpd-svc
    protocol: TCP
    port: 80
    targetPort: 80
    # To pin a specific nodePort when spec.type is NodePort,
    # set spec.ports[*].nodePort.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# IngressClass is cluster-scoped, so no namespace is set.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/name: myingress
  name: myingress
spec:
  controller: eks.amazonaws.com/alb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: default
  name: myalblab
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: myingress
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpd-svc
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
  </channel>
</rss>
