<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Haider Raed</title>
    <description>The latest articles on DEV Community by Haider Raed (@haydercyber).</description>
    <link>https://dev.to/haydercyber</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F662954%2F266c665a-15c4-4fea-8398-13ff2bc2f6f8.jpeg</url>
      <title>DEV Community: Haider Raed</title>
      <link>https://dev.to/haydercyber</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/haydercyber"/>
    <language>en</language>
    <item>
      <title>k8s the hard way on CentOS</title>
      <dc:creator>Haider Raed</dc:creator>
      <pubDate>Wed, 27 Oct 2021 06:39:42 +0000</pubDate>
      <link>https://dev.to/haydercyber/k8s-the-hard-way-4nmc</link>
      <guid>https://dev.to/haydercyber/k8s-the-hard-way-4nmc</guid>
      <description>&lt;h2&gt;
  
  
  Information
&lt;/h2&gt;

&lt;p&gt;Technical Writer &lt;a href="https://www.linkedin.com/in/haydercyber1" rel="noopener noreferrer"&gt;Haider Raed&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes The Hard Way
&lt;/h2&gt;

&lt;p&gt;This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine&lt;/a&gt;, or the &lt;a href="https://kubernetes.io/docs/setup" rel="noopener noreferrer"&gt;Getting Started Guides&lt;/a&gt;. The companion repo is &lt;a href="https://github.com/haydercyber/k8s-the-hard-way" rel="noopener noreferrer"&gt;k8s-the-hard-way&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Target Audience
&lt;/h2&gt;

&lt;p&gt;The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster Details
&lt;/h2&gt;

&lt;p&gt;Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes/kubernetes" rel="noopener noreferrer"&gt;kubernetes&lt;/a&gt; v1.21.0&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/containerd/containerd" rel="noopener noreferrer"&gt;containerd&lt;/a&gt; v1.4.4&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/coredns/coredns" rel="noopener noreferrer"&gt;coredns&lt;/a&gt; v1.8.3&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/containernetworking/cni" rel="noopener noreferrer"&gt;cni&lt;/a&gt; v0.9.1&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/etcd-io/etcd" rel="noopener noreferrer"&gt;etcd&lt;/a&gt; v3.4.15&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Labs
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You will need six CentOS VMs:

&lt;ul&gt;
&lt;li&gt;A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions.&lt;/li&gt;
&lt;li&gt;2 GB or more of RAM per machine (any less will leave little room for your apps).&lt;/li&gt;
&lt;li&gt;2 CPUs or more.&lt;/li&gt;
&lt;li&gt;Full network connectivity between all machines in the cluster (public or private network is fine).&lt;/li&gt;
&lt;li&gt;Unique hostname, MAC address, and product_uuid for every node.&lt;/li&gt;
&lt;li&gt;Swap disabled. You MUST disable swap for the kubelet to work properly.
&amp;gt; You can see the lab diagram below. In your case, you only need to change the IPs to match your machines: edit each hostname, map it to its IP, and add the entries to /etc/hosts.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" alt="diagram screenshot" width="800" height="844"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Editing the Hosts File
&lt;/h2&gt;

&lt;p&gt;Note: change the IPs below to match your own address range.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt; /etc/hosts 
192.168.0.1 kubecon01.k8s.com
192.168.0.2 kubecon02.k8s.com
192.168.0.5 worknode01.k8s.com
192.168.0.6 worknode02.k8s.com
192.168.0.3 api_loadbalancer.k8s.com
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
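&lt;p&gt;To confirm every machine received the same mappings, a quick sanity check can grep the file for each expected hostname. This is an optional sketch; the &lt;code&gt;check_hosts&lt;/code&gt; function is our own helper, not part of the original lab, and the hostname list mirrors the diagram:&lt;/p&gt;

```shell
# Hypothetical helper: verify that all cluster hostnames from the lab
# diagram are present in a hosts file (defaults to /etc/hosts).
check_hosts() {
  local file="${1:-/etc/hosts}"
  local missing=0
  for host in kubecon01.k8s.com kubecon02.k8s.com \
              worknode01.k8s.com worknode02.k8s.com \
              api_loadbalancer.k8s.com; do
    if ! grep -q "$host" "$file"; then
      echo "missing: $host"
      missing=1
    fi
  done
  return $missing
}
```

&lt;p&gt;Run it on every node; it prints any hostname that is missing and returns non-zero.&lt;/p&gt;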



&lt;h2&gt;
  
  
  Install Some Helpful Packages
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yum install bash-completion vim telnet -y 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Make sure the firewalld service is stopped and disabled
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# systemctl disable --now firewalld
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Make sure SELinux is set to permissive mode
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing the Client Tools
&lt;/h2&gt;

&lt;p&gt;In this lab you will install the command line utilities required to complete this tutorial: &lt;a href="https://github.com/cloudflare/cfssl" rel="noopener noreferrer"&gt;cfssl&lt;/a&gt;, &lt;a href="https://github.com/cloudflare/cfssl" rel="noopener noreferrer"&gt;cfssljson&lt;/a&gt;, and &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;.&lt;br&gt;
In this lesson we will run kubectl from a remote machine.&lt;/p&gt;
&lt;h2&gt;
  
  
  Install CFSSL
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;cfssl&lt;/code&gt; and &lt;code&gt;cfssljson&lt;/code&gt; command line utilities will be used to provision a &lt;a href="https://en.wikipedia.org/wiki/Public_key_infrastructure" rel="noopener noreferrer"&gt;PKI Infrastructure&lt;/a&gt; and generate TLS certificates.&lt;/p&gt;

&lt;p&gt;Download and install &lt;code&gt;cfssl&lt;/code&gt; and &lt;code&gt;cfssljson&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# wget https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl
# wget https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
# chmod +x cfssl cfssljson
# sudo mv cfssl cfssljson /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;Verify &lt;code&gt;cfssl&lt;/code&gt; and &lt;code&gt;cfssljson&lt;/code&gt; version 1.4.1 or higher is installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cfssl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Version: 1.4.1
Runtime: go1.12.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cfssljson --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Version: 1.4.1
Runtime: go1.12.12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install kubectl
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;kubectl&lt;/code&gt; command line utility is used to interact with the Kubernetes API Server. Download and install &lt;code&gt;kubectl&lt;/code&gt; from the official release binaries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
# chmod +x kubectl
# sudo mv kubectl /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;Verify &lt;code&gt;kubectl&lt;/code&gt; version 1.21.0 or higher is installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Provisioning a CA and Generating TLS Certificates
&lt;/h1&gt;

&lt;p&gt;In this lab you will provision a &lt;a href="https://en.wikipedia.org/wiki/Public_key_infrastructure" rel="noopener noreferrer"&gt;PKI Infrastructure&lt;/a&gt; using CloudFlare's PKI toolkit, &lt;a href="https://github.com/cloudflare/cfssl" rel="noopener noreferrer"&gt;cfssl&lt;/a&gt;, then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do We Need a CA and TLS Certificates?
&lt;/h2&gt;

&lt;p&gt;Note: In this section, we will be provisioning a certificate authority (CA). We will then use the CA to generate several certificates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Certificates
&lt;/h2&gt;

&lt;p&gt;Certificates are used to confirm (authenticate) identity. They prove that you are who you say you are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Certificate Authority
&lt;/h2&gt;

&lt;p&gt;A certificate authority provides the ability to confirm that a certificate is valid, and can be used to validate any certificate it issued. Kubernetes uses certificates for a variety of security functions, and the different parts of our cluster will validate certificates using the certificate authority. In this section, we will generate all of these certificates and copy the necessary files to the servers that need them.&lt;/p&gt;
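&lt;p&gt;Once &lt;code&gt;ca.pem&lt;/code&gt; exists (it is generated later in this lab), you can inspect any certificate's identity and validity window with openssl. A hedged sketch; the &lt;code&gt;show_cert&lt;/code&gt; wrapper is our own helper, not part of the lab:&lt;/p&gt;

```shell
# Hypothetical helper: print a PEM certificate's subject, issuer,
# and validity dates, e.g.  show_cert ca.pem
show_cert() {
  openssl x509 -in "$1" -noout -subject -issuer -dates
}
```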

&lt;h2&gt;
  
  
  What Certificates Do We Need?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Client Certificates 

&lt;ul&gt;
&lt;li&gt;These certificates provide client authentication for various users: admin, kube-controller-manager, kube-proxy, kube-scheduler, and the kubelet client on each worker node. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Kubernetes API Server Certificate

&lt;ul&gt;
&lt;li&gt;This is the TLS certificate for the Kubernetes API.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Service Account Key Pair 

&lt;ul&gt;
&lt;li&gt;Kubernetes uses a certificate to sign service account tokens, so we need to provide a certificate for that purpose.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Provisioning the Certificate Authority
&lt;/h2&gt;

&lt;p&gt;In order to generate the certificates needed by Kubernetes, you must first provision a certificate authority. This lesson will guide you through the process of provisioning a new certificate authority for your Kubernetes cluster. After completing this lesson, you should have a certificate authority, which consists of two files: ca-key.pem and ca.pem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating Client Certificates
&lt;/h2&gt;

&lt;p&gt;Now that you have provisioned a certificate authority for the Kubernetes cluster, you are ready to begin generating certificates. The first set consists of the client certificates used by various Kubernetes components: admin, kubelet (one for each worker node), kube-controller-manager, kube-proxy, and kube-scheduler. After completing this lesson, you will have the client certificate files you will need later to set up the cluster. Command blocks surrounded by curly braces can be entered as a single command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Certificate Authority
&lt;/h2&gt;

&lt;p&gt;In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates. First, create a directory to hold all of the certificates:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir k8s
# cd k8s 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use this command to generate the certificate authority. Include the opening and closing curly braces to run this entire block as a single command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

cat &amp;gt; ca-config.json &amp;lt;&amp;lt;EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat &amp;gt; ca-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ca-key.pem
ca.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Client and Server Certificates
&lt;/h2&gt;

&lt;p&gt;In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes &lt;code&gt;admin&lt;/code&gt; user.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Admin Client Certificate
&lt;/h3&gt;

&lt;p&gt;Generate the &lt;code&gt;admin&lt;/code&gt; client certificate and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

cat &amp;gt; admin-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;admin-key.pem
admin.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
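&lt;p&gt;As a quick check that the new client certificate really chains to the CA you just provisioned, openssl can verify it. An optional sketch; &lt;code&gt;verify_against_ca&lt;/code&gt; is a hypothetical wrapper, not part of the lab:&lt;/p&gt;

```shell
# Hypothetical helper: verify that a certificate was signed by the CA,
# e.g.  verify_against_ca ca.pem admin.pem   prints  "admin.pem: OK"
verify_against_ca() {
  openssl verify -CAfile "$1" "$2"
}
```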



&lt;h2&gt;
  
  
  The Kubelet Client Certificates
&lt;/h2&gt;

&lt;p&gt;Kubernetes uses a &lt;a href="https://kubernetes.io/docs/admin/authorization/node/" rel="noopener noreferrer"&gt;special-purpose authorization mode&lt;/a&gt; called Node Authorizer, that specifically authorizes API requests made by &lt;a href="https://kubernetes.io/docs/concepts/overview/components/#kubelet" rel="noopener noreferrer"&gt;Kubelets&lt;/a&gt;. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the &lt;code&gt;system:nodes&lt;/code&gt; group, with a username of &lt;code&gt;system:node:&amp;lt;nodeName&amp;gt;&lt;/code&gt;. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubelet client certificates. Be sure to enter your machines' actual values for all four of the variables at the top:&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" alt="diagram screenshot" width="800" height="844"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# WORKER0_HOST=worknode01.k8s.com
# WORKER0_IP=192.168.0.5
# WORKER1_HOST=worknode02.k8s.com
# WORKER1_IP=192.168.0.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for instance in worknode01.k8s.com worknode02.k8s.com; do
cat &amp;gt; ${instance}-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worknode01.k8s.com-key.pem
worknode01.k8s.com.pem
worknode02.k8s.com-key.pem
worknode02.k8s.com.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Controller Manager Client Certificate
&lt;/h3&gt;

&lt;p&gt;Generate the &lt;code&gt;kube-controller-manager&lt;/code&gt; client certificate and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

cat &amp;gt; kube-controller-manager-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-controller-manager-key.pem
kube-controller-manager.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Kube Proxy Client Certificate
&lt;/h3&gt;

&lt;p&gt;Generate the &lt;code&gt;kube-proxy&lt;/code&gt; client certificate and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

cat &amp;gt; kube-proxy-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-proxy-key.pem
kube-proxy.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Scheduler Client Certificate
&lt;/h3&gt;

&lt;p&gt;Generate the &lt;code&gt;kube-scheduler&lt;/code&gt; client certificate and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

cat &amp;gt; kube-scheduler-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-scheduler-key.pem
kube-scheduler.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Kubernetes API Server Certificate
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;kubernetes-the-hard-way&lt;/code&gt; static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.&lt;br&gt;
We have generated all of the client certificates our Kubernetes cluster will need, but we also need a server certificate for the Kubernetes API. In this lesson, we will generate one, signed with all of the hostnames and IPs that may later be used to access the Kubernetes API. After completing this lesson, you will have a Kubernetes API server certificate in the form of two files: kubernetes-key.pem and kubernetes.pem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Be sure to replace all the placeholder values in CERT_HOSTNAME with the real values from your machines.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" alt="diagram screenshot" width="800" height="844"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CERT_HOSTNAME=10.32.0.1,&amp;lt;controller node 1 Private IP&amp;gt;,&amp;lt;controller node 1 hostname&amp;gt;,&amp;lt;controller node 2 Private IP&amp;gt;,&amp;lt;controller node 2 hostname&amp;gt;,&amp;lt;API load balancer Private IP&amp;gt;,&amp;lt;API load balancer hostname&amp;gt;,127.0.0.1,localhost,kubernetes.default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CERT_HOSTNAME=10.32.0.1,192.168.0.1,kubecon01.k8s.com,192.168.0.2,kubecon02.k8s.com,192.168.0.3,api_loadbalancer.k8s.com,127.0.0.1,localhost,kubernetes.default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate the Kubernetes API Server certificate and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
cat &amp;gt; kubernetes-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${CERT_HOSTNAME} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The Kubernetes API server is automatically assigned the &lt;code&gt;kubernetes&lt;/code&gt; internal dns name, which will be linked to the first IP address (&lt;code&gt;10.32.0.1&lt;/code&gt;) from the address range (&lt;code&gt;10.32.0.0/24&lt;/code&gt;) reserved for internal cluster services during the &lt;a href="//07-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server"&gt;control plane bootstrapping&lt;/a&gt; lab.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubernetes-key.pem
kubernetes.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
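&lt;p&gt;Because this certificate is only useful if every name in &lt;code&gt;CERT_HOSTNAME&lt;/code&gt; made it into the subject alternative names, it is worth dumping the SAN section after generating it. A hedged sketch using openssl; the &lt;code&gt;san_list&lt;/code&gt; helper name is ours:&lt;/p&gt;

```shell
# Hypothetical helper: print the Subject Alternative Name section of a
# certificate, e.g.  san_list kubernetes.pem
san_list() {
  openssl x509 -in "$1" -noout -text | grep -A1 "Subject Alternative Name"
}
```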



&lt;h2&gt;
  
  
  The Service Account Key Pair
&lt;/h2&gt;

&lt;p&gt;Kubernetes provides the ability for service accounts to authenticate using tokens, and it uses a key pair to sign those tokens. In this lesson, we will generate a certificate that will be used as that key pair. After completing this lesson, you will have a certificate ready to be used as a service account key pair in the form of two files: service-account-key.pem and service-account.pem.&lt;/p&gt;

&lt;p&gt;The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the &lt;a href="https://kubernetes.io/docs/admin/service-accounts-admin/" rel="noopener noreferrer"&gt;managing service accounts&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;Generate the &lt;code&gt;service-account&lt;/code&gt; certificate and private key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{

cat &amp;gt; service-account-csr.json &amp;lt;&amp;lt;EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service-account-key.pem
service-account.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Distribute the Client and Server Certificates
&lt;/h2&gt;

&lt;p&gt;Now that all of the necessary certificates have been generated, we need to move the files onto the appropriate servers. In this lesson, we will copy the necessary certificate files to each of our servers; after completing it, your controller and worker nodes should each have the certificate files they need. Be sure to replace the placeholders with the actual values from your servers. Copy the appropriate certificates and private keys to each worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# scp ca.pem $WORKER0_HOST-key.pem $WORKER0_HOST.pem root@$WORKER0_HOST:~/
# scp ca.pem $WORKER1_HOST-key.pem $WORKER1_HOST.pem root@$WORKER1_HOST:~/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Move certificate files to the controller nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem root@kubecon01.k8s.com:~/
# scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem root@kubecon02.k8s.com:~/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Generating Kubernetes Configuration Files for Authentication
&lt;/h1&gt;

&lt;p&gt;In this lab you will generate &lt;a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="noopener noreferrer"&gt;Kubernetes configuration files&lt;/a&gt;, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Kubeconfigs and Why Do We Need Them?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubeconfigs

&lt;ul&gt;
&lt;li&gt;A Kubernetes configuration file, or kubeconfig, is a file that stores “information about clusters, users, namespaces, and authentication mechanisms.” It contains the configuration data needed to connect to and interact with one or more Kubernetes clusters. You can find more information in the Kubernetes documentation: &lt;a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="noopener noreferrer"&gt;Kubernetes configuration files&lt;/a&gt;. Kubeconfigs contain information such as:

&lt;ul&gt;
&lt;li&gt;The location of the cluster you want to connect to&lt;/li&gt;
&lt;li&gt;What user you want to authenticate as&lt;/li&gt;
&lt;li&gt;Data needed in order to authenticate, such as tokens or client certificates&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;You can even define multiple contexts in a kubeconfig file, allowing you to easily switch between multiple clusters.&lt;/li&gt;



&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;How to Generate a Kubeconfig 

&lt;ul&gt;
&lt;li&gt;Kubeconfigs can be generated using kubectl
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl config set-cluster // set up the configuration for the location of the cluster.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl config set-credentials // set the username and client certificate that will be used to authenticate.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl config set-context default // set up the default context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl config use-context default // set the current context to the configuration we provided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
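&lt;p&gt;For orientation, the four commands above assemble a YAML file shaped roughly like the following. This is a hand-written sketch, not generated output; the cluster name, the load balancer IP, and the &lt;code&gt;admin&lt;/code&gt; user are illustrative values following the lab diagram, and real files usually embed base64 certificate data instead of paths:&lt;/p&gt;

```shell
# Sketch of the kubeconfig that the four `kubectl config` commands
# assemble. All values are illustrative.
write_kubeconfig() {
  cat > "$1" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-the-hard-way
  cluster:
    certificate-authority: ca.pem
    server: https://192.168.0.3:6443
users:
- name: admin
  user:
    client-certificate: admin.pem
    client-key: admin-key.pem
contexts:
- name: default
  context:
    cluster: kubernetes-the-hard-way
    user: admin
current-context: default
EOF
}
```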



&lt;h2&gt;
  
  
  What Kubeconfigs Do We Need to Generate?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We will need several Kubeconfig files for various components of the Kubernetes cluster:

&lt;ul&gt;
&lt;li&gt;Kubelet (one for each worker node)&lt;/li&gt;
&lt;li&gt;Kube-proxy&lt;/li&gt;
&lt;li&gt;Kube-controller-manager&lt;/li&gt;
&lt;li&gt;Kube-scheduler &lt;/li&gt;
&lt;li&gt;Admin&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The next step in building a Kubernetes cluster the hard way is to generate the kubeconfigs that will be used by the various services making up the cluster. In this lesson, we will generate these kubeconfigs. After completing this lesson, you should have a set of kubeconfigs which you will need later in order to configure the Kubernetes cluster. Here are the commands used in the demo. Be sure to replace the placeholders with actual values from your machine. Create an environment variable to store the address of the Kubernetes API, and set it to the IP of your load balancer.
In our diagram the IP of the load balancer is 192.168.0.3, as you can see below:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" alt="digram screenshot" width="800" height="844"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# KUBERNETES_PUBLIC_ADDRESS=192.168.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Client Authentication Configs
&lt;/h2&gt;

&lt;p&gt;In this section you will generate kubeconfig files for the &lt;code&gt;controller manager&lt;/code&gt;, &lt;code&gt;kubelet&lt;/code&gt;, &lt;code&gt;kube-proxy&lt;/code&gt;, and &lt;code&gt;scheduler&lt;/code&gt; clients and the &lt;code&gt;admin&lt;/code&gt; user.&lt;/p&gt;

&lt;h3&gt;
  
  
  The kubelet Kubernetes Configuration File
&lt;/h3&gt;

&lt;p&gt;When generating kubeconfig files for Kubelets, the client certificate matching the Kubelet's node name must be used. This ensures Kubelets are properly authorized by the Kubernetes &lt;a href="https://kubernetes.io/docs/admin/authorization/node/" rel="noopener noreferrer"&gt;Node Authorizer&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The following commands must be run in the same directory used to generate the SSL certificates during the &lt;a href="//04-certificate-authority.md"&gt;Generating TLS Certificates&lt;/a&gt; lab.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Generate a kubeconfig file for each worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for instance in worknode01.k8s.com worknode02.k8s.com; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
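&lt;p&gt;Note how the loop derives both the kubeconfig file name and the &lt;code&gt;system:node:&lt;/code&gt; username from the same &lt;code&gt;${instance}&lt;/code&gt; value; that coupling is what satisfies the Node Authorizer. The expansion pattern itself is plain bash, sketched here with echo instead of kubectl:&lt;/p&gt;

```shell
# Stand-in for the loop above: shows how ${instance} feeds both the
# output file name and the system:node:... username.
for instance in worknode01.k8s.com worknode02.k8s.com; do
  echo "file: ${instance}.kubeconfig  user: system:node:${instance}"
done
```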



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worknode01.k8s.com.kubeconfig
worknode02.k8s.com.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The kube-proxy Kubernetes Configuration File
&lt;/h3&gt;

&lt;p&gt;Generate a kubeconfig file for the &lt;code&gt;kube-proxy&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-proxy.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The kube-controller-manager Kubernetes Configuration File
&lt;/h3&gt;

&lt;p&gt;Generate a kubeconfig file for the &lt;code&gt;kube-controller-manager&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-controller-manager.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The kube-scheduler Kubernetes Configuration File
&lt;/h3&gt;

&lt;p&gt;Generate a kubeconfig file for the &lt;code&gt;kube-scheduler&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kube-scheduler.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The admin Kubernetes Configuration File
&lt;/h3&gt;

&lt;p&gt;Generate a kubeconfig file for the &lt;code&gt;admin&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;admin.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Distribute the Kubernetes Configuration Files
&lt;/h2&gt;

&lt;p&gt;Copy the appropriate &lt;code&gt;kubelet&lt;/code&gt; and &lt;code&gt;kube-proxy&lt;/code&gt; kubeconfig files to each worker instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for instance in worknode01.k8s.com worknode02.k8s.com; do
    scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the appropriate &lt;code&gt;kube-controller-manager&lt;/code&gt; and &lt;code&gt;kube-scheduler&lt;/code&gt; kubeconfig files to each controller instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for instance in kubecon01.k8s.com kubecon02.k8s.com; do 
    scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Generating the Data Encryption Config and Key
&lt;/h1&gt;

&lt;p&gt;Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data" rel="noopener noreferrer"&gt;encrypt&lt;/a&gt; cluster data at rest.&lt;/p&gt;

&lt;p&gt;In this lab you will generate an encryption key and an &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration" rel="noopener noreferrer"&gt;encryption config&lt;/a&gt; suitable for encrypting Kubernetes Secrets.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is the Kubernetes Data Encryption Config?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes Secret Encryption &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes supports the ability to encrypt secret data at rest. This means that secrets are encrypted so that they are never stored on disk in plain text. This feature is important for security, but in order to use it we need to provide Kubernetes with an encryption key. We will generate an encryption key and put it into a configuration file, then copy that file to our Kubernetes controller servers.&lt;/li&gt;
&lt;li&gt;In order to make use of Kubernetes' ability to encrypt sensitive data at rest, you need to provide Kubernetes with an encryption key using a data encryption config file. This lesson walks you through the process of creating an encryption key and storing it in the necessary file, as well as showing how to copy that file to your Kubernetes controllers. After completing this lesson, you should have a valid Kubernetes data encryption config file, and there should be a copy of that file on each of your Kubernetes controller servers. Here are the commands used in the demo. Generate the data encryption config file containing the encryption key.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Generate an encryption key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
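&lt;p&gt;A quick sanity check (assuming GNU coreutils, as on CentOS): the &lt;code&gt;aescbc&lt;/code&gt; provider used below requires a 32-byte key, so the base64 string should decode back to exactly 32 bytes:&lt;/p&gt;

```shell
# Generate a key the same way and confirm it decodes to 32 bytes,
# which is the length the aescbc encryption provider expects.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
decoded=$(printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c)
echo "decoded key length: ${decoded} bytes"
```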



&lt;h2&gt;
  
  
  The Encryption Config File
&lt;/h2&gt;

&lt;p&gt;Create the &lt;code&gt;encryption-config.yaml&lt;/code&gt; encryption config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt; encryption-config.yaml &amp;lt;&amp;lt;EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Distribute the Kubernetes Encryption Config
&lt;/h2&gt;

&lt;p&gt;Copy the &lt;code&gt;encryption-config.yaml&lt;/code&gt; encryption config file to each controller instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for instance in kubecon01.k8s.com kubecon02.k8s.com; do
     scp encryption-config.yaml ${instance}:~/
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Bootstrapping the etcd Cluster
&lt;/h1&gt;

&lt;p&gt;Kubernetes components are stateless and store cluster state in &lt;a href="https://github.com/etcd-io/etcd" rel="noopener noreferrer"&gt;etcd&lt;/a&gt;. In this lab you will bootstrap a two-node etcd cluster and configure it for high availability and secure remote access.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is etcd?
&lt;/h2&gt;

&lt;p&gt;“etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines.” (&lt;a href="https://coreos.com/etcd/" rel="noopener noreferrer"&gt;etcd&lt;/a&gt;) It stores data across a distributed cluster of machines and makes sure the data stays synchronized across all of them. You can find more information, as well as the source code, in the &lt;a href="https://github.com/etcd-io/etcd" rel="noopener noreferrer"&gt;etcd GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Is etcd Used in Kubernetes?
&lt;/h2&gt;

&lt;p&gt;Kubernetes uses etcd to store all of its internal data about cluster state. This data needs to be stored, but it also needs to be reliably synchronized across all controller nodes in the cluster. etcd fulfills that purpose. We will need to install etcd on each of our Kubernetes controller nodes and create an etcd cluster that includes all of those controller nodes. You can find more information on managing an etcd cluster for Kubernetes here &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" rel="noopener noreferrer"&gt;k8setcd&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the etcd Cluster
&lt;/h2&gt;

&lt;p&gt;Before you can stand up controllers for a Kubernetes cluster, you must first build an etcd cluster across your Kubernetes control nodes. This lesson provides a demonstration of how to set up an etcd cluster in preparation for bootstrapping Kubernetes. After completing this lesson, you should have a working etcd cluster that consists of your Kubernetes control nodes. Here are the commands used in the demo (note that these have to be run on both controller servers, with a few differences between them): &lt;/p&gt;

&lt;h3&gt;
  
  
  Download and Install the etcd Binaries
&lt;/h3&gt;

&lt;p&gt;Download the official etcd release binaries from the &lt;a href="https://github.com/etcd-io/etcd" rel="noopener noreferrer"&gt;etcd&lt;/a&gt; GitHub project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Extract and install the &lt;code&gt;etcd&lt;/code&gt; server and the &lt;code&gt;etcdctl&lt;/code&gt; command line utility:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
# mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the etcd Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir -p /etc/etcd /var/lib/etcd
# chmod 700 /var/lib/etcd
# cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set up the following environment variables for the etcd node name and internal IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ETCD_NAME=$(hostname -s)
# INTERNAL_IP=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
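&lt;p&gt;If your network interface is not &lt;code&gt;eth0&lt;/code&gt;, adjust the device name in the command above. The &lt;code&gt;awk&lt;/code&gt;/&lt;code&gt;cut&lt;/code&gt; pipeline simply pulls the address out of the one-line &lt;code&gt;ip -o -4&lt;/code&gt; output, for example:&lt;/p&gt;

```shell
# A captured sample of `ip -o -4 addr list eth0` output (illustrative):
# field 4 is the CIDR address, and cut drops the /24 prefix length.
sample='2: eth0    inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0'
echo "$sample" | awk '{print $4}' | cut -d/ -f1   # prints: 192.168.0.1
```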



&lt;p&gt;Set up the following environment variables. Be sure to replace all of the placeholders with their corresponding real values: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can see the lab diagram below; in your case you only need to change the IP values of these variables.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5epqrljtutp1kecem38.png" alt="digram screenshot" width="800" height="844"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CONTROLLER0_IP=192.168.0.1
# CONTROLLER0_host=kubecon01
# CONTROLLER1_IP=192.168.0.2
# CONTROLLER1_host=kubecon02
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;etcd.service&lt;/code&gt; systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${CONTROLLER0_host}=https://${CONTROLLER0_IP}:2380,${CONTROLLER1_host}=https://${CONTROLLER1_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
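&lt;p&gt;One subtlety in the heredoc above: each &lt;code&gt;\\&lt;/code&gt; is written to the unit file as a single &lt;code&gt;\&lt;/code&gt;, which systemd reads as a line continuation in &lt;code&gt;ExecStart&lt;/code&gt;. The same escaping rule applies in double quotes, so it can be demonstrated with an ordinary string:&lt;/p&gt;

```shell
# In unquoted heredocs (and double quotes) the shell collapses '\\' to '\',
# producing the line continuations systemd expects in the unit file.
printf '%s\n' "ExecStart=/usr/local/bin/etcd \\"
```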



&lt;h3&gt;
  
  
  Start the etcd Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Remember to run the above commands on each controller node: &lt;code&gt;kubecon01&lt;/code&gt;, &lt;code&gt;kubecon02&lt;/code&gt; .&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;List the etcd cluster members:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;19e6cf768d9d542e, started, kubecon02, https://192.168.0.2:2380, https://192.168.0.2:2379, false
508e54ff346cdb88, started, kubecon01, https://192.168.0.1:2380, https://192.168.0.1:2379, false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
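&lt;p&gt;A healthy cluster shows every member in the &lt;code&gt;started&lt;/code&gt; state. The output is comma-separated, so given a captured copy of it you can count started members with a simple grep:&lt;/p&gt;

```shell
# Sample of the etcdctl member list output shown above, captured as text.
members='19e6cf768d9d542e, started, kubecon02, https://192.168.0.2:2380, https://192.168.0.2:2379, false
508e54ff346cdb88, started, kubecon01, https://192.168.0.1:2380, https://192.168.0.1:2379, false'
echo "$members" | grep -c ', started,'   # prints: 2
```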



&lt;h1&gt;
  
  
  Bootstrapping the Kubernetes Control Plane
&lt;/h1&gt;

&lt;p&gt;In this lab you will bootstrap the Kubernetes control plane across two compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;The commands in this lab must be run on each controller instance: &lt;code&gt;kubecon01&lt;/code&gt;, &lt;code&gt;kubecon02&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Provision the Kubernetes Control Plane
&lt;/h2&gt;

&lt;p&gt;The first step in bootstrapping a new Kubernetes control plane is to install the necessary binaries on the controller servers. We will walk through the process of downloading and installing the binaries on both Kubernetes controllers. This will prepare your environment for the lessons that follow, in which we will configure these binaries to run as  systemd  services. You can install the control plane binaries on each control node like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir -p /etc/kubernetes/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Download and Install the Kubernetes Controller Binaries
&lt;/h3&gt;

&lt;p&gt;Download the official Kubernetes release binaries&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# wget "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's make the binaries executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's move the binaries to /usr/local/bin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the Kubernetes API Server
&lt;/h3&gt;

&lt;p&gt;The Kubernetes API server provides the primary interface for the Kubernetes control plane and the cluster as a whole. When you interact with Kubernetes, you are nearly always doing it through the Kubernetes API server. This lesson will guide you through the process of configuring the kube-apiserver service on your two Kubernetes control nodes. After completing this lesson, you should have a  systemd  unit set up to run kube-apiserver as a service on each Kubernetes control node. You can configure the Kubernetes API server like so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir -p /var/lib/kubernetes/
# mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set some environment variables that will be used to create the  systemd  unit file. Make sure you replace the placeholders with their actual values&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# INTERNAL_IP=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
# CONTROLLER0_IP=192.168.0.1
# KUBERNETES_PUBLIC_ADDRESS=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
# CONTROLLER1_IP=192.168.0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate the kube-apiserver unit file for  systemd :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://${CONTROLLER0_IP}:2379,https://${CONTROLLER1_IP}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the Kubernetes Controller Manager
&lt;/h3&gt;

&lt;p&gt;Move the &lt;code&gt;kube-controller-manager&lt;/code&gt; kubeconfig into place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kube-controller-manager.service&lt;/code&gt; systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the Kubernetes Scheduler
&lt;/h3&gt;

&lt;p&gt;Move the &lt;code&gt;kube-scheduler&lt;/code&gt; kubeconfig into place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kube-scheduler.yaml&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kube-scheduler.service&lt;/code&gt; systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Start the Controller Services
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# systemctl daemon-reload
# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
# systemctl start kube-apiserver kube-controller-manager kube-scheduler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Allow up to 10 seconds for the Kubernetes API Server to fully initialize.&lt;/p&gt;
&lt;h3&gt;
  
  
  Enable HTTP Health Checks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Why Do We Need to Enable HTTP Health Checks? 

&lt;ul&gt;
&lt;li&gt;In Kelsey Hightower’s original Kubernetes the Hard Way guide, he uses a Google Cloud Platform (GCP) load balancer. The load balancer needs to be able to perform health checks against the Kubernetes API to measure the health status of API nodes. The GCP load balancer cannot easily perform health checks over HTTPS, so the guide instructs us to set up a proxy server to allow these health checks to be performed over HTTP. Since we are using Nginx as our load balancer, we don’t actually need to do this, but it is good practice. This exercise will help you understand the methods used in the original guide.&lt;/li&gt;
&lt;li&gt;Part of Kelsey Hightower's original Kubernetes the Hard Way guide involves setting up an nginx proxy on each controller to provide access to the Kubernetes API &lt;code&gt;/healthz&lt;/code&gt; endpoint over HTTP. This lesson explains the reasoning behind that step and guides you through the process of implementing the HTTP &lt;code&gt;/healthz&lt;/code&gt; proxy. You can set up a basic nginx proxy for the healthz endpoint by first installing nginx.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
&lt;p&gt;The &lt;code&gt;/healthz&lt;/code&gt; API server endpoint does not require authentication by default.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Install a basic web server to handle HTTP health checks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yum install epel-release  nginx -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an nginx configuration for the health check proxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt; /etc/nginx/conf.d/kubernetes.default.svc.cluster.local.conf &amp;lt;&amp;lt;EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start and enable nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# systemctl enable --now nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cluster-info --kubeconfig admin.kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Kubernetes control plane is running at https://127.0.0.1:6443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the nginx HTTP health check proxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 02 May 2021 04:19:29 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: c43f32eb-e038-457f-9474-571d43e5c325
X-Kubernetes-Pf-Prioritylevel-Uid: 8ba5908f-5569-4330-80fd-c643e7512366

ok
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Remember to run the above commands on each controller node: &lt;code&gt;kubecon01&lt;/code&gt;, &lt;code&gt;kubecon02&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  RBAC for Kubelet Authorization
&lt;/h2&gt;

&lt;p&gt;One of the necessary steps in setting up a new Kubernetes cluster from scratch is to assign permissions that allow the Kubernetes API to access various functionality within the worker kubelets. This lesson guides you through the process of creating a ClusterRole and binding it to the &lt;code&gt;kubernetes&lt;/code&gt; user so that those permissions will be in place. After completing this lesson, your cluster will have the necessary role-based access control configuration to allow the cluster's API to access kubelet functionality such as logs and metrics.&lt;/p&gt;

&lt;p&gt;In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This tutorial sets the Kubelet &lt;code&gt;--authorization-mode&lt;/code&gt; flag to &lt;code&gt;Webhook&lt;/code&gt;. Webhook mode uses the &lt;a href="https://kubernetes.io/docs/admin/authorization/#checking-api-access" rel="noopener noreferrer"&gt;SubjectAccessReview&lt;/a&gt; API to determine authorization.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.&lt;/p&gt;

&lt;p&gt;Create the &lt;code&gt;system:kube-apiserver-to-kubelet&lt;/code&gt; &lt;a href="https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole" rel="noopener noreferrer"&gt;ClusterRole&lt;/a&gt; with permissions to access the Kubelet API and perform most common tasks associated with managing pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Kubernetes API Server authenticates to the Kubelet as the &lt;code&gt;kubernetes&lt;/code&gt; user using the client certificate as defined by the &lt;code&gt;--kubelet-client-certificate&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;Bind the &lt;code&gt;system:kube-apiserver-to-kubelet&lt;/code&gt; ClusterRole to the &lt;code&gt;kubernetes&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up a Kube API Frontend Load Balancer
&lt;/h3&gt;

&lt;p&gt;In order to achieve redundancy for your Kubernetes cluster, you will need to load balance usage of the Kubernetes API across multiple control nodes. In this lesson, you will learn how to create a simple nginx server to perform this balancing. After completing this lesson, you will be able to interact with both control nodes of your Kubernetes cluster through the nginx load balancer. Here are the commands you can use to set up the nginx load balancer. Run these on the server that you have designated as your load balancer server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ssh root@192.168.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: we will use the nginx stream module. For simplicity, we will build a Docker image that contains all of the necessary module configuration. First, let's install Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yum install -y yum-utils
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# yum-config-manager --enable docker-ce-nightly
# yum install docker-ce docker-ce-cli containerd.io -y  
# systemctl enable --now docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the Load Balancer
&lt;/h3&gt;

&lt;p&gt;Now we will create a directory to hold everything necessary to build the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir nginx &amp;amp;&amp;amp; cd nginx 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set up some environment variables for the load balancer config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CONTROLLER0_IP=192.168.0.1
# CONTROLLER1_IP=192.168.0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
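&lt;p&gt;As a quick aside, the next step only picks these variables up because the heredoc delimiter is unquoted, so the shell expands them while writing the file. A minimal stand-alone sketch (the scratch path &lt;code&gt;/tmp/k8s-demo.conf&lt;/code&gt; is illustrative):&lt;/p&gt;

```shell
# Stand-alone demo: with an unquoted heredoc delimiter (EOF, not 'EOF'),
# the shell expands $CONTROLLER0_IP / $CONTROLLER1_IP at generation time,
# so the rendered file contains literal IP addresses.
CONTROLLER0_IP=192.168.0.1
CONTROLLER1_IP=192.168.0.2

cat << EOF > /tmp/k8s-demo.conf
upstream kubernetes {
    server $CONTROLLER0_IP:6443;
    server $CONTROLLER1_IP:6443;
}
EOF

cat /tmp/k8s-demo.conf
```

If you quoted the delimiter (`<< 'EOF'`), the file would contain the literal string `$CONTROLLER0_IP` instead, and nginx would fail to resolve it.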



&lt;p&gt;Create the load balancer nginx config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt; EOF | sudo tee k8s.conf
   stream {
    upstream kubernetes {
        least_conn;
        server $CONTROLLER0_IP:6443;
        server $CONTROLLER1_IP:6443;
     }
    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create the main nginx config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt; EOF |  tee nginx.conf 
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}


#mail {
#       # See sample authentication script at:
#       # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#       # auth_http localhost/auth.php;
#       # pop3_capabilities "TOP" "USER";
#       # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#       server {
#               listen     localhost:110;
#               protocol   pop3;
#               proxy      on;
#       }
#
#       server {
#               listen     localhost:143;
#               protocol   imap;
#               proxy      on;
#       }
#}
include /etc/nginx/tcpconf.d/*;
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's create a Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt; EOF |  tee Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y &amp;amp;&amp;amp; apt-get upgrade -y &amp;amp;&amp;amp; apt-get install -y nginx &amp;amp;&amp;amp; mkdir -p  /etc/nginx/tcpconf.d
RUN rm -rf /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
ADD k8s.conf /etc/nginx/tcpconf.d/
CMD ["nginx", "-g", "daemon off;"]
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's build the image and run the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# docker build -t nginx .
# docker run -d --network host --name nginx --restart unless-stopped nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make an HTTP request for the Kubernetes version info:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Bootstrapping the Kubernetes Worker Nodes
&lt;/h1&gt;

&lt;p&gt;In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed on each node: &lt;a href="https://github.com/opencontainers/runc" rel="noopener noreferrer"&gt;runc&lt;/a&gt;, &lt;a href="https://github.com/containernetworking/cni" rel="noopener noreferrer"&gt;container networking plugins&lt;/a&gt;, &lt;a href="https://github.com/containerd/containerd" rel="noopener noreferrer"&gt;containerd&lt;/a&gt;, &lt;a href="https://kubernetes.io/docs/admin/kubelet" rel="noopener noreferrer"&gt;kubelet&lt;/a&gt;, and &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/proxies" rel="noopener noreferrer"&gt;kube-proxy&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Are the Kubernetes Worker Nodes?
&lt;/h3&gt;

&lt;p&gt;Kubernetes worker nodes are responsible for the actual work of running container applications managed by Kubernetes. “The Kubernetes node has the services necessary to run application containers and be managed from the master systems.” You can find more information about Kubernetes worker nodes in the Kubernetes documentation:&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Worker Node Components
&lt;/h3&gt;

&lt;p&gt;Each Kubernetes worker node consists of the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubelet

&lt;ul&gt;
&lt;li&gt;Controls each worker node, providing the APIs that are used by the control plane to manage nodes and pods, and interacts with the container runtime to manage containers&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Kube-proxy

&lt;ul&gt;
&lt;li&gt;Manages iptables rules on the node to provide virtual network access to pods.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Container runtime

&lt;ul&gt;
&lt;li&gt;Downloads images and runs containers. Two examples of container runtimes are Docker and containerd (Kubernetes the Hard Way uses containerd).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;The commands in this lab must be run on each worker instance: &lt;code&gt;worknode01&lt;/code&gt;, &lt;code&gt;worknode02&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning a Kubernetes Worker Node
&lt;/h2&gt;

&lt;p&gt;Install the OS dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# yum install socat conntrack ipset -y 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The socat binary enables support for the &lt;code&gt;kubectl port-forward&lt;/code&gt; command.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Disable Swap
&lt;/h3&gt;

&lt;p&gt;By default the kubelet will fail to start if &lt;a href="https://help.ubuntu.com/community/SwapFaq" rel="noopener noreferrer"&gt;swap&lt;/a&gt; is enabled. It is &lt;a href="https://github.com/kubernetes/kubernetes/issues/7294" rel="noopener noreferrer"&gt;recommended&lt;/a&gt; that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.&lt;/p&gt;

&lt;p&gt;Verify if swap is enabled:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo swapon --show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the output is empty, swap is not enabled. If swap is enabled, run the following commands to disable swap immediately and keep it disabled across reboots:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# swapoff -a 
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
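&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; one-liner above comments out any &lt;code&gt;fstab&lt;/code&gt; line containing &lt;code&gt; swap &lt;/code&gt; so swap stays off after a reboot. A minimal stand-alone sketch of the same expression, using a throwaway file rather than the real &lt;code&gt;/etc/fstab&lt;/code&gt;:&lt;/p&gt;

```shell
# Build a throwaway fstab so we don't touch the real /etc/fstab.
cat > /tmp/fstab.demo << 'EOF'
/dev/sda1 / xfs defaults 0 0
/dev/sda2 swap swap defaults 0 0
EOF

# Same expression as above: prefix any line containing " swap " with '#'.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# The root filesystem line is untouched; the swap line is now commented out.
cat /tmp/fstab.demo
```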



&lt;h3&gt;
  
  
  Download and Install Worker Binaries
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
  https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the installation directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the worker binaries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir containerd
# tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
# tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
# tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
# mv runc.amd64 runc
# chmod +x crictl kubectl kube-proxy kubelet runc 
# mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
# mv containerd/bin/* /bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure containerd
&lt;/h3&gt;

&lt;p&gt;Create the containerd configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir -p /etc/containerd/
# cat &amp;lt;&amp;lt; EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;containerd.service&lt;/code&gt; systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure the Kubelet
&lt;/h3&gt;

&lt;p&gt;Kubelet is the Kubernetes agent which runs on each worker node. Acting as a middleman between the Kubernetes control plane and the underlying container runtime, it coordinates the running of containers on the worker node. In this lesson, we will configure our systemd service for kubelet. After completing this lesson, you should have a systemd service configured and ready to run on each worker node. You can configure the kubelet service like so. Run these commands on both worker nodes. Set a HOSTNAME environment variable that will be used to generate your config files. Make sure you set the HOSTNAME appropriately for each worker node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
# mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
# mv ca.pem /var/lib/kubernetes/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kubelet-config.yaml&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt;EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kubelet.service&lt;/code&gt; systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
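&lt;p&gt;The double backslashes in the heredoc above are intentional: in an unquoted heredoc the shell reduces &lt;code&gt;\\&lt;/code&gt; to a single &lt;code&gt;\&lt;/code&gt;, which systemd then reads as a line continuation in &lt;code&gt;ExecStart&lt;/code&gt;. A tiny stand-alone sketch (scratch file and flags are illustrative):&lt;/p&gt;

```shell
# Demo: inside an unquoted heredoc, "\\" collapses to one backslash,
# producing the systemd line continuation we want in the unit file.
cat << EOF > /tmp/unit.demo
ExecStart=/usr/local/bin/kubelet \\
  --v=2
EOF

# The written file ends its first line with a single trailing backslash.
cat /tmp/unit.demo
```

Had we written a single `\` in the heredoc, the shell would have treated it as escaping the newline and joined the two lines into one.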



&lt;h3&gt;
  
  
  Configure the Kubernetes Proxy
&lt;/h3&gt;

&lt;p&gt;Kube-proxy is an important component of each Kubernetes worker node. It is responsible for providing network routing to support Kubernetes networking components. In this lesson, we will configure our kube-proxy systemd service. Since this is the last of the three worker node services that we need to configure, we will also go ahead and start all of our worker node services once we're done. Finally, we will complete some steps to verify that our cluster is set up properly and functioning as expected so far. After completing this lesson, you should have two Kubernetes worker nodes up and running, and they should be able to successfully register themselves with the cluster. You can configure the kube-proxy service like so. Run these commands on both worker nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kube-proxy-config.yaml&lt;/code&gt; configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt;EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;kube-proxy.service&lt;/code&gt; systemd unit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt;EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start the worker services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# systemctl daemon-reload
# systemctl enable containerd kubelet kube-proxy
# systemctl start containerd kubelet kube-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;Finally, verify that both workers have registered themselves with the cluster. Log in to one of your control nodes and run the following. First, create a directory on both controller nodes for kubectl to hold the certificate and config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mkdir -p $HOME/.kube
# cp -i admin.kubeconfig $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 STATUS     ROLES    AGE     VERSION
worknode01.k8s.com   NotReady   &amp;lt;none&amp;gt;   5m28s   v1.21.0
worknode02.k8s.com   NotReady   &amp;lt;none&amp;gt;   5m31s   v1.21.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Don't worry about the &lt;code&gt;NotReady&lt;/code&gt; status; we will fix this when we set up networking. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Configuring kubectl for Remote Access
&lt;/h1&gt;

&lt;p&gt;In this lab you will generate a kubeconfig file for the &lt;code&gt;kubectl&lt;/code&gt; command line utility based on the &lt;code&gt;admin&lt;/code&gt; user credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is Kubectl?
&lt;/h3&gt;

&lt;p&gt;Kubectl is the Kubernetes command line tool. It allows us to interact with Kubernetes clusters from the command line. We will set up kubectl to allow remote access from our machine in order to manage the cluster remotely. To do this, we will generate a local kubeconfig that authenticates as the admin user and accesses the Kubernetes API through the load balancer.&lt;/p&gt;

&lt;p&gt;There are a few steps to configuring a local kubectl installation for managing a remote cluster. This lesson will guide you through that process. After completing this lesson, you should have a local kubectl installation that is capable of running kubectl commands against your remote Kubernetes cluster. Run the commands in this lab from the same directory used to generate the admin client certificates. Let's configure kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cd /k8s
# mkdir -p $HOME/.kube
# cp -i admin.kubeconfig $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# KUBERNETES_PUBLIC_ADDRESS=192.168.0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Admin Kubernetes Configuration File
&lt;/h2&gt;

&lt;p&gt;Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.&lt;/p&gt;

&lt;p&gt;Generate a kubeconfig file suitable for authenticating as the &lt;code&gt;admin&lt;/code&gt; user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
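&lt;p&gt;For orientation, the four commands above produce a kubeconfig roughly like the following sketch (certificate data elided; the server IP reflects the &lt;code&gt;KUBERNETES_PUBLIC_ADDRESS&lt;/code&gt; load balancer address set earlier, and the client certificate paths are wherever &lt;code&gt;admin.pem&lt;/code&gt; and &lt;code&gt;admin-key.pem&lt;/code&gt; live on your machine):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-the-hard-way
  cluster:
    certificate-authority-data: <base64 of ca.pem>   # embedded by --embed-certs=true
    server: https://192.168.0.3:6443                 # KUBERNETES_PUBLIC_ADDRESS
users:
- name: admin
  user:
    client-certificate: /k8s/admin.pem               # path, since --embed-certs was not used here
    client-key: /k8s/admin-key.pem
contexts:
- name: kubernetes-the-hard-way
  context:
    cluster: kubernetes-the-hard-way
    user: admin
current-context: kubernetes-the-hard-way
```

You can inspect the real result at any time with `kubectl config view`.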



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;Check the version of the remote Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the nodes in the remote Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                 STATUS     ROLES    AGE     VERSION
worknode01.k8s.com   NotReady   &amp;lt;none&amp;gt;   5m28s   v1.21.0
worknode02.k8s.com   NotReady   &amp;lt;none&amp;gt;   5m31s   v1.21.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Don't worry about the &lt;code&gt;NotReady&lt;/code&gt; status; we will fix this when we set up networking. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Provisioning Pod Network Routes
&lt;/h1&gt;

&lt;p&gt;In this lab you will use &lt;a href="https://docs.projectcalico.org/getting-started/kubernetes/" rel="noopener noreferrer"&gt;Calico&lt;/a&gt; to provide pod networking.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There are &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this" rel="noopener noreferrer"&gt;other ways&lt;/a&gt; to implement the Kubernetes networking model.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  The Kubernetes Networking Model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What Problems Does the Networking Model Solve?

&lt;ul&gt;
&lt;li&gt;How will containers communicate with each other?&lt;/li&gt;
&lt;li&gt;What if the containers are on different hosts (worker nodes)?&lt;/li&gt;
&lt;li&gt;How will containers communicate with services?&lt;/li&gt;
&lt;li&gt;How will containers be assigned unique IP addresses? What port(s) will be used?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Docker Model
&lt;/h3&gt;

&lt;p&gt;Docker allows containers to communicate with one another using a virtual network bridge configured on the host. Each host has its own virtual network serving all of the containers on that host. But what about containers on different hosts? We have to proxy traffic from the host to the containers, making sure no two containers use the same port on a host. The Kubernetes networking model was created in response to the Docker model. It was designed to improve on some of the limitations of the Docker model&lt;/p&gt;

&lt;h3&gt;
  
  
  The Kubernetes Networking Model
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;One virtual network for the whole cluster.&lt;/li&gt;
&lt;li&gt;Each pod has a unique IP within the cluster.&lt;/li&gt;
&lt;li&gt;Each service has a unique IP that is in a different range than pod IPs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cluster Network Architecture
&lt;/h3&gt;

&lt;p&gt;Some Important CIDR ranges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster CIDR

&lt;ul&gt;
&lt;li&gt;IP range used to assign IPs to pods in the cluster. In this course, we’ll be using a cluster CIDR of 10.200.0.0/16&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Service Cluster IP Range

&lt;ul&gt;
&lt;li&gt;IP range for services in the cluster. This should not overlap with the cluster CIDR range! In this course, our service cluster IP range is 10.32.0.0/24.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Pod CIDR

&lt;ul&gt;
&lt;li&gt;IP range for pods on a specific worker node. This range should fall within the cluster CIDR but not overlap with the pod CIDR of any other worker node. In this course, our networking plugin will automatically handle IP allocation to nodes, so we do not need to manually set a pod CIDR.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Install Calico Networking on Kubernetes
&lt;/h3&gt;

&lt;p&gt;We are now ready to set up networking in our Kubernetes cluster using Calico. This lesson guides you through the process of installing Calico in the cluster. It also shows you how to test your cluster network to make sure that everything is working as expected so far. After completing this lesson, you should have a functioning cluster network within your Kubernetes cluster. First, log in to both worker nodes and enable IP forwarding:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# sysctl net.ipv4.conf.all.forwarding=1
# echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in to a machine with remote kubectl access, then install Calico:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: it can take 10 to 15 minutes for the network to come up.&lt;br&gt;
Now Calico is installed, but we need to test our network to make sure everything is working. First, make sure the Calico pods are up and running:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl get pods -n kube-system 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
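

&lt;p&gt;Rather than polling manually, you can wait for the Calico pods to become ready before proceeding. This is a sketch that assumes the default &lt;code&gt;k8s-app=calico-node&lt;/code&gt; label used by the Calico manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=900s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;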



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;Next, we want to test that pods can connect to each other and that they can connect to services. We will set up two Nginx pods and a service for those two pods. Then, we will create a busybox pod and use it to test connectivity to both Nginx pods and the service. First, create an Nginx deployment with 2 replicas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cat &amp;lt;&amp;lt;EOF | kubectl apply  -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOFnginx.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, create a service for that deployment so that we can test connectivity to services as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl expose deployment/nginx 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's start up another pod. We will use this pod to test our networking. We will test whether we can connect to the other pods and services from this pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl run busybox --image=radial/busyboxplus:curl --command -- sleep 3600
# POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's get the IP addresses of our two Nginx pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl get ep nginx 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
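

&lt;p&gt;If you would rather capture the two pod IPs in variables than copy them by hand, a jsonpath query against the endpoints object works. This is a sketch; the indexes assume both Nginx pods are ready and listed under the first subset:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IP1=$(kubectl get ep nginx -o jsonpath='{.subsets[0].addresses[0].ip}')
# IP2=$(kubectl get ep nginx -o jsonpath='{.subsets[0].addresses[1].ip}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;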



&lt;p&gt;Now let's make sure the busybox pod can connect to the Nginx pods on both of those IP addresses&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl exec $POD_NAME -- curl &amp;lt;first nginx pod IP address&amp;gt;
# kubectl exec $POD_NAME -- curl &amp;lt;second nginx pod IP address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both commands should return some HTML with the title "Welcome to Nginx!" This means that we can successfully connect to other pods. Now let's verify that we can connect to services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's see if we can access the service from the busybox pod!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl exec $POD_NAME -- curl &amp;lt;nginx service IP address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should also return HTML with the title "Welcome to Nginx!" This means that we have successfully reached the Nginx service from inside a pod and that our networking configuration is working!&lt;br&gt;
Now that we have networking set up in the cluster, we need to clean up the objects that were created to test it. These objects could get in the way or become confusing in later lessons, so it is a good idea to remove them from the cluster before proceeding. After completing this lesson, your networking should still be in place, but the pods and services that were used to test it will be cleaned up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl get deploy 
# kubectl delete deployment nginx
# kubectl delete svc nginx
# kubectl delete pod busybox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  DNS in a Kubernetes Pod Network
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides a DNS service to be used by pods within the network.&lt;/li&gt;
&lt;li&gt;Configures containers to use the DNS service to perform DNS lookups. For example:

&lt;ul&gt;
&lt;li&gt;You can access services using DNS names assigned to them.&lt;/li&gt;
&lt;li&gt;You can access other pods using DNS names&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Deploying the DNS Cluster Add-on
&lt;/h1&gt;

&lt;p&gt;In this lab you will deploy the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noopener noreferrer"&gt;DNS add-on&lt;/a&gt; which provides DNS based service discovery, backed by &lt;a href="https://coredns.io/" rel="noopener noreferrer"&gt;CoreDNS&lt;/a&gt;, to applications running inside the Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The DNS Cluster Add-on
&lt;/h2&gt;

&lt;p&gt;Deploy the &lt;code&gt;coredns&lt;/code&gt; cluster add-on:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the pods created by the &lt;code&gt;coredns&lt;/code&gt; deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -l k8s-app=kube-dns -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It may take a few minutes for the pods to come up.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;coredns-8494f9c688-j97h2   1/1     Running   5          3m31s
coredns-8494f9c688-wjn4n   1/1     Running   1          3m31s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;busybox&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the &lt;code&gt;busybox&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -l run=busybox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Retrieve the full name of the &lt;code&gt;busybox&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Execute a DNS lookup for the &lt;code&gt;kubernetes&lt;/code&gt; service inside the &lt;code&gt;busybox&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -ti $POD_NAME -- nslookup kubernetes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
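

&lt;p&gt;You can also confirm that the fully qualified service name resolves, using the cluster domain shown in the output above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;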



&lt;h2&gt;
  
  
  Smoke Test
&lt;/h2&gt;

&lt;p&gt;In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.&lt;br&gt;
Now we want to run some basic smoke tests to make sure everything in our cluster is working correctly. We will test the following features: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data encryption&lt;/li&gt;
&lt;li&gt;Deployments&lt;/li&gt;
&lt;li&gt;Port forwarding&lt;/li&gt;
&lt;li&gt;Logs&lt;/li&gt;
&lt;li&gt;Exec&lt;/li&gt;
&lt;li&gt;Services&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Data Encryption
&lt;/h2&gt;

&lt;p&gt;In this section you will verify the ability to &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted" rel="noopener noreferrer"&gt;encrypt secret data at rest&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Create a generic secret:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Goal: 

&lt;ul&gt;
&lt;li&gt;Verify that we can encrypt secret data at rest. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Strategy:

&lt;ul&gt;
&lt;li&gt;Create a generic secret in the cluster. &lt;/li&gt;
&lt;li&gt;Dump the raw data from etcd and verify that it is encrypted.
Earlier, we set up a data encryption config to allow Kubernetes to encrypt sensitive data. In this lesson, we will smoke test that functionality by creating some secret data and verifying that it is stored in an encrypted format in etcd. After completing this lesson, you will have verified that your cluster can successfully encrypt sensitive data.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Print a hexdump of the &lt;code&gt;kubernetes-the-hard-way&lt;/code&gt; secret stored in etcd:&lt;br&gt;
Log in to one of your controller servers, and get the raw data for the test secret from etcd&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#  ETCDCTL_API=3 etcdctl get   \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/etcd/ca.pem \
   --cert=/etc/etcd/kubernetes.pem \
   --key=/etc/etcd/kubernetes-key.pem\
    /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31  3a ea 5f 64 1f 22 63 ac  |:v1:key1:._d."c.|
00000050  e5 a0 2d 7f 1e cd e3 03  64 a0 8e 7f cf 58 db 50  |..-.....d....X.P|
00000060  d7 d0 12 a1 31 2e 72 53  e3 51 de 31 53 96 d7 3f  |....1.rS.Q.1S..?|
00000070  71 f5 e3 3f 07 bc 33 56  55 ed 9c 67 6a 91 77 18  |q..?..3VU..gj.w.|
00000080  52 bb ad 61 64 76 43 df  00 b5 aa 7e 8e cb 16 e9  |R..advC....~....|
00000090  9b 5a 21 04 49 37 63 a5  6c df 09 b7 2b 5c 96 69  |.Z!.I7c.l...+\.i|
000000a0  02 03 42 02 93 7d 42 57  c9 8d 28 2d 1c 9d dd 2b  |..B..}BW..(-...+|
000000b0  a3 69 fa ca c8 8f a0 0e  66 c8 5b 5a 40 29 80 0d  |.i......f.[Z@)..|
000000c0  06 c3 56 87 27 ba d2 19  a6 b0 e6 b5 70 b3 18 02  |..V.'.......p...|
000000d0  69 ed ae b1 4d 03 be 92  08 9e 20 62 41 cd e6 a4  |i...M..... bA...|
000000e0  8c e0 fd b0 5f 44 11 a1  e0 99 a4 61 71 b2 c2 98  |...._D.....aq...|
000000f0  b1 f3 bf 48 a5 26 11 8c  9e 4e 12 7a 81 f4 20 11  |...H.&amp;amp;...N.z.. .|
00000100  05 0d db 62 82 53 2c d9  71 0d 9f af d7 e2 b6 94  |...b.S,.q.......|
00000110  4c 67 98 2e 66 21 77 5e  ea 4d f5 23 6c d4 4b 56  |Lg..f!w^.M.#l.KV|
00000120  58 a7 f1 3b 23 8d 5b 45  14 2c 05 3a a9 90 95 a4  |X..;#.[E.,.:....|
00000130  9a 5f 06 cc 42 65 b3 31  d8 9c 78 a9 f1 da a2 81  |._..Be.1..x.....|
00000140  5a a6 f6 d8 7c 2e 8c 13  f0 30 b1 25 ab 6e bb 2f  |Z...|....0.%.n./|
00000150  cd 7f fd 44 98 64 97 9b  31 0a                    |...D.d..1.|
0000015a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The etcd key should be prefixed with &lt;code&gt;k8s:enc:aescbc:v1:key1&lt;/code&gt;, which indicates the &lt;code&gt;aescbc&lt;/code&gt; provider was used to encrypt the data with the &lt;code&gt;key1&lt;/code&gt; encryption key.&lt;/p&gt;
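

&lt;p&gt;As a scripted alternative to reading the hexdump by eye, you can search the raw value for the expected prefix. This is a sketch using the same etcdctl flags as above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ETCDCTL_API=3 etcdctl get \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/etcd/ca.pem \
   --cert=/etc/etcd/kubernetes.pem \
   --key=/etc/etcd/kubernetes-key.pem \
   /registry/secrets/default/kubernetes-the-hard-way | strings | grep "k8s:enc:aescbc:v1:key1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;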

&lt;h3&gt;
  
  
  Deployments
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goal:

&lt;ul&gt;
&lt;li&gt;Verify that we can create a deployment and that it can successfully create pods. &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Strategy:

&lt;ul&gt;
&lt;li&gt;Create a simple deployment. &lt;/li&gt;
&lt;li&gt;Verify that the deployment successfully creates a pod.
Deployments are one of the powerful orchestration tools offered by Kubernetes. In this lesson, we will make sure that deployments are working in our cluster. We will verify that we can create a deployment, and that the deployment is able to successfully stand up a new pod and container.
In this section you will verify the ability to create and manage &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noopener noreferrer"&gt;Deployments&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Create a deployment for the &lt;a href="https://nginx.org/en/" rel="noopener noreferrer"&gt;nginx&lt;/a&gt; web server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create deployment nginx --image=nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the pod created by the &lt;code&gt;nginx&lt;/code&gt; deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -l app=nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx-6799fc88d8-vtz4c   1/1     Running   0          21s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Port Forwarding
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goal:

&lt;ul&gt;
&lt;li&gt;Verify that we can use port forwarding to access pods remotely&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Strategy:

&lt;ul&gt;
&lt;li&gt;Use kubectl port-forward to set up port forwarding for an Nginx pod&lt;/li&gt;
&lt;li&gt;Access the pod remotely with curl.
In this section you will verify the ability to access applications remotely using &lt;a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noopener noreferrer"&gt;port forwarding&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Retrieve the full name of the &lt;code&gt;nginx&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Forward port &lt;code&gt;8080&lt;/code&gt; on your local machine to port &lt;code&gt;80&lt;/code&gt; of the &lt;code&gt;nginx&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl port-forward $POD_NAME 8080:80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Forwarding from 127.0.0.1:8080 -&amp;gt; 80
Forwarding from [::1]:8080 -&amp;gt; 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a new terminal make an HTTP request using the forwarding address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --head http://127.0.0.1:8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:29:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Switch back to the previous terminal and stop the port forwarding to the &lt;code&gt;nginx&lt;/code&gt; pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Forwarding from 127.0.0.1:8080 -&amp;gt; 80
Forwarding from [::1]:8080 -&amp;gt; 80
Handling connection for 8080
^C
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Logs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goal:

&lt;ul&gt;
&lt;li&gt;Verify that we can get container logs with kubectl logs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Strategy: 

&lt;ul&gt;
&lt;li&gt;Get the logs from the Nginx pod container.
When managing a cluster, it is often necessary to access container logs to check their health and diagnose issues. Kubernetes offers access to container logs via the kubectl logs command. In this section you will verify the ability to &lt;a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="noopener noreferrer"&gt;retrieve container logs&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Print the &lt;code&gt;nginx&lt;/code&gt; pod logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs $POD_NAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2021/10/26 18:07:29 [notice] 1#1: start worker processes
2021/10/26 18:07:29 [notice] 1#1: start worker process 30
2021/10/26 18:07:29 [notice] 1#1: start worker process 31
127.0.0.1 - - [26/Oct/2021:18:20:44 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exec
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Goal: 

&lt;ul&gt;
&lt;li&gt;Verify that we can run commands in a container with kubectl exec&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Strategy: 

&lt;ul&gt;
&lt;li&gt;Use kubectl exec to run a command in the Nginx pod container.
The kubectl exec command is a powerful management tool that allows us to run commands inside of Kubernetes-managed containers. In order to verify that our cluster is set up correctly, we need to make sure that kubectl exec is working. 
In this section you will verify the ability to &lt;a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container" rel="noopener noreferrer"&gt;execute commands in a container&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Print the nginx version by executing the &lt;code&gt;nginx -v&lt;/code&gt; command in the &lt;code&gt;nginx&lt;/code&gt; container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -ti $POD_NAME -- nginx -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nginx version: nginx/1.21.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Goal:

&lt;ul&gt;
&lt;li&gt;Verify that we can create and access services.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Strategy: 

&lt;ul&gt;
&lt;li&gt;Create a NodePort service to expose the Nginx deployment.&lt;/li&gt;
&lt;li&gt;Access the service remotely using the NodePort.
In order to make sure that the cluster is set up correctly, we need to ensure that services can be created and accessed appropriately. In this lesson, we will smoke test our cluster's ability to create and access services by creating a simple testing service and accessing it using a node port. If we can successfully create the service and use it to access our Nginx pod, then we will know that our cluster is able to correctly handle services!
In this section you will verify the ability to expose applications using a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Service&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Expose the &lt;code&gt;nginx&lt;/code&gt; deployment using a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noopener noreferrer"&gt;NodePort&lt;/a&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl expose deployment nginx --port 80 --type NodePort
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Retrieve the node port assigned to the &lt;code&gt;nginx&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
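

&lt;p&gt;The EXTERNAL_IP used below is the address of one of your worker nodes. On this bare-metal setup you can look it up from the node object; this is a sketch that assumes the first node's InternalIP is reachable from your machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXTERNAL_IP=$(kubectl get nodes \
  --output=jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;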



&lt;p&gt;Make an HTTP request using the external IP address of one of your worker nodes (&lt;code&gt;EXTERNAL_IP&lt;/code&gt;) and the &lt;code&gt;nginx&lt;/code&gt; node port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -I http://${EXTERNAL_IP}:${NODE_PORT}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;output&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:31:52 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Cleaning Up
&lt;/h1&gt;

&lt;p&gt;In this lab you will delete the objects created during smoke testing.&lt;br&gt;
Now that we have finished smoke testing the cluster, it is a good idea to clean up the objects that we created for testing. In this lesson, we will remove the objects that were created in the cluster in order to perform the smoke testing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# kubectl delete secret kubernetes-the-hard-way 
# kubectl delete svc nginx 
# kubectl delete deployment nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>docker</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
