Haider Raed

k8s the hard way on CentOS

Information

Technical Writer: Haider Raed

Kubernetes The Hard Way

This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you, then check out Google Kubernetes Engine or the Getting Started Guides. The companion repo is k8s-the-hard-way.

Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.

The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!

Target Audience

The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.

Cluster Details

Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.

Labs

Prerequisites

  • You need six CentOS VMs
    • A compatible Linux host. The Kubernetes project provides generic instructions.
    • 2 GB or more of RAM per machine (any less will leave little room for your apps).
    • 2 CPUs or more.
    • Full network connectivity between all machines in the cluster (public or private network is fine).
    • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
    • Swap disabled. You MUST disable swap in order for the kubelet to work properly.

You can see the lab diagram below. In your case you only need to change the IPs to match your machines, set each machine's hostname, and map the hostnames to your machines' IPs in /etc/hosts (a hostname sketch follows the hosts file example below).

(diagram screenshot)

Editing the hosts file

Note: change the IPs below to match your own IP range.

# cat <<EOF >> /etc/hosts
192.168.0.1 kubecon01.k8s.com
192.168.0.2 kubecon02.k8s.com
192.168.0.5 worknode01.k8s.com
192.168.0.6 worknode02.k8s.com
192.168.0.3 api_loadbalancer.k8s.com
EOF
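For the hostname part of the diagram, a minimal sketch, assuming the names shown above; run the matching command on each machine (kubecon01 is used here as the example):

# set this machine's FQDN (repeat on each VM with its own name from the diagram)
hostnamectl set-hostname kubecon01.k8s.com

# confirm the change
hostnamectl status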

Install some packages on each machine that will help you:

# yum install bash-completion vim telnet -y 

Make sure the firewalld service is stopped and disabled:

# systemctl disable --now firewalld

Make sure SELinux is disabled or set to permissive mode:

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Installing the Client Tools

In this lab you will install the command line utilities required to complete this tutorial: cfssl, cfssljson, and kubectl.
In later lessons we will configure kubectl to work with the cluster remotely.

Install CFSSL

The cfssl and cfssljson command line utilities will be used to provision a PKI Infrastructure and generate TLS certificates.

Download and install cfssl and cfssljson:

# wget https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl
# wget https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
# chmod +x cfssl cfssljson
# sudo mv cfssl cfssljson /usr/local/bin/

Verification

Verify cfssl and cfssljson version 1.4.1 or higher is installed:

cfssl version

output

Version: 1.4.1
Runtime: go1.12.12
cfssljson --version

output

Version: 1.4.1
Runtime: go1.12.12

Install kubectl

The kubectl command line utility is used to interact with the Kubernetes API Server. Download and install kubectl from the official release binaries:

# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
# chmod +x kubectl
# sudo mv kubectl /usr/local/bin/

Verification

Verify kubectl version 1.21.0 or higher is installed:

# kubectl version --client

output

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}

Provisioning a CA and Generating TLS Certificates

In this lab you will provision a PKI Infrastructure using CloudFlare's PKI toolkit, cfssl, then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.

Why Do We Need a CA and TLS Certificates?

Note: In this section, we will be provisioning a certificate authority (CA). We will then use the CA to generate several certificates

Certificates

Certificates are used to confirm (authenticate) identity. They are used to prove that you are who you say you are.

Certificate Authority

A certificate authority provides the ability to confirm that a certificate is valid. A certificate authority can be used to validate any certificate that was issued using that certificate authority. Kubernetes uses certificates for a variety of security functions, and the different parts of our cluster will validate certificates using the certificate authority. In this section, we will generate all of these certificates and copy the necessary files to the servers that need them.
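As an optional illustration of what validating a certificate against the CA means, once ca.pem and a component certificate such as admin.pem exist (both are generated below), openssl can check the chain:

# verify that a certificate was signed by our CA (run from the k8s directory)
openssl verify -CAfile ca.pem admin.pem

# show who the certificate identifies and who issued it
openssl x509 -in admin.pem -noout -subject -issuer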

What Certificates Do We Need?

  • Client Certificates
    • These certificates provide client authentication for various users: admin, kube-controller-manager, kube-proxy, kube-scheduler, and the kubelet client on each worker node.
  • Kubernetes API Server Certificate
    • This is the TLS certificate for the Kubernetes API.
  • Service Account Key Pair
    • Kubernetes uses a certificate to sign service account tokens, so we need to provide a certificate for that purpose.

Provisioning the Certificate Authority

In order to generate the certificates needed by Kubernetes, you must first provision a certificate authority. This lesson will guide you through the process of provisioning a new certificate authority for your Kubernetes cluster. After completing this lesson, you should have a certificate authority, which consists of two files: ca-key.pem and ca.pem.

Generating Client Certificates

Now that you have provisioned a certificate authority for the Kubernetes cluster, you are ready to begin generating certificates. The first set of certificates you will need to generate consists of the client certificates used by various Kubernetes components. In this lesson, we will generate the following client certificates: admin, kubelet (one for each worker node), kube-controller-manager, kube-proxy, and kube-scheduler. After completing this lesson, you will have the client certificate files which you will need later to set up the cluster. Here are the commands used in the demo. The command blocks surrounded by curly braces can be entered as a single command.

Certificate Authority

In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates. You will generate the CA configuration file, certificate, and private key. Let's create a directory that will contain all the certificates:
# mkdir k8s
# cd k8s 

Use this command to generate the certificate authority. Include the opening and closing curly braces to run this entire block as a single command.

{

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

}

Results:

ca-key.pem
ca.pem

Client and Server Certificates

In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user.

The Admin Client Certificate

Generate the admin client certificate and private key:

{

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

}

Results:

admin-key.pem
admin.pem

The Kubelet Client Certificates

Kubernetes uses a special-purpose authorization mode called Node Authorizer, that specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.

Generate the kubelet client certificates. Be sure to enter your actual machines' values for all four of the variables at the top:

(diagram screenshot)

# WORKER0_HOST=worknode01.k8s.com
# WORKER0_IP=192.168.0.5
# WORKER1_HOST=worknode02.k8s.com
# WORKER1_IP=192.168.0.6
for instance in worknode01.k8s.com worknode02.k8s.com; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done

Results:

worknode01.k8s.com-key.pem
worknode01.k8s.com.pem
worknode02.k8s.com-key.pem
worknode02.k8s.com.pem
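Optionally, you can confirm that these certificates meet the Node Authorizer requirements described above, i.e. an organization of system:nodes and a common name of system:node:<nodeName>:

# subject should show O = system:nodes, CN = system:node:worknode01.k8s.com
openssl x509 -in worknode01.k8s.com.pem -noout -subject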

The Controller Manager Client Certificate

Generate the kube-controller-manager client certificate and private key:

{

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

}

Results:

kube-controller-manager-key.pem
kube-controller-manager.pem

The Kube Proxy Client Certificate

Generate the kube-proxy client certificate and private key:

{

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

}

Results:

kube-proxy-key.pem
kube-proxy.pem

The Scheduler Client Certificate

Generate the kube-scheduler client certificate and private key:

{

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

}

Results:

kube-scheduler-key.pem
kube-scheduler.pem

The Kubernetes API Server Certificate

The API load balancer address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
We have generated all of the client certificates our Kubernetes cluster will need, but we also need a server certificate for the Kubernetes API. In this lesson, we will generate one, signed with all of the hostnames and IPs that may be used later in order to access the Kubernetes API. After completing this lesson, you will have a Kubernetes API server certificate in the form of two files called kubernetes-key.pem and kubernetes.pem.

Here are the commands used in the demo. Be sure to replace all the placeholder values in CERT_HOSTNAME with the real values from your machines.

(diagram screenshot)

# CERT_HOSTNAME=10.32.0.1,<controller node 1 Private IP>,<controller node 1 hostname>,<controller node 2 Private IP>,<controller node 2 hostname>,<API load balancer Private IP>,<API load balancer hostname>,127.0.0.1,localhost,kubernetes.default

For this lab's diagram, that becomes:

# CERT_HOSTNAME=10.32.0.1,192.168.0.1,kubecon01.k8s.com,192.168.0.2,kubecon02.k8s.com,192.168.0.3,api_loadbalancer.k8s.com,127.0.0.1,localhost,kubernetes.default

Generate the Kubernetes API Server certificate and private key:

{
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${CERT_HOSTNAME} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

}

The Kubernetes API server is automatically assigned the kubernetes internal dns name, which will be linked to the first IP address (10.32.0.1) from the address range (10.32.0.0/24) reserved for internal cluster services during the control plane bootstrapping lab.

Results:

kubernetes-key.pem
kubernetes.pem
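Optionally, confirm that every hostname and IP from CERT_HOSTNAME made it into the certificate's subject alternative names:

# list the SANs embedded in the API server certificate
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"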

The Service Account Key Pair

Kubernetes provides the ability for service accounts to authenticate using tokens. It uses a key pair to provide signatures for those tokens. In this lesson, we will generate a certificate that will be used as that key pair. After completing this lesson, you will have a certificate ready to be used as a service account key pair in the form of two files: service-account-key.pem and service-account.pem. Here are the commands used.

The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.

Generate the service-account certificate and private key:

{

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

}

Results:

service-account-key.pem
service-account.pem

Distribute the Client and Server Certificates

Copy the appropriate certificates and private keys to each worker instance:
Now that all of the necessary certificates have been generated, we need to move the files onto the appropriate servers. In this lesson, we will copy the necessary certificate files to each of our servers. After completing this lesson, your controller and worker nodes should each have the certificate files which they need. Here are the commands used in the demo. Be sure to replace the placeholders with the actual values from your servers. Move certificate files to the worker nodes:

# scp ca.pem $WORKER0_HOST-key.pem $WORKER0_HOST.pem root@$WORKER0_HOST:~/
# scp ca.pem $WORKER1_HOST-key.pem $WORKER1_HOST.pem root@$WORKER1_HOST:~/

Move certificate files to the controller nodes:

# scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem root@kubecon01.k8s.com:~/
# scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem root@kubecon02.k8s.com:~/

Generating Kubernetes Configuration Files for Authentication

In this lab you will generate Kubernetes configuration files, also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.

What Are Kubeconfigs and Why Do We Need Them?

  • Kubeconfigs
    • A Kubernetes configuration file, or kubeconfig, is a file that stores “information about clusters, users, namespaces, and authentication mechanisms.” It contains the configuration data needed to connect to and interact with one or more Kubernetes clusters. You can find more information about kubeconfigs in the Kubernetes documentation. Kubeconfigs contain information such as:
      • The location of the cluster you want to connect to
      • What user you want to authenticate as
      • Data needed in order to authenticate, such as tokens or client certificates
    • You can even define multiple contexts in a kubeconfig file, allowing you to easily switch between multiple clusters.

      Why Do We Need Kubeconfigs?

      Each component of the cluster needs a kubeconfig so that it can locate the Kubernetes API Server and authenticate to it.

  • How to Generate a Kubeconfig
    • Kubeconfigs can be generated using kubectl:
      • kubectl config set-cluster: set up the configuration for the location of the cluster.
      • kubectl config set-credentials: set the username and client certificate that will be used to authenticate.
      • kubectl config set-context default: set up the default context.
      • kubectl config use-context default: set the current context to the configuration we provided.

What Kubeconfigs Do We Need to Generate?

  • We will need several Kubeconfig files for various components of the Kubernetes cluster:
    • Kubelet (one for each worker node)
    • Kube-proxy
    • Kube-controller-manager
    • Kube-scheduler
    • Admin
  • The next step in building a Kubernetes cluster the hard way is to generate the kubeconfigs which will be used by the various services that make up the cluster. In this lesson, we will generate these kubeconfigs. After completing this lesson, you should have a set of kubeconfigs which you will need later in order to configure the Kubernetes cluster. Here are the commands used in the demo. Be sure to replace the placeholders with actual values from your machines. Create an environment variable to store the address of the Kubernetes API, and set it to the IP of your load balancer. In our diagram the load balancer IP is 192.168.0.3, as shown below.

(diagram screenshot)

# KUBERNETES_PUBLIC_ADDRESS=192.168.0.3

Client Authentication Configs

In this section you will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients and the admin user.

The kubelet Kubernetes Configuration File

When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes Node Authorizer.

The following commands must be run in the same directory used to generate the SSL certificates during the Generating TLS Certificates lab.

Generate a kubeconfig file for each worker node:

for instance in worknode01.k8s.com worknode02.k8s.com; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done

Results:

worknode01.k8s.com.kubeconfig
worknode02.k8s.com.kubeconfig
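If you want to see what one of the generated files contains (cluster address, user, and embedded certificates), you can optionally inspect it with kubectl; the certificate data is redacted in the output:

# show the structure of a generated kubeconfig
kubectl config view --kubeconfig=worknode01.k8s.com.kubeconfig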

The kube-proxy Kubernetes Configuration File

Generate a kubeconfig file for the kube-proxy service:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Results:

kube-proxy.kubeconfig

The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the kube-controller-manager service:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

Results:

kube-controller-manager.kubeconfig

The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the kube-scheduler service:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

Results:

kube-scheduler.kubeconfig

The admin Kubernetes Configuration File

Generate a kubeconfig file for the admin user:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}

Results:

admin.kubeconfig

Distribute the Kubernetes Configuration Files

Copy the appropriate kubelet and kube-proxy kubeconfig files to each worker instance:

for instance in worknode01.k8s.com worknode02.k8s.com; do
    scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

Copy the appropriate kube-controller-manager and kube-scheduler kubeconfig files to each controller instance:

for instance in kubecon01.k8s.com kubecon02.k8s.com; do 
    scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Generating the Data Encryption Config and Key

Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.

In this lab you will generate an encryption key and an encryption config suitable for encrypting Kubernetes Secrets.

What Is the Kubernetes Data Encryption Config?

  • Kubernetes Secret Encryption

    • Kubernetes supports the ability to encrypt secret data at rest. This means that secrets are encrypted so that they are never stored on disk in plain text. This feature is important for security, but in order to use it we need to provide Kubernetes with an encryption key. We will generate an encryption key and put it into a configuration file. We will then copy that file to our Kubernetes controller servers.
    • In order to make use of Kubernetes' ability to encrypt sensitive data at rest, you need to provide Kubernetes with an encryption key using a data encryption config file. This lesson walks you through the process of creating an encryption key and storing it in the necessary file, as well as showing how to copy that file to your Kubernetes controllers. After completing this lesson, you should have a valid Kubernetes data encryption config file, and there should be a copy of that file on each of your Kubernetes controller servers.

The Encryption Key

Here are the commands used in the demo.

Generate an encryption key:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
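The aescbc provider used below expects a 32-byte key, which is why 32 random bytes are base64 encoded here; an optional sanity check on the value you just generated:

# should print 32 (the decoded key length in bytes)
echo "$ENCRYPTION_KEY" | base64 -d | wc -c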

The Encryption Config File

Create the encryption-config.yaml encryption config file:

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Distribute the Kubernetes Encryption Config

Copy the encryption-config.yaml encryption config file to each controller instance:

for instance in kubecon01.k8s.com kubecon02.k8s.com; do
     scp encryption-config.yaml ${instance}:~/
done

Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a two node etcd cluster (one member on each controller node) and configure it for high availability and secure remote access.

What Is etcd?

“etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines.” etcd provides a way to store data across a distributed cluster of machines and makes sure the data is synchronized across all machines. You can find more information, as well as the etcd source code, in the etcd GitHub repository.

How Is etcd Used in Kubernetes?

Kubernetes uses etcd to store all of its internal data about cluster state. This data needs to be stored, but it also needs to be reliably synchronized across all controller nodes in the cluster. etcd fulfills that purpose. We will need to install etcd on each of our Kubernetes controller nodes and create an etcd cluster that includes all of those controller nodes. You can find more information on managing an etcd cluster for Kubernetes here k8setcd

Creating the etcd Cluster

Before you can stand up controllers for a Kubernetes cluster, you must first build an etcd cluster across your Kubernetes control nodes. This lesson provides a demonstration of how to set up an etcd cluster in preparation for bootstrapping Kubernetes. After completing this lesson, you should have a working etcd cluster that consists of your Kubernetes control nodes. Here are the commands used in the demo (note that these have to be run on both controller servers, with a few differences between them):

Download and Install the etcd Binaries

Download the official etcd release binaries from the etcd GitHub project:

wget "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"

Extract and install the etcd server and the etcdctl command line utility:

# tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
# mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
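Optionally confirm that the binaries are installed and on the PATH:

# both should report version 3.4.15
etcd --version
etcdctl version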

Configure the etcd Server

# mkdir -p /etc/etcd /var/lib/etcd
# chmod 700 /var/lib/etcd
# cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

Set up the following environment variables.

# ETCD_NAME=$(hostname -s)
# INTERNAL_IP=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)

Set up the following environment variables. Be sure you replace all of the placeholders with their corresponding real values.

You can see the lab diagram below; in your case you only need to change the IPs for these variables.

(diagram screenshot)

# CONTROLLER0_IP=192.168.0.1
# CONTROLLER0_host=kubecon01
# CONTROLLER1_IP=192.168.0.2
# CONTROLLER1_host=kubecon02

Create the etcd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${CONTROLLER0_host}=https://${CONTROLLER0_IP}:2380,${CONTROLLER1_host}=https://${CONTROLLER1_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the etcd Server

# systemctl daemon-reload
# systemctl enable etcd
# systemctl start etcd

Remember to run the above commands on each controller node: kubecon01, kubecon02 .

Verification

List the etcd cluster members:

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

output

19e6cf768d9d542e, started, kubecon02, https://192.168.0.2:2380, https://192.168.0.2:2379, false
508e54ff346cdb88, started, kubecon01, https://192.168.0.1:2380, https://192.168.0.1:2379, false
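You can also check that each member is healthy and answering on its client port; this is an optional check that reuses the same certificates as the member list command:

sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://192.168.0.1:2379,https://192.168.0.2:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem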

Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane across two compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.

Prerequisites

The commands in this lab must be run on each controller instance: kubecon01, kubecon02

Provision the Kubernetes Control Plane

The first step in bootstrapping a new Kubernetes control plane is to install the necessary binaries on the controller servers. We will walk through the process of downloading and installing the binaries on both Kubernetes controllers. This will prepare your environment for the lessons that follow, in which we will configure these binaries to run as systemd services. You can install the control plane binaries on each control node like this

# mkdir -p /etc/kubernetes/config

Download and Install the Kubernetes Controller Binaries

Download the official Kubernetes release binaries

# wget "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"

Let's make the binaries executable:

# chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl 

Let's move the binaries to /usr/local/bin:

# mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Configure the Kubernetes API Server

The Kubernetes API server provides the primary interface for the Kubernetes control plane and the cluster as a whole. When you interact with Kubernetes, you are nearly always doing it through the Kubernetes API server. This lesson will guide you through the process of configuring the kube-apiserver service on your two Kubernetes control nodes. After completing this lesson, you should have a systemd unit set up to run kube-apiserver as a service on each Kubernetes control node. You can configure the Kubernetes API server like so

# mkdir -p /var/lib/kubernetes/
# mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/

Set some environment variables that will be used to create the systemd unit file. Make sure you replace the placeholders with their actual values

# INTERNAL_IP=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
# CONTROLLER0_IP=192.168.0.1
# KUBERNETES_PUBLIC_ADDRESS=$(/sbin/ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1)
# CONTROLLER1_IP=192.168.0.2

Generate the kube-apiserver unit file for systemd:

# cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://${CONTROLLER0_IP}:2379,https://${CONTROLLER1_IP}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


Configure the Kubernetes Controller Manager

Move the kube-controller-manager kubeconfig into place:

sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/

Create the kube-controller-manager.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Scheduler

Move the kube-scheduler kubeconfig into place:

sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/

Create the kube-scheduler.yaml configuration file:

cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

Create the kube-scheduler.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the Controller Services

# systemctl daemon-reload
# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
# systemctl start kube-apiserver kube-controller-manager kube-scheduler

Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
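A quick optional check that all three services came up before moving on (run on each controller):

# each line should print "active"
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler

# if any service failed, its journal usually shows why
journalctl -u kube-apiserver --no-pager | tail -n 20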

Enable HTTP Health Checks

  • Why Do We Need to Enable HTTP Health Checks?
    • In Kelsey Hightower’s original Kubernetes the Hard Way guide, he uses a Google Cloud Platform (GCP) load balancer. The load balancer needs to be able to perform health checks against the Kubernetes API to measure the health status of API nodes. The GCP load balancer cannot easily perform health checks over HTTPS, so the guide instructs us to set up a proxy server to allow these health checks to be performed over HTTP. Since we are using nginx as our load balancer, we don’t actually need to do this, but it is good practice and will help you understand the methods used in the original guide.
    • Part of Kelsey Hightower's original guide involves setting up an nginx proxy on each controller to provide access to the Kubernetes API /healthz endpoint over HTTP. This lesson explains the reasoning behind the inclusion of that step and guides you through the process of implementing the HTTP /healthz proxy. You can set up a basic nginx proxy for the healthz endpoint by first installing nginx.

The /healthz API server endpoint does not require authentication by default.

Install a basic web server to handle HTTP health checks:

# yum install epel-release  nginx -y

Create an nginx configuration for the health check proxy:

cat > /etc/nginx/conf.d/kubernetes.default.svc.cluster.local.conf <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

Start and enable nginx:

# systemctl enable --now nginx

Verification

kubectl cluster-info --kubeconfig admin.kubeconfig
output

Kubernetes control plane is running at https://127.0.0.1:6443

Test the nginx HTTP health check proxy:

curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz

output

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 02 May 2021 04:19:29 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: c43f32eb-e038-457f-9474-571d43e5c325
X-Kubernetes-Pf-Prioritylevel-Uid: 8ba5908f-5569-4330-80fd-c643e7512366

ok

Remember to run the above commands on each controller node: kubecon01, kubecon02

RBAC for Kubelet Authorization

One of the necessary steps in setting up a new Kubernetes cluster from scratch is to assign permissions that allow the Kubernetes API to access various functionality within the worker kubelets. This lesson guides you through the process of creating a ClusterRole and binding it to the kubernetes user so that those permissions will be in place. After completing this lesson, your cluster will have the necessary role-based access control configuration to allow the cluster's API to access kubelet functionality such as logs and metrics. You can configure RBAC for kubelet authorization with these commands. Note that these commands only need to be run on one control node. Create a role with the necessary permissions

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.

This tutorial sets the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.

The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.

Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

The Kubernetes API Server authenticates to the Kubelet as the kubernetes user using the client certificate as defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
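Optionally confirm that the role and binding exist:

kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig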

Setting up a Kube API Frontend Load Balancer

In order to achieve redundancy for your Kubernetes cluster, you will need to load balance usage of the Kubernetes API across multiple control nodes. In this lesson, you will learn how to create a simple nginx server to perform this balancing. After completing this lesson, you will be able to interact with both control nodes of your kubernetes cluster using the nginx load balancer. Here are the commands you can use to set up the nginx load balancer. Run these on the server that you have designated as your load balancer server:

# ssh root@192.168.0.3

Note: we will use the nginx stream module. To make this easy, we will create a Docker image that contains all the module configuration. First, let's install Docker:

# yum install -y yum-utils
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# yum-config-manager --enable docker-ce-nightly
# yum install docker-ce docker-ce-cli containerd.io -y  
# systemctl enable --now docker

Config the Loadbalancer

Now we will create a directory that will contain everything necessary to build the image:

# mkdir nginx && cd nginx 

Set up some environment variables for the load balancer config file:

# CONTROLLER0_IP=192.168.0.1
# CONTROLLER1_IP=192.168.0.2

Create the load balancer nginx config file

# cat << EOF | sudo tee k8s.conf
   stream {
    upstream kubernetes {
        least_conn;
        server $CONTROLLER0_IP:6443;
        server $CONTROLLER1_IP:6443;
     }
    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
EOF

Let's create an nginx config:

# cat << EOF |  tee nginx.conf 
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
        worker_connections 768;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_disable "msie6";

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}


#mail {
#       # See sample authentication script at:
#       # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#       # auth_http localhost/auth.php;
#       # pop3_capabilities "TOP" "USER";
#       # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#       server {
#               listen     localhost:110;
#               protocol   pop3;
#               proxy      on;
#       }
#
#       server {
#               listen     localhost:143;
#               protocol   imap;
#               proxy      on;
#       }
#}
include /etc/nginx/tcpconf.d/*;
EOF

Let's create a Dockerfile:

# cat << EOF |  tee Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y nginx && mkdir -p  /etc/nginx/tcpconf.d
RUN rm -rf /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/
ADD k8s.conf /etc/nginx/tcpconf.d/
CMD ["nginx", "-g", "daemon off;"]
EOF

Let's build and run the container:

# docker build -t nginx .
# docker run -d --network host --name nginx --restart unless-stopped nginx
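Optionally verify that the container is up and that nginx is listening on port 6443 on the load balancer host (the container uses the host network):

# the nginx container should show as Up
docker ps --filter name=nginx

# port 6443 should be listening on the host
ss -tlnp | grep 6443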

Make an HTTP request for the Kubernetes version info:

curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version

output

{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.

What Are the Kubernetes Worker Nodes?

Kubernetes worker nodes are responsible for the actual work of running container applications managed by Kubernetes. “The Kubernetes node has the services necessary to run application containers and be managed from the master systems.” You can find more information about Kubernetes worker nodes in the Kubernetes documentation:

Kubernetes Worker Node Components

Each Kubernetes worker node consists of the following components

  • Kubelet
    • Controls each worker node, providing the APIs that are used by the control plane to manage nodes and pods, and interacts with the container runtime to manage containers
  • Kube-proxy
    • Manages iptables rules on the node to provide virtual network access to pods.
  • Container runtime
    • Downloads images and runs containers. Two examples of container runtimes are Docker and containerd (Kubernetes the Hard Way uses containerd).

Prerequisites

The commands in this lab must be run on each worker instance: worknode01, worknode02.

Provisioning a Kubernetes Worker Node

Install the OS dependencies:

# yum install socat conntrack ipset -y 

The socat binary enables support for the kubectl port-forward command.

Disable Swap

By default the kubelet will fail to start if swap is enabled. It is recommended that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.

Verify if swap is enabled:

sudo swapon --show

If the output is empty, then swap is not enabled. If swap is enabled, run the following commands to disable swap immediately and keep it disabled across reboots:

# swapoff -a 
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Download and Install Worker Binaries

# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
  https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet

Create the installation directories:

# mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

Install the worker binaries:

# mkdir containerd
# tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
# tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
# tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
# mv runc.amd64 runc
# chmod +x crictl kubectl kube-proxy kubelet runc 
# mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
# mv containerd/bin/* /bin/

Configure containerd

Create the containerd configuration file:

# mkdir -p /etc/containerd/
# cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF

Create the containerd.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubelet

Kubelet is the Kubernetes agent which runs on each worker node. Acting as a middleman between the Kubernetes control plane and the underlying container runtime, it coordinates the running of containers on the worker node. In this lesson, we will configure our systemd service for kubelet. After completing this lesson, you should have a systemd service configured and ready to run on each worker node. You can configure the kubelet service like so. Run these commands on both worker nodes. Set a HOSTNAME environment variable that will be used to generate your config files. Make sure you set the HOSTNAME appropriately for each worker node:
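A minimal sketch of setting that variable, assuming the fully qualified names from the diagram; the certificate and kubeconfig file names moved below must match it exactly:

# on worknode01 (use worknode02.k8s.com on the second worker)
HOSTNAME=worknode01.k8s.com

# the files copied earlier should match this name
ls ${HOSTNAME}.pem ${HOSTNAME}-key.pem ${HOSTNAME}.kubeconfig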

# mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
# mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
# mv ca.pem /var/lib/kubernetes/

Create the kubelet-config.yaml configuration file:

# cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF

Create the kubelet.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Enter fullscreen mode Exit fullscreen mode

Configure the Kubernetes Proxy

Kube-proxy is an important component of each Kubernetes worker node. It is responsible for providing the network routing that supports Kubernetes Services. In this lesson, we will configure the kube-proxy systemd service. Since this is the last of the three worker node services we need to configure, we will also start all of the worker node services once we're done. Finally, we will run a few checks to verify that the cluster is set up and functioning as expected so far. After completing this lesson, you should have two Kubernetes worker nodes up and running, and they should successfully register themselves with the cluster. Run these commands on both worker nodes:

# mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Enter fullscreen mode Exit fullscreen mode

Create the kube-proxy-config.yaml configuration file:

# cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

Enter fullscreen mode Exit fullscreen mode

Create the kube-proxy.service systemd unit file:

# cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Enter fullscreen mode Exit fullscreen mode

Start the Worker Services

# systemctl daemon-reload
# systemctl enable containerd kubelet kube-proxy
# systemctl start containerd kubelet kube-proxy
Enter fullscreen mode Exit fullscreen mode
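
Before moving on, it can help to confirm on each worker that all three services actually came up (an optional check):

# systemctl is-active containerd kubelet kube-proxy
# journalctl -u kubelet --no-pager | tail -n 20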

Verification

Finally, verify that both workers have registered themselves with the cluster. Log in to one of your controller nodes and run the following. First, create a directory for kubectl to hold the admin kubeconfig (do this on both controller nodes), then list the nodes:

# mkdir -p $HOME/.kube
# cp -i admin.kubeconfig $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# kubectl get nodes
Enter fullscreen mode Exit fullscreen mode

output

NAME                 STATUS     ROLES    AGE     VERSION
worknode01.k8s.com   NotReady   <none>   5m28s   v1.21.0
worknode02.k8s.com   NotReady   <none>   5m31s   v1.21.0
Enter fullscreen mode Exit fullscreen mode

Do not worry about the NotReady status; it will be resolved once we configure pod networking.

Configuring kubectl for Remote Access

In this lab you will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.

What Is Kubectl?

Kubectl is the Kubernetes command line tool. It allows us to interact with Kubernetes clusters from the command line. We will set up kubectl on our own machine to manage the cluster remotely. To do this, we will generate a local kubeconfig that authenticates as the admin user and reaches the Kubernetes API through the load balancer. Run the commands in this lab from the same directory used to generate the admin client certificates (in our case /k8s). After completing this lesson, you should have a local kubectl installation capable of running commands against your remote Kubernetes cluster. Let's configure it:
# cd /k8s
# mkdir -p $HOME/.kube
# cp -i admin.kubeconfig $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
# KUBERNETES_PUBLIC_ADDRESS=192.168.0.3
Enter fullscreen mode Exit fullscreen mode

The Admin Kubernetes Configuration File

Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Generate a kubeconfig file suitable for authenticating as the admin user:

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
Enter fullscreen mode Exit fullscreen mode
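
To quickly confirm the new context was created and selected (an optional check):

# kubectl config get-contexts
# kubectl config current-context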

Verification

Check the version of the remote Kubernetes cluster:

kubectl version
Enter fullscreen mode Exit fullscreen mode

output

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Enter fullscreen mode Exit fullscreen mode

List the nodes in the remote Kubernetes cluster:

kubectl get nodes
Enter fullscreen mode Exit fullscreen mode

output

NAME                 STATUS     ROLES    AGE     VERSION
worknode01.k8s.com   NotReady   <none>   5m28s   v1.21.0
worknode02.k8s.com   NotReady   <none>   5m31s   v1.21.0
Enter fullscreen mode Exit fullscreen mode

Do not worry about the NotReady status; it will be resolved once we configure pod networking.

Provisioning Pod Network Routes

In this lab you will use Calico to implement pod networking.

There are other ways to implement the Kubernetes networking model.

The Kubernetes Networking Model

  • What Problems Does the Networking Model Solve?
    • How will containers communicate with each other?
    • What if the containers are on different hosts (worker nodes)?
    • How will containers communicate with services?
    • How will containers be assigned unique IP addresses? What port(s) will be used?

The Docker Model

Docker allows containers to communicate with one another using a virtual network bridge configured on the host. Each host has its own virtual network serving all of the containers on that host. But what about containers on different hosts? We have to proxy traffic from the host to the containers, making sure no two containers use the same port on a host. The Kubernetes networking model was created in response to the Docker model. It was designed to improve on some of the limitations of the Docker model

The Kubernetes Networking Model

  • One virtual network for the whole cluster.
  • Each pod has a unique IP within the cluster.
  • Each service has a unique IP that is in a different range than pod IPs.

Cluster Network Architecture

Some Important CIDR ranges:

  • Cluster CIDR
    • IP range used to assign IPs to pods in the cluster. In this course, we’ll be using a cluster CIDR of 10.200.0.0/16
  • Service Cluster IP Range
    • IP range for services in the cluster. This should not overlap with the cluster CIDR range! In this course, our service cluster IP range is 10.32.0.0/24.
  • Pod CIDR
    • IP range for pods on a specific worker node. This range should fall within the cluster CIDR but not overlap with the pod CIDR of any other worker node. In this course, our networking plugin will automatically handle IP allocation to nodes, so we do not need to manually set a pod CIDR.

Install Calico Networking on Kubernetes

We will be using Calico to implement networking in our Kubernetes cluster. This lesson guides you through installing Calico and testing the cluster network to make sure everything is working as expected so far. After completing this lesson, you should have a functioning pod network within your Kubernetes cluster. First, log in to both worker nodes and enable IP forwarding:
# sysctl net.ipv4.conf.all.forwarding=1
# echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
Enter fullscreen mode Exit fullscreen mode
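
You can confirm the setting took effect; this should print net.ipv4.conf.all.forwarding = 1:

# sysctl net.ipv4.conf.all.forwarding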

Log in to the machine where you configured kubectl for remote access, then install Calico:

# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml 
Enter fullscreen mode Exit fullscreen mode

Note: it can take 10 to 15 minutes for everything to come up.
Now Calico is installed, but we need to test our network to make sure everything is working. First, make sure the Calico pods are up and running:

# kubectl get pods -n kube-system 
Enter fullscreen mode Exit fullscreen mode
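
If you prefer to watch only the Calico pods come up, you can filter on the k8s-app label (assuming the standard label names used in calico.yaml):

# kubectl get pods -n kube-system -l k8s-app=calico-node -w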

Verification

Next, we want to test that pods can connect to each other and that they can connect to services. We will set up two Nginx pods and a service for those two pods. Then, we will create a busybox pod and use it to test connectivity to both Nginx pods and the service. First, apply a ClusterRoleBinding that binds the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user, so the API server can reach the kubelet API (this is needed later for kubectl exec and kubectl logs):

# cat <<EOF | kubectl apply  -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

Now create an Nginx deployment with 2 replicas:

# cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
     matchLabels:
      run: nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
Enter fullscreen mode Exit fullscreen mode

Next, create a service for that deployment so that we can test connectivity to services as well:

# kubectl expose deployment/nginx 
Enter fullscreen mode Exit fullscreen mode

Now let's start up another pod. We will use this pod to test our networking. We will test whether we can connect to the other pods and services from this pod.

# kubectl run busybox --image=radial/busyboxplus:curl --command -- sleep 3600
# POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
Enter fullscreen mode Exit fullscreen mode
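
Give the busybox pod a moment to pull its image and become Ready before testing; for example (optional):

# kubectl wait --for=condition=Ready pod -l run=busybox --timeout=120s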

Now let's get the IP addresses of our two Nginx pods:

# kubectl get ep nginx 
Enter fullscreen mode Exit fullscreen mode

Now let's make sure the busybox pod can connect to the Nginx pods on both of those IP addresses

# kubectl exec $POD_NAME -- curl <first nginx pod IP address>
# kubectl exec $POD_NAME -- curl <second nginx pod IP address>
Enter fullscreen mode Exit fullscreen mode

Both commands should return some HTML with the title "Welcome to Nginx!" This means that we can successfully connect to other pods. Now let's verify that we can connect to services.

# kubectl get svc
Enter fullscreen mode Exit fullscreen mode

Let's see if we can access the service from the busybox pod!

# kubectl exec $POD_NAME -- curl <nginx service IP address>
Enter fullscreen mode Exit fullscreen mode

This should also return HTML with the title "Welcome to Nginx!" This means that we have successfully reached the Nginx service from inside a pod and that our networking configuration is working!
Now that we have networking set up in the cluster, we need to clean up the objects that were created to test it. These objects could get in the way or cause confusion in later lessons, so it is a good idea to remove them from the cluster before proceeding. After completing this lesson, your networking will still be in place, but the pods and services used to test it will be cleaned up.

# kubectl get deploy 
# kubectl delete deployment nginx
# kubectl delete svc nginx
# kubectl delete pod busybox
Enter fullscreen mode Exit fullscreen mode

DNS in a Kubernetes Pod Network

  • Provides a DNS service to be used by pods within the network.
  • Configures containers to use the DNS service to perform DNS lookups. For example:
    • You can access services using the DNS names assigned to them.
    • You can access other pods using DNS names (see the naming example after this list).
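
For reference, these names follow the pattern <service>.<namespace>.svc.cluster.local, using the cluster.local domain we set in kubelet-config.yaml. Once the DNS add-on below is deployed, you can resolve a service's full name from inside any pod; for example, using the busybox pod created later in this lab:

kubectl exec -ti busybox -- nslookup kube-dns.kube-system.svc.cluster.local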

Deploying the DNS Cluster Add-on

In this lab you will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.

The DNS Cluster Add-on

Deploy the coredns cluster add-on:

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
Enter fullscreen mode Exit fullscreen mode

output

serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
Enter fullscreen mode Exit fullscreen mode

List the pods created by the kube-dns deployment:

kubectl get pods -l k8s-app=kube-dns -n kube-system
Enter fullscreen mode Exit fullscreen mode

It can take a few minutes for the pods to come up.

output

coredns-8494f9c688-j97h2   1/1     Running   5          3m31s
coredns-8494f9c688-wjn4n   1/1     Running   1          3m31s
Enter fullscreen mode Exit fullscreen mode

Verification

Create a busybox pod:

kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
Enter fullscreen mode Exit fullscreen mode

List the busybox pod you just created:

kubectl get pods -l run=busybox
Enter fullscreen mode Exit fullscreen mode

output

NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          3s
Enter fullscreen mode Exit fullscreen mode

Retrieve the full name of the busybox pod:

POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
Enter fullscreen mode Exit fullscreen mode

Execute a DNS lookup for the kubernetes service inside the busybox pod:

kubectl exec -ti $POD_NAME -- nslookup kubernetes
Enter fullscreen mode Exit fullscreen mode

output

Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
Enter fullscreen mode Exit fullscreen mode

Smoke Test

In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
Now we want to run some basic smoke tests to make sure everything in our cluster is working correctly. We will test the following features:

  • Data encryption
  • Deployments
  • Port forwarding
  • Logs
  • Exec
  • Services

Data Encryption

In this section you will verify the ability to encrypt secret data at rest.

  • Goal:
    • Verify that we can encrypt secret data at rest.
  • Strategy:
    • Create a generic secret in the cluster.
    • Dump the raw data from etcd and verify that it is encrypted.

Earlier, we set up a data encryption config to allow Kubernetes to encrypt sensitive data. In this section, we will smoke test that functionality by creating some secret data and verifying that it is stored in an encrypted format in etcd. Create a generic secret:
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
Enter fullscreen mode Exit fullscreen mode

Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:
Log in to one of your controller servers, and get the raw data for the test secret from etcd

#  ETCDCTL_API=3 etcdctl get \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/etcd/ca.pem \
   --cert=/etc/etcd/kubernetes.pem \
   --key=/etc/etcd/kubernetes-key.pem \
   /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
Enter fullscreen mode Exit fullscreen mode

output

00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31  3a ea 5f 64 1f 22 63 ac  |:v1:key1:._d."c.|
00000050  e5 a0 2d 7f 1e cd e3 03  64 a0 8e 7f cf 58 db 50  |..-.....d....X.P|
00000060  d7 d0 12 a1 31 2e 72 53  e3 51 de 31 53 96 d7 3f  |....1.rS.Q.1S..?|
00000070  71 f5 e3 3f 07 bc 33 56  55 ed 9c 67 6a 91 77 18  |q..?..3VU..gj.w.|
00000080  52 bb ad 61 64 76 43 df  00 b5 aa 7e 8e cb 16 e9  |R..advC....~....|
00000090  9b 5a 21 04 49 37 63 a5  6c df 09 b7 2b 5c 96 69  |.Z!.I7c.l...+\.i|
000000a0  02 03 42 02 93 7d 42 57  c9 8d 28 2d 1c 9d dd 2b  |..B..}BW..(-...+|
000000b0  a3 69 fa ca c8 8f a0 0e  66 c8 5b 5a 40 29 80 0d  |.i......f.[Z@)..|
000000c0  06 c3 56 87 27 ba d2 19  a6 b0 e6 b5 70 b3 18 02  |..V.'.......p...|
000000d0  69 ed ae b1 4d 03 be 92  08 9e 20 62 41 cd e6 a4  |i...M..... bA...|
000000e0  8c e0 fd b0 5f 44 11 a1  e0 99 a4 61 71 b2 c2 98  |...._D.....aq...|
000000f0  b1 f3 bf 48 a5 26 11 8c  9e 4e 12 7a 81 f4 20 11  |...H.&...N.z.. .|
00000100  05 0d db 62 82 53 2c d9  71 0d 9f af d7 e2 b6 94  |...b.S,.q.......|
00000110  4c 67 98 2e 66 21 77 5e  ea 4d f5 23 6c d4 4b 56  |Lg..f!w^.M.#l.KV|
00000120  58 a7 f1 3b 23 8d 5b 45  14 2c 05 3a a9 90 95 a4  |X..;#.[E.,.:....|
00000130  9a 5f 06 cc 42 65 b3 31  d8 9c 78 a9 f1 da a2 81  |._..Be.1..x.....|
00000140  5a a6 f6 d8 7c 2e 8c 13  f0 30 b1 25 ab 6e bb 2f  |Z...|....0.%.n./|
00000150  cd 7f fd 44 98 64 97 9b  31 0a                    |...D.d..1.|
0000015a
Enter fullscreen mode Exit fullscreen mode

The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
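
As an additional optional check, you can confirm that the API server still decrypts the secret transparently by reading it back through kubectl:

# kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' | base64 --decode

This should print mydata, the value stored when the secret was created.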

Deployments

  • Goal:
    • Verify that we can create a deployment and that it can successfully create pods.
  • Strategy:
    • Create a simple deployment.
    • Verify that the deployment successfully creates a pod.

Deployments are one of the most powerful orchestration tools offered by Kubernetes. In this section, we will make sure that Deployments are working in our cluster by verifying that we can create a deployment and that it successfully stands up a new pod and container.

Create a deployment for the nginx web server:

kubectl create deployment nginx --image=nginx
Enter fullscreen mode Exit fullscreen mode

List the pod created by the nginx deployment:

kubectl get pods -l app=nginx
Enter fullscreen mode Exit fullscreen mode

output

nginx-6799fc88d8-vtz4c   1/1     Running   0          21s
Enter fullscreen mode Exit fullscreen mode

Port Forwarding

  • Goal:
    • Verify that we can use port forwarding to access pods remotely
  • Strategy:
    • Use kubectl port-forward to set up port forwarding for an Nginx pod
    • Access the pod remotely with curl.

In this section you will verify the ability to access applications remotely using port forwarding.

Retrieve the full name of the nginx pod:

POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
Enter fullscreen mode Exit fullscreen mode

Forward port 8080 on your local machine to port 80 of the nginx pod:

kubectl port-forward $POD_NAME 8080:80
Enter fullscreen mode Exit fullscreen mode

output

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Enter fullscreen mode Exit fullscreen mode

In a new terminal make an HTTP request using the forwarding address:

curl --head http://127.0.0.1:8080
Enter fullscreen mode Exit fullscreen mode

output

HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:29:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
Enter fullscreen mode Exit fullscreen mode

Switch back to the previous terminal and stop the port forwarding to the nginx pod:

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
^C
Enter fullscreen mode Exit fullscreen mode

Logs

  • Goal:
    • Verify that we can get container logs with kubectl logs.
  • Strategy:
    • Get the logs from the Nginx pod container.

When managing a cluster, it is often necessary to access container logs to check health and diagnose issues. Kubernetes offers access to container logs via the kubectl logs command. In this section you will verify the ability to retrieve container logs.

Print the nginx pod logs:

kubectl logs $POD_NAME
Enter fullscreen mode Exit fullscreen mode

output

2021/10/26 18:07:29 [notice] 1#1: start worker processes
2021/10/26 18:07:29 [notice] 1#1: start worker process 30
2021/10/26 18:07:29 [notice] 1#1: start worker process 31
127.0.0.1 - - [26/Oct/2021:18:20:44 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
Enter fullscreen mode Exit fullscreen mode

Exec

  • Goal:
    • Verify that we can run commands in a container with kubectl exec
  • Strategy:
    • Use kubectl exec to run a command in the Nginx pod container.

The kubectl exec command is a powerful management tool that allows us to run commands inside of Kubernetes-managed containers. In order to verify that our cluster is set up correctly, we need to make sure that kubectl exec is working. In this section you will verify the ability to execute commands in a container.

Print the nginx version by executing the nginx -v command in the nginx container:

kubectl exec -ti $POD_NAME -- nginx -v
Enter fullscreen mode Exit fullscreen mode

output

nginx version: nginx/1.21.3
Enter fullscreen mode Exit fullscreen mode

Services

  • Goal:
    • Verify that we can create and access services.
  • Strategy:
    • Create a NodePort service to expose the Nginx deployment.
    • Access the service remotely using the NodePort.

In order to make sure that the cluster is set up correctly, we need to ensure that services can be created and accessed appropriately. In this section, we will smoke test the cluster's ability to create and access services by creating a simple test service and accessing it using a node port. If we can successfully create the service and use it to reach our nginx pod, then we know that the cluster handles Services correctly.

Expose the nginx deployment using a NodePort service:

kubectl expose deployment nginx --port 80 --type NodePort
Enter fullscreen mode Exit fullscreen mode

Retrieve the node port assigned to the nginx service:

NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
Enter fullscreen mode Exit fullscreen mode

Make an HTTP request using the IP address of one of your worker nodes and the nginx node port. Set EXTERNAL_IP to a worker node address from your /etc/hosts mapping (worknode01 is 192.168.0.5 in this guide):

EXTERNAL_IP=192.168.0.5
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
Enter fullscreen mode Exit fullscreen mode

output

HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:31:52 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
Enter fullscreen mode Exit fullscreen mode

Cleaning Up

In this lab you will delete the compute resources created during this tutorial.
Now that we have finished smoke testing the cluster, it is a good idea to clean up the objects that we created for testing. In this lesson, we will spend a moment removing the objects that were created in our cluster in order to perform the smoke testing.

# kubectl delete secret kubernetes-the-hard-way 
# kubectl delete svc nginx 
# kubectl delete deployment nginx
Enter fullscreen mode Exit fullscreen mode

Top comments (3)

mor-sound

Great lab! But some corrections and advice:

1. Some kubeconfigs and cert files should be deployed to the other controllers (I'll be more specific later this week).

2. CentOS 7 users must run the following on the worker nodes (worknode01, worknode02):

echo 'export KUBECONFIG=/var/lib/kubelet/kubeconfig' >> ~/.bash_profile

3. I still haven't found the right way to access the Nginx pods through the load balancer (maybe I shouldn't?).

Parth Shah

@haydercyber Seems the docs.projectcalico.org/manifests/c... file is not available anymore. Getting the error below:

kubectl apply -f docs.projectcalico.org/manifests/c...

error: unable to read URL "docs.projectcalico.org/manifests/c...", server reported 404 Not Found, status code=404

Can you help me with where I can find the file?

Alok Kumar

Very nice article, lots of learning, keep posting. I also created a full course on Kubernetes in 10hr on YouTube, please have a look and motivate me. Thanks.
youtu.be/toLAU_QPF6o