<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: balajivedagiri</title>
    <description>The latest articles on DEV Community by balajivedagiri (@balajivedagiri).</description>
    <link>https://dev.to/balajivedagiri</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F915686%2F2782e5f2-3131-4243-bfa8-572eedffe744.png</url>
      <title>DEV Community: balajivedagiri</title>
      <link>https://dev.to/balajivedagiri</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/balajivedagiri"/>
    <language>en</language>
    <item>
      <title>Installing Openshift Cluster on vSphere7</title>
      <dc:creator>balajivedagiri</dc:creator>
      <pubDate>Wed, 04 Oct 2023 05:27:59 +0000</pubDate>
      <link>https://dev.to/balajivedagiri/installing-openshift-cluster-on-vsphere7-4n4l</link>
      <guid>https://dev.to/balajivedagiri/installing-openshift-cluster-on-vsphere7-4n4l</guid>
      <description>&lt;h1&gt;
  
  
  Contents
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Generate a pull secret from Red Hat&lt;/li&gt;
&lt;li&gt;Creating the OpenShift cluster&lt;/li&gt;
&lt;li&gt;Fixing the internal image registry&lt;/li&gt;
&lt;li&gt;Deploy a sample nginx application&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  1. Prerequisites
&lt;/h1&gt;

&lt;p&gt;a) Connectivity to vCenter on port 443 from the OpenShift network.&lt;/p&gt;

&lt;p&gt;b) Connectivity to the ESXi hosts on port 443 from the OpenShift network.&lt;/p&gt;

&lt;p&gt;c) An SSH key pair (an existing one can be reused); the public key is passed during cluster creation.&lt;/p&gt;
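
&lt;p&gt;For example, a dedicated key pair can be generated as below (a sketch; the file name is only a suggestion, and any existing key works too):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate a passphrase-less ed25519 key pair
ssh-keygen -t ed25519 -N '' -f ~/.ssh/openshift_key
# The installer prompts for the public key: ~/.ssh/openshift_key.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;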

&lt;p&gt;d) A working DHCP server for the OpenShift cluster nodes.&lt;/p&gt;

&lt;p&gt;e) Two static IPs, one for the API and one for Apps (Ingress), reserved outside the DHCP range from step d.&lt;/p&gt;

&lt;p&gt;f) DNS entries for "api.&amp;lt;cluster-name&amp;gt;.&amp;lt;base-domain&amp;gt;" and "*.apps.&amp;lt;cluster-name&amp;gt;.&amp;lt;base-domain&amp;gt;".&lt;/p&gt;

&lt;p&gt;In our case, we created the following mappings in our DNS:&lt;br&gt;
api.openshift-test01.tanzu.local =&amp;gt; 192.168.144.22&lt;br&gt;
*.apps.openshift-test01.tanzu.local =&amp;gt; 192.168.144.23&lt;/p&gt;
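
&lt;p&gt;If your DNS is BIND-based, the equivalent zone records would look roughly like this (a sketch; adapt the syntax to your DNS server):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;; Hypothetical zone-file entries for the mappings above
api.openshift-test01.tanzu.local.    IN A 192.168.144.22
*.apps.openshift-test01.tanzu.local. IN A 192.168.144.23
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
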
&lt;h1&gt;
  
  
  2. Generate a pull secret from Red Hat
&lt;/h1&gt;

&lt;p&gt;Let's get the pull secret, and also download the installer and client tools.&lt;/p&gt;

&lt;p&gt;a) Register at &lt;a href="https://console.redhat.com/openshift/"&gt;https://console.redhat.com/openshift/&lt;/a&gt; using your personal or work email.&lt;/p&gt;

&lt;p&gt;b) Once logged in, click Create Cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uTo5OMn0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02i6ao9x3o1dfv1bxc7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uTo5OMn0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02i6ao9x3o1dfv1bxc7j.png" alt="Image description" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;c) Choose "Datacenter" and scroll down.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--j0BuQUtC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ept8vft5kk2tgycq4pge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--j0BuQUtC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ept8vft5kk2tgycq4pge.png" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;d) Click on vSphere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TW8ONdA---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7lewh9z618bigfp11j6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TW8ONdA---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g7lewh9z618bigfp11j6.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;e) Click on Automated installation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BnK4E1pB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fdnej5solgx4ytnbzo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BnK4E1pB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fdnej5solgx4ytnbzo7.png" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;f) Download the installer, pull secret, and command-line tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ho6P3nwK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/joq08ypf54r2xghqpqb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ho6P3nwK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/joq08ypf54r2xghqpqb8.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  3. Creating the OpenShift cluster
&lt;/h1&gt;

&lt;p&gt;We use a Linux jump server on the same network as the OpenShift nodes to create the cluster, so the installer can reach the API server and verify the installation without any firewall dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@linux-vm-automation:~/openshift# ls -ltr
total 414864
-rw-r--r-- 1 root root      2783 May 30 17:30 pull-secret.txt
-rw-r--r-- 1 root root  59819571 May 30 17:31 openshift-client-linux.tar.gz
-rw-r--r-- 1 root root 364993703 May 30 17:31 openshift-install-linux.tar.gz
root@linux-vm-automation:~/openshift#
root@linux-vm-automation:~/openshift#
root@linux-vm-automation:~/openshift# tar -xvf openshift-install-linux.tar.gz
README.md
openshift-install
root@linux-vm-automation:~/openshift# ll
total 975252
drwxr-xr-x  2 root root       146 May 30 18:04 ./
drwx------ 22 root root      4096 May 30 18:02 ../
-rw-r--r--  1 root root  59819571 May 30 17:31 openshift-client-linux.tar.gz
-rwxr-xr-x  1 root root 573825024 May  9 18:10 openshift-install*
-rw-r--r--  1 root root 364993703 May 30 17:31 openshift-install-linux.tar.gz
-rw-r--r--  1 root root      2783 May 30 17:30 pull-secret.txt
-rw-r--r--  1 root root       706 May  9 18:10 README.md
root@linux-vm-automation:~/openshift#

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The parameters we passed to the installer are listed below, so have these details ready.&lt;/p&gt;

&lt;p&gt;a) SSH public key.&lt;br&gt;
b) Select vSphere as the platform.&lt;br&gt;
c) vCenter IP address.&lt;br&gt;
d) vCenter username and password with the required privileges.&lt;br&gt;
e) Datacenter.&lt;br&gt;
f) Datastore.&lt;br&gt;
g) Network.&lt;br&gt;
h) VIPs for the API and Ingress.&lt;br&gt;
i) Base domain name.&lt;br&gt;
j) Cluster name.&lt;br&gt;
k) The pull secret copied from the Red Hat console.&lt;br&gt;
&lt;/p&gt;
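
&lt;p&gt;The same parameters can also be captured up front in an install-config.yaml (generated with "./openshift-install create install-config") for repeatable installs. With our values it would look roughly like the sketch below; exact field names can vary between installer versions, so verify against your version's documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
baseDomain: tanzu.local
metadata:
  name: openshift-test01
platform:
  vsphere:
    vCenter: 172.17.22.118
    username: administrator@vsphere.local
    password: '...'
    datacenter: vcenter-datacenter
    defaultDatastore: SSD_Storage
    cluster: tenant-cluster
    network: tenant43-ntw-72a59d1a-398e-4018-8dbd-5afa8ca60d40
    apiVIPs:
    - 192.168.144.22
    ingressVIPs:
    - 192.168.144.23
pullSecret: '...'
sshKey: '...'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;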

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@linux-vm-automation:~/openshift# ./openshift-install create cluster
? SSH Public Key /root/.ssh/id_rsa.pub
? Platform vsphere
? vCenter 172.17.22.118
? Username administrator@vsphere.local
? Password [? for help] *************
INFO Connecting to vCenter 172.17.22.118
INFO Defaulting to only available datacenter: vcenter-datacenter
? Cluster tenant-cluster
? Default Datastore SSD_Storage
? Network tenant43-ntw-72a59d1a-398e-4018-8dbd-5afa8ca60d40
? Virtual IP Address for API 192.168.144.22
? Virtual IP Address for Ingress 192.168.144.23
? Base Domain tanzu.local
? Cluster Name openshift-test01
? Pull Secret [? for help] ******************************************************************************************************************************************************************************************************************INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.13-9.2/builds/413.92.202305021736-0/x86_64/rhcos-413.92.202305021736-0-vmware.x86_64.ova?sha256='
INFO The file was found in cache: /root/.cache/openshift-installer/image_cache/rhcos-413.92.202305021736-0-vmware.x86_64.ova. Reusing...
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s (until 8:22AM) for the Kubernetes API at https://api.openshift-test01.tanzu.local:6443...
INFO API v1.26.3+b404935 up
INFO Waiting up to 30m0s (until 8:35AM) for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 40m0s (until 9:05AM) for the cluster at https://api.openshift-test01.tanzu.local:6443 to initialize...
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/openshift/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift-test01.tanzu.local
INFO Login to the console with user: "kubeadmin", and password: "c9T8a-ALwe9-ZU7D2-ENTDh"
INFO Time elapsed: 44m32s
root@linux-vm-automation:~/openshift#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cluster is created; let's log in and verify.&lt;/p&gt;

&lt;p&gt;The installer output above provides the URL and credentials to log in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/openshift/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift-test01.tanzu.local
INFO Login to the console with user: "kubeadmin", and password: "c9T8a-ALwe9-ZU7D2-ENTDh"
INFO Time elapsed: 44m32s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--df9XN5yP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oiiymweoueh5w6vr7svu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--df9XN5yP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oiiymweoueh5w6vr7svu.png" alt="Image description" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--trK2OlEH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uiek85s5ahgkrnsar2dj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--trK2OlEH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uiek85s5ahgkrnsar2dj.png" alt="Image description" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Log in to the Red Hat console and you should see your cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QQb4FE-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pcud7n7w6kw4wq4n3xwm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QQb4FE-J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pcud7n7w6kw4wq4n3xwm.png" alt="Image description" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Access the cluster using oc or kubectl.&lt;/p&gt;

&lt;p&gt;We already downloaded the oc tool ("openshift-client-linux.tar.gz") from the Red Hat console; extract it and place the binaries in /usr/local/bin/ or a location that you prefer.&lt;br&gt;
&lt;/p&gt;
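
&lt;p&gt;For example (assuming the archive contains the oc and kubectl binaries, as recent releases do):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xvf openshift-client-linux.tar.gz oc kubectl
mv oc kubectl /usr/local/bin/
oc version --client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;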

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export KUBECONFIG=/root/openshift/auth/kubeconfig

root@linux-vm-automation:~/openshift# oc get nodes
NAME                                    STATUS   ROLES                  AGE   VERSION
openshift-test01-pg8s9-master-0         Ready    control-plane,master   35m   v1.26.3+b404935
openshift-test01-pg8s9-master-1         Ready    control-plane,master   35m   v1.26.3+b404935
openshift-test01-pg8s9-master-2         Ready    control-plane,master   34m   v1.26.3+b404935
openshift-test01-pg8s9-worker-0-5c42f   Ready    worker                 14m   v1.26.3+b404935
openshift-test01-pg8s9-worker-0-djzl5   Ready    worker                 15m   v1.26.3+b404935
openshift-test01-pg8s9-worker-0-mtgzh   Ready    worker                 14m   v1.26.3+b404935
root@linux-vm-automation:~/openshift#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  4. Fixing the internal image registry
&lt;/h1&gt;

&lt;p&gt;In a vSphere environment, the OpenShift internal image registry is not available by default, since shareable ReadWriteMany storage cannot be provisioned on vSphere datastores.&lt;/p&gt;

&lt;p&gt;If you try to create a pod with an image that points to the internal image registry,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DSmbDEc7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y3yop99x1h5wsxi80wc4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DSmbDEc7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y3yop99x1h5wsxi80wc4.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3H7JhrFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38j2vvawgxt8qktegtdk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3H7JhrFx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38j2vvawgxt8qktegtdk.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;it will fail as below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ko3EyidP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fpiv3d69a6iq30qy1z8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ko3EyidP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fpiv3d69a6iq30qy1z8.png" alt="Image description" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To fix it, first create a PVC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@linux-vm-automation:~/openshift# cat openshift-image-registry-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
root@linux-vm-automation:~/openshift#

root@linux-vm-automation:~/openshift# oc create -f openshift-image-registry-pvc.yaml -n openshift-image-registry
persistentvolumeclaim/image-registry-storage created
root@linux-vm-automation:~/openshift#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the registry configuration (CR) spec with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry

Change spec.managementState from Removed to Managed.
Change spec.storage from {} to: claim: image-registry-storage

spec:
    managementState: Managed
storage:
      pvc:
        claim: image-registry-storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After updating, it should look like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
    managementState: Managed
storage:
      pvc:
        claim: image-registry-storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
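
&lt;p&gt;Alternatively, the same change can be applied non-interactively with a merge patch (a one-liner sketch of the edit described above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  -p '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":"image-registry-storage"}}}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;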



&lt;p&gt;Once the image registry pod is running, images from the internal image registry should be available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fTU6SMvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8bhu80wll4qqfixx513.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fTU6SMvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l8bhu80wll4qqfixx513.png" alt="Image description" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The example that failed earlier is now running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8yFLKfz2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y50iwolg40d93kwty4ks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8yFLKfz2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y50iwolg40d93kwty4ks.png" alt="Image description" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  5. Deploy a sample nginx application
&lt;/h1&gt;

&lt;p&gt;You should already be familiar with deploying a pod. Below, we created a Deployment using the nginx image and created a Service for it.&lt;/p&gt;
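
&lt;p&gt;A minimal CLI equivalent of what the screenshots show could look like this (a sketch; the project name is illustrative, and depending on your security context constraints a non-root image such as nginxinc/nginx-unprivileged may be required instead of the stock nginx image):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc new-project nginx-demo
oc create deployment nginx --image=nginxinc/nginx-unprivileged
oc expose deployment nginx --port=8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;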

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jIVtVz_J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2fjph9yubg4t5k0ybk1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jIVtVz_J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2fjph9yubg4t5k0ybk1.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create a Route in OpenShift.&lt;/p&gt;

&lt;p&gt;Note: Route is an OpenShift-specific resource; unlike Service, it is not a core Kubernetes object.&lt;/p&gt;
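
&lt;p&gt;From the CLI, a Route can be created from an existing Service, for example (assuming the Service is named nginx; adjust to your Service name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oc expose service nginx
oc get route nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;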

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6yN9iKO1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bdgsff4jtm6v0q2cqrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6yN9iKO1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8bdgsff4jtm6v0q2cqrc.png" alt="Image description" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A_QLCJDT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c0vrp1qkuifd4xkag173.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A_QLCJDT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c0vrp1qkuifd4xkag173.png" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0v3I6Wu1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lj0bxb8l43qtyd1vgzmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0v3I6Wu1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lj0bxb8l43qtyd1vgzmh.png" alt="Image description" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GgyHf30a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20tr3tarp5sked3mq46j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GgyHf30a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20tr3tarp5sked3mq46j.png" alt="Image description" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wQiE0TVj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/my0v2ntg8o7x23dunc96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wQiE0TVj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/my0v2ntg8o7x23dunc96.png" alt="Image description" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lRddiCMN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vd26d917o3iy3c0odbux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lRddiCMN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vd26d917o3iy3c0odbux.png" alt="Image description" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Provisioning an RKE2 (Rancher Kubernetes Engine 2) cluster on vSphere</title>
      <dc:creator>balajivedagiri</dc:creator>
      <pubDate>Sun, 04 Jun 2023 23:07:22 +0000</pubDate>
      <link>https://dev.to/balajivedagiri/provisioning-an-rke2-cluster-on-vsphere-13mh</link>
      <guid>https://dev.to/balajivedagiri/provisioning-an-rke2-cluster-on-vsphere-13mh</guid>
      <description>&lt;p&gt;In this article i will walk you down with steps to create RKE2 cluster on vSphere vCenter from Rancher UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Rancher nodes need to communicate with vSphere vCenter on port 443.&lt;/li&gt;
&lt;li&gt;Rancher nodes need to communicate with the RKE2 cluster nodes on port 22.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Installing packages in the template VM.
&lt;/h2&gt;

&lt;p&gt;Create a new Ubuntu VM, perform the steps below, and later convert it into a template. Rancher will use this template to create the VMs.&lt;/p&gt;

&lt;p&gt;Ensure the following packages are installed in the template:&lt;br&gt;
• curl&lt;br&gt;
• wget&lt;br&gt;
• git&lt;br&gt;
• net-tools&lt;br&gt;
• unzip&lt;br&gt;
• apparmor-parser&lt;br&gt;
• ca-certificates&lt;br&gt;
• cloud-init&lt;br&gt;
• cloud-guest-utils&lt;br&gt;
• cloud-image-utils&lt;br&gt;
• growpart&lt;br&gt;
• cloud-initramfs-growroot&lt;br&gt;
• open-iscsi&lt;br&gt;
• openssh-server&lt;br&gt;
• open-vm-tools&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo apt-get update
sudo apt-get install -y curl wget git net-tools unzip ca-certificates cloud-init cloud-guest-utils cloud-image-utils cloud-initramfs-growroot open-iscsi openssh-server open-vm-tools net-tools apparmor


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  2. Configure the datasource for cloud-init in the template VM.
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Rancher will use cloud-init for things like setting hostname, creating a user, running a script, etc.&lt;/li&gt;
&lt;li&gt;Set the datasource for cloud-init using the command “dpkg-reconfigure cloud-init”.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo dpkg-reconfigure cloud-init


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;Ensure the “NoCloud” datasource is selected, as below. I deselected all other datasources since Rancher only requires “NoCloud”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoc64fqr77i029x56sbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoc64fqr77i029x56sbx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that the change has propagated to the config file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@:~# cat /etc/cloud/cloud.cfg.d/90_dpkg.cfg
# to update this file, run dpkg-reconfigure cloud-init
datasource_list: [ NoCloud ]
root@:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
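
&lt;p&gt;The same datasource setting can also be applied without the interactive dialog, by writing the file directly (a sketch that produces the file shown above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo 'datasource_list: [ NoCloud ]' | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
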
&lt;h2&gt;
  
  
  3. Convert the VM into a template.
&lt;/h2&gt;

&lt;p&gt;Run the script below to scrub the VM (similar to Sysprep on Windows).&lt;br&gt;
Save the contents below to a file and execute it.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

#!/bin/bash
# Cleaning logs.
if [ -f /var/log/audit/audit.log ]; then
  cat /dev/null &amp;gt; /var/log/audit/audit.log
fi
if [ -f /var/log/wtmp ]; then
  cat /dev/null &amp;gt; /var/log/wtmp
fi
if [ -f /var/log/lastlog ]; then
  cat /dev/null &amp;gt; /var/log/lastlog
fi

# Cleaning udev rules.
if [ -f /etc/udev/rules.d/70-persistent-net.rules ]; then
  rm /etc/udev/rules.d/70-persistent-net.rules
fi

# Cleaning the /tmp directories
rm -rf /tmp/*
rm -rf /var/tmp/*

# Cleaning the SSH host keys
rm -f /etc/ssh/ssh_host_*

# Cleaning the machine-id
truncate -s 0 /etc/machine-id
rm /var/lib/dbus/machine-id
ln -s /etc/machine-id /var/lib/dbus/machine-id

# Cleaning the shell history
unset HISTFILE
history -cw
echo &amp;gt; ~/.bash_history
rm -fr /root/.bash_history

# Truncating hostname, hosts, resolv.conf and setting hostname to localhost
truncate -s 0 /etc/{hostname,hosts,resolv.conf}
hostnamectl set-hostname localhost

# Clean cloud-init
cloud-init clean -s -l


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now clone the VM to a template.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Add vSphere vCenter credentials in Rancher.
&lt;/h2&gt;

&lt;p&gt;Log in to Rancher =&amp;gt; click the burger menu =&amp;gt; click Cluster Management =&amp;gt; click Cloud Credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9atiqyfui6h4vibb9gb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9atiqyfui6h4vibb9gb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Create, then click VMware vSphere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsc2b5osekvglwpumalu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frsc2b5osekvglwpumalu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter your vSphere vCenter credentials; we will use the administrator account. For granular permissions, refer to the Rancher documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6zj4h4ocmzmm1sy45ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6zj4h4ocmzmm1sy45ry.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Create RKE2 Cluster in vSphere
&lt;/h2&gt;

&lt;p&gt;Go back to the Rancher homepage and click Create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1r5nqa633wj2yin5jhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1r5nqa633wj2yin5jhq.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ensure you toggle the switch to RKE2 as highlighted below and click on VMware vSphere.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshselmmi7khjktua1pkn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshselmmi7khjktua1pkn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the details,&lt;/p&gt;

&lt;p&gt;We will use Pool1 to create the control plane nodes. Ensure you select the appropriate Data Center/Resource Pool/Datastore/Folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui64gyx01icrf7zpxjmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fui64gyx01icrf7zpxjmm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the template that you created in the earlier steps, along with CPU/memory/networks/etc. like below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g9qau0cjd32hghi09q8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g9qau0cjd32hghi09q8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add another pool for the worker nodes,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaxvxm9rzo5qqsex2c4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaxvxm9rzo5qqsex2c4x.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh7ywucllg0uenhgz7nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzh7ywucllg0uenhgz7nd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill in the information as we did above for the control plane nodes.&lt;/p&gt;

&lt;p&gt;For the sake of simplicity, we will keep the default values for the cluster and click Create,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof4q6ncmmjwojhiwqdtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof4q6ncmmjwojhiwqdtr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cluster creation in progress,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq4yosz3gg64ho0w7lxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq4yosz3gg64ho0w7lxc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the nodes to be bootstrapped and the cluster to be created,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxhyhm3r4gcq48d42z9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxhyhm3r4gcq48d42z9j.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb0bq9cz8g0j0q9q1v2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb0bq9cz8g0j0q9q1v2n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you look at the VM settings, you will see that Rancher has mounted an ISO called user-data.iso.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju2tn5ge3afilbha8z6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju2tn5ge3afilbha8z6f.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you log in to one of the nodes and navigate to /mnt, you will find the user-data and meta-data files used by cloud-init (this is why we selected NoCloud as the data source for cloud-init earlier).&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

&lt;p&gt;root@vsphere-rke2-test01-pool1-4a86ea2d-tf5jq:~# cd /mnt&lt;br&gt;
root@vsphere-rke2-test01-pool1-4a86ea2d-tf5jq:/mnt# ls&lt;br&gt;
meta-data  user-data&lt;br&gt;
root@vsphere-rke2-test01-pool1-4a86ea2d-tf5jq:/mnt# cat meta-data&lt;br&gt;
hostname: vsphere-rke2-test01-pool1-4a86ea2d-tf5jq&lt;br&gt;
root@vsphere-rke2-test01-pool1-4a86ea2d-tf5jq:/mnt#&lt;br&gt;
root@vsphere-rke2-test01-pool1-4a86ea2d-tf5jq:/mnt# cat user-data&lt;/p&gt;
&lt;h1&gt;
  
  
  cloud-config
&lt;/h1&gt;

&lt;p&gt;groups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;staff
hostname: vsphere-rke2-test01-pool1-4a86ea2d-tf5jq
runcmd:&lt;/li&gt;
&lt;li&gt;sh /usr/local/custom_script/install.sh
set_hostname:&lt;/li&gt;
&lt;li&gt;vsphere-rke2-test01-pool1-4a86ea2d-tf5jq
users:&lt;/li&gt;
&lt;li&gt;create_groups: false
groups: staff
lock_passwd: true
name: docker
no_user_group: true&lt;/li&gt;
&lt;/ul&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
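&lt;p&gt;The meta-data file shown above is plain YAML, so the hostname cloud-init will apply can be pulled out with awk as a quick sanity check. The sketch below pipes an inline copy of the file for illustration; on a real node, read /mnt/meta-data instead.&lt;/p&gt;

```shell
# Extract the hostname from a NoCloud meta-data file.
# On a node, replace the printf with: cat /mnt/meta-data
printf 'hostname: vsphere-rke2-test01-pool1-4a86ea2d-tf5jq\n' |
  awk -F': ' '/^hostname:/ {print $2}'
```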
&lt;h2&gt;
  
  
  Add a new worker node
&lt;/h2&gt;


&lt;p&gt;Let's add a new worker node to the existing cluster.&lt;br&gt;
Go to the Rancher home =&amp;gt; click the burger menu =&amp;gt; click Cluster Management =&amp;gt; click on your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpysun34gu0ekzgssg1m5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpysun34gu0ekzgssg1m5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under pool2, which is dedicated to worker nodes, click the plus icon,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7udahvi94kc18s3adfvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7udahvi94kc18s3adfvf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;vCenter events showing the node being created,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vudljfh12eoqljulo54.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vudljfh12eoqljulo54.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Worker node 2 was successfully added.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah0jv4vyc5arfo3p1yvv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah0jv4vyc5arfo3p1yvv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rancher</category>
      <category>kubernetes</category>
      <category>rke</category>
      <category>vsphere</category>
    </item>
    <item>
      <title>Installing and Configuring Elasticsearch/Kibana 8.x with Security</title>
      <dc:creator>balajivedagiri</dc:creator>
      <pubDate>Wed, 10 May 2023 13:17:02 +0000</pubDate>
      <link>https://dev.to/balajivedagiri/installing-and-configuring-elasticsearchkibana-8x-with-security-2e68</link>
      <guid>https://dev.to/balajivedagiri/installing-and-configuring-elasticsearchkibana-8x-with-security-2e68</guid>
      <description>&lt;p&gt;We will be installing,configuring elasticsearch and kibana 8.4, but steps should be same for most versions.&lt;/p&gt;

&lt;p&gt;Our cluster will have 3 master nodes, 3 hot data nodes, 3 warm data nodes and 1 machine learning node.&lt;/p&gt;

&lt;h2&gt;
  
  
&lt;strong&gt;1) Pre-requisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1a) create /var/lib/elasticsearch mount point on all the nodes.
1b) turn off swap on OS(to ensure JVM heap is not swapped out).
1c) since we are using packages to install elasticsearch, ulimits are enforced in systemd unit file /usr/lib/systemd/system/elasticsearch.service.
1d) settings like file descriptors, max processes, max virtual memory size , max file size, etc are controlled from the systemd unit file.
1e) change default value of TCP retransmission timeout value, update the net.ipv4.tcp_retries2 setting in /etc/sysctl.conf to 5, and sysctl -w net.ipv4.tcp_retries2=5.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
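&lt;p&gt;Prerequisites 1b and 1e can be verified afterwards with read-only commands on any Linux node; a quick sketch (nothing here changes system state):&lt;/p&gt;

```shell
# 1b: with swap turned off, /proc/swaps should contain only its header line
cat /proc/swaps
# 1e: should print 5 after the sysctl change described above
cat /proc/sys/net/ipv4/tcp_retries2
```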
&lt;h2&gt;
  
  
  &lt;strong&gt;2) Installing elasticsearch&lt;/strong&gt;
&lt;/h2&gt;


&lt;p&gt;&lt;strong&gt;2a) Import the Elasticsearch PGP key.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2b) Install apt-transport-https package&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install apt-transport-https
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2c) Save the repository definition,&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2d) Update the repo and install the package,&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt update &amp;amp;&amp;amp; apt install elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get install elasticsearch
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  elasticsearch
0 upgraded, 1 newly installed, 0 to remove and 119 not upgraded.
Need to get 0 B/566 MB of archives.
After this operation, 1,170 MB of additional disk space will be used.
Selecting previously unselected package elasticsearch.
(Reading database ... 111616 files and directories currently installed.)
Preparing to unpack .../elasticsearch_8.4.3_amd64.deb ...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Unpacking elasticsearch (8.4.3) ...
Setting up elasticsearch (8.4.3) ...
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : B25meUI2L6WcfTWBNvNp

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token &amp;lt;token-here&amp;gt;'
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2e) Ansible playbook to install the package.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- hosts: elasticsearch
  become: true
  gather_facts: true
  tasks:
  - name: Import the Elasticsearch PGP key
    apt_key:
      url: https://artifacts.elastic.co/GPG-KEY-elasticsearch
      keyring: /usr/share/keyrings/elasticsearch-keyring.gpg
      state: present
  - name: Install apt-transport-https
    apt:
      name: apt-transport-https
      state: present
# Add the Elasticsearch repo to /etc/apt/sources.list.d/elastic-8.x.list;
# apt_repository also runs apt update by default after adding the repo.
  - name: Add the Elasticsearch apt repository
    apt_repository:
      repo: 'deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main'
      state: present
      filename: elastic-8.x
  - name: Install a specific version of elasticsearch
    apt:
      name: elasticsearch=8.4.3
      state: present
      update_cache: yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2f) Enable the service to start automatically on boot,&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3) Generating certificates to enable TLS for transport and http.
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;3a) Generate CA certificate.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Log in to one of the nodes where you installed Elasticsearch and issue the command below to generate the CA certificate. For stronger protection, set a password for the certificate when prompted at the end, and save that password in a secure location so you can use it later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /root/elasticsearch_certs/elasticsearch-test-ca.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-certutil ca --out /root/elasticsearch_certs/elasticsearch-test-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Enter password for elasticsearch-test-ca.p12 :

root@jumperserver:~/elasticsearch_certs# ls
elasticsearch-test-ca.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3b) Generate node certificates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We use node certificates to join nodes to the cluster and for transport layer encryption. Add all of your node details, with DNS name and IP, into a YAML file like the one below,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@jumperserver:~# cat /root/elasticsearch_certs/instances.yaml
instances:
  - name: "test-elastic-master01"
    ip: "10.10.4.6"
    dns: "test-elastic-master01"
  - name: "test-elastic-master02"
    ip: "10.10.4.7"
    dns: "test-elastic-master02"
  - name: "test-elastic-master03"
    ip: "10.10.4.8"
    dns: "test-elastic-master03"
  - name: "test-elastic-hotdata01"
    ip: "10.10.4.2"
    dns: "test-elastic-hotdata01"
  - name: "test-elastic-hotdata02"
    ip: "10.10.4.3"
    dns: "test-elastic-hotdata02"
  - name: "test-elastic-hotdata03"
    ip: "10.10.4.4"
    dns: "test-elastic-hotdata03"
  - name: "test-elastic-warmdata01"
    ip: "10.10.4.11"
    dns: "test-elastic-warmdata01"
  - name: "test-elastic-warmdata02"
    ip: "10.10.4.12"
    dns: "test-elastic-warmdata02"
  - name: "test-elastic-warmdata03"
    ip: "10.10.4.13"
    dns: "test-elastic-warmdata03"
  - name: "test-elastic-ml01"
    ip: "10.10.4.10"
    dns: "test-elastic-ml01"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
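&lt;p&gt;For larger clusters, typing instances.yaml by hand gets tedious. The small sketch below (our own helper, not part of elasticsearch-certutil; node names and IPs are the ones used in this article) prints the entries from a name/IP list, which you can then redirect into instances.yaml:&lt;/p&gt;

```shell
# Print instances.yaml entries from "name ip" pairs; redirect the output
# into instances.yaml once it looks right. The dns field reuses the name.
echo "instances:"
printf '%s\n' \
  "test-elastic-master01 10.10.4.6" \
  "test-elastic-hotdata01 10.10.4.2" \
  "test-elastic-ml01 10.10.4.10" |
  while read -r name ip; do
    printf '  - name: "%s"\n    ip: "%s"\n    dns: "%s"\n' "$name" "$ip" "$name"
  done
```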



&lt;p&gt;Below, you need to enter the CA certificate password that you set in step 3a, and set a password for each and every node certificate (you can use the same password for all nodes, or different ones as your security compliance requires),&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-certutil cert --in /root/elasticsearch_certs/instances.yaml --out /root/elasticsearch_certs/server-cert-bundle.zip --ca /root/elasticsearch_certs/elasticsearch-test-ca.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@elasticsearch-jumperserver:~/elasticsearch_certs# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --in /root/elasticsearch_certs/instances.yaml --out /root/elasticsearch_certs/server-cert-bundle.zip --ca /root/elasticsearch_certs/elasticsearch-test-ca.p12
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.


    * All certificates generated by this tool will be signed by a certificate authority (CA)
      unless the --self-signed command line option is specified.
      The tool can automatically generate a new CA for you, or you can provide your own with
      the --ca or --ca-cert command line options.


By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Enter password for CA (/root/elasticsearch_certs/elasticsearch-test-ca.p12) :
Enter password for test-elastic-master01/test-elastic-master01.p12 :
Enter password for test-elastic-master02/test-elastic-master02.p12 :
Enter password for test-elastic-master03/test-elastic-master03.p12 :
Enter password for test-elastic-hotdata01/test-elastic-hotdata01.p12 :
Enter password for test-elastic-hotdata02/test-elastic-hotdata02.p12 :
Enter password for test-elastic-hotdata03/test-elastic-hotdata03.p12 :
Enter password for test-elastic-warmdata01/test-elastic-warmdata01.p12 :
Enter password for test-elastic-warmdata02/test-elastic-warmdata02.p12 :
Enter password for test-elastic-warmdata03/test-elastic-warmdata03.p12 :
Enter password for test-elastic-ml01/test-elastic-ml01.p12 :

Certificates written to /root/elasticsearch_certs/server-cert-bundle.zip

This file should be properly secured as it contains the private keys for
all instances
After unzipping the file, there will be a directory for each instance.
Each instance has a single PKCS#12 (.p12) file containing the instance
certificate, instance private key and the CA certificate
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
root@elasticsearch-jumperserver:~/elasticsearch_certs#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below, we inspect the generated certificates,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@elasticsearch-jumperserver:~/elasticsearch_certs# ls
elasticsearch-test-ca.p12  instances.yaml  server-cert-bundle.zip
root@elasticsearch-jumperserver:~/elasticsearch_certs#

root@elasticsearch-jumperserver:~/elasticsearch_certs# unzip server-cert-bundle.zip
Archive:  server-cert-bundle.zip
   creating: test-elastic-master01/
  inflating: test-elastic-master01/test-elastic-master01.p12
   creating: test-elastic-master02/
  inflating: test-elastic-master02/test-elastic-master02.p12
   creating: test-elastic-master03/
  inflating: test-elastic-master03/test-elastic-master03.p12
   creating: test-elastic-hotdata01/
  inflating: test-elastic-hotdata01/test-elastic-hotdata01.p12
   creating: test-elastic-hotdata02/
  inflating: test-elastic-hotdata02/test-elastic-hotdata02.p12
   creating: test-elastic-hotdata03/
  inflating: test-elastic-hotdata03/test-elastic-hotdata03.p12
   creating: test-elastic-warmdata01/
  inflating: test-elastic-warmdata01/test-elastic-warmdata01.p12
   creating: test-elastic-warmdata02/
  inflating: test-elastic-warmdata02/test-elastic-warmdata02.p12
   creating: test-elastic-warmdata03/
  inflating: test-elastic-warmdata03/test-elastic-warmdata03.p12
   creating: test-elastic-ml01/
  inflating: test-elastic-ml01/test-elastic-ml01.p12
root@elasticsearch-jumperserver:~/elasticsearch_certs#
root@elasticsearch-jumperserver:~/elasticsearch_certs# ls
elasticsearch-test-ca.p12  test-elastic-hotdata02  test-elastic-master01  test-elastic-master03  test-elastic-warmdata01  test-elastic-warmdata03  server-cert-bundle.zip
test-elastic-hotdata01      test-elastic-hotdata03  test-elastic-master02  test-elastic-ml01      test-elastic-warmdata02  instances.yaml
root@elasticsearch-jumperserver:~/elasticsearch_certs#
root@elasticsearch-jumperserver:~/elasticsearch_certs# ls -ltr *
-rw-r--r-- 1 root root   876 Oct 26 18:49 instances.yaml
-rw------- 1 root root  2672 Oct 26 18:55 elasticsearch-test-ca.p12
-rw------- 1 root root 39406 Oct 26 18:56 server-cert-bundle.zip

test-elastic-master01:
total 4
-rw-r--r-- 1 root root 3700 Oct 26 18:56 test-elastic-master01.p12

test-elastic-master02:
total 4
-rw-r--r-- 1 root root 3700 Oct 26 18:56 test-elastic-master02.p12

test-elastic-master03:
total 4
-rw-r--r-- 1 root root 3700 Oct 26 18:56 test-elastic-master03.p12

test-elastic-hotdata01:
total 4
-rw-r--r-- 1 root root 3702 Oct 26 18:56 test-elastic-hotdata01.p12

test-elastic-hotdata03:
total 4
-rw-r--r-- 1 root root 3702 Oct 26 18:56 test-elastic-hotdata03.p12

test-elastic-hotdata02:
total 4
-rw-r--r-- 1 root root 3702 Oct 26 18:56 test-elastic-hotdata02.p12

test-elastic-warmdata01:
total 4
-rw-r--r-- 1 root root 3704 Oct 26 18:56 test-elastic-warmdata01.p12

test-elastic-warmdata03:
total 4
-rw-r--r-- 1 root root 3704 Oct 26 18:56 test-elastic-warmdata03.p12

test-elastic-warmdata02:
total 4
-rw-r--r-- 1 root root 3704 Oct 26 18:56 test-elastic-warmdata02.p12

test-elastic-ml01:
total 4
-rw-r--r-- 1 root root 3676 Oct 26 18:56 test-elastic-ml01.p12
root@elasticsearch-jumperserver:~/elasticsearch_certs#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
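&lt;p&gt;Each node's .p12 now has to be copied to that node's Elasticsearch configuration directory. A sketch of the copy loop is below; it only echoes the scp commands so you can review them first, and the destination path /etc/elasticsearch/certs is an assumption, not something produced by elasticsearch-certutil, so adjust it to your layout.&lt;/p&gt;

```shell
# Echo (do not yet run) one scp command per node; drop the "echo" once verified.
# Destination directory is an assumed path; adjust to your environment.
nodes="test-elastic-master01 test-elastic-master02 test-elastic-master03 \
test-elastic-hotdata01 test-elastic-hotdata02 test-elastic-hotdata03 \
test-elastic-warmdata01 test-elastic-warmdata02 test-elastic-warmdata03 \
test-elastic-ml01"
for node in $nodes; do
  echo scp "$node/$node.p12" "root@$node:/etc/elasticsearch/certs/$node.p12"
done
```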



&lt;p&gt;&lt;strong&gt;3c) Generate http certificate.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Generate the HTTP certificates for HTTP-layer encryption. Ensure you enter the hostnames and IPs of all machines that will communicate with Elasticsearch over HTTP, e.g. jump servers, Kibana, the Elasticsearch nodes, and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-certutil http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@elasticsearch-jumperserver:~/elasticsearch_certs/http# /usr/share/elasticsearch/bin/elasticsearch-certutil http

## Elasticsearch HTTP Certificate Utility

The 'http' command guides you through the process of generating certificates
for use on the HTTP (Rest) interface for Elasticsearch.

This tool will ask you a number of questions in order to generate the right
set of files for your needs.

## Do you wish to generate a Certificate Signing Request (CSR)?

A CSR is used when you want your certificate to be created by an existing
Certificate Authority (CA) that you do not control (that is, you don't have
access to the keys for that CA).

If you are in a corporate environment with a central security team, then you
may have an existing Corporate CA that can generate your certificate for you.
Infrastructure within your organisation may already be configured to trust this
CA, so it may be easier for clients to connect to Elasticsearch if you use a
CSR and send that request to the team that controls your CA.

If you choose not to generate a CSR, this tool will generate a new certificate
for you. That certificate will be signed by a CA under your control. This is a
quick and easy way to secure your cluster with TLS, but you will need to
configure all your clients to trust that custom CA.

Generate a CSR? [y/N]n

## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate?

If you have an existing CA certificate and key, then you can use that CA to
sign your new http certificate. This allows you to use the same CA across
multiple Elasticsearch clusters which can make it easier to configure clients,
and may be easier for you to manage.

If you do not have an existing CA, one will be generated for you.

Use an existing CA? [y/N]y

## What is the path to your CA?

Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
CA Path: /root/elasticsearch_certs/elasticsearch-test-ca.p12
Reading a PKCS12 keystore requires a password.
It is possible for the keystore's password to be blank,
in which case you can simply press &amp;lt;ENTER&amp;gt; at the prompt
Password for elasticsearch-test-ca.p12:

## How long should your certificates be valid?

Every certificate has an expiry date. When the expiry date is reached clients
will stop trusting your certificate and TLS connections will fail.

Best practice suggests that you should either:
(a) set this to a short duration (90 - 120 days) and have automatic processes
to generate a new certificate before the old one expires, or
(b) set it to a longer duration (3 - 5 years) and then perform a manual update
a few months before it expires.

You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)

For how long should your certificate be valid? [5y] 10y

## Do you wish to generate one certificate per node?

If you have multiple nodes in your cluster, then you may choose to generate a
separate certificate for each of these nodes. Each certificate will have its
own private key, and will be issued for a specific hostname or IP address.

Alternatively, you may wish to generate a single certificate that is valid
across all the hostnames or addresses in your cluster.

If all of your nodes will be accessed through a single domain
(e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it
simpler to generate one certificate with a wildcard hostname (*.es.example.com)
and use that across all of your nodes.

However, if you do not have a common domain name, and you expect to add
additional nodes to your cluster in the future, then you should generate a
certificate per node so that you can more easily generate new certificates when
you provision new nodes.

Generate a certificate per node? [y/N]N

## Which hostnames will be used to connect to your nodes?

These hostnames will be added as "DNS" names in the "Subject Alternative Name"
(SAN) field in your certificate.

You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.

If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.

Enter all the hostnames that you need, one per line.
When you are done, press &amp;lt;ENTER&amp;gt; once more to move on to the next step.

test-elastic-master01
test-elastic-master02
test-elastic-master03
test-elastic-kibana01
test-elastic-clustmon01
elasticsearch-jumpserver

You entered the following hostnames.

 - test-elastic-master01
 - test-elastic-master02
 - test-elastic-master03
 - test-elastic-kibana01
 - test-elastic-clustmon01
 - elasticsearch-jumpserver

Is this correct [Y/n]Y

## Which IP addresses will be used to connect to your nodes?

If your clients will ever connect to your nodes by numeric IP address, then you
can list these as valid IP "Subject Alternative Name" (SAN) fields in your
certificate.

If you do not have fixed IP addresses, or not wish to support direct IP access
to your cluster then you can just press &amp;lt;ENTER&amp;gt; to skip this step.

Enter all the IP addresses that you need, one per line.
When you are done, press &amp;lt;ENTER&amp;gt; once more to move on to the next step.

10.10.4.6
10.10.4.7
10.10.4.8
10.10.4.5
10.10.4.16
10.10.4.17
10.10.4.18
10.10.4.1
10.10.4.31

You entered the following IP addresses.

 - 10.10.4.6
 - 10.10.4.7
 - 10.10.4.8
 - 10.10.4.5
 - 10.10.4.16
 - 10.10.4.17
 - 10.10.4.18
 - 10.10.4.1
 - 10.10.4.31

Is this correct [Y/n]Y

## Other certificate options

The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.

Key Name: test-elastic-master01
Subject DN: CN=test-elastic-master01
Key Size: 2048

Do you wish to change any of these options? [y/N]N

## What password do you want for your private key(s)?

Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12".
This type of keystore is always password protected, but it is possible to use a
blank password.

If you wish to use a blank password, simply press &amp;lt;enter&amp;gt; at the prompt below.
Provide a password for the "http.p12" file:  [&amp;lt;ENTER&amp;gt; for none]
Repeat password to confirm:

## Where should we save the generated files?

A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options for Elastic Stack products.

These files will be included in a single zip archive.

What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]

Zip file written to /usr/share/elasticsearch/elasticsearch-ssl-http.zip
root@elasticsearch-jumperserver:~/elasticsearch_certs/http#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@elasticsearch-jumperserver:~/elasticsearch_certs/http# ls
elasticsearch  elasticsearch-ssl-http.zip  kibana
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# mv elasticsearch-ssl-http.zip elasticsearch-ssl-http.zip_old
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# mv elasticsearch elasticsearch_old
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# mv kibana kibana_old
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# pwd
/root/elasticsearch_certs/http
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# cp /usr/share/elasticsearch/elasticsearch-ssl-http.zip .
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# unzip elasticsearch-ssl-http.zip
Archive:  elasticsearch-ssl-http.zip
   creating: elasticsearch/
  inflating: elasticsearch/README.txt
  inflating: elasticsearch/http.p12
  inflating: elasticsearch/sample-elasticsearch.yml
   creating: kibana/
  inflating: kibana/README.txt
  inflating: kibana/elasticsearch-ca.pem
  inflating: kibana/sample-kibana.yml
root@elasticsearch-jumperserver:~/elasticsearch_certs/http#
root@elasticsearch-jumperserver:~/elasticsearch_certs/http# ls
elasticsearch  elasticsearch_old  elasticsearch-ssl-http.zip  elasticsearch-ssl-http.zip_old  kibana  kibana_old
root@elasticsearch-jumperserver:~/elasticsearch_certs/http#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;4) Copy the generated certificates&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Copy the node certificate and the http certificate to the respective nodes, into the path /etc/elasticsearch/certs/.&lt;/p&gt;

&lt;p&gt;Note: the node certificate is unique to each Elasticsearch node, while the http certificate is the same on all nodes.&lt;/p&gt;
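&lt;p&gt;A minimal sketch of the copy step, assuming the three master hostnames from this walkthrough and that the certificate files sit in the current directory (repeat for your data, warm, and ML nodes). The echo makes it a dry run; remove it to actually copy:&lt;/p&gt;

```shell
# Dry-run sketch: hostnames are assumptions from this walkthrough.
# Each node gets its own transport certificate; http.p12 is the shared
# http certificate, identical on every node.
nodes="test-elastic-master01 test-elastic-master02 test-elastic-master03"
for node in $nodes; do
  echo scp "$node.p12" http.p12 "root@$node:/etc/elasticsearch/certs/"
done
```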

&lt;h2&gt;
  
  
&lt;strong&gt;5) Setting keystore and truststore for transport and http&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The transport truststore password is the password of the CA certificate.&lt;br&gt;
The transport keystore password is the password of the node certificate.&lt;/p&gt;

&lt;p&gt;The http keystore password is the password of the http certificate.&lt;/p&gt;

&lt;p&gt;Set the transport truststore/keystore and http keystore passwords with the commands below; run them on each and every Elasticsearch node,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
/usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;6) Configuring elasticsearch parameters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Set the configuration in /etc/elasticsearch/elasticsearch.yml: comment out all the existing lines and append the block below, after changing the IPs and hostnames to your own node IPs and hostnames.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6a) Master nodes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
cluster.name: test-elasticsearch
node.name: test-elastic-master01
network.host: 10.10.4.6
discovery.seed_hosts: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
cluster.initial_master_nodes: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
node.roles: [ master ]
xpack.watcher.enabled: true


# transport SSL/TLS
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/test-elastic-master01.p12
xpack.security.transport.ssl.truststore.path: certs/test-elastic-master01.p12

# http SSL/TLS
http.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6b) Hot nodes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
cluster.name: test-elasticsearch
node.name: test-elastic-hotdata01
network.host: 10.10.4.2
discovery.seed_hosts: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
cluster.initial_master_nodes: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
node.roles: [ data,ingest ]
node.attr.box_type: hot
xpack.watcher.enabled: true


# transport SSL/TLS
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/test-elastic-hotdata01.p12
xpack.security.transport.ssl.truststore.path: certs/test-elastic-hotdata01.p12

# http SSL/TLS
http.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6c) Warm nodes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
cluster.name: test-elasticsearch
node.name: test-elastic-warmdata01
network.host: 10.10.4.11
discovery.seed_hosts: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
cluster.initial_master_nodes: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
node.roles: [ data,ingest ]
node.attr.box_type: warm
xpack.watcher.enabled: true


# transport SSL/TLS
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/test-elastic-warmdata01.p12
xpack.security.transport.ssl.truststore.path: certs/test-elastic-warmdata01.p12
# http SSL/TLS
http.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6d) ML nodes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
cluster.name: test-elasticsearch
node.name: test-elastic-ml01
network.host: 10.10.4.10
discovery.seed_hosts: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
cluster.initial_master_nodes: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]
node.roles: [ ml ]
xpack.watcher.enabled: true


# transport SSL/TLS
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/test-elastic-ml01.p12
xpack.security.transport.ssl.truststore.path: certs/test-elastic-ml01.p12
# http SSL/TLS
http.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;7) Starting elasticsearch&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Start the nodes one by one using systemctl start elasticsearch; you can monitor the logs in /var/log/elasticsearch/test-elasticsearch.log.&lt;/p&gt;

&lt;p&gt;Once the cluster has formed, make sure to remove the following parameter from /etc/elasticsearch/elasticsearch.yml:&lt;/p&gt;

&lt;p&gt;cluster.initial_master_nodes: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]&lt;/p&gt;
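&lt;p&gt;A hedged sketch of that cleanup: on a real node the file is /etc/elasticsearch/elasticsearch.yml, but here the edit is demonstrated on a temporary copy. Commenting the setting out (rather than deleting it) keeps the original line visible for reference:&lt;/p&gt;

```shell
# Demonstration on a temporary copy; point cfg at
# /etc/elasticsearch/elasticsearch.yml on a real node.
cfg=$(mktemp)
printf 'cluster.name: test-elasticsearch\ncluster.initial_master_nodes: ["test-elastic-master01", "test-elastic-master02", "test-elastic-master03"]\n' | tee "$cfg"
# Comment out the bootstrap-only setting.
sed -i 's/^\(cluster\.initial_master_nodes\)/#\1/' "$cfg"
cat "$cfg"
```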

&lt;h2&gt;
  
  
&lt;strong&gt;8) Resetting the elastic user password&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can also do this as soon as the first node is started,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@test-elastic-master01:/var/log/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
This tool will reset the password of the [elastic] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y


Password for the [elastic] user successfully reset.
New value: xxxxxxxxxxxxxxxxxxxxxxxx
root@test-elastic-master01:/var/log/elasticsearch#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
&lt;strong&gt;9) Check the status of the cluster and list the nodes&lt;/strong&gt;
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@test-elastic-master01:/var/log/elasticsearch# curl -X GET "https://10.10.4.2:9200/_cluster/health?pretty"  -u elastic -k
Enter host password for user 'elastic':
{
  "cluster_name" : "test-elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 2,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


root@test-elastic-master01:/var/log/elasticsearch# curl -X GET "https://10.10.4.2:9200/_cat/nodes?pretty"  -u elastic -k
Enter host password for user 'elastic':
10.10.4.2  2 63 0 0.04 0.05 0.02 di - test-elastic-hotdata01
10.10.4.3  2 63 0 0.00 0.06 0.06 di - test-elastic-hotdata02
10.10.4.8  7 97 1 0.00 0.10 0.09 m  - test-elastic-master03
10.10.4.7 11 96 2 0.00 0.03 0.01 m  * test-elastic-master02
10.10.4.6 10 97 2 0.00 0.04 0.02 m  - test-elastic-master01
10.10.4.4  2 62 0 0.00 0.06 0.05 di - test-elastic-hotdata03
root@test-elastic-master01:/var/log/elasticsearch#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;10) Install and Configure Kibana&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;10a) Installing kibana&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@test-elastic-kibana01:~# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

root@test-elastic-kibana01:~# apt-get install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  apt-transport-https
1 upgraded, 0 newly installed, 0 to remove and 118 not upgraded.
Need to get 1,704 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu focal-updates/universe amd64 apt-transport-https all 2.0.9 [1,704 B]
Fetched 1,704 B in 1s (3,407 B/s)
(Reading database ... 111616 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_2.0.9_all.deb ...
Unpacking apt-transport-https (2.0.9) over (2.0.8) ...
Setting up apt-transport-https (2.0.9) ...
root@test-elastic-kibana01:~#

root@test-elastic-kibana01:~# echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main
root@test-elastic-kibana01:~#

root@test-elastic-kibana01:~# sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install kibana
0% [Working]
Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Get:2 https://artifacts.elastic.co/packages/8.x/apt stable InRelease [10.4 kB]
Hit:3 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Get:5 https://artifacts.elastic.co/packages/8.x/apt stable/main amd64 Packages [34.0 kB]
Hit:6 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Get:7 https://artifacts.elastic.co/packages/8.x/apt stable/main i386 Packages [3,556 B]
Fetched 48.0 kB in 1s (33.1 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  kibana
0 upgraded, 1 newly installed, 0 to remove and 118 not upgraded.
Need to get 285 MB of archives.
After this operation, 680 MB of additional disk space will be used.
Get:1 https://artifacts.elastic.co/packages/8.x/apt stable/main amd64 kibana amd64 8.4.3 [285 MB]
Fetched 285 MB in 3s (83.2 MB/s)
Selecting previously unselected package kibana.
(Reading database ... 111616 files and directories currently installed.)
Preparing to unpack .../kibana_8.4.3_amd64.deb ...
Unpacking kibana (8.4.3) ...
Setting up kibana (8.4.3) ...
Creating kibana group... OK
Creating kibana user... OK
Created Kibana keystore in /etc/kibana/kibana.keystore
root@test-elastic-kibana01:~#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;10b) Copy the ca certificate to kibana server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Copy the CA certificate generated in step 3c (kibana/elasticsearch-ca.pem) to /etc/kibana/elasticsearch-ca.pem.&lt;/p&gt;
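&lt;p&gt;A hedged one-liner for the copy, assuming the Kibana hostname from this walkthrough and that you run it from the directory where the step 3c zip was extracted; remove the echo to actually copy:&lt;/p&gt;

```shell
# Dry-run sketch: hostname and paths are assumptions from this walkthrough.
echo scp kibana/elasticsearch-ca.pem root@test-elastic-kibana01:/etc/kibana/elasticsearch-ca.pem
```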

&lt;p&gt;&lt;strong&gt;10c) Reset kibana_system password&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To do this, log in to one of the Elasticsearch nodes that is included in the http certificate and run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@test-elastic-master01:/var/log/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
This tool will reset the password of the [kibana_system] user to an autogenerated value.
The password will be printed in the console.
Please confirm that you would like to continue [y/N]y

Password for the [kibana_system] user successfully reset.
New value: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
root@test-elastic-master01:/var/log/elasticsearch#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;10d) Configuring kibana&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Set the parameters below in /etc/kibana/kibana.yml; here we point Kibana at the hot data nodes,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;elasticsearch.hosts: ["https://10.10.4.2:9200","https://10.10.4.3:9200","https://10.10.4.4:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "xxxxxxxxxxxxxxxxxxxxxxxx"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/elasticsearch-ca.pem" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;10e) Start kibana and enable the service&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl start kibana
systemctl enable kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access Kibana as the elastic user at the URL &lt;a href="http://kibana-hostname:5601" rel="noopener noreferrer"&gt;http://kibana-hostname:5601&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjleelhooeon9itvhgpa3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjleelhooeon9itvhgpa3.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using cloud-init to customize a VM while provisioning with terraform on vSphere</title>
      <dc:creator>balajivedagiri</dc:creator>
      <pubDate>Thu, 02 Mar 2023 03:45:59 +0000</pubDate>
      <link>https://dev.to/balajivedagiri/guest-vm-customization-using-cloud-init-with-terraform-2h2i</link>
      <guid>https://dev.to/balajivedagiri/guest-vm-customization-using-cloud-init-with-terraform-2h2i</guid>
      <description>&lt;p&gt;We are going to perform below steps on a newly installed ubuntu VM and convert it to a template&lt;/p&gt;

&lt;p&gt;1) Check if VMware tools is installed,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# dpkg -l | grep open-vm-tools
ii  open-vm-tools                          2:11.3.0-2ubuntu0~ubuntu20.04.4   amd64        Open VMware Tools for virtual machines hosted on VMware (CLI)
root@linux:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2) Update VMware tools,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# apt install open-vm-tools
Reading package lists... Done
Building dependency tree
Reading state information... Done
open-vm-tools is already the newest version (2:11.3.0-2ubuntu0~ubuntu20.04.4).
The following packages were automatically installed and are no longer required:
  linux-headers-5.15.0-46-generic linux-headers-5.4.0-137 linux-headers-5.4.0-137-generic linux-hwe-5.15-headers-5.15.0-46 linux-image-5.15.0-46-generic linux-image-5.4.0-137-generic
  linux-modules-5.15.0-46-generic linux-modules-5.4.0-137-generic linux-modules-extra-5.15.0-46-generic linux-modules-extra-5.4.0-137-generic
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@linux:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3) Check the version installed,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# vmtoolsd -v
VMware Tools daemon, version 11.3.0.29534 (build-18090558)
root@linux:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4) Ensure the open-vm-tools service is enabled and started,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# systemctl enable open-vm-tools.service
Synchronizing state of open-vm-tools.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable open-vm-tools
root@linux:~#
root@linux:~# systemctl start open-vm-tools.service
root@linux:~#
root@linux:~# systemctl is-enabled open-vm-tools.service
enabled
root@linux:~#
root@linux:~# systemctl status open-vm-tools.service
● open-vm-tools.service - Service for virtual machines hosted on VMware
     Loaded: loaded (/lib/systemd/system/open-vm-tools.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-03-02 06:08:57 +04; 9min ago
       Docs: http://open-vm-tools.sourceforge.net/about.php
   Main PID: 710 (vmtoolsd)
      Tasks: 3 (limit: 9350)
     Memory: 4.3M
     CGroup: /system.slice/open-vm-tools.service
             └─710 /usr/bin/vmtoolsd

Mar 02 06:08:57 linux systemd[1]: Started Service for virtual machines hosted on VMware.
root@linux:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;5) Install cloud-init package,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# dpkg -l | grep -w cloud-init
root@linux:~#
root@linux:~# apt install cloud-init
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  linux-headers-5.15.0-46-generic linux-headers-5.4.0-137 linux-headers-5.4.0-137-generic linux-hwe-5.15-headers-5.15.0-46 linux-image-5.15.0-46-generic linux-image-5.4.0-137-generic
  linux-modules-5.15.0-46-generic linux-modules-5.4.0-137-generic linux-modules-extra-5.15.0-46-generic linux-modules-extra-5.4.0-137-generic
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
  eatmydata libeatmydata1 python3-distutils python3-importlib-metadata python3-json-pointer python3-jsonpatch python3-jsonschema python3-lib2to3 python3-more-itertools python3-pyrsistent
  python3-setuptools python3-zipp
Suggested packages:
  python-jsonschema-doc python-setuptools-doc
The following NEW packages will be installed:
  cloud-init eatmydata libeatmydata1 python3-distutils python3-importlib-metadata python3-json-pointer python3-jsonpatch python3-jsonschema python3-lib2to3 python3-more-itertools
  python3-pyrsistent python3-setuptools python3-zipp
0 upgraded, 13 newly installed, 0 to remove and 0 not upgraded.
Need to get 1,260 kB of archives.
After this operation, 7,347 kB of additional disk space will be used.
Do you want to continue? [Y/n] y

root@linux:~# dpkg -l | grep -w cloud-init
ii  cloud-init                             22.4.2-0ubuntu0~20.04.2           all          initialization and customization tool for cloud instances
root@linux:~#



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;6) Enable the cloud-init service,&lt;br&gt;
it does not need to be started now, but ensure it is enabled,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# systemctl status cloud-init.service
● cloud-init.service - Initial cloud-init job (metadata service crawler)
     Loaded: loaded (/lib/systemd/system/cloud-init.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
root@linux:~#
root@linux:~# systemctl enable cloud-init.service
root@linux:~#
root@linux:~# sudo systemctl is-enabled cloud-init.service
enabled
root@linux:~#



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;7) Ensure no datasources are enabled in /etc/cloud/cloud.cfg, as shown below,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# cat /etc/cloud/cloud.cfg | grep datasource
# If you use datasource_list array, keep array items in a single line.
# Example datasource config
# datasource:
root@linux:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;8) Check the currently activated datasources,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:/etc/cloud/cloud.cfg.d# cat /etc/cloud/cloud.cfg.d/90_dpkg.cfg
# to update this file, run dpkg-reconfigure cloud-init
datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, RbxCloud, UpCloud, VMware, Vultr, LXD, NWCS, None ]
root@linux:/etc/cloud/cloud.cfg.d#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;9) Run the command "dpkg-reconfigure cloud-init" and select only VMware (this is not mandatory, but deselecting the other datasources reduces boot time, since cloud-init then doesn't need to look for userdata/metadata from every datasource in the list)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:/etc/cloud/cloud.cfg.d# dpkg-reconfigure cloud-init
root@linux:/etc/cloud/cloud.cfg.d#
root@linux:/etc/cloud/cloud.cfg.d# cat /etc/cloud/cloud.cfg.d/90_dpkg.cfg
# to update this file, run dpkg-reconfigure cloud-init
datasource_list: [ VMware ]
root@linux:/etc/cloud/cloud.cfg.d#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwxr27dvrib9dq1oo0h2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwxr27dvrib9dq1oo0h2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;10) Ensure /etc/cloud/cloud.cfg.d contains only the files below,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:/etc/cloud/cloud.cfg.d# ls
05_logging.cfg  90_dpkg.cfg  README
root@linux:/etc/cloud/cloud.cfg.d#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;11) Disable network configuration, so cloud-init doesn't configure networking,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:/etc/cloud/cloud.cfg.d# vi /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
root@linux:/etc/cloud/cloud.cfg.d#
root@linux:/etc/cloud/cloud.cfg.d# cat /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
network: {config: disabled}
root@linux:/etc/cloud/cloud.cfg.d#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;12) As a final step, run the clean command to ensure that cloud-init re-runs all the modules in the userdata and metadata on the next boot.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:/# cloud-init clean --logs
root@linux:/#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;13) Below we verify that nginx/apache2 are not installed on the VM before converting it to a template,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@linux:~# dpkg -l | grep nginx
root@linux:~#
root@linux:~# dpkg -l | grep apache2
root@linux:~#



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;14) Convert the VM to a template from vCenter&lt;/p&gt;

&lt;p&gt;15) Provision a VM using the Terraform code, which can be obtained from the repo below,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/balajivedagiri/terraform_cloud-init.git" rel="noopener noreferrer"&gt;https://github.com/balajivedagiri/terraform_cloud-init.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The userdata we defined:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config
runcmd:
  - [ sh, -c, echo "=========Hello There from Terraform and Cloud-init automation=========" &amp;gt; /root/testing-01]
packages:
- nginx
- apache2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;16) Once the VM is provisioned, log in to it and verify that cloud-init ran successfully.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@cloud-init-testing02:~# cloud-init status
status: running
root@cloud-init-testing02:~#

# Above it is still running, wait for sometime and check
# Below it completed successfully

root@cloud-init-testing02:~# cloud-init status
status: done
root@cloud-init-testing02:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;17) Check if packages are installed successfully,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@cloud-init-testing02:~# dpkg -l | grep nginx
ii  libnginx-mod-http-image-filter         1.18.0-0ubuntu1.4                 amd64        HTTP image filter module for Nginx
ii  libnginx-mod-http-xslt-filter          1.18.0-0ubuntu1.4                 amd64        XSLT Transformation module for Nginx
ii  libnginx-mod-mail                      1.18.0-0ubuntu1.4                 amd64        Mail module for Nginx
ii  libnginx-mod-stream                    1.18.0-0ubuntu1.4                 amd64        Stream module for Nginx
ii  nginx                                  1.18.0-0ubuntu1.4                 all          small, powerful, scalable web/proxy server
ii  nginx-common                           1.18.0-0ubuntu1.4                 all          small, powerful, scalable web/proxy server - common files
ii  nginx-core                             1.18.0-0ubuntu1.4                 amd64        nginx web/proxy server (standard version)
root@cloud-init-testing02:~#
root@cloud-init-testing02:~# dpkg -l | grep apache2
ii  apache2                                2.4.41-4ubuntu3.13                amd64        Apache HTTP Server
ii  apache2-bin                            2.4.41-4ubuntu3.13                amd64        Apache HTTP Server (modules and other binary files)
ii  apache2-data                           2.4.41-4ubuntu3.13                all          Apache HTTP Server (common files)
ii  apache2-utils                          2.4.41-4ubuntu3.13                amd64        Apache HTTP Server (utility programs for web servers)
root@cloud-init-testing02:~#
root@cloud-init-testing02:~# cat /root/testing-01
=========Hello There from Terraform and Cloud-init automation=========
root@cloud-init-testing02:~#
root@cloud-init-testing02:~# ls -ltr /root/testing-01
-rw-r--r-- 1 root root 71 Mar  2 07:15 /root/testing-01
root@cloud-init-testing02:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;18) To check what userdata we passed,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@cloud-init-testing02:~# cloud-init query userdata
#cloud-config
runcmd:
  - [ sh, -c, echo "=========Hello There from Terraform and Cloud-init automation=========" &amp;gt; /root/testing-01]
packages:
- nginx
- apache2
root@cloud-init-testing02:~#
root@cloud-init-testing02:~# vmware-rpctool "info-get guestinfo.userdata" | base64 -d
#cloud-config
runcmd:
  - [ sh, -c, echo "=========Hello There from Terraform and Cloud-init automation=========" &amp;gt; /root/testing-01]
packages:
- nginx
- apache2

root@cloud-init-testing02:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
    </item>
    <item>
      <title>Installing Rancher on a highly available RKE2 cluster</title>
      <dc:creator>balajivedagiri</dc:creator>
      <pubDate>Sun, 11 Sep 2022 19:13:43 +0000</pubDate>
      <link>https://dev.to/balajivedagiri/installing-rke2-and-rancher-4oia</link>
      <guid>https://dev.to/balajivedagiri/installing-rke2-and-rancher-4oia</guid>
      <description>&lt;p&gt;Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure.&lt;/p&gt;

&lt;p&gt;Below is a sample rancher architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqrylht0cr1upmlqhnb1.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqrylht0cr1upmlqhnb1.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are going to install a three-node RKE2 Kubernetes cluster and then install Rancher on it using Helm.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Create a private Loadbalancer&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Create an L4 load balancer with the first RKE2 node in its backend pool (once the other two nodes have joined the cluster, add them to the backend pool as well). The load balancer and backend traffic should be listening on the following ports:&lt;/p&gt;

&lt;p&gt;1) 9345 for registering new nodes&lt;br&gt;
2) 6443 for the Kubernetes API server&lt;/p&gt;
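&lt;p&gt;As a sketch, an NGINX "stream" (L4) configuration for these two ports could look like the following. The backend IP is the example first-node address used later in this guide, and any TCP load balancer will work equally well.&lt;/p&gt;

```nginx
# Hypothetical NGINX L4 (stream) configuration for the RKE2 fixed
# registration address. Start with only the first node in each upstream;
# add rancher02/rancher03 after they have joined the cluster.
stream {
    upstream rke2_supervisor {
        server 172.17.11.11:9345;   # rancher01
    }
    upstream rke2_apiserver {
        server 172.17.11.11:6443;   # rancher01
    }
    server {
        listen 9345;                # node registration
        proxy_pass rke2_supervisor;
    }
    server {
        listen 6443;                # Kubernetes API server
        proxy_pass rke2_apiserver;
    }
}
```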

&lt;h2&gt;
  
  
  &lt;strong&gt;2. Maintain the host entries on the RKE2 nodes as shown below&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Maintain a hostname/DNS record for the load balancer VIP created in the first step. Here I am pointing the DNS name "rke2.mydomain.ae" to my load balancer VIP 172.17.12.5.&lt;/p&gt;

&lt;p&gt;172.17.11.11    rancher01&lt;br&gt;
172.17.11.12    rancher02&lt;br&gt;
172.17.11.13    rancher03&lt;br&gt;
172.17.12.5 rke2.mydomain.ae&lt;/p&gt;
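&lt;p&gt;If you prefer to script this, the entries above can be generated and then appended to /etc/hosts on each node (a sketch; the snippet file name is arbitrary and the IPs are the examples used in this guide):&lt;/p&gt;

```shell
# Sketch: generate the host entries shown above (example IPs). Review the
# snippet, then append it to /etc/hosts on each node yourself.
printf '%s\n' \
  '172.17.11.11    rancher01' \
  '172.17.11.12    rancher02' \
  '172.17.11.13    rancher03' \
  '172.17.12.5     rke2.mydomain.ae' > hosts.snippet
cat hosts.snippet
```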

&lt;h2&gt;
  
  
  &lt;strong&gt;3. Launch the first server node&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;3a. adding hostnames or IP for tls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To avoid certificate errors with the fixed registration address, you should launch the server with the tls-san parameter set. This option adds an additional hostname or IP as a Subject Alternative Name in the server's TLS cert, and it can be specified as a list if you would like to access via both the IP and the hostname.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# mkdir -p /etc/rancher/rke2/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Below I maintained all of my nodes' IPs and hostnames, including the load balancer DNS name that points to the VIP created in the first step.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# cat /etc/rancher/rke2/config.yaml
tls-san:
  - rke2.mydomain.ae
  - 172.17.11.11
  - rancher01
  - 172.17.11.12
  - rancher02
  - 172.17.11.13
  - rancher03
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
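&lt;p&gt;One way to create this file non-interactively (a sketch of the same config shown above; writing to ./config.yaml here, while on the node the path is /etc/rancher/rke2/config.yaml):&lt;/p&gt;

```shell
# Sketch: write the tls-san config shown above. Writing to ./config.yaml
# here; on the node the path is /etc/rancher/rke2/config.yaml.
printf '%s\n' \
  'tls-san:' \
  '  - rke2.mydomain.ae' \
  '  - 172.17.11.11' \
  '  - rancher01' \
  '  - 172.17.11.12' \
  '  - rancher02' \
  '  - 172.17.11.13' \
  '  - rancher03' > config.yaml
```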

&lt;p&gt;&lt;strong&gt;3b. Installing rke2&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here I am pulling the latest stable RKE2 release.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# curl -sfL https://get.rke2.io | sh -
[INFO]  finding release for channel stable
[INFO]  using v1.23.9+rke2r1 as release
[INFO]  downloading checksums at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/sha256sum-amd64.txt
[INFO]  downloading tarball at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/rke2.linux-amd64.tar.gz
[INFO]  verifying tarball
[INFO]  unpacking tarball file to /usr/local
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# systemctl start rke2-server.service



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: When the rke2-server service is started for the first time, it will take a few minutes, as it needs to pull all the images required to spin up the cluster, so don't panic.&lt;br&gt;
If you want, you can monitor the setup process with "journalctl -u rke2-server -f".&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
     Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-08-25 18:22:05 +04; 1min 15s ago
       Docs: https://github.com/rancher/rke2#readme
    Process: 235133 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
    Process: 235135 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 235136 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 235138 (rke2)
      Tasks: 181
     Memory: 3.4G
     CGroup: /system.slice/rke2-server.service
             ├─235138 /usr/local/bin/rke2 server
             ├─235177 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/a&amp;gt;
             ├─235290 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=f&amp;gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This installation sets up the rke2-server service. The service is configured to automatically restart after node reboots or if the process crashes or is killed.&lt;/p&gt;

&lt;p&gt;Additional utilities will be installed at /var/lib/rancher/rke2/bin/. They include: kubectl, crictl, and ctr. Note that these are not on your path by default.&lt;/p&gt;

&lt;p&gt;Two cleanup scripts will be installed to the path at /usr/local/bin/rke2. They are: rke2-killall.sh and rke2-uninstall.sh.&lt;/p&gt;

&lt;p&gt;A kubeconfig file will be written to /etc/rancher/rke2/rke2.yaml.&lt;/p&gt;

&lt;p&gt;A token that can be used to register other server or agent nodes will be created at /var/lib/rancher/rke2/server/node-token.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3c. Accessing the cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RKE2 automatically downloads the kubectl binary it needs; it will be available in the location below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# ll /var/lib/rancher/rke2/bin/
total 315140
drwxr-xr-x 2 root root       176 Aug 25 18:20 ./
drwxr-xr-x 4 root root        31 Aug 25 18:20 ../
-rwxr-xr-x 1 root root  54096264 Aug 25 18:20 containerd*
-rwxr-xr-x 1 root root   7369488 Aug 25 18:20 containerd-shim*
-rwxr-xr-x 1 root root  11527464 Aug 25 18:20 containerd-shim-runc-v1*
-rwxr-xr-x 1 root root  11539944 Aug 25 18:20 containerd-shim-runc-v2*
-rwxr-xr-x 1 root root  35018008 Aug 25 18:20 crictl*
-rwxr-xr-x 1 root root  20463560 Aug 25 18:20 ctr*
-rwxr-xr-x 1 root root  49328448 Aug 25 18:20 kubectl*
-rwxr-xr-x 1 root root 122372296 Aug 25 18:20 kubelet*
-rwxr-xr-x 1 root root  10961304 Aug 25 18:20 runc*


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# cp /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/

root@rancher01:~# chmod +x /usr/local/bin/kubectl


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# cat /etc/rancher/rke2/rke2.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    client-key-data: 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Set the KUBECONFIG environment variable as shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
root@rancher01:~#
root@rancher01:~# kubectl get nodes -o wide
NAME               STATUS   ROLES                       AGE     VERSION          INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
rancher01   Ready    control-plane,etcd,master   3m31s   v1.23.9+rke2r1   172.17.11.11   &amp;lt;none&amp;gt;        Ubuntu 20.04.4 LTS   5.4.0-100-generic   containerd://1.5.13-k3s1
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
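&lt;p&gt;Optionally, you can persist the KUBECONFIG variable and put the RKE2 binaries on your PATH for future shells. A sketch (it writes a snippet file for review; append it to ~/.bashrc on the server node yourself):&lt;/p&gt;

```shell
# Sketch: persist the environment for future shells. Writes a snippet file
# for review; append it to ~/.bashrc on the server node yourself.
printf '%s\n' \
  'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' \
  'export PATH=$PATH:/var/lib/rancher/rke2/bin' > rc.snippet
cat rc.snippet
```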
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# kubectl get all -A
NAMESPACE     NAME                                                        READY   STATUS      RESTARTS   AGE
kube-system   pod/cloud-controller-manager-rancher01               1/1     Running     0          5m24s
kube-system   pod/etcd-rancher01                                   1/1     Running     0          5m21s
kube-system   pod/helm-install-rke2-canal-kdm5c                           0/1     Completed   0          5m12s
kube-system   pod/helm-install-rke2-coredns-jnzgh                         0/1     Completed   0          5m12s
kube-system   pod/helm-install-rke2-ingress-nginx-bsxkp                   0/1     Completed   0          5m12s
kube-system   pod/helm-install-rke2-metrics-server-8vn8f                  0/1     Completed   0          5m12s
kube-system   pod/kube-apiserver-rancher01                         1/1     Running     0          4m59s
kube-system   pod/kube-controller-manager-rancher01                1/1     Running     0          5m25s
kube-system   pod/kube-proxy-rancher01                             1/1     Running     0          5m23s
kube-system   pod/kube-scheduler-rancher01                         1/1     Running     0          4m48s
kube-system   pod/rke2-canal-vkr74                                        2/2     Running     0          4m57s
kube-system   pod/rke2-coredns-rke2-coredns-545d64676-s7hhs               1/1     Running     0          4m57s
kube-system   pod/rke2-coredns-rke2-coredns-autoscaler-5dd676f5c7-fvdhw   1/1     Running     0          4m57s
kube-system   pod/rke2-ingress-nginx-controller-67zjf                     1/1     Running     0          4m11s
kube-system   pod/rke2-metrics-server-6564db4569-vllzm                    1/1     Running     0          4m29s

NAMESPACE     NAME                                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes                                ClusterIP   10.43.0.1       &amp;lt;none&amp;gt;        443/TCP         5m28s
kube-system   service/rke2-coredns-rke2-coredns                 ClusterIP   10.43.0.10      &amp;lt;none&amp;gt;        53/UDP,53/TCP   4m58s
kube-system   service/rke2-ingress-nginx-controller-admission   ClusterIP   10.43.188.253   &amp;lt;none&amp;gt;        443/TCP         4m11s
kube-system   service/rke2-metrics-server                       ClusterIP   10.43.43.187    &amp;lt;none&amp;gt;        443/TCP         4m29s

NAMESPACE     NAME                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/rke2-canal                      1         1         1       1            1           kubernetes.io/os=linux   4m57s
kube-system   daemonset.apps/rke2-ingress-nginx-controller   1         1         1       1            1           kubernetes.io/os=linux   4m11s

NAMESPACE     NAME                                                   READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/rke2-coredns-rke2-coredns              1/1     1            1           4m58s
kube-system   deployment.apps/rke2-coredns-rke2-coredns-autoscaler   1/1     1            1           4m58s
kube-system   deployment.apps/rke2-metrics-server                    1/1     1            1           4m29s

NAMESPACE     NAME                                                              DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/rke2-coredns-rke2-coredns-545d64676               1         1         1       4m58s
kube-system   replicaset.apps/rke2-coredns-rke2-coredns-autoscaler-5dd676f5c7   1         1         1       4m58s
kube-system   replicaset.apps/rke2-metrics-server-6564db4569                    1         1         1       4m29s

NAMESPACE     NAME                                         COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-rke2-canal            1/1           17s        5m23s
kube-system   job.batch/helm-install-rke2-coredns          1/1           17s        5m23s
kube-system   job.batch/helm-install-rke2-ingress-nginx    1/1           68s        5m23s
kube-system   job.batch/helm-install-rke2-metrics-server   1/1           46s        5m23s
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h2&gt;
  
  
  4. Adding the second server node to the cluster
&lt;/h2&gt;


&lt;p&gt;&lt;strong&gt;4a. Setting up the second server node&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, copy the node token generated on the first server node, as shown below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:/var/lib/rancher# cat /var/lib/rancher/rke2/server/node-token
K10d11c154bab23851058711225726a1189ba7f00642b87c82e0b7407cdfc25c82d::server:2ffddc9f0d8901c2b6e30bde043850e1
root@rancher01:/var/lib/rancher#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# mkdir -p /etc/rancher/rke2/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Maintain the config.yaml as below,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# cat /etc/rancher/rke2/config.yaml
token: K10d11c154bab23851058711225726a1189ba7fasfsafalsfasf8a8f9saf::server:2ffddc9f0dasdfasf9a898980
server: https://rke2.mydomain.ae:9345
tls-san:
  - rke2.mydomain.ae
  - 172.17.11.11
  - rancher01
  - 172.17.11.12
  - rancher02
  - 172.17.11.13
  - rancher03


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Ensure the host entries on the second server node are set up the same as on the first server node.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cat /etc/hosts
172.17.11.11    rancher01
172.17.11.12    rancher02
172.17.11.13    rancher03
172.16.132.35   rke2.mydomain.ae


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4b. Installing rke2 on the second server node&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher02:~# curl -sfL https://get.rke2.io | sh -
[INFO]  finding release for channel stable
[INFO]  using v1.23.9+rke2r1 as release
[INFO]  downloading checksums at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/sha256sum-amd64.txt
[INFO]  downloading tarball at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/rke2.linux-amd64.tar.gz
[INFO]  verifying tarball
[INFO]  unpacking tarball file to /usr/local
root@rancher02:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here we ran into an issue: the second server node didn't start. Below is the error.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

journalctl -u rke2-server -f

Aug 25 18:56:40 rancher02 systemd[1]: Failed to start Rancher Kubernetes Engine v2 (server).
Aug 25 18:56:45 rancher02 systemd[1]: rke2-server.service: Scheduled restart job, restart counter is at 15.
Aug 25 18:56:45 rancher02 systemd[1]: Stopped Rancher Kubernetes Engine v2 (server).
Aug 25 18:56:45 rancher02 systemd[1]: Starting Rancher Kubernetes Engine v2 (server)...
Aug 25 18:56:45 rancher02 sh[195790]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 25 18:56:45 rancher02 sh[195798]: /bin/sh: 1: /usr/bin/systemctl: not found
Aug 25 18:56:45 rancher02 rke2[195804]: time="2022-08-25T18:56:45+04:00" level=warning msg="not running in CIS mode"
Aug 25 18:56:45 rancher02 rke2[195804]: time="2022-08-25T18:56:45+04:00" level=info msg="Starting rke2 v1.23.9+rke2r1 (2d206eba8d0180351408dbed544c852b6b4fdd42)"
Aug 25 18:57:05 rancher02 rke2[195804]: time="2022-08-25T18:57:05+04:00" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://rke2.mydomain.ae:9345/cacerts\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 25 18:57:05 rancher02 systemd[1]: rke2-server.service: Main process exited, code=exited, status=1/FAILURE
Aug 25 18:57:05 rancher02 systemd[1]: rke2-server.service: Failed with result 'exit-code'.
Aug 25 18:57:05 rancher02 systemd[1]: Failed to start Rancher Kubernetes Engine v2 (server).


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As per the log above, the node failed to fetch the CA certs from &lt;a href="https://rke2.mydomain.ae:9345/cacerts" rel="noopener noreferrer"&gt;https://rke2.mydomain.ae:9345/cacerts&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When we requested the same cacerts endpoint using the first node's hostname, it worked and returned the certificate.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# curl https://rancher01:9345/cacerts -k
-----BEGIN CERTIFICATE-----
cnZlci1jYUAxNjYxNDM4NDI1MB4XDTIyMDgyNTE0NDAyNVoXDTMyMDgyMjE0NDA
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
NVowJDEiMCAGA1UEAwwZcmtlMi1zZXJ2ZXItY2FAMTY2MTQzODQyNTBZMBMGByqG
SM49AgEGCCqGSM49AwEHA0IABG7NRoHKS8bDW1IZZE2gGxGrEYCDUfvWtSk/xw3R
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
pDJr95aaEzAKBggqhkjOPQQDAgNIADBFAiEA26zz5tif+FH7UT6VbJp8ig631yMV
APBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
-----END CERTIFICATE-----
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# curl https://rke2.mydomain.ae:9345/cacerts
^C


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We were able to get the CA cert using the node IP or hostname, but not via the VIP, which pointed to an issue in the load balancer configuration. After correcting the load balancer config, the request through the VIP succeeded.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# curl https://rke2.mydomain.ae:9345/cacerts -k
-----BEGIN CERTIFICATE-----
cnZlci1jYUAxNjYxNDM4NDI1MB4XDTIyMDgyNTE0NDAyNVoXDTMyMDgyMjE0NDA
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
NVowJDEiMCAGA1UEAwwZcmtlMi1zZXJ2ZXItY2FAMTY2MTQzODQyNTBZMBMGByqG
SM49AgEGCCqGSM49AwEHA0IABG7NRoHKS8bDW1IZZE2gGxGrEYCDUfvWtSk/xw3R
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
pDJr95aaEzAKBggqhkjOPQQDAgNIADBFAiEA26zz5tif+FH7UT6VbJp8ig631yMV
APBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
-----END CERTIFICATE-----
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher02:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
     Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-08-25 19:10:15 +04; 1min 13s ago
       Docs: https://github.com/rancher/rke2#readme
    Process: 198178 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
    Process: 198180 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 198181 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 198182 (rke2)
      Tasks: 142
     Memory: 3.7G
     CGroup: /system.slice/rke2-server.service
             ├─198182 /usr/local/bin/rke2 server
             ├─198192 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/a&amp;gt;

root@rancher02:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4c. Accessing the cluster from the first server node&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# kubectl get nodes -o wide
NAME               STATUS   ROLES                       AGE    VERSION          INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
rancher01   Ready    control-plane,etcd,master   30m    v1.23.9+rke2r1   172.17.11.11   &amp;lt;none&amp;gt;        Ubuntu 20.04.4 LTS   5.4.0-100-generic   containerd://1.5.13-k3s1
rancher02   Ready    control-plane,etcd,master   2m8s   v1.23.9+rke2r1   172.17.11.12   &amp;lt;none&amp;gt;        Ubuntu 20.04.4 LTS   5.4.0-100-generic   containerd://1.5.13-k3s1
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;5. Adding the third server node to the cluster&lt;/strong&gt;
&lt;/h2&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher03:~# mkdir -p /etc/rancher/rke2/


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher03:~# cat /etc/rancher/rke2/config.yaml

token: K10d11c154bab23851058711225726a1189ba7fasfsafalsfasf8a8f9saf::server:2ffddc9f0dasdfasf9a898980
server: https://rke2.mydomain.ae:9345
tls-san:
  - rke2.mydomain.ae
  - 172.17.11.11
  - rancher01
  - 172.17.11.12
  - rancher02
  - 172.17.11.13
  - rancher03


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cat /etc/hosts
172.17.11.11    rancher01
172.17.11.12    rancher02
172.17.11.13    rancher03
172.16.132.35   rke2.mydomain.ae


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher03:~# curl -sfL https://get.rke2.io | sh -
[INFO]  finding release for channel stable
[INFO]  using v1.23.9+rke2r1 as release
[INFO]  downloading checksums at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/sha256sum-amd64.txt
[INFO]  downloading tarball at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/rke2.linux-amd64.tar.gz
[INFO]  verifying tarball
[INFO]  unpacking tarball file to /usr/local
root@rancher03:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher03:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
     Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: https://github.com/rancher/rke2#readme
root@rancher03:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher03:~# systemctl enable rke2-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /usr/local/lib/systemd/system/rke2-server.service.
root@rancher03:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher03:~# systemctl start rke2-server.service
root@rancher03:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# kubectl get nodes -o wide
NAME               STATUS   ROLES                       AGE   VERSION          INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
rancher01   Ready    control-plane,etcd,master   39m   v1.23.9+rke2r1   172.17.11.11   &amp;lt;none&amp;gt;        Ubuntu 20.04.4 LTS   5.4.0-100-generic   containerd://1.5.13-k3s1
rancher02   Ready    control-plane,etcd,master   11m   v1.23.9+rke2r1   172.17.11.12   &amp;lt;none&amp;gt;        Ubuntu 20.04.4 LTS   5.4.0-100-generic   containerd://1.5.13-k3s1
rancher03   Ready    control-plane,etcd,master   54s   v1.23.9+rke2r1   172.17.11.13   &amp;lt;none&amp;gt;        Ubuntu 20.04.4 LTS   5.4.0-100-generic   containerd://1.5.13-k3s1
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h2&gt;
  
  
  6. Installing Rancher using Helm on the three-node RKE2 cluster
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
--2022-08-25 19:37:08--  https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.21.175, 2606:2800:233:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.21.175|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14026634 (13M) [application/x-tar]
Saving to: ‘helm-v3.9.4-linux-amd64.tar.gz’

helm-v3.9.4-linux-amd64.tar.gz                  100%[====================================================================================================&amp;gt;]  13.38M  --.-KB/s    in 0.06s

2022-08-25 19:37:08 (233 MB/s) - ‘helm-v3.9.4-linux-amd64.tar.gz’ saved [14026634/14026634]



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# tar -zxvf helm-v3.9.4-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# cd linux-amd64/

root@rancher01:~/linux-amd64# ls
helm  LICENSE  README.md


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~/linux-amd64# cp -pr helm /usr/local/bin/
root@rancher01:~/linux-amd64# which helm
/usr/local/bin/helm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

"rancher-stable" has been added to your repositories
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "rancher-stable" chart repository
Update Complete. ⎈Happy Helming!⎈
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# kubectl create namespace cattle-system
namespace/cattle-system created
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Below we set tls=external, since we will be terminating TLS/SSL on the load balancer. You can set this up according to your needs; refer to the Rancher documentation below for more information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.ranchermanager.rancher.io/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster" rel="noopener noreferrer"&gt;https://docs.ranchermanager.rancher.io/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=rancher.mydomain.com --set bootstrapPassword='mypassword123' --set tls=external
NAME: rancher
LAST DEPLOYED: Thu Aug 25 19:57:37 2022
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://rancher.mydomain.com to get started.

If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:


echo https://rancher.mydomain.com/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')


To get just the bootstrap password on its own, run:


kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'



Happy Containering!
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# kubectl get all -n cattle-system
NAME                           READY   STATUS              RESTARTS   AGE
pod/rancher-75b7b67cbb-lrh7z   0/1     ContainerCreating   0          9s
pod/rancher-75b7b67cbb-nqt85   0/1     ContainerCreating   0          9s
pod/rancher-75b7b67cbb-rjggf   0/1     ContainerCreating   0          9s

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/rancher   ClusterIP   10.43.251.67   &amp;lt;none&amp;gt;        80/TCP,443/TCP   9s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   0/3     3            0           9s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-75b7b67cbb   3         3         0       9s
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a few minutes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

root@rancher01:~# kubectl get all -n cattle-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/rancher-75b7b67cbb-lrh7z   1/1     Running   0          93s
pod/rancher-75b7b67cbb-nqt85   1/1     Running   0          93s
pod/rancher-75b7b67cbb-rjggf   1/1     Running   0          93s

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/rancher   ClusterIP   10.43.251.67   &amp;lt;none&amp;gt;        80/TCP,443/TCP   93s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   3/3     3            3           93s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/rancher-75b7b67cbb   3         3         3       93s
root@rancher01:~#


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To log in to the UI, use the DNS name that was used during the Helm installation. For testing, you can point the DNS name/hostname to the load balancer IP or to any node IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rancher.mydomain.com" rel="noopener noreferrer"&gt;https://rancher.mydomain.com&lt;/a&gt; or &lt;a href="http://rancher.mydomain.com" rel="noopener noreferrer"&gt;http://rancher.mydomain.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fen72hc5olyeze7cztb5o.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fen72hc5olyeze7cztb5o.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04swjua5z114wie882o0.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04swjua5z114wie882o0.JPG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>rancher</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
