Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure.
Below is a sample Rancher architecture.
We are going to install a 3-node RKE2 Kubernetes cluster and then install Rancher on it using Helm.
1. Create a private load balancer
Create an L4 load balancer with the first RKE2 node in its backend pool (once the remaining two nodes have joined the RKE2 cluster, add them to the backend pool as well). The load balancer and the backend should listen on the following ports (a sample load balancer configuration follows the list):
1) 9345 to register new nodes
2) 6443 for the Kubernetes API server
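The exact steps depend on your load balancer. Purely as an illustration, if you were using HAProxy as the L4 load balancer, the relevant part of its configuration could look roughly like the sketch below; the IP is the first RKE2 node from this setup, and everything else (section names, the rest of the haproxy.cfg) is an assumption to adapt to your environment.
# excerpt from /etc/haproxy/haproxy.cfg (global/defaults sections omitted)
listen rke2-supervisor
    bind *:9345
    mode tcp
    server rancher01 172.17.11.11:9345 check
    # add rancher02 / rancher03 here once they have joined the cluster
listen rke2-apiserver
    bind *:6443
    mode tcp
    server rancher01 172.17.11.11:6443 check
    # add rancher02 / rancher03 here once they have joined the cluster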
2. Maintain the host entries on the RKE2 nodes as shown below
Maintain a hostname/DNS record for the load balancer VIP created in the first step. Here I am pointing the DNS name/hostname "rke2.mydomain.ae" to my load balancer VIP 172.17.12.5.
172.17.11.11 rancher01
172.17.11.12 rancher02
172.17.11.13 rancher03
172.17.12.5 rke2.mydomain.ae
3. Launch the first server node
3a. Adding hostnames or IPs for TLS
To avoid certificate errors with the fixed registration address, you should launch the server with the tls-san parameter set. This option adds an additional hostname or IP as a Subject Alternative Name in the server's TLS cert, and it can be specified as a list if you would like to access via both the IP and the hostname.
root@rancher01:~# mkdir -p /etc/rancher/rke2/
Note: below I maintained all of my nodes' IPs and hostnames, including the load balancer DNS name that points to the load balancer VIP created in the first step.
root@rancher01:~# cat /etc/rancher/rke2/config.yaml
tls-san:
- rke2.mydomain.ae
- 172.17.11.11
- rancher01
- 172.17.11.12
- rancher02
- 172.17.11.13
- rancher03
root@rancher01:~#
3b. Installing RKE2
Here I am pulling the latest stable RKE2 release.
root@rancher01:~# curl -sfL https://get.rke2.io | sh -
[INFO] finding release for channel stable
[INFO] using v1.23.9+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@rancher01:~#
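Optionally, you can also enable the service so it starts automatically on boot (this step is shown explicitly for the third node later in this article):
root@rancher01:~# systemctl enable rke2-server.service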
root@rancher01:~# systemctl start rke2-server.service
Note: when the rke2-server service is started for the first time, it will take a few minutes as it needs to pull all the images required to spin up the cluster, so don't panic.
If you want, you can monitor the setup process with the command "journalctl -u rke2-server -f".
root@rancher01:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-08-25 18:22:05 +04; 1min 15s ago
Docs: https://github.com/rancher/rke2#readme
Process: 235133 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 235135 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 235136 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 235138 (rke2)
Tasks: 181
Memory: 3.4G
CGroup: /system.slice/rke2-server.service
├─235138 /usr/local/bin/rke2 server
├─235177 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/a>
├─235290 kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s --address=0.0.0.0 --alsologtostderr=false --anonymous-auth=f>
Note:
After running this installation, the rke2-server service will be installed. The rke2-server service will be configured to automatically restart after node reboots or if the process crashes or is killed.
Additional utilities will be installed at /var/lib/rancher/rke2/bin/. They include kubectl, crictl, and ctr. Note that these are not on your PATH by default.
Two cleanup scripts, rke2-killall.sh and rke2-uninstall.sh, will be installed to /usr/local/bin.
A kubeconfig file will be written to /etc/rancher/rke2/rke2.yaml.
A token that can be used to register other server or agent nodes will be created at /var/lib/rancher/rke2/server/node-token.
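Optionally, instead of copying binaries around you can add the RKE2 bin directory to your PATH for the current session; and if you want to use crictl against the RKE2-managed containerd, point it at the crictl config that RKE2 writes. The paths below are where RKE2 places these files in this setup; treat the crictl part as an assumption for your environment.
root@rancher01:~# export PATH=$PATH:/var/lib/rancher/rke2/bin
root@rancher01:~# export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml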
3c. Accessing the cluster
RKE2 automatically downloads the kubectl binary it needs; it will be available in the location below.
root@rancher01:~# ll /var/lib/rancher/rke2/bin/
total 315140
drwxr-xr-x 2 root root 176 Aug 25 18:20 ./
drwxr-xr-x 4 root root 31 Aug 25 18:20 ../
-rwxr-xr-x 1 root root 54096264 Aug 25 18:20 containerd*
-rwxr-xr-x 1 root root 7369488 Aug 25 18:20 containerd-shim*
-rwxr-xr-x 1 root root 11527464 Aug 25 18:20 containerd-shim-runc-v1*
-rwxr-xr-x 1 root root 11539944 Aug 25 18:20 containerd-shim-runc-v2*
-rwxr-xr-x 1 root root 35018008 Aug 25 18:20 crictl*
-rwxr-xr-x 1 root root 20463560 Aug 25 18:20 ctr*
-rwxr-xr-x 1 root root 49328448 Aug 25 18:20 kubectl*
-rwxr-xr-x 1 root root 122372296 Aug 25 18:20 kubelet*
-rwxr-xr-x 1 root root 10961304 Aug 25 18:20 runc*
root@rancher01:~# cp /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/
root@rancher01:~# chmod +x /usr/local/bin/kubectl
root@rancher01:~# cat /etc/rancher/rke2/rke2.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    client-key-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Set the KUBECONFIG environment variable as shown below.
root@rancher01:~# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
root@rancher01:~#
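Optionally, you can persist this across shell sessions by appending it to your shell profile:
root@rancher01:~# echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> ~/.bashrc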
root@rancher01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rancher01 Ready control-plane,etcd,master 3m31s v1.23.9+rke2r1 172.17.11.11 <none> Ubuntu 20.04.4 LTS 5.4.0-100-generic containerd://1.5.13-k3s1
root@rancher01:~#
root@rancher01:~# kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/cloud-controller-manager-rancher01 1/1 Running 0 5m24s
kube-system pod/etcd-rancher01 1/1 Running 0 5m21s
kube-system pod/helm-install-rke2-canal-kdm5c 0/1 Completed 0 5m12s
kube-system pod/helm-install-rke2-coredns-jnzgh 0/1 Completed 0 5m12s
kube-system pod/helm-install-rke2-ingress-nginx-bsxkp 0/1 Completed 0 5m12s
kube-system pod/helm-install-rke2-metrics-server-8vn8f 0/1 Completed 0 5m12s
kube-system pod/kube-apiserver-rancher01 1/1 Running 0 4m59s
kube-system pod/kube-controller-manager-rancher01 1/1 Running 0 5m25s
kube-system pod/kube-proxy-rancher01 1/1 Running 0 5m23s
kube-system pod/kube-scheduler-rancher01 1/1 Running 0 4m48s
kube-system pod/rke2-canal-vkr74 2/2 Running 0 4m57s
kube-system pod/rke2-coredns-rke2-coredns-545d64676-s7hhs 1/1 Running 0 4m57s
kube-system pod/rke2-coredns-rke2-coredns-autoscaler-5dd676f5c7-fvdhw 1/1 Running 0 4m57s
kube-system pod/rke2-ingress-nginx-controller-67zjf 1/1 Running 0 4m11s
kube-system pod/rke2-metrics-server-6564db4569-vllzm 1/1 Running 0 4m29s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5m28s
kube-system service/rke2-coredns-rke2-coredns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 4m58s
kube-system service/rke2-ingress-nginx-controller-admission ClusterIP 10.43.188.253 <none> 443/TCP 4m11s
kube-system service/rke2-metrics-server ClusterIP 10.43.43.187 <none> 443/TCP 4m29s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/rke2-canal 1 1 1 1 1 kubernetes.io/os=linux 4m57s
kube-system daemonset.apps/rke2-ingress-nginx-controller 1 1 1 1 1 kubernetes.io/os=linux 4m11s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/rke2-coredns-rke2-coredns 1/1 1 1 4m58s
kube-system deployment.apps/rke2-coredns-rke2-coredns-autoscaler 1/1 1 1 4m58s
kube-system deployment.apps/rke2-metrics-server 1/1 1 1 4m29s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/rke2-coredns-rke2-coredns-545d64676 1 1 1 4m58s
kube-system replicaset.apps/rke2-coredns-rke2-coredns-autoscaler-5dd676f5c7 1 1 1 4m58s
kube-system replicaset.apps/rke2-metrics-server-6564db4569 1 1 1 4m29s
NAMESPACE NAME COMPLETIONS DURATION AGE
kube-system job.batch/helm-install-rke2-canal 1/1 17s 5m23s
kube-system job.batch/helm-install-rke2-coredns 1/1 17s 5m23s
kube-system job.batch/helm-install-rke2-ingress-nginx 1/1 68s 5m23s
kube-system job.batch/helm-install-rke2-metrics-server 1/1 46s 5m23s
root@rancher01:~#
4. Adding the second server node to the cluster
4a. Setting up the second server node
First, copy the node token generated on the first server node, shown below.
root@rancher01:/var/lib/rancher# cat /var/lib/rancher/rke2/server/node-token
K10d11c154bab23851058711225726a1189ba7f00642b87c82e0b7407cdfc25c82d::server:2ffddc9f0d8901c2b6e30bde043850e1
root@rancher01:/var/lib/rancher#
root@rancher02:~# mkdir -p /etc/rancher/rke2/
Maintain the config.yaml as shown below:
root@rancher02:~# cat /etc/rancher/rke2/config.yaml
token: K10d11c154bab23851058711225726a1189ba7fasfsafalsfasf8a8f9saf::server:2ffddc9f0dasdfasf9a898980
server: https://rke2.mydomain.ae:9345
tls-san:
- rke2.mydomain.ae
- 172.17.11.11
- rancher01
- 172.17.11.12
- rancher02
- 172.17.11.13
- rancher03
Ensure the host entries on the second server node are set up the same way as on the first server node.
cat /etc/hosts
172.17.11.11 rancher01
172.17.11.12 rancher02
172.17.11.13 rancher03
172.17.12.5 rke2.mydomain.ae
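Before installing RKE2 on this node, it can save troubleshooting time to first confirm that the registration address is reachable through the load balancer, for example with the same cacerts endpoint that comes up in the troubleshooting below:
root@rancher02:~# curl -k https://rke2.mydomain.ae:9345/cacerts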
4b. Installing RKE2 on the second server node
root@rancher02:~# curl -sfL https://get.rke2.io | sh -
[INFO] finding release for channel stable
[INFO] using v1.23.9+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@rancher02:~#
Here we ran into an issue: after starting the rke2-server service, the second server node didn't come up. Below is the error:
journalctl -u rke2-server -f
Aug 25 18:56:40 rancher02 systemd[1]: Failed to start Rancher Kubernetes Engine v2 (server).
Aug 25 18:56:45 rancher02 systemd[1]: rke2-server.service: Scheduled restart job, restart counter is at 15.
Aug 25 18:56:45 rancher02 systemd[1]: Stopped Rancher Kubernetes Engine v2 (server).
Aug 25 18:56:45 rancher02 systemd[1]: Starting Rancher Kubernetes Engine v2 (server)...
Aug 25 18:56:45 rancher02 sh[195790]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Aug 25 18:56:45 rancher02 sh[195798]: /bin/sh: 1: /usr/bin/systemctl: not found
Aug 25 18:56:45 rancher02 rke2[195804]: time="2022-08-25T18:56:45+04:00" level=warning msg="not running in CIS mode"
Aug 25 18:56:45 rancher02 rke2[195804]: time="2022-08-25T18:56:45+04:00" level=info msg="Starting rke2 v1.23.9+rke2r1 (2d206eba8d0180351408dbed544c852b6b4fdd42)"
Aug 25 18:57:05 rancher02 rke2[195804]: time="2022-08-25T18:57:05+04:00" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://rke2.mydomain.ae:9345/cacerts\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 25 18:57:05 rancher02 systemd[1]: rke2-server.service: Main process exited, code=exited, status=1/FAILURE
Aug 25 18:57:05 rancher02 systemd[1]: rke2-server.service: Failed with result 'exit-code'.
Aug 25 18:57:05 rancher02 systemd[1]: Failed to start Rancher Kubernetes Engine v2 (server).
As per the log above, it failed to get the CA certs from https://rke2.mydomain.ae:9345/cacerts.
When we accessed the same cacerts endpoint using the first node's hostname, it worked and returned the certificate.
root@rancher01:~# curl https://rancher01:9345/cacerts -k
-----BEGIN CERTIFICATE-----
cnZlci1jYUAxNjYxNDM4NDI1MB4XDTIyMDgyNTE0NDAyNVoXDTMyMDgyMjE0NDA
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
NVowJDEiMCAGA1UEAwwZcmtlMi1zZXJ2ZXItY2FAMTY2MTQzODQyNTBZMBMGByqG
SM49AgEGCCqGSM49AwEHA0IABG7NRoHKS8bDW1IZZE2gGxGrEYCDUfvWtSk/xw3R
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
pDJr95aaEzAKBggqhkjOPQQDAgNIADBFAiEA26zz5tif+FH7UT6VbJp8ig631yMV
APBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
-----END CERTIFICATE-----
root@rancher01:~#
root@rancher01:~# curl https://rke2.mydomain.ae:9345/cacerts
^C
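A quick way to separate a DNS/TLS problem from plain TCP connectivity is to test the registration port directly against the first node and then through the VIP from the node that fails to join (illustrative; nc/netcat may need to be installed first):
root@rancher02:~# nc -vz 172.17.11.11 9345
root@rancher02:~# nc -vz rke2.mydomain.ae 9345
If the direct check succeeds but the VIP check times out, the problem is in the load balancer path rather than in RKE2 itself.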
So we were able to get the CA cert using the node IP or hostname, but not through the VIP, which pointed to a problem in the load balancer configuration. After correcting the load balancer configuration, the same request through the VIP succeeded:
root@rancher01:~# curl https://rke2.mydomain.ae:9345/cacerts -k
-----BEGIN CERTIFICATE-----
cnZlci1jYUAxNjYxNDM4NDI1MB4XDTIyMDgyNTE0NDAyNVoXDTMyMDgyMjE0NDA
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
NVowJDEiMCAGA1UEAwwZcmtlMi1zZXJ2ZXItY2FAMTY2MTQzODQyNTBZMBMGByqG
SM49AgEGCCqGSM49AwEHA0IABG7NRoHKS8bDW1IZZE2gGxGrEYCDUfvWtSk/xw3R
dc5sEBfQUANmtPK4ckGrMYpYmFT5EAyBMnmoNRB0brnfxFCjQjBAMA4GA1UdDwEB
/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
pDJr95aaEzAKBggqhkjOPQQDAgNIADBFAiEA26zz5tif+FH7UT6VbJp8ig631yMV
APBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/5eGLG0uix3SOpwhc
-----END CERTIFICATE-----
root@rancher01:~#
root@rancher02:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-08-25 19:10:15 +04; 1min 13s ago
Docs: https://github.com/rancher/rke2#readme
Process: 198178 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Process: 198180 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 198181 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 198182 (rke2)
Tasks: 142
Memory: 3.7G
CGroup: /system.slice/rke2-server.service
├─198182 /usr/local/bin/rke2 server
├─198192 containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/rke2/a>
root@rancher02:~#
4c. Accessing the cluster from the first server node
root@rancher01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rancher01 Ready control-plane,etcd,master 30m v1.23.9+rke2r1 172.17.11.11 <none> Ubuntu 20.04.4 LTS 5.4.0-100-generic containerd://1.5.13-k3s1
rancher02 Ready control-plane,etcd,master 2m8s v1.23.9+rke2r1 172.17.11.12 <none> Ubuntu 20.04.4 LTS 5.4.0-100-generic containerd://1.5.13-k3s1
root@rancher01:~#
5. Adding the third server node to the cluster
root@rancher03:~# mkdir -p /etc/rancher/rke2/
root@rancher03:~# cat /etc/rancher/rke2/config.yaml
token: K10d11c154bab23851058711225726a1189ba7fasfsafalsfasf8a8f9saf::server:2ffddc9f0dasdfasf9a898980
server: https://rke2.mydomain.ae:9345
tls-san:
- rke2.mydomain.ae
- 172.17.11.11
- rancher01
- 172.17.11.12
- rancher02
- 172.17.11.13
- rancher03
cat /etc/hosts
172.17.11.11 rancher01
172.17.11.12 rancher02
172.17.11.13 rancher03
172.17.12.5 rke2.mydomain.ae
root@rancher03:~# curl -sfL https://get.rke2.io | sh -
[INFO] finding release for channel stable
[INFO] using v1.23.9+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.23.9+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@rancher03:~#
root@rancher03:~# systemctl status rke2-server.service
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
Loaded: loaded (/usr/local/lib/systemd/system/rke2-server.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: https://github.com/rancher/rke2#readme
root@rancher03:~#
root@rancher03:~# systemctl enable rke2-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /usr/local/lib/systemd/system/rke2-server.service.
root@rancher03:~#
root@rancher03:~# systemctl start rke2-server.service
root@rancher03:~#
root@rancher01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rancher01 Ready control-plane,etcd,master 39m v1.23.9+rke2r1 172.17.11.11 <none> Ubuntu 20.04.4 LTS 5.4.0-100-generic containerd://1.5.13-k3s1
rancher02 Ready control-plane,etcd,master 11m v1.23.9+rke2r1 172.17.11.12 <none> Ubuntu 20.04.4 LTS 5.4.0-100-generic containerd://1.5.13-k3s1
rancher03 Ready control-plane,etcd,master 54s v1.23.9+rke2r1 172.17.11.13 <none> Ubuntu 20.04.4 LTS 5.4.0-100-generic containerd://1.5.13-k3s1
root@rancher01:~#
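With all three server nodes Ready, remember to add rancher02 and rancher03 to the load balancer backend pool for ports 9345 and 6443, as noted in step 1. Since rke2.mydomain.ae is included in the tls-san list and the load balancer forwards 6443, you can also optionally point a copy of the kubeconfig at the VIP instead of 127.0.0.1 to reach the API server through the load balancer; a minimal sketch (the copied file name is just an example):
root@rancher01:~# sed 's/127.0.0.1/rke2.mydomain.ae/' /etc/rancher/rke2/rke2.yaml > /root/rke2-vip.yaml
root@rancher01:~# KUBECONFIG=/root/rke2-vip.yaml kubectl get nodes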
6. Installing Rancher using Helm on the 3-node RKE2 cluster
root@rancher01:~# wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
--2022-08-25 19:37:08-- https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
Resolving get.helm.sh (get.helm.sh)... 152.199.21.175, 2606:2800:233:1cb7:261b:1f9c:2074:3c
Connecting to get.helm.sh (get.helm.sh)|152.199.21.175|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14026634 (13M) [application/x-tar]
Saving to: ‘helm-v3.9.4-linux-amd64.tar.gz’
helm-v3.9.4-linux-amd64.tar.gz 100%[====================================================================================================>] 13.38M --.-KB/s in 0.06s
2022-08-25 19:37:08 (233 MB/s) - ‘helm-v3.9.4-linux-amd64.tar.gz’ saved [14026634/14026634]
root@rancher01:~# tar -zxvf helm-v3.9.4-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/LICENSE
linux-amd64/README.md
root@rancher01:~# cd linux-amd64/
root@rancher01:~/linux-amd64# ls
helm LICENSE README.md
root@rancher01:~/linux-amd64# cp -pr helm /usr/local/bin/
root@rancher01:~/linux-amd64# which helm
/usr/local/bin/helm
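You can quickly confirm the binary is on the PATH and working before adding the chart repository:
root@rancher01:~/linux-amd64# helm version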
root@rancher01:~# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
"rancher-stable" has been added to your repositories
root@rancher01:~#
root@rancher01:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "rancher-stable" chart repository
Update Complete. ⎈Happy Helming!⎈
root@rancher01:~#
root@rancher01:~# kubectl create namespace cattle-system
namespace/cattle-system created
root@rancher01:~#
Below we set tls=external since we will be terminating the TLS/SSL certificates at the load balancer. You can set this up according to your needs; refer to the Rancher documentation below for more information.
https://docs.ranchermanager.rancher.io/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster
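For example, if you would rather have Rancher manage its own certificates instead of terminating TLS at the load balancer, the Rancher docs linked above have you install cert-manager first and drop the tls=external flag. A rough sketch based on those docs (chart versions omitted; adjust the flags to your environment):
root@rancher01:~# helm repo add jetstack https://charts.jetstack.io
root@rancher01:~# helm repo update
root@rancher01:~# helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
root@rancher01:~# helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=rancher.mydomain.com --set bootstrapPassword='mypassword123'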
root@rancher01:~# helm install rancher rancher-stable/rancher --namespace cattle-system --set hostname=rancher.mydomain.com --set bootstrapPassword='mypassword123' --set tls=external
NAME: rancher
LAST DEPLOYED: Thu Aug 25 19:57:37 2022
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.
NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.
Check out our docs at https://rancher.com/docs/
If you provided your own bootstrap password during installation, browse to https://rancher.mydomain.com to get started.
If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:
echo https://rancher.mydomain.com/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
To get just the bootstrap password on its own, run:
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
Happy Containering!
root@rancher01:~#
root@rancher01:~# kubectl get all -n cattle-system
NAME READY STATUS RESTARTS AGE
pod/rancher-75b7b67cbb-lrh7z 0/1 ContainerCreating 0 9s
pod/rancher-75b7b67cbb-nqt85 0/1 ContainerCreating 0 9s
pod/rancher-75b7b67cbb-rjggf 0/1 ContainerCreating 0 9s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rancher ClusterIP 10.43.251.67 <none> 80/TCP,443/TCP 9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rancher 0/3 3 0 9s
NAME DESIRED CURRENT READY AGE
replicaset.apps/rancher-75b7b67cbb 3 3 0 9s
root@rancher01:~#
After a few minutes:
root@rancher01:~# kubectl get all -n cattle-system
NAME READY STATUS RESTARTS AGE
pod/rancher-75b7b67cbb-lrh7z 1/1 Running 0 93s
pod/rancher-75b7b67cbb-nqt85 1/1 Running 0 93s
pod/rancher-75b7b67cbb-rjggf 1/1 Running 0 93s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/rancher ClusterIP 10.43.251.67 <none> 80/TCP,443/TCP 93s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rancher 3/3 3 3 93s
NAME DESIRED CURRENT READY AGE
replicaset.apps/rancher-75b7b67cbb 3 3 3 93s
root@rancher01:~#
To log in to the UI, use the DNS name that was set during the Helm installation (rancher.mydomain.com in this example). For testing, you can point that DNS name/hostname to the load balancer IP or any node IP.
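Optionally, you can also confirm the ingress object the chart created for that hostname, which the rke2-ingress-nginx controller on each node serves:
root@rancher01:~# kubectl get ingress -n cattle-system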