Ștefănescu Liviu
K8s cluster with OCI free-tier and Raspberry Pi4 (part 4)

This long read is a multi-part tutorial for building a Kubernetes cluster (using k3s) with 4 x OCI free-tier ARM instances and 4 x Raspberry Pi 4, plus the tools needed for the installation (Terraform and Ansible) and a lot of things installed on the cluster.
Part 4 adds applications to the cluster.
The GitHub repository is here

I'll only install the apps and show how to access them; configuring each one for actual use is up to you.

Lens

The first thing installed is a beautiful dashboard: Lens. Install the desktop app on your PC and go to File > Add Cluster. Paste in everything you get from running kubectl config view --minify --raw on the server, replacing 127.0.0.1:6443 in the output with your server IP, in my case 10.20.30.1:6443.
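If you'd rather do the substitution in one step, piping through sed should work (a minimal sketch; 10.20.30.1 is my server IP, adjust to yours):

# Print a self-contained kubeconfig and rewrite the API endpoint on the fly
kubectl config view --minify --raw | sed 's/127.0.0.1:6443/10.20.30.1:6443/'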

MetalLB

MetalLB will work as our load balancer; it will give an external IP to every Service of type LoadBalancer. Install it with kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml. Create a file config-metallb.yaml and write this in it:

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.150-192.168.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default

Edit the IP range with values from your home network. Apply it with kubectl apply -f config-metallb.yaml. From now on, whenever this guide accesses a service via a node IP and port (10.20.30.1-10.20.30.8:port), you can instead use the external IP assigned by MetalLB.
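To confirm the pool exists and that services pick up external IPs, a quick check (assuming the names used above):

# The IPAddressPool created above
kubectl get ipaddresspools -n metallb-system
# Services of type LoadBalancer should now show an EXTERNAL-IP from the pool
kubectl get svc -A | grep LoadBalancer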

Helm & Arkade

Git does not come installed on the server, so run sudo apt install git first. The KUBECONFIG environment variable also hasn't been configured until now, so add this line to ~/.bashrc (then run source ~/.bashrc or log back in):

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Helm is the package manager for Kubernetes. Installing it is very easy; just run the following commands on the server:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Arkade is an open-source marketplace for Kubernetes. Installation is just curl -sLS https://get.arkade.dev | sudo sh.
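A couple of examples of what arkade can do (optional; exact app availability depends on your arkade version):

arkade get k9s            # download a CLI tool to ~/.arkade/bin
arkade install openfaas   # install an application onto the cluster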

Bash completion

To run commands faster and easier you can use auto-completion for kubectl. Run sudo apt install bash-completion and then echo 'source <(kubectl completion bash)' >>~/.bashrc. Now after every kubectl you can hit TAB and it will autocomplete for you. Try it with a running pod: just write kubectl get pod and hit TAB.
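If you also alias kubectl to k, completion can be wired to the alias too (this is the snippet from the kubectl docs):

echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc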

Traefik dashboard

First write a YAML file traefik-crd.yaml with the content below and apply it with kubectl apply -f traefik-crd.yaml. On k3s, this HelmChartConfig overrides the values of the bundled Traefik chart:

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--api"
      - "--api.dashboard=true"
      - "--api.insecure=true"
      - "--log.level=DEBUG"
    ports:
      traefik:
        expose: true
    providers:
      kubernetesCRD:
        allowCrossNamespace: true
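Applying the file makes k3s redeploy Traefik with the new values; you can watch the rollout and confirm port 9000 is exposed (a quick check, assuming the default k3s names):

kubectl -n kube-system rollout status deployment/traefik
kubectl -n kube-system get svc traefik   # the 9000/TCP entry is the dashboard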

The dashboard should now be available at http://10.20.30.4:9000/dashboard/#/ in your browser (or at any of the 8 node IPs).

Longhorn

Longhorn is cloud native distributed block storage for Kubernetes. First create a new install-longhorn.yml file for an Ansible playbook and paste this into it:

---
- hosts: all
  tasks:
  - name: Install packages required by Longhorn
    apt:
      name:
        - nfs-common
        - open-iscsi
        - util-linux
      state: present
...

Run it with ansible-playbook install-longhorn.yml -K -b to install these extra components on the nodes. Then run ansible -a "lsblk -f" all to find the names of the drives.
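Optionally, verify the iSCSI daemon on every node (a sanity check; on some distributions iscsid is socket-activated and only starts on first use):

ansible all -b -a "systemctl status iscsid"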
Move to the server. Run helm repo add longhorn https://charts.longhorn.io, then helm repo update,
and then helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace. It takes a while, around 7 minutes; you can watch progress with kubectl get pods -n longhorn-system. Now create a longhorn-service.yaml file and paste this:

apiVersion: v1
kind: Service
metadata:
  name: longhorn-ingress-lb
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-ui
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http

Apply it with kubectl apply -f longhorn-service.yaml. Now you can access the Longhorn dashboard in your browser at 10.20.30.1:port (or any of your node IPs). You get the port from running kubectl describe svc longhorn-ingress-lb -n longhorn-system | grep NodePort.
Now make Longhorn the default StorageClass. The Longhorn chart already marks itself as default, so all that's left is to un-default local-path: run kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' and you should have only one default in kubectl get storageclass.
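To double-check the result (the expected state, assuming a stock k3s install):

kubectl get storageclass
# longhorn should be marked (default); local-path should no longer be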

Portainer

This app is a container management platform. It will be installed using Helm again. Run helm repo add portainer https://portainer.github.io/k8s/ and then helm repo update.
Now just run helm install --create-namespace -n portainer portainer portainer/portainer. You can access the Portainer UI at 10.20.30.1:30777 (or any other IP from your WireGuard network, 10.20.30.1-10.20.30.8, on port 30777). It uses 10 GB; check with kubectl get pvc -n portainer (you can also see the newly created volume in your Longhorn dashboard). There you will create a user and password. This is what I have after clicking Get Started:

(screenshot: the Portainer dashboard)
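You can confirm the NodePort and the backing volume like this (30777 is the chart's default NodePort):

kubectl get svc -n portainer portainer   # shows the 30777 NodePort
kubectl get pvc -n portainer             # the ~10 GB volume backing Portainer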

ArgoCD

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
Installation is very easy: kubectl create namespace argocd, then kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml, and wait a few minutes (or check progress with kubectl get all -n argocd). To access the UI, run kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'. Find your port with kubectl describe service/argocd-server -n argocd | grep NodePort and access the UI at 10.20.30.1:port (or another IP from your network). The user is admin and the password is read with kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo.
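If you use the argocd CLI, the login can be scripted roughly like this (a sketch; replace 10.20.30.1:PORT with a node IP and the NodePort found above):

argocd login 10.20.30.1:PORT --username admin --insecure \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)"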

Prometheus & Grafana

Prometheus is probably the best monitoring system out there, and Grafana will be used as its dashboard. They will be installed from the ArgoCD UI using the official Helm chart kube-prometheus-stack. Open Applications and click New App. Edit:

  • App Name : kube-prometheus-stack
  • Project Name : default
  • Sync Policy : Automatic
  • check Auto Create Namespace and Server Side Apply
  • Repository URL : https://prometheus-community.github.io/helm-charts (select Helm)
  • Chart : kube-prometheus-stack
  • Cluster URL : https://kubernetes.default.svc
  • Namespace : kube-prometheus-stack
  • alertmanager.service.type : LoadBalancer
  • prometheus.service.type : LoadBalancer
  • prometheusOperator.service.type : LoadBalancer
  • grafana.ingress.enabled : true.

Now hit Create (a declarative equivalent of these UI steps is sketched below). We configured the services as LoadBalancer because by default they are ClusterIP, and to access those you would have to port-forward every time. You can access the Prometheus UI at node:port (get the port with kubectl describe svc kube-prometheus-stack-prometheus -n kube-prometheus-stack | grep NodePort). The same goes for the Prometheus Alertmanager (get the port with kubectl describe svc kube-prometheus-stack-alertmanager -n kube-prometheus-stack | grep NodePort). For the Grafana dashboard you need to go to any nodeIP/kube-prometheus-stack-grafana:80. The user is admin and you get the password with kubectl get secret --namespace kube-prometheus-stack kube-prometheus-stack-grafana -o jsonpath='{.data.admin-password}' | base64 -d.
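For reference, the same app can be defined declaratively instead of through the UI. This is a sketch mirroring the settings above (untested against your cluster; apply it to the argocd namespace if you prefer this route):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: kube-prometheus-stack
    targetRevision: "*"
    helm:
      values: |
        alertmanager:
          service:
            type: LoadBalancer
        prometheus:
          service:
            type: LoadBalancer
        prometheusOperator:
          service:
            type: LoadBalancer
        grafana:
          ingress:
            enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-prometheus-stack
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true
    - ServerSideApply=true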
