DEV Community

Rahul Kiran Gaddam
Installing K8s on ARM64 [4 CPU, 24 GB RAM]


  • Kubernetes (K8s) has solved some of the biggest problems in infrastructure management.
  • Unfortunately, working with it requires a fair amount of infrastructure [static IP, hardware, domain name].
  • There are alternatives that help us explore it, like Play with Kubernetes and Katacoda, but something [persistence, availability] is always missing.
  • In this article we will create a K8s single-node cluster and explore K8s. This document was inspired by the article Medium K8 Installation.



  • Oracle is revolutionizing cloud for industries. Oracle is one of the few vendors in the market providing offerings across all layers of the cloud [IaaS, PaaS, SaaS].
  • Most cloud providers offer only minimal free kits to explore with.
  • Oracle has crossed this barrier by providing free Compute, Network, Load Balancer, and Autonomous Database offerings for everyone under its Always Free Resources program.


  • Using the OCI free tier, we will create a K8s single-node cluster with 24 GB RAM & 4 OCPUs.
  • For this installation, I used the configuration below. I tried to create two nodes, but could not solve the networking between them.

    • Instance Name: K8-Master
    • Image: Oracle Linux Cloud Developer 8
    • Processor: Ampere ARM 64-bit processor
  • This will create a VM with a public IP. We have to be careful when selecting containers/deliverables to run on this VM.

    • In general, deliverables are listed as linux-amd64 & darwin-amd64; we need to pick the ones labeled linux-arm64.
  • Once the VM is provisioned, it is suggested to associate it with a domain, as that simplifies access to the K8s cluster.

    • There are a lot of free domain providers. I have used No-IP.
  • Below are the steps we followed to install K8s

  # Login to Root
  sudo su

  # Updating Host File - Add entry
  ## Get CIDR Private IP

  vi /etc/hosts
  <private.ip> k8-master <domain.name>

  # Firewall Configuration
  systemctl disable firewalld
  yum install iptables-services -y
  systemctl start iptables
  systemctl enable iptables
  iptables -F
  iptables -P INPUT ACCEPT
  iptables -P OUTPUT ACCEPT
  service iptables save
  systemctl restart iptables
  iptables -L -n

  # Docker Installation
  ## Podman is provided by default, and K8s can run on Podman
  ## I was unable to install using Podman and had to move to Docker

  # -- Remove Podman
  yum remove podman buildah  -y

  # -- Install Docker
  sudo yum install -y yum-utils
  sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  yum install -y docker-ce

  # -- Configure Docker
  systemctl  stop docker
  /usr/sbin/usermod -a -G docker opc
  /usr/sbin/sysctl net.ipv4.conf.all.forwarding=1
  systemctl  start docker
  chmod 777 /var/run/docker.sock
  swapoff -a
  sed -i '/ swap / s/^/#/' /etc/fstab
  vi /etc/docker/daemon.json
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

  # Install K8 Software

  # -- Pre configurations
  cat <<EOF |  tee /etc/modules-load.d/k8s.conf
  br_netfilter
  EOF

  cat <<EOF |  tee /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF

  sysctl --system

  cat <<EOF |  tee /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
  enabled=1
  gpgcheck=1
  gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
  exclude=kubelet kubeadm kubectl
  EOF

  setenforce 0
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

  # -- Download
  yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
  systemctl enable --now kubelet

  # -- Validate
  kubectl version --short
  kubeadm version -o short

  # -- Creating OS Services
  systemctl enable docker.service
  systemctl enable kubelet.service
  systemctl daemon-reload
  systemctl restart docker
  systemctl restart kubelet

  # -- Installing K8 Single Node Cluster
  CERTKEY=$(kubeadm certs certificate-key)
  kubeadm init --apiserver-cert-extra-sans=<domain.name>,<public.ip>,<private.ip> --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<domain.name> --upload-certs --certificate-key=$CERTKEY

  # -- Moving k8 config file  
  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  chown $(id -u):$(id -g) $HOME/.kube/config
  mkdir -p /home/opc/.kube
  cp $HOME/.kube/config /home/opc/.kube/config
  chmod 777 /home/opc/.kube/config

  # -- Validating Installation
  netstat -nplt
  kubectl get nodes
  kubectl get pods -n kube-system

  # -- Enabling Flannel Networking
  kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
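The /etc/hosts edit above can also be scripted idempotently instead of editing the file by hand. Below is a sketch that works on a temporary copy of the file; the IP and hostname are placeholder values:

```shell
# Work on a copy so the real /etc/hosts stays untouched while experimenting
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts_file"

# Append "<ip> <hostname>" only if the hostname is not already present
add_host_entry() {
  grep -q "[[:space:]]$3\$" "$1" || printf '%s %s\n' "$2" "$3" >> "$1"
}

add_host_entry "$hosts_file" 10.0.0.2 k8-master
add_host_entry "$hosts_file" 10.0.0.2 k8-master   # second call is a no-op
count=$(grep -c 'k8-master' "$hosts_file")
echo "$count"                                      # prints 1
rm -f "$hosts_file"
```

To use it for real, point it at /etc/hosts (as root) with your private IP and hostname.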


  • With the K8s environment successfully installed, we want to run pods and access them using an associated DNS name.
  • An Ingress controller helps us do this. We will associate an Ingress with two pods.
# Taint Master
## This will allow pods to be scheduled on Master
kubectl get nodes -o json | jq '.items[].spec.taints'
kubectl taint nodes k8-master node-role.kubernetes.io/master-

# Install Helm
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
mv /usr/local/bin/helm /usr/bin

# -- Validating Helm Installation
helm version

# -- Add Helm Repo
helm repo add stable https://charts.helm.sh/stable
helm repo list

# Install Nginx Ingress Controller

# -- Add Helm chart repo, as the default is deprecated
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm repo list

# -- Download default chart
helm show values ingress-nginx/ingress-nginx > ngingress-metal-custom.yaml
chmod 777 ngingress-metal-custom.yaml

# -- Update settings in ngingress-metal-custom.yaml to run Nginx on OCI
hostNetwork: false ## change to true

  enabled: false ## change to true

kind: Deployment ## change to DaemonSet

- <public.ip> ## add your instance's Public IP

- <public.ip>/32 ## add your instance's Public IP in CIDR form

# -- Run Chart
kubectl create ns ingress-nginx
helm install helm-ngingress ingress-nginx/ingress-nginx -n ingress-nginx --values ngingress-metal-custom.yaml

# -- Verification
kubectl get all -n ingress-nginx
helm list -n ingress-nginx
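Rather than editing ngingress-metal-custom.yaml by hand, the two unambiguous changes can be scripted with sed. This is a sketch run against a trimmed-down sample file (the real chart values have much more surrounding context, so the indented `enabled:` flag is safer to edit by hand):

```shell
# Sample of the relevant default values (the real file is much larger)
cat > /tmp/sample-values.yaml <<'EOF'
hostNetwork: false
kind: Deployment
EOF

# Flip the two top-level keys, then show the result
sed -i 's/^hostNetwork: false/hostNetwork: true/; s/^kind: Deployment/kind: DaemonSet/' /tmp/sample-values.yaml
grep '^kind:' /tmp/sample-values.yaml   # prints: kind: DaemonSet
```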
  • Connecting Service to an Ingress
# -- This will create Deployment, ClusterIP Service, Ingress
kubectl apply -f <manifest.yaml>

# -- Verify Ingress 
kubectl get ing
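The manifest applied above is not shown; a minimal combination of Deployment, ClusterIP Service, and Ingress might look like the sketch below. All names, the container image, and the host are hypothetical placeholders — adjust them to your setup, and make sure the image is built for linux/arm64:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: <your-arm64-hello-image>   # placeholder; pick an image built for linux/arm64
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: ClusterIP
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  ingressClassName: nginx
  rules:
  - host: <domain.name>                   # replace with your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
```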
  • On accessing http://<public.ip> or http://<domain.name>, the system will display Hello, World!


  • The K8s team has created the Kubernetes Dashboard to view insights into a cluster.
  • Typically it is accessed using kube proxy or a NodePort. We will deploy it and access it using an Ingress.
# -- Install Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# -- Verify Dashboard 
kubectl get svc -n kubernetes-dashboard
kubectl get pods -n kubernetes-dashboard

# -- Create Service Account to Access Dashboard
kubectl create serviceaccount rahgadda -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:rahgadda
kubectl create clusterrolebinding user-cluster-admin-binding --clusterrole=cluster-admin --user=default

# -- Create Config file to Login
name=$(kubectl get serviceaccount rahgadda -n default -o jsonpath="{.secrets[0].name}")
ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)
server=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')

echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > rahgadda-kubeconfig.yaml

# -- Use rahgadda-kubeconfig.yaml file to login to Dashboard

# -- Create Ingress for Dashboard Service
kubectl apply -f <dashboard-ingress.yaml>

# -- Dashboard will be available at URL https://<domain.name>/dashboard/
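Before wiring in real secrets, the kubeconfig template above can be sanity-checked by rendering it with dummy values (a sketch; the `ca`, `server`, and `token` contents here are placeholders, not real credentials):

```shell
# Dummy stand-ins for the values normally pulled from the cluster
ca="Zm9vYmFy"
server="https://203.0.113.10:6443"
token="dummy-token"

cat > /tmp/test-kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
EOF

# The rendered file should carry the token exactly once
grep -c 'dummy-token' /tmp/test-kubeconfig.yaml   # prints 1
```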

