NOTE: This article assumes you already know a bit about K8s and want to set up a local cluster with one or more nodes, but even if you aren't familiar with K8s, you can still follow along.
Context on K3s
Kubernetes is great to run with managed services like AWS EKS or GCP GKE in production, but making changes directly in a production cluster is usually a bad idea, since:
- Testing in a dev environment is much faster and better practice
- When learning K8s initially, it's better to avoid unnecessary cloud bills
That's where K3s comes in: it lets you run a local K8s cluster with one or more nodes and gives you a consistent, production-like environment!
There are also other tools like minikube and kind that you'll find in the official K8s tooling guide,
but I'd still recommend K3s since:
- kind runs its nodes on your system's Docker daemon, not on K8s's usual containerd runtime; it's better to keep that separate and not mix your Docker containers in here.
- minikube only has experimental support for multi-node clusters.
So how is K3s different from K8s?
I recently built a project that deploys to GKE and tested it locally with K3s first, so here's what I can say:
- The Kubernetes API (including its metrics endpoints) is fully supported.
- All objects like Pods, Namespaces, Services - everything is the same!
- The only notable difference is the datastore: K3s uses SQLite by default instead of etcd (and it can also be configured to use etcd, MySQL, etc.)
At the end of the day, it's a single K8s binary that can run even on low-end devices like an ARM-based Pi, or on normal x86_64 systems!
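As a quick sketch of that datastore swap: the K3s install script forwards server flags, and `--datastore-endpoint` is the flag that selects an external datastore. The MySQL host, credentials, and database name below are placeholders, not from a real setup:

```shell
# Install K3s backed by an external MySQL datastore instead of the
# embedded SQLite default (host/credentials/db name are placeholders).
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(10.0.0.5:3306)/k3s"

# Or opt into embedded etcd instead (first server of an HA cluster):
# curl -sfL https://get.k3s.io | sh -s - server --cluster-init
```

For a single local node, though, the SQLite default is exactly what you want - zero extra setup.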
Set up K3s on a single node
- K3s already installs dependencies like kubectl, so you can follow along whether or not you have it installed.
Only if you have a firewall enabled on your local machine
To set this up, we first need to add firewall rules that open the ports used by the Kubernetes API server and kubelet.
I'm assuming you have ufw or some other firewall tool; here are the rules you need to allow:
6443/tcp ALLOW Anywhere # K8s API server
10250/tcp ALLOW Anywhere # K8s worker node kubelet metrics
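With ufw, those two rules can be added like this (a sketch for ufw specifically; if you use firewalld or raw iptables the commands will differ):

```shell
# Open the Kubernetes API server and kubelet metrics ports via ufw.
sudo ufw allow 6443/tcp comment 'K8s API server'
sudo ufw allow 10250/tcp comment 'K8s worker node kubelet metrics'

# Verify the rules were added.
sudo ufw status numbered
```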
Here's the install script for K3s from the official K3s quick start guide:
curl -sfL https://get.k3s.io | sh -
That's it! You should see the k3s daemon service running if you check with:
systemctl status k3s
What you should see (or something similar):
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; preset: disabled)
Active: active (running) since Mon 2025-10-27 21:43:25 IST; 32min ago
Invocation: bb923ad8337d473da638169bbfe6867f
Docs: https://k3s.io
Process: 309939 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 309940 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 309942 (k3s-server)
K3s also creates a kubeconfig file at /etc/rancher/k3s/k3s.yaml. To use this cluster, you have to point KUBECONFIG at that path:
sudo chown $USER:$USER /etc/rancher/k3s/k3s.yaml #(if user doesn't have permission)
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
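That export only lasts for the current shell session. To make it persistent, you can append it to your shell's rc file (assuming bash here; use ~/.zshrc if you're on zsh):

```shell
# Persist the KUBECONFIG setting for future bash sessions.
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
```

Open a new terminal (or run `source ~/.bashrc`) for it to take effect.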
Then check if it is working:
kubectl get ns
kubectl cluster-info
#Expected log:
#Kubernetes control plane is running at https://127.0.0.1:6443
#CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
Testing with a sample deployment
Run a sample nginx deployment with 3 replicas:
kubectl create deployment nginx --image=nginx --replicas=3
kubectl get deploy
#expected logs:
#NAME READY UP-TO-DATE AVAILABLE AGE
#nginx 3/3 3 3 16s
Then expose the deployment with a ClusterIP service:
kubectl expose deployment nginx --port=80 --target-port=80 --type=ClusterIP
kubectl get svc
#expected
#NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
#kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 41m
#nginx ClusterIP 10.43.2.40 <none> 80/TCP 93s
Here, the ClusterIP is 10.43.2.40 (yours could be different).
Let's make an HTTP request to that IP on port 80 with curl:
curl 10.43.2.40
#<!DOCTYPE html>
#<html>
#<head>
#<title>Welcome to nginx!</title>
#<style>
#html { color-scheme: light dark; }
#body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
#</style>
#</head>
#<body>
#<h1>Welcome to nginx!</h1>
#<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
#<p>For online documentation and support please refer to
#<a href="http://nginx.org/">nginx.org</a>.<br/>
#Commercial support is available at
#<a href="http://nginx.com/">nginx.com</a>.</p>
#<p><em>Thank you for using nginx.</em></p>
#</body>
#</html>
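Once you've confirmed the response, you can tear the demo resources down so they don't linger in the cluster:

```shell
# Remove the sample service and deployment created above.
kubectl delete service nginx
kubectl delete deployment nginx
```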
Set up more nodes to connect to the cluster (optional) - agent nodes
For this, set up any other machine you may have - a VM or a physical machine on the same network as the server.
For example, say I have another machine connected to the same Wi-Fi on its wlan0 interface,
and my server machine's private IP shows as 10.158.66.194/24 when I check with:
ip -4 addr show
K3s exposes a "node token" that agent nodes use to join the cluster, and they also need the URL of the server node. On the server, read the token with:
sudo cat /var/lib/rancher/k3s/server/node-token
Then, on the machine you want to join as an agent, run the install script with the server URL and that token (replace mynodetoken with the value you just read):
curl -sfL https://get.k3s.io | K3S_URL=https://10.158.66.194:6443 K3S_TOKEN=mynodetoken sh -
NOTE: 10.158.66.194 is just a dummy IP for this example; check and use the IP of your server's interface.
Then check the nodes with:
kubectl get nodes
In terms of initial setup, yes, kind and minikube might be easier to configure, but once this setup is done, it's rather easy to manage and spares you the "it works locally but not in production" headaches.
If you liked this, leave some feedback! I write a lot of notes in my Obsidian vault that I translate and publish here, so I can cover more about K8s, DevOps, and other topics.