Recently, while setting up K3s on a Beelink mini PC running Ubuntu Server, I ran into two frustrating errors while trying to run:
kubectl get nodes
The first issue was a permissions error loading the kubeconfig, and once that was fixed, a second error appeared indicating that the Kubernetes control plane wasn’t running. After debugging, here’s the full process and solution in case you run into the same problem.
Problem 1: kubeconfig Permission Denied
The initial error looked like this:
WARN[0000] Unable to read /etc/rancher/k3s/k3s.yaml, please start server with --write-kubeconfig-mode or --write-kubeconfig-group to modify kube config permissions
error: error loading config file "/etc/rancher/k3s/k3s.yaml": open /etc/rancher/k3s/k3s.yaml: permission denied
This happens because K3s writes the kubeconfig file owned by root with restrictive permissions (mode 600), so kubectl run as a regular user can't read it.
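You can see the effect of mode 600 with a quick stand-in demonstration (a hypothetical temp file, not the real k3s.yaml, so it runs anywhere):

```shell
# Stand-in for /etc/rancher/k3s/k3s.yaml: mode 600 means only the owning
# user (root, for the real file) can read it.
f=$(mktemp)
chmod 600 "$f"
stat -c '%a %U' "$f"   # prints the octal mode and owner, e.g. "600 root"
rm -f "$f"
```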
To fix the issue, we create a ~/.kube directory, copy the kubeconfig into it, and change the ownership of the copy so that kubectl can read it:
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
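Alternatively, as the warning message itself hints, K3s can be told to write the kubeconfig with a more permissive mode. One persistent way to do that (a sketch; note that 644 makes the file world-readable, which is a security trade-off on shared machines) is via K3s's config file:

```yaml
# /etc/rancher/k3s/config.yaml
# Equivalent to passing --write-kubeconfig-mode on the k3s command line;
# restart the k3s service after editing.
write-kubeconfig-mode: "0644"
```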
Running kubectl get nodes now threw a different error:
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Finally some progress!
This means kubectl can now read the config file, but the K3s API server isn’t running or reachable.
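A quick sanity check (a sketch assuming ss from iproute2 is available) is to see whether anything is listening on the API server port at all:

```shell
# Empty grep output means nothing is bound to 6443,
# i.e. the API server really isn't up.
ss -tln 2>/dev/null | grep ':6443' || echo "nothing listening on 6443"
```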
Problem 2: K3s controller-manager port conflict
Checking the K3s service:
sudo systemctl status k3s
The logs showed this error:
failed to listen on 127.0.0.1:10257: bind: address already in use
This indicates another process is already using port 10257, which is required by the Kubernetes controller-manager. To see which process holds the port, we run lsof against it:
sudo lsof -i :10257
kubelite 188849 root 120u IPv6 ... TCP *:10257 (LISTEN)
kubelite is the process K3s uses to run the embedded Kubernetes components. A previous K3s instance had crashed but left kubelite running, and it was still holding the port, preventing the new instance from starting.
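The underlying failure is a plain EADDRINUSE, and it's easy to reproduce in isolation (a stand-alone sketch using python3, unrelated to K3s itself):

```shell
# Reproduce the symptom: binding the same TCP address twice fails with
# EADDRINUSE, which is exactly the error in the k3s log above.
python3 - <<'EOF'
import socket

a = socket.socket()
a.bind(("127.0.0.1", 0))           # grab any free ephemeral port
port = a.getsockname()[1]

b = socket.socket()
try:
    b.bind(("127.0.0.1", port))    # second bind on the same port
except OSError as e:
    print(e.strerror)              # prints: Address already in use
EOF
```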
To fix this, stop k3s completely:
sudo systemctl stop k3s
and kill leftover processes:
sudo pkill -f k3s
sudo pkill -f kubelite
To verify the port is free, we run the lsof command again; this time it should produce no output.
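Concretely (a small sketch; lsof exits non-zero when no process matches, hence the fallback message):

```shell
# No matching process: lsof prints nothing and fails, so the echo runs.
sudo lsof -i :10257 || echo "port 10257 is free"
```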
Finally we just restart k3s:
sudo systemctl start k3s
sudo systemctl status k3s
Now kubectl get nodes should run without any issues!
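On success you should see output along these lines (the node name and version here are illustrative, not from my machine):

```
NAME      STATUS   ROLES                  AGE   VERSION
beelink   Ready    control-plane,master   2m    v1.30.6+k3s1
```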
K3s is incredibly powerful and lightweight, but diagnosing startup issues can be intimidating if you’re new to Kubernetes internals. Understanding the logs and system service behavior helped uncover the root cause quickly.
Hopefully this helps someone trying to set up K3s on bare metal or home lab hardware!