Most Kubernetes tutorials show you how to get something running – but rarely explain why specific decisions are made or what pitfalls to avoid. This article walks you through building a production-grade Kubernetes cluster on Hetzner Cloud, complete with Control Plane, Load Balancer, Persistent Volumes, Private Registry, and automated TLS certificates. You will find all scripts and YAML manifests here: https://github.com/erdiD/hetznerk8s
Architecture Overview
We start with three Ubuntu 22.04 servers on Hetzner.
| Hostname | Role | Public IP | Private IP |
| --- | --- | --- | --- |
| mykubu-1 | Control Plane | 128.140.X.X | 10.0.1.2 |
| mykubu-2 | Worker | 91.99.X.X | 10.0.1.3 |
| mykubu-3 | Worker | 95.217.X.X | 10.0.1.4 |
Why private IPs? Cluster traffic over the private interfaces is faster and more secure, which matters especially for internal storage traffic such as Longhorn replication.
1. Base Setup on All Nodes
This script (you will find it here: https://github.com/erdiD/hetznerk8s/blob/main/setupHost.sh) ensures baseline settings for networking, container runtime dependencies, and kernel modules. It is the minimal groundwork for a stable kubeadm setup and installs everything each host needs to run as a cluster node.
./setupHost.sh
reboot
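For orientation, the baseline the script establishes typically includes kernel modules and networking sysctls like the following (a sketch of common kubeadm prerequisites, not a copy of setupHost.sh — the repo script is authoritative and also handles swap, containerd, and the kubeadm/kubelet/kubectl packages):

```
# /etc/modules-load.d/k8s.conf -- kernel modules containerd and kube-proxy rely on
overlay
br_netfilter

# /etc/sysctl.d/k8s.conf -- let iptables see bridged traffic, enable forwarding
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```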
2. Control Plane Initialization
We also need to install Helm to simplify the handling of the packages. Helm is the de-facto package manager for Kubernetes. We’ll use it for most components like ingress, cert-manager, Longhorn, etc.
sudo snap install helm --classic
Generate Init Configuration
Now let kubeadm generate an initial configuration. You will find an example in the folder setupCluster/InitConfiguration.yaml, or you can create it with this command:
kubeadm config print init-defaults > InitConfiguration.yaml
This config controls things like pod networking, cert parameters, and API bindings.
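The fields you will most likely touch look like this (a hedged sketch using the kubeadm v1beta3 API; the subnet and IP are example values matching this article's setup, and the repo file is authoritative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"   # must match the Flannel pod CIDR configured later
apiServer:
  certSANs:
  - "10.0.1.2"                 # control plane private IP from the table above
```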
Bootstrap the Cluster
Run the init command, then set up kubectl for your admin user to make command-line usage smooth.
kubeadm init --config InitConfiguration.yaml
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo 'alias k=kubectl' >>~/.bashrc
3. Joining Worker Nodes
On mykubu-2 and mykubu-3, create a JoinConfiguration. You will find an example in the folder setupCluster/JoinConfiguration.yaml, or generate one with:
kubeadm config print join-defaults > JoinConfiguration.yaml
kubeadm join --config JoinConfiguration.yaml
The join config ensures that tokens, endpoints, and bootstrap flows are consistent.
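A JoinConfiguration sketch (assuming the kubeadm v1beta3 API; the token and CA cert hash placeholders come from running `kubeadm token create --print-join-command` on the control plane):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.1.2:6443"   # control plane private IP
    token: "<TOKEN>"
    caCertHashes:
    - "sha256:<HASH>"
```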
4. CNI and Node Configuration
Label Worker Nodes
kubectl label node mykubu-2 node-role.kubernetes.io/worker=worker
kubectl label node mykubu-3 node-role.kubernetes.io/worker=worker
These labels help Kubernetes schedule workloads using node selectors.
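For example, a hypothetical Deployment (`myapp` is a made-up name) can pin its pods to the workers via that label:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: worker   # matches the label set above
      containers:
      - name: myapp
        image: nginx:1.27
```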
Allow Scheduling on Control Plane (for testing only)
kubectl taint nodes mykubu-1 node-role.kubernetes.io/control-plane-
Not recommended in production – but fine for learning or small setups.
Install Flannel (CNI)
Prepare your values-flannel.yaml. You will find an example in setupCluster/values-flannel.yaml. Set your pod subnet under 'networking.podSubnet' (line 8; you can keep the default).
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel flannel/flannel --values values-flannel.yaml -n kube-flannel
Flannel handles inter-pod networking. Ensure your pod subnet is correctly defined in the values file.
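In the official flannel chart, this typically boils down to a single value (a hedged sketch; the key name follows the upstream chart, and the value must equal the podSubnet from your kubeadm configuration):

```yaml
podCidr: "10.244.0.0/16"
```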
Fix CoreDNS tolerations
CoreDNS pods otherwise stay Pending because they do not tolerate the cloud controller's uninitialized taint. The patch below adds the missing toleration, and the taint commands clear the taint from the nodes.
kubectl -n kube-system patch deployment coredns --type json -p '[{"op":"add","path":"/spec/template/spec/tolerations/-","value":{"key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true","effect":"NoSchedule"}}]'
kubectl taint nodes mykubu-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule-
kubectl taint nodes mykubu-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule-
kubectl taint nodes mykubu-3 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule-
5. Hetzner Cloud Controller Manager (HCCM)
HCCM bridges Kubernetes with Hetzner’s native APIs: it manages load balancers, floating IPs, and metadata. You have to provide a Hetzner API token and your private network ID. You can create the API token in the Hetzner Console under Security and then API Tokens. The network ID is in the Networks section; you can extract it from the URL, e.g. https://console.hetzner.com/projects/11111111/networks/<HETZNER_NETWORK_ID>/resources.
kubectl -n kube-system create secret generic hcloud --from-literal=token=<TOKEN> --from-literal=network=<NETWORK_ID>
helm repo add hcloud https://charts.hetzner.cloud
helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system
6. NGINX Ingress Controller
You need an ingress values file. You will find it under setupCluster/values-ingress.yaml. The file has around 1300 lines; below are only the lines you have to change.
controller:
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/name: "k8s-lb"
      load-balancer.hetzner.cloud/location: "fsn1"
This setup exposes the ingress via Hetzner Load Balancer through host networking.
Install NGINX Helm Chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx-controller ingress-nginx/ingress-nginx -f values-ingress.yaml -n kube-system
After installation, go to the Hetzner Cloud UI and manually attach the LB to the private network and nodes. HCCM doesn’t do this reliably yet.
- Open the Hetzner Cloud UI and select your load balancer.
- Go to the network tab and add the load balancer to the private network.
- Go to the targets tab and add all hosts.
7. TLS with Cert-Manager & Let’s Encrypt
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.18.0 --set crds.enabled=true
Cert-Manager manages ACME challenges and TLS certificate lifecycles automatically.
Define a ClusterIssuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Save it as cluster-issuer.yaml and apply it:
kubectl apply -f cluster-issuer.yaml
8. Longhorn for Persistent Storage
Create a Secret to put at least basic auth in front of the Longhorn UI. Fill in a user and password of your choice:
USER=<ADMIN>; PASSWORD=<PASSWORD>; echo "${USER}:$(openssl passwd -stdin -apr1 <<< ${PASSWORD})" >> auth
kubectl create ns longhorn-system
kubectl -n longhorn-system create secret generic basic-auth --from-file=auth
rm auth
Install Helm Chart
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
--namespace longhorn-system \
--set persistence.defaultClassReplicaCount=1
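Once the chart is up, Longhorn registers a `longhorn` StorageClass. A minimal PVC to try it out (a sketch; the claim name is arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
```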
Also add an Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required '
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - longhorn.dogruel.me
    secretName: longhorn-tls
  rules:
  - host: longhorn.dogruel.me
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: longhorn-frontend
            port:
              number: 80
9. Install Harbor
Replace <ADMINPW> in the command below with a password of your choice. The user will be admin.
helm repo add harbor https://helm.goharbor.io
helm repo update
helm install harbor harbor/harbor \
--create-namespace \
--namespace harbor \
--set expose.ingress.hosts.core=artifactory.dogruel.me \
--set externalURL=https://artifactory.dogruel.me \
--set expose.ingress.className=nginx \
--set expose.tls.secretName=harbor-tls \
--set expose.ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-prod \
--set persistence.persistentVolumeClaim.registry.size=5Gi \
--set persistence.persistentVolumeClaim.database.size=5Gi \
--set persistence.persistentVolumeClaim.jobservice.size=2Gi \
--set harborAdminPassword="<ADMINPW>"
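To let the cluster pull images from the new registry, create a pull secret (e.g. `kubectl create secret docker-registry harbor-creds --docker-server=artifactory.dogruel.me --docker-username=admin --docker-password=<ADMINPW>`) and reference it in your pod specs. A hypothetical example (`harbor-creds` and the image name are made up; `library` is Harbor's default project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  imagePullSecrets:
  - name: harbor-creds                                 # the docker-registry secret
  containers:
  - name: demo
    image: artifactory.dogruel.me/library/demo:1.0     # <project>/<image>:<tag> in Harbor
```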
Summary
This manual setup gives you full control and deep understanding of how Kubernetes interacts with a cloud provider like Hetzner. You now have a cluster with:
- Load-balanced ingress via NGINX
- Persistent storage with Longhorn
- Private registry with Harbor
- Fully automated TLS with Let’s Encrypt
The next part is coming soon. If you have any questions, let me know at https://www.linkedin.com/in/erdidogruel/.