Adil Kairbekov


Set up a Kubernetes cluster using Talos Linux on Hetzner Cloud

Introduction

Welcome to this comprehensive guide on building a production-grade, highly available Kubernetes cluster using Talos Linux on Hetzner Cloud. By the end of this walkthrough, you'll have a private Kubernetes environment, seamlessly integrated with Cilium as the CNI provider.

To ensure isolation and security, your Kubernetes nodes will reside in a private network, with access to the internet routed through a NAT virtual machine, which you'll also set up as part of this process. This architecture allows you to expose only what's necessary while keeping your control plane and worker nodes safely tucked away from the public internet.

[Introduction diagram]

Prerequisites

Before starting, ensure you have:

- A Hetzner Cloud account and a read/write API token for your project
- A Talos Linux image snapshot uploaded to your Hetzner project (you'll need its snapshot ID)
- The hcloud CLI installed
- talosctl installed
- kubectl installed
- helm installed

Step 1 - Initial Setup

Set up your project configuration:

export PROJECT_NAME=example
export HCLOUD_TOKEN=<your_api_token_here>
export TALOS_IMAGE_ID=<your_snapshot_id_here>

Important: Replace the placeholder values with your project name, your actual Hetzner API token, and your Talos image snapshot ID.
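
If you need to look up the snapshot ID, the hcloud CLI can list snapshots using the token you just exported (the ID column is what you want):

hcloud image list --type snapshot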

Create a new context:

hcloud context create $PROJECT_NAME
# Answer "Y" when prompted to use the token from HCLOUD_TOKEN

Step 2 - Infrastructure Components

In this step, you'll create the necessary networking components, load balancer, and the NAT VM that will form the foundation of your Kubernetes cluster infrastructure.

Step 2.1 - Private Network Setup

Create a private network:

# Create the main network
hcloud network create --name $PROJECT_NAME --ip-range 10.0.0.0/16

# Add a subnet for servers
hcloud network add-subnet $PROJECT_NAME \
  --type server \
  --ip-range 10.0.0.0/24 \
  --network-zone eu-central

# Configure routing (we'll create the NAT VM gateway at 10.0.0.3 later)
hcloud network add-route $PROJECT_NAME \
  --destination 0.0.0.0/0 \
  --gateway 10.0.0.3
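
You can sanity-check the subnet and the default route before moving on:

hcloud network describe $PROJECT_NAME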

Step 2.2 - Create Load Balancer

Create a load balancer for accessing the Kubernetes and Talos APIs:

# Create the load balancer
hcloud load-balancer create --name apid --location hel1 --type lb11

# Add Kubernetes API service
hcloud load-balancer add-service apid \
  --listen-port 6443 --destination-port 6443 --protocol tcp

# Add Talos API service  
hcloud load-balancer add-service apid \
  --listen-port 50000 --destination-port 50000 --protocol tcp

# Connect to private network
hcloud load-balancer attach-to-network apid \
  --network $PROJECT_NAME \
  --ip 10.0.0.2

# Target control plane nodes
hcloud load-balancer add-target apid \
  --label-selector type=cp \
  --use-private-ip

Step 2.3 - Create NAT VM Gateway

Create cloud-init configuration file:

cat <<EOF > nat-vm-cloud-init.yaml
#cloud-config
write_files:
  - path: /etc/network/interfaces
    content: |
      auto eth0
      iface eth0 inet dhcp
          post-up echo 1 > /proc/sys/net/ipv4/ip_forward
          post-up iptables -t nat -A POSTROUTING -s '10.0.0.0/16' -o eth0 -j MASQUERADE
    append: true
runcmd:
  - reboot
EOF

Create a NAT VM gateway:

hcloud server create \
  --name nat-vm \
  --type cx22 \
  --image debian-12 \
  --user-data-from-file nat-vm-cloud-init.yaml \
  --network $PROJECT_NAME

What this does: Prepares the small Debian VM to act as a NAT gateway: it enables IP forwarding and adds a masquerade rule that rewrites private source addresses (10.0.0.0/16) to the VM's public IP for outbound traffic. This lets the private Talos nodes reach the internet through the NAT VM.
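
Once the VM has rebooted, you can verify both settings over SSH (assuming you have SSH access to the nat-vm; <nat-vm-public-ip> is a placeholder for the address shown by hcloud server list):

ssh root@<nat-vm-public-ip> 'cat /proc/sys/net/ipv4/ip_forward; iptables -t nat -S POSTROUTING'
# Expect "1" on the first line, followed by a rule containing "-s 10.0.0.0/16 -o eth0 -j MASQUERADE"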

Step 3 - Talos Linux Configuration

Here we generate the machine configurations for Talos nodes, customized for your environment.

Step 3.1 - Get Load Balancer IP

export LOAD_BALANCER_IP=$(hcloud load-balancer describe apid -o format='{{.PublicNet.IPv4.IP}}')

Step 3.2 - Configuration Patches

Create directory and configuration files:

mkdir patches
# Control plane specific config
cat <<EOF > patches/patch-cp.yaml
cluster:
  proxy:
    disabled: true
  externalCloudProvider:
    enabled: false
  allowSchedulingOnControlPlanes: false
EOF
# General machine config
cat <<EOF > patches/patch.yaml
machine:
  certSANs:
    - ${LOAD_BALANCER_IP}
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        routes:
          - network: 0.0.0.0/0
            gateway: 10.0.0.1
          - network: 10.0.0.1/32

cluster:
  network:
    cni:
      name: none
  externalCloudProvider:
    enabled: true
EOF

Step 3.3 - Generate Machine Configs from Patches

# Generate secrets
talosctl gen secrets
# Generate configurations
talosctl gen config $PROJECT_NAME https://${LOAD_BALANCER_IP}:6443 \
  --with-secrets secrets.yaml \
  --config-patch @patches/patch.yaml \
  --config-patch-control-plane @patches/patch-cp.yaml \
  --with-examples=false --with-docs=false \
  --force \
  --output .
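
The command writes controlplane.yaml, worker.yaml, and talosconfig into the current directory; a quick listing confirms they exist:

ls controlplane.yaml worker.yaml talosconfig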

Step 4 - Create Nodes

This step walks you through creating control plane and worker nodes, then bootstrapping your Kubernetes cluster.

Step 4.1 - Control Plane Nodes

Create 3 control plane nodes for high availability:

# Control plane node 1
hcloud server create --name cp1 \
  --image $TALOS_IMAGE_ID \
  --type cx22 \
  --location hel1 \
  --label 'type=cp' \
  --network $PROJECT_NAME \
  --without-ipv4 \
  --without-ipv6 \
  --user-data-from-file controlplane.yaml

# Control plane node 2
hcloud server create --name cp2 \
  --image $TALOS_IMAGE_ID \
  --type cx22 \
  --location hel1 \
  --label 'type=cp' \
  --network $PROJECT_NAME \
  --without-ipv4 \
  --without-ipv6 \
  --user-data-from-file controlplane.yaml

# Control plane node 3
hcloud server create --name cp3 \
  --image $TALOS_IMAGE_ID \
  --type cx22 \
  --location hel1 \
  --label 'type=cp' \
  --network $PROJECT_NAME \
  --without-ipv4 \
  --without-ipv6 \
  --user-data-from-file controlplane.yaml
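
Since the three commands differ only in the server name, an equivalent loop works just as well (a minimal sketch):

# Create cp1 through cp3 in one go
for i in 1 2 3; do
  hcloud server create --name cp$i \
    --image $TALOS_IMAGE_ID \
    --type cx22 \
    --location hel1 \
    --label 'type=cp' \
    --network $PROJECT_NAME \
    --without-ipv4 \
    --without-ipv6 \
    --user-data-from-file controlplane.yaml
done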

Step 4.2 - Worker Nodes

Create at least 1 worker node:

# Worker node 1
hcloud server create --name worker1 \
  --image $TALOS_IMAGE_ID \
  --type cx22 \
  --location hel1 \
  --label 'type=worker' \
  --network $PROJECT_NAME \
  --without-ipv4 \
  --without-ipv6 \
  --user-data-from-file worker.yaml

Wait Time: Nodes take about 5 minutes to boot up. Check with hcloud server list until all show "running" status.
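
For example, to keep an eye on the boot progress:

hcloud server list
# or refresh automatically every 10 seconds:
watch -n 10 hcloud server list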

Step 4.3 - Bootstrap the Cluster

# Configure talosctl endpoint
talosctl --talosconfig talosconfig config endpoint $LOAD_BALANCER_IP
# Bootstrap the cluster (only run this once!)
talosctl --talosconfig talosconfig bootstrap -n $LOAD_BALANCER_IP
# Generate kubeconfig
talosctl --talosconfig talosconfig kubeconfig . -n $LOAD_BALANCER_IP

Verify Kubernetes nodes:

kubectl --kubeconfig kubeconfig get nodes

You should see all nodes in the "NotReady" state; this is normal until Cilium CNI is installed.

Step 5 - Install Cilium CNI

Install Cilium using Helm for pod networking with kube-proxy replacement mode:

helm repo add cilium https://helm.cilium.io/ --force-update

helm install cilium cilium/cilium \
  --namespace kube-system \
  --kubeconfig kubeconfig \
  --set ipam.mode=kubernetes \
  --set kubeProxyReplacement=true \
  --set securityContext.capabilities.ciliumAgent="{CHOWN,KILL,NET_ADMIN,NET_RAW,IPC_LOCK,SYS_ADMIN,SYS_RESOURCE,DAC_OVERRIDE,FOWNER,SETGID,SETUID}" \
  --set securityContext.capabilities.cleanCiliumState="{NET_ADMIN,SYS_ADMIN,SYS_RESOURCE}" \
  --set cgroup.autoMount.enabled=false \
  --set cgroup.hostRoot=/sys/fs/cgroup \
  --set k8sServiceHost=localhost \
  --set k8sServicePort=7445 \
  --set envoy.enabled=false
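
Before checking the nodes, you can confirm the Cilium agent pods are up; the agent DaemonSet carries the k8s-app=cilium label:

kubectl --kubeconfig kubeconfig -n kube-system get pods -l k8s-app=cilium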

Wait for nodes to become ready:

kubectl --kubeconfig kubeconfig get nodes

Step 6 - Essential Components Installation (optional)

Create a secret containing the Hetzner API token and network ID:

kubectl -n kube-system create secret generic hcloud \
  --from-literal=token="$HCLOUD_TOKEN" \
  --from-literal=network="$(hcloud network describe "$PROJECT_NAME" -o format='{{.ID}}')" \
  --kubeconfig=kubeconfig

Step 6.1 - Hetzner Cloud Controller Manager

The Hetzner CCM integrates your cluster with Hetzner Cloud, enabling automatic provisioning of load balancers for Services of type LoadBalancer. This is required for exposing the Ingress Nginx Controller, which we'll install in Step 6.2.

Create a values directory and a values file that pulls the Hetzner API token and network ID from the hcloud secret:

mkdir values
cat <<EOF > values/hccm.yaml
env:
  - name: HCLOUD_NETWORK
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: network
  - name: HCLOUD_NETWORK_ROUTES_ENABLED
    value: "false"
  - name: HCLOUD_TOKEN
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token
EOF

Install Hetzner CCM:

helm repo add hcloud https://charts.hetzner.cloud --force-update

helm install hccm hcloud/hcloud-cloud-controller-manager \
  -n kube-system \
  --kubeconfig kubeconfig \
  --values values/hccm.yaml
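
To verify the CCM is running and has initialized the nodes (the release name hccm prefixes the pod names, and the CCM sets each node's providerID once it recognizes the server):

kubectl --kubeconfig kubeconfig -n kube-system get pods | grep hccm
kubectl --kubeconfig kubeconfig get nodes -o custom-columns=NAME:.metadata.name,PROVIDER:.spec.providerID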

Step 6.2 - Ingress Nginx Controller

Install the Ingress Nginx Controller to handle external web traffic to your services. It will be exposed via a LoadBalancer Service, with the actual load balancer automatically created by the Hetzner CCM.

Create value file:

cat <<EOF > values/ingress-nginx.yaml
controller:
  service:
    annotations:
      load-balancer.hetzner.cloud/location: "hel1"
      load-balancer.hetzner.cloud/name: "ingress-nginx"
      load-balancer.hetzner.cloud/type: "lb11"
      load-balancer.hetzner.cloud/use-private-ip: "true"
      # # Redirect from HTTP to HTTPS.
      # load-balancer.hetzner.cloud/http-redirect-http: "true"
EOF

Install Ingress Nginx Controller:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx --force-update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --kubeconfig kubeconfig \
  --values values/ingress-nginx.yaml
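
The controller's Service stays in <pending> until the Hetzner CCM has created the "ingress-nginx" load balancer; watch for the external IP to appear:

kubectl --kubeconfig kubeconfig -n ingress-nginx get svc ingress-nginx-controller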

Step 6.3 - Hetzner CSI Driver

Install the Hetzner CSI driver to enable dynamic provisioning and management of persistent volumes backed by Hetzner Cloud:

helm install hcloud-csi hcloud/hcloud-csi \
  -n kube-system \
  --kubeconfig kubeconfig
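
The chart registers the hcloud-volumes storage class as the cluster default, which the PersistentVolumeClaim in Step 6.4 relies on; confirm it's present:

kubectl --kubeconfig kubeconfig get storageclass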

Step 6.4 - Create an example application

Create manifest file:

nano test-app.yaml

Paste manifest:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine-slim
          command:
            - /bin/sh
            - -c
            - |
              echo '<html><head><style>body { font-size: 40px; font-family: Arial, sans-serif; }</style></head><body><h1>Hello from Hetzner Cloud!</h1></body></html>' > /usr/share/nginx/html/index.html && nginx -g 'daemon off;'
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-storage
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-storage
          persistentVolumeClaim:
            claimName: nginx-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: yourdomain.com ## Replace with your own domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80

Deploy application:

kubectl apply -f test-app.yaml --kubeconfig kubeconfig
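
To confirm everything came up, list the resources and, once a DNS A record for your domain points at the ingress load balancer's public IP, fetch the page (yourdomain.com is the placeholder from the manifest):

kubectl --kubeconfig kubeconfig get pvc,deployment,service,ingress
curl http://yourdomain.com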

Conclusion

Your Talos Linux-based Kubernetes infrastructure is now fully operational and ready for production workloads.

License: MIT
