DEV Community

jesus manrique

Posted on • Originally published at guayoyo.tech

Tutorial: Kubernetes with k3s — 3-Node Cluster, Ingress, and Cloudflare in 30 Minutes

Setting up a Kubernetes cluster sounds like a weeks-long project requiring certifications and a board-approved budget. And for years it was. But in 2026, with k3s — Rancher's lightweight Kubernetes — you can have a functional 3-node cluster in the time it takes to eat lunch.

This tutorial takes you from zero to a real cluster running a web application, exposed with Ingress, SSL with a 15-year Cloudflare certificate, and your own domain. No managed services. No excuses.

What you'll build

By the end of this tutorial you'll have:

                   ┌──────────────┐
                   │  Cloudflare  │
                   │  (DNS + SSL) │
                   └──────┬───────┘
                          │
                   ┌──────▼───────┐
                   │   Ingress    │
                   │  (Traefik)   │
                   └──────┬───────┘
                          │
              ┌───────────┼───────────┐
              │           │           │
        ┌─────▼─────┐ ┌──▼────┐ ┌───▼─────┐
        │  Master   │ │Worker1│ │Worker2  │
        │  k3s      │ │k3s    │ │k3s      │
        └───────────┘ └───────┘ └─────────┘
              │
    ┌─────────┼─────────┐
    │         │         │
┌───▼──┐ ┌───▼──┐ ┌───▼──┐
│ Pod1 │ │ Pod2 │ │ Pod3 │  ← Your scaled web app
└──────┘ └──────┘ └──────┘

Three servers, a real cluster, and an app accessible from the internet with your own domain.

What you need before starting

1. Three Linux servers

Create three VPS. The most accessible options in May 2026:

Provider       Minimum plan                   Price/month (approx)   What to get
Hetzner        CX22 (2 vCPU, 4 GB)            ~$4                    3x CX22 with private network
DigitalOcean   Basic Droplet (1 vCPU, 1 GB)   ~$6                    3x Droplets with VPC
OVH            VPS Starter (1 vCPU, 2 GB)     ~$4                    3x VPS
Linode         Nanode (1 vCPU, 1 GB)          ~$5                    3x Nanodes

Important for the tutorial: when creating the servers, enable private networking if your provider offers it (Hetzner calls it "Network", DigitalOcean "VPC", OVH "vRack"). This gives you internal IPs for node-to-node communication without exposing internal traffic to the internet.

Each server must have:

  • 1 GB RAM minimum (2 GB recommended for the master)
  • 1 CPU
  • 10 GB disk
  • Ubuntu 24.04 LTS (operating system)
  • Private network enabled between all three (optional but highly recommended)

⚠️ Essential: public IP on the master. Without a public IP on the master node, Cloudflare can't route your domain's traffic to your server. Workers can have only private IPs if they're on the same network as the master, but the master must be accessible from the internet on ports 80 and 443. This is non-negotiable for serving the domain over SSL.

All the providers mentioned above (Hetzner, DigitalOcean, OVH, Linode) include a public IPv4 with each VPS at no extra cost.

2. SSH access to all three servers

When you create each VPS, the provider gives you:

  • A public IP (e.g., 167.99.123.45)
  • A user (usually root on Hetzner/OVH, or whatever you configured on DigitalOcean)
  • A password or SSH key

Verify you can connect to each one from your local terminal:

ssh root@SERVER_IP

If using SSH keys (recommended):

# If you don't have an SSH key on your local machine yet:
ssh-keygen -t ed25519 -C "your-email@example.com"

# Copy it to each server:
ssh-copy-id root@SERVER_IP

If the provider gave you a password instead of a key, change it on first login and configure key-based access.

3. A domain on Cloudflare

You need a domain configured on Cloudflare. If you already have one, go to the Cloudflare dashboard and make sure:

  • Your domain's nameservers point to Cloudflare (e.g., eliza.ns.cloudflare.com)
  • The plan is Free (enough for this tutorial, includes proxy and SSL)

If you don't have a domain, buy one directly from Cloudflare (.com ~$10/year, .tech ~$8/year, .dev ~$12/year) or from Namecheap/Porkbun and then add it to Cloudflare.

For this tutorial I'll use k3s.your-domain.com as the example subdomain. You'll use your real domain.

4. kubectl installed on your local machine

kubectl is the command-line tool for controlling Kubernetes. You'll use it to view nodes, deploy apps, and monitor.

# On your local machine (Linux/Mac):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# On macOS with Homebrew:
brew install kubectl

# On Windows (PowerShell as administrator):
# winget install Kubernetes.kubectl

# Verify installation:
kubectl version --client

(If you prefer not to install kubectl, you can run all commands with sudo kubectl directly on the master via SSH.)

5. Final verification before starting

Run this in your local terminal and make sure you have all the answers before proceeding:

# 1. Can I SSH into all three servers?
ssh root@MASTER_IP "echo 'OK: master'"
ssh root@WORKER1_IP "echo 'OK: worker1'"
ssh root@WORKER2_IP "echo 'OK: worker2'"

# 2. Does the master have a public IP? (should show an IP, not 10.x or 192.168.x)
ssh root@MASTER_IP "curl -s ifconfig.me"
# This should return the same IP you'll use in Cloudflare

# 3. Do I have kubectl?
kubectl version --client 2>/dev/null && echo "OK: kubectl" || echo "MISSING: kubectl"

# 4. Is my domain on Cloudflare?
# Open https://dash.cloudflare.com and confirm you see your domain in the list

All good? Let's build the cluster.

Example names and IPs for this tutorial

In the commands that follow I'll use these names. Replace them with yours:

Master:  10.0.1.10  (k3s-master)   ← Master private IP
Worker1: 10.0.1.20  (k3s-worker-1) ← Worker 1 private IP
Worker2: 10.0.1.30  (k3s-worker-2) ← Worker 2 private IP

Master public IP: 167.99.123.45  ← The one Cloudflare will use
Domain: k3s.your-domain.com
Email: your-email@example.com  ← Only needed if you later switch to Let's Encrypt

Write yours down in a notepad. You'll need them at every step.

Step 1: Prepare the three servers

SSH into each of the three servers and run the same commands on all. Let's start with the master, but the base preparation is identical.

# Update the system
sudo apt update && sudo apt upgrade -y

# Install minimal dependencies
sudo apt install -y curl wget ufw

# Set hostname to identify each node
# On the master:
sudo hostnamectl set-hostname k3s-master

# On worker1:
sudo hostnamectl set-hostname k3s-worker-1

# On worker2:
sudo hostnamectl set-hostname k3s-worker-2

# Open necessary firewall ports
# Master:
sudo ufw allow 6443/tcp    # API server
sudo ufw allow 80/tcp      # HTTP for Ingress
sudo ufw allow 443/tcp     # HTTPS for Ingress
sudo ufw allow 22/tcp      # SSH
sudo ufw --force enable

# Workers (don't need 6443):
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 22/tcp
sudo ufw --force enable

Why does the master need port 6443? That's where Kubernetes' API server listens. The workers communicate with it, and so do you when using kubectl. Ports 80 and 443 are for web traffic reaching the Ingress Controller.
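If you want to confirm the firewall rules took effect, you can probe the API port from another machine. A quick sketch using bash's built-in /dev/tcp (MASTER_IP is a placeholder for your master's public IP):

```shell
# TCP-level check: does the master accept connections on the API port?
# (replace MASTER_IP with your master's public IP)
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/MASTER_IP/6443' \
  && echo "6443 reachable" \
  || echo "6443 blocked"
```

The same one-liner works for ports 80 and 443 once the Ingress is serving traffic.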

Step 2: Install k3s on the master node

k3s installs with a single command. It's not magic — it's an official script that packages Kubernetes, containerd, Traefik (Ingress Controller), and CoreDNS into a single binary under 100 MB.

# On the master
curl -sfL https://get.k3s.io | sh -s - \
  --write-kubeconfig-mode 644 \
  --node-name k3s-master

What each flag does:

  • --write-kubeconfig-mode 644: makes kubeconfig readable without sudo (convenience for development; in production use 600).
  • --node-name: explicit node name.

We keep ServiceLB (Klipper), k3s' built-in LoadBalancer implementation, enabled on purpose: Traefik is exposed through a Service of type LoadBalancer, and ServiceLB is what binds ports 80 and 443 on the nodes so traffic actually reaches the Ingress. Only pass --disable servicelb if you plan to install an alternative such as MetalLB.

In 30 seconds you have a functional master node. Let's verify:

# View cluster nodes
sudo kubectl get nodes

# You should see:
# NAME          STATUS   ROLES                  AGE   VERSION
# k3s-master    Ready    control-plane,master   30s   v1.32.x+k3s1

That Ready means the control plane is up, CoreDNS is working, and Traefik is running. All on one node consuming ~400 MB of RAM.
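To run kubectl from your laptop instead of SSHing into the master each time, copy the kubeconfig that k3s generated and point it at the master's public IP. A sketch, assuming root SSH access and MASTER_IP as a placeholder for your master's public IP:

```shell
# Fetch the kubeconfig k3s wrote on the master
mkdir -p ~/.kube
scp root@MASTER_IP:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s-config

# The file points at 127.0.0.1; rewrite it to the master's public IP
# (on macOS use: sed -i '' 's/127.0.0.1/MASTER_IP/' ~/.kube/k3s-config)
sed -i 's/127.0.0.1/MASTER_IP/' ~/.kube/k3s-config

# Tell kubectl to use it
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes
```

This works because step 1 already opened port 6443 on the master's firewall.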

Step 3: Get the token and master IP

For workers to join the cluster they need two things: the join token and the master IP.

# On the master, save the token
sudo cat /var/lib/rancher/k3s/server/node-token
# Example output: K10d8f7e6a5b4c3d2e1f0a9b8c7d6e5f4a3b2c1d0e9f8g7h6i5j4k3l2m1n0::server:abc123def456

# Save the master IP (the one workers will use to connect)
# If you have a private network, use that IP. If not, the public one.
hostname -I | awk '{print $1}'

Important: if your servers are on different networks and you use the public IP for node-to-node communication, make sure port 6443 is open on the master's firewall to the internet. In production environments, a VPN or private network between nodes is recommended.

Step 4: Join workers to the cluster

On each worker run the following command, replacing MASTER_IP with your master's IP and TOKEN with the token you got:

# On worker-1 and worker-2
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 \
  K3S_TOKEN="TOKEN" \
  sh -s - \
  --node-name k3s-worker-1   # or k3s-worker-2

Concrete example:

curl -sfL https://get.k3s.io | K3S_URL=https://10.0.1.10:6443 \
  K3S_TOKEN="K10d8f7e6a5b4c3d2e1f0a9b8c7d6e5f4a3b2c1d0e9f8g7h6i5j4k3l2m1n0::server:abc123def456" \
  sh -s - \
  --node-name k3s-worker-1

The worker downloads k3s, joins the cluster, and is ready in seconds. Verify from the master:

sudo kubectl get nodes

# You should see all three nodes:
# NAME            STATUS   ROLES                  AGE   VERSION
# k3s-master      Ready    control-plane,master   5m    v1.32.x+k3s1
# k3s-worker-1    Ready    <none>                 30s   v1.32.x+k3s1
# k3s-worker-2    Ready    <none>                 20s   v1.32.x+k3s1

Three nodes, functional cluster. That's it. No separate etcd, no manual cert-manager, no hand-compiled Kubernetes binaries. Welcome to 2026.

Step 5: Create a simple web application

Now that you have the cluster, let's deploy something to it. We'll create a Node.js web app that's minimal but useful: a dashboard showing information about the server it runs on — pod name, node, memory usage — so you can see load balancing in action.

We do all of this from the master or your local machine with kubectl.

# Create app directory
mkdir -p ~/k3s-demo && cd ~/k3s-demo

# Create package.json
cat > package.json << 'EOF'
{
  "name": "k3s-demo",
  "version": "1.0.0",
  "scripts": { "start": "node server.js" },
  "dependencies": { "express": "^4.21.0" }
}
EOF

# Create server.js
cat > server.js << 'EOF'
const express = require('express');
const os = require('os');

const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  const mem = process.memoryUsage();
  res.json({
    app: 'k3s-demo',
    version: '1.0.0',
    pod: os.hostname(),
    node: process.env.NODE_NAME || 'unknown',
    memory: {
      heapUsed: `${Math.round(mem.heapUsed / 1024 / 1024)} MB`
    },
    uptime: `${Math.floor(process.uptime())}s`,
    time: new Date().toISOString()
  });
});

app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

app.listen(PORT, () => {
  console.log(`k3s-demo running on port ${PORT}`);
});
EOF

This app exposes a / endpoint that returns environment info and a /health endpoint we'll use as a Kubernetes health check. Simple but enough to see the cluster in action.
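If Node.js and npm happen to be available where you created the files, you can smoke-test the app before containerizing it (optional; nothing here is required for the cluster):

```shell
# Syntax check without starting the server
node --check server.js && echo "syntax OK"

# Run it briefly and hit both endpoints
npm install
node server.js &
APP_PID=$!
sleep 1
curl -s http://localhost:3000/health
curl -s http://localhost:3000/
kill $APP_PID
```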

Step 6: Create the Dockerfile and build the image

We need to package the app in a container. Since k3s uses containerd, we can build the image on the master and load it locally, or use a registry. For this tutorial we'll use k3s' built-in local image store.

cat > Dockerfile << 'EOF'
FROM node:22-alpine
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY server.js ./
EXPOSE 3000
USER node
CMD ["node", "server.js"]
EOF

# Build the image (requires Docker on the machine where you run this)
docker build -t k3s-demo:1.0.0 .

# Export it to a tarball...
docker save k3s-demo:1.0.0 -o k3s-demo.tar

# ...and import it into k3s' containerd (k3s bundles the ctr tool):
sudo k3s ctr images import k3s-demo.tar

# Important: repeat the import on both workers as well (copy the tar over
# with scp first), otherwise pods scheduled on them won't find the image.

Note on registries: in a real environment you'd push to a registry (Docker Hub, GHCR, Harbor — k3s can be pointed at private ones via /etc/rancher/k3s/registries.yaml) so every node can pull the image on its own. For this tutorial, local import is enough. If you want to use Docker Hub:

docker tag k3s-demo:1.0.0 your-user/k3s-demo:1.0.0
docker push your-user/k3s-demo:1.0.0
# Then in the manifests use image: your-user/k3s-demo:1.0.0

Step 7: Create Kubernetes manifests

Create the YAML files that define the deployment. Let's start with a Deployment that maintains 3 replicas of your app:

cat > deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k3s-demo
  labels:
    app: k3s-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: k3s-demo
  template:
    metadata:
      labels:
        app: k3s-demo
    spec:
      containers:
      - name: k3s-demo
        image: k3s-demo:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "200m"
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 20
EOF

Three details worth noting:

  • imagePullPolicy: IfNotPresent: uses the local image if it exists, doesn't try to download it. Perfect for our imported image.
  • NODE_NAME is injected via fieldRef so the app knows which physical node it runs on. You'll see it in the endpoint response.
  • resources with requests and limits: in production this is mandatory. It prevents a pod from hogging resources and lets the scheduler make informed decisions.
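Once the pods are deployed (step 9), you can see those requests accumulate against each node's budget — the number the scheduler checks before placing a pod. A quick way to inspect it, from the master:

```shell
# Show how much CPU/memory is already reserved on a node by pod requests
sudo kubectl describe node k3s-worker-1 | grep -A 8 'Allocated resources'
```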

Then the Service:

cat > service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: k3s-demo
  labels:
    app: k3s-demo
spec:
  type: ClusterIP
  selector:
    app: k3s-demo
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
EOF

And the Ingress that exposes the app to the world with SSL:

cat > ingress.yaml << 'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k3s-demo
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  tls:
  - hosts:
    - k3s.your-domain.com
    secretName: cloudflare-origin-cert
  rules:
  - host: k3s.your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k3s-demo
            port:
              number: 80
EOF

Notice secretName: cloudflare-origin-cert. This is how Kubernetes handles SSL: you create a TLS secret with your certificate and key, and reference it in the Ingress. No Let's Encrypt, no automatic renewals. The Cloudflare certificate lasts 15 years and is free. We'll create it in the next step.

Step 8: Configure Cloudflare — DNS and SSL certificate

Cloudflare doesn't just give us DNS. It also offers free 15-year SSL certificates for traffic between Cloudflare and your origin server. This eliminates the need for Let's Encrypt and its 90-day renewals.

8.1 Create the DNS record

Open your Cloudflare dashboard, select your domain, and go to DNS → Records. Click Add record:

  1. Type: A
  2. Name: k3s (this creates k3s.your-domain.com)
  3. IPv4 address: your master's public IP (e.g., 167.99.123.45)
  4. Proxy status: Proxied (orange icon, enabled)
  5. TTL: Auto
  6. Click Save

This is how it looks in the panel:

┌──────┬───────┬────────────────┬───────┬──────┐
│ Type │ Name  │ Content        │ Proxy │ TTL  │
│ A    │ k3s   │ 167.99.123.45  │ 🟠    │ Auto │
└──────┴───────┴────────────────┴───────┴──────┘

Cloudflare's orange proxy is key: it hides your server's real IP, gives you basic DDoS protection, and accelerates delivery with CDN.

8.2 Configure SSL/TLS in Cloudflare

Go to SSL/TLS → Overview and select Full (strict) mode. This forces traffic between Cloudflare and your origin to be encrypted with a valid certificate.

8.3 Generate the origin SSL certificate (15 years free)

Go to SSL/TLS → Origin Server and click Create Certificate:

  1. Leave "Let Cloudflare generate a private key and a CSR" checked
  2. Under Hostnames add: *.your-domain.com and your-domain.com
  3. Certificate Validity: 15 years
  4. Click Create

Cloudflare will show you two text blocks:

  • Origin Certificate (PEM) — the certificate
  • Private Key (PEM) — the private key

Save both NOW. The private key is only shown once. If you close the window, you'll have to generate another certificate.

Copy each block to a file on the master:

# On the master, create the certificate files
# Paste the Origin Certificate content between the delimiters:
cat > /tmp/cloudflare-cert.pem << 'CERTEOF'
-----BEGIN CERTIFICATE-----
... paste the full certificate here ...
-----END CERTIFICATE-----
CERTEOF

# Paste the Private Key content:
cat > /tmp/cloudflare-key.pem << 'KEYEOF'
-----BEGIN PRIVATE KEY-----
... paste the full private key here ...
-----END PRIVATE KEY-----
KEYEOF

8.4 Create the TLS secret in Kubernetes

With the files on the master, create the secret the Ingress will use:

sudo kubectl create secret tls cloudflare-origin-cert \
  --cert=/tmp/cloudflare-cert.pem \
  --key=/tmp/cloudflare-key.pem

# Verify it was created correctly
sudo kubectl describe secret cloudflare-origin-cert

The name cloudflare-origin-cert must match the secretName in the ingress.yaml you created in step 7.
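Under the hood, kubectl create secret tls just produces a Secret of type kubernetes.io/tls with the two files base64-encoded. If you prefer declarative manifests, the equivalent looks roughly like this (values abbreviated):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-origin-cert
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...   # base64 of cloudflare-cert.pem
  tls.key: LS0tLS1CRUdJTi...   # base64 of cloudflare-key.pem
```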

8.5 Verify DNS propagation

DNS can take from 30 seconds to 5 minutes to propagate:

# From your local terminal:
nslookup k3s.your-domain.com

# You should see Cloudflare's IP (not your server's):
# k3s.your-domain.com
# Address: 104.21.xx.xx   ← Cloudflare IP
# Address: 172.67.xx.xx   ← Cloudflare IP

Done. You have a domain, DNS, and a 15-year SSL certificate without depending on Let's Encrypt.

Step 9: Deploy everything and verify

With the TLS secret created, Cloudflare pointing, and the origin certificate ready, apply the manifests in order:

# 1. First the Deployment (creates the pods)
sudo kubectl apply -f deployment.yaml

# 2. Then the Service (exposes pods internally)
sudo kubectl apply -f service.yaml

# 3. Finally the Ingress (exposes the app with SSL using the Cloudflare certificate)
sudo kubectl apply -f ingress.yaml

# View running pods (wait a few seconds)
sudo kubectl get pods -o wide

# View the ingress
sudo kubectl get ingress

# View services
sudo kubectl get svc

When all pods are Running, your app is live and served with SSL. Since the certificate is already loaded in the TLS secret, there's no waiting for any external verification — it works immediately.

Test your application:

curl https://k3s.your-domain.com

You should see something like:

{
  "app": "k3s-demo",
  "version": "1.0.0",
  "pod": "k3s-demo-7f8c9b6d5-abc12",
  "node": "k3s-worker-1",
  "memory": { "heapUsed": "18 MB" },
  "uptime": "45s",
  "time": "2026-05-14T01:30:00.000Z"
}

Refresh several times. You'll see the pod and node change — the Ingress is balancing the load across the three pods, which are distributed between the workers. Kubernetes orchestrating without you writing a single line of balancing logic.
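To make the balancing visible without refreshing by hand, a small loop works well (replace the hostname with your real domain):

```shell
# Send 10 requests and count how many each pod answered
for i in $(seq 1 10); do
  curl -s https://k3s.your-domain.com | grep -o '"pod":"[^"]*"'
done | sort | uniq -c
```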

Step 10: Scale on demand

One of the advantages of having a cluster is scaling with one command:

# From 3 replicas to 6
kubectl scale deployment k3s-demo --replicas=6

# Watch new pods being created
kubectl get pods -w    # -w = watch, live updates

In seconds you have 6 pods distributed across the workers. If one of the workers goes down, Kubernetes automatically relocates its pods to healthy nodes.

# Simulate a node failure by draining it (deleting pods alone doesn't work —
# the Deployment just recreates them):
kubectl drain k3s-worker-1 --ignore-daemonsets --delete-emptydir-data
kubectl get pods -o wide   # All migrated to master and worker-2

# When you're done experimenting, bring the node back into scheduling:
kubectl uncordon k3s-worker-1

This is real self-healing capacity. No scripts, no manual monitoring, no waking up at 3 AM.

What this cluster DOESN'T have (yet)

This tutorial gave you a functional cluster in ~30 minutes. But a production cluster needs more:

  • Persistent storage: PVCs with Longhorn or Rook-Ceph for databases and files
  • Monitoring: Prometheus + Grafana for metrics, alerts, and dashboards
  • Centralized logging: Loki or ELK for logs from all pods in one place
  • CI/CD: ArgoCD or Flux for automated GitOps deployments
  • Security policies: NetworkPolicies, PodSecurityPolicies, OPA/Gatekeeper
  • Backups: Velero for automatic cluster and volume backups
  • Real high availability: Multiple masters with external etcd

This cluster is a Formula 1 car without brakes, telemetry, or a helmet. It runs, but it's not ready for the track.


Did you enjoy what you built and want to take it to the next level? At Guayoyo Tech we design, deploy, and operate cloud native architectures on Kubernetes for companies that need more than a tutorial:

  • Multi-cloud and on-premise clusters with real high availability, not "faith-based"
  • GitOps with ArgoCD so every change is versioned, reviewed, and deployed automatically
  • Monitoring and observability that alerts you before the customer notices
  • Migration strategies from traditional VPS to orchestrated containers without downtime
  • Team training in Kubernetes, Docker, CI/CD, and cloud native from scratch

What you just set up in 30 minutes is the entry point. We build the whole house.

Let's talk →
