Otto

Kubernetes for Solo Developers in 2026: Do You Actually Need It?


Short answer: probably not. But if you do, here's how to get started without losing your mind.

Kubernetes (K8s) has a reputation for being complex, enterprise-only, and overkill for small projects. That reputation is mostly deserved. But in 2026, the tooling has improved dramatically, and there are legitimate use cases for solo developers.

Let me give you the honest guide — including when NOT to use Kubernetes.

Should You Even Use Kubernetes?

Before anything else: let's be honest.

You probably DON'T need Kubernetes if:

  • You're running 1-3 services
  • You're on a single VPS or small server
  • You don't need zero-downtime deploys
  • You have < 10k daily users
  • You don't want to spend more than 2 hours/month on infrastructure

Docker Compose handles 90% of solo dev use cases. If you're not running 5+ services with complex dependencies, just use Compose.
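For scale, a Compose file covering that whole setup fits in a few lines. A minimal sketch (service names, ports, and credentials here are hypothetical):

```yaml
# docker-compose.yml — a typical 2-service solo setup
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://user:password@db:5432/myapp  # placeholder
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: password  # placeholder — use an env file in practice
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

One `docker compose up -d` and you're running. Everything below is for when that stops being enough.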

You MIGHT need Kubernetes if:

  • You need automatic scaling (traffic spikes)
  • You have microservices that need independent scaling
  • You want self-healing (auto-restart failed services)
  • You need rolling deployments with zero downtime
  • You're managing multiple environments (staging, prod, preview)

The Solo Dev K8s Stack in 2026

Forget the full enterprise setup. Here's the minimal stack that actually makes sense:

| Component | Tool | Why |
| --- | --- | --- |
| Local cluster | k3d or Minikube | Lightweight, fast startup |
| Production | k3s on a VPS | Runs in as little as 512MB RAM |
| CI/CD | GitHub Actions | Free, integrates everywhere |
| GitOps | ArgoCD | Declarative deploys |
| Ingress | Traefik or Nginx | SSL termination |
| Secrets | Sealed Secrets | Safe to commit |

Your First Kubernetes Cluster — Local Setup

# Install k3d (k3s in Docker — fastest local option)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Create a cluster
k3d cluster create myapp --port "8080:80@loadbalancer"

# Verify
kubectl get nodes
# NAME                   STATUS   ROLES                  AGE   VERSION
# k3d-myapp-server-0    Ready    control-plane,master   30s   v1.28.4+k3s1

You have a real Kubernetes cluster running in Docker. 30 seconds.

Core Concepts — The Minimum You Need to Know

Pods — The Smallest Unit

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: nginx:alpine
    ports:
    - containerPort: 80

kubectl apply -f pod.yaml
kubectl get pods

In practice: you almost never create Pods directly. Use Deployments.

Deployments — What You Actually Use

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2  # Run 2 copies
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-docker-image:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url

kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app

If the new version fails its health checks, the rollout stalls and the old pods keep serving traffic; you can then revert with `kubectl rollout undo`. Zero-downtime deploys are built in.
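That rollout behavior only works if the pods actually report their health — and the deployment above defines no probes. Here's a sketch of what you could add under the container spec (`/healthz` is an assumed endpoint; your app has to actually serve it):

```yaml
# Add under the container in deployment.yaml.
# /healthz is a hypothetical endpoint your app must implement.
readinessProbe:        # gates traffic during rollouts
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:         # restarts the container if it wedges
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 20
```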

Services — Networking

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app  # Routes to our deployment
  ports:
  - port: 80
    targetPort: 3000
  type: ClusterIP  # Internal only

Ingress — Exposing to the World

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-tls

This routes myapp.com → your service, with automatic SSL via Let's Encrypt (the annotation assumes cert-manager is installed in the cluster).
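The `cert-manager.io/cluster-issuer` annotation also assumes a ClusterIssuer named `letsencrypt-prod` exists. A minimal sketch of that issuer (the email is a placeholder, and the `traefik` ingress class is an assumption matching the stack table above):

```yaml
# cluster-issuer.yaml — requires cert-manager to be installed first
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # placeholder — use your real address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: traefik  # assumption: Traefik, per the stack table
```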

Real-World: A Full-Stack App Deployment

Here's how a typical solo dev stack looks:

myapp/
├── k8s/
│   ├── namespace.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── secret.yaml
│   └── hpa.yaml        # Horizontal Pod Autoscaler
├── .github/
│   └── workflows/
│       └── deploy.yml
└── docker-compose.yml  # Still used for local dev!
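The deployment's `secretKeyRef` expects a Secret named `app-secrets` to exist. A minimal sketch of what `secret.yaml` holds (placeholder values — for anything committed to git, convert it to a SealedSecret first):

```yaml
# secret.yaml — illustration only; never commit a plain Secret
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  database-url: "postgres://user:password@db:5432/myapp"  # placeholder
```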

GitHub Actions CI/CD

# .github/workflows/deploy.yml
name: Deploy to Kubernetes

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write  # needed to push to GHCR
    steps:
    - uses: actions/checkout@v4

    - name: Log in to GHCR
      run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin

    - name: Build and push Docker image
      run: |
        docker build -t ghcr.io/${{ github.repository }}:${{ github.sha }} .
        docker push ghcr.io/${{ github.repository }}:${{ github.sha }}

    - name: Deploy to K8s
      run: |
        # Assumes a kubeconfig for your cluster is available here,
        # e.g. written from a repository secret in an earlier step
        kubectl set image deployment/my-app app=ghcr.io/${{ github.repository }}:${{ github.sha }}
        kubectl rollout status deployment/my-app

Push to main → Docker image built → Deployed to Kubernetes. Automatically.

Auto-Scaling — The Real Win

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Traffic spike? Kubernetes scales from 1 to 10 pods automatically (the HPA relies on metrics-server, which k3s ships with by default). Traffic drops? It scales back down. You pay only for what you use (if on a cloud provider).

Production Hosting Options for Solo Devs

| Option | Cost | Complexity |
| --- | --- | --- |
| k3s on a $6/mo VPS (Hetzner) | ~$6/mo | Medium |
| DigitalOcean Kubernetes | ~$24/mo (3 nodes) | Low |
| Civo K3s | ~$20/mo | Low |
| AWS EKS | $73+/mo | High |
| GKE Autopilot | Pay per pod | Low |

For a side project: k3s on a Hetzner VPS. €4/month server, install k3s in 5 minutes, get most of the K8s benefits.
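The install itself is the one-liner from the official k3s quick start (run on the VPS; the kubeconfig path is where k3s writes it by default):

```shell
# On the VPS: install and start k3s as a systemd service
curl -sfL https://get.k3s.io | sh -

# k3s writes the kubeconfig here — copy it to your laptop and
# point its `server:` field at the VPS's public IP
sudo cat /etc/rancher/k3s/k3s.yaml
```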

Useful Commands Cheat Sheet

# See everything running
kubectl get all -n my-namespace

# Logs from a pod
kubectl logs -f deployment/my-app

# Shell into a pod
kubectl exec -it $(kubectl get pod -l app=my-app -o name | head -1) -- sh

# Scale manually
kubectl scale deployment my-app --replicas=3

# Rollback if something goes wrong
kubectl rollout undo deployment/my-app

# See resource usage
kubectl top pods

The Honest Verdict

Kubernetes will make your infrastructure more reliable, more scalable, and easier to reason about — eventually. The learning curve is real.

If you're launching a side project: use Docker Compose. If it starts getting traffic and you need zero-downtime deploys and auto-scaling: migrate to k3s. The Kubernetes concepts you learn will transfer directly to any cloud provider.


Managing your freelance or indie hacker projects? My Freelancer OS Notion Template keeps your client work, projects, and finances organized — so you can focus on shipping.

Build, ship, iterate. 🚀
