Sandesh Pawar
Kubernetes Backup & Restore: Velero + MinIO Complete Guide

Kubernetes environments demand reliable backups to prevent data loss from misconfigurations or disasters. This step-by-step tutorial shows how to set up Velero with a local MinIO backend for namespace backups and restores using Helm on Minikube or any cluster.

Why Velero with MinIO?

Velero backs up Kubernetes resources such as deployments, services, and persistent volumes via the API server, storing them in S3-compatible object storage like MinIO. It is the leading open-source tool for disaster recovery, cluster migration, and scheduled backups in production clusters.

MinIO provides a lightweight, self-hosted S3 alternative ideal for development, air-gapped clusters, or cost-sensitive teams—avoiding cloud vendor lock-in.

Pro Tip: Test restores regularly (quarterly at minimum); an untested backup frequently turns out to be broken exactly when you need it in a real incident.

Prerequisites

Ensure these are ready before starting:

  • Docker and Docker Compose installed.
  • Kubernetes cluster (e.g., Minikube v1.33+).
  • kubectl configured.
  • Helm v3+.
  • A MinIO host IP that is reachable from cluster pods; update it in the configs below (hostname -I lists the host's IPs).

Common Mistake: Pointing Velero at a MinIO address that cluster pods cannot reach leads to connection timeouts.
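Before editing any configs, it helps to capture the candidate endpoint once. A minimal sketch, assuming a Linux host where `hostname -I` is available (the variable name `MINIO_IP` is just an example):

```shell
#!/bin/sh
# Grab the first IP that hostname -I reports. This is usually the address
# cluster pods can reach the Docker host on -- verify it for your setup.
MINIO_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
echo "MinIO endpoint candidate: http://${MINIO_IP}:9000"
```

Reuse the printed URL everywhere an s3Url or console address is needed, so a later IP change only has to be fixed in one place.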

Phase 1: Deploy MinIO Storage Backend

MinIO acts as your S3-compatible backup storage. Use Docker Compose for quick local setup.

Create docker-compose.yaml:

version: '3.7'
services:
  minio:
    image: minio/minio:latest
    container_name: velero-minio
    ports:
      - "9000:9000"  # API
      - "9001:9001"  # Console
    volumes:
      - minio-data:/data
    environment:
      MINIO_ROOT_USER: velero
      MINIO_ROOT_PASSWORD: Velero123StrongPass!
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    command: server /data --console-address ":9001"

  mc:  # MinIO Client: one-shot bucket setup (the healthcheck gates startup)
    image: minio/mc:latest
    depends_on:
      minio:
        condition: service_healthy
    entrypoint: >
      /bin/sh -c "
      echo 'Waiting for MinIO health...';
      mc alias set local http://minio:9000 velero Velero123StrongPass!;
      mc mb local/backup-bucket || echo 'Bucket already exists';
      mc ls local/backup-bucket;
      exit 0;
      "

volumes:
  minio-data:

Launch it:

docker compose up -d

Verify at http://<your-host-ip>:9001 (user: velero, pass: Velero123StrongPass!). Check backup-bucket exists.

Pro Tip: In production, use distributed MinIO with erasure coding for high availability.
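For that production case, a distributed MinIO deployment spreads objects across multiple nodes and drives, and erasure coding kicks in automatically. A sketch only, with placeholder hostnames minio1..minio4 and the same credentials as above:

```yaml
# Sketch: one node of a 4-node distributed MinIO, 2 drives each.
# The {1...4}/{1...2} expansion syntax is MinIO's own; all four
# services use the identical command line and credentials.
services:
  minio1:
    image: minio/minio:latest
    command: server --console-address ":9001" http://minio{1...4}/data{1...2}
    environment:
      MINIO_ROOT_USER: velero
      MINIO_ROOT_PASSWORD: Velero123StrongPass!
    volumes:
      - data1-1:/data1
      - data1-2:/data2
  # minio2..minio4 follow the same pattern

volumes:
  data1-1:
  data1-2:
```

With 8 drives total, MinIO's default erasure coding tolerates the loss of entire nodes without losing backup data.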

Phase 2: Install Velero via Helm

Velero runs as a controller inside the cluster. Install it from the official VMware Tanzu Helm repository (latest stable chart as of March 2026).

Add repo:

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update

Create velero-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: velero-secrets
  namespace: velero
type: Opaque
stringData:
  cloud: |
    [default]
    aws_access_key_id = velero
    aws_secret_access_key = Velero123StrongPass!

Apply:

kubectl create namespace velero
kubectl apply -f velero-secret.yaml

Create velero-values.yaml (set s3Url to a MinIO address that pods can reach; on Minikube with the Docker driver, the host is exposed to pods as http://host.minikube.internal:9000):

configuration:
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: backup-bucket
      config:
        s3Url: http://YOUR_MINIO_IP:9000  # Update this!
        s3ForcePathStyle: "true"
        region: minio  # arbitrary value, but the AWS plugin requires one
      default: true
  defaultVolumesToFsBackup: true  # replaces the deprecated defaultVolumesToRestic

credentials:
  existingSecret: velero-secrets

snapshotsEnabled: false  # Enable for PV snapshots in prod
deployNodeAgent: true  # node-agent daemonset performs file-system backups

initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.13.1  # Use latest compatible
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins

Install:

helm install velero vmware-tanzu/velero -n velero -f velero-values.yaml
kubectl get pods -n velero  # Wait for Running

Common Mistake: A wrong s3Url. Use an address reachable from cluster pods (host.minikube.internal on Minikube's Docker driver, or a routable host IP); note that minikube ip returns the node's address, not the host where MinIO runs.
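To confirm the address actually works from inside the cluster before debugging Velero itself, a throwaway curl pod can hit MinIO's liveness endpoint. The pod name and image here are just examples:

```yaml
# Throwaway pod: curls MinIO's health endpoint from inside the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: minio-reach-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:latest
      # -f makes curl exit non-zero on HTTP errors, so pod status tells the story
      args: ["-sf", "http://YOUR_MINIO_IP:9000/minio/health/live"]
```

If kubectl get pod minio-reach-test shows Completed, pods can reach MinIO; an Error status means the s3Url will time out too. Clean up with kubectl delete pod minio-reach-test.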

Phase 3: Install Velero CLI

Download the Velero CLI, matching the server version the chart deployed (check velero.io for the latest release):

wget https://github.com/vmware-tanzu/velero/releases/download/v1.18.0/velero-v1.18.0-linux-amd64.tar.gz
tar -xvf velero-v1.18.0-linux-amd64.tar.gz
sudo cp velero-v1.18.0-linux-amd64/velero /usr/local/bin/
velero version

Phase 4: Create and Verify Backup

Backup namespaces (create test-ns first if needed: kubectl create ns test-ns):

velero backup create test-backup --include-namespaces default,test-ns --wait
velero backup get
velero backup describe test-backup  # Should show Completed

Backups are stored in the MinIO backup-bucket as compressed tarballs of resource manifests plus backup metadata.
Pro Tip: Add --default-volumes-to-fs-backup to capture PVC data; this requires the node-agent daemonset (the flag replaces the older --default-volumes-to-restic).
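The same backup can also be declared as a manifest instead of via the CLI, which is handy for GitOps workflows. A sketch whose fields mirror the command above:

```yaml
# Declarative equivalent of `velero backup create test-backup ...` (sketch).
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: velero  # Backup objects live in the Velero namespace
spec:
  includedNamespaces:
    - default
    - test-ns
  ttl: 720h0m0s  # keep this backup for 30 days
```

Applying this with kubectl produces the same result the controller watches for either way.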

Phase 5: Simulate Disaster and Restore

Delete test resources:

kubectl delete pod,service --all -n default  # Or specific: kubectl delete pod test -n default
kubectl delete ns test-ns
kubectl get po -A  # Confirm gone

Restore:

velero restore create --from-backup test-backup --wait
velero restore get
velero restore describe <restore-name>  # From output
kubectl get po -A  # Resources back!

Common Mistake: Restores skip existing resources by default—use --existing-resource-policy=update to overwrite.
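That overwrite behavior can also be pinned in a declarative Restore object; a sketch:

```yaml
# Declarative restore that patches drifted resources instead of skipping them (sketch).
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-restore
  namespace: velero
spec:
  backupName: test-backup
  existingResourcePolicy: update  # default is to skip resources that already exist
```

This is useful when a partial outage left stale objects behind and you want the backup's versions to win.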

Best Practices for Production

| Aspect | Recommendation | Why It Matters |
| --- | --- | --- |
| Scheduling | velero schedule create daily --schedule="0 2 * * *" | Automates daily backups at 2 AM. |
| Retention | --ttl=720h (30 days; Velero uses Go durations, so 30d is invalid) | Prevents storage bloat. |
| Monitoring | Export Prometheus metrics; alert on failures. | Catches issues before disasters. |
| Multi-location | Add a secondary BackupStorageLocation for offsite DR. | Survives regional outages. |
| Hooks | Pre/post exec hooks for DB flush. | Ensures consistent stateful backups. |
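The scheduling and retention rows above can be combined in a single Schedule object; a sketch (adjust namespaces to taste):

```yaml
# Daily 2 AM backup with 30-day retention (sketch).
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily
  namespace: velero
spec:
  schedule: "0 2 * * *"  # cron, evaluated in the Velero server's timezone (UTC by default)
  template:             # an embedded Backup spec
    ttl: 720h0m0s       # 30 days
    includedNamespaces:
      - "*"
```

Each triggered run shows up in velero backup get with the schedule name as a prefix, so retention and monitoring apply uniformly.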

Test full E2E monthly. Velero shines for DevOps teams handling stateful apps like databases on Kubernetes.

Ready to implement? Drop a comment on your cluster setup or share your backup success! Subscribe for more Kubernetes/DevOps guides.
