<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sandesh Pawar</title>
    <description>The latest articles on DEV Community by Sandesh Pawar (@dev-sandesh).</description>
    <link>https://dev.to/dev-sandesh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3758586%2F05b75253-a904-4bc9-a3df-acc5b5aec707.jpeg</url>
      <title>DEV Community: Sandesh Pawar</title>
      <link>https://dev.to/dev-sandesh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dev-sandesh"/>
    <language>en</language>
    <item>
      <title>Kubernetes Backup &amp; Restore: Velero + MinIO Complete Guide</title>
      <dc:creator>Sandesh Pawar</dc:creator>
      <pubDate>Fri, 27 Mar 2026 10:20:01 +0000</pubDate>
      <link>https://dev.to/dev-sandesh/kubernetes-backup-restore-velero-minio-complete-guide-35m7</link>
      <guid>https://dev.to/dev-sandesh/kubernetes-backup-restore-velero-minio-complete-guide-35m7</guid>
      <description>&lt;p&gt;Kubernetes environments demand reliable backups to prevent data loss from misconfigurations or disasters. This step-by-step tutorial shows how to set up Velero with a local MinIO backend for namespace backups and restores using Helm on Minikube or any cluster. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Velero with MinIO?
&lt;/h2&gt;

&lt;p&gt;Velero backs up Kubernetes resources like deployments, services, and volumes via the API server, storing them in object storage like MinIO (S3-compatible). It's the leading open-source tool for disaster recovery, cluster migration, and scheduled backups in 2026 production setups. &lt;/p&gt;

&lt;p&gt;MinIO provides a lightweight, self-hosted S3 alternative ideal for development, air-gapped clusters, or cost-sensitive teams—avoiding cloud vendor lock-in. &lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Ensure these are ready before starting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker and Docker Compose installed.&lt;/li&gt;
&lt;li&gt;Kubernetes cluster (e.g., Minikube v1.33+).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; configured.&lt;/li&gt;
&lt;li&gt;Helm v3+.&lt;/li&gt;
&lt;li&gt;Update MinIO IP in configs (use &lt;code&gt;hostname -I&lt;/code&gt; for host IP).&lt;/li&gt;
&lt;/ul&gt;
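&lt;p&gt;A minimal sketch for capturing that host IP once so the later configs can reuse it (assumes a Linux host where &lt;code&gt;hostname -I&lt;/code&gt; is available):&lt;/p&gt;

```shell
# Capture the host's primary IP once and reuse it in later configs.
# Assumes Linux; on macOS use e.g. `ipconfig getifaddr en0` instead.
MINIO_HOST_IP=$(hostname -I | awk '{print $1}')
echo "MinIO s3Url will be: http://${MINIO_HOST_IP}:9000"
```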

&lt;p&gt;&lt;strong&gt;Common Mistake:&lt;/strong&gt; Forgetting to expose MinIO's IP correctly leads to Velero connection timeouts. &lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Deploy MinIO Storage Backend
&lt;/h2&gt;

&lt;p&gt;MinIO acts as your S3-compatible backup storage. Use Docker Compose for quick local setup.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;docker-compose.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.7'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;minio&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;minio/minio:latest&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero-minio&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9000:9000"&lt;/span&gt;  &lt;span class="c1"&gt;# API&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9001:9001"&lt;/span&gt;  &lt;span class="c1"&gt;# Console&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;minio-data:/data&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MINIO_ROOT_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero&lt;/span&gt;
      &lt;span class="na"&gt;MINIO_ROOT_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Velero123StrongPass!&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;curl"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-f"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:9000/minio/health/live"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;20s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;server /data --console-address ":9001"&lt;/span&gt;

  &lt;span class="na"&gt;mc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="c1"&gt;# MinIO Client with retry loop&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;minio/mc:latest&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;minio&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_healthy&lt;/span&gt;
    &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="s"&gt;/bin/sh -c "&lt;/span&gt;
      &lt;span class="s"&gt;echo 'Waiting for MinIO health...';&lt;/span&gt;
      &lt;span class="s"&gt;mc alias set local http://minio:9000 velero Velero123StrongPass!;&lt;/span&gt;
      &lt;span class="s"&gt;mc mb local/backup-bucket || echo 'Bucket already exists';&lt;/span&gt;
      &lt;span class="s"&gt;mc anonymous set public local/backup-bucket || echo 'Policy already set';&lt;/span&gt;
      &lt;span class="s"&gt;mc ls local/backup-bucket;&lt;/span&gt;
      &lt;span class="s"&gt;exit 0;&lt;/span&gt;
      &lt;span class="s"&gt;"&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;minio-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Launch it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify at &lt;code&gt;http://&amp;lt;your-host-ip&amp;gt;:9001&lt;/code&gt; (user: &lt;code&gt;velero&lt;/code&gt;, pass: &lt;code&gt;Velero123StrongPass!&lt;/code&gt;). Check &lt;code&gt;backup-bucket&lt;/code&gt; exists. &lt;/p&gt;
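&lt;p&gt;You can also probe the health endpoint from the shell before moving on; a quick sketch (the address below is a placeholder for your host IP):&lt;/p&gt;

```shell
# Probe MinIO's liveness endpoint; prints a clear status either way.
MINIO_URL="http://127.0.0.1:9000"   # placeholder; substitute your host IP
if curl -fsS "${MINIO_URL}/minio/health/live"; then
  echo "MinIO is live"
else
  echo "MinIO not reachable yet"
fi
```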

&lt;p&gt;&lt;strong&gt;Pro Tip:&lt;/strong&gt; In production, use distributed MinIO with erasure coding for high availability. &lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 2: Install Velero via Helm
&lt;/h2&gt;

&lt;p&gt;Velero deploys as a cluster operator. Use VMware Tanzu Helm repo (latest stable as of March 2026). &lt;/p&gt;

&lt;p&gt;Add repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create &lt;code&gt;velero-secret.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero-secrets&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cloud&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[default]&lt;/span&gt;
    &lt;span class="s"&gt;aws_access_key_id = velero&lt;/span&gt;
    &lt;span class="s"&gt;aws_secret_access_key = Velero123StrongPass!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace velero
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; velero-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
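&lt;p&gt;Alternatively, you can skip hand-writing the Secret manifest: generate the same credentials file locally and load it with &lt;code&gt;kubectl create secret generic&lt;/code&gt;. A sketch (the file contents mirror the &lt;code&gt;stringData&lt;/code&gt; above):&lt;/p&gt;

```shell
# Write the S3-style credentials file (same content the Secret embeds).
printf '[default]\naws_access_key_id = velero\naws_secret_access_key = Velero123StrongPass!\n' | tee credentials-velero
# Then load it as the Secret Velero expects:
# kubectl create secret generic velero-secrets -n velero --from-file=cloud=credentials-velero
```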



&lt;p&gt;Create &lt;code&gt;velero-values.yaml&lt;/code&gt; (set &lt;code&gt;s3Url&lt;/code&gt; to an address that cluster pods can reach; MinIO runs on the host, so on Minikube use &lt;code&gt;http://host.minikube.internal:9000&lt;/code&gt; or your host IP from &lt;code&gt;hostname -I&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;configuration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backupStorageLocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws&lt;/span&gt;
      &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backup-bucket&lt;/span&gt;
      &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;s3Url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://YOUR_MINIO_IP:9000&lt;/span&gt;  &lt;span class="c1"&gt;# Update this!&lt;/span&gt;
        &lt;span class="na"&gt;s3ForcePathStyle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
      &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;defaultVolumesToRestic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;existingSecret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero-secrets&lt;/span&gt;

&lt;span class="na"&gt;snapshotsEnabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;  &lt;span class="c1"&gt;# Enable for PV snapshots in prod&lt;/span&gt;

&lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero-plugin-for-aws&lt;/span&gt;
     &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;velero/velero-plugin-for-aws:v1.13.1&lt;/span&gt;  &lt;span class="c1"&gt;# Use latest compatible&lt;/span&gt;
     &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
     &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/target&lt;/span&gt;
         &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;plugins&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
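&lt;p&gt;To avoid forgetting the placeholder, you can template it with &lt;code&gt;sed&lt;/code&gt;; a sketch (the IP below is a made-up example):&lt;/p&gt;

```shell
MINIO_HOST_IP=192.168.1.50   # hypothetical; use your real host IP
# Demonstrate the substitution on a sample line:
printf 's3Url: http://YOUR_MINIO_IP:9000\n' | sed "s|YOUR_MINIO_IP|${MINIO_HOST_IP}|"
# prints: s3Url: http://192.168.1.50:9000
# Apply the same edit in place to the real file:
# sed -i "s|YOUR_MINIO_IP|${MINIO_HOST_IP}|" velero-values.yaml
```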



&lt;p&gt;Install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;velero vmware-tanzu/velero &lt;span class="nt"&gt;-n&lt;/span&gt; velero &lt;span class="nt"&gt;-f&lt;/span&gt; velero-values.yaml
kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; velero  &lt;span class="c"&gt;# Wait for Running&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Common Mistake:&lt;/strong&gt; A wrong &lt;code&gt;s3Url&lt;/code&gt; is the top cause of failed backups: the address must be reachable from inside cluster pods (on Minikube, &lt;code&gt;host.minikube.internal&lt;/code&gt; resolves to the host), not just from your workstation. &lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Install Velero CLI
&lt;/h2&gt;

&lt;p&gt;Download Velero CLI v1.18.0+ (matches server; check velero.io for latest):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://github.com/vmware-tanzu/velero/releases/download/v1.18.0/velero-v1.18.0-linux-amd64.tar.gz
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xvf&lt;/span&gt; velero-v1.18.0-linux-amd64.tar.gz
&lt;span class="nb"&gt;sudo cp &lt;/span&gt;velero-v1.18.0-linux-amd64/velero /usr/local/bin/
velero version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Phase 4: Create and Verify Backup
&lt;/h2&gt;

&lt;p&gt;Backup namespaces (create &lt;code&gt;test-ns&lt;/code&gt; first if needed: &lt;code&gt;kubectl create ns test-ns&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;velero backup create test-backup &lt;span class="nt"&gt;--include-namespaces&lt;/span&gt; default,test-ns &lt;span class="nt"&gt;--wait&lt;/span&gt;
velero backup get
velero backup describe test-backup  &lt;span class="c"&gt;# Should show Completed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Backups are stored in MinIO's &lt;code&gt;backup-bucket&lt;/code&gt; as compressed tarballs of resource manifests. &lt;br&gt;
&lt;strong&gt;Pro Tip:&lt;/strong&gt; Add &lt;code&gt;--default-volumes-to-fs-backup&lt;/code&gt; to include PVC data (this flag replaced &lt;code&gt;--default-volumes-to-restic&lt;/code&gt; in Velero 1.10+); it requires the node-agent daemonset. &lt;/p&gt;
&lt;h2&gt;
  
  
  Phase 5: Simulate Disaster and Restore
&lt;/h2&gt;

&lt;p&gt;Delete test resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod,service &lt;span class="nt"&gt;--all&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; default  &lt;span class="c"&gt;# Or specific: kubectl delete pod test -n default&lt;/span&gt;
kubectl delete ns test-ns
kubectl get po &lt;span class="nt"&gt;-A&lt;/span&gt;  &lt;span class="c"&gt;# Confirm gone&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;velero restore create &lt;span class="nt"&gt;--from-backup&lt;/span&gt; test-backup &lt;span class="nt"&gt;--wait&lt;/span&gt;
velero restore get
velero restore describe &amp;lt;restore-name&amp;gt;  &lt;span class="c"&gt;# From output&lt;/span&gt;
kubectl get po &lt;span class="nt"&gt;-A&lt;/span&gt;  &lt;span class="c"&gt;# Resources back!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Common Mistake:&lt;/strong&gt; Restores skip existing resources by default—use &lt;code&gt;--existing-resource-policy=update&lt;/code&gt; to overwrite. &lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Production
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Recommendation&lt;/th&gt;
&lt;th&gt;Why It Matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scheduling&lt;/td&gt;
&lt;td&gt;&lt;code&gt;velero schedule create daily --schedule="0 2 * * *"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Automates daily backups at 2 AM.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Retention&lt;/td&gt;
&lt;td&gt;&lt;code&gt;--ttl=30d&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Keeps 30 days; prevents storage bloat.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Monitoring&lt;/td&gt;
&lt;td&gt;Export Prometheus metrics; alert on failures.&lt;/td&gt;
&lt;td&gt;Catches issues before disasters.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-location&lt;/td&gt;
&lt;td&gt;Add secondary BSL for offsite DR.&lt;/td&gt;
&lt;td&gt;Survives regional outages.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hooks&lt;/td&gt;
&lt;td&gt;Pre/post exec hooks for DB flush.&lt;/td&gt;
&lt;td&gt;Ensures consistent stateful backups.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
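&lt;p&gt;The scheduling row can also be expressed declaratively as a &lt;code&gt;Schedule&lt;/code&gt; resource, which is easier to keep in Git; a sketch (the name and namespaces are illustrative):&lt;/p&gt;

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily            # illustrative name
  namespace: velero
spec:
  schedule: "0 2 * * *"  # daily at 2 AM, cron syntax
  template:
    includedNamespaces:
      - default
    ttl: 720h            # 30-day retention, matching the table's --ttl=30d
```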

&lt;p&gt;Run a full end-to-end backup-and-restore test monthly. Velero shines for DevOps teams handling stateful apps like databases on Kubernetes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ready to implement?&lt;/strong&gt; Drop a comment on your cluster setup or share your backup success! Subscribe for more Kubernetes/DevOps guides.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>backup</category>
      <category>minio</category>
      <category>devops</category>
    </item>
    <item>
      <title>What Is Kubernetes API Deprecation?</title>
      <dc:creator>Sandesh Pawar</dc:creator>
      <pubDate>Sun, 08 Mar 2026 14:12:37 +0000</pubDate>
      <link>https://dev.to/dev-sandesh/what-is-kubernetes-api-deprecation-431a</link>
      <guid>https://dev.to/dev-sandesh/what-is-kubernetes-api-deprecation-431a</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Kubernetes evolves rapidly, and with each release, APIs are improved, stabilized, or removed. While this evolution helps improve reliability and performance, it also introduces challenges for administrators and developers maintaining Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;One common issue teams encounter during cluster upgrades is &lt;strong&gt;API deprecation&lt;/strong&gt;. If deprecated APIs are still being used in manifests, Helm charts, or automation scripts, deployments can fail, applications may stop working, and in severe cases, production outages can occur.&lt;/p&gt;

&lt;p&gt;In this guide, we will explain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What &lt;strong&gt;Kubernetes API deprecation&lt;/strong&gt; means&lt;/li&gt;
&lt;li&gt;Why deprecated APIs can break your workloads&lt;/li&gt;
&lt;li&gt;How to &lt;strong&gt;identify deprecated APIs in your cluster&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;practical example of converting deprecated APIs using kubectl-convert&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;What is Kubernetes API Deprecation?&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Kubernetes is built around an &lt;strong&gt;API-driven architecture&lt;/strong&gt;. Every resource inside a Kubernetes cluster — such as Pods, Deployments, Services, and Ingress — is defined and managed using APIs.&lt;/p&gt;

&lt;p&gt;Users interact with these APIs using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;kubectl CLI&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;REST API&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Client libraries&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure tools like Terraform or Helm&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each Kubernetes resource is defined with an &lt;strong&gt;apiVersion&lt;/strong&gt; field in the manifest.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;apiVersion&lt;/code&gt; determines which version of the Kubernetes API is used to create or manage the resource.&lt;/p&gt;

&lt;p&gt;As Kubernetes evolves, APIs go through different stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Alpha&lt;/strong&gt; – Experimental and unstable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Beta&lt;/strong&gt; – Well tested, but details may still change&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GA (General Availability)&lt;/strong&gt; – Production-ready&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When an API version becomes outdated or is replaced by a better version, Kubernetes marks it as &lt;strong&gt;deprecated&lt;/strong&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;What Does API Deprecation Mean?&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;API deprecation means that an API version is &lt;strong&gt;still available but scheduled for removal in future Kubernetes releases&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Kubernetes maintains a strict policy regarding deprecated APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Beta APIs are supported for &lt;strong&gt;at least 9 months or 3 releases (whichever is longer)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;After that, they may be &lt;strong&gt;completely removed&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;extensions/v1beta1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This API version for &lt;strong&gt;Ingress resources&lt;/strong&gt; was removed starting from &lt;strong&gt;Kubernetes v1.22&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you attempt to deploy resources using a removed API version, Kubernetes returns an error.&lt;/p&gt;

&lt;p&gt;Example error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s)
error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "extensions/v1beta1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Why Deprecated APIs Are Dangerous&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Using deprecated APIs can cause serious operational issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1. Cluster Upgrade Failures&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When upgrading Kubernetes clusters, manifests using deprecated APIs may fail to deploy.&lt;/p&gt;

&lt;p&gt;This can break:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Helm upgrades&lt;/li&gt;
&lt;li&gt;Infrastructure automation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Application Downtime&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Applications depending on removed APIs may stop functioning after a cluster upgrade.&lt;/p&gt;

&lt;p&gt;Even subtle API changes can introduce unexpected behavior and debugging complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Compatibility Issues&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Infrastructure tools such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform providers&lt;/li&gt;
&lt;li&gt;Helm charts&lt;/li&gt;
&lt;li&gt;CI/CD pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;may depend on certain Kubernetes API versions. When those APIs are removed, compatibility breaks.&lt;/p&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;How to Identify API Versions in Kubernetes&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;You can list all supported API versions in your cluster using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl api-versions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command helps administrators verify which APIs are currently supported by the cluster.&lt;/p&gt;

&lt;p&gt;However, identifying which resources &lt;strong&gt;actually use deprecated APIs&lt;/strong&gt; inside your cluster can be challenging.&lt;/p&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;Example: Deprecated Ingress API&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Below is an example of an &lt;strong&gt;old Ingress manifest using a deprecated API version&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ingress-old.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;# Deprecated API version&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-space&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/video-service&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
        &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-svc&lt;/span&gt;
          &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This API version (&lt;code&gt;networking.k8s.io/v1beta1&lt;/code&gt;) is deprecated and removed in newer Kubernetes versions.&lt;/p&gt;
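&lt;p&gt;Before upgrading, a recursive &lt;code&gt;grep&lt;/code&gt; over your manifest repository catches the obvious cases; a sketch (the sample file is written inline so the commands are self-contained):&lt;/p&gt;

```shell
# Recreate the deprecated sample manifest, then scan for removed Ingress API versions.
printf 'apiVersion: networking.k8s.io/v1beta1\nkind: Ingress\n' | tee ingress-old.yaml
grep -rlE 'apiVersion: *(extensions|networking.k8s.io)/v1beta1' .
```

This only finds literal `apiVersion` strings; Helm-templated charts need a render step first.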




&lt;h1&gt;
  
  
  &lt;strong&gt;How to Fix Deprecated Kubernetes APIs&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;One of the easiest ways to migrate deprecated APIs is by using the &lt;strong&gt;kubectl-convert tool&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This tool automatically converts manifests from older API versions to the latest supported version.&lt;/p&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;Step 1: Download kubectl-convert&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
100 55.8M  100 55.8M    0     0  55.3M      0  0:00:01  0:00:01 --:--:-- 55.3M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 2: Download the Checksum&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl-convert.sha256"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 3: Verify the Binary&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "$(cat kubectl-convert.sha256) kubectl-convert" | sha256sum --check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl-convert: OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 4: Install the Tool&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo install -o root -g root -m 0755 kubectl-convert /usr/local/bin/kubectl-convert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 5: Verify Installation&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl convert --help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command displays the usage instructions for converting Kubernetes manifests.&lt;/p&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;Step 6: Convert Deprecated Manifest&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Convert the old manifest to the current API version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl-convert -f ingress-old.yaml --output-version networking.k8s.io/v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Converted output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;nginx.ingress.kubernetes.io/rewrite-target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-space&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ingress-svc&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/video-service&lt;/span&gt;
        &lt;span class="na"&gt;pathType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Prefix&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;loadBalancer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 7: Save the Converted Manifest&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl-convert -f ingress-old.yaml --output-version networking.k8s.io/v1 &amp;gt; ingress-new.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the new file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat ingress-new.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 8: Deploy the Updated Resource&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k create -f ingress-new.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ingress.networking.k8s.io/ingress-space created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Step 9: Verify the Deployment&lt;/strong&gt;
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;k get ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
ingress-space   &amp;lt;none&amp;gt;   *                 80      6s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  &lt;strong&gt;Best Practices to Avoid API Deprecation Issues&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;To avoid disruptions caused by deprecated APIs, follow these best practices:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Monitor Kubernetes Release Notes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Always review release notes before upgrading clusters.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2. Regularly Audit Your Manifests&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Check all YAML manifests, Helm charts, and automation scripts for deprecated APIs.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;3. Use Tools for API Migration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Tools such as &lt;strong&gt;kubectl-convert&lt;/strong&gt; help automate the migration process.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;4. Test Upgrades in Staging Environments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before upgrading production clusters, test upgrades in staging environments to identify deprecated API usage.&lt;/p&gt;




&lt;h1&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Kubernetes API deprecation is a normal part of the platform's evolution. However, failing to update deprecated APIs can lead to failed deployments, compatibility issues, and unexpected downtime.&lt;/p&gt;

&lt;p&gt;Understanding how Kubernetes APIs evolve and knowing how to migrate deprecated resources is an essential skill for Kubernetes administrators and DevOps engineers.&lt;/p&gt;

&lt;p&gt;By regularly auditing manifests and using tools like &lt;strong&gt;kubectl-convert&lt;/strong&gt;, teams can ensure smooth cluster upgrades and maintain stable Kubernetes environments.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>apiversions</category>
    </item>
    <item>
      <title>Adding Metrics to a Kubernetes Cluster for Pod and Node Resource Monitoring</title>
      <dc:creator>Sandesh Pawar</dc:creator>
      <pubDate>Sun, 08 Feb 2026 11:23:09 +0000</pubDate>
      <link>https://dev.to/dev-sandesh/adding-metrics-to-a-kubernetes-cluster-for-pod-and-node-resource-monitoring-4a9a</link>
      <guid>https://dev.to/dev-sandesh/adding-metrics-to-a-kubernetes-cluster-for-pod-and-node-resource-monitoring-4a9a</guid>
      <description>&lt;p&gt;Monitoring CPU and memory usage of pods and nodes is essential for keeping your Kubernetes cluster &lt;strong&gt;healthy&lt;/strong&gt; and performant.&lt;br&gt;
Without metrics, you are effectively running the cluster blind and cannot troubleshoot performance issues or scale workloads properly.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why do we need metrics in a Kubernetes cluster?
&lt;/h2&gt;

&lt;p&gt;Resource metrics help you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;See which pods and nodes are consuming the most CPU and memory.&lt;/li&gt;
&lt;li&gt;Detect performance bottlenecks and noisy neighbors early.&lt;/li&gt;
&lt;li&gt;Make informed scaling decisions for Horizontal/Vertical Pod Autoscalers.&lt;/li&gt;
&lt;li&gt;Troubleshoot issues like OOMKills, throttling, and node pressure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, many Kubernetes clusters do not ship with the Metrics Server installed.&lt;br&gt;
You can verify this by checking for the &lt;code&gt;metrics-server&lt;/code&gt; pod in the &lt;code&gt;kube-system&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system | &lt;span class="nb"&gt;grep &lt;/span&gt;metrics-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do not see a &lt;code&gt;metrics-server&lt;/code&gt; pod, it means metrics are not yet available in your cluster.&lt;/p&gt;

&lt;p&gt;The Metrics Server is a cluster-wide aggregator that collects CPU and memory usage from kubelets on each node and exposes them through the Kubernetes Metrics API, which is what &lt;code&gt;kubectl top node&lt;/code&gt; and &lt;code&gt;kubectl top pod&lt;/code&gt; use under the hood.&lt;/p&gt;




&lt;h2&gt;
  
  
  Exploring the &lt;code&gt;kubectl top&lt;/code&gt; command
&lt;/h2&gt;

&lt;p&gt;Before installing anything, check what &lt;code&gt;kubectl top&lt;/code&gt; can do using the help flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl top &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once Metrics Server is running, you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Show node-level resource usage &lt;/span&gt;
kubectl top nodes 
&lt;span class="c"&gt;# Show pod-level resource usage in the current namespace &lt;/span&gt;
kubectl top pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Metrics Server is not installed or not working, you will see an error similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Error from server &lt;span class="o"&gt;(&lt;/span&gt;ServiceUnavailable&lt;span class="o"&gt;)&lt;/span&gt;: the server is currently unable to handle the request &lt;span class="o"&gt;(&lt;/span&gt;get nodes.metrics.k8s.io&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This indicates that the &lt;code&gt;metrics.k8s.io&lt;/code&gt; API is not available in your cluster.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installing Metrics Server in the Kubernetes cluster
&lt;/h2&gt;

&lt;p&gt;Metrics Server can be installed by applying the official &lt;code&gt;components.yaml&lt;/code&gt; manifest.&lt;/p&gt;

&lt;p&gt;Run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the &lt;code&gt;metrics-server&lt;/code&gt; deployment (and related resources) in the &lt;code&gt;kube-system&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;You can then check the logs of the Metrics Server deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system deployment/metrics-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is working correctly, you should see logs indicating successful scraping of metrics from kubelet.&lt;/p&gt;




&lt;h2&gt;
  
  
  Troubleshooting: Metrics Server pod not Ready
&lt;/h2&gt;

&lt;p&gt;Sometimes the &lt;code&gt;metrics-server&lt;/code&gt; pod may stay in &lt;code&gt;CrashLoopBackOff&lt;/code&gt; or &lt;code&gt;NotReady&lt;/code&gt; state.&lt;br&gt;&lt;br&gt;
A common reason is that Metrics Server cannot establish a secure TLS connection to kubelet on the nodes, often due to certificate or hostname issues.&lt;/p&gt;

&lt;p&gt;You have two broad options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fix the certificates (secure, recommended for production).&lt;/li&gt;
&lt;li&gt;Relax TLS verification using &lt;code&gt;--kubelet-insecure-tls&lt;/code&gt; (acceptable in labs, not recommended for production).&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Option 1: Regenerate certificates securely (recommended)
&lt;/h2&gt;

&lt;p&gt;If you are using a kubeadm-based cluster, you can regenerate control-plane certificates using &lt;code&gt;kubeadm init phase&lt;/code&gt; commands. For the recommended configuration, see &lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs" rel="noopener noreferrer"&gt;kubelet-serving-certs&lt;/a&gt; in the official documentation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Regenerate certificates (adjust the config path as per your environment):
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init phase certs all &lt;span class="nt"&gt;--config&lt;/span&gt; /path/to/your/configuration/file.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Restart kubelet on each control-plane and worker node:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart kubelet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After this, wait a few moments and re-check the Metrics Server pod status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system | &lt;span class="nb"&gt;grep &lt;/span&gt;metrics-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the TLS setup is correct, the pod should move to &lt;code&gt;Running&lt;/code&gt; and &lt;code&gt;Ready&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Option 2: Use insecure TLS for Metrics Server (not recommended for production)
&lt;/h2&gt;

&lt;p&gt;For development, lab, or non-critical clusters, you might decide to bypass strict TLS verification by adding the &lt;code&gt;--kubelet-insecure-tls&lt;/code&gt; flag to the Metrics Server container arguments.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Edit the Metrics Server deployment:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl edit deployment metrics-server &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Under &lt;code&gt;spec.template.spec.containers[0].args&lt;/code&gt;, add:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:   
  template:    
    spec:      
      containers:      
      - name: metrics-server        
        args:        
        - --kubelet-insecure-tls`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and exit the editor.&lt;br&gt;&lt;br&gt;
Kubernetes will restart the pod with the updated arguments.&lt;/p&gt;
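If you prefer not to edit the deployment interactively, the same change can be applied declaratively with a JSON patch. A minimal patch file (the file name insecure-tls-patch.json is a placeholder, and the container index assumes metrics-server is the first container in the pod spec) could look like:

```json
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-insecure-tls"
  }
]
```

It can then be applied with kubectl patch deployment metrics-server -n kube-system --type json --patch-file insecure-tls-patch.json, which appends the flag to the existing args list without replacing the other arguments.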

&lt;p&gt;Again, verify that the pod becomes Ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system | &lt;span class="nb"&gt;grep &lt;/span&gt;metrics-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: Using &lt;code&gt;--kubelet-insecure-tls&lt;/code&gt; disables certificate validation between Metrics Server and kubelet and can expose you to TLS man-in-the-middle attacks, so avoid this in production environments.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Verifying pod and node utilization with &lt;code&gt;kubectl top&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Once the Metrics Server pod is healthy and Ready, you can start querying live metrics.&lt;/p&gt;

&lt;p&gt;Check node-level usage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl top nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
worker-node1   120m         6%     800Mi           40%
worker-node2   90m          4%     700Mi           35%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check pod-level usage in the current namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl top pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also view metrics across all namespaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl top pods &lt;span class="nt"&gt;-A&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a quick, CLI-based view of which workloads are consuming the most resources, and is often the first step in investigating scaling or performance issues.&lt;/p&gt;
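kubectl top also accepts --sort-by=cpu or --sort-by=memory, and because its output is plain text it composes well with standard shell tools. As a local illustration (hypothetical pod names and values, not from a real cluster), a captured listing can be sorted by the memory column:

```shell
# A captured 'kubectl top pods' style listing (hypothetical pods and values)
output='NAME      CPU(cores)   MEMORY(bytes)
web-1     120m         800Mi
db-1      90m          1200Mi
cache-1   10m          64Mi'

# Skip the header, then sort by column 3 descending;
# -h understands human-readable suffixes like Mi and Gi
echo "$output" | tail -n +2 | sort -k3 -hr
```

Here db-1 sorts to the top as the heaviest memory consumer; for live clusters, the built-in --sort-by flag avoids the need for post-processing.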




&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Official &lt;a href="https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/" rel="noopener noreferrer"&gt;&lt;code&gt;kubectl top&lt;/code&gt; command documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>observability</category>
      <category>monitoring</category>
      <category>analytics</category>
    </item>
  </channel>
</rss>
