<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Raphael Ben Hamo</title>
    <description>The latest articles on DEV Community by Raphael Ben Hamo (@yeshivsher).</description>
    <link>https://dev.to/yeshivsher</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1018770%2F6a326f8a-3399-487e-b31b-0137d9d9e992.jpeg</url>
      <title>DEV Community: Raphael Ben Hamo</title>
      <link>https://dev.to/yeshivsher</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yeshivsher"/>
    <language>en</language>
    <item>
      <title>How to Use a Homelab for Kubernetes Practice (Free Setup Guide)</title>
      <dc:creator>Raphael Ben Hamo</dc:creator>
      <pubDate>Thu, 23 Apr 2026 20:04:51 +0000</pubDate>
      <link>https://dev.to/yeshivsher/how-to-use-a-homelab-for-kubernetes-practice-free-setup-guide-24p3</link>
      <guid>https://dev.to/yeshivsher/how-to-use-a-homelab-for-kubernetes-practice-free-setup-guide-24p3</guid>
      <description>&lt;p&gt;Cloud-based Kubernetes — EKS, AKS, GKE — is expensive to learn on. Every minute a managed cluster runs costs money, and experimenting means breaking things. A homelab eliminates that entirely: run Kubernetes on your own machine, destroy and rebuild clusters without a billing meter, and see every layer of the stack that managed services hide.&lt;/p&gt;

&lt;h2&gt;Tool Comparison&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Minikube&lt;/th&gt;
&lt;th&gt;K3s&lt;/th&gt;
&lt;th&gt;Kind&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Beginners&lt;/td&gt;
&lt;td&gt;Multi-node / edge&lt;/td&gt;
&lt;td&gt;Fast testing / CI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Setup speed&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;td&gt;Very fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-node&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;Native via config&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resource usage&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Offline support&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Start with Minikube&lt;/strong&gt; if you're new. It's the official Kubernetes tool, has built-in add-ons, and mirrors a standard cluster closely.&lt;/p&gt;

&lt;h2&gt;Quick Setup With Minikube&lt;/h2&gt;

&lt;p&gt;Install on macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install on Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-LO&lt;/span&gt; https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
&lt;span class="nb"&gt;sudo install &lt;/span&gt;minikube-linux-amd64 /usr/local/bin/minikube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify it's running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;span class="c"&gt;# NAME       STATUS   ROLES           AGE   VERSION&lt;/span&gt;
&lt;span class="c"&gt;# minikube   Ready    control-plane   30s   v1.31.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
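&lt;p&gt;Minikube’s built-in add-ons are enabled with a single command. For example (the add-on names below are standard Minikube add-ons; run &lt;code&gt;minikube addons list&lt;/code&gt; to confirm what your version ships):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List all available add-ons and their current status
minikube addons list

# Enable the metrics-server and ingress add-ons
minikube addons enable metrics-server
minikube addons enable ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;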



&lt;h2&gt;Three Exercises to Build Real Skills&lt;/h2&gt;

&lt;h3&gt;1. Deployments and Scaling&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment nginx &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
kubectl scale deployment nginx &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Delete a pod manually — Kubernetes recreates it automatically. This is the Deployment controller maintaining desired state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete pod &amp;lt;pod-name&amp;gt;
kubectl get pods &lt;span class="nt"&gt;--watch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;2. ConfigMaps&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create configmap app-config &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;staging &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--from-literal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;LOG_LEVEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debug
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Mount it into a pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# pod-with-config.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx&lt;/span&gt;
    &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;configMapRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-config&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; pod-with-config.yaml
kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;app-pod &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;env&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; &lt;span class="s1"&gt;'ENV|LOG_LEVEL'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;3. Services and Networking&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ClusterIP — cluster-internal only&lt;/span&gt;
kubectl expose deployment nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ClusterIP &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-clusterip

&lt;span class="c"&gt;# NodePort — accessible from your machine&lt;/span&gt;
kubectl expose deployment nginx &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;NodePort &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx-nodeport

&lt;span class="c"&gt;# Open in browser via Minikube&lt;/span&gt;
minikube service nginx-nodeport
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Understanding how traffic moves through ClusterIP → NodePort → LoadBalancer is one of the most valuable Kubernetes networking skills you can build.&lt;/p&gt;
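&lt;p&gt;To try the LoadBalancer type locally (normally it requires a cloud provider), Minikube can emulate one with &lt;code&gt;minikube tunnel&lt;/code&gt;. A sketch; run the tunnel in a second terminal, and note that IP assignment depends on your local setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# LoadBalancer Service (normally backed by a cloud load balancer)
kubectl expose deployment nginx --port=80 --type=LoadBalancer --name=nginx-lb

# In a separate terminal: assigns an external IP to LoadBalancer Services
minikube tunnel

# EXTERNAL-IP should change from &amp;lt;pending&amp;gt; once the tunnel is running
kubectl get service nginx-lb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;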




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Want all 6 exercises&lt;/strong&gt; — including RBAC, Helm charts, and multi-node K3s configs? &lt;a href="https://korpro.io/blog/homelab-kubernetes-practice" rel="noopener noreferrer"&gt;Get the full free guide on korpro.io →&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;From Homelab to Production Thinking&lt;/h2&gt;

&lt;p&gt;The homelab is where instinct forms. Before you move to production, practice the questions that always come up: How do you monitor workloads? (Install Prometheus + Grafana.) How do you handle persistent data? (Test PersistentVolumeClaims and pod rescheduling.) How do you manage secrets securely? (Try Sealed Secrets or External Secrets Operator.)&lt;/p&gt;
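&lt;p&gt;For the persistent-data question, a minimal starting point (a hypothetical claim using Minikube’s default &lt;code&gt;standard&lt;/code&gt; StorageClass; adjust names and sizes for your cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# pvc-test.yaml (illustrative)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply it, mount it into a pod, write a file, delete the pod, and verify the data survives rescheduling.&lt;/p&gt;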

&lt;p&gt;Every hour spent breaking and rebuilding a local cluster translates directly to faster debugging and better decisions in production.&lt;/p&gt;




&lt;h2&gt;Running Kubernetes in Production?&lt;/h2&gt;

&lt;p&gt;Cost and waste become real problems at scale. KorPro automatically finds orphaned resources, unused ConfigMaps, over-provisioned workloads, and idle services across all your clusters — before they become surprise invoices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://korpro.io" rel="noopener noreferrer"&gt;Start free on KorPro →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>homelab</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Stop Paying for Kubernetes Leftovers (Without Risking Production)</title>
      <dc:creator>Raphael Ben Hamo</dc:creator>
      <pubDate>Thu, 05 Feb 2026 22:02:27 +0000</pubDate>
      <link>https://dev.to/yeshivsher/stop-paying-for-kubernetes-leftovers-without-risking-production-pa7</link>
      <guid>https://dev.to/yeshivsher/stop-paying-for-kubernetes-leftovers-without-risking-production-pa7</guid>
      <description>&lt;p&gt;Kubernetes waste usually isn’t a dramatic “bug.” It’s the slow accumulation of leftovers: volumes, services, configs, and credentials that were needed once, but no longer have a living workload attached to them.​​&lt;br&gt;
Teams keep paying for these resources because deleting the wrong thing in production is risky — and manual checks don’t scale across namespaces and clusters.​​&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 3 kinds of Kubernetes waste&lt;/strong&gt;&lt;br&gt;
When people say “Kubernetes cost optimization,” they often jump straight to right-sizing CPU/memory. That’s real, but it’s only one category.&lt;br&gt;
In practice, Kubernetes waste usually falls into three buckets: orphaned resources, over-provisioned resources, and idle resources.&lt;/p&gt;

&lt;p&gt;This post focuses on the one that’s the most frustrating to deal with manually: orphaned resources — things that still exist, still cost money (or create risk), but are no longer referenced by any active workload.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What “orphaned” looks like in real clusters&lt;/strong&gt;&lt;br&gt;
Orphaned resources show up in multiple places, especially after migrations, incident fixes, Helm churn, and “temporary” environments that became permanent.&lt;br&gt;
Common examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PersistentVolumes / PVCs that are no longer attached to running workloads.&lt;/li&gt;
&lt;li&gt;Services with no active endpoints (nothing behind them anymore).&lt;/li&gt;
&lt;li&gt;Secrets and ConfigMaps that have no live consumer, but still sit in the API and increase audit/attack surface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The hard part isn’t understanding the list — it’s proving “unused” safely, because Kubernetes references can be indirect and spread across many objects.&lt;/p&gt;
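&lt;p&gt;A rough first pass is possible with kubectl alone. These commands surface candidates, not proof; anything they flag still needs review before deletion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# PersistentVolumes whose claim is gone but which were never reclaimed
kubectl get pv | grep Released

# Services whose Endpoints object has no addresses behind it
kubectl get endpoints --all-namespaces | grep '&amp;lt;none&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;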

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy38x6ew5nuyh44gkhik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy38x6ew5nuyh44gkhik.png" alt="Illustration Of KorPro" width="679" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why unused Secrets are so hard to clean up&lt;/strong&gt;&lt;br&gt;
Secrets don’t expire just because an app was removed. Over time, clusters quietly accumulate credentials with no clear owner.&lt;br&gt;
That creates a larger attack surface, harder audits, and operational uncertainty — because nobody wants to delete a Secret and discover it was still referenced somewhere obscure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why simple heuristics fall short&lt;/strong&gt;&lt;br&gt;
This is why simple heuristics (labels, naming conventions, “last applied,” or “created date”) don’t hold up in production. Kor (the open-source project behind KorPro) approaches it differently: it builds a reference graph across workloads and Kubernetes objects to determine whether a Secret is truly in use.&lt;/p&gt;
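&lt;p&gt;For hands-on experimentation, the open-source Kor CLI exposes this analysis from the command line. A sketch of typical usage (subcommand and flag names here follow Kor’s README; confirm against your installed version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Find unused Secrets and ConfigMaps in one namespace
kor secret -n my-namespace
kor configmap -n my-namespace

# Scan all supported resource types at once
kor all -n my-namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;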

&lt;p&gt;&lt;strong&gt;A practical “audit-first” approach (what good looks like)&lt;/strong&gt;&lt;br&gt;
In production, the safest cleanup workflow is audit-first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scan clusters for unused/orphaned objects across namespaces.&lt;/li&gt;
&lt;li&gt;Enrich each finding with context (cost, health, risk) so humans can prioritize.&lt;/li&gt;
&lt;li&gt;Generate a “safe-to-prune” review list — then decide what to delete and when (during a change window, with approvals).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last step matters: a tool should help teams delete with confidence, but it shouldn’t auto-delete in production by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How KorPro helps (without installing anything in the cluster)&lt;/strong&gt;&lt;br&gt;
KorPro is built to discover and analyze unused Kubernetes resources across AWS (EKS), GCP (GKE), and Azure (AKS), with a “zero-installation” architecture that runs outside the cluster using read-only permissions.&lt;br&gt;
It scans for unused resources (ConfigMaps, Secrets, Services, Deployments, PVCs, and more), then enriches findings with cost estimates (monthly/yearly), health/efficiency metrics, and security risk assessments.&lt;/p&gt;

&lt;p&gt;Most importantly for real-world teams, KorPro follows an audit-first approach: it provides recommendations and a safe-to-prune checklist, but does not automatically delete resources — you keep control of production changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it on your cluster (free)&lt;/strong&gt;&lt;br&gt;
If you’re spending meaningful money on Kubernetes, it’s worth running a quick audit just to see what you’re still paying for from old environments, removed apps, and past migrations.&lt;br&gt;
KorPro offers a free trial (up to 5 reports/month): &lt;a href="https://app.korpro.io/signup" rel="noopener noreferrer"&gt;https://app.korpro.io/signup&lt;/a&gt;&lt;br&gt;
If you want to see the output format first, there’s an interactive demo here: &lt;a href="https://korpro.io/demo" rel="noopener noreferrer"&gt;https://korpro.io/demo&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
