Last month, I realized I was paying ₹2,500/month across various cloud services—photo backups, file sync, a VPN, and a small VM for side projects.
When I added up the annual cost and compared it to the price of a used ThinkPad on OLX, the math was obvious. Three weeks later, I had a 3-node Kubernetes cluster running in my living room.
The Problem
I wanted the real Kubernetes experience—not Minikube, not Docker Compose pretending to be orchestration. I needed something that could:
- Run GPU workloads for local LLMs (Ollama).
- Host my photo library (Immich).
- Sync files and run Home Assistant.
Every "budget homelab" guide I found assumed you'd drop ₹50,000+ on a NUC cluster. I had a laptop, an old desktop with a gaming GPU, and about ₹15,000 to spend.
What I Did
1. The Hardware Hustle
I started with what I had. My desktop had a GTX 1070 Ti collecting dust. My old ThinkPad E14 (16GB RAM, i5) was sitting in a drawer. The only purchase was a ₹6,000 used laptop with a GTX 1650 Ti from a local seller—decent specs, terrible screen, perfect for a headless node.
Total hardware cost: ₹6,000.
2. K3s over K8s
Full Kubernetes would have eaten my RAM alive. K3s gave me 90% of the functionality at ~20% of the resource overhead. The control plane runs on the ThinkPad, which also hosts lightweight services: Grafana, Prometheus, Home Assistant, and PostgreSQL. Installation was literally one command per node.
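For reference, the upstream K3s quick-start really is that short. The server IP and token below are placeholders, and you'd normally run these as root:

```sh
# On the control-plane node (the ThinkPad):
curl -sfL https://get.k3s.io | sh -

# On each worker node, point at the server and its join token
# (the token lives at /var/lib/rancher/k3s/server/node-token on the server):
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```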
3. The GPU Scheduling "Hack"
I wanted to run Ollama on the 1070 Ti, but I also wanted to use that desktop for work/gaming. The solution? A simple script that joins/leaves the K3s cluster on demand.
```sh
# athena.sh
./athena.sh gpu claim    # joins cluster, pulls workloads
./athena.sh gpu release  # drains node, returns GPU to OS
```
The cluster adapts, pods reschedule, and life continues.
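Under the hood, a script like this can be little more than `kubectl drain`/`uncordon` plus starting and stopping the k3s agent. A minimal sketch, where the node name (`gpu-desktop`) and the systemd unit name are assumptions:

```shell
#!/bin/sh
# athena.sh -- hypothetical sketch of the claim/release script.
# Assumes the desktop joined the cluster as node "gpu-desktop" and
# runs the k3s agent as a systemd service.

NODE="gpu-desktop"

gpu_claim() {
  # Restart the agent so the node rejoins, then allow scheduling again.
  sudo systemctl start k3s-agent
  kubectl uncordon "$NODE"
}

gpu_release() {
  # Evict pods (they reschedule onto the other nodes), then stop the
  # agent so the GPU is free for the host OS.
  kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
  sudo systemctl stop k3s-agent
}

case "${1:-}:${2:-}" in
  gpu:claim)   gpu_claim ;;
  gpu:release) gpu_release ;;
  *)           echo "usage: $0 gpu {claim|release}" >&2 ;;
esac
```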
4. NodePort Everything
I wasted two days trying to get proper pod networking across nodes before accepting reality: my home network and cheap router weren't going to play nice with complex Ingress. I switched to NodePort services with a 30xxx range:
- Prometheus: :30900
- Grafana: :30300
- Ollama: :31434
Not elegant? Maybe. Reliable? Absolutely.
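A NodePort service just pins a port in the 30000–32767 range on every node. A minimal sketch for the Grafana entry above (service name, selector, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: NodePort
  selector:
    app: grafana        # must match the pod's labels
  ports:
    - port: 3000        # cluster-internal port
      targetPort: 3000  # container port
      nodePort: 30300   # reachable on <any-node-ip>:30300
```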
What I Learned
Lesson 1: Scope is everything.
My first attempt tried to replicate a production setup (Traefik, cert-manager, etc.). It was fragile. The working version is simpler: hostPath volumes instead of a storage provisioner, and NodePorts instead of Ingress. "Production-grade" and "Actually usable at home" are different goals.
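A hostPath volume needs nothing beyond a directory on a node's disk. A minimal PersistentVolume sketch (name, path, and size are made up for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: immich-data
spec:
  capacity:
    storage: 500Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /mnt/data/immich  # directory on the node's local disk
```

The tradeoff: the pod must always land on the node that owns the directory, which is fine when you only have three nodes and know where the disks are.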
Lesson 2: GPUs in K8s are easier than they look.
Install the NVIDIA device plugin, add a RuntimeClass, and set resource limits. The hard part isn't the tech; it's the workflow of sharing hardware between "Work" and "Cluster."
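On K3s with the NVIDIA container toolkit and device plugin installed, those three pieces look roughly like this. Treat it as a sketch: the `nvidia` RuntimeClass handler is the conventional name, but your pod spec and image will differ:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia           # containerd runtime configured by the NVIDIA toolkit
---
apiVersion: v1
kind: Pod
metadata:
  name: ollama
spec:
  runtimeClassName: nvidia
  containers:
    - name: ollama
      image: ollama/ollama
      resources:
        limits:
          nvidia.com/gpu: 1  # advertised by the NVIDIA device plugin
```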
Current State
The cluster now runs 25+ services:
- AI: Ollama + Open WebUI
- Media/Files: Immich, Nextcloud, Jellyfin
- Home: Home Assistant
- Stack: Prometheus, Grafana, Postgres
The Financials:
- Power Draw: ~80W idle / 200W load.
- Monthly Electricity: ~₹400.
- Annual Savings: ~₹25,000 (and I own my data).
Discussion
Have you built a homelab on a budget? What tradeoffs did you make between "doing it right" and "getting it done"? Let me know in the comments!