I had an old HP Pavilion sitting on a shelf for about two years. i5, 16 GB RAM, 240 GB SSD — mostly empty. I almost gave it away once. Then I didn't.
Funny how the things you almost throw away end up being the most useful.
The Problem Wasn't Money
I was paying for cloud storage. A VPN. DNS filtering. Home automation. Photo management. None of these are expensive on their own. But together? Together they added up to a dependency I didn't fully control.
"The most dangerous phrase in the language is: We've always done it this way." — Grace Hopper
I kept paying because everyone else did. Because Google Drive works fine until it doesn't. Because a smart plug that needs a cloud server to turn off is not a smart plug — it's a remote control with extra steps.
So I did what any stubborn engineer does. I consolidated.
One machine. Many purposes. The HP Pavilion became the candidate.
Proxmox Was the Easy Part
I'd heard about Proxmox for a while — a hypervisor built on Debian that runs VMs and lightweight containers with a clean web UI. Free. What sold me was that it could handle both heavy workloads in full virtual machines and lighter services in LXC containers, all on the same box.
Installation was genuinely painless. Burned the ISO to a USB drive, booted the laptop, and had a working Proxmox host in twenty minutes. The web UI was up at https://<host-ip>:8006 before I'd finished my coffee.
"Talk is cheap. Show me the code." — Linus Torvalds
I didn't need benchmarks or blog posts convincing me it worked. I needed to see https://<host-ip>:8006 in my browser. And there it was.
I didn't plan the whole thing upfront. That's not how projects like this work. You start with one thing. Then another. Then suddenly you have a brain.
Pi-hole: The First Neuron
Pi-hole was first — DNS-level ad blocking in a 512 MB container set as the primary DNS for the whole network. Quiet. Efficient. No more sketchy ads on cheap IoT devices. No more background tracking.
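For anyone curious what that actually takes: spinning up a container like this is roughly three commands on the Proxmox host. This is a sketch, not a transcript — the VMID and template name are assumptions, and yours will differ:

```
# on the Proxmox host; VMID and template name are assumptions
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname pihole --memory 512 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
pct exec 101 -- bash -c "curl -sSL https://install.pi-hole.net | bash"
```

After that, pointing the router's DHCP DNS setting at the container's IP is what makes every device on the network flow through it.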
I didn't realize how much noise was happening until it stopped.
Home Assistant: The Heart
Next came Home Assistant — a 4 GB VM managing 25 smart switches across every room. Lights, ACs, fans, geysers.
The Toyama integration was a DIY affair: switches driven over plain HTTP, plus a subnet scanner to auto-discover the gateway when DHCP shuffles IPs around. Not elegant. It works. The kind of 1 AM engineering decision you make and never question again.
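A scanner like that fits in a few lines of Python. This is a minimal sketch, not my actual code — it assumes the gateway answers on a known TCP port (80 here) and lives somewhere in a /24:

```python
import ipaddress
import socket
from typing import Optional


def candidate_ips(cidr: str) -> list[str]:
    """Enumerate every usable host address in the subnet."""
    return [str(ip) for ip in ipaddress.ip_network(cidr).hosts()]


def find_gateway(cidr: str, port: int = 80, timeout: float = 0.2) -> Optional[str]:
    """Return the first host that accepts a TCP connection on `port`."""
    for ip in candidate_ips(cidr):
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return ip
        except OSError:
            continue  # closed port, no host, or timeout — keep scanning
    return None


# gateway = find_gateway("192.168.1.0/24")  # re-run whenever DHCP reshuffles
```

Slow and dumb, but it only runs when the gateway goes missing, so it doesn't need to be clever.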
The Voice Thing Became My Favourite Part
This is where it got fun.
I passed the NVIDIA 940M — still sitting inside the laptop — through to a Debian container. cgroup2 device rules, bind-mounted host libraries, the whole thing. Inside it runs:
- Ollama with a quantised language model that fits in 4 GB of VRAM
- Faster-Whisper for speech-to-text
- Piper for text-to-speech (I picked a voice called Amy because it doesn't sound like a robot)
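Getting the 940M visible inside the container comes down to a handful of lines in the LXC config. A sketch, not my exact file — the device major numbers are assumptions (nvidia-uvm's in particular changes per boot; check /proc/devices):

```
# /etc/pve/lxc/<vmid>.conf — illustrative, majors vary per system
lxc.cgroup2.devices.allow: c 195:* rwm   # nvidia0, nvidiactl
lxc.cgroup2.devices.allow: c 511:* rwm   # nvidia-uvm (check /proc/devices)
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

The host driver does the real work; the container just needs the device nodes and matching userspace libraries bind-mounted in.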
The flow is simple: I speak into my old phone. The audio hits Whisper on the Pavilion's GPU. The language model figures out what I want. Piper speaks back. It plays through the Sonos in the living room.
None of it needs the internet. The entire voice loop runs local. That's not a nice-to-have — that's the whole reason this exists.
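The whole loop fits in surprisingly little glue. The sketch below is not my actual code — it assumes faster-whisper is installed, Ollama is serving on its default port (11434), and the piper CLI is on PATH; the model names are placeholders:

```python
import json
import subprocess
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def transcribe(wav_path: str) -> str:
    """Speech-to-text on the GPU via faster-whisper."""
    from faster_whisper import WhisperModel  # pip install faster-whisper
    model = WhisperModel("small", device="cuda", compute_type="int8")
    segments, _info = model.transcribe(wav_path)
    return " ".join(seg.text.strip() for seg in segments)


def build_payload(prompt: str, model: str = "llama3.2:3b") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_llm(prompt: str) -> str:
    """One non-streaming completion from the local model."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


def speak(text: str, out_wav: str = "/tmp/reply.wav") -> None:
    """Text-to-speech through the piper CLI (the Amy voice)."""
    subprocess.run(
        ["piper", "--model", "en_US-amy-medium", "--output_file", out_wav],
        input=text.encode(), check=True,
    )


if __name__ == "__main__":
    heard = transcribe("/tmp/command.wav")
    speak(ask_llm(heard))
```

Everything talks over localhost; the only network hop is my phone shipping the audio to the Pavilion.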
"Any sufficiently advanced technology is indistinguishable from magic." — Arthur C. Clarke
When I say "lights off" and the room goes dark and the speaker answers me — and I know that nothing left my house to make it happen — that's not magic. But it's close enough to make me smile every single time.
OpenClaw: The Automation Brain
Then there's OpenClaw — an Ubuntu VM with 8 GB of RAM running my personal AI assistant. It talks to language models through OpenRouter, talks to Home Assistant through an MCP integration, and chats with me on Telegram.
It runs eight scheduled jobs through the day: workout reminders in the morning, geyser auto-off timers, evening wind-down prompts, and a morning summary that tells me which devices are still on and what the weather looks like.
No SSH from outside. Everything goes through the host. If it needs restarting, I trigger it from the Proxmox host with a one-liner.
What Runs, What Doesn't, and Why
Not everything runs all the time. That was a lesson I learned the hard way — the Pavilion is not a server. It's a laptop. It gets warm. It gets loud.
"Simplicity is a prerequisite for reliability." — Edsger Dijkstra
Case in point: with 40+ entities exposed to the voice AI, responses were slow and unreliable. I cut the list to 20. Instant improvement.
Always on (5 services):
- Home Assistant — it literally runs my home
- Pi-hole — DNS is infrastructure
- Voice Assistant — needs to be ready when I talk to it
- OpenClaw — the automation brain
- Hangar — a deployment agent running Docker-in-LXC with Open WebUI on top for experiments
On-demand:
- PhotoPrism — AI-powered photo management, fires up when I plug in a camera
- Nextcloud — file sync, on when I need it
Both are my attempts to stop depending on Google Drive for everything.
Things I Learned (The Hard Way)
Don't upgrade the Proxmox kernel casually. I went to 6.17 once. The NVIDIA driver broke because the DRM API changed. The 940M became an expensive paperweight. I pinned to 6.14 and haven't touched it since.
And wrapping the Toyama switches in template helpers instead of exposing raw switches gave me a cleaner dashboard and fewer surprises.
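Concretely, a wrapper like that looks something like this in Home Assistant's YAML — the entity names here are made up for illustration:

```yaml
switch:
  - platform: template
    switches:
      bedroom_geyser:
        friendly_name: "Bedroom Geyser"
        value_template: "{{ is_state('switch.toyama_ch3_raw', 'on') }}"
        turn_on:
          service: switch.turn_on
          target:
            entity_id: switch.toyama_ch3_raw
        turn_off:
          service: switch.turn_off
          target:
            entity_id: switch.toyama_ch3_raw
```

The dashboard then shows one sensible entity per device, and only the wrapped entities get exposed to the voice assistant.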
Less is more — something I keep relearning, in homelabs and in life.
Why This Matters
I'm not going to pretend this is a production setup. It's not. It's a collection of containers on an old laptop held together with YAML files and stubbornness.
But here's the thing — when I speak into my phone and the lights turn off and the Sonos answers me without touching a single cloud server, I feel something I haven't felt from any SaaS dashboard.
Control. Real control. Not the kind you get from a settings page. The kind you get from knowing exactly what's running, where, and why.
"The best way to predict the future is to invent it." — Alan Kay
That's what I did. I took a dusty laptop, a free hypervisor, and a bunch of open-source containers and built something that actually belongs to me.
This is Part 1. There's more to write — the voice pipeline deep-dive, why this particular combination of Ollama, Whisper, and Piper actually works on a 4 GB GPU without feeling like a compromise, the Toyama integration story.
An old laptop. A hypervisor. Some containers. And the slightly unreasonable idea that your home should actually be smart — without phoning home to someone else's server.
What would you run on yours?
Words are mine. Structure is Vyasa's. The HP Pavilion is still running.
One last thing. The two agents working alongside me are Vyasa and Agastya, named after the legendary Saptarishis. Vyasa compiles the words. Agastya runs the machines — the same sage who once drank an entire ocean because the truth was hidden underneath it. Now he's here, helping me find what was hiding in a dusty old HP Pavilion. Some stories just refuse to end.