Ben Utting

Posted on • Originally published at ctrlaltautomate.com

The Homelab That Runs My Automation Business: One Mini PC, Two Salvaged Drives

I have an £869 mini PC on a shelf. Plugged into it are two USB drives I've owned for over a decade. One was the expansion drive for my Xbox in about 2010. The other was an external backup drive I bought around 2015 and rediscovered in my bedroom. Together they run fifteen workloads that would otherwise be a monthly cloud bill.

This is what's actually on it, how much it replaced, and why I went this route instead of a VPS.

Why a box and not the cloud

The trigger was client demos. Prospects come in asking about OpenClaw or Claude Code setups and want to see something running. A throwaway GCP or AWS VM for that is £15 to £25 a month each, and even a cheap VPS is £5 to £10. Multiply by the number of demos, experiments, and half-finished builds I'd want to keep around, and the monthly number gets uncomfortable fast.

I have an infrastructure background. Azure, VMware, patching, commissioning and decommissioning servers is literally my day job. Running my own hypervisor at home is a Tuesday. The decision was: spend once on hardware I own and run whatever I want on it, or pay a cloud provider in perpetuity for the same capacity. I bought the box.

The hardware

The host is a Beelink SER9 MAX. £869 from Amazon.

  • AMD Ryzen 7 255 with Radeon 780M graphics, 8 cores / 16 threads, boost to 4.97 GHz
  • 64 GB DDR5-5600 (2 x 32 GB Micron)
  • 1 TB Crucial NVMe (CT1000E100SSD8)
  • 10 GbE on board, USB4, WiFi 6

It's a lot of machine for the price. Ryzen 7 H-series compute, 64 GB of fast DDR5, and a 10 GbE port in a palm-sized box. If I'd specced equivalent compute in the cloud I'd be north of £150 a month before storage.

Plugged into it over USB:

  • A 2 TB Seagate ST2000DM001, which started life as the USB expansion drive on my Xbox around 2010. It's partitioned in two: 500 GB for Proxmox backups and the remaining 1.3 TB as spare space.
  • A 1 TB Seagate Expansion USB drive from around 2015 that I'd forgotten about until I was tidying up. It holds the media library for Jellyfin.

Nothing on those drives I'd cry about losing. Everything important is either on the NVMe with proper Proxmox backups, or replicated via rclone to Google Drive (which I'll be retiring soon).
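The offsite copy is just a scheduled rclone sync. A minimal sketch as a cron fragment, assuming a remote named `gdrive` has already been set up with `rclone config` and the backup partition is mounted at `/mnt/usb-backups` (both names are my illustration, not from the setup above):

```
# /etc/cron.d/offsite-backup -- remote name and mount point are assumed
# Mirror the Proxmox backup partition to Google Drive nightly at 02:30
30 2 * * * root rclone sync /mnt/usb-backups gdrive:proxmox-backups --transfers 4 --log-file /var/log/rclone-backup.log
```

`rclone sync` makes the destination match the source, so deletions propagate too; `rclone copy` is the safer choice if you'd rather the cloud side only ever grow.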

The hypervisor

Proxmox VE 9.1.6 on kernel 6.17.13. Free, open source, boring in the best way. I've used ESXi and Hyper-V professionally for years and Proxmox is the one I want at home because it doesn't need a licence server, the web UI is good enough to not need vCenter, and the LXC support is first-class.

The storage layout is uncomplicated:

  • local on the NVMe (98 GB) for ISOs and templates
  • local-lvm thin pool on the NVMe (832 GB) for all VM and container disks, currently 35% used
  • media bind-mount to the 1 TB USB for Jellyfin
  • backups to the 500 GB partition on the 2 TB USB
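In Proxmox terms that whole layout is a handful of entries in `/etc/pve/storage.cfg`. A sketch using the default `local`/`local-lvm` names and hypothetical mount points for the two USB drives:

```
dir: local
        path /var/lib/vz
        content iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: backups
        path /mnt/usb-backups
        content backup
        prune-backups keep-last=5
```

The media drive doesn't need a storage entry at all: a bind mount is one line in the container's config (e.g. `mp0: /mnt/usb-media,mp=/media` in `/etc/pve/lxc/504.conf`, paths assumed), which hands the host directory straight through to Jellyfin.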

Thirty-nine days uptime as I'm writing this. Load average 0.31. Using 19.7 GB of 58 GB RAM with everything running. It's genuinely idle most of the time.

What's actually running

Four VMs (one kept stopped for demos), eleven LXC containers, fifteen workloads total.

AI and client work (VMs)

  • 201 claudio (8 GB RAM). The always-on Claude Code box I wrote about previously. Runs my morning brief, client pulse, and the X post drafts every day.
  • 202 costar (4 GB RAM). A dedicated VM for a client project automating prospecting. Kept separate so I can tear it down or snapshot it without touching anything else.
  • 401 scout (4 GB RAM). Scout scraper that finds AI automation jobs every few hours and posts them to an n8n webhook.
  • 200 openclaw-giuseppe (stopped). A demo VM I spin up when prospects want to see OpenClaw running. This is the one that justified the box in the first place.

Infrastructure and services (LXC containers)

  • 500 n8n (4 GB). Workflow automation for client deliverables and my own content pipeline.
  • 501 npm (2 GB). Nginx Proxy Manager, reverse-proxies every internal service onto a sensible hostname.
  • 502 pihole (512 MB). Network-wide DNS and ad blocking.
  • 503 evolution (2 GB). Long-running test environment for WhatsApp automations.
  • 504 jellyfin (2 GB). Media server for the 1 TB external drive.
  • 505 finance-dashboard (2 GB). The FastAPI + SQLite finance app I wrote about in the last post.
  • 506 tailscale (2 GB). Mesh VPN so every box I own can reach every other box regardless of network.
  • 507 life-os (2 GB). My self-hosted productivity stack.
  • 508 vs-code (2 GB). Browser-based VS Code instance I can hit from any device.
  • 509 calibre-web (2 GB). eBook library.
  • 510 media-stack (2 GB, 200 GB disk). Everything around Jellyfin.

LXC is doing most of the heavy lifting here. Almost all of the services are Linux userspace apps that don't need a full kernel of their own, so containers are faster to spin up, use less RAM, and are trivial to back up.
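The "faster to spin up, trivial to back up" claim is a couple of one-liners on the Proxmox host. A hedged sketch, where the container ID, template filename, and storage names are my assumptions:

```shell
# Create a small unprivileged container from a downloaded Debian template
pct create 511 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname scratch --memory 512 --cores 1 \
  --rootfs local-lvm:8 --unprivileged 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 511

# Back the whole container up to the USB-backed storage in one shot
vzdump 511 --storage backups --mode snapshot --compress zstd
```

Compare that with a full VM: no ISO, no installer, no virtual firmware, and the rootfs is an 8 GB thin volume instead of a disk image.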

What the cloud equivalent would cost

Taking the running workloads at a conservative £10 to £15 a month each for a similarly-specced cloud VM or managed container:

  • 4 running VMs at £15 each = £60/month
  • 11 LXC containers at £8 each (smaller workloads, cheaper tier) = £88/month
  • Say £150/month of replaced compute

That doesn't count storage. The 2 TB of external USB storage would be another £30 to £50 a month on any cloud object or block storage tier at those sizes.

Call it £180 to £200 a month. The Beelink paid for itself in under six months. Everything after that is free.
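The arithmetic behind those numbers, as a quick sanity check (the per-workload prices are the estimates above, and £40 is the middle of the storage range):

```shell
# Monthly cloud-replacement estimate using the figures from the post
vms=4;  vm_rate=15      # £15/month per similarly-specced cloud VM
cts=11; ct_rate=8       # £8/month per smaller managed workload
storage=40              # middle of the £30-£50 storage estimate

compute=$(( vms * vm_rate + cts * ct_rate ))   # 60 + 88 = 148
total=$(( compute + storage ))                 # 188

hardware=869
echo "Compute replaced: £${compute}/mo, total ~£${total}/mo"
echo "Payback: about $(( (hardware + total - 1) / total )) months"
```

That rounds up to a five-month payback, comfortably inside the "under six months" claim.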

More importantly, the capacity to experiment is now effectively unlimited. If I want to spin up a new VM to test an idea, I click twice. If I want to throw away that VM an hour later, I click twice. There's no bill meter running in the background nudging me to stop.

The client demo workflow, concretely

The original reason for the box. When a prospect wants to see OpenClaw in action:

  1. Clone the OpenClaw template VM (ID 901) into a fresh instance.
  2. Log in, show them the skills I've built, walk through a real run.
  3. Stop the VM when the call ends. Resources freed, backup snapshot kept.
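Those three steps map onto a few `qm` commands on the host. A sketch, assuming the template ID 901 from above and a throwaway clone ID of 999 (the clone ID, name, and snapshot name are mine):

```shell
# 1. Full-clone the OpenClaw template into a fresh demo VM
qm clone 901 999 --name openclaw-demo --full

# 2. Boot it for the call
qm start 999

# 3. Afterwards: stop it and keep a snapshot of the demo state
qm stop 999
qm snapshot 999 post-demo --description "state after client walkthrough"
```

A `--full` clone copies the disk so the demo can't touch the template; linked clones are faster to create if the storage supports them, at the cost of depending on the template's disk.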

Total cost to me: electricity. Total cost in the cloud for the same demo capacity kept warm: £15 to £25 a month per client, every month.

What I'd do differently and what's next

What I'd do differently: get the box sooner. I spent a couple of months running services on my main machine before committing to the Beelink. Those months of friction (every time I rebooted Windows, every time something drew too much CPU) were the cost of not doing it.

What's next is the network layer. Right now everything is behind whatever my ISP router gives me. The plan is a proper UniFi stack: Cloud Gateway Ultra, Flex Mini 2.5 GbE switch, U7 Lite WiFi 7 access point, and a DeskPi mini rack for about £300 total. That gets me VLAN segmentation (client traffic on its own network, lab traffic on another), a real firewall with IDS, and a home for the box that isn't a shelf.

The second phase is storage: a proper four-bay NAS with mirrored drives so I can stop relying on two random USB drives for anything important. That's the bit that's genuinely been held together with string.

Beyond that, the next Proxmox node. The joy of this setup is it scales by just adding another box. Two Beelinks clustered together is more compute than most small businesses have on-premise, and it still fits on a shelf.

The takeaway

If you've got even a little infrastructure instinct and you're paying for more than one cloud VM a month, a second-hand mini PC and a Proxmox install will pay for itself fast. Use whatever drives you've got. Proxmox doesn't care that your 2 TB backup volume used to sit on top of an Xbox 360.

The £869 hardware number is real. The two old USB drives are real. The 39 days of uptime are real. It's running my business.
