There are six Raspberry Pi 4s on a shelf in my living room. They run 24/7, they're all wired directly into the router, and they exist for one fairly specific reason: some cinema websites block GitHub's IP ranges.
GitHub Actions runners share IP space with a lot of automated traffic, and a handful of venues had decided they didn't want to serve requests from that space. The failures were inconsistent — empty responses, timeouts, bot-detection pages — which made them annoying to diagnose. Once I'd worked out what was actually happening, the fix was straightforward: residential IP addresses. Requests that look like they're coming from someone's home connection, because they are.
Hence the Pis.
Why Pis, Not Just a Cheap PC?
It's a fair question. I set myself a target: £50 or less per Pi, all-in. That means the Pi 4 itself, an SD card, a power cable, and an ethernet cable. No wiggle room for a fancy case or anything optional. But six Pis at £50 each is £300 — you could buy a reasonable secondhand desktop for that and run six runners on it without breaking a sweat.
The honest answer is that it didn't start as a deliberate architecture decision. I had one Pi spare, so I set it up as a runner. That was enough at first. As I added more venues and the pipeline got busier, I added another, then another. By the time I had three or four, I was actively buying more rather than reconsidering the approach — partly because they're cheap and low-power (running a desktop 24/7 would cost noticeably more on the electricity bill), but also because I'd started to like the fault tolerance story.
Each Pi is independent. If one plays up, it takes one runner offline, not all of them. Better yet, there's nothing precious about any individual machine — the setup steps are fully documented, so if a Pi goes wrong I can wipe the SD card and have it back as a runner in under an hour. Cattle, not pets. A single PC running six processes doesn't give you that.
Pi 4s aren't particularly cheap if you buy them new and in a hurry, but there's a reasonable secondhand market if you're patient. I watched eBay listings and Facebook Marketplace, picked them up when they matched the budget, and that's how I ended up with six of them. A few came without accessories, which meant sourcing cables separately — but even then, it worked out.
One thing I learned the hard way: the power supply matters more than you'd think. The Pi 4 is particular about voltage, and one of mine was on an underpowered cable. All the Pis run headless, so there's no screen to hint at what's wrong; it just showed up as one runner that was less reliable than the others, dropping jobs intermittently. It took longer than I'd like to admit to trace that back to the power supply, but swapping the cable fixed it immediately.
SD Cards: The Unexpected Bottleneck
The other thing that surprised me was how much the SD cards matter for this use case.
Most Raspberry Pi guides will tell you any Class 10 card is fine, and for general use that's probably true. But GitHub Actions runners do a lot of I/O: constant git checkouts, caches being read and written, files being created and deleted across every job. A slow card can seem fine at idle, but it becomes a bottleneck as soon as a job lands, especially one with lots of small steps. Jobs that should take ten seconds take ten times as long, and nothing explains why until you look at where the time is actually going.
Swapping to SanDisk Extreme Pro cards made a noticeable difference: runners became consistently faster on anything I/O-heavy, which in practice is most jobs. I ended up writing a workflow to test SD card speed, built on Raspberry Pi's own speed test script. It checks whether read and write speeds are fast enough to provide adequate performance, which saves finding out the hard way mid-pipeline (and I'm hoping will let me quickly diagnose a degrading SD card).
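The actual workflow isn't shown here, but the core check can be sketched in a few lines of shell. This is a rough stand-in for the Raspberry Pi diagnostics test (which also measures random I/O, the part that really hurts runner workloads); it assumes GNU dd, and the 10 MB/s threshold is purely illustrative.

```shell
#!/bin/sh
# Quick sequential-write sanity check for an SD card. A rough stand-in
# for the Raspberry Pi diagnostics test, which also measures random I/O.
# Assumes GNU dd; the MB/s parse and the threshold are illustrative.
TESTFILE=./sdtest.tmp
MIN_WRITE_MB=10   # illustrative minimum acceptable sequential write, MB/s

# Write 100 MB with an fsync at the end, then pull the MB/s figure
# from dd's summary line (assumes GNU dd reporting in MB/s).
WRITE_MB=$(dd if=/dev/zero of="$TESTFILE" bs=1M count=100 conv=fsync 2>&1 \
  | awk '/copied/ {print int($(NF-1))}')
WRITE_MB=${WRITE_MB:-0}
rm -f "$TESTFILE"

echo "sequential write: ${WRITE_MB} MB/s"
if [ "$WRITE_MB" -lt "$MIN_WRITE_MB" ]; then
  echo "WARN: below ${MIN_WRITE_MB} MB/s threshold"
fi
```

Wrapped in a workflow that runs on each runner in turn, a check like this turns "one runner feels slow" into a number you can compare across the fleet.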
The other SD card lesson: 16GB is too small. The GitHub Actions runner cache fills up in less than a week of regular use. I have a scheduled workflow to free up space — it clears the npm cache, removes all Playwright browsers, then reinstalls the latest dependencies and pre-warms everything. It works, but it's a bit of a workaround for a storage problem. I've since bumped everything to 64GB cards, I still run the workflow weekly, and so far everything's running smoothly.
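My cleanup workflow is specific to my pipeline, but the shape of it looks roughly like this. A hedged sketch, assuming a Node/Playwright setup: `~/.cache/ms-playwright` is Playwright's default browser directory on Linux, and the guards let the script no-op on a machine without npm or a project checkout.

```shell
#!/bin/sh
# Sketch of a weekly disk-space cleanup for a Node/Playwright runner.
# Paths and commands are assumptions; adjust to your own pipeline.
if command -v npm >/dev/null 2>&1; then
  npm cache clean --force          # clear the npm cache
fi

# Remove all downloaded Playwright browsers (Playwright's default
# cache location on Linux).
rm -rf "$HOME/.cache/ms-playwright"

# Reinstall dependencies and pre-warm the browsers again, if there is
# a project checkout to install from.
if [ -f package.json ] && command -v npm >/dev/null 2>&1; then
  npm ci
  npx playwright install
fi
echo "cleanup complete"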
The Physical Setup
Six Pis sitting loose on a shelf with cables going everywhere is exactly as annoying as it sounds, so I designed a mount to keep things tidy. It's a 3D-printed mount that holds each Pi in place, with enough spacing for airflow and clean cable routing (power cable is supported, SD card is accessible from the top, ethernet cable is hidden underneath).
If you want to print one yourself, I've uploaded the STL files to Printables.
Everything is connected directly to the router via ethernet. No Wi-Fi. I briefly considered Wi-Fi for the tidiness of it, but I've had too many experiences with Wi-Fi dropouts causing mysterious CI failures, and the whole point of this thing is reliability. Ethernet cables aren't pretty, but they don't drop connections.
The full cluster sits inside an IKEA SMARRA box. It runs quietly, doesn't generate much heat, and sits in a corner where it's easy to ignore — which is exactly what you want from infrastructure.
What I Haven't Covered
Getting the Pis onto the network is the easy bit. Actually registering them as self-hosted GitHub Actions runners, keeping those runners healthy, and managing the runner environment across six machines is its own topic — one for another day.
The short version for the curious: GitHub provides a script you run on each machine, it registers itself in your repo's settings, and from that point on it just sits there waiting to pick up jobs. The initial setup is straightforward enough. It's everything that comes after — keeping them healthy, diagnosing npm cache issues, hunting down slow runners — where things get more interesting. I do have a workflow that reports stats across all runners — uptime, temperature, disk space remaining — which at least makes it easy to spot a machine that's quietly having a bad time.
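The stats workflow itself belongs in that future post, but the per-machine report is simple enough to sketch. The `thermal_zone0` path is standard on Raspberry Pi OS (`vcgencmd measure_temp` is a Pi-specific alternative); other distros may expose temperature differently.

```shell
#!/bin/sh
# Sketch of a per-runner health report: uptime, SoC temperature, free disk.
# The sysfs thermal path is standard on Raspberry Pi OS; hedge elsewhere.
HOST=$(hostname)

# Human-readable uptime; fall back to plain uptime where -p is unsupported.
UPTIME=$(uptime -p 2>/dev/null || uptime)

# The kernel reports temperature in millidegrees Celsius.
if [ -r /sys/class/thermal/thermal_zone0/temp ]; then
  TEMP_C=$(( $(cat /sys/class/thermal/thermal_zone0/temp) / 1000 ))
else
  TEMP_C="n/a"
fi

# Free space on the root filesystem.
DISK_FREE=$(df -h / | awk 'NR==2 {print $4}')

echo "$HOST | up: $UPTIME | temp: ${TEMP_C}C | disk free: $DISK_FREE"
```

Run on a schedule and collected into one job summary, a line per machine like this is usually enough to spot the runner that's quietly having a bad time.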
Next post: GitHub as Infrastructure — self-hosted runners, secrets management, and using GitHub Actions as the backbone of a daily data pipeline.