Originally published at https://qcserestipy.github.io on March 7, 2026.
Introduction
My name is Patrick and I work as a Senior HPC DevOps and AWS Cloud Engineer. My day-to-day work revolves around building and operating infrastructure platforms that enable developers and researchers to run complex workloads.
In practice this means building Kubernetes platforms that power CI environments, GPU pipelines, and automated workflows. These systems integrate technologies such as Kubernetes, ArgoCD, Harbor registries, Apache Airflow, autoscaling compute infrastructure with Karpenter, and observability platforms based on the Grafana ecosystem.
The common theme behind all of this work is automation. My personal engineering paradigm can be summarized in a single sentence:
Automation or nothing.
If a system cannot be rebuilt automatically, reproduced reliably, and operated without constant manual intervention, it is not finished. This philosophy did not emerge from theory. It emerged from frustration.
The Problem With Most Home Labs
Many engineers treat their home lab as a playground. A place where experiments happen quickly, configurations are modified manually, and problems are solved with one-off fixes. I made the same mistake early in my career.
One of my first personal infrastructure projects was a small home cloud based on Docker Swarm. It ran ownCloud for file storage and OpenVPN for remote access. In theory it was supposed to make my life easier by giving me control over my own data. In practice I spent more time fixing the system than using it.

Every time something broke, I had to rediscover how the system was configured. Containers had been changed manually, configuration files had diverged, and the environment slowly drifted away from anything reproducible. Instead of owning my infrastructure, my infrastructure owned me. That experience fundamentally changed how I approach systems today.
Infrastructure Should Be Self-Sufficient
In production environments we do not accept fragile systems.
Infrastructure must be:
- reproducible
- automated
- observable
- auditable
- scalable
When something fails, the goal is not to manually repair it. The goal is to rebuild it automatically. The same principle should apply to personal infrastructure. If my entire home lab disappeared tomorrow, recovery should be simple:
- Buy a new machine
- Install Docker
- Run a bootstrap script
From there the system should rebuild itself. The Kubernetes cluster should start automatically. ArgoCD should install itself. ArgoCD should then reconcile its own configuration and deploy every application in the platform. All configuration should live in Git repositories.
No manual configuration.
No hidden state.
No mystery infrastructure.
Just code.
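The recovery flow above can be sketched as a short bootstrap script. Everything here is illustrative: the cluster name, the manifest filename `root-app.yaml`, and the idea that a single root application hands control to GitOps are assumptions about how such a bootstrap might look, not this project's actual code.

```shell
#!/usr/bin/env bash
# Illustrative bootstrap sketch (requires Docker, kind, and kubectl).
set -euo pipefail

# 1. Create a local Kubernetes cluster with kind.
kind create cluster --name homelab

# 2. Install a vanilla ArgoCD into the cluster.
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# 3. Hand control to GitOps: apply a root Application that points at the
#    Git repository holding all platform configuration.
kubectl apply -n argocd -f root-app.yaml

# From here ArgoCD reconciles everything else, including itself.
```

After step 3 the script's job is done; all further changes flow through Git.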
Why Kubernetes?
My passion lies in building systems that can operate independently once they are correctly designed. Technologies that enable this naturally fascinate me. Some of my favorites include:
- Kubernetes
- ArgoCD and GitOps workflows
- Infrastructure as Code
- Observability platforms built around the Grafana ecosystem
These tools allow complex distributed systems to behave predictably. One of the most satisfying moments in infrastructure engineering is watching a system configure itself. For example, installing a vanilla ArgoCD instance and then applying an ArgoCD application that manages ArgoCD itself. Within minutes the platform begins mutating its own configuration and deploying new services automatically.
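The self-management pattern just described can be sketched as an ArgoCD Application whose source is the Git path containing ArgoCD's own configuration. The `repoURL` and `path` below are placeholders, not this project's actual repository:

```yaml
# Hypothetical self-managing Application -- repoURL and path are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-gitops.git
    targetRevision: main
    path: platform/argocd        # manifests and Helm values for ArgoCD itself
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                # delete resources removed from Git
      selfHeal: true             # revert manual drift automatically
```

Once applied, ArgoCD tracks its own configuration in Git: editing that path changes the ArgoCD installation itself.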
Observability adds another dimension to this experience. Dashboards, logs, metrics, and traces transform distributed systems into something understandable. Suddenly an entire platform becomes visible and measurable. You can see the system breathing.
The Goal of This Project
This blog documents an experiment:
How much production-style infrastructure can fit inside a home lab?
The environment is intentionally constrained. The entire platform runs on a single machine:
A Mac mini with an Apple M4 chip and 16 GB of memory.
Instead of relying on cloud infrastructure, the Kubernetes cluster runs locally using kind (Kubernetes in Docker). Running infrastructure locally introduces interesting constraints. Large production systems often rely on separate control planes, distributed storage, advanced networking, and managed cloud services. None of these luxuries exist in a minimal home lab environment. This project attempts to push those limits.
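As a rough sketch of what a single-machine kind setup can look like, the whole cluster can be declared in one small config file. The port mappings for ingress below are illustrative assumptions, not this project's actual values:

```yaml
# kind-config.yaml -- illustrative single-node cluster definition
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:           # expose ingress ports on the host machine
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
```

Such a file is passed to `kind create cluster --config kind-config.yaml`; control plane and workloads then share the one node.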
The goal is to implement as many production-grade practices as possible, including:
- GitOps workflows
- Kubernetes platform automation
- CI/CD infrastructure
- observability stacks
- ingress and service exposure
- infrastructure reproducibility
Whenever something cannot realistically be implemented in a home lab, I will explain how the same problem would typically be solved in a real production environment.
What You Can Expect From This Blog
Many technical blog posts show isolated configuration snippets and claim that a solution works. They often omit the details of how the system is actually built, how it operates, and how it can be reproduced. This style of documentation is frustrating. It leaves readers with more questions than answers. It creates a false impression that complex systems can be built with a few lines of code.
I strongly dislike that style of documentation. Instead, this project will publish everything required to reproduce the system:
- complete Git repositories
- Helm values
- Kubernetes manifests
- helper scripts
- cluster bootstrap code
Readers should be able to rebuild the entire platform themselves. The intention is not only to demonstrate a working system, but to document the reasoning behind architectural decisions, trade-offs, and limitations.
The Question
How much Kubernetes can you squeeze into a single machine?
How close can a personal home lab get to a real production platform?
And how far can we push automation before a system truly begins to operate on its own?
This blog is the attempt to find out.