From Kernel Isolation to Netbird protocols, a deep dive into building a modular foundation on the HP EliteDesk.
In my last article I mentioned starting my homelab. I did it. I went with Proxmox as the hypervisor so I could deploy multiple VMs and services as needed. The main decision after installing Proxmox was whether to use LXC containers, Docker containers, or VMs for deploying and managing services.
I chose to run Docker containers inside VMs instead of using LXC for a few reasons. Docker shares its host's kernel, but inside a VM that host kernel is the guest's, which provides defense-in-depth security: a kernel exploit escaping a container compromises only the guest VM rather than the entire physical host. VMs also enable proper live migration, snapshots, and disaster recovery, which LXC doesn't support well in Proxmox.
VMs let me segregate workloads into isolated environments with different kernel versions for specific needs, dedicate static resources to prevent noisy-neighbor problems, and create predictable resource boundaries that containers alone can't guarantee. This approach makes orchestration tools like Kubernetes easier to adopt later, and lets me treat the VM layer as infrastructure while Docker handles application delivery. Ultimately, I'm trading some performance for significantly better isolation, portability, and operational flexibility, which is ideal for a homelab.
For hardware passthrough, I passed the 2.5" hard drive through to a VM so I could keep all of my photos on it as cold storage, saving the NVMe drive for services with high-speed requirements.
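In Proxmox, attaching a whole physical disk to a VM can be done from the host shell with the `qm` tool. A rough sketch, where the VM ID (`100`), the SCSI slot, and the disk's by-id name are placeholders for your own values:

```shell
# Find the stable device path of the 2.5" drive
ls -l /dev/disk/by-id/

# Attach the whole disk to VM 100 as an extra SCSI device
# (ata-EXAMPLE_DISK is a placeholder; use your drive's actual ID)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK
```

Using the `/dev/disk/by-id/` path rather than `/dev/sdX` keeps the mapping stable across reboots, since device letters can change.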
With the VM layer sorted, the next challenge was remote access: being able to reach my homelab from anywhere. This came down to two options, Tailscale or Netbird. After exploring both, I found that Netbird is fully open source, built on protocols like ICE and gRPC, which I'm learning about; Tailscale is similar, but its control plane is a black box where it's not clear what is happening under the hood. Netbird is now set up on my laptop, phone, and the mini PC (server), creating a mesh across all the devices, so I can SSH into the server from any other network. It also lets me sync all my photos (and, once that's done, any new photos) to the server automatically via Immich.
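Joining a device to the mesh is a short sequence with the NetBird CLI. A sketch of the steps, assuming the official install script and the default management plane; the setup key is a placeholder you generate in the NetBird dashboard:

```shell
# Install the NetBird client (official install script)
curl -fsSL https://pkgs.netbird.io/install.sh | sh

# Join the mesh: interactive browser login, or non-interactive
# with a setup key generated in the management dashboard
netbird up --setup-key <SETUP_KEY>

# Verify the connection and list reachable peers
netbird status
```

Once each device has run `netbird up`, they can reach one another by their NetBird IPs regardless of which physical network they are on.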
Immich is a Google Photos alternative with local ML for face recognition and smart search. The backend is a Postgres database that handles more than just photo metadata: it also manages the vector embeddings the ML models generate, enabling semantic search across the entire library. The location feature, where Immich shows where a photo was taken on a map, suggests the PostGIS extension is used to store geospatial metadata as well. Every time I come across another use of Postgres, I'm reminded how much it does and how much its internals deserve further study.
When trying to set up local domains such as immich.homelab via Nginx Proxy Manager and AdGuard Home, I was reminded of the famous quote Jeff Geerling likes to repeat: "It's always DNS." That prompted me to discover DNS rewrites, which, once set up, decoupled my services from their IP addresses. This enables service discovery: the end user (me) only has to remember immich.home, while AdGuard handles the logic of pointing it at my reverse proxy.
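A DNS rewrite in AdGuard Home is just a domain-to-address mapping. A minimal sketch of what it looks like in the config file (it's also configurable in the web UI under Filters, DNS rewrites); the address is a placeholder for wherever your reverse proxy lives, and the exact section name can vary between AdGuard Home versions:

```yaml
# AdGuardHome.yaml fragment (hypothetical values)
filtering:
  rewrites:
    - domain: immich.home
      answer: 192.168.1.50   # placeholder: LAN IP of the reverse proxy
```

Every client using AdGuard as its DNS server then resolves immich.home to the proxy, and Nginx Proxy Manager routes the request to the right container based on the hostname.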
Everything I've deployed is an individual Docker Compose project rather than one monolithic Compose file, because I've adopted a microservice-style directory structure where each service is isolated in its own folder. To facilitate communication between these isolated "rooms," I use externally defined Docker networks. This setup taught me how supposedly isolated Docker containers actually talk to each other: I manually created a shared network using docker network create and declared it as external: true in each service's Compose file. This gives every service a door into the same hallway: containers can reach each other by name (Docker handles the internal DNS automatically, so http://my-service:3000 just works) without exposing anything to the outside world. It also made me realise that in the future, when I deploy the arr stack, grouping those services on the same network makes perfect sense; they're designed to talk to each other, and now I know how to support them.
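The external-network pattern boils down to one host command plus a small addition to each service's Compose file. A sketch with hypothetical names (the network `homelab` and the service `my-service` are placeholders):

```yaml
# Create the shared network once on the host:
#   docker network create homelab
#
# Then in each service's folder, e.g. my-service/docker-compose.yml:
services:
  my-service:                 # hypothetical service name
    image: example/my-service:latest
    networks:
      - homelab

networks:
  homelab:
    external: true            # network is managed outside this Compose file
```

Because the network is marked external, Compose attaches to it instead of creating a project-scoped one, so containers from different Compose projects land on the same network and can resolve each other by service name.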
TL;DR of the progress
Thanks for reading; I'll post updates as things move forward.