Many years back, the cheapo DVD player that my wife and I owned kicked the bucket, and I wasn't too keen on buying another one because I could already see the writing on the wall for physical media. So I downloaded and set up Plex Media Server on my gaming PC and began ripping my DVDs and serving them up to my Chromecast. Several years after that, my employer was upgrading workstations, so I paid $20 to bring home one of the old machines to take over Plex duties and run a few other things. Little did I know this would send me down a giant rabbit hole of learning, trials, and fun.
The journey to my current setup
When I got my first server, I knew I wanted to run Linux instead of Windows, both because it's lightweight and because I really wanted to learn more about Linux. So I installed the latest Ubuntu Server LTS, Plex Media Server, and ownCloud and called it good for a while. This worked well, as I could just copy any ripped media to the server via ownCloud.
But this left the server sitting idle most of the time, and there were other services I wanted to try out, like Home Assistant. So I started installing more things, but that left me with an uneasy feeling. Multiple Python versions were needed, Ubuntu upgrades were scary (a jump from PHP 5 to PHP 7 left me with lots of troubleshooting to get ownCloud working again after an upgrade), and I had no good backup scheme. So I began looking around for better solutions.
/r/homelab seemed to be big fans of virtualization, and Proxmox was often recommended since there's a free community edition. So I wiped the OS drive, installed Proxmox, and started creating LXC containers for each service and any related add-ons. This solved a lot of my issues: I no longer needed to worry about conflicting dependencies, I could back up a container before performing a major update to an application, and bind mounts allowed for easy sharing of data between services.
While I was happy with this solution and ran things this way for over two years, there were still some problems. The main one was that management of containers and applications was too isolated. I needed to create scripts on each container's OS to automate updates, and configuration was spread inconsistently across random folders on each container. This required lots of notes about where configuration was stored and what changes had been made to the defaults, and it just didn't seem optimal.
The solution
I know Docker has been around for a really long time now and isn't really in vogue in the tech sphere these days because of k8s and other solutions for managing/scaling distributed container ecosystems. But for a single home-server setup it seemed like the best option to me. I was also planning a hardware update, so while I waited for those pieces to come together I began building out a git repo with my docker infrastructure, any scripts I thought might be handy, and as much of my applications' configuration as possible.
When the time came, I spun up containers one by one to make sure I had everything configured correctly, and I was able to get everything dialed in with just a few hours of testing. For most services, I was able to just copy over the configuration files from the old server, bind-mount them into the container, and be up and running with little to no changes necessary. I also added some new services to take advantage of the extra hardware power available.
Repo structure
- docker-compose.yml - one compose to rule them all (a minimal sketch follows this list)
- .env - passwords and stuff I don't want stored in git
- .gitignore - files to exclude from source control like .env, auth files, and SQLite databases
- readme.md - notes
- apps/
  - appname/
    - init.sh - if I need to do any initializing like cloning a repo and copying files
    - build/ - if I need to build the image myself
      - Dockerfile
      - other build files
    - config/ - plain text configuration files that are mounted to the container
- scripts/
  - init.sh - install docker, docker compose, create folders, docker configuration
  - update.sh - update all containers and prune old images
  - backup.sh - backup volumes and other data to the backup directory
  - etc.
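To make that layout concrete, here's a minimal sketch of what the top-level docker-compose.yml might look like. The myapp service, its image, paths, port, and variable names are placeholders for illustration, not part of my actual stack; the real file repeats this kind of pattern once per application.

```yaml
version: "3.7"

services:
  # Hypothetical service -- the name, image, port, and paths are placeholders.
  myapp:
    image: someorg/myapp:latest
    restart: unless-stopped
    environment:
      # Compose substitutes ${...} values from the .env file in the repo root,
      # which keeps secrets out of git.
      - MYAPP_DB_PASSWORD=${MYAPP_DB_PASSWORD}
    volumes:
      # plain-text config is versioned in the repo and bind-mounted in
      - ./apps/myapp/config:/config
      # bulk application data lives on the data drive, outside the repo
      - /mnt/datadrive/apps/myapp:/data
    ports:
      - "8080:8080"
```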
Other directories outside the repo
- /mnt/datadrive/
  - apps/
    - docker/ - docker data here instead of the OS drive https://docs.docker.com/config/daemon/#docker-daemon-directory
    - appname/ - application data if not using a docker volume
  - backup/ - synced offsite via Duplicati
    - appname/ - files separated by application
- /mnt/mediadrive/media/
  - movies/
  - tvshows/
  - music/
  - etc.
The pieces that really make this system shine
Docker Compose is the main star of the show. Writing long commands to start a container with all of its volumes, ports, variables, and other configuration is ugly. Sure, you can script those commands, but Compose simplifies all the docker commands by letting you refer to services by simple names instead of IDs. It also makes updates really easy: it's as simple as docker-compose pull and then docker-compose up -d. This will pull any new images and restart only the containers with configuration changes or new images.
Caddy-Docker-Proxy is a container that also adds a lot of simplicity to configuring a reverse proxy. It allows you to define your Caddy configuration right from labels in your docker-compose.yml. Those in the know will say this sounds a lot like Traefik, and they wouldn't be wrong. I just think it's a little simpler to use and more flexible.
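As an illustration, a caddy-docker-proxy setup in the compose file might look something like this; the domain and the myapp service are placeholders, and the label syntax should be double-checked against the caddy-docker-proxy documentation.

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # caddy-docker-proxy watches the Docker socket for labelled services
      - /var/run/docker.sock:/var/run/docker.sock
      # certificates and other Caddy state
      - /mnt/datadrive/apps/caddy:/data

  # Hypothetical service being proxied; the domain is a placeholder.
  myapp:
    image: someorg/myapp:latest
    labels:
      caddy: myapp.example.com
      # "{{upstreams 8080}}" resolves to the container's address on port 8080
      caddy.reverse_proxy: "{{upstreams 8080}}"
```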
There are awesome backup utilities, like this one for mysql/mariadb, this one for postgres, and this one for Docker volumes. Along with Duplicati, which can also be run via Docker, this makes for incredibly simple data backups that are stored encrypted and off-site in case of an emergency. Since my configuration is also stored in Git, I could be back up and running after a hardware failure with very little effort.
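I won't guess at the exact images behind those links, but as an example of how the Duplicati piece can slot into the same compose file, here's a sketch based on the linuxserver.io Duplicati image; the image name, web UI port, and PUID/PGID variables are assumptions to verify against that image's docs.

```yaml
services:
  duplicati:
    # Image name, web UI port, and env vars are assumptions based on the
    # linuxserver.io Duplicati image -- verify against its documentation.
    image: linuxserver/duplicati:latest
    restart: unless-stopped
    environment:
      - PUID=1000   # run as a non-root user (see the UID/GID note below)
      - PGID=1000
    volumes:
      - /mnt/datadrive/apps/duplicati:/config
      # the local backup directory that gets encrypted and pushed off-site
      - /mnt/datadrive/backup:/source:ro
    ports:
      - "8200:8200"   # web UI
```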
Other notes
I have a small SSD boot drive and a ZFS mirror of 3TB HDDs for application data. That mirror also has an SSD cache drive to improve read speeds, but that's probably overkill for my situation. I also have a ZFS mirror of 8TB HDDs for media files. ZFS is awesome and I highly recommend it.
Both MariaDB and PostgreSQL support initialization scripts. When running a database instance for a single stack, you can simply set some environment variables for the default user, database, and password. Since I'm using a single instance of each database for multiple services, I use the initialization scripts to create the databases and users.
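For example, the official postgres image (mariadb works the same way) runs any scripts mounted into /docker-entrypoint-initdb.d when its data directory is first created. The paths below follow the apps/appname/config layout above, but the service definition itself is a sketch rather than my exact config.

```yaml
services:
  postgres:
    image: postgres:13
    restart: unless-stopped
    environment:
      # superuser password, pulled from .env
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      # *.sql and *.sh files here run once, when the data directory is first
      # initialized -- this is where the CREATE DATABASE / CREATE USER
      # statements for each service live
      - ./apps/postgres/config/initdb:/docker-entrypoint-initdb.d:ro
      - /mnt/datadrive/apps/postgres:/var/lib/postgresql/data
```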
If the container supports it, I suggest setting the UID and GID to your non-root user and group IDs. This is crucial for accessing shared files like media so you don't end up with files owned by root. It also keeps configuration readable and writable by your user; otherwise you have to run git commands with sudo.
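In compose terms that's usually one of two patterns, sketched below with placeholder images and example IDs (use the output of id for your own user).

```yaml
services:
  # Pattern 1: images that honour PUID/PGID environment variables
  # (linuxserver.io-style images do this)
  someapp:
    image: linuxserver/someapp:latest   # placeholder image
    environment:
      - PUID=1000
      - PGID=1000

  # Pattern 2: Compose's "user" option runs the main process as that uid:gid;
  # this only works if the image doesn't need to start as root
  otherapp:
    image: someorg/otherapp:latest      # placeholder image
    user: "1000:1000"
    volumes:
      - /mnt/mediadrive/media:/media
```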
Shoutouts to some awesome projects
In addition to the projects I've already mentioned, these projects are amazing.
- Nextcloud - File storage, office document editing, CalDav/CardDav, and more
- Node-RED - Flow-based programming, mainly for IoT
- Home Assistant Websocket Plugin for Node-RED - Use Node-RED for Home Assistant automation
- Bitwarden RS - Bitwarden server written in Rust
- Pi-hole - Ad-blocking DNS server
- Cloudflared - DNS over HTTPS client that's not tied to Cloudflare (route Pi-hole requests through this; see the sketch after this list)
- Archivy - I use it like Pocket to save web pages for later
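To show what routing Pi-hole through cloudflared looks like, here's a rough compose sketch. The cloudflared proxy-dns flags, the PIHOLE_DNS_ variable name, and the static IP are assumptions that can vary between image versions, so treat this as an outline rather than copy-paste config.

```yaml
networks:
  dns:
    ipam:
      config:
        - subnet: 172.30.0.0/24

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    # proxy-dns listens for plain DNS and forwards it upstream over HTTPS;
    # flag names here follow current cloudflared docs but may change
    command: proxy-dns --address 0.0.0.0 --port 5053
    networks:
      dns:
        ipv4_address: 172.30.0.2

  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8081:80"   # admin web UI; host port is arbitrary
    environment:
      # Variable name depends on the pihole image version (older tags used
      # DNS1/DNS2); point it at cloudflared's address and port.
      - PIHOLE_DNS_=172.30.0.2#5053
    networks:
      - dns
    depends_on:
      - cloudflared
```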