TL;DR: I created a self-hosted web UI tool for PostgreSQL/MySQL/MariaDB backups from Docker containers. Auto-discovers containers, runs dumps inside them (so pg_dump always matches your DB version), no credentials in config files. Free, MIT licensed, Linux only
One docker compose up - your databases, backed up
GitHub: https://github.com/nomad4tech/backup-manager
Docker Hub: https://hub.docker.com/r/nomad4tech/backup-manager
Short Demo: https://youtu.be/3rXkPmOpDNc
How I got here
I've used and seen others use bash scripts for backups. Usually it's a massive script, or a set of scripts and pipelines - at minimum: create a dump, compress it, upload to S3, delete old files. All wired up via cron or systemd.
I've watched this approach fail in ways that hurt:
- someone deleted the crontab ("crontab -r" instead of "crontab -e" is terrifyingly easy to mistype, and you won't even notice)
- someone deleted the backup script itself during a cleanup
- a dump failed silently because there wasn't enough disk space, or the database container was restarted mid-dump
In every case, we found out the same way: when a restore was actually needed, and the last backup was five months old.
I'm a Java developer and an aspiring vibe coder (not sure whether to be proud or embarrassed about that). At some point, in my free time, I decided to solve this for myself - it was supposed to be a simple JAR: a config file, a backup pipeline, notifications. That's it.
Then I added an API. Then a frontend. Then it kept going...
The app grew into a proper self-hosted tool. I played around with Docker Hub and GitHub, and at some point I wanted to take it further - to build something useful for the community. Something approachable, lightweight, and worth maintaining as an open-source project.
Why "just use pgBackRest/restic/Barman" wasn't my answer
Short answer: those are better tools for serious workloads. If you need incremental backups, WAL archiving, or point-in-time recovery - go use pgBackRest. It's excellent, and I have a lot to learn from it.
But my situation was simpler: multiple servers, multiple Docker-based projects, databases living in containers. I wanted to open a UI, create a task, and not think about it again.
- bash script - I write it, I maintain it, I have to remember rotation, error handling, disk space checks. I already did all that. Then I built this instead.
- restic - powerful, but no Docker-native workflow, no UI, and per-project setup.
- pgBackRest - its own config system, its own repository concept. Overkill for "give me a daily dump and email me if it fails."
Backup Manager is for people who want to click a button, not write a config.
The core idea: one place to manage all your backups
The original problem wasn't just "automate a dump." It was: multiple servers, multiple Docker containers, no single place to see what's backed up and what isn't.
The solution I wanted was simple - connect to any server, pick a container and DB, set a schedule, done. Everything in one UI. No SSH-ing into machines to check cron logs, no wondering if the script on server #14526 is still running.
The technical decision that makes this work cleanly: Backup Manager runs dumps inside the container via the Docker exec API, not on the host. Three things fall out of this naturally (a shell sketch follows the list):
1. pg_dump always matches your database version. No "client/server version mismatch" errors. Ever. The binary is the one that shipped with your database image.
2. No credentials in config files. You don't even need to know database credentials at all. PostgreSQL resolves $POSTGRES_USER/$PGPASSWORD from the container's environment. MySQL uses $MYSQL_ROOT_PASSWORD. The backup tool never sees them - they're resolved at dump time, inside the container.
3. Data streams directly to the app host. No temporary files on the database server. No memory buffering regardless of database size. For large databases this matters.
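To make that concrete, here's roughly what an exec-based dump boils down to if you did it by hand. Illustrative only - the container name, database name, and output path are placeholders, and the tool's actual command differs:

```sh
# Runs pg_dump inside the container (version-matched binary, credentials
# resolved from the container's own environment) and streams to the host:
docker exec my-postgres sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' \
  | gzip > ./backups/my-postgres-$(date +%F).sql.gz
```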
A few other things worth knowing
Auto-discovery. The app finds database containers automatically - you pick from a list, not a config file. Database size is shown in the selection screen.
Disk space pre-flight. Before every dump, free space is checked against 1.5× the size of the previous dump. If there isn't enough room, the dump won't start.
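A minimal sketch of that check in shell, assuming GNU coreutils and placeholder paths (the app does this internally; this only illustrates the arithmetic):

```sh
prev=$(stat -c %s ./backups/last-dump.sql.gz)       # size of the previous dump, in bytes
avail=$(df --output=avail -B1 ./backups | tail -1)  # free space on the backup volume, in bytes
# require at least 1.5x the previous dump size before starting
[ "$avail" -ge $((prev * 3 / 2)) ] || { echo "not enough disk space, skipping dump"; exit 1; }
```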
File rotation. Set how many backups to keep. Old files are deleted automatically after each successful backup.
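The same idea as a standalone shell sketch - the retention count and filename pattern are placeholders, not the tool's internals:

```sh
KEEP=7  # how many backups to retain (placeholder)
# newest first; everything past the first $KEEP gets deleted
ls -1t ./backups/mydb-*.sql.gz | tail -n +$((KEEP + 1)) | xargs -r rm --
```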
Remote servers via SSH. Add an SSH connection and all databases on that host appear in the same UI. The Docker socket is proxied through the SSH tunnel - it's never exposed directly to the network. This works the same way Portainer and Watchtower handle it.
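The underlying pattern is plain SSH unix-socket forwarding - roughly what you'd type by hand (user and host are placeholders; the tool manages the tunnel for you):

```sh
# Forward the remote Docker socket to a local one over SSH (OpenSSH 6.7+).
# Remove /tmp/remote-docker.sock first if it already exists.
ssh -nNT -L /tmp/remote-docker.sock:/var/run/docker.sock user@db-host &
DOCKER_HOST=unix:///tmp/remote-docker.sock docker ps
```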
S3 upload is optional. AWS, MinIO, Yandex Cloud, or any S3-compatible storage. If S3 is unavailable, the dump still runs and saves locally.
Email notifications on success and failure. Multiple recipients supported - useful when a backup task belongs to a shared server and more than one person needs to know if something breaks.
Heartbeat monitoring. The app pings healthchecks.io (or any compatible service) by schedule. If the app goes down - you get an alert. If the monitoring service itself goes down - Backup Manager emails you. Silence is a signal, not a green light.
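For reference, a healthchecks.io-style ping is just an HTTP GET to your check's URL (the UUID is a placeholder):

```sh
# success ping; the service alerts you if this stops arriving on schedule
curl -fsS --retry 3 https://hc-ping.com/your-check-uuid > /dev/null
```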
Deployment
```yaml
services:
  backup-manager:
    image: nomad4tech/backup-manager:latest
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
      - ./backups:/app/backups
    environment:
      - SPRING_PROFILES_ACTIVE=docker
```

```sh
docker compose up -d
```
Local Docker socket is detected automatically. Default login: admin / admin - change it immediately in Settings -> Account.
What else is included
- Gzip compression enabled by default, per task
- Test S3, email, and heartbeat integrations directly from the UI before saving
- Inline task editing - no need to recreate tasks to change a schedule
- Full backup history with details
- REST API with Swagger documentation (see the note after this list)
- Container re-creation via docker-compose down/up doesn't break tasks - containers are resolved by name, not ID
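On the API: the paths below are an assumption based on Spring Boot defaults with springdoc, not confirmed routes - the Swagger docs the app serves are authoritative:

```sh
# Assumed springdoc defaults - verify against your deployment's Swagger docs
curl http://localhost:8080/v3/api-docs   # machine-readable OpenAPI spec
# Swagger UI is typically served at http://localhost:8080/swagger-ui/index.html
```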
Limitations (being honest)
- Linux only - Windows and macOS are on the roadmap
- Early stage - tested on databases up to ~500 GB, but edge cases are still being found
- AI involvement - I used AI coding assistants to speed up development. However, all architectural decisions, core logic, and testing strategies are entirely mine
- A note on the UI - I'm a Java dev, not a frontend engineer. I'll be honest: the web interface was 100% generated by AI assistants. My role was defining the UX flow, API contracts, and wiring it all together. It's a testament to how far developer tools have come
- Single maintainer - that's me! Response times may vary depending on my availability, but feedback is always welcome
What's next
UI-based restore, backup encryption, webhooks, MongoDB support, incremental backups...
Why I'm sharing this now
This is the first self-hosted tool I've put out publicly. It's not perfect. But I use it on my own production servers, and it does exactly what I built it to do.
The best thing that can happen to it at this stage is real-world usage on setups I didn't anticipate - databases I haven't tested, network configs I haven't seen, edge cases I haven't hit.
If this looks useful - try it and break it. -> https://github.com/nomad4tech/backup-manager
Issues, stars, and feedback all help. If something breaks or doesn't work the way you expect - open an issue. That's genuinely the most valuable thing anyone can do for the project right now.


