Self-hosting is great because it gives you control.
You can run your own apps, keep your data closer to you, avoid some vendor lock-in, and learn how your stack actually works.
But there is a tradeoff: once you self-host, you are also responsible for the boring parts.
Exposed ports. Container defaults. Secrets. Backups. Updates. Reverse proxies. Databases.
A lot of self-hosted setups start small:
```yaml
services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
```
It works. The app is online. Everything feels fine.
But a working Docker Compose file is not always a safe Docker Compose file.
Here are some common security mistakes I keep seeing in self-hosted Docker Compose setups.
1. Exposing databases directly
A database usually does not need to be exposed to the public internet.
This is risky:
```yaml
services:
  db:
    image: postgres:16
    ports:
      - "5432:5432"
```
The same applies to services like:
- PostgreSQL: 5432
- MySQL / MariaDB: 3306
- Redis: 6379
- MongoDB: 27017
- Elasticsearch / OpenSearch: 9200
In many self-hosted stacks, the database only needs to be reachable by other containers on the same Docker network.
A safer pattern is often to avoid publishing the database port at all:
```yaml
services:
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data
  app:
    image: myapp:1.0.0
    depends_on:
      - db

volumes:
  db_data:
```
If you really need local access, bind to localhost instead of all interfaces:
```yaml
ports:
  - "127.0.0.1:5432:5432"
```
This is not a complete security solution, but it is usually safer than publishing the database broadly.
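Another option, if it fits your setup, is to put the database on an internal Docker network so it has no host connectivity at all. A sketch (service and network names are illustrative):

```yaml
services:
  app:
    image: myapp:1.0.0
    networks:
      - frontend
      - backend
  db:
    image: postgres:16
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true   # containers on this network cannot reach or be reached from outside
```

The app can still talk to the database over the `backend` network, but nothing outside the Compose stack can.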
2. Running privileged containers
This is another setting worth reviewing carefully:
```yaml
services:
  app:
    image: example/app:latest
    privileged: true
```
`privileged: true` gives a container much broader access to the host than most services need.
Sometimes it is required. Many times it is not.
If a container asks for privileged mode, it is worth asking:
- Why does this service need it?
- Can I use specific capabilities instead?
- Is there a documented reason?
- Is this image trusted?
- Is this service exposed publicly?
Privileged containers are not automatically bad, but they should not be invisible.
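When a service only needs one or two specific abilities, granting individual Linux capabilities is often enough. A sketch, assuming a hypothetical service that needs raw network access:

```yaml
services:
  app:
    image: example/app:latest
    cap_drop:
      - ALL        # start from nothing
    cap_add:
      - NET_ADMIN  # grant only what the service actually needs
```

Which capability (if any) is required depends on the service; its documentation is the place to check.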
3. Using network_mode: host without thinking
Host networking can be useful, but it also removes some of Docker's network isolation.
```yaml
services:
  app:
    image: example/app:latest
    network_mode: host
```
With host networking, the container shares the host network namespace.
That can make port exposure harder to reason about, especially in a homelab where services are added over time.
Before using host networking, check:
- Does this service actually require it?
- Which ports does it open?
- Is it behind a reverse proxy?
- Is it only reachable over a VPN or private network?
- Would a normal Docker network work instead?
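In many cases, an ordinary bridge network with one explicit port mapping achieves the same result while keeping isolation. A sketch:

```yaml
services:
  app:
    image: example/app:latest
    ports:
      - "127.0.0.1:8080:8080"   # one explicit, auditable mapping instead of the whole host namespace
```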
4. Running containers as root
Many containers run as root by default.
If your Compose file does not specify a user, it may be worth checking whether the image supports non-root execution.
```yaml
services:
  app:
    image: example/app:1.0.0
```
A more explicit setup might look like this:
```yaml
services:
  app:
    image: example/app:1.0.0
    user: "1000:1000"
```
This is not always possible, and some images need extra configuration. But if a service can run as a non-root user, that is usually worth considering.
5. Putting secrets directly in docker-compose.yml
This is easy to do:
```yaml
services:
  app:
    image: example/app:1.0.0
    environment:
      API_KEY: "super-secret-key"
      DATABASE_PASSWORD: "password123"
```
It is also easy to forget about.
Inline secrets can end up in:
- Git history
- shared snippets
- support requests
- screenshots
- public GitHub repositories
- copied backups
A better pattern is to avoid hardcoding sensitive values directly in the Compose file.
Depending on your setup, you might use:
- `.env` files with proper permissions
- Docker secrets
- a secrets manager
- environment injection from your deployment system

Even then, be careful not to commit `.env` files.
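A common minimal pattern is to reference variables in the Compose file and keep the actual values out of it. A sketch:

```yaml
services:
  app:
    image: example/app:1.0.0
    environment:
      # Values come from the shell environment or an untracked .env file
      API_KEY: ${API_KEY}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
```

Compose reads a `.env` file next to the Compose file automatically; keep it out of version control and restrict its permissions (for example `chmod 600 .env`).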
6. Using latest everywhere
This is common:
```yaml
services:
  app:
    image: myapp:latest
  db:
    image: postgres:latest
```
The problem is that `latest` is not a version. It is a moving target.
This can be especially risky for stateful services like databases.
A safer pattern is to pin versions:
```yaml
services:
  db:
    image: postgres:16.2
  redis:
    image: redis:7.2.4
```
You still need to update, but now updates are intentional instead of accidental.
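If you want an even stronger guarantee, you can pin to an image digest as well as a tag. The digest below is a placeholder, not a real value; `docker image inspect` shows the actual one:

```yaml
services:
  db:
    # Tag for readability, digest for immutability (placeholder digest)
    image: postgres:16.2@sha256:<digest-from-docker-image-inspect>
```

A digest pin means the image cannot silently change underneath the tag, at the cost of updating the digest by hand.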
7. No visible backup strategy
If your Compose file has persistent volumes, there is probably data worth protecting.
```yaml
services:
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```
A Compose file cannot tell the whole backup story.
But when there are database volumes and no visible backup service, no backup documentation, and no restore-test process, it is a signal to slow down and check.
A good backup plan should answer:
- What data is backed up?
- Where is it backed up to?
- How often?
- Is it encrypted?
- Has restore been tested?
- Who knows how to recover it?
Backups are not real until restore has been tested.
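One lightweight pattern is a sidecar service that dumps the database on a schedule. A sketch, assuming the Postgres service is named `db`, the database is called `mydb`, and the password comes from the environment:

```yaml
services:
  backup:
    image: postgres:16
    depends_on:
      - db
    volumes:
      - ./backups:/backups
    environment:
      PGPASSWORD: ${POSTGRES_PASSWORD}
    # $$ escapes $ for Compose; a real setup also rotates, encrypts,
    # and ships dumps off the host
    entrypoint: >
      sh -c 'while true; do
      pg_dump -h db -U postgres mydb > /backups/dump-$$(date +%F).sql;
      sleep 86400; done'
```

And remember: a dump sitting on the same disk as the database is only half a backup.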
8. Assuming a reverse proxy makes everything safe
Reverse proxies like Traefik, Caddy, Nginx Proxy Manager, SWAG, and others are useful.
But they can also make exposure harder to understand.
A service might be:
- internal only
- bound to localhost
- directly exposed
- exposed through a reverse proxy
- accessible only over VPN
- accidentally exposed through an old port mapping
The important thing is not just:
Do I have a reverse proxy?
The important thing is:
Do I understand which services are reachable, from where, and why?
A simple review checklist
Before exposing a self-hosted Docker Compose stack, I like to check:
- Are any databases published to the host?
- Are any admin panels exposed?
- Are any services using `privileged: true`?
- Are any services using `network_mode: host`?
- Are containers running as root?
- Are secrets hardcoded?
- Are images pinned to specific versions?
- Are persistent volumes backed up?
- Are restore tests documented?
- Do I know what is public, private, and internal?
This does not replace a full security audit, but it catches a lot of easy-to-miss issues.
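Parts of that checklist can be automated with a few lines of script. A minimal line-based sketch (a real Compose file should be parsed as YAML, so treat this as illustrative only):

```python
import re

def audit(compose_text: str) -> list[str]:
    """Flag a few easy-to-miss risks in a Compose file, line by line."""
    risky_ports = {"5432", "3306", "6379", "27017", "9200"}
    findings = []
    for n, line in enumerate(compose_text.splitlines(), 1):
        s = line.strip()
        if re.match(r"privileged:\s*true", s):
            findings.append(f"line {n}: privileged container")
        if re.match(r"network_mode:\s*host", s):
            findings.append(f"line {n}: host networking")
        img = re.match(r"image:\s*(\S+)", s)
        if img and (img.group(1).endswith(":latest") or ":" not in img.group(1)):
            findings.append(f"line {n}: unpinned image {img.group(1)}")
        # Matches mappings like "5432:5432" or "127.0.0.1:5432:5432"
        port = re.match(r'-\s*"?([\d.]+:)?(\d+):(\d+)"?$', s)
        if port and port.group(2) in risky_ports and port.group(1) != "127.0.0.1:":
            findings.append(f"line {n}: database port {port.group(2)} published")
    return findings
```

Running it over the first example in this post would flag the `latest` tags and the published Postgres port.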
Why I built DockAudit
I built DockAudit to make this kind of lightweight review easier.
DockAudit is an open-source security auditor for self-hosted Docker Compose stacks.
It scans docker-compose.yml files and highlights risky settings like:
- exposed databases and admin panels
- privileged containers
- host networking
- containers running as root
- inline secrets
- unpinned images
- missing backup hints
It runs locally and does not send your Compose files anywhere.
The goal is not to replace a full security audit. It is a small, local-first tool for catching common self-hosted Docker Compose risks before they become incidents.
GitHub:
If you run self-hosted Docker Compose stacks, I would love feedback on what checks would be useful.
And if you find it useful, a GitHub star would help a lot.