Running databases on servers is easy.
Running them securely is where most setups fail.
In this guide, you’ll learn how to:
- Install Docker the right way
- Lock down Docker so it doesn’t weaken your server
- Run PostgreSQL in Docker without exposing it to the internet
- Access the database securely from your local machine
- Avoid common security foot-guns
This setup is ideal for:
- single servers
- side projects
- SaaS MVPs
- production environments that value safety over shortcuts
Prerequisites
- Ubuntu 22.04 / 24.04
- A non-root admin user (e.g. `dev`) with `sudo`
- SSH key-based login
- Firewall enabled (UFW)
Important rule:
Never run Docker or databases directly as `root` for daily work.
Part 1 — Installing Docker Securely
Why not use `apt install docker.io`?
Because:
- it's often several versions behind upstream
- it doesn't ship Docker's current defaults or the bundled buildx/compose plugins
- Docker's own documentation recommends installing from Docker's repository for production
We’ll use Docker’s official repository instead.
Step 1: Remove old Docker packages (safe)
sudo apt remove -y docker docker-engine docker.io containerd runc
Step 2: Install required dependencies
sudo apt update
sudo apt install -y ca-certificates curl gnupg
Step 3: Add Docker’s official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Step 4: Add Docker’s repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Step 5: Install Docker Engine
sudo apt update
sudo apt install -y \
  docker-ce \
  docker-ce-cli \
  containerd.io \
  docker-buildx-plugin \
  docker-compose-plugin
Step 6: Verify Docker installation
sudo docker version
sudo docker run --rm hello-world
If you see “Hello from Docker!”, Docker is working.
Step 7: Allow your admin user to use Docker
sudo usermod -aG docker dev
Log out and log back in:
exit
ssh dev@SERVER_IP
Verify:
docker ps
⚠️ Do NOT add application or service users to the docker group
The docker group is effectively root access.
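To double-check who actually has that power, you can audit the group's membership with standard tools. A small sketch (assumes a Linux box with `getent`; `dev` is the example admin user from this guide):

```shell
# List members of the docker group — each one is effectively root.
# Prints "none" if the group is empty or doesn't exist yet.
DOCKER_MEMBERS=$(getent group docker | cut -d: -f4)
echo "docker group members: ${DOCKER_MEMBERS:-none}"
```

If anything other than your admin user shows up here, remove it with `sudo gpasswd -d USERNAME docker`.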
Part 2 — Locking Down Docker (Very Important)
By default, Docker gives containers more power than you want.
Create Docker’s daemon config:
sudo nano /etc/docker/daemon.json
Paste this:
{
  "icc": false,
  "live-restore": true,
  "no-new-privileges": true,
  "userns-remap": "default",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
What this does (in plain English)
- Prevents containers from escalating privileges
- Maps container root → unprivileged host user
- Prevents containers from freely talking to each other
- Limits log size so disk can’t be filled
- Keeps containers running during Docker restarts
Restart Docker:
sudo systemctl restart docker
Verify user namespace remapping:
docker info | grep -i userns
Part 3 — Installing PostgreSQL Securely in Docker
Why Docker for databases?
Docker databases are great if:
- data lives in volumes
- ports are not exposed publicly
- backups exist
We’ll follow those rules.
Step 1: Create a Docker volume for Postgres data
docker volume create pgdata
Step 2: Run PostgreSQL 18 (correct way)
PostgreSQL 18 changed how data directories work.
You must mount `/var/lib/postgresql`, not `/var/lib/postgresql/data`.
docker run -d \
  --name postgres \
  --restart unless-stopped \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=STRONG_PASSWORD_HERE \
  -e POSTGRES_DB=appdb \
  -v pgdata:/var/lib/postgresql \
  -p 127.0.0.1:5432:5432 \
  postgres:18
Why this is secure
- Database port is bound to localhost only
- No public exposure
- Data is persisted safely
- PostgreSQL runs as a non-root user internally
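If you prefer a reproducible file over a long `docker run` command, the same container can be described in Compose form. A sketch equivalent to the command above (same names and values, reusing the `pgdata` volume created earlier):

```yaml
# docker-compose.yml — Compose equivalent of the docker run command above
services:
  postgres:
    image: postgres:18
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: STRONG_PASSWORD_HERE
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql   # PostgreSQL 18 layout, as above
    ports:
      - "127.0.0.1:5432:5432"        # localhost only — never "5432:5432"
volumes:
  pgdata:
    external: true   # reuses the volume from `docker volume create pgdata`
```

Start it with `docker compose up -d` from the directory containing the file.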
Step 3: Verify Postgres is running
docker ps
docker logs postgres
You should see:
database system is ready to accept connections
Step 4: Verify database access
docker exec -it postgres psql -U appuser -d appdb
Inside Postgres:
SELECT version();
\q
Part 4 — Verify Database Is NOT Public
This step is critical.
ss -tulpn | grep 5432
Expected output:
127.0.0.1:5432
❌ If you see `0.0.0.0:5432`, stop and fix it.
Firewall check:
sudo ufw status
You should not see port 5432 listed. Keep in mind that Docker publishes ports through its own iptables rules, which bypass UFW entirely — binding to `127.0.0.1` (as we did above) is what actually keeps the port private, not the firewall.
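You can turn this check into a one-shot script, e.g. for a cron job or a deploy hook. A sketch using `ss` (degrades to "OK" if `ss` is unavailable):

```shell
# Count listening TCP sockets that bind port 5432 on a public interface.
# 127.0.0.1:5432 is fine; 0.0.0.0, [::] or * mean publicly reachable.
PUBLIC=$(ss -tln 2>/dev/null | grep -cE '(0\.0\.0\.0|\[::\]|\*):5432\b')
PUBLIC=${PUBLIC:-0}
if [ "$PUBLIC" -gt 0 ]; then
  echo "DANGER: port 5432 is bound to a public interface"
else
  echo "OK: port 5432 is not publicly bound"
fi
```

Exit the script with a non-zero status in the DANGER branch if you want monitoring to alert on it.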
Part 5 — Secure Access from Your Local Machine
Never expose database ports publicly.
Use SSH tunneling instead.
From your local machine:
ssh -N -L 5432:127.0.0.1:5432 dev@SERVER_IP
Leave this terminal open.
Now connect using any DB client:
| Setting | Value |
|---|---|
| Host | 127.0.0.1 |
| Port | 5432 |
| User | appuser |
| Password | your password |
| Database | appdb |
This works with:
- TablePlus
- DBeaver
- pgAdmin
- psql
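For clients that take a single connection string instead of separate fields, the table above collapses into one URL. A sketch (the host is the *local* end of your SSH tunnel, not the server's IP):

```shell
# One-line form of the settings table; psql and most GUIs accept it.
# 127.0.0.1:5432 is the local end of the SSH tunnel.
DB_URL="postgresql://appuser@127.0.0.1:5432/appdb"
echo "$DB_URL"
# e.g. psql "$DB_URL"  — psql will prompt for the password
```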
Encrypted. Private. Safe.
Common Mistakes to Avoid
❌ Exposing DB ports on `0.0.0.0`
❌ Running Docker as root for daily work
❌ Running databases without volumes
❌ Skipping backups
❌ Treating Docker itself as a security boundary
Final Thoughts
This setup gives you:
- Secure Docker installation
- Hardened container defaults
- PostgreSQL 18 with future-safe layout
- Zero public database exposure
- Industry-standard access pattern
You don’t need Kubernetes or managed services to be secure —
you just need discipline and correct defaults.
What to do next
- Add automated backups
- Add resource limits to Postgres
- Use docker-compose for reproducibility
- Monitor disk usage
- Periodically audit exposed ports
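The first item can start as a small nightly script. A hedged sketch (assumes the container name `postgres`, user `appuser`, and database `appdb` from this guide; rotation keeps the seven newest dumps):

```shell
#!/bin/sh
# Nightly logical backup with simple rotation (keep the 7 newest dumps).
set -eu
BACKUP_DIR="${BACKUP_DIR:-$HOME/pg-backups}"
mkdir -p "$BACKUP_DIR"
STAMP=$(date +%Y%m%d-%H%M%S)

# Dump through docker exec, so no port ever needs to be exposed.
# Skipped gracefully when the postgres container isn't running.
if docker ps --format '{{.Names}}' 2>/dev/null | grep -qx postgres; then
  docker exec postgres pg_dump -U appuser appdb | gzip > "$BACKUP_DIR/appdb-$STAMP.sql.gz"
fi

# Rotation: delete everything beyond the 7 newest dumps.
ls -1t "$BACKUP_DIR"/appdb-*.sql.gz 2>/dev/null | tail -n +8 | xargs -r rm -f
```

Drop it into `/etc/cron.daily/` (or a systemd timer), and remember: a backup you've never restored is not a backup — test the restore path too.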
If you follow this guide, your server will already be more secure than most production setups.
Happy shipping 🚀