Part 2 of 7 — Self-hosting Supabase: a learning journey
Also available in French: Partie 2 — Le serveur
This post covers creating the server, locking it down, and getting Docker running correctly. None of it is complicated, but there are a few places where the obvious choice is the wrong one.
Create the server
Go to hetzner.com, create an account, and navigate to Cloud > New Server.
Choose Ubuntu 24.04 LTS on a CX22 (2 vCPU, 4 GB RAM). Pick a datacenter in the EU if that matters to you. Add your SSH public key during setup. Do not set a root password.
Click Create. In about 30 seconds you have a server. Note the IP address. We will call it YOUR_VPS_IP throughout the series.
First login
ssh root@YOUR_VPS_IP
Update the system before doing anything else:
apt update && apt upgrade -y
Set a hostname:
hostnamectl set-hostname supabase-vps
Firewall
Ubuntu ships with ufw. Configure the rules before enabling the firewall, and apply them in this order:
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
Verify the result:
ufw status
Only ports 22, 80, and 443 should be open.
One thing to know about Docker and ufw: Docker writes iptables rules directly, bypassing ufw entirely. If you publish a port in a Docker Compose file with ports:, that port is reachable from the internet even when ufw says otherwise. For this reason we will never use ports: on the database service. ufw is the outer gate, but it cannot guard a port Docker has published, so the database port stays unpublished entirely.
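As an illustration (the service names here are placeholders; the real compose files come in Post 4), the safe pattern is to give the database no ports: section at all and let other services reach it over an internal Docker network:

```yaml
services:
  db:
    image: postgres:15
    # No ports: section. The database is reachable only by containers
    # on the internal network, never from the internet.
    networks:
      - internal

  # If you ever need host access for a one-off psql session, bind
  # explicitly to loopback instead of publishing 5432 to the world:
  #   ports:
  #     - "127.0.0.1:5432:5432"

networks:
  internal:
```

A 127.0.0.1 binding keeps the port off the public interface even though Docker still bypasses ufw for it.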
SSH hardening
nano /etc/ssh/sshd_config
Set these:
PermitRootLogin prohibit-password
PasswordAuthentication no
PubkeyAuthentication yes
prohibit-password allows root login but only with a key, never a password. Since we added our key at server creation, this is fine.
Restart SSH:
systemctl restart ssh
Before closing your current session, open a new terminal window and verify you can still log in. Always test SSH changes with a parallel session. If you do get locked out, Hetzner provides a rescue console in the web interface, but it is much more comfortable not to need it.
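It is also worth sanity-checking the file before restarting (sshd -t validates the syntax). A minimal sketch of a check, as a shell function (a hypothetical helper, not part of the repo; it only matches directives written exactly as above, at the start of a line):

```shell
# check_sshd_directives: confirm the three hardening directives are
# present in the given config file, printing OK or MISSING for each.
check_sshd_directives() {
  conf="$1"
  for directive in "PermitRootLogin prohibit-password" \
                   "PasswordAuthentication no" \
                   "PubkeyAuthentication yes"; do
    # -x matches the whole line, -F treats the directive as a fixed string
    if grep -qxF "$directive" "$conf"; then
      echo "OK: $directive"
    else
      echo "MISSING: $directive"
    fi
  done
}

# Usage: check_sshd_directives /etc/ssh/sshd_config
```

Three OK lines plus a clean sshd -t, and the restart is safe.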
fail2ban
fail2ban reads auth logs and bans IP addresses that fail too many times. The defaults are good enough for us:
apt install fail2ban -y
systemctl enable fail2ban
systemctl start fail2ban
No configuration file changes are needed. The SSH jail is active by default.
With the default configuration, five failed SSH attempts from one IP address within ten minutes trigger a ten-minute ban. That sounds short, but it is enough to stop automated scanners probing for weak passwords from making any progress, and you can raise bantime later if you want longer bans.
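To confirm the jail is live, and later to see who it has banned, fail2ban ships a client:

```shell
fail2ban-client status        # lists active jails; sshd should appear
fail2ban-client status sshd   # failure counts and currently banned IPs
```

On a fresh server the banned list is empty; check back in a day and it usually is not.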
Installing Docker
Here is the first place where the obvious choice is wrong.
Ubuntu's package manager has a Docker package. It is outdated. There is also a Docker Snap package. It has known issues with file permissions, volume paths, and service restart behavior in production environments. Do not use either one.
Install Docker from Docker's official apt repository:
apt remove docker docker-engine docker.io containerd runc 2>/dev/null
apt install -y ca-certificates curl gnupg lsb-release
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| tee /etc/apt/sources.list.d/docker.list > /dev/null
apt update
apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify it installed:
docker --version
# Docker version 29.x.x, build ...
The version number matters. We need a recent Docker Engine for Traefik v3 to work correctly. I discovered this the hard way after losing an afternoon on the problem. This is explained in the next post.
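If you want to double-check that apt resolved docker-ce from Docker's repository rather than Ubuntu's archive, ask apt where the package came from:

```shell
apt-cache policy docker-ce
# The "Installed" and "Candidate" versions should match, and the
# listed source should be download.docker.com, not archive.ubuntu.com.
```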
Docker Swarm
We are using Docker Swarm rather than plain docker compose. On a single machine this might seem unnecessary, but Swarm gives us resource limits per service (critical when running 17 containers on 4 GB RAM) and automatic restart on container failure.
Initialize it:
docker swarm init
Your server is now a single-node Swarm manager. That is the complete setup.
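You can confirm the node's state before moving on:

```shell
docker info --format '{{.Swarm.LocalNodeState}}'   # should print: active
docker node ls   # one node, with MANAGER STATUS "Leader"
```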
Clone the repository
All configuration lives in a git repository. Create a private repo on GitHub, then clone it on the VPS:
cd /root
git clone https://github.com/YOUR_USERNAME/YOUR_REPO.git supabase-vps-cluster
cd supabase-vps-cluster
The repository is empty at this point. You will add the compose files and scripts in the following posts. What matters now is having the directory in place and the connection to git established.
Keeping config in git means you can deploy changes by pushing to the repo and pulling on the server, review diffs before applying them, and roll back if something breaks.
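That workflow can be sketched as a small script (hypothetical; the repo path and the per-instance compose layout are placeholders until the files actually exist in later posts):

```shell
#!/bin/sh
# deploy.sh — pull the latest config and redeploy one stack (sketch)
set -eu
REPO_DIR="${REPO_DIR:-/root/supabase-vps-cluster}"
STACK="${1:?usage: deploy.sh <stack-name>}"

cd "$REPO_DIR"
git pull --ff-only   # refuse surprise merge commits on the server
docker stack deploy \
  -c "instances/$STACK/docker-compose.yml" \
  "$STACK"
```

The --ff-only flag means the server never creates its own history; if the pull fails, someone pushed something that needs a human look first.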
The structure we are building:
supabase-vps-cluster/
├── instances/
│ ├── project1/
│ │ └── docker-compose.yml
│ └── project2/
│ └── docker-compose.yml
├── traefik/
│ └── docker-compose.yml
└── scripts/
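You can create this skeleton now so the later posts have somewhere to land (project1 and project2 are placeholder names; .gitkeep files are just a convention for committing empty directories):

```shell
# From inside the cloned repo: create the directory skeleton
mkdir -p instances/project1 instances/project2 traefik scripts

# Git does not track empty directories, so add placeholder files
touch instances/project1/.gitkeep instances/project2/.gitkeep \
      traefik/.gitkeep scripts/.gitkeep
```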
Secrets (the .env files, the Kong configuration) are never committed to git. They are generated on the server from Vault, which we set up in Post 5.
Where we are
The server is running, locked down to three ports, with SSH key-only authentication and brute-force protection. Docker CE is installed from the official source and Swarm is initialized.
In the next post, we install Traefik and get HTTPS working for every subdomain.
The full series
- Why we are building this
- The server (you are here)
- Traefik and SSL
- The first Supabase instance
- Vault
- Two instances
- Security and the load test