Part 1 of 7 — Self-hosting Supabase: a learning journey
Also available in French: Partie 1 — Pourquoi auto-héberger Supabase (Why self-host Supabase)
Supabase's free tier gives you two active projects: Postgres, authentication, storage, real-time subscriptions, a dashboard. All free. It works well. The problem is that it works so well you stop thinking about what is actually happening.
When you click "Create project," a complete backend appears in about 30 seconds. You get a database connection string, an API URL, a set of keys. You paste them into your app and it works. This is convenient, but it is also a kind of blindness. You cannot reason about a system you do not understand. You cannot estimate its limits, predict its failure modes, or debug it with confidence. You just hope it keeps working.
I have been using Supabase for a few months and wanted to understand it better. So I decided to build the same thing from scratch on a cheap server, see every piece of it, and make all the mistakes myself. Not to save money: the managed service is fairly priced, and I will make that case at the end of this series. The goal was to learn.
This series documents that process.
What Supabase is
Supabase is not a single program. It is eight open-source projects assembled together and run behind an API gateway:
- PostgreSQL -- the database
- GoTrue -- authentication server (signup, login, JWTs)
- PostgREST -- generates a REST API from your Postgres schema
- Realtime -- WebSocket server for live database updates
- Storage -- file upload service with an S3-compatible backend
- Kong -- API gateway that fronts all of the above
- Studio -- the web dashboard
- postgres-meta -- provides the schema introspection that Studio uses
Each of these is a separate process, running in its own container, with its own configuration, its own version history, its own set of bugs. Supabase connects them, configures them, and operates them. When you use the managed service, that work is invisible.
When you self-host, it is very visible.
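To make the assembly concrete, here is a trimmed sketch of how those containers might be declared in a Compose file. The image names follow the official self-hosting stack, but treat the whole thing as illustrative rather than a working configuration: the real file pins versions and sets dozens of environment variables per service.

```yaml
# Illustrative only -- the official docker-compose.yml configures far more.
services:
  db:
    image: supabase/postgres        # PostgreSQL with Supabase extensions
  auth:
    image: supabase/gotrue          # signup, login, JWT issuance
  rest:
    image: postgrest/postgrest      # REST API generated from the schema
  realtime:
    image: supabase/realtime        # WebSocket change feeds
  storage:
    image: supabase/storage-api     # file uploads
  meta:
    image: supabase/postgres-meta   # schema introspection for Studio
  studio:
    image: supabase/studio          # the web dashboard
  kong:
    image: kong                     # gateway in front of everything
    ports:
      - "8000:8000"
```

Eight services, one stack. Every line you would normally never see is now your responsibility.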
What you will build
By the end of this series you will have a working cluster with:

- two fully isolated Supabase instances on a single server
- automatic HTTPS for every subdomain
- secrets stored in HashiCorp Vault rather than in files
- runtime intrusion detection via Falco, an eBPF-based tool that watches what happens inside containers at the operating-system level
- a load test that shows exactly where the server starts to struggle
More importantly, you will have a clear picture of what the managed service is doing on your behalf.
```
                Internet
                    |
            +---------------+
            |    Traefik    |  TLS, routing
            +-------+-------+
                    |
           +--------+--------+
           |                 |
   +---------------+ +---------------+
   |   Project 1   | |   Project 2   |
   |  -----------  | |  -----------  |
   |     Kong      | |     Kong      |
   |    GoTrue     | |    GoTrue     |
   |   PostgREST   | |   PostgREST   |
   |   Realtime    | |   Realtime    |
   |    Storage    | |    Storage    |
   |    Studio     | |    Studio     |
   |   Postgres    | |   Postgres    |
   +---------------+ +---------------+

            +---------------+
            |     Vault     |  secrets, localhost only
            +---------------+
```
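Traefik's job in this picture, one router per subdomain with TLS from Let's Encrypt, is driven entirely by labels on each project's Kong service. A hedged sketch of what those labels could look like in Swarm mode; the router name, domain, and certificate resolver name are placeholders, not values from this series:

```yaml
# Hypothetical Traefik labels on one project's Kong service (Swarm mode,
# so labels live under `deploy`). Replace names and domain with your own.
deploy:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.project1.rule=Host(`project1.example.com`)"
    - "traefik.http.routers.project1.tls.certresolver=letsencrypt"
    - "traefik.http.services.project1.loadbalancer.server.port=8000"
```

Part 3 covers the real configuration; the point here is only that routing and TLS are declared next to the service, not in a central file.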
Who this is for
This series is for developers who learn by building. You need to be comfortable with a terminal. You do not need prior experience with Docker Swarm, Vault, or Traefik. Everything is explained from the beginning.
Expect several evenings of work. I spent more time than I planned, partly because of mistakes (which I will document), and partly because each component turned out to be more interesting than I expected.
If you are running a production application with real users and real money, please use the managed service. It is not wise to rely on a single hobbyist server for something important. Self-hosting means you are the one waking up when something breaks. This series is for learning, not for replacing a service you depend on.
The hardware
The server is a Hetzner CX22: 2 vCPU, 4 GB RAM, 40 GB SSD, located in Falkenstein, Germany. It costs EUR 4.51 per month.
Hetzner is a German hosting company. I chose it for cost and because data residency in the EU matters if your users are in the EU.
The CX22 sounds small, and for a managed cloud offering it is. But self-hosted Supabase at rest uses surprisingly little memory:
| Service | Memory at rest |
|---|---|
| PostgreSQL | ~77 MB |
| Kong | ~229 MB |
| GoTrue | ~8 MB |
| PostgREST | ~15 MB |
| Realtime | ~168 MB |
| Studio | ~170 MB |
| Traefik | ~30 MB |
| Vault | ~140 MB |
Two complete Supabase instances fit in roughly 2.2 GB. The server has 4 GB.
These numbers were measured at idle. Part 7 shows what happens under load.
Under a realistic load test (50 concurrent users with think time between actions), the database CPU peaked at 0.67%. The server had almost nothing to do.
What I got wrong
Here is a preview of the mistakes in this series:
I chose the wrong version of Traefik. It did not work with the version of Docker I had installed. I spent an afternoon figuring out why requests were not being routed before I found the compatibility note buried in the changelog.
I deployed the Realtime service with an encryption key of the wrong length. It crashed immediately with an error about key size that did not mention the actual expected length.
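If you want to avoid the same crash, generate the key at exactly the length the service expects before deploying. The 16-character example below is an assumption for illustration; check the documentation for the Realtime version you run. A minimal sketch in Python:

```python
import secrets

def make_key(length: int) -> str:
    """Generate a random hex string of exactly `length` characters."""
    # token_hex(n) returns 2*n hex characters, so halve the target length.
    if length % 2:
        raise ValueError("length must be even for hex output")
    return secrets.token_hex(length // 2)

# Example: a 16-character key. Verify the required length for your
# Realtime version first -- the error message will not tell you.
key = make_key(16)
print(key, len(key))
```

Generating keys programmatically, rather than typing something that looks about right, is the actual lesson here.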
I used `vault kv put` when I should have used `vault kv patch`. This replaced all my secrets with a single new key. My Postgres password became the literal string `change_me`. I discovered this when every API call started failing with authentication errors.
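The semantics that bit me are easy to state: `vault kv put` writes a new version of the secret containing only the keys you pass, while `vault kv patch` merges your keys into the existing ones. The difference maps cleanly onto plain dictionary operations, which is how I now think about it (a sketch of the semantics, not the Vault client API):

```python
# Existing secret at some path, as a plain mapping.
secrets = {"postgres_password": "s3cret", "jwt_secret": "abc123"}

# `vault kv put` behaves like replacing the whole mapping:
after_put = {"new_key": "change_me"}               # every other key is gone

# `vault kv patch` behaves like a merge:
after_patch = {**secrets, "new_key": "change_me"}  # existing keys survive

print(after_put)    # {'new_key': 'change_me'}
print(after_patch)  # postgres_password and jwt_secret still present
```

One word in the command, and the difference between adding a secret and destroying all of them.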
I ran a load test that appeared to fail on reads. The problem was not the server. The test was running from Ohio and the server is in Germany. Network latency is real.
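The arithmetic behind that false alarm is worth spelling out. Assume a round trip of roughly 100 ms between Ohio and Germany; that figure is my illustration, not a measurement from the series. A client issuing sequential requests over one connection can never exceed 1/RTT requests per second, no matter how fast the server is:

```python
rtt_s = 0.100  # assumed Ohio <-> Germany round trip, in seconds

# One connection issuing requests back-to-back is capped by the RTT alone:
max_sequential_rps = 1 / rtt_s
print(max_sequential_rps)        # requests/second per connection

# Even a fast 20 ms server response barely moves the total:
server_s = 0.020
effective_rps = 1 / (rtt_s + server_s)
print(round(effective_rps, 1))
```

The server looked slow because the speed of light in fiber is not negotiable. Running the load generator near the server is the fix.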
Each mistake taught me something about how the system works. The managed Supabase service handles all of these edge cases without you ever knowing they existed. That is part of what the service provides.
The honest picture
Before you spend a Saturday on this: the free tier already gives you two projects. Self-hosting does not unlock more capacity than that for most hobby use cases. What it unlocks is understanding.
After building this cluster, I have a clearer idea of what Supabase actually does, why the Pro plan costs what it costs, and what I am giving up when I choose the managed path. That was worth the time.
One note before you start: Supabase publishes an official self-hosting guide based on plain Docker Compose. This series uses Docker Swarm instead, which adds per-service memory limits and automatic container restart. Both approaches work. The Swarm approach is better suited to running multiple projects on a single server with constrained RAM, which is exactly what we are doing here.
Next: creating the server and locking it down.
The full series
- Why we are building this (you are here)
- The server
- Traefik and SSL
- The first Supabase instance
- Vault
- Two instances
- Security and the load test