DEV Community

Dinh Doan Van Bien

Part 6 — Two instances

Part 6 of 7 — Self-hosting Supabase: a learning journey

Also available in French: Partie 6 — Deux instances

Supabase's free tier gives you two active projects. I was already using both. Adding a second instance to the self-hosted cluster was not about needing more capacity; it was about understanding isolation. When Supabase runs two projects on the same infrastructure, how does it keep them separate? This post answers that question by actually doing it.


What isolation means here

When I say the two projects are isolated, I mean:

Network isolation. Each project has its own Docker overlay network. In the compose file you write internal: as the network name, but when you deploy with docker stack deploy ... project1, Swarm automatically prefixes it with the stack name. The network becomes project1_internal at runtime. Services in project1 cannot reach services in project2 at the network layer, even though both compose files define a network called internal.
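In compose terms, the fragment each stack shares looks roughly like this (a sketch; the service name and image are illustrative):

```yaml
# docker-compose.yml — identical in both projects
networks:
  internal: {}              # deployed as project1_internal or project2_internal

services:
  db:
    image: supabase/postgres     # illustrative, no tag pinned here
    networks:
      - internal                 # attached only to this stack's prefixed network
```

Because each stack gets its own prefixed overlay network, nothing in this fragment needs to change between projects.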

Data isolation. Each project has its own Postgres container with its own volume. The databases have no shared storage, no shared connection, no way to reach each other.

Authentication isolation. The JWT secrets are different. The user tables are different. A token issued by project1 is not valid on project2, and vice versa. The API keys (anon key, service role key) are different.

Routing isolation. Different subdomains, different TLS certificates.

The only shared resources are the Traefik reverse proxy (which sits outside both stacks) and the physical server's CPU and RAM.


The compose file is nearly identical

The project2 compose file is a copy of project1's with these changes:

  1. Different secret values, read from a separate Vault path (secret/project2)
  2. Different Traefik router names (this is critical)
  3. Different subdomain rules in the Traefik labels
  4. Different FLY_ALLOC_ID for Realtime (project2-realtime)

The network names are not something you change manually; Swarm adds the stack name as a prefix automatically.


Router names must be unique

This is the one configuration detail that will break your second instance if you miss it.

Traefik identifies routes by router names. If two services register a router with the same name, Traefik picks one and ignores the other. No error, no log message pointing at the conflict.

In project1 we named our routers p1-kong and p1-studio:

traefik.http.routers.p1-kong.rule: Host(`kong.project1.yourdomain.com`)
traefik.http.routers.p1-studio.rule: Host(`studio.project1.yourdomain.com`)

In project2 they must be different:

traefik.http.routers.p2-kong.rule: Host(`kong.project2.yourdomain.com`)
traefik.http.routers.p2-studio.rule: Host(`studio.project2.yourdomain.com`)

The same applies to service names and middleware names:

traefik.http.services.p2-kong.loadbalancer.server.port: '8000'
traefik.http.middlewares.p2-studio-auth.basicauth.users: ...
traefik.http.routers.p2-studio.middlewares: security-headers@swarm,p2-studio-auth@swarm

Prefix everything with the project identifier. It takes 30 seconds to do this carefully.
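One way to catch a collision before Traefik silently drops a router: extract every router name from the deployed labels and check whether any name is claimed by more than one service. A sketch (the docker/jq feed in the comment is an assumption about your setup; the filter itself is plain awk, sort, and uniq):

```shell
# duplicate_router_names: reads "<service> <traefik-label-key>" pairs on stdin
# and prints every router name registered by more than one service.
# Feed it from Swarm with something like (assumes jq on the manager node):
#   for s in $(docker service ls --format '{{.Name}}'); do
#     docker service inspect "$s" \
#       | jq -r --arg s "$s" '.[0].Spec.Labels // {} | keys[] | "\($s) \(.)"'
#   done | duplicate_router_names
duplicate_router_names() {
  # turn label keys like traefik.http.routers.p1-kong.rule
  # into "p1-kong <service>" pairs, one per line
  awk '$2 ~ /^traefik\.http\.routers\./ {
         name = $2
         sub(/^traefik\.http\.routers\./, "", name)
         sub(/\..*$/, "", name)
         print name, $1
       }' |
    sort -u |            # collapse repeats from the same service
    awk '{print $1}' |
    uniq -d              # names claimed by two or more services
}
```

Empty output means every router name is unique; any name it prints is registered by more than one service, and Traefik will ignore all but one of them.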


Separate Vault secrets

Store project2's secrets under a separate path:

vault kv put secret/project2 \
  POSTGRES_PASSWORD="$(openssl rand -hex 16)" \
  JWT_SECRET="$(openssl rand -hex 32)" \
  SUPABASE_ANON_KEY="<different anon jwt>" \
  SUPABASE_SERVICE_ROLE_KEY="<different service_role jwt>" \
  API_EXTERNAL_URL="https://kong.project2.yourdomain.com" \
  GOTRUE_EXTERNAL_URL="https://kong.project2.yourdomain.com" \
  SITE_URL="https://kong.project2.yourdomain.com" \
  DB_ENC_KEY="supabaserealtime" \
  GOTRUE_MAILER_AUTOCONFIRM="false" \
  SECRET_KEY_BASE="$(openssl rand -hex 64)" \
  PG_META_CRYPTO_KEY="$(openssl rand -hex 16)"

Create a separate read-only token for project2:

vault policy write project2-readonly vault-policy-project2.hcl
vault token create -policy=project2-readonly -ttl=8760h -format=json \
  | jq -r '.auth.client_token' > /root/project2-token.txt
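The post does not show the policy file itself; a minimal read-only sketch, assuming the KV v2 secrets engine is mounted at secret/, would be:

```hcl
# vault-policy-project2.hcl — read-only access to project2's secrets
# (sketch; KV v2 prefixes data paths with "data/")
path "secret/data/project2" {
  capabilities = ["read"]
}
```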

Store the token at /root/project2-token.txt, and add it as a separate GitHub Actions secret if you use automated deployments.


Memory across two instances

With two full stacks running, here is roughly how the 4 GB of RAM is used:

Service         Project 1   Project 2   Total
PostgreSQL      ~77 MB      ~77 MB      ~154 MB
Kong            ~229 MB     ~185 MB     ~414 MB
GoTrue          ~12 MB      ~12 MB      ~24 MB
PostgREST       ~17 MB      ~17 MB      ~34 MB
Realtime        ~168 MB     ~168 MB     ~336 MB
Storage         ~18 MB      ~18 MB      ~36 MB
postgres-meta   ~68 MB      ~68 MB      ~136 MB
Studio          ~170 MB     ~170 MB     ~340 MB

Subtotal for both projects: about 1.5 GB. Add Traefik (30 MB), Vault (140 MB), and the OS (around 300 MB) and you are at roughly 2.0 to 2.5 GB out of 4 GB available.

A third instance would probably fit. I have not tried it yet.


Deploy project2

bash scripts/fetch-env-from-vault.sh project2
set -a && source instances/project2/.env && set +a
docker stack deploy -c instances/project2/docker-compose.yml project2
bash scripts/init-realtime.sh project2

Verify both stacks:

docker service ls

You should see 17 or more services with all replicas at 1/1.
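To spot anything unhealthy at a glance, filter on the replica column. A small sketch (the docker command in the comment assumes a standard Swarm setup; the filter itself is plain awk):

```shell
# unhealthy_services: reads "<name> <replicas>" lines and prints the name of
# every service that is not at 1/1. Empty output means everything is up.
# Feed it from: docker service ls --format '{{.Name}} {{.Replicas}}'
unhealthy_services() {
  awk '$2 != "1/1" {print $1}'
}
```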


Verifying the isolation

Create a user on project1 and verify it does not exist on project2:

curl -X POST https://kong.project1.yourdomain.com/auth/v1/signup \
  -H "apikey: PROJECT1_ANON_KEY" \
  -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"TestPass123!"}'

# Try the same credentials on project2
curl -X POST "https://kong.project2.yourdomain.com/auth/v1/token?grant_type=password" \
  -H "apikey: PROJECT2_ANON_KEY" \
  -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"TestPass123!"}'
# {"error":"invalid_grant","error_description":"Invalid login credentials"}

The auth systems are completely separate. This is also how Supabase keeps different customers' data isolated on their shared infrastructure. The approach is simpler than I expected.

Part 7 — Security and the load test →


The full series

  1. Why we are building this
  2. The server
  3. Traefik and SSL
  4. The first Supabase instance
  5. Vault
  6. Two instances (you are here)
  7. Security and the load test
