Dinh Doan Van Bien

Part 4 — The first Supabase instance

Part 4 of 7 — Self-hosting Supabase: a learning journey

Also available in French: Partie 4 — La première instance Supabase

We have a server, Docker Swarm, and Traefik running. Now we deploy Supabase. This is the part with the most surprises. I will document each one as we get to it.


What one Supabase project is

Before writing any configuration, it helps to have a clear picture of what we are deploying. One Supabase project is eight Docker services:

Internet
    |
  Traefik (TLS termination, routing)
    |
  Kong (API gateway, port 8000)
    |
    +-- GoTrue   (auth, port 9999)
    +-- PostgREST (REST API, port 3000)
    +-- Realtime  (WebSockets, port 4000)
    +-- Storage   (files, port 5000)
    +-- Studio   (dashboard, port 3000)
    +-- postgres-meta (schema introspection, port 8080)

  PostgreSQL (port 5432, internal only)

Kong is the only service accessible from the internet (through Traefik). All others are on an internal Docker overlay network. PostgreSQL is never published to the host.


The secrets you need to generate

Before writing the compose file, generate these values:

# Postgres password
openssl rand -hex 16

# JWT secret (must be at least 32 characters)
openssl rand -hex 32

# For Studio's schema browser
openssl rand -hex 16   # PG_META_CRYPTO_KEY
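As a quick sanity check (assuming openssl is available), confirm the secret meets the 32-character minimum; `-hex N` emits N random bytes as 2N hex characters:

```shell
# -hex 32 -> 32 random bytes -> 64 hex characters, comfortably above
# the 32-character minimum required for the JWT secret
JWT_SECRET=$(openssl rand -hex 32)
echo "length: ${#JWT_SECRET}"                       # length: 64
[ "${#JWT_SECRET}" -ge 32 ] && echo "long enough"   # long enough
```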

The anon key and service_role key are standard JWTs signed with your JWT secret. You can generate them with this script:

JWT_SECRET="your-jwt-secret-here"

# Expiry: year 2035 (Unix timestamp)
EXPIRY=2051222400

python3 - << EOF
import json, hmac, hashlib, base64

secret = "$JWT_SECRET"

def b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def enc(obj):
    return b64(json.dumps(obj, separators=(',', ':')).encode())

header = enc({"alg":"HS256","typ":"JWT"})

for role in ["anon", "service_role"]:
    payload = enc({
        "role": role,
        "iss": "supabase",
        "iat": 1772393548,
        "exp": $EXPIRY
    })
    msg = f"{header}.{payload}".encode()
    sig = b64(hmac.new(secret.encode(), msg, hashlib.sha256).digest())
    print(f"{role}: {header}.{payload}.{sig}")
EOF

The anon JWT is safe to expose to browsers. The service_role JWT bypasses row-level security and must be kept secret.
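To double-check a generated key, you can decode its payload segment with nothing but shell built-ins. This small helper is my own sketch, not part of the Supabase tooling; it reverses the base64url encoding the script above produces:

```shell
# jwt_payload: print the decoded payload (second dot-separated segment)
# of a JWT. Restores the standard base64 alphabet and padding first.
jwt_payload() {
  local p=${1#*.}   # strip the header
  p=${p%%.*}        # strip the signature
  p=$(printf '%s' "$p" | tr '_-' '/+')
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="$p="; done
  printf '%s' "$p" | base64 -d
}
# jwt_payload "$SUPABASE_ANON_KEY"
```

If the printed role is not anon or service_role, the key was generated from the wrong payload.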

We will store all of these in Vault in Part 5. For now, write them somewhere safe.


The docker-compose.yml

Here is the complete stack definition. I will explain each surprising part after.

The image tags below point to current major versions. For exact pinned versions of each component, check the official Supabase self-hosting reference at supabase.com/docs/guides/self-hosting. They maintain a tested, stable combination of versions there.

version: '3.8'

networks:
  internal:
    driver: overlay
  traefik_default:
    external: true
    name: traefik_default

volumes:
  db_data:
  storage_data:

services:

  db:
    image: supabase/postgres:15          # use latest 15.x from supabase.com/docs/guides/self-hosting
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 1g
          cpus: '1.0'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  auth:
    image: supabase/gotrue:latest         # pin to a stable release tag; see note below
    depends_on:
      - db
    environment:
      GOTRUE_DB_DRIVER: postgres
      GOTRUE_DB_DATABASE_URL: postgres://supabase_auth_admin:${POSTGRES_PASSWORD}@db:5432/postgres
      GOTRUE_JWT_SECRET: ${JWT_SECRET}
      GOTRUE_JWT_EXP: '3600'
      GOTRUE_JWT_AUD: authenticated
      GOTRUE_JWT_DEFAULT_GROUP_NAME: authenticated
      GOTRUE_JWT_ADMIN_ROLES: service_role
      GOTRUE_API_HOST: 0.0.0.0
      GOTRUE_API_PORT: '9999'
      GOTRUE_SITE_URL: ${SITE_URL}
      GOTRUE_EXTERNAL_URL: ${GOTRUE_EXTERNAL_URL}
      API_EXTERNAL_URL: ${API_EXTERNAL_URL}
      GOTRUE_MAILER_AUTOCONFIRM: ${GOTRUE_MAILER_AUTOCONFIRM:-false}
      GOTRUE_SMS_AUTOCONFIRM: 'false'
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.5'

  rest:
    image: ghcr.io/supabase/postgrest:v12 # use latest stable v12
    depends_on:
      - db
    environment:
      PGRST_DB_URI: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/postgres
      PGRST_DB_SCHEMA: public,storage,graphql_public
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}
      PGRST_DB_USE_LEGACY_GUCS: 'false'
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.5'

  realtime:
    image: ghcr.io/supabase/realtime:v2   # use latest stable v2
    depends_on:
      - db
    environment:
      DB_HOST: db
      DB_PORT: 5432
      DB_NAME: postgres
      DB_USER: postgres
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      DB_ENC_KEY: ${DB_ENC_KEY}
      DB_AFTER_CONNECT_QUERY: SET search_path TO _realtime
      API_JWT_SECRET: ${JWT_SECRET}
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}
      APP_NAME: realtime
      FLY_APP_NAME: realtime
      FLY_ALLOC_ID: project1-realtime
      PORT: '4000'
      SEED_SELF_HOST: 'true'
      RUN_JANITOR: 'true'
      ENABLE_TAILSCALE: 'false'
      DNS_NODES: ''
      ERL_AFLAGS: -proto_dist inet_tcp
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: '0.5'

  storage:
    image: ghcr.io/supabase/storage-api:v1 # use latest stable v1
    depends_on:
      - db
    environment:
      ANON_KEY: ${SUPABASE_ANON_KEY}
      SERVICE_KEY: ${SUPABASE_SERVICE_ROLE_KEY}
      JWT_SECRET: ${JWT_SECRET}
      DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@db:5432/postgres
      FILE_STORAGE_BACKEND_PATH: /var/lib/storage
      STORAGE_BACKEND: file
      FILE_SIZE_LIMIT: '52428800'
      GLOBAL_S3_BUCKET: stub
      REGION: stub
      TENANT_ID: stub
      POSTGREST_URL: http://rest:3000
      PGRST_JWT_SECRET: ${JWT_SECRET}
      DB_INSTALL_ROLES: 'true'
    volumes:
      - storage_data:/var/lib/storage
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.5'

  kong:
    image: ghcr.io/supabase/kong:2.8.1   # Kong version; only change if Supabase releases a new one
    depends_on:
      - db
    environment:
      KONG_DATABASE: 'off'
      KONG_DECLARATIVE_CONFIG: /var/lib/kong/kong.yml
      KONG_LOG_LEVEL: info
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_SERVER_TOKENS: 'off'
    volumes:
      - /root/supabase-vps-cluster/instances/project1/kong.yml:/var/lib/kong/kong.yml:ro
    networks:
      - internal
      - traefik_default
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: '0.5'
      labels:
        traefik.enable: 'true'
        traefik.http.routers.p1-kong.entrypoints: websecure
        traefik.http.routers.p1-kong.rule: Host(`kong.project1.yourdomain.com`)
        traefik.http.routers.p1-kong.tls.certresolver: le
        traefik.http.routers.p1-kong.middlewares: security-headers@swarm
        traefik.http.services.p1-kong.loadbalancer.server.port: '8000'
        traefik.swarm.network: traefik_default

  meta:
    image: supabase/postgres-meta:v0      # use latest stable v0
    depends_on:
      - db
    environment:
      PG_META_PORT: 8080
      PG_META_DB_HOST: db
      PG_META_DB_PORT: 5432
      PG_META_DB_NAME: postgres
      PG_META_DB_USER: supabase_admin
      PG_META_DB_PASSWORD: ${POSTGRES_PASSWORD}
      PG_META_DB_SSL_MODE: disable
      PG_META_CRYPTO_KEY: ${PG_META_CRYPTO_KEY}
    healthcheck:
      disable: true
    networks:
      - internal
    deploy:
      resources:
        limits:
          memory: 256m
          cpus: '0.25'

  studio:
    image: supabase/studio:latest        # always use the latest Studio tag
    depends_on:
      - db
    environment:
      HOSTNAME: 0.0.0.0
      SUPABASE_URL: http://kong:8000
      SUPABASE_PUBLIC_URL: ${API_EXTERNAL_URL}
      SUPABASE_ANON_KEY: ${SUPABASE_ANON_KEY}
      SUPABASE_SERVICE_KEY: ${SUPABASE_SERVICE_ROLE_KEY}
      AUTH_JWT_SECRET: ${JWT_SECRET}
      STUDIO_PG_META_URL: http://meta:8080
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      DEFAULT_ORGANIZATION_NAME: Default Organization
      DEFAULT_PROJECT_NAME: Default Project
    healthcheck:
      disable: true
    networks:
      - internal
      - traefik_default
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: '0.5'
      labels:
        traefik.enable: 'true'
        traefik.http.routers.p1-studio.entrypoints: websecure
        traefik.http.routers.p1-studio.rule: Host(`studio.project1.yourdomain.com`)
        traefik.http.routers.p1-studio.tls.certresolver: le
        traefik.http.services.p1-studio.loadbalancer.server.port: '3000'
        traefik.swarm.network: traefik_default
        traefik.http.routers.p1-studio.middlewares: security-headers@swarm,p1-studio-auth@swarm
        traefik.http.middlewares.p1-studio-auth.basicauth.users: YOUR_HASHED_CREDENTIALS

Replace YOUR_HASHED_CREDENTIALS with a bcrypt hash of your password. Install the tool and generate the hash on the server:

apt install apache2-utils -y
htpasswd -nB admin
# New password:
# Re-type new password:
# admin:$2y$05$...

Copy the output (including the username). In Docker Compose labels, every $ must be doubled because Compose uses $ for variable interpolation. The string admin:$2y$05$... becomes admin:$$2y$$05$$... in the label.
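The doubling is mechanical, so let sed do it. The hash below is a made-up stand-in for real htpasswd output:

```shell
# Double every $ so Compose label interpolation leaves the hash intact
hash='admin:$2y$05$examplehash'                  # stand-in, not a real hash
escaped=$(printf '%s' "$hash" | sed -e 's/\$/\$\$/g')
echo "$escaped"                                  # admin:$$2y$$05$$examplehash
```

In practice you can combine both steps: htpasswd -nB admin | sed -e 's/\$/\$\$/g'.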


kong.yml: the API gateway configuration

The compose file bind-mounts /root/supabase-vps-cluster/instances/project1/kong.yml into the Kong container. This file is where you define routes, authentication, and rate limits. It is not committed to git because it contains your API keys.

Create it at that path on the server:

_format_version: '2.1'
_transform: true

consumers:
  - username: anon
    keyauth_credentials:
      - key: YOUR_SUPABASE_ANON_KEY
  - username: service_role
    keyauth_credentials:
      - key: YOUR_SUPABASE_SERVICE_ROLE_KEY

acls:
  - consumer: anon
    group: anon
  - consumer: service_role
    group: admin

services:
  - name: auth-v1-open
    url: http://auth:9999/verify
    routes:
      - name: auth-v1-open
        strip_path: true
        paths:
          - /auth/v1/verify
    plugins:
      - name: cors

  - name: auth-v1-open-callback
    url: http://auth:9999/callback
    routes:
      - name: auth-v1-open-callback
        strip_path: true
        paths:
          - /auth/v1/callback
    plugins:
      - name: cors

  - name: auth-v1
    url: http://auth:9999/
    routes:
      - name: auth-v1-all
        strip_path: true
        paths:
          - /auth/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: false
      - name: acl
        config:
          hide_groups_header: true
          allow:
            - admin
            - anon
      - name: rate-limiting
        config:
          minute: 30
          policy: local
          limit_by: ip

  - name: rest-v1
    url: http://rest:3000/
    routes:
      - name: rest-v1-all
        strip_path: true
        paths:
          - /rest/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: true
      - name: acl
        config:
          hide_groups_header: true
          allow:
            - admin
            - anon

  - name: realtime-v1-ws
    url: http://realtime:4000/socket
    protocol: ws
    routes:
      - name: realtime-v1-ws
        strip_path: true
        paths:
          - /realtime/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: false
      - name: acl
        config:
          hide_groups_header: true
          allow:
            - admin
            - anon

  - name: storage-v1
    url: http://storage:5000/
    routes:
      - name: storage-v1-all
        strip_path: true
        paths:
          - /storage/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: true
      - name: acl
        config:
          hide_groups_header: true
          allow:
            - admin
            - anon

A few things to note. The auth-v1-open routes (/verify, /callback) are intentionally left without key-auth -- these are the OAuth redirect endpoints that browsers call directly during login flows and cannot include an API key header. Everything else requires a valid key.

The file permissions matter: chmod 644 kong.yml. Kong runs as a non-root user and will fail with a permission error on files set to 600 or 700.
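An easy way to confirm the mode took effect; the temp file here is only so the snippet runs standalone, so point the commands at your real kong.yml:

```shell
f=/tmp/kong-perms-demo.yml     # demo path; use your real kong.yml
touch "$f"
chmod 644 "$f"
ls -l "$f" | cut -c1-10        # -rw-r--r--
```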

After any change to this file, Kong does not pick it up automatically. Force a restart:

docker service update --force project1_kong

Surprise 1: memory limits are not optional

Without memory limits, services compete for RAM on a 4 GB server and can trigger OOM kills that take down other containers. You want hard limits.

The numbers I landed on after tuning:

Service  | Memory limit | Reason
db       | 1 GB         | Postgres buffer cache
kong     | 512 MB       | More than expected; Kong caches config
realtime | 512 MB       | Erlang/BEAM VM uses ~200 MB at idle
studio   | 512 MB       | Next.js server-side rendering
auth     | 256 MB       | GoTrue
rest     | 256 MB       | PostgREST
storage  | 256 MB       | Storage API
meta     | 256 MB       | postgres-meta

Realtime was the one that surprised me most. The Erlang/BEAM runtime has a large baseline footprint, around 200 MB before any connections are established. I initially set it to 256 MB, which looked generous, and the service kept hitting the limit. 512 MB is correct. That is what Supabase Cloud allocates, for the same reason.


Surprise 2: Studio needs three non-obvious variables

Studio is a Next.js application. Server-side rendering runs inside the container; client-side rendering runs in the browser. These two contexts need different URLs:

  • SUPABASE_URL: http://kong:8000. Server-side code runs inside Docker and reaches Kong by container name on the internal overlay network.
  • SUPABASE_PUBLIC_URL: the public HTTPS URL, used by code running in the browser.
  • POSTGRES_PASSWORD: Studio opens direct Postgres connections for its query runner.

If any of these is missing, Studio produces confusing 400/500 errors in the browser console with no indication of which variable is at fault. I had to read the Studio source code to work that out.


Surprise 3: Studio's healthcheck kills it

The supabase/studio image includes a built-in Docker healthcheck. In Swarm, a container that fails its healthcheck gets killed and restarted. Studio's healthcheck was failing on our setup.

Disable it:

healthcheck:
  disable: true

Same problem with postgres-meta. It also has a built-in healthcheck that triggers exit 137 (SIGKILL) in Swarm. Disable that one too.


Surprise 4: you cannot hardcode GOTRUE_MAILER_AUTOCONFIRM

For development and load testing, you want email signup to auto-confirm (skip the verification email). I initially set this in the compose file as a hardcoded string:

GOTRUE_MAILER_AUTOCONFIRM: 'false'

Then I needed to change it to true. I updated the .env file. Redeployed. Nothing changed. The service was still reading false.

The problem is that a hardcoded string in the environment: block has priority over a variable from the .env file. The .env variable was being ignored.

The fix is to use variable substitution:

GOTRUE_MAILER_AUTOCONFIRM: ${GOTRUE_MAILER_AUTOCONFIRM:-false}

The :-false part means "use this value if the variable is not set." Now the .env file controls the value. This is how it should have been from the start.
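The substitution rules are plain shell and easy to verify locally (Compose follows the same :- semantics):

```shell
# :- falls back to the default when the variable is unset OR empty
unset AUTOCONFIRM
echo "${AUTOCONFIRM:-false}"   # false
AUTOCONFIRM=true
echo "${AUTOCONFIRM:-false}"   # true
AUTOCONFIRM=
echo "${AUTOCONFIRM:-false}"   # false (empty triggers the default too)
```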


Surprise 5: DB_ENC_KEY must be exactly 16 bytes

Realtime uses AES-128-ECB encryption. AES-128 requires a 16-byte key. I generated a key with openssl rand -hex 16, which gives 32 hexadecimal characters. But 32 hex characters represent 16 bytes, 2 hex chars per byte. That should work. Right?

No. Realtime passes the key string directly to the cipher as-is, not as a hex-decoded byte array. A 32-character string is treated as 32 bytes. AES-128 needs 16. The service crashes with "Bad key size."

The official default for self-hosted Realtime is the literal string supabaserealtime. It is exactly 16 characters, therefore 16 bytes. Use this value. Do not be creative with key generation here.
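Worth counting the bytes before deploying; printf matters here because echo would add a trailing newline and report 17:

```shell
DB_ENC_KEY=supabaserealtime
printf '%s' "$DB_ENC_KEY" | wc -c   # 16
```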


Surprise 6: the _realtime schema

The official Supabase Docker Compose repository includes a file docker/volumes/db/realtime.sql that is mounted into the Postgres container and creates the _realtime schema automatically on first boot. If you clone the official repo, this is handled for you.

This series builds a compose file from scratch. That mount is not there, so _realtime never gets created. Realtime v2.76+ requires it for multi-tenant configuration and crashes at startup with no clear indication of what is missing.

Run this once after first deploy:

docker exec $(docker ps --filter name=project1_db --format '{{.Names}}' | head -1) \
  psql -U postgres -d postgres \
  -c "CREATE SCHEMA IF NOT EXISTS _realtime;"

docker service update --force project1_realtime

The commands above create the _realtime schema and force-restart the service. The repo's init-realtime.sh script, run at deploy time below, wraps the same steps and adds one more: it renames the tenant that SEED_SELF_HOST creates from realtime-dev to realtime (a naming mismatch between the seed logic and the app name). Both are safe to run multiple times.


Surprise 7: API_EXTERNAL_URL must point to Kong

API_EXTERNAL_URL drives the URLs that GoTrue puts into emails (password resets, confirmations) and the public URL that Studio uses for browser-side API calls.

I pointed it at PostgREST, because PostgREST is the REST API. That seemed sensible. But PostgREST is an internal service. Kong is the gateway that fronts everything, handles authentication, and enforces rate limits. The external URL must be Kong's public address:

API_EXTERNAL_URL=https://kong.project1.yourdomain.com

Pointing it at PostgREST bypasses Kong entirely, which breaks authentication.


A note on GoTrue image tags


⚠️ Avoid GoTrue release candidates. :latest can pull an RC. GoTrue RCs have had database migration ordering bugs that cause the service to fail at startup with a cryptic error. If GoTrue fails to start and the logs mention migrations, check the GoTrue releases page and pin to the most recent stable tag.

Deploy

Create the .env file (we will move this to Vault in Part 5). First, generate the two remaining secrets. These are shell commands, not literal .env values:

openssl rand -hex 64   # copy this as SECRET_KEY_BASE
openssl rand -hex 16   # copy this as PG_META_CRYPTO_KEY

Then create the file with real values:

# instances/project1/.env
POSTGRES_PASSWORD=<generated above>
JWT_SECRET=<generated above>
SUPABASE_ANON_KEY=<anon jwt from the script>
SUPABASE_SERVICE_ROLE_KEY=<service_role jwt from the script>
API_EXTERNAL_URL=https://kong.project1.yourdomain.com
GOTRUE_EXTERNAL_URL=https://kong.project1.yourdomain.com
SITE_URL=https://kong.project1.yourdomain.com
GOTRUE_MAILER_AUTOCONFIRM=false
DB_ENC_KEY=supabaserealtime
SECRET_KEY_BASE=<paste the 128-char hex string>
PG_META_CRYPTO_KEY=<paste the 32-char hex string>
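A half-filled .env fails in confusing ways at deploy time, so a quick completeness check is worth it. This check_env helper is my own sketch, not part of the series' repository:

```shell
# check_env FILE: print every required key that is missing or empty in FILE
check_env() {
  local key
  for key in POSTGRES_PASSWORD JWT_SECRET SUPABASE_ANON_KEY \
             SUPABASE_SERVICE_ROLE_KEY API_EXTERNAL_URL DB_ENC_KEY \
             SECRET_KEY_BASE PG_META_CRYPTO_KEY; do
    grep -q "^${key}=." "$1" || echo "missing: $key"
  done
}
# check_env instances/project1/.env   # silence means every key is set
```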

Deploy:

set -a && source instances/project1/.env && set +a
docker stack deploy -c instances/project1/docker-compose.yml project1

Check that all services come up:

docker service ls | grep project1

All eight should show 1/1 replicas within a minute or two. If any show 0/1, check the logs:

docker service logs --tail 50 project1_auth

Initialize Realtime:

bash scripts/init-realtime.sh project1

Test the API endpoint:

curl -s https://kong.project1.yourdomain.com/health
# {"status":"healthy"}

Where we are

A working Supabase instance: Postgres, authentication, REST API, real-time subscriptions, file storage, and a dashboard protected with basic auth.

In the next post, we move all those secrets out of flat files and into Vault, and I will tell you about the afternoon I accidentally deleted everything.

Part 5 — Vault →


The full series

  1. Why we are building this
  2. The server
  3. Traefik and SSL
  4. The first Supabase instance (you are here)
  5. Vault
  6. Two instances
  7. Security and the load test
