DEV Community

Lambo Poewert

Posted on • Originally published at reddit.com

Moved from hosted Supabase to self-hosted on a single VPS — here's what I learned after months in production

Most self-hosting guides show you how to get Supabase running. Not many tell you what happens after you've been running it for months with real users and real data. Here's what I've learned.

The project

I built MadeOnSol, a Solana ecosystem directory that tracks 950+ tools across 26 categories. The stack includes 74 Postgres tables, 100k+ real-time trade records, Google OAuth, file storage for 900+ tool logos, and Row Level Security on every table.

It started on Supabase's free hosted tier. Four months ago I moved everything to a self-hosted Docker setup on a Hetzner CPX32 (8GB RAM, 4 vCPU, €15/month). Here's what I'd want to know if I were doing it again.

How much memory does it actually use?

This is the question nobody answers well. The default self-hosted Supabase ships with ~15 containers. I stripped it down to 8 by removing Studio, Edge Functions, Analytics, Vector, Meta, and the Deno cache. None of them were doing anything useful in production.

Here's what the remaining services actually consume:

| Service | Memory | What it does |
| --- | --- | --- |
| supabase-db | ~413 MB | Postgres 15 |
| supabase-kong | ~429 MB | API gateway (yes, really) |
| realtime | ~169 MB | WebSocket subscriptions |
| supabase-storage | ~109 MB | File storage |
| supabase-pooler | ~60 MB | Supavisor connection pooling |
| supabase-rest | ~40 MB | PostgREST |
| supabase-auth | ~22 MB | GoTrue (email + Google OAuth) |
| supabase-imgproxy | ~17 MB | Image transforms |

Total: ~1.26 GB. On 8GB that leaves plenty of room for a Next.js app, three background services, Nginx, and Umami analytics.

The things that went right

Co-located database performance. With the database on the same machine as the app, API responses went from ~120 ms (hosted) to ~30 ms. Removing the network hop makes a real difference when your background services run batch operations on 100k+ records.

Supavisor connection pooling is solid. Session mode on port 5432 for the Next.js app, transaction mode on port 6543 for background scripts. Set it up once, never thought about it again.
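The split above amounts to two connection strings. This is a sketch, not my literal .env: host, credentials, and the exact Supavisor username format are assumptions (self-hosted Supavisor usernames often carry a tenant suffix, so check your pooler config).

```shell
# Session mode (port 5432) for the long-lived Next.js server,
# transaction mode (port 6543) for short-lived background scripts.
# CHANGE_ME and the plain "postgres" username are placeholders.
DATABASE_URL="postgres://postgres:CHANGE_ME@127.0.0.1:5432/postgres"   # session mode: Next.js app
SCRIPT_DB_URL="postgres://postgres:CHANGE_ME@127.0.0.1:6543/postgres"  # transaction mode: scripts
```

The rule of thumb: anything that holds connections open gets session mode; anything that connects, runs a few statements, and exits gets transaction mode.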

Storage migration was seamless. Copied all files over, updated URLs, and the storage API behaved identically to hosted. No app code changes needed.

Google OAuth just worked. Set the GOTRUE_EXTERNAL_GOOGLE_* environment variables, configured the consent screen, done. One gotcha: I use a manual PKCE flow with localStorage instead of cookies because cross-site cookie loss was causing redirect issues.
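For reference, these are the GoTrue environment variables involved. The variable names are GoTrue's own; every value below is a placeholder, and the redirect URI must match what you register in the Google console.

```shell
# GoTrue Google OAuth config (placeholder values):
GOTRUE_EXTERNAL_GOOGLE_ENABLED="true"
GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID="xxxx.apps.googleusercontent.com"
GOTRUE_EXTERNAL_GOOGLE_SECRET="your-client-secret"
GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI="https://yourdomain.com/auth/v1/callback"
```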

Backups went from "nothing" to solid. On the free hosted tier I had zero backups. Now it's pg_dump → gzip → SSH to a separate storage box, running at 3 AM daily with 30-day retention. Peace of mind for ~5 lines of bash.
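A sketch of that flow, broken into functions. Paths, the storage host, and the function names here are illustrative, not my actual script; the shape is what matters.

```shell
#!/usr/bin/env bash
# Nightly backup sketch: pg_dump -> gzip -> scp to a storage box, then prune.
set -euo pipefail

dump_db() {
  # Dump and compress one dated snapshot into $2.
  local db_url="$1" dir="$2"
  mkdir -p "$dir"
  pg_dump "$db_url" | gzip > "$dir/db-$(date +%F).sql.gz"
}

ship_offsite() {
  # Copy today's snapshot to a separate box over SSH.
  local dir="$1" host="$2"
  scp "$dir/db-$(date +%F).sql.gz" "$host:backups/"
}

prune_old() {
  # Enforce the retention window: delete dumps older than $2 days.
  local dir="$1" days="$2"
  find "$dir" -name 'db-*.sql.gz' -mtime +"$days" -delete
}
```

Wire the three calls into a script and schedule it from cron at 3 AM; the prune step is what turns "a pile of dumps" into a 30-day rolling window.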

The things that bit me

Kong uses an absurd amount of memory. ~429 MB at idle for an API gateway, and it spikes past 600 MB under load. It's the single biggest memory consumer after Postgres itself. I've considered replacing it with plain Nginx routing, but Kong handles auth middleware that would be painful to replicate.

Realtime goes silently bad. The container reports "healthy" to Docker but stops delivering messages. The only fix is docker compose restart realtime. I now run a health check every 5 minutes that tests actual message delivery, not just the HTTP health endpoint.
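The watchdog is a small loop around a probe. This is a sketch of the shape only: the probe command is a placeholder for your own script that subscribes to a channel, writes a row, and waits for the event to actually arrive.

```shell
#!/usr/bin/env bash
# Restart the realtime container when an end-to-end delivery probe fails,
# even while Docker still reports the container "healthy".
set -u

check_realtime() {
  # $1: a command that exits 0 only if a test message was delivered.
  local probe_cmd="$1" service="${2:-realtime}"
  if $probe_cmd; then
    echo "probe ok"
  else
    echo "probe failed, restarting $service"
    docker compose restart "$service"
  fi
}
```

Run it from cron every 5 minutes. The key design point is that the probe exercises actual message delivery, not the HTTP health endpoint, because the latter is exactly what keeps lying to Docker.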

JWT rotation will break every active session. I rotated my JWT secret once and every user got stuck in auth loops because their browser cookies had the old token. I wrote middleware that detects stale sb-* cookies and clears them automatically. Also learned the hard way: use docker compose up -d --force-recreate when changing env vars, not docker compose restart — restart doesn't re-read the .env file.
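The restart trap is worth encoding as a habit. A trivial wrapper makes the correct command the easy one (the function name is mine, not a Supabase convention):

```shell
# `docker compose restart` reuses the old container environment;
# `up -d --force-recreate` re-reads .env and rebuilds the containers.
apply_env_changes() {
  docker compose up -d --force-recreate "$@"
}
```

Pass specific service names (e.g. `apply_env_changes auth`) to avoid recreating the whole stack for a one-service env change.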

The default docker-compose is a security risk. Ports are bound to 0.0.0.0 by default, which means your Postgres, PostgREST, and Kong are accessible to the entire internet. Change every port binding to 127.0.0.1:PORT:PORT and put Nginx with SSL in front of everything.
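Concretely, that means editing each `ports` entry in the compose file from the bare form to a loopback-bound one. Service names below follow the default supabase/docker repo; check yours:

```yaml
services:
  kong:
    ports:
      - "127.0.0.1:8000:8000"   # was "8000:8000", which binds 0.0.0.0
  db:
    ports:
      - "127.0.0.1:5432:5432"   # was "5432:5432"
```

Edit the main compose file directly rather than an override file: Compose concatenates `ports` lists when merging overrides, so an override would add the loopback binding without removing the public one.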

Upgrades are a gamble. The self-hosted repo moves fast and I've skipped most updates. My rule: only upgrade if there's a specific bug fix I need. Always pin your Docker image versions.

Security checklist

If you're self-hosting, this is the minimum:

  • Nginx reverse proxy with Let's Encrypt SSL in front of Kong (port 8000)
  • UFW firewall allowing only Cloudflare IPs on 80/443
  • Every Docker port bound to 127.0.0.1
  • RLS on every table — self-hosting doesn't mean you can skip this
  • Admin checks via app_metadata.is_admin (not user_metadata, which users can modify themselves)
  • Rate limiting at both the Nginx and application layer
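For the Cloudflare-only firewall item, one way to script it (UFW syntax; fetch the current ranges from Cloudflare's published IP lists rather than hardcoding them, and run as root):

```shell
#!/usr/bin/env bash
# Allow 80/443 only from CIDR ranges read on stdin, one per line.
set -u

allow_ranges() {
  while read -r cidr; do
    [ -n "$cidr" ] || continue
    ufw allow from "$cidr" to any port 80,443 proto tcp
  done
}
```

Usage would look like `curl -s https://www.cloudflare.com/ips-v4 | allow_ranges` (and the same for ips-v6), after allowing SSH so you don't lock yourself out.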

When it makes sense (and when it doesn't)

Self-host if you're comfortable with Postgres, your project is outgrowing the free tier, you want direct database access for cron jobs and background processing, or you want full control over backups.

Stay on hosted if you want zero ops overhead, rely on Supabase Studio, don't want to manage Docker/Nginx/SSL, or your project fits comfortably on the free tier.

For me the tipping point was having background services that needed direct Postgres access for heavy batch work. The co-located performance and €15/month flat cost made it an easy call.

The full stack — Supabase, Next.js, three background services, Nginx, and Umami — all runs on a single VPS. Happy to answer questions about any part of the setup.
