12 Docker containers for Supabase, one for Convex — and zero bandwidth limits when you self-host it. Full migration playbook inside.
I was wrapping up a coding session, ready to grab a piña colada on the terrace. Then: a Convex alert. Usage limit almost reached.
My desire for alcohol vanished immediately.
First reaction: "Can't be. It's the 3rd. Quota just reset two days ago." Second: figure out which project was the culprit. Third: look at the queries. And there it was — db.query("metrics").collect() everywhere. Full table scan on every mutation. 12,500 rows. Four reactive queries re-running in a loop every time a single document changed.
I wrote that code. Well, Claude Code did. Same thing.
This had been in the back of my mind for a while — self-hosting Convex. I'd already verified it was possible. I just didn't know how. That evening, I had my answer for why it needed to happen.
TL;DR: Supabase self-hosting requires 12 Docker containers and 8 GB RAM minimum (not viable on a shared VPS). Convex is cleaner to self-host: one container, same API, unlimited bandwidth. The only real gotcha is reverse proxy multi-port routing. Here's the full playbook, including the migration prompt that prevents the two-line mistake I made.
[cover image]
Why I Left Supabase (The Short Version)
This story actually starts earlier. Before Convex, before the bandwidth alert — there was a Supabase problem.
Supabase cloud was eating margin I wasn't making. So I did what any reasonable dev does: I looked at self-hosting. Free, right? Just spin up the containers.
One day later. Twelve containers configured, Traefik finally routing to the right services after three false starts, Kong arguing with every env var I threw at it, the inter-service network doing its own thing for reasons that took two hours to debug. My VPS was sweating. But it was running.
Then I ran docker ps.
supabase-db. supabase-studio. supabase-kong. supabase-auth. supabase-storage. supabase-realtime. supabase-meta. supabase-imgproxy. supabase-vector.
Twelve containers. For a backend. Kong alone can sit at 2.5 GB under load. Minimum 8 GB RAM recommended for production. I have a VPS running five other services. That's a no.
(You can technically disable unused services to reduce resource usage. But then you're maintaining a partial, customized stack on a server you share with everything else. Full control, they say.)
I needed something lighter. That's how I found Convex, and discovered a set of advantages I wasn't expecting. If you want the full picture, I covered why Convex became my default stack for building AI-powered SaaS with Claude Code. Short version: everything in TypeScript, reactive queries by default, auto-migrations on deploy. And Claude Code generates code that actually compiles the first time because it's working in one language across the whole stack instead of juggling SQL migrations, Deno Edge Functions, and TypeScript frontend in separate contexts.
One concern I had: vendor lock-in. Convex went open source in February 2025. That concern evaporated.
So I migrated. Everything worked. And then, three months later, I nearly hit the 1 GB/day bandwidth limit.
The Wake-Up Call: 821 MB Out of 1 GB
The problem was the queries I'd written. (I mean, you know who... 😉)
db.query("metrics").collect() is a full table scan. Every row, every time. On a 12,500-row table. And because Convex reactive queries re-execute whenever any document in the queried table changes, these weren't running once. They were running constantly.
Four of them: getTopContents scanning all metrics to group by content, crossPlatformMetrics aggregating across platforms, velocity queries scanning the entire table for sparklines, mediumOverview reading all distributions for follower counts.
On March 3rd, that combination consumed 821 MB of my 1 GB daily quota. By evening. 💀
Before you say "just add indexes" — I know. I got there. But the story doesn't end there.
The Quick Fixes (Do These First)
The 80/20 move before even thinking about self-hosting: fix the queries.
Add indexes to your schema:
metrics: defineTable({
  content_id: v.id("contents"),
  platform_name: v.string(),
  period: v.string(),
})
  .index("by_period", ["period"])
  .index("by_platform_period", ["platform_name", "period"])
  .index("by_content_id", ["content_id"]),
Filter by period instead of scanning everything:
// Before: reads every row in the table
const allMetrics = await ctx.db.query("metrics").collect();
// After: reads only current month
const currentPeriod = new Date().toISOString().slice(0, 7); // "2026-03"
const metrics = await ctx.db
  .query("metrics")
  .withIndex("by_period", (q) => q.eq("period", currentPeriod))
  .collect();
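The aggregation queries (getTopContents and friends) follow the same pattern: narrow the read with an index, then group the much smaller result in memory. A sketch of the grouping half — the type and field names here are illustrative, not my actual project code:

```typescript
// Group already-fetched metric docs by content_id and sum their views.
// MetricDoc and its fields are illustrative assumptions, not the real schema.
type MetricDoc = { content_id: string; views: number };

function groupByContent(metrics: MetricDoc[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const m of metrics) {
    totals.set(m.content_id, (totals.get(m.content_id) ?? 0) + m.views);
  }
  return totals;
}
```

The point: grouping 400 current-period rows in memory is cheap. Reading 12,500 rows to do it is what eats the quota.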
Limit sparkline queries to recent periods. If you're drawing a 6-month chart, build the list of period strings you need and query each one with an index. Don't read three years of data for six data points.
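The period-list idea can be sketched like this — recentPeriods is an illustrative helper, not the project's actual code, and it assumes periods are stored in the same "YYYY-MM" format as above:

```typescript
// Build the last `count` "YYYY-MM" period strings, oldest first.
function recentPeriods(count: number, from: Date = new Date()): string[] {
  const periods: string[] = [];
  // Anchor on the first of the month in UTC to avoid timezone drift
  const d = new Date(Date.UTC(from.getUTCFullYear(), from.getUTCMonth(), 1));
  for (let i = 0; i < count; i++) {
    periods.push(d.toISOString().slice(0, 7));
    d.setUTCMonth(d.getUTCMonth() - 1); // rolls the year automatically in January
  }
  return periods.reverse();
}

// Inside the Convex query, fetch each period through the index instead of collect():
// for (const p of recentPeriods(6)) {
//   await ctx.db.query("metrics").withIndex("by_period", (q) => q.eq("period", p)).collect();
// }
```

Six indexed point-reads beat one full scan by orders of magnitude on a table that only grows.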
These three changes cut my bandwidth by roughly 80%. Claude Code had written the original queries — fast to generate, zero indexes, works fine until it doesn't. The fixes took about an hour.
But the 1 GB ceiling is still there. Every new feature chips away at it. Every imported article, every real-time dashboard refresh. The quota is structural. Optimization buys time.
Self-hosting removes the ceiling.
The Decision: Self-Host or Stay on Cloud?
If you already have a server running Docker, self-hosting Convex costs exactly $0 extra.
One container. Same CLI, same client libraries, same API. You point your app at a different URL and the rest of your code doesn't change. The backend is open source. You own it.
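Concretely, for a Next.js frontend the switch is one environment variable — the cloud URL below is a placeholder for whatever your deployment uses:

```env
# Before (Convex Cloud)
NEXT_PUBLIC_CONVEX_URL=https://your-deployment.convex.cloud

# After (self-hosted)
NEXT_PUBLIC_CONVEX_URL=https://convex.yourdomain.com
```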
This makes sense if you already have Docker and a reverse proxy configured, if you're approaching bandwidth limits, or if you want full data sovereignty on a project where managed SLAs don't matter.
It doesn't make sense if you'd need to rent a server just for this — the cloud tier is cheaper than you think. Same if you use Convex Auth or CDN file storage, neither of which exist in the self-hosted version yet. And if debugging Traefik at 4 AM sounds like a bad evening, stay on cloud. I say this without judgment.
My situation: I already have a dedicated server running Traefik, n8n, and several other services. Adding one container costs nothing. The math was obvious.
Why Two Subdomains
Convex self-hosted exposes two ports from a single container. Port 3210 for the backend — queries, mutations, WebSocket. Port 3211 for HTTP Actions — your REST endpoints.
Two ports, two subdomains:
convex.yourdomain.com → port 3210 (backend + WebSocket)
convex-http.yourdomain.com → port 3211 (HTTP actions)
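A minimal sketch of the compose service, adapted from the self-hosted setup in the Convex repo — the env var names and data path are assumptions to verify against the current docs:

```yaml
services:
  backend:
    image: ghcr.io/get-convex/convex-backend:latest  # pin a specific tag in production
    ports:
      - "3210:3210"  # backend + WebSocket
      - "3211:3211"  # HTTP actions
    environment:
      # These must match the two subdomains above
      - CONVEX_CLOUD_ORIGIN=https://convex.yourdomain.com
      - CONVEX_SITE_ORIGIN=https://convex-http.yourdomain.com
    volumes:
      - convex_data:/convex/data
volumes:
  convex_data:
```

With Traefik in front you'd normally drop the ports: mapping and let labels handle the routing — which is exactly where things went sideways.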
This architecture is also the root cause of the only real problem I hit.
The Traefik Trap
Quick context if you've never used Traefik: it's a reverse proxy that sits in front of your Docker containers and routes incoming traffic to the right service. The nice thing is it configures itself by reading labels on your containers — no config file to maintain. Most self-hosting tutorials use it. It works great.
Until one container exposes two ports.
04:31:36 CET. First docker compose up -d. Traefik logs, immediately:
ERR Router convex-actions cannot be linked automatically
with multiple Services: ["convex-actions" "convex-backend"]
ERR Router convex-backend cannot be linked automatically
with multiple Services: ["convex-actions" "convex-backend"]
The Convex container hadn't even finished starting. Traefik was already refusing the routing.
My initial labels looked like this:
labels:
  - traefik.enable=true
  - traefik.http.routers.convex-backend.rule=Host(`convex.yourdomain.com`)
  - traefik.http.routers.convex-backend.entrypoints=websecure
  - traefik.http.routers.convex-backend.tls.certresolver=letsencrypt
  - traefik.http.services.convex-backend.loadbalancer.server.port=3210
  - traefik.http.routers.convex-actions.rule=Host(`convex-http.yourdomain.com`)
  - traefik.http.routers.convex-actions.entrypoints=websecure
  - traefik.http.routers.convex-actions.tls.certresolver=letsencrypt
  - traefik.http.services.convex-actions.loadbalancer.server.port=3211
Looks right. It isn't.
When a single Docker container declares multiple Traefik services, auto-linking breaks. Traefik sees two services and doesn't know which router maps to which. In every tutorial you've ever read, there's one container and one port — so the mapping is implicit and works. With two ports, you have to make it explicit. Two lines:
labels:
  - traefik.enable=true
  - traefik.http.routers.convex-backend.rule=Host(`convex.yourdomain.com`)
  - traefik.http.routers.convex-backend.entrypoints=websecure
  - traefik.http.routers.convex-backend.tls.certresolver=letsencrypt
  - traefik.http.routers.convex-backend.service=convex-backend # ← this
  - traefik.http.services.convex-backend.loadbalancer.server.port=3210
  - traefik.http.routers.convex-actions.rule=Host(`convex-http.yourdomain.com`)
  - traefik.http.routers.convex-actions.entrypoints=websecure
  - traefik.http.routers.convex-actions.tls.certresolver=letsencrypt
  - traefik.http.routers.convex-actions.service=convex-actions # ← this
  - traefik.http.services.convex-actions.loadbalancer.server.port=3211
Two minutes between the error and the fix. The rule: whenever you route multiple ports from one container through Traefik's Docker provider, add explicit service= labels. Every time, no exceptions.
One note on Cloudflare: I run Cloudflare in front of my domains with the proxy disabled (DNS-only, grey cloud) for these subdomains. Let's Encrypt needs to reach your server directly to issue certificates. If you leave the orange cloud active, you'll get an SSL conflict you'll spend 20 minutes attributing to the wrong layer — DNS, then Traefik, then Convex. Disable the proxy for the migration.
This server already runs several self-hosted services, including the infrastructure behind rebuilding my AI agent setup from scratch after an API deprecation forced my hand. The philosophy is the same: own the stack, remove the recurring costs, keep the developer experience intact.
The Prompt That Prevents All of This
If you're using Claude Code to generate the docker-compose, give it this context upfront:
Self-host Convex (ghcr.io/get-convex/convex-backend) behind Traefik.
Single container, TWO ports: 3210 (backend + WebSocket) and 3211 (HTTP actions).
Each port needs its own subdomain with SSL.
Traefik Docker provider with Let's Encrypt already configured.
Four lines. That's the context that triggers the explicit service= pattern. Without it, every LLM defaults to the standard tutorial assumption: one container, one port, implicit mapping. Works great until you have two ports.
It will still hallucinate other things. Claude Code always does. But at least not this specific one.
The Migration (The Part I Barely Did)
I asked Claude Code: "Can we migrate my Convex cloud project to the self-hosted instance?"
I'd barely opened the docs tab to figure out where to start. Claude Code had already read them. "Checking prerequisites." Then: connecting to the server via SSH, reading the existing docker-compose, pulling the image, generating the admin key. I watched. That was my job — watching a terminal scroll faster than I could read it while my AI intern demolished a task I'd mentally budgeted two hours for.
Fifteen minutes later it told me to update two DNS records. I did. Done.
(For the record: I did supervise. Very attentively. From my chair.)
The sequence, for the curious: with the container running and SSL certificates issued — Let's Encrypt takes about 15 seconds, the first curl returns exit code 60, don't panic — migration is five steps.
Generate the admin key:
docker compose exec backend ./generate_admin_key.sh
Starts with convex-self-hosted|. Store it, never commit it.
Create .env.self-hosted in your project root:
CONVEX_SELF_HOSTED_URL=https://convex.yourdomain.com
CONVEX_SELF_HOSTED_ADMIN_KEY=convex-self-hosted|your-key-here
Every npx convex command needs --env-file .env.self-hosted from now on. Without it, the CLI silently targets the cloud. Easy to forget, dangerous to miss.
Export from cloud first:
npx convex export --path convex-backup.zip
This is your rollback. Do it before anything else.
Deploy schema, then import — in that order:
npx convex deploy --env-file .env.self-hosted --yes
npx convex import --env-file .env.self-hosted convex-backup.zip
Schema first creates the tables and indexes. Import needs them to exist. Swap the order and the import fails.
Set environment variables:
npx convex env list # list cloud vars first
npx convex env set --env-file .env.self-hosted YOUR_KEY "value"
Two things that catch people: NEXT_PUBLIC_* variables in Next.js are baked at build time. Changing them in Vercel does nothing until you redeploy. The dashboard looks updated. The app is still talking to the cloud. Redeploy.
And if you have HTTP Actions consumers — a static site, external webhooks — they need the HTTP Actions URL updated separately. Backend working doesn't mean actions work. Different port, different subdomain, test both.
Backups: Your Problem Now
#!/bin/bash
BACKUP_DIR="/path/to/backups"
DATE=$(date +%Y%m%d-%H%M)
VOLUME_NAME="your-project_convex_data" # check: docker volume ls
mkdir -p "$BACKUP_DIR"
docker run --rm \
  -v "${VOLUME_NAME}:/data:ro" \
  -v "${BACKUP_DIR}:/backup" \
  alpine tar czf "/backup/convex-${DATE}.tar.gz" -C / data
find "$BACKUP_DIR" -name "convex-*.tar.gz" -mtime +30 -delete
Add to cron, daily at 3 AM. The volume name includes your project directory as a prefix — check with docker volume ls if you're unsure.
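The matching crontab entry — the script path and log location are placeholders for your own setup:

```crontab
# Daily at 03:00; adjust the path to wherever you saved the backup script
0 3 * * * /path/to/backup-convex.sh >> /var/log/convex-backup.log 2>&1
```

Check the log after the first run. A backup script that silently fails is worse than no backup script, because you stop worrying.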
Keep your Convex Cloud instance alive for 48 hours after migration. Your cloud data was never modified. Rollback is one env var change and a redeploy. Three minutes.
Results
Latency: self-hosted on an OVH server in France runs at about 461 ms round-trip versus 413 ms on Convex Cloud US. Negligible. EU users hitting an EU server might even see better latency than the US-based cloud default.
Bandwidth anxiety: gone. I shipped two new dashboard features the week after without once thinking about quota.
The Supabase comparison holds up at the infrastructure level too: one Convex container at idle uses under 200 MB RAM. Supabase self-hosted starts at 12 containers and 8 GB recommended. For a side project on a shared VPS, that difference is everything.
Somewhere around midnight, I closed the terminal. The piña colada was still on the table, somehow.
Salud.
Checklist
□ Create 2 DNS A records pointing to your server IP
□ Disable Cloudflare proxy (grey cloud) for both subdomains
□ Write docker-compose.yml with explicit service= labels on both routers
□ docker compose up -d
□ Wait ~15s for SSL certs (exit code 60 on first curl = normal)
□ Generate admin key
□ Create .env.self-hosted
□ Export from cloud: npx convex export --path backup.zip
□ Deploy schema: npx convex deploy --env-file .env.self-hosted --yes
□ Import data: npx convex import --env-file .env.self-hosted backup.zip
□ Set env vars: npx convex env set --env-file .env.self-hosted KEY value
□ Test backend queries
□ Test HTTP actions separately (different subdomain)
□ Update NEXT_PUBLIC_CONVEX_URL + redeploy frontend
□ Update any HTTP Actions consumers
□ Set up backup cron
□ Keep cloud alive 48h as rollback
□ Verify backup runs next day
I write about what actually happens when you build production projects with AI tools — not the highlight reel. If that sounds useful, subscribe.
That cover image is 100% AI. I'm the guy who debugs Traefik labels at 4 AM — a designer I am not.