Disclosure: This post contains affiliate links. We may earn a small commission at no extra cost to you.
TL;DR: I migrated a multi-agent AI platform from Firebase to Supabase. The result: an 86% cost reduction, actual SQL queries instead of document-nesting gymnastics, row-level security that works, and a free tier that's actually usable for production. Here's the complete migration story with real numbers.
Let me be direct: Firebase is a great product for certain use cases. But if you're building anything with relational data, complex queries, or multi-tenant architecture, you will eventually hit a wall. I hit it 6 months in.
This is the story of migrating AEGIS — a multi-agent AI orchestration system with 16 agents, tenant isolation, and an audit trail — from Firebase to Supabase + PostgreSQL. Not a theoretical comparison. A real migration with real data and real costs.
What Was Wrong with Firebase?
Nothing was "wrong" with Firebase. It worked. But three problems kept getting worse:
1. The Nested Document Problem
Firebase's Firestore stores data as nested documents. When my agents needed to query "all messages from agent X in project Y, sorted by timestamp, with pagination" — that required a composite index, a collection group query, and careful denormalization.
The same query in PostgreSQL:
SELECT * FROM messages
WHERE agent_name = 'CEO' AND project_id = 'proj-123'
ORDER BY created_at DESC
LIMIT 20 OFFSET 0;
One query. No index configuration file. No denormalization.
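For contrast, the Firestore version of that query needs a composite index declared ahead of time. A hypothetical `firestore.indexes.json` entry (field names assumed to mirror the SQL columns) would look something like:

```json
{
  "indexes": [
    {
      "collectionGroup": "messages",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "agent_name", "order": "ASCENDING" },
        { "fieldPath": "project_id", "order": "ASCENDING" },
        { "fieldPath": "created_at", "order": "DESCENDING" }
      ]
    }
  ]
}
```

Forget to deploy this file and the query fails at runtime with an index error.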
2. The Multi-Tenant Security Nightmare
AEGIS needs tenant isolation — each organization's data must be completely invisible to others. In Firestore, this means writing security rules like:
// Firebase security rules — manual per-collection
match /messages/{messageId} {
  allow read, write: if request.auth.token.tenant_id == resource.data.tenant_id;
}
// Repeat for EVERY collection. Miss one and you have a data leak.
In PostgreSQL with Row-Level Security (RLS):
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON messages
  USING (tenant_id = current_setting('app.tenant_id')::uuid);
That's it. One policy per table. Applied automatically to every query. You can't accidentally bypass it.
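The one thing the policy needs from the application is the `app.tenant_id` session setting. A minimal sketch of the per-request setup (the tenant UUID here is a placeholder; `set_config` with its third argument `true` scopes the value to the current transaction):

```sql
-- Inside each request's transaction, before any queries:
BEGIN;
SELECT set_config('app.tenant_id', '00000000-0000-0000-0000-000000000456', true);
SELECT * FROM messages;  -- RLS now filters this to the current tenant automatically
COMMIT;
```

Connection poolers make this pattern important: because the setting is transaction-local, a pooled connection can't leak one tenant's context into the next request.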
3. The Cost Curve
Firebase charges per document read. For an AI system where agents constantly read and write messages, this adds up fast:
| Metric | Firebase (Monthly) | Supabase (Monthly) |
|---|---|---|
| Document reads | 2.1M ($0.84) | N/A |
| Document writes | 340K ($0.61) | N/A |
| Storage | 2.3 GB ($0.59) | 8 GB (free) |
| Auth | 12K MAU ($0.07/MAU) | 50K MAU (free) |
| Database compute | Included | Free tier |
| Total | ~$180/mo | ~$25/mo |
The reads were the killer. Every time a user opened the dashboard, every agent status check, every message list — all document reads. And Firebase counts each document individually.
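The read pricing is easy to model. Here's a quick sanity check on the table's numbers, assuming the roughly $0.04 per 100K reads rate implied by the $0.84 figure (actual Firestore rates vary by region and change over time):

```python
def firestore_read_cost(reads: int, rate_per_100k: float = 0.04) -> float:
    """Estimated monthly bill for document reads alone, at an assumed rate."""
    return reads / 100_000 * rate_per_100k

# 2.1M reads/month at the assumed rate
print(round(firestore_read_cost(2_100_000), 2))   # → 0.84

# The same workload at 10x scale: reads grow linearly with traffic
print(round(firestore_read_cost(21_000_000), 2))  # → 8.4
```

The raw read line item looks small; the point is that every dashboard load multiplies it, and reads scale with traffic in a way storage never does.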
How Did I Actually Migrate?
The migration took 3 weekends. Here's the approach:
Step 1: Schema Design (Weekend 1)
Firestore's document model doesn't translate 1:1 to relational tables. I had to make decisions:
// Firestore: nested document
{
  "messages": {
    "msg-001": {
      "from_agent": "CEO",
      "to_agent": "CTO",
      "content": "Review this proposal",
      "project_id": "proj-123",
      "tenant_id": "tenant-456",
      "thread_id": "thread-789",
      "created_at": "2026-01-15T10:00:00Z"
    }
  }
}
-- PostgreSQL: proper relational schema
CREATE TABLE messages (
    id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    from_agent  VARCHAR(50) NOT NULL,
    to_agent    VARCHAR(50) NOT NULL,
    content     TEXT NOT NULL,
    project_id  UUID REFERENCES projects(id),
    tenant_id   UUID NOT NULL,
    thread_id   UUID,
    created_at  TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_messages_tenant ON messages(tenant_id);
CREATE INDEX idx_messages_agent ON messages(from_agent, created_at DESC);
The relational model is immediately clearer. Joins, foreign keys, and indexes work as expected.
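To make "joins work as expected" concrete, here is the kind of query that needed denormalized copies of project data in Firestore but is a single statement here (assuming the `projects` table referenced by the foreign key has a `name` column; the tenant UUID is a placeholder):

```sql
-- Latest 20 messages with their project names, one round trip
SELECT m.from_agent, m.content, p.name AS project_name
FROM messages m
JOIN projects p ON p.id = m.project_id
WHERE m.tenant_id = '00000000-0000-0000-0000-000000000456'
ORDER BY m.created_at DESC
LIMIT 20;
```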
Step 2: Data Migration Script (Weekend 2)
I wrote a Python script that pulled from Firestore and inserted into Supabase:
import os

import firebase_admin
from firebase_admin import firestore
from supabase import create_client

# Connect to both (Firebase uses application default credentials;
# Supabase credentials come from environment variables)
firebase_admin.initialize_app()
fb_db = firestore.client()
sb = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# Migrate messages (with batching)
batch = []
for doc in fb_db.collection("messages").stream():
    data = doc.to_dict()
    batch.append({
        "id": doc.id,
        "from_agent": data["from_agent"],
        "to_agent": data["to_agent"],
        "content": data["content"],
        "tenant_id": data["tenant_id"],
        "created_at": data["created_at"].isoformat(),
    })
    if len(batch) >= 500:
        sb.table("messages").insert(batch).execute()
        batch = []

# Flush remaining
if batch:
    sb.table("messages").insert(batch).execute()
Key insight: migrate in batches of 500. Supabase's REST API handles bulk inserts well, and it keeps memory usage low for large collections.
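The batch-of-500 pattern is worth factoring out so it can be reused for every collection. A small standalone generator sketch (not Supabase-specific):

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")


def chunked(items: Iterable[T], size: int = 500) -> Iterator[List[T]]:
    """Yield lists of at most `size` items, flushing any remainder at the end."""
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch


# Usage with the migration loop would look like:
#   for batch in chunked(rows_from_firestore(), 500):
#       sb.table("messages").insert(batch).execute()
print([len(b) for b in chunked(range(1200), 500)])  # → [500, 500, 200]
```

Because it's a generator over a stream, memory stays flat no matter how large the source collection is.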
Step 3: Application Code Swap (Weekend 3)
The biggest change was replacing Firestore SDK calls with Supabase client:
# Before (Firebase)
docs = db.collection("messages") \
    .where("tenant_id", "==", tenant_id) \
    .where("from_agent", "==", "CEO") \
    .order_by("created_at", direction=firestore.Query.DESCENDING) \
    .limit(20) \
    .stream()

# After (Supabase)
response = supabase.table("messages") \
    .select("*") \
    .eq("tenant_id", tenant_id) \
    .eq("from_agent", "CEO") \
    .order("created_at", desc=True) \
    .limit(20) \
    .execute()
The API surface is similar enough that most changes were mechanical find-and-replace.
What About Real-Time?
Firebase's real-time listeners were the one feature I was nervous about losing. Supabase has real-time too: its Realtime server streams row changes from PostgreSQL's write-ahead log via logical replication, and the client subscribes to them:
const channel = supabase
  .channel('agent-messages')
  .on('postgres_changes', {
    event: 'INSERT',
    schema: 'public',
    table: 'messages',
    filter: `tenant_id=eq.${tenantId}`,
  }, (payload) => {
    console.log('New message:', payload.new);
  })
  .subscribe();
It works. The latency is slightly higher than Firestore's (50-100ms vs 20-50ms), but for my use case — agent message notifications — that's imperceptible.
6 Months Later: The Results
| Metric | Before (Firebase) | After (Supabase) | Change |
|---|---|---|---|
| Monthly cost | $180 | $25 | -86% |
| Query complexity | High (denormalization) | Low (SQL) | Simplified |
| Security model | Per-collection rules | RLS policies | Stronger |
| Backup strategy | Manual exports | pg_dump + automated | Better |
| Local development | Emulator (slow) | Docker PostgreSQL | Faster |
| Vendor lock-in | High (Firestore SDK) | Low (standard SQL) | Reduced |
The cost drop alone justified the migration. But the real win was developer experience. Writing SQL is faster than fighting Firestore's query limitations.
When Should You NOT Migrate?
Be honest — Supabase isn't always the right choice:
- Simple apps with <10K users: Firebase's free tier is generous enough. Don't migrate for the sake of it.
- Offline-first apps: Firestore's offline persistence is genuinely excellent. Supabase doesn't have a direct equivalent.
- Google ecosystem integration: If you're deep in Google Cloud (Cloud Functions, Cloud Run, BigQuery), Firebase's integration is seamless.
Key Takeaways
- PostgreSQL + RLS > Firestore security rules for multi-tenant apps. It's not even close.
- Migrate in batches of 500 using the Supabase REST API. Don't try to dump/import.
- Real-time works via Supabase's Realtime server streaming PostgreSQL's WAL. Slightly higher latency than Firestore, but functional.
- The cost curve matters. Firebase charges per document read. At scale, this dominates your bill.
- Local development is better with Docker + PostgreSQL than Firebase emulators.
Useful Resources
- Supabase — Free tier includes 500MB database, 50K monthly active users, and real-time subscriptions. Start here.
- DigitalOcean ($200 free credit) — If you want to self-host PostgreSQL instead of using Supabase's managed service, their managed database starts at $15/month.
- Vercel — Pairs well with Supabase for frontend hosting. Their Edge Functions work natively with Supabase's PostgREST API.
If you're hitting Firebase's walls — nested document queries, security rule sprawl, escalating read costs — the migration is worth it. Plan for 3 weekends, batch your data, and enjoy writing SQL again.
Stay Updated
I publish deep-dive technical articles 5x/week on AI agents, Python architecture, and developer tooling. Follow me here on dev.to or subscribe to the newsletter to get them in your inbox.
This article was generated with AI assistance and reviewed for accuracy. If you found it helpful, consider supporting the author: