We're big fans of Lovable. It's genuinely one of the best tools for going from an idea to a working app, and a lot of our customers build on it.
Their Cloud feature takes things further by managing a Supabase instance for you. No database setup, no hosting headaches. It's a great experience right up until the moment you want to connect something external: Zapier, email automations, analytics that talk to Postgres directly.
That's when you realize you don't have the database keys.
The official migration path off Cloud is rough. CSV exports one table at a time. Storage files uploaded individually. Every user has to reset their password. If you have real users, that last part is a non-starter.
We built an open-source exporter that handles the whole thing. Here's what we learned building it.
The approach
Both sides are Supabase, which means both sides are Postgres. That's the key insight: we don't need a custom migration format or intermediary API. Native Postgres tooling (pg_dump and psql) can move the data. The hard parts are everything Supabase layers on top.
The migration breaks down into three problems:
- Getting credentials out of a locked-down Lovable Cloud project (you don't have direct DB access)
- Cloning the database without breaking Supabase's auth system, system schemas, or foreign key ordering
- Copying storage files (avatars, uploads, assets) between Supabase Storage instances
Each one had a non-obvious solution.
Getting credentials out of a locked-down project
Lovable Cloud doesn't expose your database URL or service role key directly.
But here's the thing: it's still Supabase under the hood. And Supabase edge functions have access to environment variables like SUPABASE_DB_URL and SUPABASE_SERVICE_ROLE_KEY. Lovable Cloud doesn't block you from deploying edge functions, so we can use one as a credential bridge:
Deno.serve(async (_req) => {
  // access-key check omitted for brevity (full version in repo)
  return new Response(
    JSON.stringify({
      supabase_db_url: Deno.env.get("SUPABASE_DB_URL"),
      service_role_key: Deno.env.get("SUPABASE_SERVICE_ROLE_KEY"),
    }),
    { headers: { "Content-Type": "application/json" } },
  );
});
That's it. Deploy this as an edge function on the source project, and the exporter can connect. The full version includes access-key protection so the endpoint isn't open to anyone. After the migration, delete the edge function and rotate your secrets.
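On the exporter side, reading those credentials back is a plain HTTP call. Here's a minimal sketch of that client side; the function names and the `x-access-key` header are illustrative assumptions, not the exporter's actual API:

```typescript
type BridgeCredentials = {
  supabase_db_url: string;
  service_role_key: string;
};

// Parse and validate the bridge function's JSON body.
function parseBridgeResponse(body: string): BridgeCredentials {
  const parsed = JSON.parse(body) as Partial<BridgeCredentials>;
  if (!parsed.supabase_db_url || !parsed.service_role_key) {
    throw new Error("bridge response is missing credentials");
  }
  return parsed as BridgeCredentials;
}

// Call the deployed edge function with the shared access key.
async function fetchCredentials(
  edgeFunctionUrl: string,
  accessKey: string,
): Promise<BridgeCredentials> {
  const res = await fetch(edgeFunctionUrl, {
    headers: { "x-access-key": accessKey }, // header name is an assumption
  });
  if (!res.ok) throw new Error(`bridge returned ${res.status}`);
  return parseBridgeResponse(await res.text());
}
```

Validating the body before using it matters here: a misconfigured function that returns an empty object should fail loudly, not produce a migration run with undefined credentials.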
Schema vs. data: why you can't just pg_dump everything
The clone happens in four stages (all running inside a Docker container built on postgres:17-alpine, so users don't need Postgres tooling installed locally):
dump_schema → restore_schema → dump_data → restore_data
Schema first, because the target needs table definitions before it can accept rows. But you can't dump the schema as-is. Every Supabase project comes with a public schema and a standard comment on it. Restoring those into a fresh Supabase project causes conflicts:
import { readFile } from "node:fs/promises";

const rawSchema = await readFile(rawSchemaPath, "utf8");
const filteredSchema = rawSchema
  .split("\n")
  .filter(
    (line) =>
      line !== "CREATE SCHEMA public;" &&
      !line.startsWith("COMMENT ON SCHEMA public IS "),
  )
  .join("\n");
For data, we dump public and auth schemas together but skip transient, system-managed tables:
const EXCLUDED_TABLES = [
"auth.schema_migrations",
"storage.migrations",
"supabase_functions.migrations",
"auth.sessions",
"auth.refresh_tokens",
"auth.flow_state",
"auth.one_time_tokens",
"auth.audit_log_entries",
];
The auth.users table is not excluded. That's the key to migrating without password resets. Supabase stores hashed passwords in auth.users. Moving the row moves the hash. Users log in on the new instance with their existing password.
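Mechanically, the exclusion list becomes a set of repeated --exclude-table flags on the pg_dump invocation. A sketch of that translation (the helper name is ours, not the exporter's):

```typescript
const EXCLUDED_TABLES = [
  "auth.schema_migrations",
  "auth.sessions",
  // ... rest of the list above
];

// Turn the exclusion list into pg_dump arguments.
function excludeTableFlags(tables: string[]): string[] {
  return tables.map((table) => `--exclude-table=${table}`);
}

// e.g. ["pg_dump", dbUrl, "--data-only", ...excludeTableFlags(EXCLUDED_TABLES)]
```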
Streaming data through a FIFO pipe
The obvious approach for the data stage: pg_dump to a file, then psql from that file. This works until someone's database is larger than the 2GB disk limit on Cloudflare containers (where the hosted version runs).
Instead, the clone script creates a named FIFO (first-in-first-out) pipe:
mkfifo "$FIFO"
pg_dump "$SOURCE_DB_URL" --data-only --schema=public --schema=auth \
  --exclude-table=... --no-owner --no-acl > "$FIFO" &
DUMP_PID=$!
psql "$TARGET_DB_URL" -v ON_ERROR_STOP=1 < "$FIFO" &
RESTORE_PID=$!
wait $DUMP_PID
wait $RESTORE_PID
Data flows from source to target without ever fully materializing on disk. The container's storage requirements stay constant regardless of database size.
One subtlety: the target needs to accept rows in whatever order pg_dump emits them, even when foreign key constraints exist between tables. The script sets session_replication_role=replica on the target connection, which disables triggers, including the internal ones that enforce foreign keys, for the duration of the import. Postgres doesn't re-check those rows afterward; the dump's own internal consistency is what keeps the constraints valid.
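If you're scripting a restore like this yourself, one way to apply session_replication_role to the entire psql session is libpq's PGOPTIONS environment variable. A sketch under that assumption (buildRestoreEnv is our name for illustration):

```typescript
// Build the environment for the psql restore process so every statement
// in the session runs with session_replication_role=replica. libpq
// forwards PGOPTIONS entries as per-session server options.
function buildRestoreEnv(
  base: Record<string, string>,
): Record<string, string> {
  return {
    ...base,
    PGOPTIONS: "-c session_replication_role=replica",
  };
}

// usage idea:
// spawn("psql", [targetDbUrl, "-v", "ON_ERROR_STOP=1"], { env: buildRestoreEnv(env) })
```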
There's no "copy all storage" API
Database migration gets all the attention, but most Lovable apps also store files in Supabase Storage: avatars, uploads, assets. If you only move the database, your app loads and every image is a 404.
Supabase doesn't have a "copy all buckets to another project" endpoint. You have to list every bucket, recreate each one on the target (with matching settings: public/private, file size limits, allowed MIME types), then download and re-upload every object individually. Lovable Cloud storage sometimes has orphaned references (rows in the DB pointing to files that no longer exist).
The exporter walks the source buckets, recreates them on the target, and copies objects in parallel. When a file doesn't actually exist on the source (orphaned reference), the copy returns early instead of failing the whole migration:
const isMissingStorageObjectResponse = (status: number, body: string): boolean => {
if (status === 404) return true;
const lowered = body.toLowerCase();
if (lowered.includes('"error":"not_found"')) return true;
if (lowered.includes("object not found")) return true;
return false;
};
// in copyOneObject:
if (isMissingStorageObjectResponse(downloadResponse.status, errorBody)) {
return "skipped_missing";
}
Each object ends up as "copied", "skipped_missing", or "skipped_existing". The summary tells you exactly what happened across the whole migration.
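Tallying those outcomes into a final report is then a simple fold over the per-object results; a sketch of the shape (summarize is illustrative, not the exporter's code):

```typescript
type CopyStatus = "copied" | "skipped_missing" | "skipped_existing";

// Count each outcome across every object touched by the migration.
function summarize(results: CopyStatus[]): Record<CopyStatus, number> {
  const counts: Record<CopyStatus, number> = {
    copied: 0,
    skipped_missing: 0,
    skipped_existing: 0,
  };
  for (const status of results) counts[status] += 1;
  return counts;
}
```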
Gotchas we hit along the way
Permission checks before any writes. Early versions would fail mid-migration when the target lacked INSERT permissions on certain tables. The exporter now pre-checks both SELECT on source and INSERT on target for every table before starting.
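Postgres makes this kind of pre-flight cheap: has_table_privilege answers the question without attempting the write. One way such a check could be built (the helper is ours, for illustration):

```typescript
// Build a query asking whether the current role holds a privilege on a
// table, without actually performing the operation.
function buildPermissionCheck(
  table: string,
  privilege: "SELECT" | "INSERT",
): string {
  return `SELECT has_table_privilege(current_user, '${table}', '${privilege}');`;
}

// e.g. run buildPermissionCheck("public.profiles", "INSERT") on the
// target for every table before the first byte of data moves.
```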
Classified failure modes. A migration can fail for a lot of reasons:
- Source DB is unreachable
- Target database isn't empty
- Target is missing required permissions
- Credentials are wrong or expired
- Storage buckets can't be created
Each maps to a specific exit code with a human-readable hint so you know exactly what to fix. (All log output is sanitized to strip database passwords and service role keys before writing to stdout.)
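In code, that classification can be as simple as a lookup table. A sketch with made-up exit codes and hints (the real mapping lives in the repo):

```typescript
type FailureKind =
  | "source_unreachable"
  | "target_not_empty"
  | "missing_permissions"
  | "bad_credentials"
  | "bucket_create_failed";

// Hypothetical codes and hints, for illustration only.
const FAILURES: Record<FailureKind, { exitCode: number; hint: string }> = {
  source_unreachable: { exitCode: 10, hint: "Check the edge function URL and access key." },
  target_not_empty: { exitCode: 11, hint: "Point at a fresh, empty Supabase project." },
  missing_permissions: { exitCode: 12, hint: "Use the full-privilege connection string for the target." },
  bad_credentials: { exitCode: 13, hint: "Re-enter or rotate the keys." },
  bucket_create_failed: { exitCode: 14, hint: "Verify the service role key on the target." },
};

// on failure: log FAILURES[kind].hint, then exit with FAILURES[kind].exitCode
```

The point of the table isn't the specific numbers; it's that every failure path terminates through one place, which is also where log sanitization can be enforced.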
Failures and resumability without state
The hosted version runs on Cloudflare containers, and we picked containers specifically because they're ephemeral. Credentials pass through an isolated environment that gets destroyed after the migration. Nothing persists. That's a feature when you're handling someone's database URL and service role key, but it rules out saving job progress to disk and resuming where you left off.
Instead, the exporter treats each run as atomic. We require the target to be blank before starting, so if the database clone fails, you create a fresh Supabase project and run it again. No half-migrated state to untangle.
Storage is more best-effort by design. Files that fail to copy get skipped and logged rather than killing the whole migration. If you re-run, the copy engine scans the target first and skips anything that already made it across:
// before copying, collect what's already on the target
const existingTargetObjectPaths = await collectExistingObjectPaths(
targetProjectUrl, targetAdminKey, bucketId,
);
// then in the copy loop:
if (existingTargetObjectPaths.has(fullPath)) {
return "skipped_existing";
}
Not a checkpoint system, but in practice it gets you there.
After the migration
Once your data is in your own Supabase, you have direct Postgres access. That's the whole point.
Some things we've seen people connect:
- Zapier / Make for workflow automations triggered by database changes
- Dreamlit for database-driven transactional emails (connects to your Postgres, no API code)
- Analytics and monitoring that query Postgres directly
- Custom edge functions that need the service role key
You don't have to leave Lovable to build, either. Connect your own Supabase to a new Lovable project and your workflow stays the same. The difference is you own the infrastructure and can plug in whatever tools make sense.
Try it
Hosted (no setup): dreamlit.ai/tools/lovable-cloud-to-supabase-exporter
Run it locally:
git clone https://github.com/dreamlit-ai/lovable-cloud-to-supabase-exporter
cd lovable-cloud-to-supabase-exporter
pnpm install
pnpm exporter -- export run \
--source-edge-function-url <your-edge-function-url> \
--source-edge-function-access-key <your-access-key> \
--target-db-url <your-supabase-db-url> \
--target-project-url <your-supabase-project-url> \
--target-admin-key <your-service-role-key> \
--confirm-target-blank
The full source is on GitHub. MIT licensed.
If you've migrated off Lovable Cloud (or want to), we'd love to hear what you're building on your own Supabase. Drop a comment or find us on X.