Joe Ashwell

How I migrated from Firebase to Supabase

Late last year I launched unwindhr.com which is an HR tool for small teams.

This started as a freelance project and quickly snowballed into a full-blown SaaS product.

I didn't really give much thought as to which backend I would use. I already had 3 projects in Supabase (BOXCUT, MineWork & geomechanics.io), but also a few projects in Firebase too. At the time I was more concerned with actually building the product.

It wasn't long after launching and landing my first few clients that I really started to notice performance issues and super high read numbers. Admittedly, this is most likely due to my backend design, which led to fetching multiple nested collections based on various roles and permissions.

I was already starting to feel a little cornered in the whole Google ecosystem and a bit limited with stuff like backups, vendor lock-in, etc. (and you always have the obvious hanging over your head). Ultimately, I think I just find the mental model of a SQL database more intuitive than a NoSQL one. So I thought to myself: "the longer I leave it, the harder it'll be to make the switch".

I remember reading a blog post some time ago about this kind of migration, but immediately knew that approach was a little overkill for me. I had far fewer users, my db size was smaller, and I also just didn't fancy running both backends simultaneously.

My rough plan was:

  1. Export data from Firebase as a snapshot
  2. Design table schemas, policies, etc.
  3. Refactor code to work with Supabase
  4. Test
  5. Pull UnwindHR down over the weekend
  6. Export latest data from Firebase & import to Supabase
  7. Test again (a lot..!)
  8. Make the switch

My usage across Firebase was:

  1. Auth
  2. Firestore
  3. Storage
  4. Functions - although this wouldn't be required for Supabase

Overview

I primarily followed the official migration guide, but came across quite a few (of what I believe to be) very common gotchas. So here's a hodge-podge mix of official packages, custom scripts, and other tools I used to complete this migration.

Migrating Auth

To migrate authenticated users, I followed the official guide and it worked pretty seamlessly. I'm not going to replicate the steps here; best to keep one source of truth.

The only small caveat I have here is that using this package I couldn't successfully import passwords or persist user ids. Both were lost in the migration process (I tried a number of ways, but no such luck). However, I really wasn't too fussed, tbh; I feel it's better for security to just make users reset their passwords anyway. As for persisting the uid, my solution was to add an optional column, fb_id, to most of the tables, so I could always refer back to the corresponding Firebase id later.
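
For reference, kicking off that reset from the app is a one-liner with supabase-js. Here's a minimal sketch (assuming supabase-js v2; the project URL, key, and redirect route are placeholders):

const { createClient } = require("@supabase/supabase-js");

// Placeholders - swap in your own project URL and anon key
const supabase = createClient("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY");

// Send the password reset email for a migrated user
async function sendReset(email) {
  const { error } = await supabase.auth.resetPasswordForEmail(email, {
    redirectTo: "https://your-app.example.com/reset-password", // placeholder route
  });
  if (error) console.error(error.message);
}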

Migrating Firestore

Here's where it starts to get fun. If you have nested collections (I had quite a few), the official db package doesn't work so well: it only returns the top-level collections, which is really quite annoying.

This took me a while, but I eventually found this incredibly helpful guide which uses the node-firestore-import-export npm package.

Using that package, we can export our entire Firestore database into a single JSON file. Super useful! But I knew I was going to have pretty much a one-to-one relationship between a single collection and what would be a single table in Supabase. So I wrote a script to traverse the collections and split them out for ease during the migration. The script 👇

const fs = require("fs");
const path = require("path");

// Load JSON data from file
function loadJson(filename) {
  return JSON.parse(fs.readFileSync(filename, "utf8"));
}

// Save JSON data to file
function saveJson(data, filename) {
  fs.writeFileSync(filename, JSON.stringify(data, null, 2));
}

const collectionsAccumulator = {};

// Recursively traverse and aggregate data into collectionsAccumulator
function aggregateCollections(data, parentKey = "") {
  if (data && typeof data === "object") {
    if (data.__collections__) {
      Object.entries(data.__collections__).forEach(
        ([collectionName, collectionData]) => {
          const fullCollectionName = parentKey
            ? `${parentKey}_${collectionName}`
            : collectionName;

          if (!collectionsAccumulator[fullCollectionName]) {
            collectionsAccumulator[fullCollectionName] = [];
          }

          Object.values(collectionData).forEach((document) => {
            if (document.__collections__) {
              aggregateCollections(document, fullCollectionName);
              delete document.__collections__;
            }
            collectionsAccumulator[fullCollectionName].push(document);
          });
        }
      );
    } else {
      Object.values(data).forEach((value) => {
        if (typeof value === "object") {
          aggregateCollections(value, parentKey);
        }
      });
    }
  }
}

function writeAggregatedCollectionsToFiles(basePath) {
  Object.entries(collectionsAccumulator).forEach(
    ([collectionName, collectionData]) => {
      const filePath = path.join(basePath, `${collectionName}.json`);
      saveJson(collectionData, filePath);
      console.log(`Saved ${filePath}`);
    }
  );
}

function main() {
  const filename = "backup.json";
  const outputPath = "output";

  if (!fs.existsSync(outputPath)) {
    fs.mkdirSync(outputPath, { recursive: true });
  }

  const data = loadJson(filename);
  aggregateCollections(data);
  writeAggregatedCollectionsToFiles(outputPath);
}

main();

Now that the collections were separated into individual JSON files, I spent some time designing the Supabase table schemas, policies, etc., and then migrated each collection one by one.

There were also some manual edits I needed to make in each JSON file ahead of time, before migrating the data over: some column names were different, some types needed changing, and id became fb_id as I mentioned earlier.
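
I made these edits by hand, but a tiny script along these lines would do the same job. This is a sketch only - the old and new field names are made up for illustration, and it assumes the Firebase id is stored as a field on each document:

const fs = require("fs");

// Reshape one exported collection file before import
const docs = JSON.parse(fs.readFileSync("output/periods.json", "utf8"));

const reshaped = docs.map((doc) => ({
  fb_id: doc.id,                                // keep the old Firebase id around
  name: doc.title,                              // example column rename
  date_start: doc.startDate ?? null,            // example rename + default
  allow_weighting: Boolean(doc.allowWeighting), // example type coercion
}));

fs.writeFileSync("output/periods.supabase.json", JSON.stringify(reshaped, null, 2));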

I actually thought it would be easier to use SQL for each migration as I didn't have that much data; for reference, I only had a few hundred rows in each table. So I used the Supabase SQL editor to just insert the data.

SQL example 👇

WITH json_data AS (
  SELECT '[your json data]'::json AS data
)

INSERT INTO your_table (date_start, org_id, name, fb_id, allow_weighting, date_end, lock_date)
SELECT 
    (p->>'date_start')::timestamptz AS date_start, 
    (p->>'org_id')::bigint AS org_id, 
    p->>'name' AS name, 
    p->>'fb_id' AS fb_id, 
    COALESCE((p->>'allow_weighting')::boolean, false) AS allow_weighting, 
    (p->>'date_end')::timestamptz AS date_end, 
    (p->>'lock_date')::timestamptz AS lock_date
FROM json_data, json_array_elements(data) AS p;

Once my primary tables had been inserted (e.g., user_profiles), I could then start referencing these new ids as foreign keys in the other tables (what were previously the sub-collections). To do this, I just wrote another script to look up and change the ids on my machine before inserting into Supabase.
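
In essence it just builds a map from fb_id to the new Supabase id and rewrites the foreign key fields in each child collection's JSON - something along these lines (a rough sketch; the file and field names are hypothetical, and user_profiles.json is just a dump of id + fb_id from the new table):

const fs = require("fs");

// id + fb_id pairs exported from the new user_profiles table in Supabase
const userProfiles = JSON.parse(fs.readFileSync("user_profiles.json", "utf8"));
const idMap = new Map(userProfiles.map((u) => [u.fb_id, u.id]));

// A child collection produced by the split script, still holding Firebase uids
const docs = JSON.parse(fs.readFileSync("output/reviews.json", "utf8"));

const remapped = docs.map((doc) => ({
  ...doc,
  user_id: idMap.get(doc.user_id) ?? null, // swap the Firebase uid for the new Supabase id
}));

fs.writeFileSync("output/reviews.remapped.json", JSON.stringify(remapped, null, 2));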

Migrating Storage

OK, another one that didn't really work for me when following the official storage guide. A lot of the files seemed to become corrupted or end up in the wrong format during the download step. For example, PDFs just appeared as raw binary for some reason, although JPEGs and PNGs were fine.

I ended up going a different route. I used the Google Cloud SDK to just download my entire storage bucket onto my local machine. You'll need to go through a few steps first if you choose to do this too:

  1. install the Google Cloud SDK
  2. run gcloud init and follow the prompts (auth, project, etc.)
  3. run gsutil -m cp -r gs://YOUR_BUCKET_NAME/* THE_PATH_WHERE_YOU_WANT_IT

From there I simply dragged the entire folder structure (roughly 200 folders & files) to the Supabase UI and Supabase handled the rest! 🤌
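
Drag-and-drop was fine at my scale, but if you have a much bigger bucket, a small upload script is another option. Here's a sketch using supabase-js v2 with a service role key - the bucket name and local folder are placeholders, and you'd probably want to set a contentType per file too:

const { createClient } = require("@supabase/supabase-js");
const fs = require("fs");
const path = require("path");

// Service role key so storage policies don't get in the way (keep it server-side only)
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_SERVICE_ROLE_KEY);

// Recursively upload a local folder, mirroring its structure in the bucket
async function uploadDir(localDir, prefix = "") {
  for (const entry of fs.readdirSync(localDir, { withFileTypes: true })) {
    const fullPath = path.join(localDir, entry.name);
    const storagePath = prefix ? `${prefix}/${entry.name}` : entry.name;

    if (entry.isDirectory()) {
      await uploadDir(fullPath, storagePath);
    } else {
      const { error } = await supabase.storage
        .from("documents") // placeholder bucket name
        .upload(storagePath, fs.readFileSync(fullPath), { upsert: true });
      console.log(error ? `Failed: ${storagePath} (${error.message})` : `Uploaded: ${storagePath}`);
    }
  }
}

uploadDir("./firebase-storage-export").catch(console.error);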

Et voilà, migration done! 👁️⚡👁️

There were of course a million other tiny little hurdles, problems, and challenges along the way, but broadly speaking it was fairly painless.

It took me about 2 weeks (part time) and was 100% the right decision for me!

Any questions, just shout! ✌️

Top comments (2)

Dan Cramer 👨🏻‍💻

Great write-up of the things you had to go through. Thank you! Double check your first link, it looks like there's a typo

Joe Ashwell

Ah thanks so much @dancramer! There's always one typo somewhere 😂