David Cox for Supabase

Changing Databases 5 Times in 48 Hours Boosted Our Launch to 35,000 Views

The way this story begins is about as wacky as its title. I started Integrated Reasoning in 2021 to find out what would happen if arbitrary-precision integers (e.g., 9-, 33-, and 65-bit registers) were used to solve knapsack problems. My thinking was that a flexible word size would make pseudo-polynomial-time algorithms for knapsack problems more efficient. In short, it does help, but the largest gains come from pipelining.

Fast-forward to mid-April 2024. We’d just started developing IRX, an optimization-as-a-service platform, to serve as the interface to Integrated Reasoning’s hardware-accelerated solvers. We planned an MVP that essentially moved the experience of running open-source solvers like HiGHS from the command line into the browser. The selling point: solver runs can take months or longer to complete, and not having to wrangle them would let our users focus on optimization instead of the infrastructure required to run it.

Pushing logs in real time from AWS to the browser was something I’d never done before. I figured it was roughly similar to streaming text from an LLM, which I’d also never done, but which has seen tons of recent development. On one hand, real-time log events might be overkill for an MVP. On the other hand, rendering logs in real time looks beautiful and would help us provide a user experience as smooth as the command line. We went for it.

Our initial plan was to use Prisma Pulse to subscribe to Postgres writes inside an edge function, which would send each line of stdout from HiGHS, CBC, and SCIP to the browser using Vercel streaming responses. Beyond its complexity, this plan had two major problems. The first: finding a managed Postgres instance with the right combination of table replication functionality to satisfy both Pulse and our backend was non-trivial, resulting in the absurdity that is the first half of this article’s title. The cascade of events that stemmed from changing Postgres providers five times in two days still blows my mind.

Ant Wilson on Twitter

The domino effect began when Long (@beertocode) from Supabase reached out with a challenge. He wanted to give people newly building with Supabase an extra push: if we built and launched with Supabase within two weeks, they’d use their supa-huge following on socials to make the launch a big one. Challenge accepted, and a challenge it was.

Social media opportunity is not usually part of the equation when evaluating database options. However, our launch goal was to gather as much user feedback as possible, so it was a factor we couldn’t ignore. Knowing almost nothing about Supabase’s product, I took an afternoon to read through all of their documentation. By that evening I had a completely different, much simpler, mental model for how to build IRX.

Supabase’s anonymous sign-in feature, coupled with row level security, meant that people could use IRX on launch day without going through a registration process. We’re talking zero barrier to entry, which is something I didn’t even think was possible in the seemingly never-ending cycle of “create another account and reset your password” that is life in 2024.

Anonymous sign-in proved extremely valuable to us by making it possible for nearly three hundred people to try solving mixed-integer programs with IRX on launch day — even if they had no idea what IRX was and were just playing around. We got incredible feedback from a wider variety of backgrounds than I’d have ever imagined.

IRX Traffic

The second of the two problems with our approach to real-time log streaming didn’t present itself until after we’d implemented the feature: Prisma Pulse allows at most 20 concurrent subscriptions per table. This limit wasn’t obvious until a resource leak consumed our entire connection quota instantaneously. Capping out at 20 concurrent users would not have made for a good launch, and the clock was ticking.

IRX Launch Timer

We were still debugging streaming when Long tweeted at us at the three-hour mark 😅.

Fortunately, by this point I’d read everything there is to read about Supabase, including the quota table for Realtime events, which supports 10,000 concurrent clients out of the box. Making the switch was easy, largely thanks to Supabase’s TypeScript type generation.

Other than some back-pressure issues, we managed to ship real-time log streaming for three solvers running simultaneously on potentially different spot instances.

IRX Infrastructure

IRX Solver Infrastructure

(IRX internal infrastructure diagrams by @baykovr)

Long put it best:

Long on Twitter

What’s really crazy is that I signed up for Resend because of their Supabase integration. Their founder @zenorocha blew up our Twitter a few days before the launch, adding another 16,000 views to our launch week traffic with a single tweet.

Zeno Rocha on Twitter

Here’s my sketch of IRX’s infrastructure on the day of the launch:

IRX Launch Day Infrastructure

We freaking launched. Thank you for the challenge Supabase!

Supabase on Twitter

Top comments (5)

Jonathan Gamble (@jdgamble555)

What are the 5 databases? Maybe I'm misunderstanding here?

David Cox (@david-r-cox)

Cross-posted to X here.

Sylvain (@sylvain361) • Edited

What's more mind-boggling is that you discovered a cap of 20 users only 2 days before launch.

David Cox (@david-r-cox)

Yeah, maybe I was misusing Pulse, but we were sweating badly when that limit hit us.