Mihovil Ilakovac for Wasp


Wasp x Supabase: Smokin’ Hot Full-Stack Combo 🌶️ 🔥

TL;DR: In this post, I'll tell you about a hyper-productive stack for painlessly building full-stack apps with React & Node.js: Supabase and Wasp! We combined these two technologies to get auth, async jobs, full-stack type safety, a managed DB, and managed storage out of the box.

Hey, I’m Miho! 👋

I’m a senior full-stack dev and I’ve been in the business of dreaming up and creating projects for almost 10 years. Almost daily, I stumble upon a problem and want to build an app to fix it! That’s why I had to get good at doing it quickly, with as little hassle as possible.

After having used both Wasp and Supabase for a while, combining them seemed like a no-brainer to me. Turns out I was right!

No theory, we’ll build an app!

We've cooked up something interesting: a greeting cards generator that doesn’t just work but is also infinitely creative! Leveraging open-source AI models — yes, the shiny new Llama 3 and the super speedy SDXL-Lightning — we've brought this idea to life.

Need a visual? Here's a quick sketch I made (good thing I got that tablet!):

Sketch of the different app components, some of which are the Wasp full stack app and Supabase DB and Storage

And this is how our app looks once it's all polished and ready to go:

Check out the deployed version of our app — sign in with Google and get some sweet cards!

Llama has the last word

In our app, multiple models collaborate to produce a nice looking result.

Funny image of a llama telling a painter what to paint

It works like this:

  1. User gives us a topic
  2. Llama 3 generates the greeting card text (“text”)
  3. … it also describes some artwork that fits the text (“image prompt”)
  4. Stable Diffusion draws the artwork
  5. ???
  6. Profit!

Imagine requesting a greeting card for your three-year-old, bossy, red-dress-loving llama (because who wouldn't?!).

You'd get something adorable like this! 🦙:

Prompt: “a greeting card for my llamas 3rd birthday, it’s quite bossy and loves wearing red clothes”

Support us! 🙏⭐️


If you find this post helpful, consider giving us a star on GitHub! Everything we do at Wasp is open source, and your support helps us make web development easier and motivates us to write more articles like this one.


⭐️ Thanks For Your Support 🙏

How we pulled it off

That’s quite a cool greeting card IMHO, but we need a bit more to make this a proper app that works for our users.

We want to log in with Google

We used Wasp’s built-in auth, which makes your auth totally yours and independent of any 3rd-party service. Under the hood, it uses Lucia and Arctic to give you email, username, and multiple OAuth providers out of the box.

We didn’t need to work too hard beyond this code to set it up:

Wasp config file code
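In main.wasp, it boils down to something like this (a minimal sketch; the app and entity names are placeholders rather than the exact code from the project):

```wasp
app greetingCards {
  // ...other app settings (title, wasp version, db) omitted

  auth: {
    // The entity that represents a user in the app.
    userEntity: User,
    methods: {
      // Enabling Google login is literally this one line;
      // the OAuth client ID and secret go into .env.server.
      google: {}
    },
    onAuthFailedRedirectTo: "/login"
  }
}
```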

We want to split up the card-making process

Using Wasp's async jobs, we've split the card creation process into manageable steps, so users aren't left in the dark. They get playful updates like "Warming up the AI" and "Drawing the image" — making the wait a bit more bearable 🐻
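The job itself is declared in main.wasp with the PgBoss executor, and its perform function gets the declared entities through the context argument. Here's a rough TypeScript sketch of that function (the helper functions, statuses, and field names are illustrative, not the exact project code):

```ts
// src/cards/jobs.ts (illustrative file and helper names)
// Hypothetical helpers wrapping the Replicate calls sketched later in the post.
import { writeCardText, describeArtwork, drawCardArtwork } from "./ai";

type GenerateCardArgs = { cardId: string; topic: string };

// Wasp runs this through pg-boss and passes the declared entities via `context.entities`.
export const generateGreetingCard = async (args: GenerateCardArgs, context: any) => {
  const { Card } = context.entities;
  const { cardId, topic } = args;

  // Step 1: Llama 3 writes the card text and an image prompt for the artwork.
  await Card.update({ where: { id: cardId }, data: { status: "Warming up the AI" } });
  const text = await writeCardText(topic);
  const imagePrompt = await describeArtwork(topic, text);

  // Step 2: SDXL-Lightning draws the artwork from the image prompt.
  await Card.update({ where: { id: cardId }, data: { status: "Drawing the image", text } });
  const imageUrl = await drawCardArtwork(imagePrompt);

  // Step 3: done, save everything on the card.
  await Card.update({ where: { id: cardId }, data: { status: "Done", imageUrl } });
};
```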

These tasks are managed by pg-boss behind the scenes (a job queue that runs on top of PostgreSQL) and, oh look, that seamlessly connects to...

Managed PostgreSQL

It was a great experience using Supabase’s rock-solid PostgreSQL database for this app. The DX around the product is phenomenal: viewing and managing the DB data is a lifesaver when you don’t want to craft your own admin panel from scratch.
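Hooking Wasp up to it is mostly a matter of pointing DATABASE_URL in .env.server at the connection string from the Supabase dashboard (placeholder values below, not real credentials):

```
# .env.server
# Placeholder values: copy the real connection string from the Supabase dashboard.
DATABASE_URL=postgresql://postgres:<your-password>@db.<project-ref>.supabase.co:5432/postgres
```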

Screenshot of the Supabase table editor

Table Editor is great for quick admin work on the DB

Modern apps need modern storage

And for storage, we opted for Supabase’s S3-compatible storage option. This means our app doesn’t rely on having dedicated disk storage — making it more portable and more easily scalable.
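Pushing a generated image to a Supabase Storage bucket with supabase-js looks roughly like this (a sketch; the bucket name and env variable names are our own choices):

```ts
import { createClient } from "@supabase/supabase-js";

// Server-side client; the URL and service role key live in .env.server.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Uploads the generated PNG and returns a public URL we can save on the card.
export async function uploadCardImage(cardId: string, image: Buffer): Promise<string> {
  const path = `cards/${cardId}.png`;

  const { error } = await supabase.storage
    .from("greeting-cards") // assumed bucket name
    .upload(path, image, { contentType: "image/png", upsert: true });
  if (error) throw error;

  const { data } = supabase.storage.from("greeting-cards").getPublicUrl(path);
  return data.publicUrl;
}
```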

Overview of the greeting card images

Llama 3 70B model

Meta’s newest Llama 3 is an open-source contender to GPT-4 (and the even bigger 405B model is still training).

The text it produced was always usable and funny most of the time. I felt like it didn’t need that much prompt tweaking to get good results.

Prompts we used

Writing the greeting card:



Write a greeting card text for the following topic: "<topic>". Make it clever.

Return it as plain text, no quotes, no extra syntax.
Return only the greeting card text. Max chars: 80!



For example, if we used the topic Laughing about a thing we’d get the following result: “Laughter is the best medicine, unless you have health insurance, then that's probably better.”
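For context, calling Llama 3 on Replicate from Node looks roughly like this (a sketch; check the model's page on Replicate for the exact input parameters it accepts):

```ts
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// Asks Llama 3 70B Instruct to write the greeting card text for a topic.
export async function writeCardText(topic: string): Promise<string> {
  const prompt =
    `Write a greeting card text for the following topic: "${topic}". Make it clever.\n\n` +
    `Return it as plain text, no quotes, no extra syntax.\n` +
    `Return only the greeting card text. Max chars: 80!`;

  const output = await replicate.run("meta/meta-llama-3-70b-instruct", {
    input: { prompt },
  });

  // The model streams back an array of string chunks; join them into one string.
  return (output as string[]).join("").trim();
}
```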

Getting a usable image prompt:



Based on the text I'll provide, give me a nice artwork to go alongside it.
Describe it in a way of a short list of features of the artwork.
Use descriptive language so someone can paint it.
Only respond with the description, no extra syntax. Max words: 30

Context: <original_topic>

Text:
<text>



For the example above, we’d get the following image prompt: “Whimsical illustration of a smiling pill bottle surrounded by swirling laughter bubbles, with a subtle medical cross in the background, set against a warm, sunny yellow sky.”

Now, why did we do the second step? Just compare the images generated directly from the “text” and the “image prompt”:

Image generated with the text as prompt

Using the “text” directly

Image generated with the special image prompt

Using the “image prompt” generated by Llama 3

As you can see, the version based on the image prompt aligns much better aesthetically with greeting card vibes—colorful and friendly.

SDXL-Lightning (4-step variant) model

ByteDance built this model on top of Stable Diffusion XL and made it super fast: the greeting card images are created in 1-2 seconds. The images remind me of Midjourney’s quality, which means the model is doing a good job.
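Calling it on Replicate is just as simple. Here's a sketch (the model identifier and inputs are from memory of the model's Replicate listing, so double-check the model page for the exact version and parameters):

```ts
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

// Turns the Llama-generated image prompt into a greeting card artwork URL.
export async function drawCardArtwork(imagePrompt: string): Promise<string> {
  // You may need to pin a version, e.g. "bytedance/sdxl-lightning-4step:<version>".
  const output = await replicate.run("bytedance/sdxl-lightning-4step", {
    input: { prompt: imagePrompt },
  });

  // The model returns an array of image URLs; we only generate one image.
  return (output as string[])[0];
}
```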

Example SDXL-Lightning image 1: abstract art

Example SDXL-Lightning image 2: image of a cat

Example SDXL-Lightning image 3: image of an astronaut

Cost and Time to Generate

We used Replicate to run the models and the cost so far is 26 cents for 90 cards — which means it’s less than a third of a cent per card!

The combination of open-source models, minimal token use, and quick image generation keeps costs impressively low.

Producing a single card takes under 5 seconds, which helps if you are in a hurry 🙂

Give It a Try!

Check out the beautiful UI crafted with shadcn/ui over at the deployed version of our app — sign in with Google and get some sweet cards! Plus, the entire project is open source. Grab the code from GitHub.

Top comments (12)

Ankur Tyagi

love the writing style. nice post.

Mihovil Ilakovac

Thank you!

Karan Ganesh

Ok but what about the costs at scale? What will 100k hits do if they came in within 24 hours and how much would it cost?
Nice post though.

Mihovil Ilakovac

Since we are using Replicate, I believe big scale wouldn't be an issue.

I believe the costs would scale roughly linearly: 26 cents times 1000 would be about 260 bucks for 90,000 greeting cards!

I feel that's not a lot for that many greeting cards, and if you monetise them properly, the economics could work out in the end :)

Filip Sodić

Ah, one week too late - my friend had a wedding last weekend and got the most generic greeting card you could think of :)

Mihovil Ilakovac

"I wish you all the best in your new adventure" - Filip

Maybe if some other friend has their wedding this year, you could use this thing then :)

vincanger

I've been wanting to try out Llama 3. This looks awesome. 😎

Mihovil Ilakovac

It's quite a useful model.

For some stuff (converting a big JSON file into CSV while filtering) it performed better than GPT-4, because GPT-4 just said "it's quite a big file, here's just a few lines".

Matija Sosic

Ha this is awesome! Combining all the hottest tech in one neat package :)

Mihovil Ilakovac

Great things require great packaging!

zvone187

Oh, this is nice!!

Mihovil Ilakovac

Thanks, I'm glad you found it useful :)