DEV Community

Discussion on: Why we migrated opensource 😼inboxkitten (77 million serverless request) from 🔥Firebase to ☁️Cloudflare workers & 🐑CommonsHost

Eugene Cheah • Edited

Thanks @tmikaeld! We did actually consider using Workers KV, or alternatively the Cache API - though it would very likely double our bill.

For a more commercial production use case, having such edge-like performance could be well worth the cost. For a hobby project, not so much haha 😅

Currently, we even limit the workers strictly to the required endpoints, to cut out random API scans from bots, etc. From the Firebase logs, we realized such traffic can be quite sizable, due to the site's popularity in Russia, China, and the USA.

That being said - caching the email response body is something we definitely want to do next! Less so for performance or cost... more to do it for fun!

Mikael D

Yeah, I realized after I wrote it that you'd still hit the Workers and incur a cost for it, so normal hosting would be cheaper.

Eugene Cheah • Edited

Though if you get creative, there might be some Rube Goldberg-level cost-cutting possible. Something one should never do in production, for code maintainability reasons (maybe?)

In general, our API functions in 2 major steps - listing of emails, and reading of the email content - with separate API calls. And due to the workflow, it's always in that sequence.

Going by the documentation of the Cloudflare VCL and Cache API, what we can do is split the API routing into:

  • listing of emails on a Cloudflare Worker
  • reading of the email body on Firebase (or something else), with standard Cloudflare caching in front of it

So during the initial "listing of emails" call, we can fill in the Cloudflare cache for the subsequent "reading of email" API call - for the top 10 emails in the list (if any).

And when the subsequent "reading of email" call is triggered by the user's click, it would hopefully be served from cache, saving a single origin hit... And if it misses, it would simply fall back to Firebase.
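A rough sketch of that cache-warming flow with the Workers Cache API. Everything here (the origin URL, the route shapes, the top-10 helper) is assumed for illustration, not taken from the real project:

```javascript
// All names here are placeholders - the real inboxkitten API and
// Firebase origin differ.
const FIREBASE_ORIGIN = "https://example-app.firebaseapp.com";

// Take at most the first 10 email ids from a listing response.
function topEmailIds(ids, limit = 10) {
  return ids.slice(0, limit);
}

// Fire-and-forget during the "listing" call: pre-fill this edge
// node's cache with the reads the user is likely to make next.
async function warmEmailCache(ids, cache) {
  await Promise.all(
    topEmailIds(ids).map(async (id) => {
      const url = `${FIREBASE_ORIGIN}/api/v1/mail/getHtml?id=${id}`;
      const res = await fetch(url);
      if (res.ok) await cache.put(url, res.clone());
    })
  );
}

// The "reading of email" handler: a cache hit skips Firebase
// entirely; a miss just falls back to the origin as before.
async function readEmail(request, cache) {
  const cached = await cache.match(request);
  if (cached) return cached;
  const fresh = await fetch(request);
  if (fresh.ok) await cache.put(request, fresh.clone());
  return fresh;
}
```

Inside a Worker, the warming would be wired up as `event.waitUntil(warmEmailCache(ids, caches.default))` in the listing handler, so the prefetch runs in the background and doesn't delay the listing response.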


But like I said, it's a crazy Rube Goldberg machine. And it would only technically work because there is a reasonable chance of hitting the same Cloudflare node, thanks to the magic of HTTP/2 connection reuse.

And that's assuming it works lol - I would be amused to see someone experiment with such a setup.