DEV Community

Fullstack Frontend

Cloudflare & Fauna: A Peek Into Performance

K ・ 5 min read

With Workers, Cloudflare offers a solid FaaS product. It's much lighter than AWS Lambda, deployed at the edge near your customers, and with Workers KV, you can even store many kinds of application state.

In April, Cloudflare announced database partnerships, seemingly because they aren't in the mood to build their own serverless data storage.

It would be awesome if Fauna were part of the Bandwidth Alliance

The Bandwidth Alliance is a group of forward-thinking cloud and networking companies that are committed to discounting or waiving data transfer (also known as bandwidth) fees for shared customers.

Waiving fees for moving data between Cloudflare and Fauna would be pretty rad.

Anyway, I played around with these technologies again for my SaaS product. This time I looked a bit into the performance of integrating them both.

Cloudflare Workers & Fauna

For this experiment, I created four Cloudflare Worker scripts.

  1. A script that does nothing to get a baseline.
  2. A script that loads data from memory.
  3. A script that loads data from Workers KV.
  4. A script that loads data from Fauna.

I took some JSON from Twitter, 676 KB of tweets, and pumped it into the respective data stores.

Then I did a bunch of reads and looked at the latency.
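The load numbers below come from the loadtest npm CLI. An invocation matching the settings in the reports (1000 requests, concurrency 10) would look roughly like this; the Worker URL is a placeholder:

```shell
# Install the loadtest CLI, then fire 1000 requests, 10 at a time,
# at the deployed Worker (the URL here is a placeholder).
npm install -g loadtest
loadtest -n 1000 -c 10 https://my-worker.example.workers.dev/
```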

1. Empty Worker

The empty Worker doesn't do anything.

addEventListener("fetch", (event) => event.respondWith(new Response()));

When I clicked around in Insomnia a bit, I got a latency of about 20 ms, which seemed to line up with what I read about Cloudflare Workers.

When I threw loadtest at it, I got this:

Max requests:        1000
Concurrency level:   10
Agent:               none
Completed requests:  1000
Total errors:        8
Total time:          93.1370978 s
Requests per second: 11
Mean latency:        895.6 ms
Percentage of the requests served within a certain time
  50%      84 ms
  90%      1161 ms
  95%      3096 ms
  99%      15114 ms
           21055 ms (longest request)

I won't pretend I know exactly what this means, but it seems that Workers behave differently under load. Or maybe my local internet connection is the problem? I can't really say, but perhaps people reading this will see it, think, "Ahhh, of course!" and enlighten me.
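For what it's worth, the gap between the mean (895.6 ms) and the median (84 ms) points to a long tail rather than uniformly slow requests. The percentile summary that loadtest prints can be reproduced from raw samples with something like this (a sketch; the sample values are made up):

```javascript
// Nearest-rank percentile: the value below which roughly p% of the
// latency samples (in ms) fall.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// Made-up latencies shaped like the run above: mostly fast, long tail.
const latencies = [80, 84, 85, 90, 120, 900, 1161, 3096, 15114, 21055];
console.log(percentile(latencies, 50)); // → 120
console.log(percentile(latencies, 99)); // → 21055
```

A handful of multi-second stragglers is enough to drag the mean far above the median, which is exactly the shape of the loadtest report.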

2. Memory Worker

Next comes the Worker that has the JSON data hardcoded as a JavaScript string inside the script file. I guess this has all kinds of implications I'm not aware of, but let's look into it anyway.

addEventListener("fetch", (event) => event.respondWith(new Response(data)));

const data = `{
  "...": {
    "created_at": "Fri May 28 09:10:02 +0000 2021",
    "id_str": "...",
    "full_text": "...",
    "display_text_range": [0, 268],
    "entities": {
      "user_mentions": [
        {
          "screen_name": "...",
          "name": "...",
          "id_str": "...",
          "indices": [216, 225]
        }
      ]
    },
    "source": "<a href=\"https://mobile.twitter.com\" rel=\"nofollow\">Twitter Web App</a>",
    "in_reply_to_status_id_str": "...",
    "in_reply_to_user_id_str": "...",
    "in_reply_to_screen_name": "...",
    "user_id_str": "...",
    "retweet_count": 0,
    "favorite_count": 7,
    "reply_count": 3,
    "quote_count": 0,
    "conversation_id_str": "...",
    "lang": "en",
    "self_thread": {
      "id_str": "..."
    }
  },

...

}`;

I included one of the tweet objects (there are 159 in the list) to give you some impression of the data I'm using.

I didn't try to parse or model the data any further to remove other overhead. This is just a simple in-memory cache. It doesn't make much sense in practice, since the cache has to be rebuilt from scratch every time the Worker cold-starts, but welp.

Insomnia shows an average of 200 ms when I click around a bit. 99% of that file is now a string, so parsing the script means parsing the string.
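If the in-memory variant were meant seriously, the string would at least be parsed once per isolate and the result reused across requests, instead of shipping the raw string every time. A minimal sketch (the `getTweets` helper is made up, not from the Worker above):

```javascript
// Cache the parsed tweets at module scope so repeated requests in the
// same Worker isolate skip JSON.parse; a cold start still pays once.
let parsed = null;
function getTweets(raw) {
  if (parsed === null) parsed = JSON.parse(raw);
  return parsed;
}
```

In the Worker above, the handler would call `getTweets(data)` and then serialize only the fields it actually needs.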

The load test revealed the following numbers:

Max requests:        1000
Concurrency level:   10
Agent:               none
Completed requests:  1000
Total errors:        2
Total time:          154.7164956 s
Requests per second: 6
Mean latency:        1460.2 ms
Percentage of the requests served within a certain time
  50%      809 ms
  90%      3497 ms
  95%      3765 ms
  99%      15265 ms
           21026 ms (longest request)

The 50th percentile latency increased about 10x, and the 90th percentile about 3x.

After that, things are mostly the same. I guess something is clogging up the tubes under load, haha.

3. KV Worker

Now we enter saner territory, at least in terms of data storage; the in-memory approach didn't make much sense.

I used Workers KV the same way as the memory store: I just pumped the whole JSON string into one key. This way, no iteration happens on data load, and it can be seen as a cache for data that would otherwise come from a remote database like Fauna.

addEventListener("fetch", (event) =>
  event.respondWith(
    DATASTORE.get("data")
    .then(data => new Response(data))
  )
);

The namespace is DATASTORE, the key is data.
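For reference, getting the JSON into that key can be done with wrangler's KV commands; something like this, assuming the namespace is bound as DATASTORE and the tweets live in a local tweets.json:

```shell
# Write the whole tweets file into the single "data" key of the
# DATASTORE namespace (binding name and file path are assumptions).
wrangler kv:key put --binding=DATASTORE "data" --path=tweets.json
```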

Insomnia gives me around 300 ms latency, so there is a noticeable difference between memory and Workers KV. At least it's not an order of magnitude, only about 50%.

The load test gave me:

Max requests:        1000
Concurrency level:   10
Agent:               none
Completed requests:  1000
Total errors:        6
Total time:          157.597486 s
Requests per second: 6
Mean latency:        1540.1 ms
Percentage of the requests served within a certain time
  50%      881 ms
  90%      3366 ms
  95%      3837 ms
  99%      15754 ms
           21055 ms (longest request)

In terms of load testing, things look even brighter. Not for Cloudflare Workers in general, haha, but at least for the difference between memory and KV storage.

The latency for 50% of the requests only went up around 10%.

4. Fauna Worker

Finally, let's test the Fauna implementation. Here I did a bit of data modeling. Every tweet became one document. The reason for more modeling in Fauna than in KV was that Fauna would probably be used as an actual database in production, and KV more to cache some Fauna requests without meddling too much with them.

import faunadb from 'faunadb';
import { customFetch } from './utils.js';
const { Collection, Documents, Get, Lambda, Map, Paginate, Var } = faunadb.query;

const db = new faunadb.Client({
  secret: '...',
  fetch: customFetch,
})

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
});

async function handleRequest(request) {
  const result = await db.query(
    Map(
      Paginate(Documents(Collection('data'))),
      Lambda('K', Get(Var('K'))),
    ),
  )

  return new Response(JSON.stringify(result.data))
}

Not much is happening here, just reading the tweet data from Fauna that I uploaded with a script beforehand. (One caveat: Paginate defaults to a page size of 64, so with 159 documents this query only returns the first page unless you pass a larger size option.)

Clicking around with Insomnia gave an average of 600 ms latency, so using KV is about twice as fast. Caching responses from Fauna in Workers KV therefore seems like a reasonable way to speed things up a bit.
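That suggests a read-through cache: serve from KV when the key is present, otherwise hit Fauna and backfill KV. A sketch with hypothetical names (`readThrough` and `loadFromDb` aren't from the article):

```javascript
// Read-through cache: try the KV-like store first; on a miss, run the
// loader (e.g. the Fauna query), store the result, and return it.
async function readThrough(kv, key, loadFromDb) {
  const cached = await kv.get(key);
  if (cached !== null && cached !== undefined) return cached;
  const fresh = await loadFromDb();
  await kv.put(key, fresh);
  return fresh;
}
```

In the Worker, `kv` would be the DATASTORE binding and `loadFromDb` the Fauna query from above; KV's `put` also accepts an `expirationTtl` option if stale data should age out.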

The load test showed me:

Max requests:        1000
Concurrency level:   10
Agent:               none
Completed requests:  1000
Total errors:        2
Total time:          164.3392169 s
Requests per second: 6
Mean latency:        1537.6 ms
Percentage of the requests served within a certain time
  50%      962 ms
  90%      3066 ms
  95%      3985 ms
  99%      9651 ms
           21048 ms (longest request)

Compared to the KV load test, things don't look much worse here.

Conclusion

I can't tell how much sense the load test makes here.

As I said, it could either be my machine or Cloudflare.

But I ran all tests with the same configuration, on the same internet connection, and on the same PC, so they can probably be compared with each other and give you a feeling for how different data storage affects your latency.

If I just look at my quick Insomnia tests, things look like this:

  • Empty 20 ms - base line
  • Memory 200 ms - around 10 times slower (than doing nothing, haha)
  • KV 300 ms - around 1.5 times slower than memory
  • Fauna 600 ms - around 2 times slower than KV

Discussion (2)

Matheus Calegaro

Interesting results! I've also run a similar experiment with CF Workers + Fauna and got response times of up to 1 s.

My code is here, btw

K Author

Good to know!
So, we're probably not complete morons :D
