K for Fullstack Frontend


Cloudflare Workers Introduction

In 2018, Cloudflare released Cloudflare Workers (CFW), a FaaS platform built on V8 and the Service Workers API standard. That year, I wrote a small article about the topic but didn't go much into detail. Today I will change that.

What are Cloudflare Workers?

CFW is a FaaS platform like AWS Lambda, but it differs from Lambda in several ways. It resembles Service Workers in the browser, but is also different from them.

Differences

How is CFW different from AWS Lambda?

Lambda is built on AWS Firecracker, a very fast and small virtualization for the cloud. It can host different runtimes, and you can even bring your own runtime as a Lambda Layer to execute functions in a way you like.

CFW is built on Google V8, the JavaScript engine that powers the Chrome browser. V8 allows creating multiple sandboxes inside one process, which removes much of the overhead and lowers cold-start delays drastically.

This leads us to the first constraint of CFW: you can't bring your own runtime as you can with Lambda Layers. But V8 supports WebAssembly, so if a language has WebAssembly as a compilation target, that is a way around the limitation.

CFW is also closer to Lambda@Edge than regular Lambda because CFW is always deployed on the edge of the Cloudflare network, and CFW doesn't require an extra API gateway like Lambda does.

For internal use cases, you can call Lambdas directly, but for public APIs, you need an API gateway in front of them. This adds latency, costs, and complexity. A CFW, in contrast, is deployed directly to a URL.

CFW also has some limits. A worker can only use 10ms of CPU time (50ms on the paid plan) and 128MB of memory. A worker script can be at most 1MB in size, and only 30 scripts are allowed per account, which means that if you can't fit your use case into 30x1MB, CFW isn't an option.

It's worth noting that if your CFW waits for something and doesn't use CPU time, it can run indefinitely and even perform actions after it responds to a request.
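
A worker can, for example, respond immediately and keep a background task running with event.waitUntil(). A minimal sketch, assuming a made-up logToAnalytics() helper and analytics URL:

addEventListener("fetch", (event) => {
  // Keep the worker alive for the logging promise, even after the response was sent.
  event.waitUntil(logToAnalytics(event.request));
  event.respondWith(new Response("Hello!"));
});

async function logToAnalytics(request) {
  // Fire-and-forget call to a made-up analytics endpoint.
  await fetch("https://analytics.example.com/hit", {
    method: "POST",
    body: JSON.stringify({ url: request.url }),
  });
}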

The free plan of CFW comes with 100k executions per day, and the unlimited plan allows one million executions for $0.50. Considering that CFW doesn't need an extra API gateway, this is quite cheap.

How is CFW different from Service Workers?

While CFW implements the Service Worker API, there are some differences, the obvious one being that they don't run in a browser but on a Cloudflare server. Server-side execution brings us to the first advantage over Service Workers: you can make requests to any origin without the browser's cross-origin restrictions.

CORS isn't an issue for CFWs; in fact, you can use a CFW to proxy an API that doesn't send CORS headers and make it accessible directly from the browser.

They aren't 100% Service Worker API compliant, though. For security reasons, you can't use eval() or new Function(), which means loading additional code at runtime doesn't work. Also, Date.now() doesn't advance during execution.
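
A small sketch of the frozen-clock behavior; heavyComputation() stands in for any CPU-bound work:

const before = Date.now();
heavyComputation();              // placeholder for pure CPU work, no I/O
const elapsed = Date.now() - before;
// elapsed stays 0 because the clock only advances on I/O,
// so you can't measure CPU time this way inside a worker.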

Like Service Workers, they can be used as a proxy between an API endpoint and the browser. Cloudflare will try to execute your worker for the defined path, and if it fails, you can opt-in to propagate the request to the origin.

Service Worker API compliance makes them easier to use for frontend devs, but also different from Node.js, which can lead to some libraries not working in the CFW environment.
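
As a rough illustration (not tied to any particular library): Node.js built-ins simply don't exist in the runtime, while Web APIs do.

// This fails in a CFW, because Node's module system isn't there:
// const fs = require("fs");   -> ReferenceError: require is not defined

// Web platform APIs like fetch, Response, and btoa are available:
addEventListener("fetch", (event) => {
  event.respondWith(new Response(btoa("hello from the edge")));
});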

Deployments

There are two ways to use CFWs.

Two Setups

The first and simplest one is as a stand-alone endpoint, just like you would set up with any web server.

You define your worker, give it a path, and that is it. It calculates things, renders a website, calls some third-party APIs, whatever you like. In this case, the worker has its own URL and can't pass through to another web server on an error.
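
A minimal sketch of such a stand-alone worker that just renders a small HTML page:

addEventListener("fetch", (event) => {
  // The worker itself is the endpoint; there is no origin server behind it.
  event.respondWith(
    new Response("<h1>Hello from the edge!</h1>", {
      headers: { "content-type": "text/html;charset=UTF-8" },
    })
  );
});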

The second one is as a proxy. It's like a service worker, only it doesn't run in the browser's background process.

You can set a path for your worker that already exists on your web server, and Cloudflare will execute the worker instead of your actual endpoint. You can then call your endpoint from within the CFW and do things with the response before sending it to the client. If you hit your execution limits or your CFW errors for some reason, you can pass the request to your endpoint as if nothing happened.

The second use case requires that Cloudflare handles DNS for the origin server.
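
A sketch of such a proxying worker; passThroughOnException() tells Cloudflare to fall back to the origin if the worker throws, and the added header is just for illustration:

addEventListener("fetch", (event) => {
  // On an uncaught error, forward the request to the origin as if no worker existed.
  event.passThroughOnException();
  event.respondWith(enhanceResponse(event.request));
});

async function enhanceResponse(request) {
  const response = await fetch(request);                    // call the original endpoint
  const modified = new Response(response.body, response);   // make the headers mutable
  modified.headers.set("X-Served-By", "cloudflare-worker"); // illustrative change
  return modified;
}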

Example: Creating a CORS Proxy Worker

Now that we know what a CFW is, let's create one to get a feeling for how they work!

Pre-Requisites

For this tutorial, you need Node.js v12 and a Cloudflare account.

Installing the CLI

Developing and deploying CFWs is done with the help of a CLI called Wrangler. We will install it via npm.

$ npm i @cloudflare/wrangler -g

Initializing a Project

The CLI can then be used to create a new CFW project.

$ wrangler generate cors-proxy
$ cd cors-proxy
$ npm i

Implementing the Worker Script

We will use the randomfox API because it's free, doesn't require an account, and doesn't send CORS headers. It responds with a JSON object whose image field points to a random fox picture.

The new project comes with an index.js file as an entry point. Replace the content of the file with the following code:

const API_URL = "https://randomfox.ca/floof/";
const PROXY_ENDPOINT = "/cors-proxy";
const HTML = `
<!DOCTYPE html>

<h1>GET without CORS Proxy:</h1>
<code id='noproxy'>Waiting</code>

<h1>GET with CORS Proxy:</h1>
<img id="proxy" src="">

<script>
(async () => {
  try {
    await fetch('${API_URL}');
  } catch(e) {
    document.getElementById("noproxy").innerHTML = e;
  }
  const response = await fetch(window.location.origin + '${PROXY_ENDPOINT}')
  let { image } = await response.json();
  document.getElementById("proxy").src = image;
})()
</script>
`;

// Respond to every request that hits the worker's route.
addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);

  // Anything outside the proxy path gets the demo page.
  if (!url.pathname.startsWith(PROXY_ENDPOINT))
    return event.respondWith(
      new Response(HTML, {
        headers: { "content-type": "text/html;charset=UTF-8" },
      })
    );

  // Only GET requests are proxied.
  if (event.request.method === "GET")
    return event.respondWith(handleRequest(event.request));

  // Everything else is rejected.
  return event.respondWith(
    new Response(null, { status: 405, statusText: "Method not allowed" })
  );
});

async function handleRequest(request) {
  const url = new URL(request.url);

  // Rewrite the incoming request so it targets the randomfox API.
  request = new Request(API_URL, request);
  request.headers.set("Origin", new URL(API_URL).origin);

  let response = await fetch(request);

  // Make the response mutable and attach the CORS headers.
  response = new Response(response.body, response);
  response.headers.set("Access-Control-Allow-Origin", url.origin);
  response.headers.append("Vary", "Origin");

  return response;
}

Let's go through the code step by step.

First, we define three constants.

  • API_URL is for the randomfox URL
  • PROXY_ENDPOINT is the path on our domain that acts as the proxy
  • HTML is an example page we will deliver in case the worker is visited from a browser

The example page will fetch two URLs, one directly from the randomfox API that will fail because of missing CORS headers, and one from our proxy. It will then set the src of an image element to the URL delivered by the API.

Next, we define our event listener for the fetch event. It responds to all paths that aren't our PROXY_ENDPOINT with the HTML. If the PROXY_ENDPOINT is requested with the GET method, it calls the handleRequest function.

The handleRequest function calls the third-party API and adds CORS headers to the response before sending it back to the client. This will tell the browser that the answer is safe to use for our client-side JavaScript.

If someone tries to use unsupported request methods, we will respond with an error status.

This small example is a stripped-down version based on this template.

The CORS header here is dynamic, based on the origin of the request. For security reasons, you would usually use a static list of allowed domains, so the proxied responses can only be used by browsers that visited your own sites.
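
A sketch of what such a static allowlist could look like (the domain is made up):

const ALLOWED_ORIGINS = ["https://my-frontend.example.com"];

function corsOriginFor(request) {
  const origin = request.headers.get("Origin");
  // Only echo the origin back if it's on the allowlist; otherwise omit the CORS header.
  return ALLOWED_ORIGINS.includes(origin) ? origin : "";
}

In handleRequest(), you would then only set Access-Control-Allow-Origin when corsOriginFor() returns a value.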

Deploying the Worker Script

We use the Wrangler CLI again to deploy the script. For this, we have to add our account ID to the wrangler.toml file; the ID can be found on your Cloudflare dashboard under the Workers menu item.
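
The generated wrangler.toml then looks roughly like this (the values are placeholders, and the exact fields can differ between Wrangler versions and templates):

name = "cors-proxy"
type = "javascript"
account_id = "<YOUR_ACCOUNT_ID>"
workers_dev = true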

After adding the ID, we can do a non-production deploy with the following command:

$ wrangler publish

Using the Worker

If everything went well, we can access our worker at https://cors-proxy.<ACCOUNT_NAME>.workers.dev.
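
You can also check the proxy endpoint directly from the command line (replace <ACCOUNT_NAME> with your workers.dev subdomain):

$ curl https://cors-proxy.<ACCOUNT_NAME>.workers.dev/cors-proxy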

Summary

Cloudflare Workers are an excellent tool for frontend devs who want to add backend functionality to their applications, be it transforming API responses or hiding API secrets from clients. If you build PWAs or JAMStack apps, you can even deliver the whole frontend via CFW without an actual backend.

Their adherence to web standards lets you feel right at home, and with a few commands, you get up and running in no time.

You have to keep in mind that they aren't as flexible as AWS Lambda, a more comprehensive platform that allows more configuration and has much higher limits on the number of scripts you can deploy and their file size.

Top comments (9)

Bernard Baker

Good article. I've been looking at offsetting large amounts of processing with service workers.

K • Edited

Thanks!

If you want to put "large amounts of processing" on them, keep an eye on the limits! :D

Bernard Baker

Ok

Kyle Galbraith

Nice post! Just a note on the differences between Lambda and Cloudflare workers. Yes, they are different than your traditional Lambda functions sitting behind API Gateway, so your first comparison figure is correct. But they are not drastically different than Lambda@Edge which removes the API Gateway and runs at the CloudFront edge.

Of course, there are very large differences between the two that you hit on, runtime, and the obvious network difference (Cloudflare edge vs Cloudfront edge).

Awesome summary and introduction to Cloudflare workers. I think in general they can be easier to get going with than Lambda@Edge. It might be cool to create a post launching each side by side to compare the challenges.

K

Yes, sounds like a good idea.

Renato Byrro

Awesome post, thanks Kay!

It was very good to know that the 10/50ms limit applies to CPU time, not runtime. Considering this, CFW suddenly became a viable and great option for one application I'm starting to work on. 👍

K

Cool!

I found this point confusing too.

Michael Tharrington

I've followed ya here for a while and never seen your image previously. Just to say, your mohawk is awesome.

K

Yes, I started this new project "Fullstack Frontend" and wanted to put myself out there a bit more :)

Glad you like it :D