Andy Jessop

Drastically Cut CI Time in an Nx Monorepo with Remote Task Caching: A Step-by-Step Guide

This post focuses on establishing a remote caching strategy for monorepo tasks. By implementing this approach, you can significantly reduce CI runtimes and lessen the load on other developers' machines. Essentially, each task is executed only once per commit, leading to considerable time savings.

In this guide, I'll demonstrate how to set up remote caching in a cost-effective (and likely free for standard usage) manner using Cloudflare's Workers and R2 bucket storage.

The process involves two main steps: first, we'll create a CDN using Cloudflare's infrastructure. Then, we'll develop a custom task runner for Nx, designed to efficiently manage cache by pushing to and pulling from the remote cache as necessary.

The complete code can be found here on GitHub, distributed under the MIT license.

Initial Setup

  1. Prerequisites: Cloudflare account and Node.js.
  2. Workspace Setup:
    • Create an Nx workspace using npx create-nx-workspace.
    • Modify nx.json and package.json to configure the workspace.
    • Create an apps folder.

Cloudflare Worker Configuration

  1. API Token Generation:
    • Create an API token via the Cloudflare dashboard.
    • Set up a .env file with Cloudflare account details.
  2. Cloudflare Worker Initialisation:
    • Initialise a Cloudflare Worker in the apps/worker folder.
    • Modify package.json in the root and apps/worker to manage dependencies.
  3. Deployment:
    • Deploy the worker with npx nx deploy worker.

R2 Instance and Worker Binding

  1. Create R2 Bucket:
    • Use the wrangler CLI to create a bucket in Cloudflare’s R2 storage.
    • Confirm the bucket creation and bind it to the worker.
  2. Implement Simple Authentication:
    • Add an API key for authentication in the .env file and Cloudflare.
  3. Worker Functionality:
    • Develop the worker’s API for CRUD operations on assets.
    • Deploy and test the worker’s functionality.

Custom Task Runner for Nx

  1. Caching Mechanism in Nx:
    • Understand Nx's caching mechanism based on task hashes.
  2. Custom Task Runner Development:
    • Develop a custom task runner to interface with the CDN for caching.
    • Ensure the custom task runner is integrated and functional within Nx.
  3. Remote Cache Implementation:
    • Implement a custom remote cache class to retrieve and store cache in the CDN.
    • Build and test the custom task runner with CDN integration.
  4. Testing the Custom Task Runner:
    • Run tasks and check the output to confirm our custom task runner is working.

There's a lot to get through, so let's get started! And if you get stuck at any point, please make some noise in the comments and I'll try to help.

Initial Setup

Prerequisites

You'll need a Cloudflare account and Node.js (which includes npm/npx) installed.

Set up the Nx Workspace

Let's start by initialising the workspace.

$ npx create-nx-workspace

After running this command, Nx will ask you a few questions regarding how you want it set up. Choose the following options.

$ npx create-nx-workspace
Need to install the following packages:
create-nx-workspace@17.2.8
Ok to proceed? (y)

 >  NX   Let's create a new workspace [https://nx.dev/getting-started/intro]

✔ Where would you like to create your workspace? · cachier
✔ Which stack do you want to use? · none
✔ Package-based monorepo, integrated monorepo, or standalone project? · integrated
✔ Enable distributed caching to make your CI faster · No

 >  NX   Creating your v17.2.8 workspace.

   To make sure the command works reliably in all environments, and that the preset is applied correctly,
   Nx will run "npm install" several times. Please wait.

✔ Installing dependencies with npm
✔ Successfully created the workspace: cachier.

This will create a workspace inside a new directory, cachier. Open it in your editor and we'll finish the setup. There are two things we need to do.

  1. By setting analyzeSourceFiles, we have Nx check for tasks inside the package.json files of our packages. This is my preferred way of working with Nx, because it means there is no magic hidden inside plugins. So, in nx.json:
"pluginsConfig": {
  "@nrwl/js": {
    "analyzeSourceFiles": true
  }
}

  2. Then we want to declare an npm workspaces field so that Nx knows where to look for our apps. Add this to the root package.json:

"workspaces": [
  "apps/*"
]

And create the apps folder:

mkdir apps

The workspace is now set up! Let's create a Cloudflare Worker, which will serve as our CDN.

Cloudflare Worker Configuration

API Token Generation

In your Cloudflare dashboard, create a new API token:

Visit https://dash.cloudflare.com/profile/api-tokens

Create Token
Use Template: Edit Cloudflare Workers

You'll receive your API key, keep it safe for the next step. To be able to link and deploy your worker, we need to add the API key and your Account ID to the environment. Create a .env file in the root of the project with the following contents:

CLOUDFLARE_ACCOUNT_ID=your-account-id
CLOUDFLARE_API_TOKEN=the-api-token-you-just-created

Cloudflare Worker Initialisation

Now we're going to generate the code for the worker. Run the following command from the root.

$ npm create cloudflare@latest

When asked in which directory you want to create your application, choose apps/worker. Then choose the "Hello World" Worker, and TypeScript.

 $ npm create cloudflare@latest

using create-cloudflare version 2.9.0

╭ Create an application with Cloudflare Step 1 of 3
│
├ In which directory do you want to create your application?
│ dir ./apps/worker
│
├ What type of application do you want to create?
│ type "Hello World" Worker
│
├ Do you want to use TypeScript?
│ yes typescript
│
├ Copying files from "hello-world" template
│
├ Retrieving current workerd compatibility date
│ compatibility date 2023-12-18
│
╰ Application created

╭ Installing dependencies Step 2 of 3
│
├ Installing dependencies
│ installed via `npm install`
│
├ Installing @cloudflare/workers-types
│ installed via npm
│
├ Adding latest types to `tsconfig.json`
│ skipped couldn't find latest compatible version of @cloudflare/workers-types
│
╰ Dependencies Installed

╭ Deploy with Cloudflare Step 3 of 3
│
├ Do you want to deploy your application?
│ no deploy via `npm run deploy`
│
├  APPLICATION CREATED  Deploy your application with npm run deploy
│
│ Navigate to the new directory cd apps/worker
│ Run the development server npm run start
│ Deploy your application npm run deploy
│ Read the documentation https://developers.cloudflare.com/workers
│ Stuck? Join us at https://discord.gg/cloudflaredev
│
╰ See you again soon!

This will add all the scaffolding and the required dependencies to the package.json inside apps/worker. But this isn't quite what we want, because in an Nx monorepo you will generally want to install all dependencies at the root so that they are shared between apps and packages. So let's move the devDependencies from apps/worker/package.json to the package.json at the root. It should now look something like this:

{
  "name": "@cachier/source",
  "version": "0.0.0",
  "license": "MIT",
  "scripts": {},
  "private": true,
  "dependencies": {},
  "devDependencies": {
    "@nx/js": "17.2.8",
    "@nx/workspace": "17.2.8",
    "nx": "17.2.8",
    "@cloudflare/workers-types": "^4.20231218.0",
    "typescript": "^5.0.4",
    "wrangler": "^3.0.0"
  },
"workspaces": [
  "apps/*"
  ]
}

And your apps/worker/package.json should look like this:

{
  "name": "worker",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "deploy": "wrangler deploy",
    "dev": "wrangler dev",
    "start": "wrangler dev"
  }
}

Now, we'll install all the deps, again from the root (you will generally run all commands from the root when inside an Nx monorepo).

$ npm install

Deployment

This is looking great! Let's see if we've got everything configured correctly - we'll try to deploy our worker. As we have the deploy script in our apps/worker/package.json and we set Nx up earlier to analyzeSourceFiles, we should be able to run that task from the root with the following command.

$ npx nx deploy worker

And the output should look like this:

$ npx nx deploy worker

> nx run worker:deploy

> worker@0.0.0 deploy
> wrangler deploy
 ⛅️ wrangler 3.22.4
-------------------
🚧 New Workers Standard pricing is now available. Please visit the dashboard to view details and opt-in to new pricing: https://dash.cloudflare.com/f7cbdb419b792ed0c1d70e2/workers/standard/opt-in.
Total Upload: 0.19 KiB / gzip: 0.16 KiB
Uploaded worker (1.36 sec)
Published worker (0.75 sec)
  https://worker.[your-cloudflare-domain].workers.dev
Current Deployment ID: d14a9a2d-7b6e-4f58-a5ee-c069a3d3e745

 ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————

 >  NX   Successfully ran target deploy for project worker (6s)

If your deployment didn't work correctly, check that you have the .env file with the correct details added.

Creating an R2 Instance and Binding it to the Worker

Create R2 Bucket

From the root, run the following command:

$ ./node_modules/.bin/wrangler r2 bucket create cachier-bucket

Note: as we haven't installed wrangler globally (I recommend against it, because you might end up with conflicting versions down the line), we're running the binary directly from node_modules.

Wrangler is going to prompt you to log in to your Cloudflare account and authorise it to make changes on your behalf. If you'd rather not do that, you'll have to log in to Cloudflare separately and create the bucket via the UI.

If you authorised it, come back to the terminal and you should see this:

$ ./node_modules/.bin/wrangler r2 bucket create cachier-bucket
 ⛅️ wrangler 3.22.4
-------------------
Attempting to login via OAuth...
Opening a link in your default browser: https://dash.cloudflare.com/oauth2/auth?response_type=code&client_id=54d11594-84e4-41aa-b438-e81b8fa78ee7&redirect_uri=http%3A%2F%2Flocalhost%3A8976%2Foauth%2Fcallback&scope=account%3Aread%20user%3Aread%20workers%3Awrite%20workers_kv%3Awrite%20workers_routes%3Awrite%20workers_scripts%3Awrite%20workers_tail%3Aread%20d1%3Awrite%20pages%3Awrite%20zone%3Aread%20ssl_certs%3Awrite%20constellation%3Awrite%20ai%3Aread%20offline_access&state=yq1qt0kv5y.8UNjib7fzPw6qJkyYWnM.&code_challenge=tFXmbUZbQfgUXUhx_R1GSYar4nUrPan4dJQWxsJbBpE&code_challenge_method=S256
Successfully logged in.
Creating bucket cachier-bucket.
Created bucket cachier-bucket.

Looks good, but let's check the bucket is available anyway:

$ ./node_modules/.bin/wrangler r2 bucket list
[
  {
    "name": "cachier-bucket",
    "creation_date": "2024-01-13T20:22:35.223Z"
  },
]

Nice. Now we can bind it to the worker.

In your apps/worker/wrangler.toml add this:

[[r2_buckets]]
binding = 'CACHIER_BUCKET' # <~ valid JavaScript variable name
bucket_name = 'cachier-bucket'

The binding is the variable name we will use in the worker to access the R2 API.

Implement Simple Authentication

First, we're going to add some simple authentication via an API key. The worker will check for the presence of this key before doing any operation on the bucket. We'll need the key both locally and in the worker. For local operation, we'll add the API key to our .env file in the root of the project.

CACHIER_API_KEY=some-key-you've-just-made-up

And to give the worker knowledge of it, we need to add the same key to Cloudflare using Wrangler.

$ ./node_modules/.bin/wrangler secret put CACHIER_API_KEY --name worker
 ⛅️ wrangler 3.22.4
-------------------
✔ Enter a secret value: … ************************************
🌀 Creating the secret for the Worker "worker"
✨ Success! Uploaded secret CACHIER_API_KEY

We'll see in the next section how we will use this key to secure the CDN.

Worker Functionality

Note that we passed --name worker in that command, which is the name of our...worker. Now it's finally time to write some code. The API is going to look like this:

# Retrieve an asset from the CDN
GET https://worker.[your-cloudflare-domain].workers.dev/assets/[asset-name]

# Add an asset to the CDN
POST https://worker.[your-cloudflare-domain].workers.dev/assets/[asset-name]

# Delete an asset from the CDN
DELETE https://worker.[your-cloudflare-domain].workers.dev/assets/[asset-name]

# List all assets for a given hash
GET https://worker.[your-cloudflare-domain].workers.dev/list?prefix=[hash]

Let's open up apps/worker/src/index.ts and replace the contents with this:

interface Env {
  CACHIER_BUCKET: R2Bucket;
  CACHIER_API_KEY: string;
}

export default {
  async fetch(request: Request, env: Env) {
    const apiKey = request.headers.get('x-cachier-api-key');

    if (apiKey !== env.CACHIER_API_KEY) {
      return new Response('Unauthorized', { status: 401 });
    }

    const url = new URL(request.url);
    const pathname = url.pathname.slice(1); // remove leading slash

    if (pathname.startsWith('assets')) {
      const key = pathname.slice('assets/'.length);
      switch (request.method) {
        case 'POST':
          await env.CACHIER_BUCKET.put(key, request.body, {
            httpMetadata: request.headers,
          });
          return new Response(`Put ${key} successfully!`);
        case 'GET':
          const object = await env.CACHIER_BUCKET.get(key);

          if (object === null) {
            return new Response('Object Not Found', { status: 404 });
          }

          const headers = new Headers();
          object.writeHttpMetadata(headers);
          headers.set('etag', object.httpEtag);

          return new Response(object.body, {
            headers,
          });
        case 'DELETE':
          await env.CACHIER_BUCKET.delete(key);
          return new Response('Deleted!');

        default:
          return new Response('Method Not Allowed', {
            status: 405,
            headers: {
              Allow: 'POST, GET, DELETE',
            },
          });
      }
    }

    if (pathname.startsWith('list')) {
      const prefix = url.searchParams.get('prefix');

      if (!prefix) {
        return new Response('Bad Request', { status: 400 });
      }

      const objects = await env.CACHIER_BUCKET.list({ prefix });

      return new Response(JSON.stringify(objects));
    }

    // Fall through for any other path.
    return new Response('Not Found', { status: 404 });
  },
};

This is the simplest possible usage of R2, but it does everything we need for the moment. Let's go through a few key sections.

const apiKey = request.headers.get('x-cachier-api-key');

if (apiKey !== env.CACHIER_API_KEY) {
  return new Response('Unauthorized', { status: 401 });
}

This is our rudimentary authorisation. Crude, but it works for our purposes.

So, for our POST case we're putting the file into the bucket with the put method.

case 'POST':
  await env.CACHIER_BUCKET.put(key, request.body);
  return new Response(`Put ${key} successfully!`);

For our GET case we first check its existence, then return it with an ETag header, which will save a few KB over the long run (what is an ETag?)

case 'GET':
  const object = await env.CACHIER_BUCKET.get(key);

  if (object === null) {
    return new Response('Object Not Found', { status: 404 });
  }

  const headers = new Headers();
  object.writeHttpMetadata(headers);
  headers.set('etag', object.httpEtag);

  return new Response(object.body, {
    headers,
  });
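
The ETag only pays off if clients send conditional requests and the worker honours them, which the code above doesn't do. If you want that, my understanding of the R2 Workers API is that get() accepts an onlyIf option that can evaluate the incoming conditional headers for you. The sketch below (a hypothetical getAsset helper reusing the Env interface from the worker) is an optional extension to verify against the R2 docs, not part of the original setup.

// Optional extension (not in the worker above): honour conditional requests so a
// repeat download of an unchanged asset can return 304 instead of the full body.
async function getAsset(env: Env, key: string, request: Request): Promise<Response> {
  const object = await env.CACHIER_BUCKET.get(key, {
    onlyIf: request.headers, // let R2 evaluate If-None-Match / If-Match for us
  });

  if (object === null) {
    return new Response('Object Not Found', { status: 404 });
  }

  const headers = new Headers();
  object.writeHttpMetadata(headers);
  headers.set('etag', object.httpEtag);

  // When the precondition fails, R2 returns the metadata without a body.
  if ('body' in object && object.body) {
    return new Response(object.body, { headers });
  }

  return new Response(null, { status: 304, headers });
}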

And for the DELETE case, we just delete from the bucket.

case 'DELETE':
  await env.CACHIER_BUCKET.delete(key);
  return new Response('Deleted!');

For the list endpoint, we're getting the hash from the URL search params, and because the hash is the prefix for all relevant assets for that hash, we're using the list method with prefix equal to the hash. This will return an object listing all the assets we have cached for that hash.

const prefix = url.searchParams.get('prefix');

if (!prefix) {
  return new Response('Bad Request', { status: 400 });
}

const objects = await env.CACHIER_BUCKET.list({ prefix });

return new Response(JSON.stringify(objects));
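
For reference, the JSON that comes back mirrors R2's list() result. Trimmed down to the fields we'll actually use later (plus a couple for orientation), it looks roughly like the shape below; treat the exact field set as illustrative rather than exhaustive.

// Approximate shape of the /list response (illustrative, not exhaustive).
interface ListResponse {
  objects: Array<{
    key: string;      // e.g. "12345678901234567890.commit" or "12345678901234567890/terminalOutput"
    size: number;     // object size in bytes
    uploaded: string; // upload timestamp, serialised by JSON.stringify
  }>;
  truncated: boolean; // true if more results exist beyond this page
}

The custom task runner we build later only cares about objects[].key.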

This looks pretty good now, let's deploy and test it out!

$ npx nx deploy worker

> nx run worker:deploy

> worker@0.0.0 deploy
> wrangler deploy
 ⛅️ wrangler 3.22.4
-------------------
🚧 New Workers Standard pricing is now available. Please visit the dashboard to view details and opt-in to new pricing: https://dash.cloudflare.com/f7cbdem4142b792sae6aa5dab0eg0cwd70e2/workers/standard/opt-in.
Your worker has access to the following bindings:
- R2 Buckets:
  - CACHIER_BUCKET: cachier-bucket
Total Upload: 1.20 KiB / gzip: 0.54 KiB
Uploaded worker (1.66 sec)
Published worker (0.64 sec)
  https://worker.[your-cloudflare-domain].workers.dev
Current Deployment ID: a1fre82-bhc7-4d26-87a0-b29044c66db16

 ————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————

 >  NX   Successfully ran target deploy for project worker (8s)

Hopefully you're seeing something similar to the above. Now let's create a test file to try out our new CDN. Create a new folder called tmp and add a file, test.txt, with the following contents:

This is a test!

Now, we'll push it to the CDN.

$ curl -H "x-cachier-api-key: [your-api-key]" \
       -X POST \
       -d @tmp/test.txt \
       https://worker.[your-cloudflare-domain].workers.dev/assets/test.txt
Put test.txt successfully!%

That looks good, let's see if we can GET it.

$ curl -H "x-cachier-api-key: [your-api-key]" \
       -X GET \
       https://worker.[your-cloudflare-domain].workers.dev/assets/test.txt
This is a test!%

Nice! Now, let's delete it before we carry on.

$ curl -H "x-cachier-api-key: [your-api-key]" \
       -X DELETE \
       https://worker.[your-cloudflare-domain].workers.dev/assets/test.txt
Deleted!%

Congratulations, you now have a fully functional personal CDN!

Caching and retrieving Nx tasks

This next bit is the most technical, so it requires a bit of planning. Let's have a think.

Caching Mechanism in Nx

First of all, a quick overview of how caching works in Nx. When you run a cacheable task (as defined in nx.json), Nx will create a hash for that task. The hash includes various important characteristics of the code and of the environment:

  • the code itself with all its imports
  • the operation and the flags, e.g. npx nx test my-package --withFlag
  • the node version

So it takes all of those and creates a unique hash - a 20-digit number - that identifies that combination of characteristics. When the task is run, Nx takes the console output of that task along with any output assets and saves them in .nx/cache, under a folder whose name is the hash. You end up with something like this:

- 12345678901234567890
  - outputs
    some-asset.js
  code <-- A text file containing just the exit code, i.e. 0 or 1
  source <-- A hash. Not sure what it does...
  terminalOutput <-- The terminal output from the task
12345678901234567890.commit <-- Not sure what this is for, but it's necessary and contains a boolean

There are other files there too, but I've done the hard work for you and have found that these are the only files you need for a given task.

Let's look at the caching first. Our plan is to do this:

  1. When a task is run, request any cached files from the remote cache, and give them to Nx if they exist.
  2. Let Nx run the task as normal.
  3. After the task has finished, push any new cache files to the remote cache so that we can draw on them the next time we run the task.

So how do we tap into the task-running process? The simplest and most integrated way is to create a custom task runner. Although the official documentation on this is not that great, I managed to dig in and find the right solution.

First of all, we're going to create a new workspace package to hold the source code for the custom task runner, and to build it into a commonjs format, which is required by the Nx task runner.

Custom Task Runner Development

Create the following folders and files.

- packages
    - task-runner
        - src
            - index.ts
        package.json
        tsconfig.json

The package.json should have these contents.

{
  "name": "task-runner",
  "scripts": {
    "build": "tsc --project ./tsconfig.json"
  }
}

And the tsconfig.json:

{
  "compilerOptions": {
    "outDir": "dist",
    "rootDir": ".",
    "module": "commonjs",
    "skipLibCheck": true,
    "target": "es5"
  },
  "include": ["src/index.ts"],
}

Finally, we'll create a very simple index file, which we'll augment later on. For now, we just want to get the task runner up and...running.

import defaultTaskRunner from 'nx/tasks-runners/default';

export default async function runTasks(tasks, options, ctx) {
  console.log('running custom tasks');

  return defaultTaskRunner(tasks, options, ctx);
}

Notice how we have a build step in the package.json - we're now going to add that to the prepare script in the root of the repo, so that it runs on npm install. This is critical because if we're using this in CI, we need it to be set up, as we're likely running on a fresh image.

"scripts": {
  "prepare": "nx build task-runner"
},

Now let's run npm install and see if it builds the runner.

npm install

> @cachier/source@0.0.0 prepare
> nx build task-runner

> nx run task-runner:build

> build
> tsc --project ./tsconfig.json

————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————

 >  NX   Successfully ran target build for project task-runner (1s)

You should now see a packages/task-runner/dist/src/index.js file with the contents of our custom task runner.

Now we need to register that runner with Nx. In the nx.json, add the following object at the top level:

  "tasksRunnerOptions": {
    "custom": {
      "runner": "./packages/task-runner/dist/src/index.js",
      "options": {}
    }
  },

This tells Nx that we have the option to use a custom task runner. Next, we'll create a dummy package that we'll use to test out our custom task runner. Create the following files.

- packages
    - my-package
        - src
            - index.test.js
        package.json

The package.json should have these contents.

{
  "name": "my-package",
  "scripts": {
    "test": "node src/index.test.js"
  },
  "type": "module"
}

And src/index.test.js:

import test from 'node:test';
import assert from 'node:assert';

test('1 should equal 1, even in JavaScript', () => {
  assert.equal(1, 1);
});

This is just about the simplest package you can have. But it's enough to test that our new custom task runner is hooked up correctly.

npx nx test my-package --runner=custom

Running that, you should see the running custom tasks log as the first line. If you do, then we're all set to go on to the last step!

Remote Cache Implementation

The final step is to build out the custom task runner, utilising the CDN that we built earlier. We're going to hook into the Nx task runner by adding a remoteCache object to the options that are passed to the defaultTaskRunner. This remoteCache is a class that implements retrieve and store methods for us to pull and push from the remote.

Each of these methods takes two arguments:

  1. hash - the task hash described above
  2. cacheDirectory - the directory where Nx stores its local cache (by default .nx/cache); the full contract is sketched below.
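
Put together, the contract we're implementing looks like this (a sketch of the RemoteCache type that we import from nx/src/tasks-runner/default-tasks-runner in the file below):

// Sketch of the RemoteCache contract our class fulfils.
interface RemoteCache {
  // Return true if the entry for `hash` was downloaded into `cacheDirectory`,
  // so Nx can replay it instead of running the task.
  retrieve(hash: string, cacheDirectory: string): Promise<boolean>;

  // Return true if the local entry for `hash` was uploaded successfully.
  store(hash: string, cacheDirectory: string): Promise<boolean>;
}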

Let's implement this custom remote cache class now. I'm going to do this in the same file as the customTaskRunner for simplicity, and I'm not going to do any refactoring because I wanted to lay out clearly exactly what's going on here. So, without further ado, here's the full packages/task-runner/src/index.ts file:

Note: remember to swap out [your-cloudflare-domain] with your own one.

import { RemoteCache } from 'nx/src/tasks-runner/default-tasks-runner';
import defaultTaskRunner from 'nx/tasks-runners/default';
import { join, relative, resolve } from 'node:path';
import { existsSync, mkdirSync, readdirSync, readFileSync, writeFileSync } from 'node:fs';

export default async function customTaskRunner(tasks, options, ctx) {
  options.remoteCache = new CustomRemoteCache();

  return defaultTaskRunner(tasks, options, ctx);
}

class CustomRemoteCache implements RemoteCache {
  retrieved = false;

  async retrieve(hash: string, cacheDirectory: string): Promise<boolean> {
      const objects = await fetch(`https://worker.[your-cloudflare-domain].workers.dev/list?prefix=${hash}`, {
        headers: {
          'x-cachier-api-key': process.env.CACHIER_API_KEY,
        }
      })
        .then(res => res.json())
        .then(res => (res as any).objects);

      const keys = objects.flatMap(object => object.key);

      if (!keys.length) {
        console.log('Remote cache miss.');
        return false;
      }

      console.log('Remote cache hit.');

      this.retrieved = true;

      const downloadPromises = keys.map(async (key) => {
        const fileUrl = `https://worker.[your-cloudflare-domain].workers.dev/assets/${key}`;
        const filePath = join(cacheDirectory, key);

        const fileDirectory = filePath.split('/').slice(0, -1).join('/');

        if (!existsSync(fileDirectory)) {
          mkdirSync(fileDirectory, { recursive: true });
        }

        const response = await fetch(fileUrl, {
          headers: {
            'x-cachier-api-key': process.env.CACHIER_API_KEY,
          }
        });

        if (!response.ok) {
          throw new Error(`Failed to download ${fileUrl}`);
        }

        const buffer = await response.arrayBuffer();

        writeFileSync(filePath, Buffer.from(buffer));
      });

      await Promise.all(downloadPromises);

      return true;
  }

  async store(hash: string, cacheDirectory: string): Promise<boolean> {
    if (this.retrieved) {
      console.log('\nSkipping store as task was retrieved from remote cache.');
      return false;
    }

    const directoryPath = join(cacheDirectory, hash);
    const commitFilePath = join(cacheDirectory, `${hash}.commit`);

    try {
      const filenames = [...this.getAllFilenames(directoryPath), commitFilePath];

      for (const file of filenames) {
        const key = relative(cacheDirectory, file);

        await fetch(`https://worker.[your-cloudflare-domain].workers.dev/assets/${key}`, {
          headers: {
            'x-cachier-api-key': process.env.CACHIER_API_KEY,
          },
          method: 'POST',
          body: readFileSync(file), // upload the raw file contents
        });
      }

      console.log('\nCache pushed to CDN.');

      return true;
    } catch (error) {
      console.error('Error in CustomRemoteCache::store', error);
      return false;
    }
  }

  private getAllFilenames(dir: string): string[] {
    const dirents = readdirSync(dir, { withFileTypes: true });

    const files = dirents.map((dirent) => {
      const res = resolve(dir, dirent.name);
      return dirent.isDirectory() ? this.getAllFilenames(res) : res;
    });

    return [...files].flat();
  }
}
Enter fullscreen mode Exit fullscreen mode

Let's break that down section-by-section.

class CustomRemoteCache implements RemoteCache {

The class itself implements RemoteCache from Nx; this is how we can easily hook into the task runner lifecycle. It ensures that we can override the Nx caching mechanism by returning true from the retrieve method.

We'll break the retrieve method down into three sections.

  1. Check the remote CDN for a cached version of the current hash
  const objects = await fetch(`https://worker.[your-cloudflare-domain].workers.dev/list?prefix=${hash}`, {
    headers: {
      'x-cachier-api-key': process.env.CACHIER_API_KEY,
    }
  })
    .then(res => res.json())
    .then(res => res.objects);

  const keys = objects.flatMap(object => object.key);

  if (!keys.length) {
    console.log('Remote cache miss.');
    return false;
  }

  console.log('Remote cache hit.');

  this.retrieved = true;

We're using the list endpoint to check if there are any cached objects that match this hash. This will return an object that contains an array of any matches. The keys here are effectively pathnames to the assets, e.g. 123456.commit or 123456/terminalOutput. If there are any keys, we've hit the cache. We'll set the this.retrieved flag to true, so that we can avoid pushing the same thing back to the cache in the store method, which runs later.
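
To make the key-to-path mapping concrete before we look at the download code, here's a tiny illustrative example (the hash and repo path are made up):

// Illustrative only: how a key returned by /list maps onto the local cache path.
import { join } from 'node:path';

const cacheDirectory = '/repo/.nx/cache';          // passed in by Nx
const key = '12345678901234567890/terminalOutput'; // hypothetical key from /list
const filePath = join(cacheDirectory, key);
// => '/repo/.nx/cache/12345678901234567890/terminalOutput'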

const downloadPromises = keys.map(async (key) => {
    const fileUrl = `https://worker.[your-cloudflare-domain].workers.dev/assets/${key}`;
    const filePath = join(cacheDirectory, key);

    const fileDirectory = filePath.split('/').slice(0, -1).join('/');

    if (!existsSync(fileDirectory)) {
      mkdirSync(fileDirectory, { recursive: true });
    }

    const response = await fetch(fileUrl, {
      headers: {
        'x-cachier-api-key': process.env.CACHIER_API_KEY,
      }
    });

    if (!response.ok) {
      throw new Error(`Failed to download ${fileUrl}`);
    }

    const buffer = await response.arrayBuffer();

    writeFileSync(filePath, Buffer.from(buffer));
});

await Promise.all(downloadPromises);

return true;

Now that we have the keys that we need to download, here we're fetching those files and writing them to the correct path.

Once the task has been run by Nx, the store method is called. In this method, we're first checking to see whether or not we've already retrieved this cache. If we have, then we don't need to push it back to the CDN.

Then we're going to get all the filenames (including their paths), for the files in the /${hash} folder, and also the ${hash}.commit file, then grab those files and push them to the remote, keeping the same file structure.

async store(hash: string, cacheDirectory: string): Promise<boolean> {
    if (this.retrieved) {
      console.log('\nSkipping store as task was retrieved from remote cache.');
      return false;
    }

    const directoryPath = join(cacheDirectory, hash);
    const commitFilePath = join(cacheDirectory, `${hash}.commit`);

    try {
      const filenames = [...this.getAllFilenames(directoryPath), commitFilePath];

      for (const file of filenames) {
        const key = relative(cacheDirectory, file);

        await fetch(`https://worker.[your-cloudflare-domain].workers.dev/assets/${key}`, {
          headers: {
            'x-cachier-api-key': process.env.CACHIER_API_KEY,
          },
          method: 'POST',
          body: readFileSync(file), // upload the raw file contents
        });
      }

      console.log('\nCache pushed to CDN.');

      return true;
    } catch (error) {
      console.error('Error in CustomRemoteCache::store', error);
      return false;
    }
  }

Right, now that we have our task-runner code, we can build the task runner again and test it.

npx nx build task-runner

This will create the js file in packages/task-runner/dist/src/index.js (check that you have the same path for the runner in nx.json).
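
One practical note before we test: the runner reads process.env.CACHIER_API_KEY, and the key we created earlier only lives in the root .env file, which Node doesn't load automatically. If your shell doesn't already export it (and you're not using something like dotenv), a tiny loader like the sketch below is one option; it's shown self-contained here, but in the task runner file you'd reuse the existing node:fs imports. This is my own addition, deliberately simplistic (no quoting or escaping support), and not part of the original setup.

// Minimal sketch: populate process.env from the root .env file (no extra dependencies).
import { existsSync, readFileSync } from 'node:fs';

function loadDotEnv(path = '.env'): void {
  if (!existsSync(path)) return;

  for (const line of readFileSync(path, 'utf8').split('\n')) {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)=(.*)$/);
    if (match && process.env[match[1]] === undefined) {
      process.env[match[1]] = match[2].trim();
    }
  }
}

loadDotEnv();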

Testing the Custom Task Runner

We're good to go, let's test it out. Our my-package already has a test task, but I want to confirm that tasks producing assets will also work here. So let's create a new task in packages/my-package/package.json.

"build": "mkdir -p dist && cp src/index.js dist/index.js",

And create the packages/my-package/src/index.js with some arbitrary contents.

console.log('my package');

Finally, we can test to see if it's all working.

$ npx nx build my-package --runner=custom

This is the first run, so there will be no cache, but you should see a message confirming that we've pushed the cache to the CDN.

$ npx nx build my-package --runner=custom
Remote cache miss.

> nx run my-package:build

> build
> cp src/index.js dist/index.js


Cache pushed to CDN.

Now, if we were to run the task again, we would just hit the local cache, which is fine, but it's not what we're testing here. So let's clear the local cache.

$ npx nx reset

And we'll run the task again. The moment of truth...

$ npx nx build my-package --runner=custom

This time, you should see a cache hit!

$ npx nx build my-package --runner=custom
Remote cache hit.

> nx run my-package:build

> build
> cp src/index.js dist/index.js


Skipping store as task was retrieved from remote cache.

If you've made it this far, congratulations! You now have a CDN that you're using as a remote cache for your Nx tasks.

Conclusion

By caching Nx tasks remotely, they can now be shared between your local machine, those of the other developers on your team, and your CI runners. One way I like to use a setup like this is to ensure that all my tests run in a pre-push hook; when the same tasks then run in CI, they draw from the remote cache, so CI time is kept to a minimum.

If you liked this and want to pursue it further, the next step would probably be to create a GitHub action that will use this runner in CI.
