We moved our Nx monorepo from CircleCI to GitHub Actions on self-hosted AWS runners, tightened caching, and brought the full CI run from ~33 minutes down to ~15 minutes while owning cache and runner behavior end to end.
At Wecasa, our CI used to run on CircleCI. We wanted to consolidate on GitHub Actions and run jobs on self-hosted runners in AWS, with specs we control and costs we can optimize.
Our repo is a large Nx monorepo: each package is an app or library (SPAs, React Native / Expo mobile apps, etc.). Every push should run lint and tests through Nx, but we also needed fast feedback, reliable caching, and behavior that matches how developers work (branch pushes, master always green).
This post is a practical walkthrough: how we sized runners, provisioned them with Terraform, built a composite setup action for Yarn + Nx cache, parallelized test and lint, and tuned Jest to avoid OOMs on larger runners.
1) Runner specs: CircleCI vs our AWS runners
Before changing workflows, we analysed our existing setup so we could size the new runners against it.

CircleCI runners (before)
- Lint (medium runner): 2 vCPUs, 7.5 GB RAM
- Tests (large runner): 4 vCPUs, 15 GB RAM
AWS self-hosted runners (after)
- Lint (medium runner): 4 vCPUs, 7 GB RAM
- Tests (large runner): 8 vCPUs, 15 GB RAM
The test job got twice the CPU on the large runner, which matters a lot for parallel Nx tasks and Jest workers. Lint stayed on a smaller machine with more cores than our old Circle medium tier, which helped wall-clock time for ESLint/TypeScript-style work.
2) Infrastructure: Terraform-provisioned GitHub Actions runners
We created the runners with Terraform and registered them as labels GitHub Actions can target (medium-runner, large-runner). That gives us repeatable infra, clear ownership, and an obvious place to evolve instance types or scaling later.
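As a rough illustration of the shape of that Terraform (this is a minimal sketch, not our actual module: the AMI, instance types, variables, and registration script wiring are all assumptions — the real GitHub runner `config.sh` does accept `--url`, `--token`, `--labels`, and `--unattended`):

```hcl
# Sketch: one EC2 instance per runner label; workflows target the label via runs-on.
resource "aws_instance" "ci_runner" {
  for_each = {
    "medium-runner" = "c5.xlarge"  # illustrative: 4 vCPUs (lint)
    "large-runner"  = "c5.2xlarge" # illustrative: 8 vCPUs (test)
  }

  ami           = var.runner_ami_id # assumed: AMI with the Actions runner preinstalled
  instance_type = each.value

  # Register the runner with the label that jobs reference in runs-on
  user_data = <<-EOT
    #!/bin/bash
    cd /opt/actions-runner
    ./config.sh --url "https://github.com/${var.github_org}" \
      --token "${var.runner_registration_token}" \
      --labels "${each.key}" --unattended
    ./run.sh &
  EOT

  tags = { Name = "gha-${each.key}" }
}
```

In practice a dedicated runner module with autoscaling is a better fit than raw instances, but the idea is the same: the label is the contract between Terraform and the workflows.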
3) One composite action: Yarn, install, and Nx cache
Repeated setup (Corepack, Yarn cache, yarn install, env files for mobile, Nx restore) belonged in one composite action so every job stays consistent and DRY.
Rough responsibilities:
- Validate the cache target (`test` vs `lint`) so we never restore the wrong Nx cache key by mistake.
- Enable Corepack and resolve Yarn's cache directory.
- Restore Yarn's global cache (`actions/cache@v4`) keyed on `yarn.lock`.
- Run `yarn install --immutable`.
- Restore the Nx cache only (`actions/cache/restore@v4`); we save the Nx cache in each job's workflow so we don't double-save the same key from the composite action.
Splitting restore (composite) vs save (workflow) avoids competing writes and duplicate cache saves for the same key.
```yaml
name: "Setup and run CI"
description: "Setup Yarn, restore Yarn cache and Nx cache"

inputs:
  nx-cache:
    description: "Which Nx cache to restore (test or lint)"
    required: true

runs:
  using: composite
  steps:
    - name: Validate nx-cache input
      run: |
        if [ "${{ inputs.nx-cache }}" != 'test' ] && [ "${{ inputs.nx-cache }}" != 'lint' ]; then
          echo "::error::nx-cache must be 'test' or 'lint', got: ${{ inputs.nx-cache }}"
          exit 1
        fi
      shell: bash

    - name: Setup Yarn (1/4) -> Enable Corepack
      run: corepack enable
      shell: bash

    - name: Setup Yarn (2/4) -> Get Yarn cache directory path
      id: yarn-cache-dir-path
      run: echo "dir=$(yarn config get cacheFolder)" >> "$GITHUB_OUTPUT"
      shell: bash

    - name: Setup Yarn (3/4) -> Retrieve global cache
      uses: actions/cache@v4
      id: yarn-cache
      with:
        path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
        key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
        restore-keys: |
          ${{ runner.os }}-yarn-

    - name: Setup Yarn (4/4) -> Install packages and copy env files
      run: yarn install --immutable
      shell: bash

    # Restore only; the save happens in each workflow job (test/lint)
    # to avoid a duplicate save for the same key
    - name: Restore Nx cache
      uses: actions/cache/restore@v4
      with:
        path: .nx
        key: nx-${{ runner.os }}-${{ inputs.nx-cache }}-${{ hashFiles('nx.json', 'yarn.lock') }}-${{ hashFiles('**/project.json') }}
        restore-keys: |
          nx-${{ runner.os }}-${{ inputs.nx-cache }}-${{ hashFiles('nx.json', 'yarn.lock') }}-
```
4) Workflow: concurrency, ECR Public, parallel test + lint
Triggers and concurrency
We run CI on every push and use a concurrency group so that a new push to the same branch cancels any in-flight run. The exception is master, where we always let CI complete (release / mainline stability).
```yaml
name: CI • Test and Lint

on:
  push:
    branches:
      - "**" # all branches

concurrency:
  group: test-and-lint-${{ github.ref }}
  # false on master, so master runs always complete
  cancel-in-progress: ${{ github.ref != 'refs/heads/master' }}
```
Docker image without Docker Hub rate limits
Jobs run in a Node container pulled from Amazon ECR Public (same family as docker.io/library/node), after a small job that logs in via AWS.
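A sketch of what that login job can look like (secret names are assumptions; our real job may differ). On self-hosted runners the `docker login` persists on the host's Docker daemon, which is what lets later container jobs pull without rate limits; note that ECR Public authentication always goes through `us-east-1`:

```yaml
ecr-login:
  name: ECR Public login
  runs-on: medium-runner
  steps:
    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-1 # ECR Public auth endpoint lives in us-east-1
    - name: Login to Amazon ECR Public
      uses: aws-actions/amazon-ecr-login@v2
      with:
        registry-type: public
```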
Parallel jobs
After ecr-login, test and lint run in parallel on large-runner and medium-runner respectively. Each calls the composite action with nx-cache: test or nx-cache: lint, runs yarn ci:test / yarn ci:lint, and always saves the matching Nx cache.
Jest reports when Nx skips work
With Nx, if nothing relevant changed in a project, its tests may not run at all, so no JUnit XML is produced. We still want the job summary to be meaningful when reports do exist. After tests, we check for reports/**/*.xml (produced by jest-junit); only if some are found do we run dorny/test-reporter, so the step doesn't fail when there's nothing to publish.
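For context, a minimal Jest reporter configuration that writes JUnit XML under `reports/` looks roughly like this (a sketch: our actual config carries more options, and `outputDirectory` / `outputName` are real jest-junit options but the values here are illustrative):

```typescript
// jest.config.ts (minimal sketch)
import type { Config } from "jest";

const config: Config = {
  reporters: [
    "default",
    // jest-junit emits JUnit XML that dorny/test-reporter can parse
    ["jest-junit", { outputDirectory: "reports", outputName: "junit.xml" }],
  ],
};

export default config;
```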
Test job:
```yaml
test:
  needs: ecr-login
  name: Test
  runs-on: large-runner
  env:
    TZ: Europe/Paris
  container:
    # Use ECR Public Gallery to avoid Docker Hub rate limits (same image as docker.io/library/node)
    # @see https://gallery.ecr.aws/docker/library/node
    image: public.ecr.aws/docker/library/node:20.17.0
  steps:
    - name: Checkout
      uses: actions/checkout@v4
      with:
        fetch-depth: 0

    - name: Setup
      uses: ./.github/actions/setup
      with:
        nx-cache: test

    - name: Run test
      id: run-test
      run: yarn ci:test

    - name: Save Nx cache (test)
      if: always()
      uses: actions/cache/save@v4
      with:
        path: .nx
        key: nx-${{ runner.os }}-test-${{ hashFiles('nx.json', 'yarn.lock') }}-${{ hashFiles('**/project.json') }}

    # Reports may be absent when the Nx cache was hit and nothing changed
    - name: Check for test reports
      id: check_reports
      if: always()
      run: |
        if [ -d reports ] && [ -n "$(find reports -name '*.xml' -print -quit 2>/dev/null)" ]; then
          echo "exist=true" >> "$GITHUB_OUTPUT"
        else
          echo "exist=false" >> "$GITHUB_OUTPUT"
        fi

    - name: Publish test results to Job Summary
      if: always() && steps.check_reports.outputs.exist == 'true'
      uses: dorny/test-reporter@v2
      with:
        name: Jest Tests
        path: reports/*.xml
        reporter: jest-junit
```
Lint job:
```yaml
lint:
  name: Lint
  needs: ecr-login
  runs-on: medium-runner
  env:
    TZ: Europe/Paris
  container:
    # Use ECR Public Gallery to avoid Docker Hub rate limits (same image as docker.io/library/node)
    # @see https://gallery.ecr.aws/docker/library/node
    image: public.ecr.aws/docker/library/node:20.17.0
  steps:
    - name: Checkout
      uses: actions/checkout@v4
      with:
        fetch-depth: 0

    - name: Setup
      uses: ./.github/actions/setup
      with:
        nx-cache: lint

    - name: Run lint
      id: run-lint
      run: yarn ci:lint

    - name: Save Nx cache (lint)
      if: always()
      uses: actions/cache/save@v4
      with:
        path: .nx
        key: nx-${{ runner.os }}-lint-${{ hashFiles('nx.json', 'yarn.lock') }}-${{ hashFiles('**/project.json') }}
```
5) Jest, parallelism, and Out Of Memory (OOM): capping maxWorkers
On Jest 29 we hit OOM errors when launching tests on the larger runners: more cores mean more default Jest workers, and memory didn't scale to match.
We added a small TypeScript helper, used when invoking the CI's test job (and lint where relevant), so that total parallelism stays bounded: we assume up to three test streams run in parallel and divide the CPU count so each Jest process gets a sane --maxWorkers value.
```typescript
import os from "node:os";

const CORES_COUNT = os.cpus().length;

// Up to 3 Nx test tasks may run in parallel on the same runner
const PARALLEL_TEST_COUNT = 3;

// Split cores across streams; fall back to 1 because Jest needs at least one worker
const MAX_WORKERS_COUNT_PER_TEST =
  Math.floor(CORES_COUNT / PARALLEL_TEST_COUNT) || 1;
```
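To make the arithmetic easy to check, the same cap can be written as a pure function (the function form is our illustration here, not the shipped helper):

```typescript
import os from "node:os";

// Cap Jest workers so parallel Nx test tasks don't oversubscribe the CPU.
export function maxWorkersPerTest(
  cores: number = os.cpus().length,
  parallelTests: number = 3,
): number {
  // floor() can yield 0 on small machines; Jest needs at least 1 worker
  return Math.floor(cores / parallelTests) || 1;
}

// On the 8-vCPU large runner: floor(8 / 3) = 2 workers per Jest process
console.log(maxWorkersPerTest(8));
// On the 4-vCPU medium runner: floor(4 / 3) = 1
console.log(maxWorkersPerTest(4));
```

So the doubled core count on the large runner translates into two workers per Jest process instead of one, without the three parallel streams together exceeding the machine.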
We expect to revisit this after moving to Vitest or Jest 30, where defaults and memory behavior may change.
Conclusion
Migrating our CI was not only swapping CircleCI for GitHub Actions: it meant aligning runner size, caching, and Jest concurrency with how Nx actually schedules work. We now own setup and cache behavior via a composite action and Terraform-managed runners, which reduced costs and brought the CI's wall-clock time from ~33 minutes down to ~15 minutes.
Next iteration: keep tuning workers and task graphs as we modernize the test runner.
If you like monorepo and platform engineering notes like this, follow Wecasa and get in touch if you want to build this kind of stack with us.