Your GitHub Actions workflow takes 45 minutes. Your team is frustrated. Every push triggers a full rebuild of your entire monorepo. And you're burning through your monthly minutes faster than you can say "billing alert."
Sound familiar? You're not alone. As codebases grow and monorepos become the norm, CI/CD pipelines that worked fine for a single package suddenly become bottlenecks. But GitHub Actions has evolved significantly, and most developers aren't using its full potential.
This guide covers everything you need to know about GitHub Actions in 2026: from optimizing monorepo workflows to setting up self-hosted runners, from advanced caching strategies to cost management. Let's transform your CI/CD from a bottleneck into a competitive advantage.
The Monorepo Challenge: Why Your Builds Are Slow
Monorepos are everywhere now. Turborepo, Nx, Lerna, Rush—the tooling has matured. But CI/CD hasn't kept pace for most teams.
The Problem
```yaml
# The naive approach: build everything on every push
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      - run: npm test
```
This workflow has three critical issues:
- No change detection: Pushing to `packages/utils` rebuilds `packages/frontend`, `packages/backend`, and everything else
- No parallelization: Tests run sequentially instead of in parallel
- No caching: Every run starts from scratch
Let's fix all three.
Change Detection: Only Build What Changed
The key insight: in a monorepo, most commits only affect a subset of packages. We should only build and test what actually changed.
Using paths-filter
```yaml
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      frontend: ${{ steps.filter.outputs.frontend }}
      backend: ${{ steps.filter.outputs.backend }}
      shared: ${{ steps.filter.outputs.shared }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            frontend:
              - 'packages/frontend/**'
              - 'packages/shared/**'
            backend:
              - 'packages/backend/**'
              - 'packages/shared/**'
            shared:
              - 'packages/shared/**'

  frontend:
    needs: changes
    if: ${{ needs.changes.outputs.frontend == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build --workspace=packages/frontend
      - run: npm test --workspace=packages/frontend

  backend:
    needs: changes
    if: ${{ needs.changes.outputs.backend == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build --workspace=packages/backend
      - run: npm test --workspace=packages/backend
```
Result: If you only change packages/frontend/src/Button.tsx, only the frontend job runs. Backend is skipped entirely.
Using Turborepo's Built-in Detection
If you're using Turborepo, it has built-in change detection:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Required for change detection
      - uses: pnpm/action-setup@v3
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'pnpm'
      - run: pnpm install
      - run: pnpm turbo build --filter='...[origin/main]'
      - run: pnpm turbo test --filter='...[origin/main]'
```
The `--filter='...[origin/main]'` syntax tells Turborepo to only run tasks for packages that changed since `origin/main`.
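One hedged refinement: on pull requests the base branch is not always `main`, so you can compare against whatever the PR targets. The sketch below assumes the same full-history checkout as above; `github.base_ref` is only populated on `pull_request` events, so the shell default falls back to `main` for pushes.

```yaml
- run: pnpm turbo build --filter="...[origin/${BASE_REF:-main}]"
  env:
    # github.base_ref is empty on push events; the shell default (:-main) covers that case
    BASE_REF: ${{ github.base_ref }}
```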
Advanced Caching: Beyond the Basics
Caching is where most teams leave performance on the table. Let's go beyond actions/cache.
Layer 1: Package Manager Cache
This is table stakes, but make sure you're doing it right:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: 22
    cache: 'pnpm' # or 'npm' or 'yarn'
```
This caches the package manager's global download cache (the pnpm store or npm cache), keyed by your lockfile hash, so installs stay fast even though `node_modules` itself isn't stored.
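If you need more control than setup-node's built-in caching (a custom key, extra restore keys), a sketch of caching the pnpm store directly looks like this; it assumes pnpm is already installed earlier in the job.

```yaml
- name: Resolve pnpm store path
  id: pnpm-store
  shell: bash
  run: echo "path=$(pnpm store path)" >> "$GITHUB_OUTPUT"
- uses: actions/cache@v4
  with:
    path: ${{ steps.pnpm-store.outputs.path }}
    key: pnpm-store-${{ runner.os }}-${{ hashFiles('**/pnpm-lock.yaml') }}
    restore-keys: |
      pnpm-store-${{ runner.os }}-
```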
Layer 2: Build Cache with Turborepo
Turborepo's remote caching is a game-changer:
```yaml
- run: pnpm turbo build --filter='...[origin/main]'
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ vars.TURBO_TEAM }}
```
With remote caching enabled, if a teammate already built packages/utils with the same inputs, you'll get a cache hit—even on a fresh CI machine.
Layer 3: Custom Caching for Heavy Dependencies
Some dependencies take forever to install. Cache them separately:
```yaml
- name: Cache Playwright browsers
  id: cache-playwright # the id is needed so the if: below can read cache-hit
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: playwright-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
- name: Install Playwright
  if: steps.cache-playwright.outputs.cache-hit != 'true'
  run: npx playwright install --with-deps
```
Layer 4: Docker Layer Caching
If you're building Docker images:
```yaml
- uses: docker/setup-buildx-action@v3 # the gha cache backend needs a buildx (docker-container) builder
- uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
```
The `type=gha` backend stores Docker layers in the GitHub Actions cache. This can cut Docker build times by 80%+.
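The Actions cache backend shares the repo's 10GB cache quota, so for large images one alternative (sketched here; the ghcr.io ref is hypothetical) is to push the build cache to your container registry instead:

```yaml
- uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
    # buildx registry cache backend; the :buildcache tag is just a convention
    cache-from: type=registry,ref=ghcr.io/your-org/myapp:buildcache
    cache-to: type=registry,ref=ghcr.io/your-org/myapp:buildcache,mode=max
```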
Matrix Builds: Parallelize Everything
Matrix builds let you run the same job with different configurations in parallel.
Basic Matrix
```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }} # pick the OS from the matrix
    strategy:
      matrix:
        node: [18, 20, 22]
        os: [ubuntu-latest, windows-latest, macos-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test
```
This creates 9 parallel jobs (3 Node versions × 3 OS).
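You rarely need the full cross product. The matrix supports `exclude` and `include` to trim or extend combinations, which is an easy way to keep expensive macOS jobs to a minimum; a sketch:

```yaml
strategy:
  matrix:
    node: [18, 20, 22]
    os: [ubuntu-latest, windows-latest, macos-latest]
    exclude:
      # skip older Node versions on the expensive macOS runners
      - os: macos-latest
        node: 18
      - os: macos-latest
        node: 20
    include:
      # one extra combination on Linux only
      - os: ubuntu-latest
        node: 23
```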
Dynamic Matrix for Monorepos
Generate your matrix dynamically based on what changed:
```yaml
jobs:
  detect:
    runs-on: ubuntu-latest
    outputs:
      packages: ${{ steps.detect.outputs.packages }}
    steps:
      - uses: actions/checkout@v4
      - id: detect
        run: |
          packages=$(ls -d packages/*/ | jq -R -s -c 'split("\n")[:-1]')
          echo "packages=$packages" >> $GITHUB_OUTPUT

  test:
    needs: detect
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package: ${{ fromJson(needs.detect.outputs.packages) }}
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test --workspace=${{ matrix.package }}
```
Now each package tests in its own parallel job.
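If your monorepo has dozens of packages, the dynamic matrix can swallow every available runner at once. `max-parallel` caps how many matrix jobs run simultaneously; a sketch of the tweak:

```yaml
strategy:
  max-parallel: 4 # at most 4 package test jobs at a time
  matrix:
    package: ${{ fromJson(needs.detect.outputs.packages) }}
```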
Fail-Fast vs. Complete Matrix
By default, if one matrix job fails, all others are cancelled. Sometimes you want them all to complete:
```yaml
strategy:
  fail-fast: false # Continue other jobs even if one fails
  matrix:
    node: [18, 20, 22]
```
Self-Hosted Runners: When and How
GitHub-hosted runners are convenient but have limitations:
- 7GB RAM, 2 CPUs (standard)
- No persistent storage
- Per-minute billing adds up
- No GPU access
Self-hosted runners solve all of these.
When to Use Self-Hosted Runners
Use them when:
- You need more resources (RAM, CPU, GPU)
- You have long-running jobs that are expensive on hosted runners
- You need access to on-premise resources
- You're doing ML workloads that need GPUs
Don't use them when:
- You're a small team with simple builds
- You can't maintain the infrastructure
- Security isolation is paramount
Setting Up a Self-Hosted Runner
Create a runner in GitHub: Settings → Actions → Runners → New self-hosted runner
On your server:
```bash
# Download the runner
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.320.0/actions-runner-linux-x64.tar.gz
tar xzf actions-runner-linux-x64.tar.gz

# Configure
./config.sh --url https://github.com/your-org/your-repo \
  --token YOUR_TOKEN \
  --labels gpu,linux,x64

# Run as a service
sudo ./svc.sh install
sudo ./svc.sh start
```
Use it in your workflow:
```yaml
jobs:
  ml-training:
    runs-on: [self-hosted, gpu, linux]
    steps:
      - uses: actions/checkout@v4
      - run: python train.py
```
Scaling Self-Hosted Runners with Actions Runner Controller (ARC)
For Kubernetes environments, ARC auto-scales runners based on demand:
```yaml
# values.yaml for ARC
controllerServiceAccount:
  namespace: arc-systems
  name: arc-controller
githubConfigUrl: "https://github.com/your-org"
githubConfigSecret: github-config-secret
maxRunners: 10
minRunners: 1
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        resources:
          requests:
            cpu: 2
            memory: 4Gi
```
Runners spin up when jobs are queued and spin down when idle.
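Jobs target an ARC runner scale set by its installation name rather than the usual `self-hosted` labels. Assuming the Helm release above was installed as `arc-runner-set` (a hypothetical name), a workflow would use it like this:

```yaml
jobs:
  build:
    # the label is the Helm installation name of the runner scale set
    runs-on: arc-runner-set
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```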
Cost Optimization Strategies
GitHub Actions billing can surprise you. Here's how to keep costs under control.
1. Use Ubuntu Over macOS/Windows
| Runner | Cost per minute |
|---|---|
| ubuntu-latest | $0.008 |
| windows-latest | $0.016 (2x) |
| macos-latest | $0.08 (10x) |
Only use macOS for iOS builds or macOS-specific tests.
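A common pattern is to keep the macOS job out of every pull request and run it only where it matters. A hedged sketch that runs macOS tests only on pushes to main:

```yaml
jobs:
  macos-tests:
    # 10x per-minute cost, so run only on main instead of on every PR
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test
```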
2. Cancel Redundant Runs
When you push multiple commits quickly, cancel the old runs:
```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```
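One caveat: you usually don't want to cancel an in-progress deployment on main. `cancel-in-progress` accepts an expression, so a sketch that only cancels PR and branch builds looks like this:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  # cancel superseded runs everywhere except the default branch
  cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
```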
3. Use Larger Runners Strategically
GitHub now offers larger runners (4x, 8x, 16x). Counterintuitively, they can be cheaper:
```yaml
jobs:
  build:
    # 8 cores instead of 2; the label is whatever name you give the larger runner
    runs-on: ubuntu-latest-8-cores
```
The per-minute rate scales roughly with core count, so you break even when the speedup matches the size multiplier: a 20-minute build on 2 cores that drops to 5 minutes on 8 cores costs about the same, anything faster saves money outright, and either way your team waits far less per run.
4. Timeout Your Jobs
Prevent runaway jobs from burning minutes:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 30 # Kill after 30 minutes
```
5. Schedule Non-Urgent Jobs
Run expensive jobs during off-peak hours:
```yaml
on:
  schedule:
    - cron: '0 2 * * *' # 2 AM UTC daily
```
Advanced Patterns
Reusable Workflows
Don't repeat yourself across repositories:
```yaml
# .github/workflows/reusable-test.yml
name: Reusable Test Workflow
on:
  workflow_call:
    inputs:
      node-version:
        required: true
        type: string
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ inputs.node-version }}
      - run: npm ci
      - run: npm test
```
Use it from another workflow:
```yaml
jobs:
  call-reusable:
    uses: ./.github/workflows/reusable-test.yml
    with:
      node-version: '22'
```
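If the reusable workflow needs repository secrets, the caller has to pass them along; when you're comfortable forwarding everything, `secrets: inherit` is the shortest option (sketch):

```yaml
jobs:
  call-reusable:
    uses: ./.github/workflows/reusable-test.yml
    with:
      node-version: '22'
    secrets: inherit # forward all of the caller's secrets to the reusable workflow
```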
Composite Actions
Bundle multiple steps into a reusable action:
```yaml
# .github/actions/setup-project/action.yml
name: 'Setup Project'
description: 'Setup Node.js, install deps, and cache'
runs:
  using: 'composite'
  steps:
    - uses: pnpm/action-setup@v3
      with:
        version: 9
    - uses: actions/setup-node@v4
      with:
        node-version: 22
        cache: 'pnpm'
    - run: pnpm install --frozen-lockfile
      shell: bash
```
Use it:
```yaml
steps:
  - uses: actions/checkout@v4
  - uses: ./.github/actions/setup-project
  - run: pnpm build
```
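Composite actions can take inputs too, so one shared setup action can serve packages with different toolchain needs; a minimal sketch with a configurable Node version (the input name is just an example):

```yaml
# .github/actions/setup-project/action.yml
name: 'Setup Project'
description: 'Setup Node.js with a configurable version'
inputs:
  node-version:
    description: 'Node.js version to install'
    default: '22'
runs:
  using: 'composite'
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
        cache: 'pnpm'
```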
Environment Protection Rules
For production deployments, require approvals:
```yaml
jobs:
  deploy-prod:
    runs-on: ubuntu-latest
    environment:
      name: production
      url: https://myapp.com
    steps:
      - run: ./deploy.sh
```
Configure the production environment in repo settings to require reviews.
OIDC for Cloud Authentication
Stop storing long-lived cloud credentials. Use OIDC:
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/GitHubActionsRole
          aws-region: us-east-1
      - run: aws s3 sync ./dist s3://my-bucket
```
No secrets stored—GitHub generates temporary credentials via OIDC.
Troubleshooting Common Issues
"Resource not accessible by integration"
Your workflow doesn't have the right permissions:
```yaml
permissions:
  contents: read
  pull-requests: write
  issues: write
```
Cache Not Being Restored
Check your cache key. Common issues:
- Lockfile not included in hash
- Different runner OS between save and restore
- Cache limit exceeded (10GB per repo)
```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
```
Matrix Jobs Time Out
If jobs hang, add explicit timeouts and debugging:
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 30 # job-level cap
    steps:
      - run: npm test
        timeout-minutes: 20 # step-level cap
        env:
          DEBUG: '*'
```
Self-Hosted Runner Goes Offline
Common causes:
- Machine rebooted but service didn't start
- Token expired (rotate every 30 days)
- Disk full from build artifacts
Set up monitoring:
```bash
# Check runner status
sudo ./svc.sh status

# View logs
sudo journalctl -u 'actions.runner.*'
```
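For the disk-full case, one hedged mitigation is a cleanup step that always runs at the end of jobs on self-hosted runners (adjust the paths and prune flags to whatever your builds actually leave behind):

```yaml
- name: Clean up build leftovers
  if: always() # run even when earlier steps failed
  run: |
    docker system prune -af --volumes || true
    rm -rf "${{ github.workspace }}/node_modules"
```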
The Complete Monorepo Workflow
Here's a production-ready workflow that combines everything:
```yaml
name: CI/CD
on:
  push:
    branches: [main]
  pull_request:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      packages: ${{ steps.filter.outputs.changes }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            frontend:
              - 'packages/frontend/**'
            backend:
              - 'packages/backend/**'
            shared:
              - 'packages/shared/**'

  build-and-test:
    needs: changes
    if: ${{ needs.changes.outputs.packages != '[]' }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        package: ${{ fromJson(needs.changes.outputs.packages) }}
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/setup-project
      - name: Build
        run: pnpm turbo build --filter=${{ matrix.package }}
        env:
          TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
          TURBO_TEAM: ${{ vars.TURBO_TEAM }}
      - name: Test
        run: pnpm turbo test --filter=${{ matrix.package }}
      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          flags: ${{ matrix.package }}

  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/setup-project
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: us-east-1
      - run: pnpm deploy
```
Conclusion: From 45 Minutes to 5 Minutes
With these techniques, you can:
- Reduce build times by 80%+ using change detection, caching, and parallelization
- Cut costs by 50%+ with smarter runner selection and concurrency controls
- Scale confidently with self-hosted runners and ARC
- Secure deployments with OIDC and environment protection
GitHub Actions has grown from a simple CI tool to a powerful automation platform. The teams that master it have a significant advantage in shipping speed and developer experience.
Start with one optimization—maybe change detection or remote caching. Measure the improvement. Then iterate. Your future self (and your team) will thank you.
Now go make your pipelines fast. 🚀
⚡ Speed Tip: Read the original post on the Pockit Blog.
Tired of slow cloud tools? Pockit.tools runs entirely in your browser. Get the Extension now for instant, zero-latency access to essential dev tools.