DEV Community

Feyisayo Lasisi
How I Cut GitHub Actions Usage in Half by Making the CI Pipeline Smarter

*A story about runaway build minutes, a blocked engineer, and a smarter approach to CI triggers*

A Slack notification landed in my feed. A CI pipeline had failed. My first instinct was the usual suspects: the Static Application Security Testing (SAST) tool flagged a vulnerability, the Software Composition Analysis (SCA) picked up a bad dependency, or the Dynamic Application Security Testing (DAST) tool caught something in the running application. I opened the GitHub Actions run to investigate.
The pipeline had not even started. The error message read:
"The job was not started because recent account payments have failed or your spending limit needs to be increased."

That was unexpected. This was an organization account with a private repository. GitHub gives a healthy allocation of free minutes per month. We should not have been anywhere near the limit.
I pulled up the GitHub Actions usage metrics. We had consumed over twice our normal monthly minutes across all repositories, and we were not even at the end of the month.

How We Got Here
A few weeks earlier, I had been working on evolving our existing CI setup into a full DevSecOps pipeline. That work introduced several new jobs, each running on its own dedicated VM, to execute security scans in parallel rather than sequentially on a single runner. Running everything on one VM would have meant pipeline runs stretching to 30 minutes or more per push, which was unacceptable.
To make things more complex, a single repository in our system could contain as many as 8 to 10 APIs. To avoid testing them one after another, I implemented a GitHub Actions matrix strategy, which spun up a separate VM per API and tested them concurrently. This dramatically reduced wall-clock time per run but multiplied the VM-minutes consumed per run by the same factor.
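A static matrix of this kind is only a few lines of workflow configuration. A minimal sketch of the per-API fan-out described above (the API names and script path are illustrative, not from the original pipeline):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      # One runner (VM) is provisioned per matrix entry, so every API
      # is tested concurrently: low wall-clock time, high VM-minutes.
      matrix:
        api: [orders, payments, users, inventory]  # illustrative names
    steps:
      - uses: actions/checkout@v4
      - run: ./build-and-test.sh "${{ matrix.api }}"  # hypothetical script
```

With 8 to 10 entries, a single push consumes roughly ten times the minutes of a single-runner build, which is exactly the multiplication effect described above.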
Multiply that across several backend repositories (each with multiple APIs) and multiple frontend repositories, and the numbers compounded fast. Every push or pull request to any branch triggered the full pipeline: every job, every VM, every API, regardless of what actually changed.
The result was that we burned through our free GitHub Actions minutes before the month was out.

The Immediate Fix
By the time I had traced the root cause, the engineer whose push triggered the failed run had already reached out. They were blocked and could not deploy.
The immediate fix was straightforward. I increased the spending limit on GitHub Actions. That unblocked the pipeline within minutes and deployments resumed. But that was a patch, not a solution.

The Fundamental Flaw
Unblocking the team gave me time to step back and examine the underlying design problem. The issue was simple but costly: the pipeline had no awareness of what actually changed.
Every push, whether it was a one-line README update, a config file tweak, or a core business logic change, triggered the full suite. Every API was built, tested, and scanned. Every VM was spun up. Every minute was consumed.
This was not just inefficient. It was architecturally blind.

Redesigning the Trigger Strategy
To fix this properly, I needed the pipeline to make intelligent decisions based on what changed, not just that something changed. The strategy I settled on had three layers:

Layer 1: Ignore Non-Functional Changes Entirely
Some changes simply do not affect application behaviour. Edits to workflow files under .github/ during development, updates to README.md, changes to documentation: none of these should consume a single build minute. GitHub Actions supports this natively via the paths-ignore keyword. When a push only touches these paths, the workflow does not start at all. Zero minutes consumed.
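In workflow terms, Layer 1 is a few lines of trigger configuration. A minimal sketch, where the exact ignored paths are assumptions based on the examples mentioned above:

```yaml
on:
  push:
    # If a push touches ONLY these paths, the workflow never starts,
    # so no runner is provisioned and no minutes are billed.
    paths-ignore:
      - '**.md'
      - 'docs/**'
      - '.github/**'
```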

Layer 2: Detect Shared Dependency Changes
Our application is built on a shared Core and Persistence layer. Every API in the repository depends on these projects. A bug introduced into Core is not isolated. It is a ripple that can break every single API downstream.
This makes Core and Persistence special. Any commit that touches these directories must trigger tests across the entire API suite, not just one API. There is no safe shortcut here.
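The shared-layer check boils down to a path-prefix test on the list of changed files. A minimal shell sketch, where the directory names `Core/` and `Persistence/` follow the article and the file list would, in CI, come from `git diff --name-only` against the base branch:

```shell
#!/bin/sh
# Returns "true" if any changed file lives under a shared layer.
# $1: newline-separated list of changed file paths.
shared_layer_changed() {
  if printf '%s\n' "$1" | grep -qE '^(Core|Persistence)/'; then
    echo "true"
  else
    echo "false"
  fi
}
```

In the real job this flag would be written as a step output, for example `echo "run-all=$flag" >> "$GITHUB_OUTPUT"`, so later jobs can branch on it.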

Layer 3: Isolate Changes to Leaf APIs
On the other end of the dependency tree are the individual API projects. These are leaf nodes. A change to one API cannot physically affect another because there is no dependency between them.
If a push only touches a single API directory, only that API needs to be built and tested.

The Implementation: A detect-changes Job
I introduced a dedicated detect-changes job as the first stage of the pipeline. Every subsequent job, whether building, scanning, or testing, depends on its output before it can proceed.
The job works in four steps. First, it checks out the full git history so it can compare the current commit against the base branch and accurately identify what changed. Second, it runs a file diff specifically against the Core and Persistence directories. If any files in those directories were modified, a flag is set that tells the rest of the pipeline to run everything. Third, it runs a separate file diff across all individual API project directories to capture which specific APIs were touched. Fourth, it uses the results of both checks to dynamically construct a JSON matrix.
The matrix is the key output. If Core or Persistence changed, the matrix contains every API in the repository. If only specific APIs changed, the matrix contains only those APIs. If nothing relevant changed, the matrix is empty and a has-changes flag is set to false, which causes all downstream jobs to skip entirely.
Every downstream job, including the build job, the SAST scan, the SCA scan, and the DAST scan, reads this matrix as its input. Each job only processes the APIs the matrix tells it to. This means the entire security scanning suite becomes scoped to the actual change, not the entire repository.
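Wired together, the detect-changes job exposes the two outputs that every downstream job consumes. A condensed sketch of that wiring (job names and the matrix-building script are illustrative, not the author's exact workflow):

```yaml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.diff.outputs.matrix }}           # JSON list of APIs
      has-changes: ${{ steps.diff.outputs.has-changes }} # "true" / "false"
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # full history, needed to diff against the base branch
      - id: diff
        run: ./ci/build-matrix.sh >> "$GITHUB_OUTPUT"  # hypothetical script

  build:
    needs: detect-changes
    # An empty matrix comes with has-changes=false, so the job is skipped.
    if: needs.detect-changes.outputs.has-changes == 'true'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        api: ${{ fromJSON(needs.detect-changes.outputs.matrix) }}
    steps:
      - run: ./build-and-test.sh "${{ matrix.api }}"
```

The SAST, SCA, and DAST jobs would follow the same `needs`/`if`/`fromJSON` pattern, which is what scopes the whole scanning suite to the actual change.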

Decision Flow
The logic the pipeline now follows on every push looks like this:
Did the push only touch ignored paths like README or workflow config files? The workflow does not start. Zero minutes consumed.
Did Core or Persistence change? The full matrix is built. Every API is tested.
Did only individual API directories change? A scoped matrix is built containing only the affected APIs.
Did no API directories change at all? The has-changes flag is set to false and all downstream jobs are skipped automatically.
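The decision flow above can be sketched as a single shell function that maps a changed-file list to the pipeline's two outputs. The directory layout (`Apis/<name>/`) and the fixed API list are illustrative assumptions:

```shell
#!/bin/sh
ALL_APIS='["orders","payments","users"]'  # illustrative; a real job would discover these

# $1: newline-separated changed file paths.
# Prints GITHUB_OUTPUT-style lines: matrix=<json> and has-changes=<bool>.
build_matrix() {
  files=$(printf '%s\n' "$1")
  if printf '%s\n' "$files" | grep -qE '^(Core|Persistence)/'; then
    # Shared layer touched: every API must run.
    echo "matrix=$ALL_APIS"
    echo "has-changes=true"
  else
    # Extract the API name from paths like Apis/orders/Program.cs.
    apis=$(printf '%s\n' "$files" | sed -n 's|^Apis/\([^/]*\)/.*|\1|p' | sort -u)
    if [ -z "$apis" ]; then
      echo 'matrix=[]'
      echo "has-changes=false"
    else
      json=$(printf '%s\n' "$apis" | sed 's|.*|"&"|' | paste -sd, -)
      echo "matrix=[$json]"
      echo "has-changes=true"
    fi
  fi
}
```

Note that ignored paths never reach this function at all: Layer 1's paths-ignore prevents the workflow from starting in the first place.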

The Result
The change in consumption was immediate. Pushes that previously triggered 8 to 10 parallel VM instances now trigger only the ones that matter. Documentation updates consume nothing. A fix to a single API spins up one VM, not ten.
More importantly, the pipeline did not become less safe. The Core and Persistence guard ensures that changes to shared dependencies still trigger a full suite run. The security guarantees of the DevSecOps pipeline remained intact. We just stopped paying for work that did not need to be done.

Lessons Learned

  1. CI pipelines need dependency awareness. A flat "run everything on every push" approach does not scale. Model your repository's dependency graph and let the pipeline reflect it.

  2. Shared layers deserve special treatment. Core and Persistence are not just directories. They are the foundation every other component builds on. Treat them that way in your pipeline logic.

  3. Dynamic matrices are powerful. GitHub Actions matrix strategy is commonly used with static values. Building the matrix dynamically at runtime unlocks a level of precision that static configurations cannot achieve.

  4. Optimization and security are not in conflict. A well-designed pipeline can be both efficient and thorough. The goal is not to skip checks. It is to run the right checks on the right code.

In a follow-up article, I will walk through the full DevSecOps pipeline design, covering how SAST, SCA, and DAST are integrated and how security gates are enforced without becoming a bottleneck to developer velocity.
