Most “secure” container pipelines are doing unnecessary work.
They rebuild images every night.
They rescan the same vulnerabilities.
They ignore half the findings.
And none of it reduces real risk.
Worse, it creates the wrong incentives.
Teams spend time silencing scanners instead of reducing attack surface.
Developers learn to ignore security signals entirely.
The real problem isn’t finding vulnerabilities.
It’s knowing which ones actually matter.
(This is the same problem we see in SAST—detecting vulnerable code is easy; proving it’s reachable is the hard part.)
When scanners flag CVEs in code paths your application never executes, the signal breaks.
So we changed the model entirely.
We stopped rebuilding containers on a schedule.
Instead, we replaced base images with a Runtime Factory built on three constraints:
- Minimal OS surface (Wolfi)
- Declarative compilation (apko)
- Event-driven rebuilds (Rebuild only when risk changes)
Here’s how the model works.
## 1. Event-Driven Self-Healing (The Core Insight)
We removed scheduled rebuilds entirely.
Rebuilds now happen based on risk, not time.
A lightweight monitor checks the deployed images daily:
- No changes → do nothing
- If a new CVE appears → trigger a rebuild immediately
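The decision step is simple enough to sketch. Here's a minimal example assuming a Grype-style JSON report; the schema and severity names are assumptions, so adapt the parsing to whatever scanner you actually run:

```python
import json

def should_rebuild(scan_report: str, severity_floor: str = "High") -> bool:
    """Return True if the report contains a finding at or above the floor.

    Assumes a Grype-style JSON report with a top-level "matches" list;
    the schema and severity names are assumptions -- adapt to your scanner.
    """
    order = ["Negligible", "Low", "Medium", "High", "Critical"]
    floor = order.index(severity_floor)
    report = json.loads(scan_report)
    for match in report.get("matches", []):
        severity = match.get("vulnerability", {}).get("severity")
        if severity in order and order.index(severity) >= floor:
            return True
    return False

# A Low finding stays below the default "High" floor: no rebuild
print(should_rebuild('{"matches": [{"vulnerability": {"severity": "Low"}}]}'))  # prints False
```

Keeping a severity floor in the decision is what separates "rebuild on risk" from "rebuild on any scanner output."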
This flips the model:
Traditional pipelines: rebuild → scan → ignore
Runtime factory: detect → rebuild → deploy
The order matters.
Here is the exact trigger:
```yaml
- name: Trigger Rebuild if Vulnerable
  if: steps.scan.outputs.vuln_count > 0
  run: |
    curl -X POST https://api.github.com/repos/${{ github.repository }}/dispatches \
      -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
      -d '{"event_type": "cve-rebuild-trigger"}'
```
All of our downstream caller workflows (secure-java, secure-node, etc.) listen for this signal. The moment it fires, every runtime workflow rebuilds and publishes patched images, ready for downstream deployment.
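On the receiving side, a caller workflow subscribes to the dispatch event. This is a minimal sketch; the workflow and job names are illustrative, not the repo's actual files:

```yaml
# secure-node.yaml (illustrative): wake up when the monitor fires the signal
name: secure-node
on:
  repository_dispatch:
    types: [cve-rebuild-trigger]

jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Rebuild and publish the patched runtime
        run: make publish  # placeholder for the actual apko build-and-push steps
```

Because `repository_dispatch` fans out to every listening workflow, one CVE signal patches every runtime in the fleet.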
Rebuilds now happen only when something meaningful changes:
- new vulnerabilities in included packages
- upstream package updates
- runtime dependency changes
## 2. Declarative Builds (Compiling the OS)
To eliminate SCA noise, you must eliminate the components generating it. Instead of writing standard Dockerfiles that iterate on `FROM alpine` or `debian-slim`, we use apko to define container images declaratively: no Dockerfiles, no incremental layers. We pair it with Wolfi, a minimal, frequently patched package ecosystem designed for containers.
Here is a declarative runtime configuration (`wolfi-node.yaml`):

```yaml
contents:
  repositories:
    - https://packages.wolfi.dev/os
  packages:
    - wolfi-baselayout
    - ca-certificates-bundle
    - nodejs-24
accounts:
  users:
    - username: appuser
      uid: 1000
  run-as: appuser
```
When compiled, the resulting container contains Node.js 24 and the exact minimal glibc dependencies it needs to run. No unnecessary utilities (no shell, no package manager, no debugging tools unless explicitly included).
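Compiling the config is a single apko invocation. The registry name and tag below are placeholders:

```shell
# Build the declarative config into an OCI tarball, then load it locally
apko build wolfi-node.yaml registry.example.com/wolfi-node:latest wolfi-node.tar
docker load < wolfi-node.tar
```

Every rebuild produces the image from scratch against the latest Wolfi packages, which is what makes the event-driven trigger above meaningful.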
This comes with tradeoffs.
You lose the convenience of standard base images and take on responsibility for defining and maintaining your runtime.
For many teams, that’s not worth it.
## 3. Strict SLSA Provenance
Building a minimal image is only half the battle. Supply chain integrity requires cryptographic proof of how the runtime was built.
When our factory builds a runtime, it strictly pins generation tools by their SHA256 hashes. It then utilizes Sigstore and GitHub's OIDC identity to keylessly sign the image and generate an SLSA Level 3 Provenance Attestation.
Any developer can now verify both the integrity and origin of the image locally:
```
➜ gh attestation verify oci://index.docker.io/kenzman/ns-wolfi-go@sha256:61ea...
✓ Verification succeeded!
- Build repo: Ekene95/secops-base-images
- Workflow: secure-go.yaml
```
This proves exactly which pipeline built the image and what inputs were used.
If anything changes, verification fails.
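The same check can run in CI with cosign's keyless verification. This is a sketch: the identity regexp is an assumption based on the repo layout, and the digest is a placeholder.

```shell
# Verify the SLSA provenance attestation against the expected builder identity
cosign verify-attestation \
  --type slsaprovenance \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity-regexp 'https://github.com/Ekene95/secops-base-images/.*' \
  index.docker.io/kenzman/ns-wolfi-go@sha256:<digest>
```

Gating deployment on this command turns provenance from documentation into an enforced admission check.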
## 4. The Benchmarks
Transitioning from standard base images to an event-driven, compiled runtime factory yielded significant improvements across the board.
These are not theoretical gains—we measured them on identical workloads.
| Metric | Legacy (`node:20-alpine`) | Runtime Factory | Improvement |
|---|---|---|---|
| Attack surface | 12–40+ known CVEs | Significantly reduced | Minimal package inclusion |
| Image size | 135 MB | ~45 MB | 66% smaller |
| Idle memory overhead | ~4–6 MB | < 1 MB | Fewer background processes and dependencies |
| Execution context | `root` | `uid 1000` | Secure by default |
In our tests, workloads sensitive to libc differences (e.g. Python numerical operations, JVM warm-up) showed measurable improvements compared to Alpine-based images.
## Stop Solving the Wrong Problem
If your pipeline is constantly fighting CVEs in packages your application never touches, you don’t have a vulnerability problem.
You have a signal problem.
Security tools are good at finding issues.
They are not good at telling you which ones matter.
Until that changes, reducing noise is just as important as fixing vulnerabilities.
Base images optimize for convenience.
Runtime factories optimize for control.
And if you don’t control what goes into your runtime, your scanner will keep shouting—and your team will keep ignoring it.
If you’re dealing with SCA noise today, this pattern is worth experimenting with.
👉 Ekene95/secops-base-images on GitHub
Steal the pattern. Improve it. Break it.
If you want to experiment without building the full pipeline, I've published prebuilt images generated from the same runtime factory, with minimal packages and signed provenance.
Use them as a reference or as a starting point before building your own.
Stop rebuilding containers blindly.
Start rebuilding based on risk.
