The Email Nobody Wants
It usually starts with a completely normal message.
“Can you provide evidence of your software supply chain controls?”
“Do you maintain SBOMs for production artifacts?”
“How do you track vulnerability exceptions over time?”
At first glance, this sounds manageable. You already run npm audit. Maybe you use Dependabot. Maybe your CI blocks critical vulnerabilities. Your dashboards are green. CVE counts are low.
Then the auditor asks the next question:
“Can you prove that this report corresponds to what was actually deployed?”
And suddenly the entire room gets quiet.
Because most teams don't actually have evidence. They have snapshots:
- a vulnerability scan from last month
- a Syft export from one container image
- a spreadsheet full of accepted risks
- screenshots from CI
What auditors want is something much stricter:
- a reproducible inventory of dependencies
- tied to deployed artifacts
- with documented risk decisions
- and a traceable history of exceptions
Not “we scanned it once.”
Not “GitHub said it was fine.”
Not “our CVE count is low.”
They want a chain of evidence.
The Real Problem Is Not Vulnerabilities
Most engineers assume the audit is about security findings.
It usually isn't.
The real problem is reproducibility.
Imagine this conversation:
“This SBOM doesn't match the deployed digest.”
“This image was rebuilt.”
“This exception has no expiration.”
“Nobody remembers why we accepted this risk.”
That is what actually fails audits.
The issue is not whether you scanned your dependencies.
The issue is whether your decisions are:
- explainable
- reproducible
- traceable
- reviewable months later
What Auditors Actually Want
Most compliance frameworks dance around the wording, but the underlying requirements are surprisingly consistent.
Your auditor usually wants evidence for four things:
| Requirement | What they actually mean |
|---|---|
| Dependency visibility | “Can you enumerate what shipped?” |
| Vulnerability management | “Did you evaluate known issues?” |
| Risk acceptance process | “Who approved exceptions and why?” |
| Traceability | “Can you tie this evidence to production?” |
The frustrating part is that many security tools optimize for detection, not evidence integrity.
A scanner tells you something is vulnerable.
An auditor asks:
- why was it accepted?
- when was that decision made?
- who approved it?
- is the decision still valid?
- can you reproduce the report today?
Those are different problems.
The “Spreadsheet Hell” Phase
Almost every growing team goes through the same progression:
Phase 1 — “We’ll just use npm audit”
Simple enough.
Until you get hundreds of transitive findings nobody understands.
Phase 2 — “We’ll suppress the noisy stuff”
Now you have:
- undocumented ignores
- permanent suppressions
- exceptions nobody revisits
- tribal knowledge
Phase 3 — “We need an audit trail”
Now someone creates:
- an exception spreadsheet
- vulnerability review meetings
- manual SBOM exports
- ticket templates
And suddenly security work becomes document management.
Phase 4 — The Audit
This is where the cracks show:
- SBOM doesn’t match deployed image
- reports aren’t reproducible
- exception rationale is missing
- evidence exists in 14 places
At this point, the issue is operational entropy, not scanning capability.
The Supply Chain Evidence Package
So what does a defensible evidence package actually look like?
At minimum:
1. A reproducible SBOM
Not a random export from a laptop.
A generated artifact:
- tied to CI
- tied to a commit
- tied to a deployment digest
- versioned
- reproducible
Formats like CycloneDX help because they standardize structure.
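As a sketch, a CycloneDX SBOM can carry the commit and deployment digest alongside the component inventory via `metadata.properties` (arbitrary name/value pairs the spec allows). The `internal:*` property names and all values below are illustrative, not a standard:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "metadata": {
    "component": { "type": "container", "name": "api", "version": "1.4.2" },
    "properties": [
      { "name": "internal:git-commit", "value": "9f2c1e0" },
      { "name": "internal:image-digest", "value": "sha256:ab12…" },
      { "name": "internal:ci-run", "value": "build-4821" }
    ]
  }
}
```

With those fields embedded, "does this SBOM match the deployed artifact?" becomes a digest comparison instead of an archaeology project.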
2. Machine-readable risk decisions
This is where most pipelines fail.
A CVSS score is not a decision.
“Critical” is not an action plan.
You need explicit classifications:
- dev-only
- optional
- transitive
- direct + unpatched
- exempted with approval
The important part is not the label itself.
The important part is that:
- the vocabulary is finite
- the logic is documented
- the output is reproducible
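A finite, documented vocabulary can be expressed as an ordered rule list. The sketch below is a minimal illustration of the idea, not audit-ready's actual API; the `Finding` fields are hypothetical:

```typescript
// One reason code per dependency, from a closed vocabulary.
type ReasonCode =
  | "DEV_DEPENDENCY_ONLY"
  | "TRANSITIVE_NO_EXPLOIT"
  | "DIRECT_UNPATCHED"
  | "EXEMPTED";

// Hypothetical, simplified shape of a scanner finding.
interface Finding {
  name: string;
  direct: boolean;        // a direct dependency (not only transitive)
  devOnly: boolean;       // only reachable via devDependencies
  patchAvailable: boolean;
  exploited: boolean;     // known exploitation per advisory metadata
  exempted: boolean;      // covered by an approved exception
}

// Rules are checked in a fixed order, so the same input always yields
// the same single code; if nothing matches, fail loudly instead of guessing.
function classify(f: Finding): ReasonCode {
  if (f.exempted) return "EXEMPTED";
  if (f.devOnly) return "DEV_DEPENDENCY_ONLY";
  if (f.direct && !f.patchAvailable) return "DIRECT_UNPATCHED";
  if (!f.direct && !f.exploited) return "TRANSITIVE_NO_EXPLOIT";
  throw new Error(`no classification rule matched for ${f.name}`);
}
```

The explicit throw is the point: an unclassifiable dependency is a gap in the documented logic, not something to paper over with a default.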
3. Expiring exceptions
A permanent exception is just forgotten risk.
Every suppression should have:
- justification
- owner
- expiration date
Otherwise nobody remembers:
- why it existed
- whether it is still valid
- whether the vulnerability changed
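Enforcing expiry is a few lines once exceptions are structured data. A minimal sketch, with a hypothetical record shape:

```typescript
// Hypothetical exception record; every field is mandatory by design.
interface RiskException {
  id: string;           // advisory or CVE identifier
  justification: string;
  owner: string;
  expires: string;      // ISO date; a missing or unparsable date never passes
}

// Returns the ids that may still suppress findings at `now`.
// Anything expired (or undated) must be re-reviewed, not silently kept.
function activeExceptions(exceptions: RiskException[], now: Date): string[] {
  return exceptions
    .filter((e) => {
      const t = Date.parse(e.expires);
      return Number.isFinite(t) && t > now.getTime();
    })
    .map((e) => e.id);
}
```

Run this in CI and a forgotten exception turns into a failing build with an owner to ping, instead of a surprise during the audit.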
4. Deterministic output
This matters more than most teams realize.
If:
- the same lockfile
- on the same commit
- with the same policy
can generate different results over time…
…then your audit evidence is unstable.
That means:
- auditors cannot reproduce it
- diffs become meaningless
- trust erodes
Reproducibility is not a “nice to have.”
It is the foundation of defensible compliance evidence.
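One way to make "same inputs, same evidence" checkable is canonical serialization: sort keys recursively before writing JSON, then hash the bytes. A sketch of that idea (not audit-ready's implementation):

```typescript
import { createHash } from "node:crypto";

// Recursively sort object keys so the same logical content always
// serializes to byte-identical JSON, regardless of insertion order.
function canonicalize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .sort(([a], [b]) => (a < b ? -1 : a > b ? 1 : 0))
        .map(([k, v]) => [k, canonicalize(v)])
    );
  }
  return value;
}

// A stable digest an auditor can recompute later from the same inputs.
function reportDigest(report: object): string {
  const stable = JSON.stringify(canonicalize(report));
  return createHash("sha256").update(stable).digest("hex");
}
```

If two pipeline runs over the same lockfile and policy produce different digests, something nondeterministic leaked in, and that is worth knowing before the auditor finds it.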
Why We Started Building audit-ready
This was the motivation behind audit-ready.
Not:
“Let’s build another vulnerability scanner.”
But:
“How do we make audit decisions reproducible?”
The design constraints became unusually strict:
- deterministic core logic
- no environment-dependent behavior
- explicit machine-readable `reasonCode`s
- time-bounded exceptions
- schema-validated SBOM output
- reproducible triage
The goal was to make outputs defensible later — not just readable today.
One Small Architectural Decision That Changed Everything
The most important design choice ended up being surprisingly simple:
Every dependency gets exactly one `reasonCode`.
Not a floating score.
Not an opaque priority.
A deterministic classification like:
- `DEV_DEPENDENCY_ONLY`
- `TRANSITIVE_NO_EXPLOIT`
- `DIRECT_UNPATCHED`
- `EXEMPTED`
This made several things suddenly possible:
- CI enforcement (`--fail-on`)
- stable audit reports
- diffable results
- reproducible rationale
The report stopped being:
“Here are 187 scary findings.”
And became:
“Here are the 3 decisions requiring action.”
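The CI gate falls out of the classification almost for free: policy decides which codes block a build, and the code just filters. A sketch, with the same closed vocabulary (the function and field names are illustrative):

```typescript
type ReasonCode =
  | "DEV_DEPENDENCY_ONLY"
  | "TRANSITIVE_NO_EXPLOIT"
  | "DIRECT_UNPATCHED"
  | "EXEMPTED";

interface Classified {
  name: string;
  reasonCode: ReasonCode;
}

// Which codes fail the build is policy (e.g. a --fail-on flag),
// not logic buried in the scanner. An empty result means CI passes.
function failures(report: Classified[], failOn: ReasonCode[]): Classified[] {
  const blocked = new Set<ReasonCode>(failOn);
  return report.filter((f) => blocked.has(f.reasonCode));
}
```

Because the codes are stable, two runs on the same commit produce the same failure list, and a diff between two commits shows exactly which decisions changed.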
What We Learned
A few surprising things emerged while building this:
Auditors care more about consistency than sophistication
A simple, reproducible rule system is often more useful than a smarter-but-opaque scoring engine.
Most security debt is actually process debt
Teams often know vulnerabilities exist.
The real issue is:
- missing ownership
- undocumented exceptions
- broken traceability
- inconsistent evidence
“Low CVE count” is not evidence
You can have:
- 0 critical findings
- and still fail an audit
because you cannot explain:
- how decisions were made
- whether reports are reproducible
- whether evidence maps to production
Honest Limitations
There are still major gaps.
Current limitations include:
- npm only
- no monorepo support yet
- reachability analysis is heuristic
- no caching yet (Phase 3)
- patch availability depends on OSV metadata
And importantly:
- deterministic logic means no “smart guesses”
- if no rule matches, the tool fails explicitly
That tradeoff is intentional.
An incorrect deterministic answer is easier to audit than an unverifiable heuristic one.
If Your Audit Starts Next Week
If you are suddenly being asked for supply chain evidence, the immediate goal is not perfection.
It is:
- reproducibility
- traceability
- documented decisions
- expiring exceptions
- artifact-to-deployment linkage
Even partial automation helps enormously if it is consistent.
Because the hardest thing to defend in an audit is not vulnerability presence.
It is undocumented inconsistency.
Closing
Supply chain security is slowly shifting from:
“Did you scan your dependencies?”
to:
“Can you prove your decisions?”
Those are fundamentally different requirements.
And most existing workflows were designed for the first one.
Not the second.
audit-ready is currently in beta (Phase 3 complete). We are actively looking for:
- real-world CI/CD edge cases
- audit workflow feedback
- reproducibility failures
- incorrect triage assumptions
If your team has dealt with SBOM audits, compliance evidence drift, or vulnerability exception chaos, I’d genuinely love feedback.