The EU’s Cyber Resilience Act (CRA) is changing how we think about vulnerability management. What was once a best-effort security practice is now a legal mandate, with strict timelines for how organisations respond to security incidents and breaches. The core challenge is the 24-hour early-warning window for actively exploited vulnerabilities.
According to the 2026 Cloudsmith Artifact Management Report, 74% of engineering teams cannot instantly map a specific CVE to all affected production services. When your CISO needs an impact analysis for a zero-day, manually grepping through stale metadata is no longer a viable strategy.
The report indicates that while SBOM generation is common practice in 2026, true SBOM operationalisation is still rare: only 25% of teams automatically verify SBOMs at every build. The obvious problem is that a static SBOM is just a point-in-time snapshot. Because software dependencies decay, a clean SBOM from 48 hours ago is functionally useless once a new CVE is published. The Cloudsmith report argues that AI-assisted software development is actually increasing the volume of transitive dependencies in the codebase, since developers pay less attention to the libraries they pull in from Claude Code recommendations. An SBOM therefore cannot remain a point-in-time asset.
In short, Article 14 reporting goes live on 11 September 2026. From that date, you must be reporting actively exploited vulnerabilities in your digital products. Additionally, Article 13(7) requires systematic documentation of vulnerabilities using publicly available third-party intelligence (sources like OSV, NVD, and CISA’s KEV catalogue). I’m not trying to bore you with legal articles and their sub-sections, but it’s worth stating that all of this ties back to the 24-hour reporting requirement: once you become aware of an active exploit (say you were affected by the Log4Shell incident), the clock starts. This is the reporting window, not the discovery deadline.
The real question here isn’t how fast you can report, it’s whether you have a defensible process that allows you to become aware at all. If a vulnerability in your product is actively exploited and you had no process to detect that, “we didn’t know what happened” is not a defensible argument under the CRA.
Let’s say you have this identifier in your SBOM:
pkg:deb/ubuntu/openssl@1.1.1f-1ubuntu2.24?arch=amd64&distro=ubuntu-20.04
So, in five months (September 2026), you need to be able to continuously answer the following questions:
- Is this vulnerable? Article 3(40)
- Is it exploitable in your product? Article 3(41)
- Is it being actively exploited right now? Article 3(42)
Not once. For every component. Continuously.
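A first step is being able to decompose that identifier programmatically, so it can be joined against vulnerability feeds. Below is a minimal sketch of a PURL parser using only the Python standard library; `parse_purl` is a hypothetical helper for illustration, and a production system should use a dedicated library such as `packageurl-python` instead:

```python
from urllib.parse import parse_qs

def parse_purl(purl: str) -> dict:
    """Split a package URL (PURL) into its components.

    Minimal sketch: handles the common pkg:type/namespace/name@version?qualifiers
    shape only. Use the packageurl library for real parsing.
    """
    scheme, _, rest = purl.partition(":")
    if scheme != "pkg":
        raise ValueError(f"not a PURL: {purl}")
    path, _, query = rest.partition("?")
    path, _, version = path.partition("@")
    parts = path.strip("/").split("/")
    # parse_qs returns lists; PURL qualifiers are single-valued
    qualifiers = {k: v[0] for k, v in parse_qs(query).items()}
    return {
        "type": parts[0],
        "namespace": "/".join(parts[1:-1]) or None,
        "name": parts[-1],
        "version": version or None,
        "qualifiers": qualifiers,
    }

component = parse_purl(
    "pkg:deb/ubuntu/openssl@1.1.1f-1ubuntu2.24?arch=amd64&distro=ubuntu-20.04"
)
```

With the component broken out like this, the name and version become the join keys for the intelligence sources discussed next.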
To simplify these requirements: Article 3(40) deals purely with the vulnerability itself, the weakness that can be exploited in theory. All you have to do here is track and document known vulnerabilities. Article 3(41) deals specifically with exploitable vulnerabilities, those that are realistically exploitable under practical operational conditions.
You cannot ship code with known exploitable vulnerabilities.
We can only make this decision based on vulnerability intelligence data. For example, CISA’s Known Exploited Vulnerabilities (KEV) catalogue tells us exactly which CVEs are known to be exploited in the wild. Likewise, databases such as Exploit-DB paint a fuller picture by showing which of those known vulnerabilities have weaponised exploit scripts.
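In practice, a KEV lookup is just a set-membership test over the catalogue’s `cveID` field. A minimal sketch, where the two sample records are illustrative stand-ins for CISA’s real JSON feed and `actively_exploited` is a hypothetical helper:

```python
# Illustrative slice of CISA's KEV catalogue; the real feed is a JSON
# document with a top-level "vulnerabilities" list of records like these.
KEV_SAMPLE = {
    "vulnerabilities": [
        {"cveID": "CVE-2021-44228", "vulnerabilityName": "Apache Log4j2 RCE"},
        {"cveID": "CVE-2014-0160", "vulnerabilityName": "OpenSSL Heartbleed"},
    ]
}

def actively_exploited(cve_ids, kev_catalog):
    """Return the subset of cve_ids listed in the KEV catalogue, sorted."""
    kev = {v["cveID"] for v in kev_catalog["vulnerabilities"]}
    return sorted(set(cve_ids) & kev)

hits = actively_exploited(["CVE-2021-44228", "CVE-2020-0001"], KEV_SAMPLE)
```

A real pipeline would refresh the catalogue on a schedule and run this test against every CVE matched to your SBOM, since KEV membership is what flips a finding from “exploitable” to “actively exploited”.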
And again, environmental factors come into play here. If a CRITICAL CVE requires internet access to speak to a C2 server but your machine is air-gapped, intelligence such as VEX statements or EPSS scores can indicate the general likelihood of exploitation in a given environment. Finally, Article 3(42) deals with actively exploited vulnerabilities. If you have pulled a malicious (known-to-be-bad) library into your application stack, the clock starts: 24 hours for the early warning, 72 hours for the incident notification, and a final incident report within 14 days.
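That three-stage clock is simple to encode. A sketch, assuming the input timestamp is the moment you become aware of the active exploitation; `cra_reporting_deadlines` is a hypothetical helper, not an official tool:

```python
from datetime import datetime, timedelta, timezone

def cra_reporting_deadlines(awareness: datetime) -> dict:
    """CRA Article 14 clock: all three windows run from the moment of
    awareness of active exploitation, not from when the bug was introduced."""
    return {
        "early_warning": awareness + timedelta(hours=24),
        "incident_notification": awareness + timedelta(hours=72),
        "final_report": awareness + timedelta(days=14),
    }

aware = datetime(2026, 9, 11, 9, 0, tzinfo=timezone.utc)
deadlines = cra_reporting_deadlines(aware)
```

The point of writing it down is that the awareness timestamp must itself come from an automated detection process; a clock you never start is worse than one you miss.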
There are basically two different sets of technologies that make all of this achievable. First, the SBOM ecosystem: this is how you answer the question, “what’s actually in my software?”. It involves identifier schemes like PURL and CPE, formats like CycloneDX and SPDX, and open-source tooling like Syft, Trivy, and cdxgen.
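To make the formats concrete, here is roughly what a single component looks like in a CycloneDX JSON SBOM. This is a hand-written minimal document for illustration, not tool output; real SBOMs emitted by Syft or cdxgen carry far more metadata per component:

```python
import json

# Minimal CycloneDX-style SBOM with one component. Field names follow the
# CycloneDX JSON structure; the document is illustrative, not tool-generated.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "1.1.1f-1ubuntu2.24",
            "purl": "pkg:deb/ubuntu/openssl@1.1.1f-1ubuntu2.24",
        }
    ],
}

serialized = json.dumps(sbom, indent=2)
```

The `purl` field is what links this document to the vulnerability-intelligence side: it is the stable key the matching step joins on.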
The second set of technologies is the vulnerability intelligence we mentioned earlier. This answers the question, “what’s dangerous right now?”. Again, a range of open standards and feeds exists: CVE data from NVD and EUVD, the various vendor advisories, exploit feeds, KEV, EPSS, and OSV. There’s no single right intelligence source here, but you need to inform your SBOMs with data from these public threat-intelligence systems.
And that’s where I’m going to close out on the CRA for today. The CRA essentially requires component-to-vulnerability matching. What I mean by this is that the CRA requires you to connect these two groups of technologies continuously, not as a one-off exercise. And this is precisely where all the friction lives, and why only 25% of teams automatically verify SBOMs at every build, according to the recent Cloudsmith report.
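The matching itself can be pictured as a join between the two datasets. A deliberately simplified sketch: all records are illustrative, and a real matcher must also evaluate affected version ranges, which this name-only join ignores:

```python
# Illustrative SBOM inventory across services, and a vulnerability index
# keyed by package name (in reality built from feeds like OSV/NVD).
SBOM_COMPONENTS = [
    {"service": "billing-api", "purl": "pkg:deb/ubuntu/openssl@1.1.1f-1ubuntu2.24"},
    {"service": "auth-api", "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
]
VULN_INDEX = {
    "log4j-core": ["CVE-2021-44228"],
}

def affected_services(components, vuln_index):
    """Map each CVE to the services shipping a matching component.

    Name-only matching for brevity; production matching must also check
    version ranges and ecosystem, or it will over- and under-report.
    """
    impact = {}
    for c in components:
        # crude name extraction from the PURL: strip qualifiers and version,
        # then take the last path segment
        name = c["purl"].split("?")[0].rsplit("@", 1)[0].rsplit("/", 1)[-1]
        for cve in vuln_index.get(name, []):
            impact.setdefault(cve, []).append(c["service"])
    return impact

impact = affected_services(SBOM_COMPONENTS, VULN_INDEX)
```

Run continuously against fresh feeds, this inverted mapping is exactly what lets a team answer “which services does this CVE hit?” inside the 24-hour window, rather than grepping through stale metadata.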