Your SBOM Tells You What's Vulnerable. It Doesn't Tell You How Long It Will Stay That Way.
Imagine your team runs a dependency scan before a release. Two hundred warnings. You triage by CVSS score — fix the criticals, document the highs, accept the mediums. You ship. Six weeks later, a medium-severity vulnerability, disclosed before your release date, gets exploited in production. The maintainer was a solo developer. He'd acknowledged the CVE in a GitHub issue. There was even a draft fix — it just hadn't shipped. Your scanner knew the package was vulnerable. It didn't know whether a fix was coming in three days or three quarters. You found out the hard way.
That scenario is not exotic. It describes a gap no current security tool addresses: not whether a vulnerability exists, but how long it will remain live in your production system.
The framing we've been using is wrong
CVSS severity scores were designed to answer a specific question: if this vulnerability is exploited, how bad is it? That's a useful question. It's not the question that determines whether you get breached.
The question that determines whether you get breached is: how long will this package spend in the state where a working exploit exists, CVE numbers are public, and the patched version has not been adopted by the ecosystem? Call that window the patch-velocity gap. CVSS doesn't measure it. Your SCA tool doesn't measure it. Your SBOM doesn't capture it.
What drives that window? Two things: how fast upstream publishes advisories, and how fast the maintainer ships a fix and the ecosystem adopts it. When those rates are roughly matched, the window is manageable. When they're not — when a maintainer is publishing advisories faster than they're shipping fixes, or when the downstream ecosystem is slow to upgrade — the window stays open, and that's where attackers operate.
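The interaction of those two rates can be sketched as a rough exposure estimate. This is a hypothetical model, not the patch-gap tool's actual formula; every field name (advisoriesPerMonth, fixesPerMonth, adoptionLagDays) is illustrative.

```javascript
// Hypothetical sketch: estimate a package's patch-velocity gap in days.
function gapWindowDays(pkg) {
  // Backlog grows when advisories outpace shipped fixes.
  const backlogFactor =
    pkg.advisoriesPerMonth / Math.max(pkg.fixesPerMonth, 0.01);
  // A shipped fix protects nobody until downstream actually adopts it.
  return pkg.medianFixLagDays * backlogFactor + pkg.adoptionLagDays;
}

const example = {
  advisoriesPerMonth: 0.5,
  fixesPerMonth: 0.25, // fixes ship half as fast as advisories land
  medianFixLagDays: 60,
  adoptionLagDays: 90,
};
console.log(gapWindowDays(example)); // 60 * 2 + 90 = 210 days of exposure
```

The point of the sketch is the multiplier: when the advisory rate exceeds the fix rate, the fix lag compounds rather than staying constant.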
Why this is about to get worse
AI-assisted vulnerability discovery has been improving fast. XBOW's autonomous penetration-testing benchmark went from 54.5% (Claude Opus 4.6) to 98.5% (Claude Opus 4.7) in a single model release. Mozilla shipped 271 Firefox security fixes in two weeks using a preview model — roughly four years of normal patch throughput compressed into a fortnight.
The discovery side of the equation is accelerating. The fix side isn't. OSS maintainers are still largely volunteers. Patch velocity is determined by human time, not model capability. The result is a widening gap between how fast vulnerabilities are found and announced, and how fast they're actually closed in the ecosystem.
This is a queueing problem. If disclosure throughput doubles and maintainer capacity stays flat, the queue of disclosed-but-unpatched vulnerabilities gets longer. The packages with already-slow fix rates and slow downstream adoption — the ones already accumulating exposure today — get worse faster than anything else in your graph.
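The queueing claim is easy to verify with a toy simulation. The rates below are made up purely to show the dynamics, assuming a maintainer whose capacity stays fixed while the disclosure rate doubles.

```javascript
// Toy model: disclosures arrive at a fixed rate per month; the maintainer
// closes at most fixCapacity per month. Returns the unpatched backlog.
function backlogAfterMonths(disclosureRate, fixCapacity, months) {
  let backlog = 0;
  for (let m = 0; m < months; m++) {
    backlog += disclosureRate;               // new disclosures this month
    backlog = Math.max(0, backlog - fixCapacity); // fixes shipped this month
  }
  return backlog;
}

// Matched rates: the queue stays flat.
console.log(backlogAfterMonths(4, 4, 12)); // 0
// Disclosure rate doubles, capacity unchanged: the queue grows every month.
console.log(backlogAfterMonths(8, 4, 12)); // 48
```

With matched rates the backlog clears each month; once disclosures outpace capacity, the backlog grows linearly and never recovers without added maintainer time.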
What the numbers say
I scored the top 304 npm and PyPI packages by download count on two dimensions: monthly CVE/OSV advisory rate (2024–2026 window) and a downstream adoption bucket (how quickly the ecosystem picks up patched versions). The result is a gap_bucket per package: LOW, MEDIUM, HIGH, or CRITICAL.
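A minimal sketch of how a two-dimensional score like that could collapse into a single bucket. The thresholds below are assumptions for illustration, not the actual cutoffs patch-gap uses.

```javascript
// Illustrative gap_bucket assignment: advisory rate crossed with a
// downstream adoption bucket ("fast" | "medium" | "slow").
// Thresholds are hypothetical, not patch-gap's real ones.
function gapBucket(advisoriesPerMonth, adoptionBucket) {
  const slowAdoption = adoptionBucket === "slow";
  if (advisoriesPerMonth >= 0.5 && slowAdoption) return "CRITICAL";
  if (advisoriesPerMonth >= 0.5 || slowAdoption) return "HIGH";
  if (advisoriesPerMonth >= 0.1) return "MEDIUM";
  return "LOW";
}

// 18 advisories over 24 months = 0.75/month, with slow adoption:
console.log(gapBucket(0.75, "slow")); // CRITICAL
// 3 advisories over 24 months = 0.125/month, with fast adoption:
console.log(gapBucket(0.125, "fast")); // MEDIUM
```

The key design choice is that the two axes interact: a high advisory rate alone, or slow adoption alone, is bad; both together is what makes a package CRITICAL.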
Today, before AI acceleration changes anything: 74 of 304 packages — nearly one in four — are already HIGH or CRITICAL. 30 are CRITICAL, meaning they combine a slow maintainer, slow downstream adoption, and a long median fix lag when fixes do arrive. These include packages like shelljs (3 CVEs in 24 months, 120-day fix lag, slow adoption in CI pipelines) and aiohttp (18 advisories, slow app-level patch adoption despite a responsive upstream).
Under 5× disclosure-rate acceleration — plausible within 12–18 months as AI tooling spreads beyond current classified partnerships — the HIGH+CRITICAL population grows to 105 packages (34.5%). That's a 42% increase with no change in maintainer capacity, because the structure of the problem gets worse: more advisories, same fix throughput, same slow-adoption downstream.
What to do with this
The immediate action is to stop triaging purely by CVSS and start looking at the gap column. A HIGH-severity finding in a package with an active maintainer and fast ecosystem adoption is a different risk than a MEDIUM-severity finding in a package with a 90-day fix lag and a downstream that's typically six months behind. Your current tooling treats them the same.
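Concretely, that triage rule is just a different sort order: gap bucket first, CVSS as the tiebreaker. A minimal sketch, with made-up package names and findings:

```javascript
// Rank findings by gap bucket first, CVSS second, instead of CVSS alone.
// Field names and example packages are illustrative.
const GAP_ORDER = { CRITICAL: 0, HIGH: 1, MEDIUM: 2, LOW: 3 };

function triage(findings) {
  return [...findings].sort(
    (a, b) => GAP_ORDER[a.gapBucket] - GAP_ORDER[b.gapBucket] || b.cvss - a.cvss
  );
}

const findings = [
  { pkg: "actively-maintained-lib", cvss: 8.1, gapBucket: "LOW" },
  { pkg: "slow-fix-lib", cvss: 5.4, gapBucket: "CRITICAL" },
];
console.log(triage(findings).map((f) => f.pkg));
// → ["slow-fix-lib", "actively-maintained-lib"]
```

Under a pure-CVSS sort the 8.1 finding wins; under gap-first triage, the medium-severity finding in the slow-to-patch package jumps ahead, which is exactly the reordering the paragraph above argues for.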
You can run patch-gap on your own lockfile today:
node patch-gap.js /path/to/your/package-lock.json
If it exits 1, look at the gap_bucket column before you look at the CVSS score. The packages showing HIGH or CRITICAL there are the ones where the race between time-to-exploit and time-to-patch-adoption is already unfavorable — and where AI-accelerated discovery is about to make it worse.
The full dataset, projection chart, and essay are in the GitHub repo.
Originally published at https://namishsaxena.com/blog/patch-velocity-asymmetry