After the LiteLLM PyPI compromise, I built a 5-minute dependency audit workflow. It uses only free tools and catches vulnerabilities that `pip-audit` misses.
Here's the exact workflow.
## Step 1: List All Dependencies (30 seconds)
```bash
# Python
pip freeze > requirements-full.txt
wc -l requirements-full.txt
# Output: 247 packages

# Node.js — recurse through the tree so transitive deps are counted too
npm ls --all --json | jq '[.. | .dependencies? // empty | keys[]] | unique | length'
# Output: 389 packages
```
Most developers don't realize how many transitive dependencies they have. That one `pip install httpx` brought in 15 packages you never asked for.
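As a cross-check on `pip freeze`, the standard library can enumerate installed distributions directly. A minimal sketch using `importlib.metadata`:

```python
from importlib.metadata import distributions

# Count every installed distribution in the current environment,
# direct and transitive alike, without shelling out to pip.
names = sorted({d.metadata['Name'] for d in distributions() if d.metadata['Name']})
print(f'{len(names)} installed distributions')
```

This is handy inside scripts where spawning a `pip` subprocess is awkward.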
## Step 2: Check Against NVD (1 minute)
The National Vulnerability Database has 200,000+ CVEs. Free API, no key needed.
```python
import subprocess

import requests

# Get installed packages
result = subprocess.run(['pip', 'freeze'], capture_output=True, text=True)
packages = []
for line in result.stdout.strip().split('\n'):
    if '==' in line:
        name, version = line.split('==', 1)
        packages.append((name, version))

# Check each against NVD (keyword search is broad, so expect some
# false positives — review the hits before acting on them)
vulns = []
for name, version in packages:
    response = requests.get(
        'https://services.nvd.nist.gov/rest/json/cves/2.0',
        params={'keywordSearch': name, 'resultsPerPage': 5},
        timeout=30,
    )
    if response.status_code == 200:
        data = response.json()
        if data.get('totalResults', 0) > 0:
            vulns.append((name, version, data['totalResults']))

print(f'Packages with known CVEs: {len(vulns)}/{len(packages)}')
for name, ver, count in vulns:
    print(f'  {name}=={ver}: {count} CVE(s)')
```
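One caveat with the loop above: NVD's public API is rate-limited (roughly 5 requests per 30 seconds without an API key, 50 with one), so a scan of a few hundred packages needs throttling. A minimal sketch — the `throttled` helper is my own, not part of any library:

```python
import time

def throttled(items, delay=6.0):
    """Yield items with a pause between them, staying under NVD's
    unauthenticated rate limit (~5 requests per 30 seconds)."""
    for i, item in enumerate(items):
        if i:
            time.sleep(delay)
        yield item

# In the scan loop above: for name, version in throttled(packages): ...
print(list(throttled(['httpx', 'flask'], delay=0.01)))  # ['httpx', 'flask']
```

With a free API key from NVD you can drop `delay` to well under a second.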
## Step 3: Check OSV.dev (1 minute)
OSV.dev covers vulnerabilities that NVD doesn't — especially Python and npm ecosystem-specific issues.
```python
import requests

def check_osv(package_name, version, ecosystem='PyPI'):
    response = requests.post('https://api.osv.dev/v1/query', json={
        'package': {'name': package_name, 'ecosystem': ecosystem},
        'version': version,
    }, timeout=30)
    return response.json().get('vulns', [])

# Check all packages (the `packages` list from Step 2)
for name, version in packages:
    for v in check_osv(name, version):
        print(f'[{v["id"]}] {name}=={version}: {v.get("summary", "No summary")}')
```
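Step 4 needs a flat list of CVE IDs, but OSV returns its own identifiers (`GHSA-…`, `PYSEC-…`) with any matching CVE IDs in the `aliases` field. One way to build that list from Step 3's results — the helper name and the sample record are mine, for illustration:

```python
def extract_cve_ids(osv_vulns):
    """Pull CVE identifiers out of OSV records, checking both the
    primary `id` and the `aliases` list."""
    cves = set()
    for v in osv_vulns:
        for ident in [v.get('id', '')] + v.get('aliases', []):
            if ident.startswith('CVE-'):
                cves.add(ident)
    return sorted(cves)

# Hypothetical OSV record shape, for illustration only
sample = [{'id': 'GHSA-aaaa-bbbb-cccc', 'aliases': ['CVE-2024-0001']}]
print(extract_cve_ids(sample))  # ['CVE-2024-0001']
```

Collect these into a `found_cves` list as you scan, and Step 4 can consume it directly.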
## Step 4: Check EPSS Scores (1 minute)
EPSS (Exploit Prediction Scoring System) tells you the probability that a CVE will be exploited in the next 30 days. This helps you prioritize.
```python
import requests

def get_epss_score(cve_id):
    response = requests.get(
        f'https://api.first.org/data/v1/epss?cve={cve_id}', timeout=30)
    data = response.json()
    if data.get('data'):
        return float(data['data'][0].get('epss', 0))
    return 0.0

# `found_cves` is the list of CVE IDs collected in Steps 2–3
for cve_id in found_cves:
    score = get_epss_score(cve_id)
    if score > 0.1:  # >10% chance of exploitation
        print(f'HIGH RISK: {cve_id} — {score*100:.1f}% exploit probability')
```
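Once scores are in hand, sorting by EPSS gives a fix-first list. (The EPSS API also accepts a comma-separated list of CVE IDs, so you can batch the lookups into one call.) A small sorting helper — the function name and the score values below are illustrative, not fetched from the API:

```python
def prioritize(cve_scores, threshold=0.1):
    """Return (cve, score) pairs above the EPSS threshold,
    highest exploit probability first."""
    high = [(c, s) for c, s in cve_scores.items() if s > threshold]
    return sorted(high, key=lambda pair: pair[1], reverse=True)

scores = {'CVE-2021-44228': 0.97, 'CVE-2020-99999': 0.02, 'CVE-2022-12345': 0.35}
for cve, score in prioritize(scores):
    print(f'{cve}: {score:.0%}')
```

This turns a wall of CVE IDs into an ordered work queue.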
## Step 5: Generate Report (1 minute)
```python
from datetime import datetime

report = {
    'total_packages': len(packages),
    'packages_with_cves': len(vulns),
    'high_risk_cves': len([c for c in found_cves if get_epss_score(c) > 0.1]),
    'scan_date': datetime.now().isoformat(),
}

print(f"""
=== DEPENDENCY AUDIT REPORT ===
Total packages: {report['total_packages']}
With known CVEs: {report['packages_with_cves']}
High risk (EPSS > 10%): {report['high_risk_cves']}
Scan date: {report['scan_date']}
""")
```
## Why This Beats pip-audit
| Feature | pip-audit | This workflow |
|---|---|---|
| CVE database | PyPA only | NVD + OSV.dev |
| Exploit probability | No | Yes (EPSS) |
| Cross-ecosystem | Python only | Python + npm |
| Risk prioritization | No | Yes |
| CI/CD integration | Basic | Full (exit code) |
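The exit-code row deserves a concrete example: failing the pipeline when high-risk CVEs appear takes one line at the end of the script. A sketch — the `ci_gate` helper is my own naming:

```python
import sys

def ci_gate(high_risk_count):
    """Return a process exit code: non-zero fails the CI job."""
    if high_risk_count > 0:
        print(f'FAIL: {high_risk_count} high-risk CVE(s) found')
        return 1
    print('PASS: no high-risk CVEs')
    return 0

# At the end of the audit script:
# sys.exit(ci_gate(report['high_risk_cves']))
```

Any CI system (GitHub Actions, GitLab CI, Jenkins) treats a non-zero exit as a failed step, so the build blocks until the high-risk CVE is resolved or pinned away.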
## Automate It
I packaged this into an open-source toolkit: API Security Scanner — checks against 5 free vulnerability databases in one run.
For the full list of free security APIs: Awesome Security APIs
How do you audit your dependencies? Are you checking just CVEs or also exploit probability?
Related:
- API Security Scanner — the tool
- Free API Directory — 100+ free APIs
- LiteLLM Was Compromised — How to Detect Supply Chain Attacks