If you've ever run npm audit and seen 47 vulnerabilities staring back at you, you know the feeling. That sinking "how did we get here" moment where you realize your app is built on a tower of code that nobody — including you — has actually reviewed.
This isn't a new problem, but it's getting worse. The average modern application pulls in hundreds of transitive dependencies. And the uncomfortable truth? Most critical open-source libraries are maintained by a handful of people, sometimes just one person, reviewing code in their spare time.
Recent efforts in the industry — including initiatives to use AI models for automated security auditing of open-source codebases — have put this problem back in the spotlight. So let's talk about the actual problem, why traditional approaches fall short, and what you can do today to stop treating dependency security as an afterthought.
The Root Cause: Trust Without Verification
Here's how most of us add dependencies:
```shell
# Monday morning, need a date library
npm install cool-date-lib

# ...never look at its source code
# ...never check who maintains it
# ...never audit its 14 transitive dependencies
```
We implicitly trust that someone else has done the security review. But who? The maintainer is often one overworked developer. The "community" is mostly people filing issues, not reading source code line by line.
The problem compounds at three levels:
- Direct vulnerabilities in the dependency code itself (buffer overflows, injection flaws, unsafe deserialization)
- Supply chain attacks where a maintainer account gets compromised or a malicious package gets typosquatted
- Transitive risk where your dependency's dependency has the actual vulnerability, buried three levels deep
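You can make that third level visible before it bites you. Assuming npm 7 or later, `npm explain` walks every chain from a deep transitive dependency back to whatever you installed directly (`minimist` below is just an example package name, not a recommendation):

```shell
# Why is this package in my tree at all? Shows every dependency
# chain that leads to it (minimist is just an example name)
npm explain minimist

# Or dump the entire tree, transitive dependencies included
npm ls --all
```

Run this the next time `npm audit` flags something you've never heard of; the answer is almost always "a dependency of a dependency."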
The Log4Shell vulnerability in late 2021 was the wake-up call. A critical flaw in a logging library that sat in nearly every Java application on the planet. It had been there for years. Nobody caught it because nobody was looking — not at that scale.
Step 1: Actually Know What You're Running
You can't secure what you can't see. Start with a Software Bill of Materials (SBOM). This isn't just a compliance buzzword — it's your dependency inventory.
```shell
# Generate an SBOM for a Node.js project using CycloneDX
npx @cyclonedx/cyclonedx-npm --output-file sbom.json

# For Python projects
pip install cyclonedx-bom
cyclonedx-py requirements -i requirements.txt -o sbom.json

# For Go projects
# Uses the go module graph to produce a full dependency tree
cyclonedx-gomod mod -json -output sbom.json
```
Once you have an SBOM, you can feed it into vulnerability databases. But generating it once isn't enough — make it part of your CI pipeline so it stays current.
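One way to consume the SBOM, assuming you have OSV-Scanner installed locally (it comes up again in the next step), is to point the scanner at the file directly. The flag syntax below is from osv-scanner v1; newer releases may organize the CLI differently:

```shell
# Check every component listed in the SBOM against the OSV database
osv-scanner --sbom=sbom.json
```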
Step 2: Automate Scanning in CI (Not Just Locally)
Running npm audit on your laptop is fine. Running it only on your laptop is not. You need this in CI where it can actually block bad code from shipping.
Here's a practical GitHub Actions setup using OSV-Scanner, an open-source tool backed by the OSV vulnerability database:
```yaml
# .github/workflows/dependency-audit.yml
name: Dependency Audit

on:
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 6 * * 1' # weekly Monday morning scan

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run OSV-Scanner
        uses: google/osv-scanner-action/osv-scanner-action@v2
        with:
          scan-args: |-
            --recursive
            ./
        # The action will fail the build if vulnerabilities are found
        # above the configured severity threshold
```
The scheduled cron job is important. New vulnerabilities get disclosed constantly — a dependency that was clean last week might have a CVE today. If you only scan on PRs, you'll miss vulnerabilities discovered after your code was merged.
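You'll eventually hit a finding you can't fix yet (no patched version exists, or the vulnerable code path is unreachable). Don't disable the scanner; acknowledge the finding with a paper trail. OSV-Scanner reads an `osv-scanner.toml` in the scanned directory for exactly this (the ID below is illustrative, not a real advisory):

```toml
# osv-scanner.toml — lives next to the code being scanned
[[IgnoredVulns]]
id = "GHSA-xxxx-xxxx-xxxx"  # illustrative ID, not a real advisory
ignoreUntil = 2025-12-31    # re-surface the finding after this date
reason = "Not exploitable here: the vulnerable code path is never called"
```

The `ignoreUntil` date matters: a suppression without an expiry is just a vulnerability you've agreed to forget about.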
Step 3: Lock Down Your Dependency Resolution
Lockfiles exist for a reason. But I've seen plenty of projects where package-lock.json is in .gitignore (please don't do this) or where developers routinely run npm install instead of npm ci.
```shell
# In CI, ALWAYS use ci instead of install
# npm ci uses the lockfile exactly — no surprise upgrades
npm ci

# For Python, pin everything including transitive deps
pip freeze > requirements.txt

# Or better yet, use pip-tools for reproducible builds
pip-compile requirements.in --generate-hashes # hashes verify integrity
```
The --generate-hashes flag is the real hero here. It ensures that the exact package content you reviewed is what gets installed. If someone pushes a compromised version to PyPI with the same version number (yes, this has happened), the hash check will catch it.
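To make the mechanism concrete, here's a minimal sketch of what hash pinning buys you, using plain `sha256sum` on a fake package file rather than a real package manager:

```shell
# We "publish" a fake package file and record its hash, the way a
# hash-pinned lockfile does at review time
printf 'original package contents' > pkg.tar.gz
EXPECTED=$(sha256sum pkg.tar.gz | cut -d ' ' -f 1)

# An attacker swaps the file contents without changing the version number...
printf 'compromised package contents' > pkg.tar.gz
ACTUAL=$(sha256sum pkg.tar.gz | cut -d ' ' -f 1)

# ...and the hash check catches it before anything gets installed
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "hash ok, installing"
else
  echo "hash mismatch, aborting install"
fi
```

This is the whole idea behind `--require-hashes` in pip and the `integrity` field in package-lock.json: the version number is an attacker-controlled label, the hash is not.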
Step 4: Reduce Your Attack Surface
The best vulnerability is the one in a dependency you never installed. I've lost count of how many projects I've seen with massive dependency trees for functionality that could be written in 20 lines.
```javascript
// Before: installing a package to check if a number is even
// (this is a real npm package with millions of downloads)
const isEven = require('is-even');

// After: just... write it
function isEven(n) {
  return n % 2 === 0;
}

// Before: pulling in all of lodash for one function
const _ = require('lodash');
_.get(obj, 'a.b.c');

// After: optional chaining has existed since ES2020
const value = obj?.a?.b?.c;
```
I'm not saying "write everything from scratch." That's a different kind of security nightmare. But be intentional. Before adding a dependency, ask:
- Can I do this with the standard library or language built-ins?
- How many transitive dependencies does this pull in?
- Who maintains it, and how actively?
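A couple of `npm view` queries answer most of those questions without installing anything (`cool-date-lib` is the made-up package from earlier, so substitute a real name):

```shell
# Who publishes it, and what does it drag in? No install required
npm view cool-date-lib maintainers
npm view cool-date-lib dependencies

# When was the last release? A long-dormant package is a risk signal
npm view cool-date-lib time.modified
```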
Step 5: Set Up Automated Dependency Updates
Stale dependencies are dangerous dependencies. The longer you go without updating, the more likely you're running code with known vulnerabilities — and the harder the eventual upgrade becomes.
Dependabot and Renovate are the two main open-source options here. I prefer Renovate for its flexibility, but both get the job done. The key configuration decision is grouping strategy:
```json
// renovate.json — group minor/patch updates to reduce PR noise
// (Renovate's config parser accepts comments)
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "minor and patch dependencies",
      "automerge": true
    },
    {
      "matchUpdateTypes": ["major"],
      "dependencyDashboardApproval": true
    }
  ],
  "vulnerabilityAlerts": {
    "enabled": true,
    "labels": ["security"]
    // Security patches get their own PRs, not grouped
  }
}
```
Automerging minor and patch updates (assuming you have decent test coverage) keeps your dependencies fresh without drowning you in PRs. Major updates get flagged for human review because they're more likely to contain breaking changes.
The Bigger Picture: AI-Assisted Auditing
Here's what's changing. The industry is starting to explore using large language models to audit source code at a scale that human reviewers simply can't match. The idea is straightforward: point an AI model at a codebase and have it look for vulnerability patterns, unsafe memory access, injection points, and logic flaws.
I haven't tested these approaches extensively in my own workflow yet, and honestly the tooling is still maturing. But the premise is sound — static analysis tools have always been limited by their rule sets, and ML models can potentially catch novel vulnerability patterns that rule-based scanners miss.
What this doesn't replace is the fundamentals. No amount of AI auditing helps if you're not tracking your dependencies, not running scanners in CI, and not keeping things updated. The basics still matter.
Prevention Checklist
If you take nothing else from this post:
- Generate and maintain SBOMs for every project
- Run vulnerability scanning in CI, not just locally, and on a schedule
- Use lockfiles religiously with hash verification where possible
- Audit your dependency tree — remove what you don't need
- Automate updates with Renovate or Dependabot
- Pin your CI actions to specific SHA commits, not tags (tags can be force-pushed)
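That last item looks like this in a workflow file (the SHA below is a placeholder; copy the real commit hash from the action's releases page):

```yaml
steps:
  # Pinning to a tag: the tag can be moved to point at different code
  # - uses: actions/checkout@v4

  # Pinning to a full commit SHA is immutable; keep the tag as a comment
  # so humans can still tell which version this is
  - uses: actions/checkout@0000000000000000000000000000000000000000 # v4, placeholder SHA
```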
Dependency security isn't glamorous work. Nobody's going to tweet about your well-configured Renovate setup. But the alternative is finding out about your vulnerabilities the hard way — from a security researcher if you're lucky, from an attacker if you're not.