<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel</title>
    <description>The latest articles on DEV Community by Daniel (@p41n3st).</description>
    <link>https://dev.to/p41n3st</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F332070%2F6d1848fa-b479-4927-8d82-4c1bd1596439.png</url>
      <title>DEV Community: Daniel</title>
      <link>https://dev.to/p41n3st</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/p41n3st"/>
    <language>en</language>
    <item>
      <title>I built a supply chain security scanner in Rust — here's what I learned</title>
      <dc:creator>Daniel</dc:creator>
      <pubDate>Fri, 15 May 2026 20:12:57 +0000</pubDate>
      <link>https://dev.to/p41n3st/i-built-a-supply-chain-security-scanner-in-rust-heres-what-i-learned-k87</link>
      <guid>https://dev.to/p41n3st/i-built-a-supply-chain-security-scanner-in-rust-heres-what-i-learned-k87</guid>
      <description>&lt;p&gt;If you've ever run npm install and thought "what exactly did I just put on my machine?", this one's for you.&lt;/p&gt;

&lt;p&gt;The problem that got me started&lt;br&gt;
A while back I was poking around the node_modules of a mid-sized project. It had 47 direct dependencies and... 841 transitive ones. Forty-seven became eight hundred and forty-one. No tool was giving me a clear picture of which ones were risky, which had active CVEs, or — worse — which had been quietly compromised.&lt;/p&gt;

&lt;p&gt;Snyk and Dependabot exist, sure, but they're either paid or require you to hand your repository over to a third-party service. I wanted something that ran locally, offline, and without giving anyone access to my code.&lt;/p&gt;

&lt;p&gt;That's how OpenSentinel was born.&lt;/p&gt;

&lt;p&gt;What it does&lt;br&gt;
One line: it analyzes your full dependency tree and tells you which packages are risky and why.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;opse scan ~/projects/my-app&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That launches an interactive TUI where you can navigate package by package, inspect each CVE, see what suspicious code patterns were detected, and export an SBOM if you need it for compliance.&lt;/p&gt;

&lt;p&gt;For CI/CD it's just as simple:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;opse analyze --format=json --severity=high,critical&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It exits with a meaningful code (0 = clean, 1 = medium, 2 = high, 3 = critical) so your pipeline can block automatically.&lt;/p&gt;
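&lt;p&gt;For illustration, that severity-to-exit-code mapping fits in a few lines of Rust. This is a minimal sketch; &lt;code&gt;Severity&lt;/code&gt; and &lt;code&gt;exit_code&lt;/code&gt; are invented names, not OpenSentinel's actual internals:&lt;/p&gt;

```rust
// Hypothetical sketch: map the worst finding to the CI exit codes
// described above (0 = clean, 1 = medium, 2 = high, 3 = critical).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Clean = 0,
    Medium = 1,
    High = 2,
    Critical = 3,
}

fn exit_code(findings: &[Severity]) -> i32 {
    // The pipeline only cares about the single worst finding.
    findings.iter().copied().max().unwrap_or(Severity::Clean) as i32
}

fn main() {
    let findings = [Severity::Medium, Severity::High];
    println!("{}", exit_code(&findings)); // prints "2"
    // A real CLI would finish with std::process::exit(code).
}
```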

&lt;p&gt;The parts I had the most fun building&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The risk scoring system&lt;br&gt;
I didn't want it to be just "has a CVE = bad". Reality is more nuanced. A package might have a high-severity CVE that doesn't apply to your setup, or it might have zero CVEs but be completely abandoned by its maintainer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The scoring weighs 5 dimensions:&lt;/p&gt;

&lt;p&gt;Dimension   Weight&lt;br&gt;
Advisories (CVEs, GHSA, NVD)    40%&lt;br&gt;
Malicious code patterns 20%&lt;br&gt;
Version behavior changes    15%&lt;br&gt;
Maintainer reputation   15%&lt;br&gt;
Community reports   10%&lt;br&gt;
There's one special case: if a package is in the known-malicious database with a score ≥ 0.8, the weighted formula gets skipped entirely and it goes straight to CRITICAL. There's no point averaging things out if we already know it's malware.&lt;/p&gt;
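&lt;p&gt;The override-then-weight logic could be sketched like this. Field and function names here are made up for illustration; the real scoring code surely looks different:&lt;/p&gt;

```rust
// Hypothetical sketch of the scoring described above: five weighted
// dimensions, short-circuited when the known-malicious DB is confident.
struct Signals {
    advisories: f64,       // 0.0..=1.0, weighted 40%
    code_patterns: f64,    // 20%
    version_behavior: f64, // 15%
    maintainer: f64,       // 15%
    community: f64,        // 10%
    known_malicious: Option<f64>, // confidence from the embedded DB, if listed
}

fn risk_score(s: &Signals) -> f64 {
    // Special case: a confident known-malicious hit skips the averaging
    // entirely and pins the package at the maximum (CRITICAL) score.
    if let Some(conf) = s.known_malicious {
        if conf >= 0.8 {
            return 1.0;
        }
    }
    0.40 * s.advisories
        + 0.20 * s.code_patterns
        + 0.15 * s.version_behavior
        + 0.15 * s.maintainer
        + 0.10 * s.community
}

fn main() {
    let s = Signals {
        advisories: 0.6,
        code_patterns: 0.2,
        version_behavior: 0.0,
        maintainer: 0.4,
        community: 0.1,
        known_malicious: None,
    };
    println!("{:.2}", risk_score(&s));
}
```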

&lt;ol start="2"&gt;
&lt;li&gt;The known-malicious database embedded in the binary&lt;br&gt;
Some npm packages have a documented history of being compromised — &lt;code&gt;event-stream@3.3.6&lt;/code&gt;, &lt;code&gt;ua-parser-js@0.7.29&lt;/code&gt;, &lt;code&gt;colors@1.4.1&lt;/code&gt;, node-ipc during the war in Ukraine... the list is real and well-documented.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead of hitting an external API every time, I embedded the database directly into the binary at compile time using include_str!():&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;static BUNDLED_DB: &amp;amp;str = include_str!("../../data/known_malicious.json");&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Compile-time. No internet needed. No latency. The binary knows from birth which packages are known bad actors.&lt;/p&gt;
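&lt;p&gt;The lookup against the embedded data can stay dependency-free. Here is a sketch with a string literal standing in for the real &lt;code&gt;include_str!()&lt;/code&gt; file, and an invented one-entry-per-line format rather than the project's actual JSON:&lt;/p&gt;

```rust
// Hypothetical sketch. In the real tool this would be:
//   static BUNDLED_DB: &str = include_str!("../../data/known_malicious.json");
// Here a literal stands in, using an invented "name@version confidence" format.
static BUNDLED_DB: &str = "\
event-stream@3.3.6 0.95
ua-parser-js@0.7.29 0.95
colors@1.4.1 0.90";

/// Returns the confidence score if `name@version` is in the embedded DB.
fn known_malicious(name: &str, version: &str) -> Option<f64> {
    let needle = format!("{name}@{version}");
    for line in BUNDLED_DB.lines() {
        if let Some((pkg, score)) = line.split_once(' ') {
            if pkg == needle {
                return score.parse().ok();
            }
        }
    }
    None
}

fn main() {
    println!("{:?}", known_malicious("event-stream", "3.3.6"));
}
```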

&lt;ol start="3"&gt;
&lt;li&gt;Building the TUI with Ratatui&lt;br&gt;
I'd never built a terminal UI in Rust before. Ratatui (the successor to tui-rs) turned out to be surprisingly ergonomic. The layout system works a lot like CSS Flexbox/Grid but for the terminal:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskvvgzonfou7fu9bs833.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskvvgzonfou7fu9bs833.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The render loop runs on Tokio's blocking thread pool so it doesn't starve the async scan tasks.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;AST-based detection with Tree-sitter&lt;br&gt;
This was the most technically involved part — and my favorite. Instead of running regexes over source code (fragile, easy to evade), the tool uses Tree-sitter to parse the actual AST of the JavaScript/TypeScript and look for specific patterns:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;process.env access followed by an HTTP call in the same scope&lt;br&gt;
eval(Buffer.from(..., 'base64')) — the classic obfuscated malware trick&lt;br&gt;
require() calls with dynamic paths&lt;br&gt;
Crypto mining signatures (stratum+tcp://)&lt;br&gt;
AST analysis is opt-in and activates with downloadSource: true in the config — it downloads the package tarball from the registry and scans it locally. Nothing leaves your machine.&lt;/p&gt;
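&lt;p&gt;For flavor, here is a tree-sitter query in that spirit. It is an illustrative query I wrote against the tree-sitter-javascript grammar, not OpenSentinel's actual rule set, matching the &lt;code&gt;eval(Buffer.from(...))&lt;/code&gt; call shape:&lt;/p&gt;

```scheme
; Illustrative only: matches eval(Buffer.from(...)) call shapes
; in the tree-sitter-javascript grammar.
(call_expression
  function: (identifier) @callee
  arguments: (arguments
    (call_expression
      function: (member_expression
        object: (identifier) @obj
        property: (property_identifier) @method)))
  (#eq? @callee "eval")
  (#eq? @obj "Buffer")
  (#eq? @method "from"))
```

Because the query runs on the parse tree, whitespace tricks and string concatenation games that defeat regexes don't change what it matches.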

&lt;p&gt;Tech stack&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rust with Tokio for async&lt;/li&gt;
&lt;li&gt;Ratatui + Crossterm for the TUI&lt;/li&gt;
&lt;li&gt;Tree-sitter for AST analysis&lt;/li&gt;
&lt;li&gt;SQLx with PostgreSQL (optional — works fine without a DB too)&lt;/li&gt;
&lt;li&gt;Reqwest for the OSV, GitHub Advisories, and NVD APIs&lt;/li&gt;
&lt;li&gt;Serde for all the JSON/TOML work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The PostgreSQL part is for caching advisories and maintainer metrics between scans. No Postgres? Things just don't get cached; everything still works.&lt;/p&gt;

&lt;p&gt;The GitHub Action&lt;br&gt;
A good CLI tool should be easy to drop into a pipeline. There's a composite GitHub Action:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;- uses: ./
  id: scan
  with:
    severity: high,critical
    fail-on: "2"
    github-token: ${{ secrets.GITHUB_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It automatically leaves a comment on the PR with the results table, and updates it on every push instead of piling up new comments.&lt;/p&gt;

&lt;p&gt;What I actually ran into&lt;br&gt;
Running a TUI and async tasks at the same time is not obvious.&lt;br&gt;
Ratatui needs to own the terminal in a tight render loop. Tokio needs its threads free for async work. The fix was spawn_blocking — the render loop runs on a dedicated OS thread, the scan tasks run on the async runtime, and they talk through an unbounded channel. Once I understood why that separation exists, a lot of Rust's async model clicked into place.&lt;/p&gt;
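&lt;p&gt;The shape of that fix can be shown with std threads and channels so it stands alone. In the real code, tokio's spawn_blocking plays the role of the dedicated thread and the channel is tokio's unbounded channel; everything below is a stand-in:&lt;/p&gt;

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical sketch of the architecture above: a dedicated OS thread
// owns the "terminal" in a receive loop, while the sender side stands in
// for the async scan tasks.
enum Event {
    Progress(String),
    Done,
}

// Returns how many progress events the render thread handled.
fn run_pipeline(packages: &[&str]) -> usize {
    let (tx, rx) = mpsc::channel::<Event>();

    // Stand-in for the render loop: in the real tool this thread owns
    // the terminal and redraws the Ratatui UI on every event.
    let render = thread::spawn(move || {
        let mut frames = 0;
        for event in rx {
            match event {
                Event::Progress(_pkg) => frames += 1, // redraw here
                Event::Done => break,
            }
        }
        frames
    });

    // Stand-in for the scan tasks pushing results as they finish.
    for pkg in packages {
        tx.send(Event::Progress(pkg.to_string())).unwrap();
    }
    tx.send(Event::Done).unwrap();

    render.join().unwrap()
}

fn main() {
    println!("{}", run_pipeline(&["left-pad", "lodash"])); // prints "2"
}
```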

&lt;p&gt;include_str!() solved a problem I was overcomplicating.&lt;br&gt;
I spent way too long thinking about how to ship the known-malicious database — separate file? download on first run? bundled as a dependency? Then I remembered that include_str!() embeds the file contents directly into the binary at compile time. One line. Works offline. No install step. Sometimes the simplest thing is actually the right thing.&lt;/p&gt;

&lt;p&gt;Not having a database shouldn't break anything.&lt;br&gt;
Early on, if PostgreSQL wasn't configured, the whole scan failed. That's the wrong default for a CLI tool — most people running it locally won't have a database set up. I reworked the orchestrator so the DB is genuinely optional: advisories still get fetched, scoring still runs, everything still works. The database just adds caching and persistence on top.&lt;/p&gt;
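&lt;p&gt;That optional-dependency pattern is pleasant to express in Rust: hold an Option of the pool and consult it opportunistically. A std-only sketch with an in-memory map standing in for Postgres (types and names are invented):&lt;/p&gt;

```rust
use std::collections::HashMap;

// Hypothetical sketch: Option<Cache> plays the role the optional
// PostgreSQL pool plays in the article. None means "no caching,
// but everything still works".
struct Cache {
    advisories: HashMap<String, String>,
}

fn fetch_from_network(pkg: &str) -> String {
    // Stand-in for the real advisory fetch (OSV / GHSA / NVD).
    format!("advisories for {pkg}")
}

fn advisories(cache: Option<&mut Cache>, pkg: &str) -> String {
    match cache {
        // Cache configured: fill it on miss, reuse it on hit.
        Some(c) => c
            .advisories
            .entry(pkg.to_string())
            .or_insert_with(|| fetch_from_network(pkg))
            .clone(),
        // No database configured: fetch every time, never fail.
        None => fetch_from_network(pkg),
    }
}

fn main() {
    let mut cache = Cache { advisories: HashMap::new() };
    println!("{}", advisories(Some(&mut cache), "left-pad"));
    println!("{}", advisories(None, "left-pad"));
}
```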

&lt;p&gt;The borrow checker catches real bugs, not just theoretical ones.&lt;br&gt;
At one point I had scan results being mutated from two places at the same time — one path was updating scores, another was building the output. The compiler refused to compile it. I thought it was being annoying. It wasn't: that was a real data race that would've caused silent incorrect output in a concurrent scan. The error message was actually pointing at the exact problem.&lt;/p&gt;

&lt;p&gt;Where it stands&lt;br&gt;
It works today for Node.js and Bun projects. Python, Go, and Rust support is on the roadmap.&lt;/p&gt;

&lt;p&gt;The code is on &lt;a href="https://github.com/QueeNFrisk/OpenSentinel" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — contributions are very welcome, especially around:&lt;/p&gt;

&lt;p&gt;More entries in the known-malicious database&lt;br&gt;
Parsers for other ecosystems&lt;br&gt;
Integration tests&lt;/p&gt;

</description>
      <category>npm</category>
      <category>bunjs</category>
      <category>security</category>
    </item>
  </channel>
</rss>
