<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Wes</title>
    <description>The latest articles on DEV Community by Wes (@ticktockbent).</description>
    <link>https://dev.to/ticktockbent</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3796484%2F937e86ef-88e2-4887-914c-d79b7e29dadf.png</url>
      <title>DEV Community: Wes</title>
      <link>https://dev.to/ticktockbent</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ticktockbent"/>
    <language>en</language>
    <item>
      <title>When AI Writes Your Firewall, Check the Math</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Sun, 12 Apr 2026 11:52:30 +0000</pubDate>
      <link>https://dev.to/ticktockbent/when-ai-writes-your-firewall-check-the-math-1cff</link>
      <guid>https://dev.to/ticktockbent/when-ai-writes-your-firewall-check-the-math-1cff</guid>
      <description>&lt;p&gt;A Python developer with "AI Solutions Architect" in their GitHub bio pushes 8,500 lines of eBPF Rust in a single commit. The commit author is "Blackwall AI." The &lt;code&gt;.gitignore&lt;/code&gt; lists &lt;code&gt;.claude/&lt;/code&gt;. The README reads like marketing copy, and the author later confirms the AI handled the "marketing glaze." Four days later, the repo has 119 stars.&lt;/p&gt;

&lt;p&gt;This is the new normal. People are shipping real projects in languages they don't primarily work in, at speeds that weren't possible two years ago. The question isn't whether AI-assisted code can be good. It's whether you can tell the difference between code that's good and code that looks good.&lt;/p&gt;

&lt;p&gt;I went to find out.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Blackwall?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/xzcrpw/blackwall" rel="noopener noreferrer"&gt;Blackwall&lt;/a&gt; is an eBPF/XDP firewall with an LLM-powered honeypot, written in Rust. Named after the AI containment barrier in Cyberpunk 2077. It runs packet filtering at kernel level via aya (pure Rust eBPF, no C, no libbpf), tracks per-IP behavioral profiles through a state machine that escalates from New to Normal to Probing to EstablishedC2, does JA4 TLS fingerprinting and deep packet inspection, and includes a tarpit that uses a local LLM through Ollama to simulate a compromised Ubuntu server. The idea is that when the behavioral engine flags an IP as hostile, traffic gets redirected to the honeypot, which pretends to be a real machine while logging everything the attacker does.&lt;/p&gt;

&lt;p&gt;One maintainer. Two commits. 119 stars in four days. MIT license.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/xzcrpw/blackwall" rel="noopener noreferrer"&gt;blackwall&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~119 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (xzcrpw), shipped everything in one commit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8,544 lines, 123 tests, zero unwrap() in production, real bugs in the glue code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AI-generated README that oversells, no architecture docs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No CONTRIBUTING.md, no CI, no PR history to judge from&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not yet, but worth reading&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;Six crates in a Cargo workspace. &lt;code&gt;common&lt;/code&gt; holds &lt;code&gt;#[repr(C)]&lt;/code&gt; shared types between kernel and userspace with explicit padding fields. &lt;code&gt;blackwall-ebpf&lt;/code&gt; has the XDP programs. &lt;code&gt;blackwall&lt;/code&gt; is the userspace daemon. &lt;code&gt;tarpit&lt;/code&gt; is the honeypot. &lt;code&gt;blackwall-controller&lt;/code&gt; coordinates distributed nodes. &lt;code&gt;xtask&lt;/code&gt; handles eBPF cross-compilation.&lt;/p&gt;

&lt;p&gt;The eBPF code is the strongest part. Every pointer dereference has a bounds check before access, which is not just good practice but a hard requirement: the BPF verifier will reject your program otherwise, and getting this right in Rust via aya is harder than it sounds. The entropy estimation uses a 256-bit bitmap instead of a histogram, which is a clever adaptation for eBPF's 512-byte stack limit. The TLS ClientHello parser handles session IDs, cipher suites, compression methods, extensions, SNI extraction, ALPN, and GREASE filtering. If you want to see how to write correct eBPF in Rust, this is a solid reference.&lt;/p&gt;
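&lt;p&gt;The bitmap trick deserves a closer look: a 256-bucket histogram costs about 1 KB of counters, but a 256-bit bitmap fits in four &lt;code&gt;u64&lt;/code&gt;s and still tells you how many distinct byte values a payload touches, which is a workable randomness proxy. A minimal userspace sketch of the idea -- illustrative only, not Blackwall's actual eBPF code:&lt;/p&gt;

```rust
/// Count distinct byte values with a 256-bit bitmap: 4 x u64 = 32
/// bytes of stack, versus ~1 KB for a full histogram. Sketch of the
/// technique, not the project's implementation.
fn distinct_bytes(payload: &[u8]) -> u32 {
    let mut bitmap = [0u64; 4];
    for &b in payload {
        // High two bits pick the word, low six bits pick the bit.
        bitmap[(b >> 6) as usize] |= 1u64 << (b & 0x3f);
    }
    bitmap.iter().map(|w| w.count_ones()).sum()
}

/// Crude randomness heuristic: encrypted or compressed payloads touch
/// many distinct byte values, text touches few. Threshold is arbitrary.
fn looks_random(payload: &[u8]) -> bool {
    !payload.is_empty() && (distinct_bytes(payload) as usize) * 2 > payload.len().min(256)
}
```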

&lt;p&gt;The behavioral engine is clean too. Seven phases from &lt;code&gt;New&lt;/code&gt; through &lt;code&gt;EstablishedC2&lt;/code&gt;, ordered by escalating suspicion, with an explicit demotion path to &lt;code&gt;Trusted&lt;/code&gt;. The state machine uses deterministic thresholds for fast-path decisions, with optional LLM classification as a slow path. Constants are well-named and consistent: &lt;code&gt;SUSPICION_INCREMENT&lt;/code&gt; at 0.15, &lt;code&gt;SUSPICION_MAX&lt;/code&gt; at 1.0, &lt;code&gt;SUSPICION_DECAY&lt;/code&gt; at 0.02. Trusted promotion requires the score to stay below 0.1 across 300 seconds and 100+ packets.&lt;/p&gt;
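&lt;p&gt;The fast path is easy to picture from those constants alone. A hedged reconstruction of the scoring loop (the struct and method names are mine; the real engine tracks much more per-IP state):&lt;/p&gt;

```rust
// Constants as described above. The surrounding structure is an
// illustrative reconstruction, not Blackwall's code.
const SUSPICION_INCREMENT: f64 = 0.15;
const SUSPICION_MAX: f64 = 1.0;
const SUSPICION_DECAY: f64 = 0.02;
const TRUSTED_THRESHOLD: f64 = 0.1;

struct IpProfile {
    suspicion: f64,
    secs_below_threshold: u64,
    packets_seen: u64,
}

impl IpProfile {
    /// A suspicious event bumps the score, saturating at the max.
    fn flag(&mut self) {
        self.suspicion = (self.suspicion + SUSPICION_INCREMENT).min(SUSPICION_MAX);
    }
    /// Each evaluation decays the score back toward zero.
    fn decay(&mut self) {
        self.suspicion = (self.suspicion - SUSPICION_DECAY).max(0.0);
    }
    /// Trusted promotion: under 0.1 for 300 seconds across 100+ packets.
    fn eligible_for_trusted(&self) -> bool {
        self.suspicion < TRUSTED_THRESHOLD
            && self.secs_below_threshold >= 300
            && self.packets_seen >= 100
    }
}
```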

&lt;p&gt;The tarpit is where things get creative. It runs four protocol handlers: SSH (backed by the LLM pretending to be a bash shell), HTTP (fake WordPress), MySQL (wire protocol responses), and DNS (canary records). The SSH honeypot's system prompt is detailed enough to be convincing. It specifies hostname, kernel version, filesystem layout, installed services, and includes example command/output pairs so the LLM knows to respond with raw terminal output instead of "Sure, here's the output of ls."&lt;/p&gt;

&lt;p&gt;Now for the bugs. The project ships with &lt;code&gt;"think": false&lt;/code&gt; in its Ollama requests, but some models emit &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; blocks anyway. The stripping code handles exactly one block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="nf"&gt;.find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&amp;lt;think&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="nf"&gt;Some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="nf"&gt;.find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&amp;lt;/think&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;after&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;end&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
        &lt;span class="n"&gt;after&lt;/span&gt;&lt;span class="nf"&gt;.trim_start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the model emits two &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; blocks, the second one passes through to the attacker, leaking the LLM's reasoning about how to respond. A security tool with a security bug.&lt;/p&gt;
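&lt;p&gt;The fix is a loop that strips every block and drops an unterminated one rather than leaking it. A sketch -- the function name and exact semantics are mine, not the project's:&lt;/p&gt;

```rust
/// Remove every <think>...</think> block, not just the first, and
/// drop an unterminated block entirely. Hypothetical fix, not the
/// code Blackwall shipped.
fn strip_think_blocks(content: &str) -> String {
    let mut out = content.to_string();
    while let Some(start) = out.find("<think>") {
        // Search for the close tag *after* the open tag we found.
        match out[start..].find("</think>") {
            Some(rel_end) => {
                // "</think>" is 8 bytes; remove the whole block.
                out.replace_range(start..start + rel_end + 8, "");
            }
            None => {
                // No close tag: better to drop the tail than leak it.
                out.truncate(start);
                break;
            }
        }
    }
    out.trim().to_string()
}
```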

&lt;p&gt;The DPI path matching is aggressive. It flags any HTTP path starting with &lt;code&gt;/wp-&lt;/code&gt;, &lt;code&gt;/adm&lt;/code&gt;, &lt;code&gt;/cm&lt;/code&gt;, or &lt;code&gt;/cg&lt;/code&gt; as suspicious, which would catch &lt;code&gt;/admiral-insurance&lt;/code&gt; or &lt;code&gt;/cmarket&lt;/code&gt;. The &lt;code&gt;random_window_size()&lt;/code&gt; function in the tarpit's anti-fingerprinting module computes a value and discards it (&lt;code&gt;let _window = random_window_size()&lt;/code&gt;). TCP window randomization is advertised in the README but doesn't actually happen. The iptables DNAT rules that redirect traffic to the honeypot persist after a SIGKILL because &lt;code&gt;Drop&lt;/code&gt; cleanup gets skipped. Several modules (&lt;code&gt;distributed&lt;/code&gt;, &lt;code&gt;antifingerprint&lt;/code&gt;, &lt;code&gt;canary&lt;/code&gt;) carry &lt;code&gt;#[allow(dead_code)]&lt;/code&gt; annotations, meaning they're defined but not wired into anything.&lt;/p&gt;
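&lt;p&gt;The path matching is a small fix away from being reasonable: require a path-segment boundary instead of a bare prefix. Sketch below -- the naive version mirrors the behavior described above; the tighter variant and its segment list are my suggestion, not the project's:&lt;/p&gt;

```rust
/// Prefixes as described above.
const SUSPICIOUS_PREFIXES: &[&str] = &["/wp-", "/adm", "/cm", "/cg"];

/// Naive check: flags /admiral-insurance and /cmarket as attacks.
fn naive_is_suspicious(path: &str) -> bool {
    SUSPICIOUS_PREFIXES.iter().any(|p| path.starts_with(p))
}

/// Tighter check (my suggestion): match whole first segments like
/// /admin or /cgi-bin, keeping only /wp- as a true prefix rule.
fn segment_is_suspicious(path: &str) -> bool {
    const SEGMENTS: &[&str] = &["admin", "cgi-bin", "cmd"];
    let first = path.trim_start_matches('/').split('/').next().unwrap_or("");
    path.starts_with("/wp-") || SEGMENTS.contains(&first)
}
```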

&lt;p&gt;No CI. No GitHub Actions. The README claims "Clippy: zero warnings (-D warnings)" but nothing enforces it. The entire development history is two commits: "release: blackwall v1" and "images."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;The suspicion score scale mismatch was the clearest bug to fix. The behavioral engine operates on a 0.0 to 1.0 scale. Increments are 0.15, max is 1.0, decay is 0.02 per evaluation, and trusted promotion requires &amp;lt; 0.1. But the DPI event handler in &lt;code&gt;process_events&lt;/code&gt; was adding 15.0 and capping at 100.0. A completely different scale. Any IP that triggered a single DPI detection got a suspicion score of 15.0+, making trusted promotion effectively unreachable. At a decay rate of 0.02 per tick, it would take roughly 750 ticks to come back under 0.1.&lt;/p&gt;

&lt;p&gt;The fix was small: export &lt;code&gt;SUSPICION_INCREMENT&lt;/code&gt; and &lt;code&gt;SUSPICION_MAX&lt;/code&gt; from the behavior module and use them in the DPI handler, matching the pattern already used by &lt;code&gt;apply_escalation&lt;/code&gt; in the transition logic. Three files, 11 insertions, 5 deletions. All 19 behavior tests pass.&lt;/p&gt;
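&lt;p&gt;In miniature, the bug and the fix (a hypothetical reconstruction; the real handler lives in &lt;code&gt;process_events&lt;/code&gt;):&lt;/p&gt;

```rust
// Scale constants the behavior module exports after the fix.
const SUSPICION_INCREMENT: f64 = 0.15;
const SUSPICION_MAX: f64 = 1.0;

// Before: the DPI handler hardcoded literals from a 0-100 scale,
// 100x the engine's actual range.
fn on_dpi_event_buggy(score: &mut f64) {
    *score = (*score + 15.0).min(100.0);
}

// After: reuse the exported constants, matching the pattern
// apply_escalation already uses in the transition logic.
fn on_dpi_event_fixed(score: &mut f64) {
    *score = (*score + SUSPICION_INCREMENT).min(SUSPICION_MAX);
}
```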

&lt;p&gt;Getting into the codebase was easy despite the size. The workspace structure keeps things separated, the naming is consistent, and the behavioral engine is self-contained enough that you can understand the state machine without reading the eBPF code. Finding the bug was a matter of grepping for &lt;code&gt;suspicion_score&lt;/code&gt; and noticing that one callsite looked nothing like the others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/xzcrpw/blackwall/pull/3" rel="noopener noreferrer"&gt;PR #3&lt;/a&gt; got a response the same day. The maintainer had independently found the same bug during a major architectural rewrite and closed the PR to avoid merge conflicts with the new core. "Nice catch on the scale mismatch, man." He shipped v2.0.0 hours later, overhauling the behavioral engine and moving to native eBPF DNAT. Someone else's detailed &lt;a href="https://github.com/xzcrpw/blackwall/issues/2" rel="noopener noreferrer"&gt;bug report&lt;/a&gt; got a similar response: all findings would be addressed in the rewrite. Both are now closed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Blackwall is interesting as both a codebase and a case study. The eBPF work is genuinely competent. The behavioral engine is well-designed. The honeypot concept is creative. But it shipped in one commit with no CI, no development history, and bugs in the code that connects the components together. The strongest parts (kernel programs, type contracts, state machine) look like they were built carefully. The weakest parts (DPI integration, think-tag stripping, dead code) look like they were assembled and not tested end-to-end.&lt;/p&gt;

&lt;p&gt;That's not a criticism of using AI to write code. It's a criticism of shipping without integration testing, regardless of who or what wrote it. The eBPF verifier forces you to be correct in the kernel programs. Nothing forces you to be correct in the glue.&lt;/p&gt;

&lt;p&gt;If you work with eBPF, the XDP programs and the aya patterns are worth studying. If you're interested in honeypot design, the tarpit's multi-protocol approach and LLM integration are genuinely novel. The maintainer shipped a v2.0.0 rewrite the same week, which suggests the integration bugs are being addressed. If you want to run this in production, give the new version a serious read first. The architecture has changed significantly, and it hasn't had time to accumulate the kind of real-world testing you want from something sitting between attackers and your network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/xzcrpw/blackwall" rel="noopener noreferrer"&gt;Blackwall on GitHub&lt;/a&gt;. Read the eBPF code even if you skip the rest.&lt;/p&gt;

&lt;p&gt;If you want to contribute, the v2.0.0 rewrite is a fresh starting point. My &lt;a href="https://github.com/xzcrpw/blackwall/pull/3" rel="noopener noreferrer"&gt;suspicion score fix&lt;/a&gt; and the &lt;a href="https://github.com/xzcrpw/blackwall/issues/2" rel="noopener noreferrer"&gt;bug report&lt;/a&gt; are both closed, addressed in the rewrite. The new codebase still has no CI, and the &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; tag stripping likely still needs work. The 35 MB of PNG assets are still there.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #14, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>security</category>
      <category>ai</category>
    </item>
    <item>
      <title>A Rust TUI for Your UniFi Network That Actually Takes Code Review Seriously</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Thu, 09 Apr 2026 12:44:50 +0000</pubDate>
      <link>https://dev.to/ticktockbent/a-rust-tui-for-your-unifi-network-that-actually-takes-code-review-seriously-32cc</link>
      <guid>https://dev.to/ticktockbent/a-rust-tui-for-your-unifi-network-that-actually-takes-code-review-seriously-32cc</guid>
      <description>&lt;p&gt;If you run UniFi gear, you manage it through a web UI. That's fine until you need to script something, automate a deployment, or check your firewall rules from an SSH session on a box that doesn't have a browser. Ubiquiti doesn't ship a CLI. The API exists but it's split across two incompatible surfaces with different auth mechanisms, different response formats, and different ideas about what an entity ID should look like. Most people who try to automate UniFi end up writing bespoke Python scripts against whichever API endpoint they needed that week. The scripts work until a firmware update moves the endpoint.&lt;/p&gt;

&lt;p&gt;Someone decided to build the CLI that Ubiquiti didn't. And then kept going until it had a full TUI dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Unifly?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/hyperb1iss/unifly" rel="noopener noreferrer"&gt;Unifly&lt;/a&gt; is a Rust CLI and terminal dashboard for managing UniFi network infrastructure. Built by &lt;a href="https://github.com/hyperb1iss" rel="noopener noreferrer"&gt;Stefanie Jane&lt;/a&gt;, it ships as a single binary with 27 CLI commands covering devices, clients, networks, firewall rules, NAT policies, DNS, VPN, Wi-Fi observability, and more. There's also a 10-screen ratatui TUI for real-time monitoring: device health, client connections, traffic charts, firewall policies, network topology. The whole thing speaks both UniFi API dialects (the modern Integration API and the older Session API) and reconciles data between them.&lt;/p&gt;

&lt;p&gt;123 stars at time of writing. Eight weeks old. About 15,000 lines of Rust across two crates. The pace of development is aggressive: 71 commits in the first eight days of April alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/hyperb1iss/unifly" rel="noopener noreferrer"&gt;Unifly&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;123 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (hyperb1iss), AI-assisted&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;250+ tests, pedantic clippy, forbid(unsafe), e2e suite against simulated controller&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Thorough CONTRIBUTING.md, detailed architecture guide, explicit ROADMAP with named gaps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same-day responses, content always lands, merge mechanism varies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes if you have UniFi gear. The CLI alone replaces a lot of web UI clicking.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The architecture is a two-crate Cargo workspace. &lt;code&gt;unifly-api&lt;/code&gt; is the library: HTTP and WebSocket transport, a &lt;code&gt;Controller&lt;/code&gt; facade behind an &lt;code&gt;Arc&lt;/code&gt;, and a reactive &lt;code&gt;DataStore&lt;/code&gt; built on &lt;code&gt;DashMap&lt;/code&gt; and &lt;code&gt;tokio::watch&lt;/code&gt; channels. &lt;code&gt;unifly&lt;/code&gt; is the binary: clap-based CLI commands and ratatui TUI screens. The library is published to crates.io independently, so you could build your own tooling on top of it without pulling in the CLI.&lt;/p&gt;

&lt;p&gt;The dual-API problem is the interesting engineering challenge. UniFi controllers expose a modern Integration API (REST, API key auth, UUIDs) and an older Session API (cookie + CSRF, hex string IDs, envelope-wrapped responses). Some data only exists on one side. Client Wi-Fi experience metrics? Session API. Firewall policy CRUD? Integration API. NAT rules? Session v2 API, a third dialect. Unifly handles all of this behind a single &lt;code&gt;Controller&lt;/code&gt; type. You call &lt;code&gt;controller.execute(Command::CreateNatPolicy(...))&lt;/code&gt; and the command router figures out which API to hit, which auth to use, and which ID format to resolve.&lt;/p&gt;
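&lt;p&gt;The shape of that router, reduced to a sketch (the names and variants are illustrative; Unifly's real &lt;code&gt;Command&lt;/code&gt; enum is far larger):&lt;/p&gt;

```rust
/// Which API surface a command must hit. Illustrative only.
#[derive(Debug, PartialEq)]
enum ApiSurface {
    Integration, // REST, API-key auth, UUID ids
    Session,     // cookie + CSRF auth, hex ids, envelope responses
    SessionV2,   // third dialect, used by NAT rules
}

/// A few invented commands standing in for Unifly's real enum.
enum Command {
    ListFirewallPolicies,
    ClientWifiExperience { mac: String },
    CreateNatPolicy { name: String },
}

/// The router's job: callers never pick an API; the command does.
fn route(cmd: &Command) -> ApiSurface {
    match cmd {
        Command::ListFirewallPolicies => ApiSurface::Integration,
        Command::ClientWifiExperience { .. } => ApiSurface::Session,
        Command::CreateNatPolicy { .. } => ApiSurface::SessionV2,
    }
}
```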

&lt;p&gt;The lint configuration tells you a lot about a project's standards. Unifly runs &lt;code&gt;clippy::pedantic&lt;/code&gt; at &lt;code&gt;deny&lt;/code&gt; level, &lt;code&gt;clippy::unwrap_used&lt;/code&gt; at &lt;code&gt;deny&lt;/code&gt;, and &lt;code&gt;unsafe_code&lt;/code&gt; at &lt;code&gt;forbid&lt;/code&gt;. The workspace Cargo.toml has 30+ individual clippy rule overrides, each with a clear rationale. This is not someone who pasted a default config. The test suite uses wiremock for API mocking, assert_cmd for CLI testing, and insta for snapshot tests. The e2e suite spins up a simulated UniFi controller and runs full command flows. &lt;code&gt;cargo-deny&lt;/code&gt; enforces a license allowlist and vulnerability advisories.&lt;/p&gt;

&lt;p&gt;One thing you notice immediately: nearly every commit has &lt;code&gt;Co-Authored-By: Claude Opus 4.6&lt;/code&gt; in the trailer. This is an AI-assisted codebase, and the maintainer isn't hiding it. The code quality is high regardless of how it got there. The architecture is coherent, the tests are real, the error handling uses &lt;code&gt;thiserror&lt;/code&gt; and &lt;code&gt;miette&lt;/code&gt; properly. AI-assisted doesn't mean AI-dumped. The commit history shows iterative problem-solving, not a single prompt that produced 15,000 lines.&lt;/p&gt;

&lt;p&gt;What's rough? The TUI has ten screens and zero test coverage. The controller reconnect lifecycle is broken (the &lt;code&gt;CancellationToken&lt;/code&gt; becomes permanent after the first disconnect, so reconnects never fire). And the ROADMAP had a few stale entries that claimed features were missing when they'd already been implemented. These are the kinds of gaps you'd expect in a project that's moving this fast with a solo maintainer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;The ROADMAP listed "NAT policies have no &lt;code&gt;update&lt;/code&gt; subcommand" as a known gap. The workaround was to delete and recreate, which means re-specifying every field just to change one. NAT rules are the kind of thing you adjust frequently (new port forward, toggling a rule on and off), so this was a real workflow friction.&lt;/p&gt;

&lt;p&gt;The fix touched 11 files across both crates. On the library side: an &lt;code&gt;UpdateNatPolicyRequest&lt;/code&gt; struct, a new &lt;code&gt;Command&lt;/code&gt; variant, and an &lt;code&gt;apply_nat_update&lt;/code&gt; function that fetches the existing rule via the Session v2 API, merges only the changed fields, and PUTs it back. On the CLI side: an &lt;code&gt;Update&lt;/code&gt; variant in the clap args with all NAT fields as optional flags plus &lt;code&gt;--from-file&lt;/code&gt; support, validation that at least one field is provided, and &lt;code&gt;conflicts_with&lt;/code&gt; between &lt;code&gt;--name&lt;/code&gt; and &lt;code&gt;--description&lt;/code&gt; (both map to the same v2 API field, which I caught during my pre-submission review). I also promoted a private &lt;code&gt;ensure_session_access&lt;/code&gt; function from the events handler to a shared utility, and wired it into the NAT handler so users get a clean error message when their auth mode is insufficient instead of a deep transport failure.&lt;/p&gt;
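&lt;p&gt;The merge step is the crux of the pattern: fetch the existing rule, overlay only the fields the user provided, and PUT the result back. Sketched with invented field names -- the real policy has many more:&lt;/p&gt;

```rust
/// Existing rule as fetched from the controller. Field names are
/// invented for illustration.
#[derive(Clone, Debug, PartialEq)]
struct NatPolicy {
    name: String,
    enabled: bool,
    dest_port: u16,
}

/// Update request: None means "keep the existing value".
#[derive(Default)]
struct UpdateNatPolicyRequest {
    name: Option<String>,
    enabled: Option<bool>,
    dest_port: Option<u16>,
}

/// Overlay only the provided fields onto the fetched rule.
fn merge(existing: &NatPolicy, update: &UpdateNatPolicyRequest) -> NatPolicy {
    NatPolicy {
        name: update.name.clone().unwrap_or_else(|| existing.name.clone()),
        enabled: update.enabled.unwrap_or(existing.enabled),
        dest_port: update.dest_port.unwrap_or(existing.dest_port),
    }
}
```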

&lt;p&gt;The codebase was easy to navigate. The project's AGENTS.md (symlinked as CLAUDE.md) doubles as a comprehensive architecture guide. It documents every API quirk, every CLI pattern, every module's responsibility. When I needed to understand how the firewall &lt;code&gt;update&lt;/code&gt; command worked (to replicate the pattern for NAT), I read the architecture doc first, then confirmed in code. The existing firewall implementation was an almost perfect blueprint. The CONTRIBUTING.md lays out the full checklist: define args, implement handler, wire dispatch, update skill docs, add tests. I followed it step by step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hyperb1iss/unifly/pull/10" rel="noopener noreferrer"&gt;PR #10&lt;/a&gt; is open and waiting for review. The maintainer's track record with external PRs is good. Five previous PRs from two contributors all had their content land in main. Clean PRs against current main get proper GitHub merges. PRs that conflict with ongoing refactors get reworked by the maintainer with co-author credit. Either way, the work ships.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Unifly is for anyone who manages UniFi networks and wants to do it from the terminal. If you run a homelab with Ubiquiti gear, or you're an MSP managing multiple sites, or you just want to script your firewall rules instead of clicking through a web UI, this is the tool. The CLI covers enough surface area to be genuinely useful today. The TUI is a bonus for monitoring.&lt;/p&gt;

&lt;p&gt;The trajectory is steep. Eight weeks from first commit to 27 commands, a full TUI, dual-API support, and a published crate on crates.io. The codebase is clean enough that an external contributor (me) could implement a missing feature by following the existing patterns. The maintainer responds quickly and ships contributed work. Those are the signals that matter for whether a project has legs.&lt;/p&gt;

&lt;p&gt;What would push it further? Test coverage on the TUI screens. Fixing the reconnect lifecycle so long-running TUI sessions survive network blips. And eventually, Cloud/Site Manager API support so you can manage controllers through Ubiquiti's cloud without direct network access. The &lt;code&gt;AuthCredentials::Cloud&lt;/code&gt; variant already exists in the code. The transport just isn't wired yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you have UniFi gear, &lt;a href="https://github.com/hyperb1iss/unifly" rel="noopener noreferrer"&gt;try unifly&lt;/a&gt;. &lt;code&gt;cargo install unifly&lt;/code&gt; or grab a binary from the releases page. Run &lt;code&gt;unifly config init&lt;/code&gt; to connect to your controller, then &lt;code&gt;unifly devices list&lt;/code&gt; and see what happens.&lt;/p&gt;

&lt;p&gt;The ROADMAP has more gaps worth picking up. TUI test coverage is explicitly welcomed. The radio data parsing has untested code paths. The Cloud API transport is waiting for someone to implement it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #13, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>networking</category>
      <category>cli</category>
    </item>
    <item>
      <title>Anatomy of a GitHub Actions Supply Chain Attack Targeting MCP Repos</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Wed, 08 Apr 2026 11:41:55 +0000</pubDate>
      <link>https://dev.to/ticktockbent/anatomy-of-a-github-actions-supply-chain-attack-targeting-mcp-repos-59jb</link>
      <guid>https://dev.to/ticktockbent/anatomy-of-a-github-actions-supply-chain-attack-targeting-mcp-repos-59jb</guid>
      <description>&lt;p&gt;On April 7th, someone submitted a pull request to my project Charlotte. 28 lines. One new file. A GitHub Actions workflow that "validates skill metadata in CI." The PR body quoted my own README back to me and offered to adjust the filename if I preferred something different.&lt;/p&gt;

&lt;p&gt;I said I'd review it tomorrow. Then I actually looked at it, and spent the next day tracing an operation that spans 250+ repositories, at least 64 sockpuppet accounts, and five distinct phases of escalating access -- all controlled by a single organization.&lt;/p&gt;

&lt;p&gt;This is what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  The PR
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/TickTockBent/charlotte" rel="noopener noreferrer"&gt;Charlotte&lt;/a&gt; is a browser automation MCP server. The PR came from an account called &lt;code&gt;internet-dot&lt;/code&gt; and added &lt;code&gt;.github/workflows/hol-skill-validate.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HOL Skill Validate&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;master&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;master&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
  &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;validate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@34e114876b...&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashgraph-online/skill-publish@1c30734416d9b05948ccd7f4b3cf60baada87e9e&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;validate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to check whether this workflow exists in your own repo, that pinned &lt;code&gt;skill-publish&lt;/code&gt; commit hash is the string to search for.&lt;/p&gt;

&lt;p&gt;Two problems. First, &lt;code&gt;id-token: write&lt;/code&gt; grants the workflow permission to mint an OpenID Connect token from GitHub's OIDC provider. That's a signed JWT proving "this request comes from a workflow running in repo X, branch Y, triggered by event Z." It exists for deploying to cloud providers. A local metadata validation step has no reason to request one.&lt;/p&gt;
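&lt;p&gt;That permission is also the cheapest thing to audit for. Even a naive string scan over &lt;code&gt;.github/workflows/&lt;/code&gt; catches this pattern -- a rough sketch, not a substitute for actually reading the workflows:&lt;/p&gt;

```rust
use std::fs;
use std::path::Path;

/// Naive audit: list workflow files that request an OIDC token.
/// String matching is crude (it misses quoting variants), but it
/// catches the exact permission block this campaign uses.
fn workflows_requesting_oidc(dir: &Path) -> Vec<String> {
    let mut hits = Vec::new();
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            if let Ok(text) = fs::read_to_string(&path) {
                if text.contains("id-token: write") {
                    hits.push(path.display().to_string());
                }
            }
        }
    }
    hits
}
```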

&lt;p&gt;Second, the action phones home. The entrypoint calls &lt;code&gt;getGithubOidcToken()&lt;/code&gt;, mints a token scoped to your repository, and passes it to &lt;code&gt;uploadSkillPreviewFromGithubOidc()&lt;/code&gt;, which ships it to &lt;code&gt;hol.org/registry/api/v1&lt;/code&gt;. The &lt;code&gt;preview-upload&lt;/code&gt; parameter defaults to &lt;code&gt;true&lt;/code&gt;. If you merge this, every push to main and every pull request mints an OIDC token and sends it to a third-party server.&lt;/p&gt;

&lt;p&gt;Charlotte has no &lt;code&gt;SKILL.md&lt;/code&gt;, no &lt;code&gt;skill.yaml&lt;/code&gt;, no Hashgraph Online skill definitions. There is nothing for this workflow to validate.&lt;/p&gt;

&lt;p&gt;I closed the PR and started pulling threads.&lt;/p&gt;

&lt;h2&gt;
  
  
  The account
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;internet-dot&lt;/code&gt; was created on April 14, 2025. It submitted its first PRs to Hashgraph Online repos the same day, then went dormant for 11 months. All campaign activity falls within a two-week window starting late March 2026. It has 1,599 public repositories, 7 followers, and a bio that reads "i'm just a small dot on the internet. infra and identity. open source." Its blog field links to &lt;code&gt;hol.org&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hol.org&lt;/code&gt; is the website of Hashgraph Online (HOL), a blockchain organization in the Hedera ecosystem. Their GitHub org, &lt;code&gt;hashgraph-online&lt;/code&gt;, has one public member: Michael Kantor (&lt;code&gt;kantorcodes&lt;/code&gt;), who identifies himself as President.&lt;/p&gt;

&lt;p&gt;Those first-day PRs went to &lt;code&gt;hashgraph-online/standards-agent-kit&lt;/code&gt;: three pull requests adding a plugin system. All merged. From there, internet-dot contributed documentation to &lt;code&gt;hashgraph-online/hcs-improvement-proposals&lt;/code&gt;, fixed bugs in &lt;code&gt;hashgraph-online/skill-publish&lt;/code&gt; (the action being mass-deployed), and pushed 10 PRs to &lt;code&gt;hashgraph-online/ai-plugin-scanner&lt;/code&gt; in a single day. All merged rapidly.&lt;/p&gt;

&lt;p&gt;All nine of internet-dot's public gists are HOL documentation -- UAID specifications, MCP agent discovery guides, HCS standards overviews.&lt;/p&gt;

&lt;p&gt;This is not an independent contributor who happens to like HOL's tools. This is an operational account for the organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  The campaign
&lt;/h2&gt;

&lt;p&gt;The PR to Charlotte wasn't an isolated event. It was Phase 5 of a multi-stage operation that has been running since late March 2026.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Seed the ecosystem (March 25-31)
&lt;/h3&gt;

&lt;p&gt;The campaign opens with 20+ pull requests to curated awesome-lists: awesome-ai-agents, awesome-a2a, awesome-agents, awesome-web3, awesome-decentralized, awesome-security, awesome-software-supply-chain-security. Each PR adds a Hashgraph Online entry. Also targeted: the dify-plugins and fastgpt-plugin registries.&lt;/p&gt;

&lt;p&gt;No code execution. No OIDC tokens. Just getting the name "Hashgraph Online" into places developers browse when evaluating tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: Build a contribution history (March 26-27)
&lt;/h3&gt;

&lt;p&gt;A brief detour into what looks like blockchain bounty farming. PRs to SubTrackr, learnvault, EventHorizon, and Soroban-state-lens -- small fixes like "add keyboardShouldPersistTaps to ScrollView" and "add blur-based inline validation." All merged. The kind of activity that makes a GitHub profile look like an active contributor rather than a single-purpose bot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 3: Establish file presence (April 2-3)
&lt;/h3&gt;

&lt;p&gt;This is where the scale changes. In two days, internet-dot submits 40+ PRs adding a &lt;code&gt;codex-plugin.json&lt;/code&gt; manifest file to MCP repositories. No CI. No workflows. Just a static metadata file.&lt;/p&gt;

&lt;p&gt;The target list reads like a directory of MCP infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;github/github-mcp-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;microsoft/playwright-mcp&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cloudflare/mcp-server-cloudflare&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;elastic/mcp-server-elasticsearch&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;googleapis/genai-toolbox&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hashicorp/terraform-mcp-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;containers/kubernetes-mcp-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;getsentry/XcodeBuildMCP&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;firecrawl/firecrawl-mcp-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;tavily-ai/tavily-mcp&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;exa-labs/exa-mcp-server&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;makenotion/notion-mcp-server&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And about 30 more. This is the foot in the door. Get a file into the repo. It's just metadata, what's the harm?&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: Introduce CI execution (April 3)
&lt;/h3&gt;

&lt;p&gt;For repos that merged the Phase 3 manifest, follow-up PRs arrive adding CI workflows that "lint" and "validate" the manifest. This is where &lt;code&gt;id-token: write&lt;/code&gt; first appears in the permissions block. The repos that already accepted a harmless metadata file now get a workflow that executes on every push.&lt;/p&gt;

&lt;p&gt;Several merged this too.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 5: The payload (April 6-8, ongoing)
&lt;/h3&gt;

&lt;p&gt;The mass wave. 200+ PRs in three days, all adding the &lt;code&gt;hashgraph-online/skill-publish&lt;/code&gt; workflow with &lt;code&gt;id-token: write&lt;/code&gt;. The YAML is pinned to commit &lt;code&gt;1c30734416d9&lt;/code&gt;, which kantorcodes merged into skill-publish at 23:46 UTC on April 6th -- a "docs: use universal logo in README" commit. Hours later, the campaign PRs started arriving.&lt;/p&gt;

&lt;p&gt;The action has four modes: validate, monitor, quote, and publish. The campaign PRs use validate. The escalation path is built into the tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  The pattern
&lt;/h3&gt;

&lt;p&gt;Each phase escalates access and builds on the last:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Brand awareness (no code)&lt;/li&gt;
&lt;li&gt;Contribution credibility (unrelated fixes)&lt;/li&gt;
&lt;li&gt;File presence (static metadata)&lt;/li&gt;
&lt;li&gt;CI execution (workflow permissions)&lt;/li&gt;
&lt;li&gt;OIDC token access (identity exfiltration)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you only look at Phase 5 in isolation, it looks like someone spamming CI workflows. Zoom out and it's a funnel.&lt;/p&gt;

&lt;h2&gt;
  
  
  The sockpuppet network
&lt;/h2&gt;

&lt;p&gt;About a third of the Phase 5 targets aren't real repositories. They're controlled by the same operation.&lt;/p&gt;

&lt;p&gt;I pulled account creation timestamps for the obviously fake targets -- the ones with generated usernames like &lt;code&gt;Genuspogoniacubicmillimetre130&lt;/code&gt;, &lt;code&gt;Immunogenic-prismspectroscope589&lt;/code&gt;, and &lt;code&gt;Enfluranehighbloodpressure470&lt;/code&gt;. They were created in four distinct waves:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;February 20-21:&lt;/strong&gt; 3 accounts created within hours. &lt;code&gt;Crinolineflexion823&lt;/code&gt;, &lt;code&gt;Unfathomable-siren38&lt;/code&gt;, &lt;code&gt;Genuspogoniacubicmillimetre130&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 1:&lt;/strong&gt; 7 accounts in two hours. Starting at 07:32 UTC with &lt;code&gt;Shelflifegymnopilusvalidipes977&lt;/code&gt; and ending at 09:05 with &lt;code&gt;genetic-shopping832&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 7:&lt;/strong&gt; 6 accounts in 90 minutes. &lt;code&gt;dublen5269&lt;/code&gt; at 22:27, ending with &lt;code&gt;Gindhar2112&lt;/code&gt; at 23:48.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;March 17-18:&lt;/strong&gt; The big batch. 14 accounts in 12 hours. &lt;code&gt;foliagerosebush512&lt;/code&gt; and &lt;code&gt;unhealthy-outlander317&lt;/code&gt; were created one second apart at 21:50:33 and 21:50:34.&lt;/p&gt;

&lt;p&gt;Every one of these accounts has exactly one repository. No followers. No prior activity.&lt;/p&gt;

&lt;p&gt;There's a second pattern too: accounts with normal-looking usernames (&lt;code&gt;Rares683&lt;/code&gt;, &lt;code&gt;ANGEL37391&lt;/code&gt;, &lt;code&gt;sproutn-iq123&lt;/code&gt;) that each have exactly two repos -- one project and one github.io page, both created within minutes of each other. Same structure, less obvious names.&lt;/p&gt;

&lt;p&gt;In total, roughly 64 of the 200 target repos are sockpuppets. They make the campaign look more widespread than it is and pad the "number of repos using this workflow" metric. Counting only real projects, about 130 were targeted. Of those, 13 merged one or more campaign PRs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The trust chain
&lt;/h2&gt;

&lt;p&gt;The workflow runs &lt;code&gt;pnpm install --frozen-lockfile --prod&lt;/code&gt; at runtime with no &lt;code&gt;.npmrc&lt;/code&gt;, pulling dependencies from public npm. The &lt;code&gt;--frozen-lockfile&lt;/code&gt; flag makes installs deterministic for a given commit, and the command also passes &lt;code&gt;--ignore-scripts&lt;/code&gt; to block postinstall hooks. That sounds careful.&lt;/p&gt;

&lt;p&gt;But the dependencies are &lt;code&gt;import&lt;/code&gt;ed by &lt;code&gt;entrypoint.mjs&lt;/code&gt; at runtime, so module-level code in any package still executes. And the entire supply chain is one person:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kantorcodes writes the action code (sole committer to &lt;code&gt;hashgraph-online/skill-publish&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;kantorcodes publishes the primary dependency &lt;code&gt;@hol-org/rb-client&lt;/code&gt; to npm (150 versions, co-maintained with &lt;code&gt;tmcc_patches&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;kantorcodes commits the &lt;code&gt;pnpm-lock.yaml&lt;/code&gt; that determines which package versions get installed&lt;/li&gt;
&lt;li&gt;internet-dot submits the PRs choosing which commit hash to pin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;--frozen-lockfile&lt;/code&gt; guarantees you get what's in the lockfile at that commit. The person writing the lockfile is the person writing the code it installs. A new campaign PR with a new commit hash pulls whatever new lockfile and new package versions that person decided to ship.&lt;/p&gt;

&lt;p&gt;The scoped npm packages (&lt;code&gt;@hol-org/*&lt;/code&gt;, &lt;code&gt;@hashgraph/*&lt;/code&gt;) aren't vulnerable to dependency confusion. They don't need to be. One person already controls every link in the chain from the npm registry to the OIDC token upload.&lt;/p&gt;
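&lt;p&gt;The &lt;code&gt;--ignore-scripts&lt;/code&gt; point deserves to be concrete, because it's a common false comfort. Lifecycle hooks and module top-level code are different execution paths. A sketch with a hypothetical dependency inlined as a function so the effect is visible (none of this is the actual package code):&lt;/p&gt;

```javascript
// --ignore-scripts blocks npm lifecycle hooks (preinstall/postinstall).
// It does nothing about code at module top level.
let sideEffectRan = false;

// Pretend this function body is node_modules/some-dep/index.mjs:
function importSomeDep() {
  // ...top-level statements run the moment the module is first imported...
  sideEffectRan = true;                // phone-home code could live here
  return { validate: () => true };     // the legitimate-looking export
}

// entrypoint.mjs doing `import { validate } from 'some-dep'` is the
// moment that top level executes:
const dep = importSomeDep();
console.log(sideEffectRan); // true -- blocking install-time scripts didn't help
```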

&lt;h2&gt;
  
  
  What the OIDC token gets them
&lt;/h2&gt;

&lt;p&gt;A GitHub OIDC token isn't a credential that grants access to your repo. You can't use it to push code or read secrets. But it's a signed assertion of identity that any server can verify.&lt;/p&gt;

&lt;p&gt;If hol.org trusts GitHub's OIDC provider, that token tells them exactly which repository the request came from. The action's outputs include &lt;code&gt;directory-topic-id&lt;/code&gt;, &lt;code&gt;package-topic-id&lt;/code&gt;, &lt;code&gt;skill-page-url&lt;/code&gt;. In publish mode, it writes to the Hedera Consensus Service -- that's a blockchain ledger. The record is immutable. Once your repo is "published" as a skill, the entry is on-chain and can't be removed by you.&lt;/p&gt;

&lt;p&gt;Validate mode is the entry point. Get the workflow merged. Then a follow-up PR changes one line -- &lt;code&gt;mode: validate&lt;/code&gt; becomes &lt;code&gt;mode: publish&lt;/code&gt; -- and your repo is registered on a platform you never signed up for, with a permanent, verifiable record associating your project with their infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to do
&lt;/h2&gt;

&lt;p&gt;If you maintain an MCP-related project, search your &lt;code&gt;.github/workflows&lt;/code&gt; directory for &lt;code&gt;hashgraph-online&lt;/code&gt;, &lt;code&gt;skill-publish&lt;/code&gt;, &lt;code&gt;hol-skill&lt;/code&gt;, or &lt;code&gt;codex-plugin-scanner&lt;/code&gt;. If you find a workflow you didn't add intentionally, remove it.&lt;/p&gt;

&lt;p&gt;Check your merged PRs from &lt;code&gt;internet-dot&lt;/code&gt;. If you merged a &lt;code&gt;codex-plugin.json&lt;/code&gt; manifest file, consider whether you want that there.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;permissions&lt;/code&gt; block in any workflow YAML is the first thing to check on any CI contribution. A workflow that validates local files needs &lt;code&gt;contents: read&lt;/code&gt; at most. &lt;code&gt;id-token: write&lt;/code&gt; on anything that isn't deploying to a cloud provider is a red flag.&lt;/p&gt;
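&lt;p&gt;A rough triage script covers the search step. This is string matching, not a YAML parser, so treat it as a starting point (the function name and output shape are mine):&lt;/p&gt;

```javascript
// Rough triage, not a YAML parser: flag any workflow line that requests
// OIDC token minting. Feed it the text of each file under .github/workflows.
function flagOidcRequests(yamlText) {
  const findings = [];
  yamlText.split('\n').forEach((line, i) => {
    if (/id-token:\s*write/.test(line)) {
      findings.push({ line: i + 1, text: line.trim() });
    }
  });
  return findings;
}

const sample = [
  'permissions:',
  '  contents: read',
  '  id-token: write',
].join('\n');

console.log(flagOidcRequests(sample)); // [ { line: 3, text: 'id-token: write' } ]
```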

&lt;h2&gt;
  
  
  The uncomfortable part
&lt;/h2&gt;

&lt;p&gt;I run a series where I find small repos and submit PRs to strangers. I know what it looks like from the other side of the notification. The difference between what I do and what this campaign does is intent and effort -- I read the code, find real bugs, and write targeted fixes. This operation scrapes README descriptions, templates a YAML file, and submits it to 250 repositories backed by a network of fake accounts.&lt;/p&gt;

&lt;p&gt;But to a busy maintainer glancing at their notification feed, both look like a stranger showing up with a pull request. That's what makes it effective. It exploits the same trust that makes open source collaboration work at all.&lt;/p&gt;

&lt;p&gt;Thirteen maintainers merged this. They probably had busy days too.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>security</category>
      <category>github</category>
      <category>supplychain</category>
    </item>
    <item>
      <title>Your Artifact Registry Doesn't Need 2 GB of RAM</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Mon, 06 Apr 2026 11:12:12 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-artifact-registry-doesnt-need-2-gb-of-ram-3ckp</link>
      <guid>https://dev.to/ticktockbent/your-artifact-registry-doesnt-need-2-gb-of-ram-3ckp</guid>
      <description>&lt;p&gt;Every team eventually needs an artifact registry. You need somewhere to push Docker images, host internal npm packages, or cache Maven dependencies so your builds don't break when a mirror goes down. The standard answer is Nexus or Artifactory. Both work. Both are also Java applications that need a JVM, a database, careful heap tuning, and at least 2 GB of RAM before they'll serve a single artifact. On a CI server that's already running builds, that memory budget hurts.&lt;/p&gt;

&lt;p&gt;One developer decided the problem was simpler than the existing solutions make it look. The result is a 32 MB Rust binary that handles seven package protocols on less than 100 MB of RAM.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Nora?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/getnora-io/nora" rel="noopener noreferrer"&gt;Nora&lt;/a&gt; is a lightweight artifact registry built by &lt;a href="https://github.com/devitway" rel="noopener noreferrer"&gt;devitway&lt;/a&gt; (Pavel Volkov). It supports Docker/OCI, Maven, npm, PyPI, Cargo, Go modules, and raw file hosting in a single binary. It includes a web UI dashboard, Prometheus metrics, token auth with Argon2id password hashing, S3 or local storage backends, mirror/proxy mode for air-gapped environments, and garbage collection. You configure it with a TOML file and run it. That's it.&lt;/p&gt;

&lt;p&gt;36 stars. Three months of focused solo development. About 19,600 lines of Rust across 45 source files. This project is doing real work and almost nobody knows about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/getnora-io/nora" rel="noopener noreferrer"&gt;Nora&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;36 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (devitway)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;411 tests, proptest, fuzz targets, 61.5% coverage, CI that would make a team project jealous&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good README, CONTRIBUTING.md with build/test/PR instructions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same-day review on PRs, warm and specific feedback&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Getting there. Mirror mode and auth are solid. Watch this space.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The architecture is what you'd hope for: each registry protocol gets its own module under &lt;code&gt;src/registries/&lt;/code&gt;, storage is abstracted behind a trait (local filesystem or S3), and the HTTP layer is axum with tokio. Config is validated at startup. There's no clever metaprogramming, no macro soup. You can open the Docker registry module and understand what it does without reading three layers of indirection first.&lt;/p&gt;

&lt;p&gt;The dependency list is restrained for a project this ambitious. axum and tokio handle the HTTP server. reqwest handles upstream proxy requests. serde and toml for config. The entire Cargo.toml reads like someone who picks dependencies on purpose rather than pulling in whatever shows up first on crates.io. For context: Nexus Repository Manager pulls in over 400 Maven dependencies. Nora's &lt;code&gt;Cargo.lock&lt;/code&gt; has about 350 crate entries, but most of those are transitive deps from tokio and reqwest. The direct dependency count is small.&lt;/p&gt;

&lt;p&gt;The CI pipeline is where Nora stands out from other solo projects. Most one-person repos have a test workflow and maybe clippy. Nora runs: &lt;code&gt;cargo fmt&lt;/code&gt;, clippy with &lt;code&gt;-D warnings&lt;/code&gt;, the full test suite, &lt;code&gt;cargo-audit&lt;/code&gt; for vulnerability scanning, &lt;code&gt;cargo-deny&lt;/code&gt; for license and supply-chain policy, Trivy for container scanning, Gitleaks for secret detection, CodeQL for static analysis, and OpenSSF Scorecard for security posture. That is not a typical solo developer setup. It's more thorough than plenty of team projects I've contributed to.&lt;/p&gt;

&lt;p&gt;The test suite backs it up. 411 &lt;code&gt;#[test]&lt;/code&gt; functions across 29 files. Proptest for parser fuzzing. Fuzz targets via &lt;code&gt;cargo-fuzz&lt;/code&gt; with ClusterFuzzLite integration. Integration tests that spin up the actual binary and exercise all seven protocols. Playwright end-to-end tests for the web UI. Coverage measured at 61.5% via tarpaulin.&lt;/p&gt;

&lt;p&gt;Security is taken seriously too. Credentials use Argon2id with &lt;code&gt;zeroize&lt;/code&gt; to scrub secrets from memory after use. The token verification layer has a TTL cache so it's not re-hashing on every request. There are explicit TOCTOU race condition fixes in the storage layer and request deduplication for the proxy mode so concurrent pulls for the same image don't stampede the upstream registry.&lt;/p&gt;
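&lt;p&gt;That request-deduplication pattern is worth stealing for any proxy. Sketched here in JavaScript with a stand-in fetch (Nora's version is Rust): keep a map of in-flight promises keyed by URL, and concurrent callers share the first one:&lt;/p&gt;

```javascript
// Concurrent pulls for the same upstream URL share one in-flight promise
// instead of stampeding the upstream registry. fetchUpstream is a stand-in.
const inFlight = new Map();
let upstreamCalls = 0;

async function fetchUpstream(url) {
  upstreamCalls += 1;                  // counts real upstream hits
  return 'blob-for-' + url;
}

function dedupedFetch(url) {
  if (!inFlight.has(url)) {
    const p = fetchUpstream(url).finally(() => inFlight.delete(url));
    inFlight.set(url, p);
  }
  return inFlight.get(url);            // later callers get the same promise
}

// Three concurrent requests for the same image...
dedupedFetch('library/nginx');
dedupedFetch('library/nginx');
dedupedFetch('library/nginx');
console.log(upstreamCalls); // 1 -- only one hit reached the upstream
```

&lt;p&gt;The &lt;code&gt;finally&lt;/code&gt; cleanup matters: without it, a failed fetch would be cached forever.&lt;/p&gt;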

&lt;p&gt;What's rough? The web UI modules are untested. &lt;code&gt;ui/api.rs&lt;/code&gt; (1,010 lines), &lt;code&gt;ui/templates.rs&lt;/code&gt; (861 lines), and &lt;code&gt;ui/components.rs&lt;/code&gt; (783 lines) have zero test coverage between them. That's 2,654 lines of code running the dashboard with no safety net. There's also a dead code problem: &lt;code&gt;error.rs&lt;/code&gt; defines a full &lt;code&gt;AppError&lt;/code&gt; type with an &lt;code&gt;IntoResponse&lt;/code&gt; impl that's been sitting unused since v0.3. The comment says "wiring into handlers planned for v0.3" but they're on v0.4 now, and handlers still construct status code responses manually. The garbage collector has a subtler issue: &lt;code&gt;collect_all_blobs&lt;/code&gt; scans all seven registries, but &lt;code&gt;collect_referenced_digests&lt;/code&gt; only reads Docker manifests. Non-Docker blobs would look like orphans to the GC. None of these are deal-breakers, but they're the kind of gaps that matter as the project grows.&lt;/p&gt;
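&lt;p&gt;The GC gap is easier to see as code. A mark-and-sweep sketch in JavaScript -- illustrative only; the names echo the article rather than Nora's source:&lt;/p&gt;

```javascript
// Sweep sees blobs from every registry; mark only walks Docker manifests.
const blobs = new Set(['docker-layer-1', 'npm-tarball-1', 'maven-jar-1']);

function collectReferencedDigests() {
  const referenced = new Set();
  referenced.add('docker-layer-1');    // only Docker manifests are read
  return referenced;                   // npm and Maven references never land here
}

function findOrphans() {
  const referenced = collectReferencedDigests();
  return [...blobs].filter((b) => !referenced.has(b));
}

console.log(findOrphans()); // live npm and Maven blobs, flagged as dead
```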

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;I was reading through the metrics code when I noticed &lt;code&gt;detect_registry()&lt;/code&gt; had match arms for Docker, Maven, npm, PyPI, and Cargo, but Go and Raw requests fell through to an &lt;code&gt;"other"&lt;/code&gt; catch-all. Every request to those two registries was invisible in Prometheus. The &lt;code&gt;RegistriesHealth&lt;/code&gt; struct had the same gap: five fields for five registries, but Go and Raw weren't represented. The health endpoint would report them as down even when they were running fine.&lt;/p&gt;

&lt;p&gt;The kicker was the test suite. There was already a test for Go registry path detection. It asserted that a Go module request should be labeled &lt;code&gt;"other"&lt;/code&gt;. The test was passing because it was checking for the wrong thing.&lt;/p&gt;

&lt;p&gt;The fix was straightforward: add the missing match arms in &lt;code&gt;detect_registry()&lt;/code&gt;, add the &lt;code&gt;go&lt;/code&gt; and &lt;code&gt;raw&lt;/code&gt; fields to &lt;code&gt;RegistriesHealth&lt;/code&gt; and its construction in &lt;code&gt;check_registries_health()&lt;/code&gt;, and fix the test to assert the correct label. &lt;a href="https://github.com/getnora-io/nora/pull/97" rel="noopener noreferrer"&gt;PR #97&lt;/a&gt;.&lt;/p&gt;
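&lt;p&gt;For readers who want the shape of the bug without opening the Rust, here it is sketched in JavaScript. The route prefixes are my invention; only the fall-through logic mirrors the issue:&lt;/p&gt;

```javascript
// Path-based registry detection with a catch-all fallback.
function detectRegistry(path) {
  if (path.startsWith('/v2/')) return 'docker';
  if (path.startsWith('/maven/')) return 'maven';
  if (path.startsWith('/npm/')) return 'npm';
  if (path.startsWith('/pypi/')) return 'pypi';
  if (path.startsWith('/cargo/')) return 'cargo';
  // The two arms that were missing -- before the fix, these requests fell
  // through to 'other' and vanished from the Prometheus labels:
  if (path.startsWith('/go/')) return 'go';
  if (path.startsWith('/raw/')) return 'raw';
  return 'other';
}

console.log(detectRegistry('/go/example.com/mod/@v/list')); // 'go', no longer 'other'
```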

&lt;p&gt;Getting into the codebase was easy. The CONTRIBUTING.md lays out build and test commands clearly. The module structure maps directly to concepts: if you want to understand how Docker pushes work, you open &lt;code&gt;src/registries/docker/&lt;/code&gt;. If you want metrics, you open &lt;code&gt;src/metrics.rs&lt;/code&gt;. I found the bugs by reading, not by fighting the project layout. The whole thing compiled and tested cleanly on the first try.&lt;/p&gt;

&lt;p&gt;This was the first external PR the project had ever received. The maintainer reviewed it, approved it, and merged it the same day. His comment:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"This is the very first community PR that NORA has ever received, and it means a lot. The fact that you not only noticed the missing Go and Raw registries in metrics, but took the time to write a clean fix with proper tests... Welcome to the team."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That response tells you something about whether this project has a future.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Nora is for teams that need artifact hosting without the Java tax. If you're running a small engineering team, managing a homelab, or working in an air-gapped environment where you can't reach public registries, a 32 MB binary that handles seven protocols on 100 MB of RAM is a compelling alternative to configuring Nexus heap flags.&lt;/p&gt;

&lt;p&gt;The project's trajectory is strong. January was the initial scaffold with Docker support. February added six more protocols. March brought security hardening, proptest, integration tests, and a coverage push from 22% to 61.5%. April shipped v0.4 with mirror mode and air-gap support. That's a lot of ground to cover in three months, and the commit history tells a consistent story: conventional commits, one concern per commit, no monolithic dumps.&lt;/p&gt;

&lt;p&gt;What would push Nora to the next level? Wire in the &lt;code&gt;AppError&lt;/code&gt; type so error responses are consistent across registries. Fix the GC so it doesn't treat non-Docker blobs as orphans. Get some test coverage on the UI modules. And keep doing what's already working, because the foundation is solid.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you've ever winced at your artifact registry's memory usage, &lt;a href="https://github.com/getnora-io/nora" rel="noopener noreferrer"&gt;give Nora a look&lt;/a&gt;. The codebase is clean, the CI is thorough, and the maintainer merges good work the same day you submit it.&lt;/p&gt;

&lt;p&gt;Star the repo. Try running it against your Docker workflow. If you want to contribute, the &lt;code&gt;AppError&lt;/code&gt; wiring and the GC's Docker-only reference scanning are both waiting for someone to pick them up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #12, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>I Renamed All 43 Tools in My MCP Server. Here's Why I Did It Now.</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Fri, 03 Apr 2026 23:53:10 +0000</pubDate>
      <link>https://dev.to/ticktockbent/i-renamed-all-43-tools-in-my-mcp-server-heres-why-i-did-it-now-hic</link>
      <guid>https://dev.to/ticktockbent/i-renamed-all-43-tools-in-my-mcp-server-heres-why-i-did-it-now-hic</guid>
      <description>&lt;p&gt;Charlotte has 111 stars. That's not a lot. But it's enough that a breaking change will annoy real people.&lt;/p&gt;

&lt;p&gt;I shipped one anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  The naming problem
&lt;/h2&gt;

&lt;p&gt;When I started building &lt;a href="https://github.com/TickTockBent/charlotte" rel="noopener noreferrer"&gt;Charlotte&lt;/a&gt; in February, I named every tool with a colon separator: &lt;code&gt;charlotte:navigate&lt;/code&gt;, &lt;code&gt;charlotte:observe&lt;/code&gt;, &lt;code&gt;charlotte:click&lt;/code&gt;. It looked clean. It felt namespaced. Every tool call in every session used it.&lt;/p&gt;

&lt;p&gt;The problem: the MCP spec restricts tool names to &lt;code&gt;[A-Za-z0-9_.-]&lt;/code&gt;. The colon character isn't in that set. It never was. I either didn't check or didn't care at the time. The MCP SDK was lenient about it until v1.26.0, which started emitting validation warnings on every tool registration.&lt;/p&gt;
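&lt;p&gt;The constraint is easy to enforce once you know it exists. A check you could drop into any MCP server's test suite:&lt;/p&gt;

```javascript
// The MCP name constraint as a test-suite check. The character class is
// the one from the spec: letters, digits, underscore, dot, hyphen.
const MCP_TOOL_NAME = /^[A-Za-z0-9_.-]+$/;

console.log(MCP_TOOL_NAME.test('charlotte:navigate')); // false -- colon is not in the set
console.log(MCP_TOOL_NAME.test('charlotte_navigate')); // true
```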

&lt;p&gt;I had two options. Fix it now with 111 stars and a handful of active users. Or fix it later with more stars, more users, more documentation, more muscle memory, and more pain.&lt;/p&gt;

&lt;p&gt;I renamed all 43 tools from &lt;code&gt;charlotte:xxx&lt;/code&gt; to &lt;code&gt;charlotte_xxx&lt;/code&gt; in a single commit. Breaking change. Documented in the changelog. Migration note in the release.&lt;/p&gt;

&lt;p&gt;Here's the thing about MCP: clients discover tools dynamically at connection time. When an agent connects to Charlotte, it asks "what tools do you have?" and Charlotte sends the current list. The agent doesn't care what the tools were called yesterday. So for most users, the upgrade is invisible. The old names simply don't exist anymore and the new names appear automatically.&lt;/p&gt;

&lt;p&gt;The people who get hit are the ones with custom prompts or configurations that reference tool names as strings. That's a small group right now. It will be a much larger group in six months.&lt;/p&gt;

&lt;p&gt;Breaking changes are cheaper when you're small. That's the whole argument. Ship it early, pay the small cost, avoid the large one later.&lt;/p&gt;

&lt;h2&gt;
  
  
  What else is in 0.6.0
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Batch form filling.&lt;/strong&gt; This one matters for token economics.&lt;/p&gt;

&lt;p&gt;Before 0.6.0, filling a 10-field contact form meant 10 separate tool calls: &lt;code&gt;charlotte_type&lt;/code&gt; for each text field, &lt;code&gt;charlotte_select&lt;/code&gt; for dropdowns, &lt;code&gt;charlotte_toggle&lt;/code&gt; for checkboxes. Each call carries tool definition overhead (the client resends the server's tool schemas to the model on every API round trip). Ten calls at ~4,000 definition tokens each is 40,000 tokens just in overhead, before any actual content.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;charlotte_fill_form&lt;/code&gt; takes an array of &lt;code&gt;{element_id, value}&lt;/code&gt; pairs and fills everything in one call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"fields"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"inp-a3f1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Jane Smith"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"inp-b7c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jane@example.com"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sel-d4e8"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Enterprise"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"element_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"chk-f9a0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"true"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One call. One set of definition tokens. The form is filled. It handles text inputs, textareas, selects, checkboxes, radios, toggles, date pickers, and color inputs. Type detection is automatic based on the element's role.&lt;/p&gt;

&lt;p&gt;For a testing agent running form validation across 50 pages, this is the difference between 500 tool calls and 50. The token savings compound fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lazy Chromium initialization.&lt;/strong&gt; Charlotte used to launch a Chromium instance the moment the MCP server started. The problem: MCP clients like Claude Desktop and Cursor spawn all configured servers at startup, whether you're going to use them or not. If Charlotte is in your config but you're just writing code today, you had a headless browser burning RAM for nothing.&lt;/p&gt;

&lt;p&gt;Now the browser launches on the first actual tool call. If you never browse, Chromium never starts.&lt;/p&gt;
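
&lt;p&gt;The pattern is simple enough to sketch. This is illustrative code, not Charlotte's actual implementation; &lt;code&gt;launchChromium&lt;/code&gt; is a stand-in for whatever really starts the browser:&lt;/p&gt;

```javascript
// Lazy-launch sketch (illustrative, not Charlotte's real code).
// The first tool call triggers the launch; every later call reuses
// the same pending or resolved promise.
let browserPromise = null;

function getBrowser(launchChromium) {
  if (browserPromise === null) {
    browserPromise = launchChromium();
  }
  return browserPromise;
}

// Demo with a stub launcher that counts how often it runs.
let launches = 0;
function stubLaunch() {
  launches += 1;
  return Promise.resolve({ name: 'chromium-stub' });
}

getBrowser(stubLaunch);
getBrowser(stubLaunch);
console.log(launches); // 1: the launcher ran exactly once
```

&lt;p&gt;Caching the promise rather than the browser object also means two concurrent first calls share a single launch instead of racing to start two Chromium instances.&lt;/p&gt;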

&lt;p&gt;&lt;strong&gt;Slow typing.&lt;/strong&gt; &lt;code&gt;charlotte_type&lt;/code&gt; gains &lt;code&gt;slowly&lt;/code&gt; and &lt;code&gt;character_delay&lt;/code&gt; parameters for character-by-character input. This sounds trivial until your agent tries to test a search-as-you-type field and the site's event handler only fires on individual keystrokes, not pasted text. Autocomplete, live validation, search suggestions. They all need real keystroke events.&lt;/p&gt;
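
&lt;p&gt;The core of the behavior is just a delay between individual key events. A minimal sketch of what &lt;code&gt;slowly&lt;/code&gt; mode implies (illustrative; &lt;code&gt;pressKey&lt;/code&gt; is a hypothetical stand-in for whatever dispatches a real keystroke):&lt;/p&gt;

```javascript
// Character-by-character typing sketch (not Charlotte's actual source).
async function typeSlowly(text, delayMs, pressKey) {
  for (const ch of text) {
    await pressKey(ch); // one real key event per character
    await new Promise(function (resolve) {
      setTimeout(resolve, delayMs); // pause between keystrokes
    });
  }
}

// Demo: a stub key handler records each keystroke it receives.
const seen = [];
typeSlowly('abc', 1, async function (ch) { seen.push(ch); }).then(function () {
  console.log(seen.join('')); // "abc", delivered one event at a time
});
```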

&lt;p&gt;&lt;strong&gt;Node.js 20 support.&lt;/strong&gt; I was requiring Node 22 for no reason. No 22-only APIs were in use. Relaxing to &amp;gt;=20 opens Charlotte to the broader LTS user base.&lt;/p&gt;

&lt;h2&gt;
  
  
  The ASI bug, one last time
&lt;/h2&gt;

&lt;p&gt;In v0.4.1, I found a bug where &lt;code&gt;charlotte:evaluate&lt;/code&gt; silently returned null on multi-statement JavaScript. The cause was &lt;code&gt;new Function('return ' + expr)&lt;/code&gt; combined with Automatic Semicolon Insertion. I &lt;a href="https://dev.to/ticktockbent/i-let-an-ai-agent-use-my-browser-tool-unsupervised-it-found-3-bugs-in-20-minutes-2c70"&gt;wrote about it&lt;/a&gt; at the time.&lt;/p&gt;

&lt;p&gt;I fixed it in &lt;code&gt;evaluate.ts&lt;/code&gt;. Then I found the same pattern in &lt;code&gt;wait-for.ts&lt;/code&gt; and fixed that in 0.5.0.&lt;/p&gt;

&lt;p&gt;In 0.6.0 it turned up a third time, in &lt;code&gt;pollUntilCondition&lt;/code&gt;, a utility function used by the wait system. Same bug. Same &lt;code&gt;new Function('return ' + expr)&lt;/code&gt;. Same migration to CDP &lt;code&gt;Runtime.evaluate&lt;/code&gt;.&lt;/p&gt;
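
&lt;p&gt;The failure mode reproduces in a few lines. This is a minimal illustration of the ASI trap, not Charlotte's actual source:&lt;/p&gt;

```javascript
// If the caller's expression begins with a line break, Automatic
// Semicolon Insertion terminates the `return` statement early.
const expr = '\n1 + 1';
const fn = new Function('return ' + expr);
console.log(fn()); // undefined: the body parsed as `return; 1 + 1;`
```

&lt;p&gt;Evaluating through CDP's &lt;code&gt;Runtime.evaluate&lt;/code&gt; sidesteps this entirely, because the expression is parsed as a whole program instead of being spliced into a &lt;code&gt;return&lt;/code&gt; statement.&lt;/p&gt;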

&lt;p&gt;That's three separate files with the same broken pattern, discovered across three releases. Copy-paste bugs are persistent. If you find a pattern-level bug in your codebase, grep for every instance before you close the issue. I should have done that the first time.&lt;/p&gt;

&lt;h2&gt;
  
  
  7 strangers improved my code
&lt;/h2&gt;

&lt;p&gt;When I started Charlotte in February, it was a solo project. 100% of commits from one person. An external evaluation in early March rated sustainability 2 out of 5 and flagged "97% single-developer commits" as the primary risk.&lt;/p&gt;

&lt;p&gt;Six weeks later, seven people I've never met have merged code into Charlotte:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Teoman Yavuzkurt&lt;/strong&gt; contributed three PRs: fixing the default viewport (800x600 was unrealistically small), solving a stale compositor frame bug in screenshots on SPA transitions, and fixing macOS symlink resolution in tests. Three different areas of the codebase. That's someone who read the code deeply enough to find problems across modules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;clawtom&lt;/strong&gt; submitted two PRs: an O(1) lookup optimization for the snapshot store (replacing a linear scan with a Map index) and proper error logging for CDP failures in layout extraction. Both unsolicited. Both performance or reliability improvements that I hadn't prioritized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sandy McArthur, Jr.&lt;/strong&gt; joined as a new contributor this cycle. &lt;strong&gt;Nuno Curado&lt;/strong&gt; did the original security hardening back in February. &lt;strong&gt;kai-agent-free&lt;/strong&gt; picked up the "read version from package.json" issue I had tagged as "good first issue." &lt;strong&gt;Nestor Fernando De Leon Llanos&lt;/strong&gt; added the issue templates and community links.&lt;/p&gt;

&lt;p&gt;I didn't recruit any of them. They found the project, read the code, and decided it was worth contributing to. The issue templates, the "good first issue" labels, the CONTRIBUTING guide, the test suite that gives contributors confidence their changes don't break things. All of that infrastructure exists to make contributing feel safe and worthwhile. It seems to be working.&lt;/p&gt;

&lt;p&gt;The sustainability rating would look different today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;Charlotte is at 43 tools, 519 tests, and a 1.07:1 test-to-source line ratio. The structural tree view from 0.5.0 gives agents a full page map in under 2,000 characters. Iframe extraction handles embedded content. File output keeps large responses out of the context window. And now batch form fills collapse multi-step interactions into single calls.&lt;/p&gt;

&lt;p&gt;The focus for the next cycle is the connect-to-browser feature: attaching Charlotte to an already-running Chrome instance instead of launching its own. This unlocks screen recording of agent sessions, live debugging, and the kind of demo videos that are worth more than any blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx @ticktockbent/charlotte@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works with any MCP client: Claude Desktop, Claude Code, Cursor, Windsurf, Cline, VS Code, Amp.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/TickTockBent/charlotte" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://www.npmjs.com/package/@ticktockbent/charlotte" rel="noopener noreferrer"&gt;npm&lt;/a&gt; | &lt;a href="https://charlotte-rose.vercel.app/vs-playwright" rel="noopener noreferrer"&gt;Charlotte vs Playwright MCP&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open source, MIT licensed. If you're running browser-heavy agent workflows, I'd like to hear how it holds up.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Your System Is Not a State Machine</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Fri, 03 Apr 2026 12:10:55 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-system-is-not-a-state-machine-2jf1</link>
      <guid>https://dev.to/ticktockbent/your-system-is-not-a-state-machine-2jf1</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9hkrvkkd7qijfq8uqmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9hkrvkkd7qijfq8uqmc.png" alt=" " width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We were all taught the same thing. A system has states. It has transitions. Something happens, the system moves from State A to State B. You can draw it on a whiteboard. You can enumerate the possibilities. You can write tests for each branch.&lt;/p&gt;

&lt;p&gt;That was true for a long time. It's not true anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do the math
&lt;/h2&gt;

&lt;p&gt;Take a small transformer. 117 million parameters, each stored as a float32. The raw state space of just the weights is 2^(3.7 billion). The number of atoms in the observable universe is around 2^266.&lt;/p&gt;
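
&lt;p&gt;You can verify the arithmetic in a few lines:&lt;/p&gt;

```javascript
// Back-of-envelope check: raw bit count of the weights alone.
const params = 117000000;    // small transformer
const bitsPerWeight = 32;    // float32
const totalBits = params * bitsPerWeight;
console.log(totalBits);      // 3744000000, i.e. roughly 2^(3.7 billion) raw states
```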

&lt;p&gt;That's before you add activations, attention matrices, the KV cache growing with every token. And that's one model sitting idle. Not a system. Not an architecture. Just one small model.&lt;/p&gt;

&lt;p&gt;Now build something real. An orchestrator spawns four sub-agents. One is browsing a website. One is querying a database. One is calling an external API. One is doing a computation. Each has its own latency, its own failure modes, its own ability to return something you didn't expect.&lt;/p&gt;

&lt;p&gt;What state is that system in?&lt;/p&gt;

&lt;p&gt;You don't know. I don't know. Nobody knows, because the space of possible configurations is so absurdly vast that calling it "astronomical" is generous. You can't draw this on a whiteboard. You can't enumerate the branches. The flowchart is a lie.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's not random either
&lt;/h2&gt;

&lt;p&gt;Your first instinct might be to reach for probability. If we can't predict the exact state, maybe we can describe the distribution of likely states. Stochastic modeling. Markov chains. The math is right there.&lt;/p&gt;

&lt;p&gt;But that framing is wrong too, because these systems aren't rolling dice. An agent returning a useful summary of a web page isn't a random event. It's the result of a goal-directed process that evaluated and corrected itself on a token-by-token basis across thousands of sequential decisions. The output is useful precisely because it isn't random.&lt;/p&gt;

&lt;p&gt;So you're stuck in a third space. Not deterministic. Not stochastic. Something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Convergent but underdetermined
&lt;/h2&gt;

&lt;p&gt;Here's the framing I keep coming back to.&lt;/p&gt;

&lt;p&gt;An LLM doesn't select an output from a distribution and accept whatever comes up. Every token is an evaluation. The weights encode something like "given everything generated so far, what moves me closer to a coherent completion?" The model is steering. Continuously. At the lowest level of its operation.&lt;/p&gt;

&lt;p&gt;That's already not a state machine. But zoom out.&lt;/p&gt;

&lt;p&gt;Your orchestrator has four sub-agents running. Each one is internally converging toward its own useful output. The orchestrator is monitoring returns in real time, and each return reshapes how it evaluates the others. Agent 3's result might make agent 2's task irrelevant. Agent 1's failure might mean re-dispatching agent 4 with different parameters.&lt;/p&gt;

&lt;p&gt;You have nested convergence loops running at different scales, different speeds, none following a predetermined path, all goal-directed. The system isn't transitioning between states. It's navigating toward coherence through a space that only reveals itself as the system moves through it.&lt;/p&gt;

&lt;p&gt;The closest analogy isn't computer science. It's biology. A cell responding to chemical gradients isn't executing a flowchart or rolling dice. It's resolving toward a functional configuration through continuous interaction with an environment it can't fully predict.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters practically
&lt;/h2&gt;

&lt;p&gt;If this framing is right, then designing agentic systems with state-machine thinking isn't just imprecise. It's architecturally wrong. You're imposing discrete checkpoints on a system whose fundamental operation is continuous convergence. You're fighting the nature of the thing.&lt;/p&gt;

&lt;p&gt;The alternative might look something like designing around convergence envelopes. Not "what state should the system be in at step 3" but "what region of outcome space should this process be converging toward, and what boundaries should it not cross while getting there."&lt;/p&gt;

&lt;p&gt;Under that model, an orchestrator isn't a state manager. It's a convergence auditor. Its job isn't to track which step the system is on. Its job is to monitor whether the system is still heading toward a useful result, and intervene when it drifts outside acceptable bounds.&lt;/p&gt;
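
&lt;p&gt;To make the idea concrete, here's a deliberately speculative sketch. None of this is a real API; &lt;code&gt;readProgress&lt;/code&gt; and &lt;code&gt;intervene&lt;/code&gt; are hypothetical hooks an orchestrator might supply:&lt;/p&gt;

```javascript
// Convergence-auditor sketch (speculative, illustrative only).
// Instead of tracking "which step are we on", check whether the
// process is still inside its acceptable outcome envelope.
function audit(readProgress, intervene, bounds) {
  const p = readProgress(); // e.g. an estimate of drift from the goal
  if (p.drift > bounds.maxDrift) {
    intervene('drifting');  // steer, re-dispatch, or halt
    return 'intervened';
  }
  return 'converging';
}

// Demo: a process that has drifted past its envelope triggers intervention.
const verdict = audit(
  function () { return { drift: 0.9 }; },
  function (reason) { console.log('intervening:', reason); },
  { maxDrift: 0.5 }
);
console.log(verdict); // "intervened"
```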

&lt;h2&gt;
  
  
  I don't have the answers
&lt;/h2&gt;

&lt;p&gt;I want to be clear that this is half a thought. I don't have a formal model. I don't have a replacement for the state machine abstraction that you can hand to a junior engineer and say "use this." I'm not sure one exists yet.&lt;/p&gt;

&lt;p&gt;But I know the old model is broken. If you've tried to draw a flowchart for an agentic system and felt like you were lying, you were. The system you're building doesn't have states. It has trajectories. It doesn't transition. It converges.&lt;/p&gt;

&lt;p&gt;Somebody smarter than me will figure out the formalism. I just wanted to point at the gap.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>agents</category>
      <category>systems</category>
    </item>
    <item>
      <title>Anthropic Leaked Its Own Source Code and May Not Own It</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Thu, 02 Apr 2026 12:33:39 +0000</pubDate>
      <link>https://dev.to/ticktockbent/anthropic-leaked-its-own-source-code-and-may-not-own-it-3j3l</link>
      <guid>https://dev.to/ticktockbent/anthropic-leaked-its-own-source-code-and-may-not-own-it-3j3l</guid>
      <description>&lt;p&gt;On March 31st, Anthropic shipped version 2.1.88 of Claude Code to npm with a &lt;a href="https://www.bleepingcomputer.com/news/artificial-intelligence/claude-code-source-code-accidentally-leaked-in-npm-package/" rel="noopener noreferrer"&gt;60MB source map file&lt;/a&gt; that was supposed to stay internal. That file pointed to a zip archive on Anthropic's Cloudflare R2 bucket containing the entire TypeScript source. 1,900 files. 512,000 lines of code. The full architectural blueprint of one of the most commercially successful AI coding tools ever built.&lt;/p&gt;

&lt;p&gt;Security researcher &lt;a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/" rel="noopener noreferrer"&gt;Chaofan Shou spotted it within hours&lt;/a&gt;. By the time Anthropic pulled the package, the codebase had been mirrored, forked over 41,500 times on GitHub, and archived on decentralized platforms that don't respond to takedown notices.&lt;/p&gt;

&lt;p&gt;What followed was a 12-hour chain reaction that may have permanently changed the legal landscape for AI-generated code.&lt;/p&gt;

&lt;p&gt;Anthropic's response was a &lt;a href="https://github.com/github/dmca/blob/master/2026/03/2026-03-31-anthropic.md" rel="noopener noreferrer"&gt;DMCA blitz&lt;/a&gt;. GitHub disabled &lt;a href="https://piunikaweb.com/2026/04/01/anthropic-dmca-claude-code-leak-github/" rel="noopener noreferrer"&gt;over 8,100 repositories&lt;/a&gt;. The original mirror and its entire fork network went dark. Lawyers moved fast.&lt;/p&gt;

&lt;p&gt;But a Korean developer named Sigrid Jin moved faster. Jin, previously &lt;a href="https://tech.yahoo.com/ai/claude/articles/anthropic-accidentally-leaked-claude-codes-180256954.html" rel="noopener noreferrer"&gt;profiled by the Wall Street Journal&lt;/a&gt; as the heaviest Claude Code user in the world (25 billion tokens consumed), woke up at 4am and rewrote the entire core architecture in Python before sunrise. The repo hit 30,000 GitHub stars faster than any repository in GitHub history. He then rewrote it again in Rust.&lt;/p&gt;

&lt;p&gt;Anthropic can't touch those rewrites. They're clean-room reimplementations. New code, new language, new creative expression. Copyright protects specific expression, not ideas, not architectures, not design patterns. Anyone can study how a system works and build their own version from scratch. That's how the entire software industry has always operated.&lt;/p&gt;

&lt;p&gt;But the legal problems go deeper than clean-room rewrites. The DMCA takedowns themselves rest on a copyright claim that Anthropic's own public statements may have already destroyed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The authorship problem
&lt;/h2&gt;

&lt;p&gt;On March 18, 2025, the DC Circuit issued a unanimous opinion in &lt;a href="https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf" rel="noopener noreferrer"&gt;&lt;em&gt;Thaler v. Perlmutter&lt;/em&gt;&lt;/a&gt; holding that the Copyright Act requires human authorship. The case involved a computer scientist who tried to register copyright for artwork generated by his AI system. The court didn't just deny the registration on narrow grounds. It &lt;a href="https://www.skadden.com/insights/publications/2025/03/appellate-court-affirms-human-authorship" rel="noopener noreferrer"&gt;reasoned&lt;/a&gt; that the word "author" throughout the Copyright Act is structurally incoherent when applied to a non-human entity. Applying it to a machine would produce absurdities, like referring to a "machine's children" or a "machine's nationality." The Supreme Court declined to hear the appeal.&lt;/p&gt;

&lt;p&gt;Some important caveats. That case was about visual art, not code. It dealt with a specific scenario where someone listed an AI as the sole author. It's one circuit, not a Supreme Court ruling. The court &lt;a href="https://foleyhoag.com/news-and-insights/publications/alerts-and-updates/2025/march/dc-circuit-holds-that-ai-generated-artwork-is-ineligible-for-copyright-protection/" rel="noopener noreferrer"&gt;deliberately left the door open&lt;/a&gt; for "AI-assisted" works where a human contributes meaningful creative input.&lt;/p&gt;

&lt;p&gt;The reasoning matters more than the narrow holding. The court established that "author" throughout the Copyright Act is structurally tied to human beings. That's not a ruling about art. That's a philosophical foundation that any future court addressing AI-generated code will find persuasive, even if it's not technically binding.&lt;/p&gt;

&lt;p&gt;And then Anthropic's leadership walked up to that open door and welded it shut.&lt;/p&gt;

&lt;p&gt;In January 2026, Boris Cherny, the head of Claude Code, &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;posted on X&lt;/a&gt; that 100% of his code is written by Claude. No manual edits. Not even small ones. He shipped 22 pull requests in one day and 27 the next, each one "100% written by Claude." Across Anthropic, he said the figure is "pretty much 100%."&lt;/p&gt;

&lt;p&gt;An &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;Anthropic spokesperson&lt;/a&gt; softened that to 70-90% company-wide. For Claude Code specifically, about 90% of its code is written by Claude Code itself.&lt;/p&gt;

&lt;p&gt;These aren't offhand comments. They're timestamped, attributed, &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;reported by Fortune&lt;/a&gt;, &lt;a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/" rel="noopener noreferrer"&gt;The Register&lt;/a&gt;, &lt;a href="https://www.cnbc.com/2026/03/31/anthropic-leak-claude-code-internal-source.html" rel="noopener noreferrer"&gt;CNBC&lt;/a&gt;, and others. They're discoverable evidence. And they directly undermine any claim of human authorship over the leaked codebase.&lt;/p&gt;

&lt;p&gt;The court in &lt;em&gt;Thaler&lt;/em&gt; left Anthropic a door. Their marketing team closed it.&lt;/p&gt;

&lt;h2&gt;
  
  
  You can't copyright an idea
&lt;/h2&gt;

&lt;p&gt;There's a common response to this argument: "But the humans designed the system. They architected it. The AI just wrote the implementation."&lt;/p&gt;

&lt;p&gt;This gets the law exactly backwards. Copyright protects specific expression, not ideas, not designs, not architectures. You can design a system all day long. That design isn't copyrightable. Patents can protect novel functional inventions, but that's a completely different legal regime with a completely different process and standard.&lt;/p&gt;

&lt;p&gt;The part that copyright actually covers (the specific code) is the part Anthropic says the AI wrote. And the part the humans contributed (the design and architecture) is the part copyright doesn't protect.&lt;/p&gt;

&lt;p&gt;So when someone rewrites the architecture in a different language, there's nothing to claim. The ideas are free. The original expression is AI-generated. And the new expression belongs to whoever wrote it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The transitive authorship problem
&lt;/h2&gt;

&lt;p&gt;Here's where it gets speculative, and where the implications extend far beyond one leaked npm package.&lt;/p&gt;

&lt;p&gt;If 90% of Claude Code's source was written by Claude, then the training pipeline code that produces the next generation of Claude models was also substantially written by Claude. The model weights that come out of that pipeline are the output of an AI-authored system.&lt;/p&gt;

&lt;p&gt;Can you copyright the product of a system that was built by AI?&lt;/p&gt;

&lt;p&gt;The existing precedent only addresses direct AI outputs. Nobody has litigated whether a work produced by an AI-coded system inherits the copyright problem of the system that created it. But the logic is hard to avoid. If human authorship is the prerequisite for copyright, and the authorship chain passes through a substantially non-human link, the claim gets weaker at every generation.&lt;/p&gt;

&lt;p&gt;Nobody knows where the line is. No court has addressed it. But every frontier AI company should be thinking about it, because the answer affects whether their core asset is protectable at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  The moat that isn't
&lt;/h2&gt;

&lt;p&gt;Trade secret is the last legal defense in this analysis. Trade secret protection doesn't require human authorship. It doesn't care who or what created the information. It only requires that the holder took "reasonable measures" to keep it secret.&lt;/p&gt;

&lt;p&gt;Anthropic is not having a great month on that front either.&lt;/p&gt;

&lt;p&gt;Days before the Claude Code leak, &lt;a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/" rel="noopener noreferrer"&gt;Fortune reported&lt;/a&gt; that descriptions of Anthropic's upcoming model (internally called "Mythos" or "Capybara") were sitting in a publicly accessible data cache along with close to 3,000 other files. Then the Claude Code source went out on npm. Two major exposures in one week.&lt;/p&gt;

&lt;p&gt;If this ever went to court, opposing counsel would argue that Anthropic's operational security doesn't meet the "reasonable measures" threshold for trade secret protection. A single incident might be forgivable. A pattern is harder to defend.&lt;/p&gt;

&lt;p&gt;The protections collapse one by one. Copyright requires human authorship, and Anthropic publicly says AI writes the code. Trade secret requires maintained confidentiality, and Anthropic keeps accidentally publishing things. Patent requires specific novel invention claims and a formal process, not something you can retroactively blanket over a leaked codebase. DMCA takedowns require a valid underlying copyright, and they only work on centralized platforms anyway.&lt;/p&gt;

&lt;p&gt;What's left is practical barriers: the cost of compute, the difficulty of assembling training data, the head start of an established product, brand trust, enterprise relationships. Those are real advantages. But they're business moats, not legal ones. They can't be enforced in court. They erode as compute gets cheaper, as open-source models close the gap, and as competitors absorb architectural insights from leaks exactly like this one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The product demo
&lt;/h2&gt;

&lt;p&gt;There's an irony at the center of this whole story that's hard to overstate.&lt;/p&gt;

&lt;p&gt;Anthropic built Claude Code. They told the world it was so good that &lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;their own engineers stopped writing code entirely&lt;/a&gt;. Then a packaging error exposed the source. The world's heaviest Claude Code user used what was almost certainly Claude to rewrite Claude Code in Python overnight. The result is legally untouchable, it's the fastest-starred repo in GitHub history, and it demonstrates exactly the capability Anthropic has been selling.&lt;/p&gt;

&lt;p&gt;Anthropic's own product, used by Anthropic's own power user, to neutralize Anthropic's own IP. Made possible because the product is exactly as good as they said it was.&lt;/p&gt;

&lt;p&gt;That's not a leak. That's a product demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means
&lt;/h2&gt;

&lt;p&gt;The Claude Code leak is entertaining. The legal questions it surfaces are not. Every frontier AI company that uses its own models to write production code is building on the same unstable ground. The more they market AI autonomy to sell products, the more they undermine the legal frameworks that protect those products. Every press quote about AI writing 100% of the code is a future exhibit in a case they hope never gets filed.&lt;/p&gt;

&lt;p&gt;The law hasn't caught up. Congress hasn't acted. The courts have addressed &lt;a href="https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf" rel="noopener noreferrer"&gt;one narrow question&lt;/a&gt; in one circuit. But the trajectory is clear, and every company in this space is exposed to it.&lt;/p&gt;

&lt;p&gt;The question isn't whether AI-generated code is copyrightable. The court already answered that. The question is whether anyone in the industry is willing to admit what that answer means for them.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm not a lawyer. This article is speculative analysis based on public reporting, public court opinions, and public statements by Anthropic leadership. If you're making business decisions about AI-generated IP, talk to an actual attorney.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf" rel="noopener noreferrer"&gt;Thaler v. Perlmutter, DC Circuit opinion (March 18, 2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.axios.com/2026/03/31/anthropic-leaked-source-code-ai" rel="noopener noreferrer"&gt;Axios: Anthropic leaked its own Claude source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak-second-security-lapse-days-after-accidentally-revealing-mythos/" rel="noopener noreferrer"&gt;Fortune: Anthropic leaks source code for Claude Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;Fortune: Top engineers at Anthropic say AI writes 100% of their code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/" rel="noopener noreferrer"&gt;The Register: Anthropic accidentally exposes Claude Code source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cnbc.com/2026/03/31/anthropic-leak-claude-code-internal-source.html" rel="noopener noreferrer"&gt;CNBC: Anthropic leaks part of Claude Code's internal source code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked-heres-what-we-know" rel="noopener noreferrer"&gt;VentureBeat: Claude Code's source code appears to have leaked&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://decrypt.co/362917/anthropic-accidentally-leaked-claude-code-source-internet-keeping-forever" rel="noopener noreferrer"&gt;Decrypt: Anthropic Accidentally Leaked Claude Code's Source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.bleepingcomputer.com/news/artificial-intelligence/claude-code-source-code-accidentally-leaked-in-npm-package/" rel="noopener noreferrer"&gt;BleepingComputer: Claude Code source code accidentally leaked&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/github/dmca/blob/master/2026/03/2026-03-31-anthropic.md" rel="noopener noreferrer"&gt;GitHub DMCA notice: Anthropic takedown (March 31, 2026)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.skadden.com/insights/publications/2025/03/appellate-court-affirms-human-authorship" rel="noopener noreferrer"&gt;Skadden: Appellate Court Affirms Human Authorship Requirement&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://foleyhoag.com/news-and-insights/publications/alerts-and-updates/2025/march/dc-circuit-holds-that-ai-generated-artwork-is-ineligible-for-copyright-protection/" rel="noopener noreferrer"&gt;Foley Hoag: DC Circuit Holds AI-Generated Artwork Ineligible for Copyright&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>security</category>
      <category>legal</category>
    </item>
    <item>
      <title>Your Encrypted Backups Are Slow Because Encryption Isn't the Bottleneck</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Thu, 02 Apr 2026 11:46:35 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-encrypted-backups-are-slow-because-encryption-isnt-the-bottleneck-62k</link>
      <guid>https://dev.to/ticktockbent/your-encrypted-backups-are-slow-because-encryption-isnt-the-bottleneck-62k</guid>
      <description>&lt;p&gt;If you encrypt files before pushing them to backup storage, you've probably assumed the encryption step is what makes it slow. That's what I assumed too. Then I looked at the numbers. On any modern x86 chip with AES-NI, AES-256-GCM runs at 4-8 GB/s on a single core. ChaCha20-Poly1305 isn't far behind. The CPU is not the problem. The problem is that your encryption tool reads a chunk of data, encrypts it, writes it out, then reads the next chunk. It's serial. The disk sits idle while the CPU works, and the CPU sits idle while the disk works.&lt;/p&gt;

&lt;p&gt;One person decided to fix that by applying the same async I/O technique that powers modern databases to file encryption. The result hits GB/s throughput on commodity NVMe hardware, and the whole thing is about 900 lines of Rust.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Concryptor?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/FrogSnot/Concryptor" rel="noopener noreferrer"&gt;Concryptor&lt;/a&gt; is a multi-threaded AEAD file encryption CLI built by &lt;a href="https://github.com/FrogSnot" rel="noopener noreferrer"&gt;FrogSnot&lt;/a&gt;. It encrypts and decrypts files using AES-256-GCM or ChaCha20-Poly1305 with Argon2id key derivation, and it does it fast by overlapping disk I/O with CPU crypto using Linux's io_uring interface. It handles single files and directories (packed via tar), runs entirely in the terminal, and installs with &lt;code&gt;cargo install concryptor&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;73 stars. One month of focused development. A six-file core with 67 tests. It deserves more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/FrogSnot/Concryptor" rel="noopener noreferrer"&gt;Concryptor&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;73&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (FrogSnot)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean architecture, 67 tests, clippy and fmt now enforced in CI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Excellent README with honest perf analysis and full format spec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fresh templates and CI, small codebase, easy to navigate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Not yet for production (author's own disclaimer), but the architecture is real&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The centerpiece is a triple-buffered io_uring pipeline in &lt;code&gt;engine.rs&lt;/code&gt;. The idea is simple: keep three sets of buffers rotating through three stages. While buffer A's encrypted contents are being written to disk by the kernel, buffer B is being encrypted in parallel by Rayon worker threads, and buffer C's plaintext is being read from disk. Every component stays busy. Nothing waits.&lt;/p&gt;

&lt;p&gt;The implementation is tighter than you'd expect from a month-old project. Each io_uring submission queue entry carries bit-packed metadata in its &lt;code&gt;user_data&lt;/code&gt; field: the low two bits identify which buffer slot, bit 2 flags read vs. write, and the upper bits store the expected byte count for short-I/O detection. When completion queue entries come back, the pipeline routes them to per-slot counters without any hash lookups or allocations. The whole loop runs &lt;code&gt;num_batches + 2&lt;/code&gt; iterations to let the pipeline drain cleanly at the end.&lt;/p&gt;
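
&lt;p&gt;The packing arithmetic is easy to sketch. The field layout below follows the description above (low two bits slot, bit 2 write flag, upper bits byte count), but the code itself is my illustration, not Concryptor's source:&lt;/p&gt;

```rust
// Sketch of the user_data bit-packing described above. The field
// layout matches the description; the spelling is illustrative.
fn pack(slot: u64, is_write: bool, expected: u64) -> u64 {
    // Multiplying by 8 moves the byte count past the three flag bits
    // (equivalent to a left shift by 3).
    let write_bit = if is_write { 4 } else { 0 };
    expected * 8 + write_bit + slot
}

fn unpack(user_data: u64) -> (u64, bool, u64) {
    (user_data % 4, user_data % 8 / 4 == 1, user_data / 8)
}

fn main() {
    let ud = pack(2, true, 65536);
    assert_eq!(unpack(ud), (2, true, 65536));
    println!("user_data = {:#x}", ud);
}
```

&lt;p&gt;A completion queue entry hands back the same &lt;code&gt;user_data&lt;/code&gt; value, so routing a completion to the right slot is a couple of integer ops instead of a map lookup.&lt;/p&gt;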

&lt;p&gt;The file format is designed around O_DIRECT. Every encrypted chunk is padded to a 4 KiB boundary. The header is exactly 4096 bytes (52 bytes of data plus KDF parameters plus zero padding). Buffers are allocated with explicit 4096-byte alignment via &lt;code&gt;std::alloc&lt;/code&gt;. This lets Concryptor bypass the kernel's page cache entirely, talking directly to NVMe storage via DMA. It's the same technique databases use to avoid double-buffering, and it's a big part of why the throughput numbers are real.&lt;/p&gt;
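
&lt;p&gt;The alignment requirement can be satisfied with the standard allocator alone. A minimal sketch of the technique (not Concryptor's actual allocation code):&lt;/p&gt;

```rust
use std::alloc::{alloc, dealloc, Layout};

// Allocate a buffer aligned to a 4096-byte boundary, as O_DIRECT
// requires, and return its numeric address. Sketch of the technique
// only; a real pipeline keeps the buffer alive across the I/O.
fn page_aligned_addr(size: usize) -> usize {
    let layout = Layout::from_size_align(size, 4096).unwrap();
    let ptr = unsafe { alloc(layout) };
    assert!(!ptr.is_null());
    let addr = ptr as usize;
    unsafe { dealloc(ptr, layout) };
    addr
}

fn main() {
    // 256 KiB buffer: the address comes back a multiple of 4096, so
    // the kernel can DMA into it directly, bypassing the page cache.
    let addr = page_aligned_addr(64 * 4096);
    assert_eq!(addr % 4096, 0);
    println!("4096-aligned allocation at {:#x}", addr);
}
```

&lt;p&gt;The point is that &lt;code&gt;Layout::from_size_align&lt;/code&gt; is all it takes to satisfy O_DIRECT's alignment rule; no libc or custom allocator required.&lt;/p&gt;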

&lt;p&gt;The security model is more careful than I expected from a solo hobby project. The full 4 KiB header is included as associated data in every chunk's AEAD tag, so modifying any header byte invalidates all chunks. There's a TLS 1.3-style nonce derivation scheme where each chunk's nonce is the base nonce XOR'd with the chunk index, preventing nonce reuse without coordination. A final-chunk flag in the AAD prevents truncation and append attacks. The 4032 reserved bytes in the header are authenticated too, so you can't smuggle data into them. The test suite covers chunk swapping, truncation (two variants), header field manipulation, reserved byte tampering, KDF parameter tampering, and cipher mismatch. These aren't afterthought tests. Someone thought about the threat model.&lt;/p&gt;
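
&lt;p&gt;The nonce derivation is worth seeing concretely. A sketch of the scheme, assuming the chunk index lands in the low eight bytes of the 12-byte nonce (the exact byte layout is my guess, the XOR scheme is not):&lt;/p&gt;

```rust
// TLS 1.3-style nonce derivation: XOR the base nonce with the chunk
// index so each chunk gets a unique nonce with no coordination.
// Folding the index into the low eight bytes is an assumed layout.
fn chunk_nonce(base: [u8; 12], index: u64) -> [u8; 12] {
    let mut nonce = base;
    let idx = index.to_be_bytes(); // 8 bytes, big-endian
    for i in 0..8 {
        nonce[4 + i] ^= idx[i];
    }
    nonce
}

fn main() {
    let base = [7u8; 12];
    // Distinct chunk indices give distinct nonces...
    assert_ne!(chunk_nonce(base, 0), chunk_nonce(base, 1));
    // ...and XOR is self-inverse, so applying the same index twice
    // restores the base nonce.
    assert_eq!(chunk_nonce(chunk_nonce(base, 42), 42), base);
    println!("{:?}", chunk_nonce(base, 1));
}
```

&lt;p&gt;Nonce reuse under the same key is catastrophic for both GCM and ChaCha20-Poly1305, which is why deriving the nonce deterministically from the chunk index beats storing one per chunk.&lt;/p&gt;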

&lt;p&gt;What's rough? The project is Linux-only. io_uring doesn't exist on macOS or Windows, and there's no fallback backend. If you try to build it on a Mac you'll get errors that don't explain why. The README is upfront about the experimental status, which is honest and appreciated, but it does mean you shouldn't point this at anything you can't afford to lose yet. The &lt;code&gt;rand&lt;/code&gt; dependency is still on 0.8 (0.10 is current), and until recently clippy warnings and formatting drift had been accumulating unchecked. None of these are architectural problems. They're the kind of rough edges you get when one person is focused on making the core work first.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;CONTRIBUTING.md asks you to run clippy and cargo fmt before submitting, but CI only ran &lt;code&gt;cargo test&lt;/code&gt;. No enforcement. The result was predictable: 7 clippy warnings had accumulated across &lt;code&gt;engine.rs&lt;/code&gt; and &lt;code&gt;header.rs&lt;/code&gt;, and formatting had drifted in almost every source file.&lt;/p&gt;

&lt;p&gt;I addressed all seven lints. Three were manual &lt;code&gt;div_ceil&lt;/code&gt; reimplementations (the &lt;code&gt;(a + b - 1) / b&lt;/code&gt; pattern that Rust now has a method for), one was a min/max chain that should have been &lt;code&gt;.clamp()&lt;/code&gt;, one was a manual range check, and two were &lt;code&gt;too_many_arguments&lt;/code&gt; warnings on internal pipeline functions where every parameter is essential and restructuring would just have added noise. I also wired up &lt;code&gt;KdfParams::DEFAULT&lt;/code&gt; via struct update syntax to eliminate a dead-code warning, ran &lt;code&gt;cargo fmt --all&lt;/code&gt;, and added clippy and fmt checks to the CI workflow so they stay clean going forward.&lt;/p&gt;
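
&lt;p&gt;The &lt;code&gt;div_ceil&lt;/code&gt; lint in miniature: both forms below compute a 4 KiB chunk count, but the method (stable since Rust 1.73) also sidesteps the overflow lurking in the manual pattern when the sum exceeds the integer's range:&lt;/p&gt;

```rust
fn main() {
    let file_len: u64 = 10_000_000;
    let chunk = 4096u64;

    // The manual pattern clippy flags (can overflow when
    // file_len + chunk - 1 exceeds u64::MAX):
    let manual = (file_len + chunk - 1) / chunk;

    // The idiomatic replacement, stabilized in Rust 1.73:
    let idiomatic = file_len.div_ceil(chunk);

    assert_eq!(manual, idiomatic);
    assert_eq!(idiomatic, 2442);
    println!("{} bytes -> {} chunks", file_len, idiomatic);
}
```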

&lt;p&gt;Getting into the codebase was straightforward. Six files, clear responsibilities: &lt;code&gt;engine.rs&lt;/code&gt; handles the pipeline, &lt;code&gt;crypto.rs&lt;/code&gt; handles primitives, &lt;code&gt;header.rs&lt;/code&gt; handles the format, &lt;code&gt;archive.rs&lt;/code&gt; handles tar packing. The code is dense but not clever. You can follow the pipeline loop without needing to hold too much in your head at once. I had the PR ready in under an hour.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/FrogSnot/Concryptor/pull/10" rel="noopener noreferrer"&gt;PR #10&lt;/a&gt; is open as of this writing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;Concryptor is for people who encrypt files regularly and want it to be fast. If you're backing up to cloud storage, encrypting disk images, or just moving sensitive data between machines, the throughput difference between a serial encryption tool and a pipelined one is real. On NVMe, it's the difference between saturating your drive and leaving most of its bandwidth on the table.&lt;/p&gt;

&lt;p&gt;The project is early. One maintainer, one month old, Linux-only, self-labeled experimental. It could stall. But the commit history tells a story of deliberate progression: the initial mmap approach was replaced with io_uring the same day, security hardening followed within a week, the format was upgraded to v4 with full header authentication, and directory support landed before the first month was out. That's not hobby-project pacing. That's someone building something they intend to use.&lt;/p&gt;

&lt;p&gt;What would push Concryptor to the next level? A fallback I/O backend for macOS and Windows would be the single biggest improvement. Even a plain pread/pwrite loop, slower than io_uring but functional, would open the project to most Rust developers who want to try it. Stdin/stdout streaming for pipe composability would help too. And the rand 0.8 to 0.10 migration is a real breaking change that Dependabot can't auto-fix. That's a contribution waiting to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you care about I/O performance, encryption, or io_uring, &lt;a href="https://github.com/FrogSnot/Concryptor" rel="noopener noreferrer"&gt;Concryptor&lt;/a&gt; is worth reading. The codebase is small enough to understand in an afternoon, and the pipeline implementation is one of the cleaner io_uring examples I've seen in the wild.&lt;/p&gt;

&lt;p&gt;Star the repo. Try encrypting a large file and watch the throughput. If you want to contribute, the &lt;a href="https://github.com/FrogSnot/Concryptor/pull/6" rel="noopener noreferrer"&gt;rand 0.8 to 0.10 migration&lt;/a&gt; is sitting there waiting for someone to pick it up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #11, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>security</category>
      <category>cli</category>
    </item>
    <item>
      <title>Your Package Manager's Installer Doesn't Know Fish Exists</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Tue, 31 Mar 2026 14:58:19 +0000</pubDate>
      <link>https://dev.to/ticktockbent/your-package-managers-installer-doesnt-know-fish-exists-19bh</link>
      <guid>https://dev.to/ticktockbent/your-package-managers-installer-doesnt-know-fish-exists-19bh</guid>
      <description>&lt;p&gt;You find a new CLI tool on GitHub. The README looks good. You scroll to "Installation" and see the magic one-liner: &lt;code&gt;curl -sSL https://... | sh&lt;/code&gt;. You run it. The script downloads a binary, drops it somewhere sensible, and adds it to your PATH by appending a line to your &lt;code&gt;.bashrc&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Except you use fish. And fish doesn't understand &lt;code&gt;export PATH=&lt;/code&gt;. So the binary is on your disk, but your shell can't find it. You open the install script, figure out where it put things, and manually write &lt;code&gt;set -gx PATH ~/.local/bin $PATH&lt;/code&gt; into your &lt;code&gt;config.fish&lt;/code&gt;. You've done this before. You'll do it again.&lt;/p&gt;

&lt;p&gt;This is a small problem. But it's a revealing one. The kind of developer who installs CLI tools from GitHub release pages, who tries new package managers, who runs fish instead of bash because they actually thought about their shell choice, that's your target user. And your installer just told them you didn't think about them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is parm?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/alxrw/parm" rel="noopener noreferrer"&gt;parm&lt;/a&gt; is a binary package manager for GitHub Releases, written in Go by &lt;a href="https://github.com/alxrw" rel="noopener noreferrer"&gt;alxrw&lt;/a&gt;. You give it a repo (&lt;code&gt;parm install owner/repo&lt;/code&gt;) and it finds the latest release, picks the right binary for your platform, downloads it, and symlinks it onto your PATH. No root access, no system package manager, no registry to maintain. It queries GitHub directly.&lt;/p&gt;

&lt;p&gt;It handles updates (&lt;code&gt;parm update&lt;/code&gt;), version pinning (&lt;code&gt;parm pin&lt;/code&gt;), removal (&lt;code&gt;parm remove&lt;/code&gt;), and has a search command that queries GitHub's API. It's pre-release (v0.1.6) but functional, with a clear roadmap toward v0.2.0. About 138 stars and one very active maintainer.&lt;/p&gt;

&lt;p&gt;The interesting design choice: there is no curated package registry. Homebrew has formulae. asdf has plugins. parm has GitHub's API and your judgment. The README is upfront about this: "Users are responsible for vetting packages." That's a tradeoff, and it's a deliberate one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/alxrw/parm" rel="noopener noreferrer"&gt;parm&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~138 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, actively releasing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean Go with standard stack (Cobra, Viper, go-github), 32% test file ratio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good README with usage table, disclaimers, and package compatibility guide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Merged my PR next-day with "lgtm"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, for grabbing CLI tools from GitHub without the Homebrew ceremony&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;parm follows the standard Go CLI layout. Commands live in &lt;code&gt;cmd/&lt;/code&gt; (one file per subcommand, Cobra-based). Core business logic lives in &lt;code&gt;internal/core/&lt;/code&gt; with separate &lt;code&gt;installer&lt;/code&gt; and &lt;code&gt;updater&lt;/code&gt; packages. The GitHub client lives in &lt;code&gt;internal/gh/&lt;/code&gt; using &lt;code&gt;go-github&lt;/code&gt; v74. Configuration is TOML-based via Viper.&lt;/p&gt;

&lt;p&gt;The dependency list is heavier than you'd expect for a tool this focused. Eight direct dependencies: &lt;code&gt;go-github&lt;/code&gt; for the API, &lt;code&gt;cobra&lt;/code&gt; and &lt;code&gt;viper&lt;/code&gt; for CLI and config, &lt;code&gt;semver&lt;/code&gt; for version comparison, &lt;code&gt;mpb&lt;/code&gt; for progress bars, &lt;code&gt;gopsutil&lt;/code&gt; for platform detection, &lt;code&gt;filetype&lt;/code&gt; for binary type detection, &lt;code&gt;oauth2&lt;/code&gt; for GitHub authentication. None of these are unreasonable individually, but it's a lot of moving parts for "download a binary and symlink it." Compare to tools like &lt;code&gt;ubi&lt;/code&gt; or &lt;code&gt;eget&lt;/code&gt; that do the same thing with fewer dependencies. That said, the extra weight buys real features: progress bars, proper semver handling, and platform detection that works on all three major OSes.&lt;/p&gt;

&lt;p&gt;The architecture within &lt;code&gt;internal/&lt;/code&gt; is well-separated. The installer handles asset selection (matching your OS and architecture against release asset names), archive extraction (tar, zip, and raw binaries), and symlink management. The manifest tracks what's installed, where, and at what version. The verification package handles binary validation. Each concern has its own package and its own tests. 19 test files out of 59 Go files is a decent ratio for a project this age.&lt;/p&gt;

&lt;p&gt;What the Go code gets right: cross-platform support. The build targets include linux/darwin/windows on both amd64 and arm64. Platform detection via &lt;code&gt;gopsutil&lt;/code&gt; picks the correct release asset. The asset name matching is smart enough to handle the inconsistent naming conventions across GitHub repos (&lt;code&gt;linux-amd64&lt;/code&gt;, &lt;code&gt;Linux_x86_64&lt;/code&gt;, &lt;code&gt;linux-x64&lt;/code&gt;, etc.).&lt;/p&gt;
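
&lt;p&gt;A toy version of that normalization idea, to make the problem concrete (this is not parm's actual matcher):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// matchAsset reports whether a release asset name looks like a build
// for the given OS/arch, by checking it against common alias
// spellings. Toy illustration of the approach, not parm's real code.
func matchAsset(name, goos, goarch string) bool {
	aliases := map[string][]string{
		"linux":  {"linux"},
		"darwin": {"darwin", "macos", "osx"},
		"amd64":  {"amd64", "x86_64", "x64"},
		"arm64":  {"arm64", "aarch64"},
	}
	n := strings.ToLower(name)
	osOK := false
	for _, a := range aliases[goos] {
		if strings.Contains(n, a) {
			osOK = true
		}
	}
	if !osOK {
		return false
	}
	for _, a := range aliases[goarch] {
		if strings.Contains(n, a) {
			return true
		}
	}
	return false
}

func main() {
	if !matchAsset("tool_1.2.3_Linux_x86_64.tar.gz", "linux", "amd64") {
		panic("expected a match")
	}
	if matchAsset("tool-darwin-arm64.zip", "linux", "amd64") {
		panic("expected no match")
	}
	fmt.Println("asset matching ok")
}
```

&lt;p&gt;The real matcher also has to break ties when several assets match (musl vs. glibc builds, checksum files, signatures), which is where most of the complexity lives.&lt;/p&gt;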

&lt;p&gt;What the Go code doesn't cover: the shell. The install script (&lt;code&gt;scripts/install.sh&lt;/code&gt;) that handles the &lt;code&gt;curl | sh&lt;/code&gt; onboarding path was bash/zsh-only. It wrote &lt;code&gt;export PATH=...&lt;/code&gt; into &lt;code&gt;.bashrc&lt;/code&gt;, &lt;code&gt;.zshrc&lt;/code&gt;, or &lt;code&gt;.profile&lt;/code&gt;. Fish, arguably the third most popular interactive shell, was completely unsupported. The binary would install, but the user's shell couldn't find it. For a tool whose entire value proposition is "install any program from your terminal," having the install script fail on a common terminal is a gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;Issue #49 reported the fish problem in February 2026, and it was on the v0.2.0 roadmap. I picked it up.&lt;/p&gt;

&lt;p&gt;The fix was about 70 lines added to &lt;code&gt;scripts/install.sh&lt;/code&gt;. Fish uses &lt;code&gt;set -gx PATH&lt;/code&gt; instead of &lt;code&gt;export PATH=&lt;/code&gt;, and its config file lives at &lt;code&gt;~/.config/fish/config.fish&lt;/code&gt; (or wherever &lt;code&gt;$XDG_CONFIG_HOME&lt;/code&gt; points). The implementation detects fish by checking whether the config file exists or whether &lt;code&gt;fish&lt;/code&gt; is available on PATH, resolves the config path respecting XDG conventions, creates the directory if needed, and writes the PATH entry in fish syntax. It also handles &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; persistence (parm uses this for GitHub API rate limits) with the fish equivalent. If a user has both bash and fish installed, both configs get updated. The deduplication logic (grep for the bin directory before appending) follows the same pattern the script already used for bash/zsh.&lt;/p&gt;
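
&lt;p&gt;The fish branch boils down to a few lines of POSIX shell. A self-contained sketch of the pattern, pointed at a temp directory so it's safe to run (paths follow the description above; the real script does considerably more):&lt;/p&gt;

```shell
#!/bin/sh
# Illustrative sketch of the fish branch of an install script:
# resolve the config path per XDG conventions, create it if missing,
# and append the PATH line in fish syntax only if the bin dir isn't
# already mentioned. Uses a temp dir, never a real config.
XDG_CONFIG_HOME="$(mktemp -d)"
fish_config="$XDG_CONFIG_HOME/fish/config.fish"
bin_dir="$HOME/.local/bin"

mkdir -p "$(dirname "$fish_config")"
touch "$fish_config"

append_path() {
    if ! grep -q "$bin_dir" "$fish_config"; then
        printf 'set -gx PATH %s $PATH\n' "$bin_dir" >> "$fish_config"
    fi
}

append_path
append_path   # second run is a no-op thanks to the grep guard

test "$(grep -c 'set -gx PATH' "$fish_config")" = "1" || exit 1
echo "fish config updated exactly once"
```

&lt;p&gt;Note the single quotes around the printf format: fish wants a literal &lt;code&gt;$PATH&lt;/code&gt; in the config line, not the installing shell's expansion of it.&lt;/p&gt;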

&lt;p&gt;Getting into the codebase took about fifteen minutes. The install script was self-contained, and the existing bash/zsh code was a clear template for the fish additions. &lt;a href="https://github.com/alxrw/parm/pull/51" rel="noopener noreferrer"&gt;PR #51&lt;/a&gt; was approved with "lgtm" and merged the next day with "Merged, thank you for your contribution!" No review rounds, no changes requested. Clean in, clean out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;parm is for developers who install CLI tools from GitHub and are tired of the manual download-extract-chmod-symlink dance. If you've ever navigated to a GitHub releases page, scrolled through thirty assets to find the right one for your platform, downloaded it, extracted it, figured out which binary inside the tarball is the one you actually want, chmod'd it, and moved it to somewhere on your PATH, parm automates all of that.&lt;/p&gt;

&lt;p&gt;The no-registry approach is either a feature or a concern depending on your threat model. There's no vetting, no review process, no curated list. You point parm at a repo and trust the maintainer's releases. The README is honest about this. For tools you already trust (ripgrep, fd, bat, delta), it's faster than Homebrew. For tools you've never heard of, you're on your own.&lt;/p&gt;

&lt;p&gt;The project has momentum. Version pinning just shipped. Fish shell support (that's us) just landed. Windows shim support is on the roadmap. The maintainer is responsive and the codebase is clean enough that contributions land quickly. What would push parm further: a &lt;code&gt;parm doctor&lt;/code&gt; command that validates your setup, shell completions for the major shells, and better error messages when a release doesn't have a compatible asset. But the core works today, and it's already replaced a chunk of my manual workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you install CLI tools from GitHub, &lt;a href="https://github.com/alxrw/parm" rel="noopener noreferrer"&gt;try parm&lt;/a&gt;. &lt;code&gt;parm install junegunn/fzf&lt;/code&gt; and see how it feels. If you use fish, the installer now works thanks to &lt;a href="https://github.com/alxrw/parm/pull/51" rel="noopener noreferrer"&gt;PR #51&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Star the repo. Check the &lt;a href="https://github.com/alxrw/parm/issues" rel="noopener noreferrer"&gt;open issues&lt;/a&gt;. The v0.2.0 milestone has clear feature requests if you want to contribute.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #10, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>go</category>
      <category>cli</category>
      <category>tooling</category>
    </item>
    <item>
      <title>The Blackwall Between Your AI Agent and Your Filesystem</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Mon, 30 Mar 2026 12:38:57 +0000</pubDate>
      <link>https://dev.to/ticktockbent/the-blackwall-between-your-ai-agent-and-your-filesystem-3m05</link>
      <guid>https://dev.to/ticktockbent/the-blackwall-between-your-ai-agent-and-your-filesystem-3m05</guid>
      <description>&lt;p&gt;Every AI coding agent you run has the same permissions you do. Claude Code, Cursor, Codex, Aider. They can read your SSH keys, write to your shell config, and run any command your user account can. We accept this because the alternative is setting up Docker containers and dealing with volume mounts and broken toolchains every time we want an agent to help with a project.&lt;/p&gt;

&lt;p&gt;That trade-off has always felt wrong to me. Not because I think my AI agent is malicious, but because I know it executes code from dependencies I haven't read, runs shell commands it hallucinated, and sometimes &lt;code&gt;rm&lt;/code&gt;s things it shouldn't. The blast radius of a mistake is my entire home directory.&lt;/p&gt;

&lt;p&gt;I went looking for something between "full trust" and "Docker wrapper," and I found a project named after the barrier between humanity and rogue AIs in Cyberpunk 2077.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is greywall?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/GreyhavenHQ/greywall" rel="noopener noreferrer"&gt;greywall&lt;/a&gt; is a container-free sandbox for AI coding agents. It uses kernel-level enforcement on Linux (bubblewrap, seccomp, Landlock, eBPF) and Seatbelt profiles on macOS to isolate your agent's filesystem access, network traffic, and syscalls. Deny by default. No Docker, no VMs. One binary, four direct dependencies.&lt;/p&gt;

&lt;p&gt;It ships with built-in profiles for 13 agents (Claude Code, Cursor, Codex, Aider, and more), and it has a learning mode that traces what your agent actually touches and generates a least-privilege profile from the results. The project is three weeks old, has about 110 stars, and the sole maintainer merges external PRs within hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/GreyhavenHQ/greywall" rel="noopener noreferrer"&gt;greywall&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~109 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo (tito), mass-committing daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;17,400 lines of Go, 151 tests, clean layered architecture&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ARCHITECTURE.md, CONTRIBUTING.md, 18 doc files, a full docs site&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Merged my PR same-day, CI catches lint, good first issues labeled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes if you run AI agents on Linux or macOS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The codebase is ~17,400 lines of Go with only four direct dependencies: cobra for CLI, doublestar for glob matching, jsonc for config with comments, and x/sys for kernel syscalls. Everything else is hand-rolled against the kernel API.&lt;/p&gt;

&lt;p&gt;On Linux, greywall stacks five security layers, each covering what the others can't:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bubblewrap namespaces&lt;/strong&gt; (&lt;code&gt;linux.go&lt;/code&gt;, 1,642 lines) handle the heavy lifting. In DefaultDenyRead mode, the sandbox starts from an empty root filesystem (&lt;code&gt;--tmpfs /&lt;/code&gt;) and selectively mounts system paths read-only and your project directory read-write. Network isolation drops all connectivity, then three bridge types restore controlled access: a ProxyBridge for SOCKS5 traffic, a DnsBridge for DNS resolution, and a ReverseBridge for inbound port forwarding. All of them relay over Unix sockets via socat.&lt;/p&gt;
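
&lt;p&gt;The deny-by-default construction maps onto bubblewrap flags fairly directly. A stripped-down sketch of the shape (greywall's real argument list is much longer and assembled in Go, not shell):&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of the deny-by-default construction described above: start
# from an empty tmpfs root, remount system paths read-only, bind only
# the project directory read-write, and drop the network. Simplified;
# the real invocation also handles proc, dev, uid mapping, and more.
project="$PWD"
set -- bwrap \
    --tmpfs / \
    --ro-bind /usr /usr \
    --ro-bind /etc /etc \
    --bind "$project" "$project" \
    --unshare-net \
    --die-with-parent \
    -- /bin/sh -c 'echo sandboxed'

# Print the argv instead of executing, since bwrap may not be
# installed here; replace the printf lines with "$@" to run it.
printf '%s ' "$@"
printf '\n'
```

&lt;p&gt;Everything not explicitly bound simply doesn't exist inside the sandbox, which is what makes the empty-root approach stronger than a denylist.&lt;/p&gt;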

&lt;p&gt;&lt;strong&gt;Seccomp BPF&lt;/strong&gt; (&lt;code&gt;linux_seccomp.go&lt;/code&gt;) blocks 30+ dangerous syscalls: ptrace, mount, reboot, bpf, perf_event_open. If your kernel doesn't support seccomp, greywall skips it and continues. This graceful fallback pattern repeats at every layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Landlock&lt;/strong&gt; (&lt;code&gt;linux_landlock.go&lt;/code&gt;) adds kernel-level filesystem access control. It opens paths with &lt;code&gt;O_PATH&lt;/code&gt; and uses &lt;code&gt;fstat&lt;/code&gt; to avoid TOCTOU races between checking a path and applying a rule to it. It handles ABI versions 1 through 5, stripping directory-only rights from non-directory paths to avoid &lt;code&gt;EINVAL&lt;/code&gt; from the kernel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;eBPF monitoring&lt;/strong&gt; traces violations in real time via bpftrace. &lt;strong&gt;Learning mode&lt;/strong&gt; runs strace under the hood, captures every file your agent touches, and collapses the results into a reusable profile.&lt;/p&gt;

&lt;p&gt;On macOS, greywall generates Seatbelt profiles for &lt;code&gt;sandbox-exec&lt;/code&gt; with deny-by-default network rules and selective file access via regex patterns. macOS actually has a cleaner security model here. Seatbelt supports both allow and deny rules with regex, so you can write "allow &lt;code&gt;~/.claude.json*&lt;/code&gt;, deny everything else in home." Linux's Landlock is additive-only. Once you grant write access to a directory, you can't deny individual files inside it. This is the project's most interesting architectural tension, and it surfaces as a real bug: issue #62, where programs that do atomic file writes (create a temp file, rename over the target) break because the temp file and the target live on different filesystems inside the sandbox.&lt;/p&gt;

&lt;p&gt;Command blocking (&lt;code&gt;command.go&lt;/code&gt;, 524 lines) doesn't just match command names. It parses shell syntax: pipes, &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt;, &lt;code&gt;||&lt;/code&gt;, semicolons, subshells, and quoted strings. &lt;code&gt;echo foo | shutdown&lt;/code&gt; gets caught. &lt;code&gt;bash -c "rm -rf /"&lt;/code&gt; gets caught. It's more parser than filter.&lt;/p&gt;
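
&lt;p&gt;A toy splitter shows why name matching alone fails: the denied program hides behind an operator. This sketch handles only pipes and separators; greywall's real parser also covers quoting, subshells, and the and-operator:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// blocked reports whether any segment of a shell command line starts
// with a denied program name, after splitting on pipe and separator
// operators. Toy illustration only.
func blocked(cmdline string, denied []string) bool {
	// Replace the operators with newlines, longest first so "||"
	// isn't mangled by the single-pipe replacement.
	for _, op := range []string{"||", ";", "|"} {
		cmdline = strings.ReplaceAll(cmdline, op, "\n")
	}
	for _, segment := range strings.Split(cmdline, "\n") {
		fields := strings.Fields(segment)
		if len(fields) == 0 {
			continue
		}
		for _, d := range denied {
			if fields[0] == d {
				return true
			}
		}
	}
	return false
}

func main() {
	denied := []string{"shutdown", "rm"}
	fmt.Println(blocked("echo foo | shutdown", denied)) // true
	fmt.Println(blocked("echo shutdown", denied))       // false: argument, not command
	fmt.Println(blocked("ls; rm -rf /tmp/x", denied))   // true
}
```

&lt;p&gt;The second case is the important one: only the first field of each segment is the command, so mentioning a blocked name as an argument doesn't trip the filter.&lt;/p&gt;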

&lt;p&gt;The architecture makes sense for what it's doing. Each layer has a clear file, clear responsibility, and a fallback path. The build tags (&lt;code&gt;//go:build linux&lt;/code&gt;, &lt;code&gt;//go:build darwin&lt;/code&gt;) keep platform code separated without runtime conditionals. The test suite has 151 tests across 13 files covering command blocking, Landlock rules, Seatbelt profile generation, learning mode, and config validation. For a three-week-old project, that's unusually disciplined.&lt;/p&gt;

&lt;p&gt;What's rough: the project is pre-1.0 and moving fast. Eight releases in 23 days. The DefaultDenyRead mode is ambitious and still has edge cases (the atomic writes bug, WSL DNS issues, AppArmor conflicts with TUN devices). The documentation is comprehensive but assumes you already know what bubblewrap and Landlock are. If you're new to Linux security primitives, the onboarding curve is steep.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;Issue #5 asked for a &lt;code&gt;greywall profiles edit&lt;/code&gt; command. The learning mode generates JSON profiles and saves them to &lt;code&gt;~/.config/greywall/learned/&lt;/code&gt;, but there was no way to edit them without hunting for the file path and hand-validating the JSON. The maintainer wanted an editor command that validates on close.&lt;/p&gt;

&lt;p&gt;Getting into the codebase was straightforward. The existing &lt;code&gt;profiles list&lt;/code&gt; and &lt;code&gt;profiles show&lt;/code&gt; commands were right there in &lt;code&gt;main.go&lt;/code&gt;, following the standard cobra subcommand pattern. The config validation was already built: &lt;code&gt;config.Load()&lt;/code&gt; parses JSON (with comments via jsonc) and runs &lt;code&gt;Validate()&lt;/code&gt;. I just needed to wire up an editor loop.&lt;/p&gt;

&lt;p&gt;The implementation opens the profile in &lt;code&gt;$EDITOR&lt;/code&gt; (splitting on whitespace to support &lt;code&gt;code --wait&lt;/code&gt; and &lt;code&gt;emacs -nw&lt;/code&gt;), saves the original content for rollback, and after the editor closes: detects no-change exits, validates the JSON, and on failure prompts to re-edit or discard. Discard restores the original file. About 95 lines total.&lt;/p&gt;
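
&lt;p&gt;The whitespace-splitting detail is worth copying: tools that treat &lt;code&gt;$EDITOR&lt;/code&gt; as a bare binary name break on values like &lt;code&gt;code --wait&lt;/code&gt;. A sketch of just that piece (hypothetical helper, not the PR's code verbatim):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// editorCommand splits an EDITOR value on whitespace so settings like
// "code --wait" or "emacs -nw" work, then appends the file to edit.
// Sketch of the technique described above; the fallback editor name
// is an assumption for illustration.
func editorCommand(editorEnv, path string) []string {
	fields := strings.Fields(editorEnv)
	if len(fields) == 0 {
		fields = []string{"vi"} // assumed fallback
	}
	return append(fields, path)
}

func main() {
	fmt.Println(editorCommand("code --wait", "profile.json")) // [code --wait profile.json]
	fmt.Println(editorCommand("", "profile.json"))            // [vi profile.json]
}
```

&lt;p&gt;The first element becomes the program to exec and the rest its arguments; once the editor exits, the edited file goes back through the existing &lt;code&gt;config.Load()&lt;/code&gt; validation path.&lt;/p&gt;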

&lt;p&gt;CI caught two lint issues I couldn't test locally (the project requires Go 1.25; I had 1.22): gocritic flagged an &lt;code&gt;append&lt;/code&gt; to a different variable, and gofumpt wanted explicit octal syntax (&lt;code&gt;0o600&lt;/code&gt; instead of &lt;code&gt;0600&lt;/code&gt;). I pushed the fix, and the maintainer merged the whole thing within hours of submission: the code was approved immediately, with just the lint fix requested. That's a three-week-old project with a same-day merge for a first-time contributor. &lt;a href="https://github.com/GreyhavenHQ/greywall/pull/64" rel="noopener noreferrer"&gt;PR #64&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;greywall is for anyone running AI coding agents who wants more than trust and less than Docker. If you use Claude Code or Cursor on a machine with real credentials, SSH keys, or cloud configs, this fills a gap that nothing else does at this weight class.&lt;/p&gt;

&lt;p&gt;The project is young and moving fast. Three weeks old, 109 stars, eight releases. The maintainer is clearly using it daily and fixing bugs as they surface. The contributor experience is excellent: labeled issues, fast merges, CI that catches real problems. The Landlock limitation (no per-file deny inside a writable directory) is a genuine technical constraint that will shape the project's future, and the maintainer's detailed write-up on issue #62 shows someone who understands the problem deeply and isn't reaching for shortcuts.&lt;/p&gt;

&lt;p&gt;What would push greywall to the next level? Solving the atomic writes problem would unblock a lot of real-world usage. A guided setup wizard (instead of requiring users to understand profiles and config files) would lower the barrier for non-security-minded developers. And more built-in profiles for common development workflows beyond AI agents could widen the audience. But the foundation is solid, the security model is sound, and the code is cleaner than most projects ten times its age.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you run AI agents on your dev machine, &lt;a href="https://github.com/GreyhavenHQ/greywall" rel="noopener noreferrer"&gt;go install greywall&lt;/a&gt; and try &lt;code&gt;greywall -- claude&lt;/code&gt; or &lt;code&gt;greywall -- cursor&lt;/code&gt;. The built-in profiles work out of the box. If you want tighter control, run &lt;code&gt;greywall --learning -- &amp;lt;your-agent&amp;gt;&lt;/code&gt; to generate a profile from actual usage, then &lt;code&gt;greywall profiles edit&lt;/code&gt; to fine-tune it.&lt;/p&gt;

&lt;p&gt;Star the repo. Try the learning mode. If something breaks in your setup, open an issue. The maintainer responds fast and the codebase is navigable enough that you might end up &lt;a href="https://github.com/GreyhavenHQ/greywall/pull/64" rel="noopener noreferrer"&gt;fixing it yourself&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #9, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>security</category>
      <category>go</category>
      <category>ai</category>
    </item>
    <item>
      <title>Why Your SFTP Transfer Is Stuck at 2 MB/s (and the Fix Is a Protocol from 1983)</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:14:56 +0000</pubDate>
      <link>https://dev.to/ticktockbent/why-your-sftp-transfer-is-stuck-at-2-mbs-and-the-fix-is-a-protocol-from-1983-5c3c</link>
      <guid>https://dev.to/ticktockbent/why-your-sftp-transfer-is-stuck-at-2-mbs-and-the-fix-is-a-protocol-from-1983-5c3c</guid>
      <description>&lt;p&gt;Two minutes to copy a 274 MB file to a VM running on localhost. Not over the internet. Not to a cloud instance across the country. Localhost. The same machine, loopback, zero network latency.&lt;/p&gt;

&lt;p&gt;That was the experience a user reported in issue #290 on cubic, a lightweight CLI for managing QEMU/KVM virtual machines. The maintainer reproduced it, traced the problem to the upstream &lt;code&gt;russh-sftp&lt;/code&gt; crate, and posted a comment asking if anyone had ideas about where the bottleneck was. I did. The answer turned out to be a protocol design decision that limits every Rust project using this crate to about 2 MB/s on file transfers, regardless of how fast the link is.&lt;/p&gt;

&lt;p&gt;The fix was to stop using SFTP entirely and fall back to a simpler, older protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is cubic?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/cubic-vm/cubic" rel="noopener noreferrer"&gt;cubic&lt;/a&gt; is a CLI tool for creating and managing lightweight virtual machines on Linux and macOS. Think of it as the middle ground between running Docker containers and spinning up full VMs in libvirt. You run &lt;code&gt;cubic create myvm --image debian&lt;/code&gt; and get a cloud-init provisioned VM with SSH access, a dedicated disk, and port forwarding. &lt;code&gt;cubic ssh myvm&lt;/code&gt; drops you into a shell. &lt;code&gt;cubic scp file.tar.gz myvm:~/&lt;/code&gt; copies files in. It's about 7,000 lines of Rust, built on QEMU/KVM with cloud-init for provisioning.&lt;/p&gt;

&lt;p&gt;Under 40 stars. The maintainer (rogkne) commits daily and reviews external PRs within hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/cubic-vm/cubic" rel="noopener noreferrer"&gt;cubic&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~37 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, committing daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~7,000 lines of clean Rust, 104 unit tests, clap + thiserror + tokio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Good README, CONTRIBUTING.md with conventional commit rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fast reviews, specific feedback, merged shell completions PR in multi-round review&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, if you want lightweight VMs without libvirt's complexity&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;cubic has a clean layered architecture. CLI commands live in &lt;code&gt;src/commands/&lt;/code&gt; (one file per subcommand, clap with derive macros). Business logic lives in &lt;code&gt;src/actions/&lt;/code&gt;. The instance model, serialization (TOML and YAML), and storage live in &lt;code&gt;src/instance/&lt;/code&gt;. Image fetching and distro definitions live in &lt;code&gt;src/image/&lt;/code&gt;. SSH and file transfer live in &lt;code&gt;src/ssh_cmd/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The dependency list is lean. Four crates handle the heavy lifting: &lt;code&gt;russh&lt;/code&gt; for SSH connections, &lt;code&gt;russh-sftp&lt;/code&gt; for file transfers, &lt;code&gt;clap&lt;/code&gt; for CLI parsing, and &lt;code&gt;reqwest&lt;/code&gt; for image downloads. Everything else is standard library or small utility crates. The &lt;code&gt;Cargo.toml&lt;/code&gt; is not trying to be clever.&lt;/p&gt;

&lt;p&gt;One pattern that caught my eye: the project is async internally (tokio, russh) but sync at the CLI boundary. An &lt;code&gt;AsyncCaller&lt;/code&gt; struct wraps a tokio multi-threaded runtime and exposes a &lt;code&gt;call()&lt;/code&gt; method that blocks on a future. Every command creates one, runs its async work through it, and returns a sync result. It's simple and it works. No async bleeding into the CLI layer.&lt;/p&gt;
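
&lt;p&gt;Tokio's own &lt;code&gt;Runtime::block_on()&lt;/code&gt; does the heavy lifting in that pattern. To show the underlying idea without pulling in tokio, here's a minimal std-only &lt;code&gt;block_on&lt;/code&gt; sketch (my illustration, not cubic's &lt;code&gt;AsyncCaller&lt;/code&gt;): park the calling thread until the future's waker fires.&lt;/p&gt;

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// A waker that unparks the thread blocked inside block_on().
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a future to completion on the current (synchronous) thread:
// poll it, and park until the waker signals there is progress to make.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}
```

&lt;p&gt;cubic's version delegates the parking to tokio's multi-threaded runtime, which also drives the I/O so futures actually make progress while the CLI thread waits.&lt;/p&gt;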

&lt;p&gt;The image pipeline is solid. cubic fetches cloud images from distro mirrors, verifies SHA-256/SHA-512 checksums against the upstream checksum file, shows a progress bar during download, and caches images locally. Adding a new distro means adding one entry to the &lt;code&gt;DISTROS&lt;/code&gt; static in &lt;code&gt;image_factory.rs&lt;/code&gt;. Rocky Linux was added in a recent PR following this exact pattern.&lt;/p&gt;

&lt;p&gt;The rough edges are in the SSH layer. The SFTP implementation delegates to &lt;code&gt;russh-sftp&lt;/code&gt;, which turned out to be the source of the performance bug. The progress bar during file transfers is coupled to the async read wrapper (&lt;code&gt;AsyncTransferView&lt;/code&gt;), which works but makes it hard to swap the underlying transfer mechanism without touching the view layer. The test coverage is good for models and serialization but thin for the SSH and QEMU interaction code, which is typical for tools that depend on external services.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;The performance issue (#290) reported that &lt;code&gt;cubic scp&lt;/code&gt; transferred files at roughly 2 MB/s on loopback. I dug into the &lt;code&gt;russh-sftp&lt;/code&gt; internals to find out why.&lt;/p&gt;

&lt;p&gt;The answer is in how &lt;code&gt;russh-sftp&lt;/code&gt; implements &lt;code&gt;AsyncWrite&lt;/code&gt;. Every call to &lt;code&gt;poll_write()&lt;/code&gt; creates a one-shot channel, sends an SFTP write request, and waits until the server responds with an acknowledgment. One write in flight at a time. No pipelining. The SFTP protocol (specified in the draft-ietf-secsh-filexfer drafts; it never became an RFC) explicitly supports pipelining: clients can send many write requests with different IDs and collect the responses asynchronously. OpenSSH's &lt;code&gt;sftp&lt;/code&gt; client does exactly this with 64 outstanding requests by default. &lt;code&gt;russh-sftp&lt;/code&gt; doesn't. The upstream issue (#70) has been open since June 2025 with no fix.&lt;/p&gt;
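PLACEHOLDER-REMOVED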

&lt;p&gt;For a 274 MB file at the default 255 KB max write size, that's roughly 1,075 round-trips, each waiting for an ACK. Even on loopback, the per-request overhead adds up to minutes.&lt;/p&gt;
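
&lt;p&gt;The arithmetic is worth a quick sanity check (decimal MB/KB; the function is mine, just re-deriving the numbers above):&lt;/p&gt;

```rust
// Stop-and-wait cost: how many acknowledged round-trips a transfer needs,
// and how long it takes at the observed throughput.
fn stop_and_wait_cost(file_bytes: u64, chunk_bytes: u64, rate_bytes_per_sec: u64) -> (u64, u64) {
    let round_trips = file_bytes.div_ceil(chunk_bytes); // one ACK wait per chunk
    let total_secs = file_bytes / rate_bytes_per_sec;
    (round_trips, total_secs)
}

// 274 MB file, 255 KB chunks, 2 MB/s observed:
// 1,075 round-trips, 137 seconds -- the "two minutes on loopback".
```

&lt;p&gt;Dividing it out, 137 seconds across 1,075 chunks is roughly 127 ms per acknowledged write, which is where the per-request overhead hides.&lt;/p&gt;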

&lt;p&gt;Wrapping the writer in a &lt;code&gt;BufWriter&lt;/code&gt; wouldn't help. It coalesces small writes into larger ones, but each &lt;code&gt;poll_write()&lt;/code&gt; still blocks on the ACK. You'd go from many small round-trips to fewer large ones, but the bottleneck is the same.&lt;/p&gt;

&lt;p&gt;The fix was to bypass SFTP for single-file transfers and use SCP instead. SCP is a much simpler protocol: open an SSH exec channel with &lt;code&gt;scp -t &amp;lt;path&amp;gt;&lt;/code&gt;, send a one-line header (&lt;code&gt;C0644 &amp;lt;size&amp;gt; &amp;lt;filename&amp;gt;\n&lt;/code&gt;), stream the raw bytes, send a null byte, done. No request IDs, no per-packet ACKs during data transfer. Just a header and a byte stream.&lt;/p&gt;
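
&lt;p&gt;To make that exchange concrete, here's a minimal synchronous sketch of an SCP upload over any byte stream (illustrative names and shape, not cubic's &lt;code&gt;scp.rs&lt;/code&gt;, which is async over a &lt;code&gt;russh&lt;/code&gt; channel):&lt;/p&gt;

```rust
use std::io::{self, Read, Write};

// Minimal SCP "sink mode" upload: the remote side is running `scp -t <path>`.
// `chan` stands in for the SSH exec channel.
fn scp_upload(chan: &mut (impl Read + Write), name: &str, data: &[u8]) -> io::Result<()> {
    // One header line: permissions, size, filename.
    write!(chan, "C0644 {} {}\n", data.len(), name)?;
    read_ack(chan)?; // remote scp replies 0x00 if the header is accepted
    chan.write_all(data)?; // raw bytes, no per-packet acknowledgments
    chan.write_all(&[0])?; // end-of-transfer marker
    read_ack(chan)
}

// SCP acks are a single byte: 0 = ok, 1/2 = warning/error.
fn read_ack(chan: &mut impl Read) -> io::Result<()> {
    let mut byte = [0u8; 1];
    chan.read_exact(&mut byte)?;
    match byte[0] {
        0 => Ok(()),
        _ => Err(io::Error::new(io::ErrorKind::Other, "scp error response")),
    }
}
```

&lt;p&gt;Two acknowledgments for the whole file, instead of one per 255 KB chunk. That's the entire performance story.&lt;/p&gt;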

&lt;p&gt;I added a new &lt;code&gt;scp.rs&lt;/code&gt; module (~170 lines) that implements SCP upload and download over a raw &lt;code&gt;russh&lt;/code&gt; channel via &lt;code&gt;channel.into_stream()&lt;/code&gt;. The &lt;code&gt;async_copy&lt;/code&gt; function in &lt;code&gt;russh.rs&lt;/code&gt; now detects single-file host-to-guest transfers and routes them through SCP. Directory copies and guest-to-guest transfers still use SFTP. Guest-to-host tries SCP first and falls back to SFTP if it fails (which it will for directories).&lt;/p&gt;

&lt;p&gt;The review was thorough. The maintainer requested eight changes, all cleanups: use &lt;code&gt;BufReader.read_line()&lt;/code&gt; instead of byte-by-byte loops, add error message prefixes, reuse the ack-reading function in the download path, validate the end-of-transfer marker byte. All reasonable, all addressed. He also asked (politely) whether the PR was AI-generated. I explained my workflow and he was satisfied. The PR went through two review rounds over 12 days and merged. &lt;a href="https://github.com/cubic-vm/cubic/pull/311" rel="noopener noreferrer"&gt;PR #311&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;cubic is for developers who want lightweight VMs without the weight of libvirt or the constraints of Docker. If you're testing deployment scripts, need an isolated Linux environment for a project, or just want to spin up a Debian box and SSH into it without thinking about Vagrant files, this does the job.&lt;/p&gt;

&lt;p&gt;The project is young (v0.19.0, solo maintainer) but the trajectory is good. New distros get added regularly. The contributor experience is above average: specific review feedback, no ego, merged with thanks. The maintainer is clearly using this tool daily and fixing things as they surface.&lt;/p&gt;

&lt;p&gt;What would push cubic to the next level? The SFTP performance fix helps, but the bigger opportunity is user experience. A &lt;code&gt;cubic init&lt;/code&gt; that scaffolds a project config file, better error messages when QEMU isn't installed, and a Homebrew formula for macOS users would all lower the barrier. The foundation is clean. It just needs more people kicking the tires.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you manage VMs from the command line, &lt;a href="https://github.com/cubic-vm/cubic" rel="noopener noreferrer"&gt;try cubic&lt;/a&gt;. &lt;code&gt;cubic create myvm --image debian&lt;/code&gt; and you're running in under a minute. If you've been burned by slow file transfers to VMs before, the SCP fix in &lt;a href="https://github.com/cubic-vm/cubic/pull/311" rel="noopener noreferrer"&gt;PR #311&lt;/a&gt; is worth a look for the protocol analysis alone.&lt;/p&gt;

&lt;p&gt;Star the repo. The codebase is small enough to read in a sitting, and there are &lt;a href="https://github.com/cubic-vm/cubic/issues" rel="noopener noreferrer"&gt;open issues&lt;/a&gt; at every difficulty level.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #8, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>ssh</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Finding Blocking Code in Async Rust Without Changing a Single Line</title>
      <dc:creator>Wes</dc:creator>
      <pubDate>Wed, 18 Mar 2026 12:52:00 +0000</pubDate>
      <link>https://dev.to/ticktockbent/finding-blocking-code-in-async-rust-without-changing-a-single-line-3c75</link>
      <guid>https://dev.to/ticktockbent/finding-blocking-code-in-async-rust-without-changing-a-single-line-3c75</guid>
      <description>&lt;p&gt;You know the symptoms. Latency spikes under load. Throughput that should be higher. A Tokio runtime that's doing less work than it should be, and you can't see why. Something is blocking a worker thread, starving the other tasks, and nobody's throwing an error about it.&lt;/p&gt;

&lt;p&gt;The standard advice is tokio-console. Add &lt;code&gt;console-subscriber&lt;/code&gt; to your dependencies, rebuild, redeploy, reproduce the problem, and look at task poll times. It works well. It also requires code changes, a rebuild, and a redeployment, which means it's not what you reach for when staging is melting and you need answers now.&lt;/p&gt;

&lt;p&gt;The other option is &lt;code&gt;perf&lt;/code&gt;. Attach to the process, collect stack traces, generate a flamegraph, and interpret a wall of unsymbolized frames. It'll tell you everything that's happening on every thread. The signal-to-noise ratio for "which Tokio worker is blocked and by what" is not great.&lt;/p&gt;

&lt;p&gt;There's a gap between those two. A tool that attaches to a running Tokio process, finds the blocking code, and shows you the result, without touching your source.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is hud?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/cong-or/hud" rel="noopener noreferrer"&gt;hud&lt;/a&gt; is an eBPF-based profiler for Tokio applications, built by &lt;a href="https://github.com/cong-or" rel="noopener noreferrer"&gt;cong-or&lt;/a&gt;. You give it a process name or PID, and it hooks into the Linux scheduler via eBPF tracepoints to detect when Tokio worker threads experience high scheduling latency. When a worker is off-CPU longer than a configurable threshold (default 5ms), hud captures a stack trace, resolves it against DWARF debug symbols, and shows you what was on the stack. No recompile, no instrumentation, no code changes.&lt;/p&gt;

&lt;p&gt;It runs as a real-time TUI or in headless mode with Chrome Trace JSON export. About 147 stars. For the problem it solves, it should have more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Snapshot
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Project&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/cong-or/hud" rel="noopener noreferrer"&gt;hud&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stars&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~147 at time of writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Maintainer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Solo developer, 178 commits and 15 releases in 3 months&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean workspace, good module boundaries, well-documented internals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Five dedicated doc files (architecture, development, exports, troubleshooting, tuning)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Contributor UX&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Both PRs merged within minutes. Would contribute again.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Worth using&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Yes, if you run Tokio on Linux and have ever wondered "what's blocking?"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The project is a Rust workspace with three crates. &lt;code&gt;hud-ebpf&lt;/code&gt; (~400 lines, &lt;code&gt;#![no_std]&lt;/code&gt;) runs inside the kernel: a &lt;code&gt;sched_switch&lt;/code&gt; tracepoint for off-CPU detection and a &lt;code&gt;perf_event&lt;/code&gt; hook sampling at 99 Hz for stack traces. &lt;code&gt;hud-common&lt;/code&gt; (~330 lines) defines the shared types that cross the kernel/userspace boundary. &lt;code&gt;hud&lt;/code&gt; (~8,700 lines) is the userspace application: event processing, DWARF symbol resolution, a ratatui TUI, and Chrome Trace export. The whole thing builds with &lt;code&gt;cargo xtask build-ebpf&lt;/code&gt; for the eBPF side and a regular &lt;code&gt;cargo build&lt;/code&gt; for userspace.&lt;/p&gt;

&lt;p&gt;The interesting engineering starts with worker discovery. Tokio worker threads need to be identified before hud can filter events to just the runtime. This turns out to be harder than it sounds. The first problem is &lt;code&gt;/proc&lt;/code&gt;'s 15-character &lt;code&gt;TASK_COMM_LEN&lt;/code&gt; limit, which truncates &lt;code&gt;tokio-runtime-worker-0&lt;/code&gt; to &lt;code&gt;tokio-runtime-w&lt;/code&gt;. The second is custom runtimes: if you called &lt;code&gt;thread_name("my-pool")&lt;/code&gt;, the default prefixes don't match. hud handles this with a 4-step fallback chain: explicit prefix via &lt;code&gt;--workers&lt;/code&gt;, default Tokio prefixes, stack-based classification (sample for 500ms and look for Tokio scheduler frames), and a largest-thread-group heuristic. That last one just picks the biggest group of threads following a &lt;code&gt;{name}-{N}&lt;/code&gt; naming pattern.&lt;/p&gt;
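
&lt;p&gt;The last-resort heuristic is easy to picture. A sketch (my own, not hud's code): strip a trailing dash-plus-digits suffix from each thread name, group by the remaining prefix, and pick the biggest group.&lt;/p&gt;

```rust
use std::collections::HashMap;

// Group thread names matching a {name}-{N} pattern by prefix and return
// the prefix with the most members. Names without a numeric suffix
// (e.g. "main") are ignored.
fn largest_thread_group(names: &[&str]) -> Option<String> {
    let mut groups: HashMap<String, usize> = HashMap::new();
    for name in names {
        if let Some((prefix, suffix)) = name.rsplit_once('-') {
            if !suffix.is_empty() && suffix.bytes().all(|b| b.is_ascii_digit()) {
                *groups.entry(prefix.to_string()).or_insert(0) += 1;
            }
        }
    }
    groups.into_iter().max_by_key(|&(_, n)| n).map(|(prefix, _)| prefix)
}
```

&lt;p&gt;For a process with &lt;code&gt;my-pool-0&lt;/code&gt; through &lt;code&gt;my-pool-7&lt;/code&gt; plus a handful of one-off threads, this picks &lt;code&gt;my-pool&lt;/code&gt;, which is usually the runtime.&lt;/p&gt;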

&lt;p&gt;Frame classification has its own complexity. Rust statically links dependencies into the main binary, so being "inside the executable" doesn't distinguish your code from tokio's code from serde's code. hud uses a 3-tier classifier: file path patterns first (&lt;code&gt;.cargo/registry/&lt;/code&gt; means third-party, &lt;code&gt;.rustup/toolchains/&lt;/code&gt; means stdlib), then function name prefixes (&lt;code&gt;tokio::&lt;/code&gt;, &lt;code&gt;std::&lt;/code&gt;, &lt;code&gt;hyper::&lt;/code&gt;), then memory range as a last resort. The TUI highlights user code in green and dims everything else.&lt;/p&gt;
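
&lt;p&gt;A sketch of the first two tiers (hypothetical function, not hud's code; the real classifier also has the memory-range fallback):&lt;/p&gt;

```rust
#[derive(Debug, PartialEq)]
enum FrameKind { User, ThirdParty, Stdlib }

// Classify a resolved frame by source path first, then symbol prefix.
fn classify(path: &str, symbol: &str) -> FrameKind {
    // Tier 1: source file path, the most reliable signal.
    if path.contains(".cargo/registry/") {
        return FrameKind::ThirdParty;
    }
    if path.contains(".rustup/toolchains/") {
        return FrameKind::Stdlib;
    }
    // Tier 2: well-known symbol prefixes.
    if symbol.starts_with("std::") || symbol.starts_with("core::") {
        return FrameKind::Stdlib;
    }
    if symbol.starts_with("tokio::") || symbol.starts_with("hyper::") {
        return FrameKind::ThirdParty;
    }
    // Tier 3 (memory-range check) omitted in this sketch: assume user code.
    FrameKind::User
}
```

&lt;p&gt;Anything that survives all the filters is presumed to be yours, which is exactly what you want highlighted in green.&lt;/p&gt;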

&lt;p&gt;The README is refreshingly honest about limitations. It measures scheduling latency, which is a symptom of blocking, not the blocking itself. It captures the victim's stack, not the blocker's. System CPU pressure can cause false positives. The comparison table with tokio-console and Tokio's built-in detection doesn't oversell hud. It positions it as a triage tool: narrow down the suspects, then dig deeper with instrumentation if needed.&lt;/p&gt;

&lt;p&gt;The rough spots are minor. Test coverage is decent for the core modules (classification, worker discovery, hotspot analysis) but thin for the event processing pipeline and TUI rendering. The project is three months old and iterating fast (15 releases), so some gaps are expected. The docs make up for it: five dedicated files covering architecture, development workflow, export format, troubleshooting, and threshold tuning. That's unusual care for a project at this scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contribution
&lt;/h2&gt;

&lt;p&gt;I submitted two PRs, targeting different layers of the stack.&lt;/p&gt;

&lt;p&gt;The first was test coverage for the blocking pool filter. Tokio's &lt;code&gt;spawn_blocking&lt;/code&gt; creates threads that share the same &lt;code&gt;Inner::run&lt;/code&gt; function at the bottom of their stacks as actual worker threads. This is because Tokio bootstraps workers through the blocking pool mechanism. The distinguishing factor is that workers also have &lt;code&gt;scheduler::multi_thread::worker&lt;/code&gt; frames higher up the stack. The &lt;code&gt;is_blocking_pool_stack()&lt;/code&gt; function filters on this distinction to suppress spawn_blocking noise from the TUI.&lt;/p&gt;
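
&lt;p&gt;The rule reduces to: the stack has the blocking-pool entry frame but none of the scheduler's worker frames. A sketch of that predicate (the substrings match tokio's module paths; the function shape is my illustration, not hud's exact code):&lt;/p&gt;

```rust
// True if a stack looks like a spawn_blocking thread: it entered through
// the blocking pool but never runs the multi-threaded scheduler's worker loop.
fn is_blocking_pool_stack(frames: &[&str]) -> bool {
    let entered_via_pool = frames.iter().any(|f| f.contains("blocking::pool::Inner::run"));
    let is_worker = frames.iter().any(|f| f.contains("scheduler::multi_thread::worker"));
    entered_via_pool && !is_worker
}
```

&lt;p&gt;The subtlety is the second clause: drop it and every genuine worker thread gets filtered out too, since workers bootstrap through the same pool entry point.&lt;/p&gt;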

&lt;p&gt;This function went through four release iterations (v0.4.2 through v0.5.0) in response to a bug report where spawn_blocking tasks were showing up as false positives. The maintainer shipped multiple fix releases in rapid succession. But the function had zero test coverage. I added 9 tests covering the core logic: genuine blocking pool stacks, genuine worker stacks, empty stacks, partial matches, closure wrappers, and two realistic deep-stack scenarios. I bundled in doc fixes where TROUBLESHOOTING.md listed 3 worker discovery steps instead of the actual 4, and where the README said "x86_64 architecture" while every other doc said "x86_64/aarch64."&lt;/p&gt;

&lt;p&gt;The second PR was an eBPF fix. The &lt;code&gt;get_cpu_id()&lt;/code&gt; function in the kernel-side code always returned 0, with a TODO comment saying "aya-ebpf doesn't expose bpf_get_smp_processor_id directly yet." It does. The helper is re-exported through &lt;code&gt;pub use gen::*&lt;/code&gt; in the aya-ebpf helpers module, but it's &lt;code&gt;#[doc(hidden)]&lt;/code&gt;, so it never shows up in the generated docs. The fix was adding an import and replacing the stub with the real call. Three lines changed. Every exported trace event was silently reporting the wrong CPU core.&lt;/p&gt;

&lt;p&gt;Both PRs were &lt;a href="https://github.com/cong-or/hud/pull/4" rel="noopener noreferrer"&gt;merged&lt;/a&gt; &lt;a href="https://github.com/cong-or/hud/pull/5" rel="noopener noreferrer"&gt;within minutes&lt;/a&gt;. The codebase was easy to navigate: clear module boundaries, descriptive file names, good internal documentation. The eBPF side requires nightly Rust and &lt;code&gt;bpf-linker&lt;/code&gt;, which adds setup friction, but the build process is documented and worked on the first try.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;hud is for Rust developers running Tokio on Linux who want to understand what's blocking their runtime without adding instrumentation. The workflow is &lt;code&gt;sudo hud my-app&lt;/code&gt; and you're looking at results. If you've ever stared at a flamegraph trying to figure out which of those Tokio frames is yours, hud does that filtering for you.&lt;/p&gt;

&lt;p&gt;The project is young (three months) and solo-maintained, but the trajectory is strong. The commit history shows a developer who responds to bug reports with same-day fix releases, who writes honest documentation about tradeoffs, and who merges external contributions without friction. The codebase is clean enough that I was reading eBPF kernel code within an hour of cloning the repo. That doesn't happen by accident.&lt;/p&gt;

&lt;p&gt;What would push hud further? More metrics in the TUI (per-CPU breakdown, timeline visualization of blocking events), broader async runtime support beyond Tokio, and CI integration for the headless export mode (pipe the JSON through &lt;code&gt;jq&lt;/code&gt; for regression detection). The architecture supports all of this. The scheduling-latency metric is indirect by design, and the project is honest about that. What it offers in return is zero-friction access to information that would otherwise require a rebuild.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go Look At This
&lt;/h2&gt;

&lt;p&gt;If you run Tokio on Linux, &lt;a href="https://github.com/cong-or/hud" rel="noopener noreferrer"&gt;try hud&lt;/a&gt;. Download the pre-built binary, point it at a running process, and see what shows up. If nothing does, your runtime is clean. If something does, you just saved yourself a rebuild.&lt;/p&gt;

&lt;p&gt;Star the repo. Here are &lt;a href="https://github.com/cong-or/hud/pull/4" rel="noopener noreferrer"&gt;the tests I added&lt;/a&gt; for the blocking pool filter and &lt;a href="https://github.com/cong-or/hud/pull/5" rel="noopener noreferrer"&gt;the eBPF fix&lt;/a&gt; for the cpu_id stub. Both small, both merged.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Review Bomb #7, a series where I find under-the-radar projects on GitHub, read the code, contribute something, and write it up. If you know a project that deserves more eyeballs, drop it in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published at &lt;a href="https://www.wshoffner.dev/blog" rel="noopener noreferrer"&gt;wshoffner.dev/blog&lt;/a&gt;. If you liked it, the Review Bomb series lives there too.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>rust</category>
      <category>performance</category>
      <category>async</category>
    </item>
  </channel>
</rss>
