<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matías Denda</title>
    <description>The latest articles on DEV Community by Matías Denda (@mdenda).</description>
    <link>https://dev.to/mdenda</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3601966%2F6aebccb4-b6ac-40c9-98c1-0eb5fa97d314.png</url>
      <title>DEV Community: Matías Denda</title>
      <link>https://dev.to/mdenda</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mdenda"/>
    <language>en</language>
    <item>
      <title>The four decisions every team makes about Git (whether they realize it or not)</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Wed, 13 May 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/the-four-decisions-every-team-makes-about-git-whether-they-realize-it-or-not-9hb</link>
      <guid>https://dev.to/mdenda/the-four-decisions-every-team-makes-about-git-whether-they-realize-it-or-not-9hb</guid>
<description>&lt;p&gt;Most teams think they have a "Git workflow." What they actually have is a set of habits accumulated over time, often shaped by people who left the company years ago and built on assumptions that no longer hold.&lt;/p&gt;

&lt;p&gt;Ask three developers on the same team to explain "how we use Git" and you'll get three different answers. Not because anyone is wrong — because the team never actually made the decisions explicitly. They made them by accident.&lt;/p&gt;

&lt;p&gt;This article is about surfacing those decisions. Not telling you what's right — telling you &lt;strong&gt;what you're implicitly choosing&lt;/strong&gt;, so you can choose deliberately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The four decisions
&lt;/h2&gt;

&lt;p&gt;Every team, whether they know it or not, has made a choice about four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Branching strategy&lt;/strong&gt; — How does code move from idea to production?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracking methodology&lt;/strong&gt; — How do you decide what to work on and measure progress?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release cadence&lt;/strong&gt; — How often does code reach users?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation level&lt;/strong&gt; — How much of the workflow is human-driven vs event-driven?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each decision constrains the next. Each one, if not made explicitly, gets made implicitly — and implicit decisions tend to contradict each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision 1: Branching strategy
&lt;/h2&gt;

&lt;p&gt;Your options, roughly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trunk-based development&lt;/strong&gt; — everyone commits to (or merges small changes into) main, multiple times a day&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Flow&lt;/strong&gt; — feature branches, short-lived, merged via PR to main&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitLab Flow&lt;/strong&gt; — feature branches + environment branches (staging, production)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Flow&lt;/strong&gt; — long-lived develop + main branches, release branches, hotfix branches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom hybrid&lt;/strong&gt; — some combination of the above&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most tutorials frame this as "which is best?" The honest answer is: &lt;strong&gt;it depends on the next three decisions&lt;/strong&gt;. Git Flow with continuous deployment is a contradiction. Trunk-based with a monthly release train is a waste.&lt;/p&gt;

&lt;p&gt;The right question isn't "which strategy should we use?" — it's "&lt;strong&gt;which strategy supports the other three decisions?&lt;/strong&gt;"&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision 2: Tracking methodology
&lt;/h2&gt;

&lt;p&gt;This one most teams think they've decided, but haven't.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scrum&lt;/strong&gt; — sprints, story points, planning/retro ceremonies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kanban&lt;/strong&gt; — continuous flow, WIP limits, no sprints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Waterfall or phase-gated&lt;/strong&gt; — requirements → design → build → test → deploy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SAFe or other scaled framework&lt;/strong&gt; — multiple teams coordinating via ARTs or similar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The question that reveals your actual methodology: &lt;strong&gt;what triggers work to start?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If it's "this sprint's plan" → you're doing Scrum (or pretending to)&lt;/li&gt;
&lt;li&gt;If it's "capacity is available and this is prioritized" → you're doing Kanban (or should be)&lt;/li&gt;
&lt;li&gt;If it's "this phase is complete and signed off" → you're doing Waterfall (admit it)&lt;/li&gt;
&lt;li&gt;If it's "a manager said so" → you're in a methodology vacuum, which is its own problem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The implications for Git are direct. Scrum with 2-week sprints wants branches that close before sprint end. Kanban doesn't care about time — it cares about WIP limits. Waterfall expects phase-gated tags. These are incompatible constraints on how your branches live.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decision 3: Release cadence
&lt;/h2&gt;

&lt;p&gt;This is the one most teams lie to themselves about.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Continuous deployment&lt;/strong&gt; — every merge to main goes to production (with canary, feature flags, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous delivery&lt;/strong&gt; — every merge to main is production-ready, but deployment is a manual trigger&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release train&lt;/strong&gt; — scheduled releases (weekly, biweekly, monthly)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ad-hoc releases&lt;/strong&gt; — "when it's ready," which usually means "when someone has time"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The diagnostic question: &lt;strong&gt;how long from a developer's merge to users seeing the change?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hours → you're on continuous deployment&lt;/li&gt;
&lt;li&gt;A day or two → continuous delivery&lt;/li&gt;
&lt;li&gt;Consistent schedule → release train&lt;/li&gt;
&lt;li&gt;"Varies" → ad-hoc (and your lead time is probably terrible)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Release cadence is the cruellest decision to get wrong, because it interacts with everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long cadence + trunk-based = huge batch risk&lt;/li&gt;
&lt;li&gt;Short cadence + Git Flow = unnecessary overhead&lt;/li&gt;
&lt;li&gt;Ad-hoc + Scrum = sprint goals that never ship at sprint end&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Decision 4: Automation level
&lt;/h2&gt;

&lt;p&gt;Unlike the others, this is a spectrum, not a discrete choice. But at a high level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fully automated&lt;/strong&gt; — PR opens → CI runs → auto-merge on approval → auto-deploy to staging → auto-promote to prod (with gates)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven&lt;/strong&gt; — Git events trigger most transitions; humans handle exceptions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semi-manual&lt;/strong&gt; — CI runs, humans approve and merge, humans decide when to deploy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fully manual&lt;/strong&gt; — every step requires a human action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The test of your automation level: &lt;strong&gt;what happens at 2 AM when a hotfix needs to ship?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the answer is "we page someone who manually runs the deploy script," your automation level is semi-manual at best. If the answer is "the hotfix workflow handles it; we review the result in the morning," you're much closer to the automated end.&lt;/p&gt;

&lt;h2&gt;
  
  
  The decision table
&lt;/h2&gt;

&lt;p&gt;Here's what it looks like when a team makes all four decisions consistently:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Team Type&lt;/th&gt;
&lt;th&gt;Branching&lt;/th&gt;
&lt;th&gt;Tracking&lt;/th&gt;
&lt;th&gt;Release&lt;/th&gt;
&lt;th&gt;Automation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SaaS startup&lt;/td&gt;
&lt;td&gt;Trunk-based&lt;/td&gt;
&lt;td&gt;Kanban&lt;/td&gt;
&lt;td&gt;Continuous&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Regulated fintech&lt;/td&gt;
&lt;td&gt;GitLab Flow&lt;/td&gt;
&lt;td&gt;Scrum&lt;/td&gt;
&lt;td&gt;Biweekly&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise product&lt;/td&gt;
&lt;td&gt;Git Flow&lt;/td&gt;
&lt;td&gt;SAFe&lt;/td&gt;
&lt;td&gt;Quarterly&lt;/td&gt;
&lt;td&gt;Medium-high&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open source project&lt;/td&gt;
&lt;td&gt;GitHub Flow&lt;/td&gt;
&lt;td&gt;Issue-driven&lt;/td&gt;
&lt;td&gt;Ad-hoc&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Regulated hardware&lt;/td&gt;
&lt;td&gt;Custom&lt;/td&gt;
&lt;td&gt;Waterfall-hybrid&lt;/td&gt;
&lt;td&gt;Milestone-based&lt;/td&gt;
&lt;td&gt;Low-medium&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notice that each row is &lt;strong&gt;internally consistent&lt;/strong&gt;. The branching strategy supports the cadence. The tracking methodology matches the release frequency. The automation level is appropriate for the regulatory context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Natural combinations that work
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Trunk-based + Kanban + continuous deployment + high automation.&lt;/strong&gt; This is the SaaS ideal. Works when the team is small-to-medium, the product tolerates continuous change, and there's sufficient test coverage. Fastest feedback loop, but requires discipline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Flow + Scrum + continuous delivery + medium-high automation.&lt;/strong&gt; The most common mid-size-team setup. Feature branches align with sprint items. Continuous delivery means each sprint can ship if the business decides. Good for product teams at B2B SaaS companies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git Flow + SAFe + release train + medium automation.&lt;/strong&gt; Enterprise reality. Multiple teams coordinating, scheduled releases, version-branching for support of multiple production versions. Unfashionable, but genuinely correct for some contexts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Combinations that create friction
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Git Flow + continuous deployment.&lt;/strong&gt; Your long-lived develop branch creates unnecessary batching. Why have &lt;code&gt;develop&lt;/code&gt; if you deploy on every merge? You're paying the cost of Git Flow without getting its benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trunk-based + quarterly release train.&lt;/strong&gt; You're merging to main continuously, but users don't see the changes for 3 months. Either deploy more often, or use longer-lived branches. The current setup batches risk without batching value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scrum + ad-hoc releases.&lt;/strong&gt; You commit to sprint goals, but the output doesn't reach users at sprint end. The team's measurement of success (sprint completion) is divorced from actual delivery. Over time, developers stop believing sprints matter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kanban + Git Flow.&lt;/strong&gt; Kanban is about continuous flow; Git Flow creates flow-blocking integration points. Every release branch is a ceremony Kanban is designed to eliminate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The test: can you write down your answer?
&lt;/h2&gt;

&lt;p&gt;Here's a diagnostic exercise that takes 10 minutes and is worth more than most consulting engagements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Write down the four decisions for your team, as one sentence each.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"We use GitHub Flow."&lt;/li&gt;
&lt;li&gt;"We use Scrum with 2-week sprints."&lt;/li&gt;
&lt;li&gt;"We release every Thursday at 2 PM."&lt;/li&gt;
&lt;li&gt;"PR opens trigger CI automatically; merges trigger deploy-to-staging automatically; promotion to production requires manual approval."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can write these four sentences confidently, and every senior person on the team would write approximately the same four sentences, &lt;strong&gt;you have a workflow&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If different people would write different sentences, or if you find yourself writing "it depends," or "we kind of do X but sometimes Y" — you don't have a workflow. You have accumulated habits pretending to be a workflow.&lt;/p&gt;

&lt;p&gt;And that's fine to discover. The discovery is the start of fixing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your workflow will evolve
&lt;/h2&gt;

&lt;p&gt;One more thing: these four decisions aren't permanent. Teams grow, products mature, regulations change.&lt;/p&gt;

&lt;p&gt;A startup that began with trunk-based + continuous deployment may, at 50 developers, realize that coordination requires feature branches. That's not a failure — that's scale demanding a new set of decisions.&lt;/p&gt;

&lt;p&gt;A mature enterprise product might, after adopting feature flags and improving test coverage, discover it can move from quarterly release trains to biweekly. That's not a rebellion — that's capability unlocking a new set of decisions.&lt;/p&gt;

&lt;p&gt;The mistake isn't in the specific decisions. The mistake is making them implicitly, letting them drift, and then wondering why delivery is slow and everyone's frustrated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make them explicit. Revisit them quarterly. Change them when the context changes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's how you go from "we have a Git workflow" to actually having one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is adapted from &lt;a href="https://mdenda.gumroad.com/l/git-in-depth" rel="noopener noreferrer"&gt;Git in Depth: From Solo Developer to Engineering Teams&lt;/a&gt;, a 658-page book with a full chapter on the complete map connecting board columns, Git states, and deployment environments — plus decision frameworks for choosing the right combination for your team.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Related: &lt;a href="https://dev.to/matutetandil/ARTICLE-2-SLUG"&gt;Your Kanban board is lying to you (and Git knows it)&lt;/a&gt; · &lt;a href="https://dev.to/matutetandil/ARTICLE-3-SLUG"&gt;Little's Law: the math that explains why your team delivers slowly&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;See all my articles on Git and engineering practice: &lt;a href="https://dev.to/matutetandil"&gt;dev.to/matutetandil&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>architecture</category>
      <category>teams</category>
    </item>
    <item>
      <title>N! Ways to Hide a Message: Multi-Carrier Encoding in Rust</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Tue, 12 May 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/n-ways-to-hide-a-message-multi-carrier-encoding-in-rust-23mo</link>
      <guid>https://dev.to/mdenda/n-ways-to-hide-a-message-multi-carrier-encoding-in-rust-23mo</guid>
      <description>&lt;p&gt;&lt;em&gt;Post 2 of 6 in the series on building &lt;a href="https://github.com/matutetandil/anyhide" rel="noopener noreferrer"&gt;Anyhide&lt;/a&gt;, a Rust steganography tool. This post is about a small feature that multiplies your adversary's work by a factorial.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I like features that give you an outsized security improvement for very little code. Multi-carrier encoding is one of those. The implementation is seventeen lines. The effect is that the set of carriers you use becomes an &lt;em&gt;ordered&lt;/em&gt; secret — and getting that order wrong produces deterministic garbage, which an attacker can't distinguish from using the wrong files entirely.&lt;/p&gt;

&lt;p&gt;Let me explain.&lt;/p&gt;

&lt;h2&gt;
  
  
  The single-carrier recap
&lt;/h2&gt;

&lt;p&gt;In a normal Anyhide flow, both parties share one file. They use it as a reference, and the sender encodes a message by computing byte positions into that file. Single file, single shared secret.&lt;/p&gt;
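&lt;p&gt;&lt;em&gt;To make "computing byte positions into a file" concrete, here's a deliberately naive toy model of the idea — this is NOT Anyhide's actual scheme (which also encrypts and obfuscates), just the carrier-as-reference concept in its simplest form:&lt;/em&gt;&lt;/p&gt;

```rust
// Toy position-based encoding: for each message byte, record where it
// first appears in the carrier. Decoding is just indexing back in.
// Hypothetical helper names; not from the Anyhide codebase.
fn encode(carrier: &[u8], msg: &[u8]) -> Option<Vec<usize>> {
    msg.iter()
        .map(|m| carrier.iter().position(|c| c == m))
        .collect() // None if some byte never occurs in the carrier
}

fn decode(carrier: &[u8], positions: &[usize]) -> Vec<u8> {
    positions.iter().map(|&i| carrier[i]).collect()
}

fn main() {
    let carrier = b"the quick brown fox jumps over the lazy dog";
    let code = encode(carrier, b"hi").unwrap();
    // The "code" is meaningless without the exact carrier bytes.
    assert_eq!(decode(carrier, &code), b"hi");
}
```

&lt;p&gt;&lt;em&gt;The point is only that the carrier itself is the shared secret: the transmitted positions are useless without byte-identical access to the same file.&lt;/em&gt;&lt;/p&gt;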

&lt;p&gt;But "the carrier is the file we both have" is a constraint I wanted to relax. What if the carrier is &lt;em&gt;a set of files&lt;/em&gt;? What if the set is ordered, and the order itself is a secret nobody but the two parties knows?&lt;/p&gt;

&lt;p&gt;That's multi-carrier encoding.&lt;/p&gt;

&lt;h2&gt;
  
  
  The API
&lt;/h2&gt;

&lt;p&gt;From the command line, you just pass &lt;code&gt;-c&lt;/code&gt; multiple times:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Sender&lt;/span&gt;
anyhide encode &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; song.mp3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; photo.jpg &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; document.pdf &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"meeting moved to 9pm"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"passphrase"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--their-key&lt;/span&gt; bob.pub

&lt;span class="c"&gt;# Receiver needs the SAME files in the SAME order&lt;/span&gt;
anyhide decode &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--code&lt;/span&gt; &lt;span class="s2"&gt;"..."&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; song.mp3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; photo.jpg &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; document.pdf &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"passphrase"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--my-key&lt;/span&gt; bob.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If Bob swaps &lt;code&gt;photo.jpg&lt;/code&gt; and &lt;code&gt;document.pdf&lt;/code&gt;, he does not get an error. He gets garbage — random-looking bytes that happen to decode cleanly from the wrong carrier. He won't be able to tell whether the failure came from the wrong passphrase, the wrong key, the wrong files, or the wrong &lt;em&gt;order&lt;/em&gt; of the right files.&lt;/p&gt;

&lt;h2&gt;
  
  
  The math
&lt;/h2&gt;

&lt;p&gt;The number of ways to order N distinct files is N! (N factorial). That means if an adversary has all four of your carrier files and the passphrase and the private key, they &lt;em&gt;still&lt;/em&gt; have to try up to 24 orderings to recover a 4-carrier message.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;N carriers&lt;/th&gt;
&lt;th&gt;Orderings to try&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;120&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;5,040&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;3,628,800&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This isn't cryptographic security — for any realistic number of carriers, N! is minuscule next to an exponential keyspace. But it's not &lt;em&gt;meant&lt;/em&gt; to be. It's an additional layer of ambiguity, and more importantly, it's ambiguity that an attacker can't distinguish from other failures. They can't test "is this the wrong order?" without a ciphertext oracle, which they don't have.&lt;/p&gt;
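&lt;p&gt;&lt;em&gt;The table above is just N!, which takes one line to compute (illustrative sketch, not Anyhide code):&lt;/em&gt;&lt;/p&gt;

```rust
// Orderings an attacker must try for n carriers: n!
// (u64 is enough up to n = 20; larger values overflow.)
fn orderings(n: u64) -> u64 {
    (1..=n).product()
}

fn main() {
    assert_eq!(orderings(3), 6);
    assert_eq!(orderings(4), 24);
    assert_eq!(orderings(10), 3_628_800);
}
```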

&lt;h2&gt;
  
  
  The implementation
&lt;/h2&gt;

&lt;p&gt;Here's the actual code. It lives in &lt;code&gt;src/text/carrier.rs&lt;/code&gt; and is 17 lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="cd"&gt;/// Creates a carrier from multiple files concatenated in order.&lt;/span&gt;
&lt;span class="cd"&gt;///&lt;/span&gt;
&lt;span class="cd"&gt;/// *Order matters!* Different order = different carrier = different decoding result.&lt;/span&gt;
&lt;span class="cd"&gt;/// This provides N! additional security combinations for N carriers.&lt;/span&gt;
&lt;span class="cd"&gt;///&lt;/span&gt;
&lt;span class="cd"&gt;/// - Single file: Delegates to `from_file()` (preserves text vs binary detection)&lt;/span&gt;
&lt;span class="cd"&gt;/// - Multiple files: All read as bytes and concatenated (always binary carrier)&lt;/span&gt;
&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;from_files&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;path&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;PathBuf&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;io&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;paths&lt;/span&gt;&lt;span class="nf"&gt;.is_empty&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Carrier&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[]));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;paths&lt;/span&gt;&lt;span class="nf"&gt;.len&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;paths&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Multiple files: read all as bytes and concatenate in order&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;combined&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Vec&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;paths&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="n"&gt;combined&lt;/span&gt;&lt;span class="nf"&gt;.extend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Carrier&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;combined&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Read each file as bytes, concatenate them in the provided order, hand the resulting buffer to the binary carrier machinery, and move on. The rest of the encoder and decoder doesn't even know it's dealing with multiple files — to them, it's just a longer buffer.&lt;/p&gt;
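&lt;p&gt;&lt;em&gt;If you want to poke at the order-dependence without pulling in the rest of the crate, here's a standalone sketch of the same concatenation step (the helper name &lt;code&gt;concat_in_order&lt;/code&gt; is mine, not Anyhide's):&lt;/em&gt;&lt;/p&gt;

```rust
use std::fs;
use std::path::PathBuf;

// Mirrors the multi-file branch above: read each file, append in order.
fn concat_in_order(paths: &[PathBuf]) -> std::io::Result<Vec<u8>> {
    let mut combined = Vec::new();
    for path in paths {
        combined.extend(fs::read(path)?);
    }
    Ok(combined)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    let a = dir.join("demo_carrier_a");
    let b = dir.join("demo_carrier_b");
    fs::write(&a, b"first")?;
    fs::write(&b, b"second")?;

    // Same files, different order: a completely different carrier buffer.
    let ab = concat_in_order(&[a.clone(), b.clone()])?;
    let ba = concat_in_order(&[b, a])?;
    assert_eq!(ab, b"firstsecond");
    assert_eq!(ba, b"secondfirst");
    Ok(())
}
```

&lt;p&gt;&lt;em&gt;Every byte position computed against one ordering resolves to a different byte under the other — which is why a swapped order decodes to garbage rather than an error.&lt;/em&gt;&lt;/p&gt;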

&lt;h2&gt;
  
  
  Why the single-file branch matters
&lt;/h2&gt;

&lt;p&gt;Notice the early return at &lt;code&gt;paths.len() == 1&lt;/code&gt;. Without it, a single-carrier call would lose the text/binary autodetection that &lt;code&gt;from_file&lt;/code&gt; gives you. A single &lt;code&gt;.txt&lt;/code&gt; carrier would be treated as binary, and substring matching would silently become byte matching. The search still works, but you lose the case-insensitive lookup that text carriers give you for free.&lt;/p&gt;

&lt;p&gt;Keeping the single-file case routed through &lt;code&gt;from_file&lt;/code&gt; means the multi-carrier feature is a pure extension — existing callers and existing encoded codes are untouched. This is the kind of thing I care about when adding features to a tool people might already depend on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why ordered concatenation instead of something cleverer
&lt;/h2&gt;

&lt;p&gt;I considered a few alternatives while designing this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Hashing the carrier set and using the hash as a seed&lt;/em&gt;. Too indirect. The whole appeal of Anyhide is that the carrier is &lt;em&gt;the file&lt;/em&gt;, not a digest of it. Hashing would also make the "why wrong order gives garbage" property harder to reason about.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Interleaving bytes from each file in a passphrase-derived pattern&lt;/em&gt;. More complex, marginally better, and it made the error surface much bigger. Every bug would be a silent decoder failure, and because Anyhide's decoder &lt;em&gt;never&lt;/em&gt; errors by design, such bugs would be almost impossible to detect.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Each carrier gets its own search, and fragments are distributed across carriers&lt;/em&gt;. Elegant on paper, but it changes the security model: now partial knowledge of one carrier compromises the whole message instead of just a fraction of it. Worse.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Concatenation in caller-provided order is the simplest model that works and the easiest to reason about. Boring. Correct.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plausible deniability across the factorial
&lt;/h2&gt;

&lt;p&gt;Here's where this gets fun. Imagine you encode a message with &lt;code&gt;[A, B, C]&lt;/code&gt;. Six orderings are possible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[A, B, C]  → "meeting at 9pm"  (real)
[A, C, B]  → garbage
[B, A, C]  → garbage
[B, C, A]  → garbage
[C, A, B]  → garbage
[C, B, A]  → garbage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All five "wrong" orderings produce outputs &lt;em&gt;indistinguishable from random&lt;/em&gt;. An attacker who recovers your files, your key, and your passphrase — but not the ordering — sees six plausible-looking plaintexts. None of them says "ERROR". One says "meeting at 9pm". The other five say things like &lt;code&gt;\x8f\x3aq#w...&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now pair this with the duress-password feature from &lt;a href="https://dev.tolink"&gt;Post 3&lt;/a&gt; and the adversary's problem gets harder still: they can't tell whether a given output is "the real message in the right order", "the decoy message in the right order", or "the wrong ordering producing noise".&lt;/p&gt;

&lt;h2&gt;
  
  
  A note on testing
&lt;/h2&gt;

&lt;p&gt;The integration test that verifies this comes down to a single assertion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[test]&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test_multi_carrier_wrong_order_produces_garbage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"top secret"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;carriers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"a.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"b.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"c.txt"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;code&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;encode_with&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;carriers&lt;/span&gt;&lt;span class="nf"&gt;.clone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Wrong order&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;wrong&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"b.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"a.txt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"c.txt"&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;decoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;decode_with&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;wrong&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Does not return Err(); returns Ok with garbage&lt;/span&gt;
    &lt;span class="nd"&gt;assert_ne!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;decoded&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// And it's "garbage" in the sense that re-encoding it&lt;/span&gt;
    &lt;span class="c1"&gt;// wouldn't round-trip either&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The interesting assertion isn't &lt;code&gt;assert_ne!&lt;/code&gt;. It's that the function returns &lt;code&gt;Ok&lt;/code&gt; with bytes, not an error. That's the "never-fail decoder" invariant Anyhide is built around, and multi-carrier inherits it for free because it's just a longer buffer going through the same pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you get for 17 lines of Rust
&lt;/h2&gt;

&lt;p&gt;To recap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;N! carrier orderings an adversary must exhaustively search through.&lt;/li&gt;
&lt;li&gt;Zero impact on single-carrier code paths (backwards compatible by design).&lt;/li&gt;
&lt;li&gt;Order is a &lt;em&gt;new&lt;/em&gt; secret, composable with the passphrase, key, and (see Post 3) duress password.&lt;/li&gt;
&lt;li&gt;Wrong order produces garbage indistinguishable from any other failure mode.&lt;/li&gt;
&lt;/ul&gt;
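&lt;p&gt;To make the first bullet concrete: the ordering space grows factorially with the number of carriers. A throwaway sketch (not part of the Anyhide codebase):&lt;/p&gt;

```python
from math import factorial

# With N carrier files, the correct decode order is one of N! permutations.
# An adversary who recovers the carriers but not the order must try them all.
for n in (3, 5, 8, 10):
    print(f"{n} carriers -> {factorial(n):,} orderings")
# 3 carriers -> 6 orderings
# ...
# 10 carriers -> 3,628,800 orderings
```

&lt;p&gt;At ten carriers the order alone is worth roughly 21 bits of secret, layered on top of the passphrase and key.&lt;/p&gt;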

&lt;p&gt;The lesson for me, writing this: the best additions to a security tool are the ones you can explain in two sentences and implement without touching the core. If your new feature reshapes the pipeline, you've probably made a mistake.&lt;/p&gt;

&lt;p&gt;Next up in the series: &lt;em&gt;Post 3 — Plausible Deniability and Duress Passwords&lt;/em&gt;. How to encode &lt;em&gt;two&lt;/em&gt; messages under two different passphrases, so that under coercion you can reveal a decoy that's cryptographically indistinguishable from the real one. Dropping in two weeks.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Repo: &lt;a href="https://github.com/matutetandil/anyhide" rel="noopener noreferrer"&gt;github.com/matutetandil/anyhide&lt;/a&gt;. If you spot a way multi-carrier breaks the security properties I claimed above, open an issue — I want to know.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>security</category>
      <category>cryptography</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Stop estimating in hours. Start estimating in complexity.</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Thu, 07 May 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/stop-estimating-in-hours-start-estimating-in-complexity-10ep</link>
      <guid>https://dev.to/mdenda/stop-estimating-in-hours-start-estimating-in-complexity-10ep</guid>
      <description>&lt;h2&gt;
  
  
  Stop Estimating in Hours. Start Estimating in Complexity.
&lt;/h2&gt;

&lt;p&gt;There's a quiet truth every developer knows but nobody says out loud at sprint planning:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When we estimate in hours, we always estimate low.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Always. The senior estimates low because they want to look efficient. The junior estimates low because they don't want to seem slow. The team estimates low because the PM is in the room. And then the sprint ends, half the tickets roll over, and everyone pretends to be surprised.&lt;/p&gt;

&lt;p&gt;After years of watching this play out across teams, languages, and stacks, I've come to believe that the problem isn't that we're bad at estimating hours. The problem is that &lt;strong&gt;hours are the wrong unit&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let me explain why I now estimate in complexity, and why I think it leads to better software, better teams, and — surprisingly — better deadlines.&lt;/p&gt;

&lt;h2&gt;
  
  
  The misunderstanding at the core
&lt;/h2&gt;

&lt;p&gt;Here's the trap most teams fall into: they treat &lt;strong&gt;complexity&lt;/strong&gt; and &lt;strong&gt;time&lt;/strong&gt; as the same thing measured with different rulers. They're not. They're two independent axes.&lt;/p&gt;

&lt;p&gt;Consider translation. Translating a paragraph from English to Spanish is &lt;em&gt;easy&lt;/em&gt;. There's almost no complexity. But translating the entire Bible? That's still easy — the per-sentence cognitive load hasn't changed — it's just &lt;em&gt;long&lt;/em&gt;. Easy doesn't mean fast.&lt;/p&gt;

&lt;p&gt;Now flip it. A complex distributed-systems migration sounds like it should take weeks. But if your platform happens to have the right tooling already in place, you might pull it off in an afternoon. Complex doesn't mean slow.&lt;/p&gt;

&lt;p&gt;Once you internalize this, the whole hour-based estimation game starts looking absurd. You're collapsing two dimensions into one and pretending the result is meaningful.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what is complexity, then?
&lt;/h2&gt;

&lt;p&gt;In the teams I've worked on, we settled on a Fibonacci-ish scale: &lt;strong&gt;3, 5, 8, 13&lt;/strong&gt;. Anything bigger than 13 wasn't an estimate — it was a signal to break the task down.&lt;/p&gt;

&lt;p&gt;The numbers themselves don't matter much. What matters is what they represent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;3&lt;/strong&gt; — Well-understood. We've done this kind of thing before. Few moving pieces. Low risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5&lt;/strong&gt; — Some unknowns, or more pieces involved, but nothing scary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;8&lt;/strong&gt; — Several systems touched, real risk, or genuinely new territory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;13&lt;/strong&gt; — Too big. Stop. Break it apart.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can layer in dimensions like uncertainty, coupling, blast radius, dependencies, team familiarity — but the goal isn't to build a precise rubric. The goal is to give the team a shared vocabulary for talking about how &lt;em&gt;hard&lt;/em&gt; something is, separate from how &lt;em&gt;long&lt;/em&gt; it takes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real magic isn't the number — it's the conversation
&lt;/h2&gt;

&lt;p&gt;Here's what nobody tells you about story points: they're not better than hours because they're more accurate. Honestly, they're probably &lt;em&gt;less&lt;/em&gt; accurate in absolute terms.&lt;/p&gt;

&lt;p&gt;They're better because they change the conversation.&lt;/p&gt;

&lt;p&gt;When you ask someone "how long will this take?", the conversation is individual and defensive. Whoever knows the most throws out a number. Everyone else nods. The junior who's actually going to do the work quietly panics, because they know they can't hit that number, but pushing back means admitting they're slower.&lt;/p&gt;

&lt;p&gt;When you ask "how complex is this?", the conversation is collective. Why is this a 5 and not a 3? What pieces does it have? What could go wrong? Juniors learn by watching seniors reason through problems. Seniors occasionally discover that something they called "trivial" wasn't trivial at all. The team understands what they're about to build &lt;em&gt;before&lt;/em&gt; they build it.&lt;/p&gt;

&lt;p&gt;That's what hours don't give you, no matter how precise they are.&lt;/p&gt;

&lt;h2&gt;
  
  
  The split that changes everything
&lt;/h2&gt;

&lt;p&gt;Here's the part of my workflow that I think is genuinely underrated:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The team estimates complexity. The individual developer estimates their own hours.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Complexity is a property of the &lt;em&gt;problem&lt;/em&gt;. Hours are a property of the &lt;em&gt;person solving it&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I'm a senior architect. A junior on my team is not going to take the same time I do on the same task. That's not a flaw — it's reality. Telling a junior "this should take you 3 hours because the senior said so" is one of the cruelest, most counterproductive things we do in this industry. They burn out trying to hit a number that was never theirs to hit.&lt;/p&gt;

&lt;p&gt;So instead: the team agrees this task is a 5. Then the developer who picks it up estimates &lt;em&gt;their own hours&lt;/em&gt;. Those hours are mostly for them — to plan their day, to learn calibration, to flag early when they're slipping. We sum them as a sanity check against the sprint capacity, but the commitment to the business doesn't come from those numbers. It comes from velocity (more on that in a sec).&lt;/p&gt;

&lt;p&gt;Junior devs get the low-complexity tasks first. Not because we don't trust them, but because &lt;strong&gt;low-complexity tasks are where it's cheap to be wrong&lt;/strong&gt;. That's where you learn to estimate without blowing up the sprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  "But the junior will estimate wrong too"
&lt;/h2&gt;

&lt;p&gt;Yes. They will. That's the point.&lt;/p&gt;

&lt;p&gt;I get this objection every time I describe this system: &lt;em&gt;"if the dev estimates their own hours, they can still get it wrong — for any of the reasons people get hours wrong in the first place."&lt;/em&gt; True. A junior estimating their own hours will probably underestimate 9 times out of 10. A senior in unfamiliar territory will do the same.&lt;/p&gt;

&lt;p&gt;The difference isn't that the estimate magically becomes correct. The difference is &lt;strong&gt;what happens when it's wrong&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When hours are imposed top-down by whoever-knows-most, a missed estimate is a personal failure. The junior is just behind. Tough luck, work weekends.&lt;/p&gt;

&lt;p&gt;When the dev estimates their own hours, a missed estimate is a &lt;strong&gt;calibration signal&lt;/strong&gt;. It's the moment the team lead — the architect, the TL, the assigned senior — steps in. Not to scold, but to give context. To explain what the dev didn't see. To walk through why this task that looked like 4 hours was actually 12.&lt;/p&gt;

&lt;p&gt;This is where the didactic side of the senior matters, and where teams really differ. Some leads let juniors slam their heads against the wall and call it "learning by doing". Others sit down and unpack the problem with them. The system doesn't fix that for you — but at least it makes the moment &lt;em&gt;visible&lt;/em&gt;, instead of burying it under a missed deadline that nobody wanted to admit was unrealistic from the start.&lt;/p&gt;

&lt;p&gt;Over time, the junior's estimates get sharper. Not because they got faster, but because they learned to &lt;em&gt;see&lt;/em&gt; more of the task before starting it. That's a skill hours-based estimation never teaches, because in hours-based estimation, the junior never gets to estimate at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What about the unknown?
&lt;/h2&gt;

&lt;p&gt;Every estimation system breaks at the same place: how do you estimate something nobody has done before?&lt;/p&gt;

&lt;p&gt;You don't. You &lt;strong&gt;spike&lt;/strong&gt; it.&lt;/p&gt;

&lt;p&gt;A spike is a timeboxed investigation. "Spend 4 hours figuring out if this is feasible, then come back." The output isn't an estimate — the output is &lt;em&gt;enough understanding to estimate&lt;/em&gt;. And honestly, half the time the spike basically solves the problem, because the hard part wasn't the doing, it was the figuring out.&lt;/p&gt;

&lt;p&gt;This is the part I think most teams miss. They try to estimate the unknown anyway, padding numbers "just in case", and end up with stories that are 80% mystery and 20% work. Spikes are the escape valve. Use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to actually do this
&lt;/h2&gt;

&lt;p&gt;If you're sold on the idea but wondering how it looks in practice, there's no single right answer. Here are a few techniques teams use — pick whichever fits your group's vibe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Planning poker.&lt;/strong&gt; Everyone on the team has a deck of cards with the values (3, 5, 8, 13, plus a "?" for "I have no idea, we need a spike"). Someone reads the task. Everyone picks a card face-down, then reveals at the same time. If the numbers diverge wildly, the highest and lowest explain their reasoning, and you re-vote. The simultaneous reveal is the whole point — it stops people from anchoring on whatever the most senior person said first.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;T-shirt sizes.&lt;/strong&gt; Same idea, but with S / M / L / XL instead of numbers. Useful for teams that find numbers feel falsely precise, or for early-stage estimation where you just want a rough bucket. You can always map sizes to points later if you need velocity tracking.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Affinity estimation.&lt;/strong&gt; Print all the tasks on cards, lay them on a table, and have the team physically group them by relative complexity — "this feels about as hard as that one". Fast for large backlogs, and surprisingly accurate, because humans are much better at &lt;em&gt;comparing&lt;/em&gt; than at &lt;em&gt;measuring&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can mix these. Some teams use affinity estimation for backlog grooming and planning poker for sprint refinement. Others just default to a quick t-shirt sizing in a 15-minute meeting and call it done.&lt;/p&gt;

&lt;p&gt;The technique matters less than the conversation it produces. If your team is genuinely talking about the problem — surfacing risks, sharing context, learning from each other — the format is just scaffolding. Pick whatever scaffolding gets you there.&lt;/p&gt;

&lt;h2&gt;
  
  
  "But the business needs dates"
&lt;/h2&gt;

&lt;p&gt;This is the objection that always comes up, and it's a fair one.&lt;/p&gt;

&lt;p&gt;The answer is &lt;strong&gt;velocity&lt;/strong&gt;. Track how many points your team actually completes per sprint over time. After a few sprints, you have a reasonably stable number. Divide the points remaining by your velocity to get the number of sprints left, and from that, a date range.&lt;/p&gt;
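&lt;p&gt;The projection itself is a few lines. A sketch with made-up numbers (every value below is hypothetical, not a recommendation):&lt;/p&gt;

```python
from datetime import date, timedelta

# Hypothetical inputs; replace with your own tracker data.
points_remaining = 120      # points left in the backlog
velocity = 18               # avg points completed per sprint (last ~6 sprints)
spread = 4                  # sprint-to-sprint variation you've observed
sprint = timedelta(weeks=2)
start = date(2026, 5, 11)   # hypothetical next sprint start

# Points remaining divided by velocity = sprints remaining.
# Using the optimistic and pessimistic velocity turns that into a date range.
earliest = start + (points_remaining / (velocity + spread)) * sprint
latest = start + (points_remaining / (velocity - spread)) * sprint
print(earliest, "to", latest)
```

&lt;p&gt;The point isn’t the exact dates. It’s that the range honestly widens when velocity is noisy, instead of pretending to a single-day precision nobody has.&lt;/p&gt;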

&lt;p&gt;I want to be honest, though: velocity isn't magic. It has real problems. It can be gamed by inflating points. It assumes a stable team — when people leave or join, it breaks. It works badly for highly exploratory work. And in the wrong hands it stops being a planning tool and becomes a productivity stick to beat people with.&lt;/p&gt;

&lt;p&gt;But used carefully, it gives you something hour-based estimation never does: &lt;strong&gt;a system that gets more accurate over time instead of less&lt;/strong&gt;. The curve is bumpy at first, and then it smooths out. With hours, the curve never smooths out, because the underlying signal was noise from the start.&lt;/p&gt;

&lt;h2&gt;
  
  
  When this doesn't apply
&lt;/h2&gt;

&lt;p&gt;I'm not selling a silver bullet. Complexity-based estimation is overkill when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your team is 1–3 people and you all have the same context anyway&lt;/li&gt;
&lt;li&gt;The work is repetitive (pure bug fixing, low-novelty maintenance)&lt;/li&gt;
&lt;li&gt;You're in an early prototype phase where everything is changing every day&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In those cases, hours — or no estimates at all — are probably fine. Don't impose ceremony where it doesn't earn its keep.&lt;/p&gt;

&lt;h2&gt;
  
  
  The honest summary
&lt;/h2&gt;

&lt;p&gt;After years of doing this, I don't think estimating in complexity is &lt;em&gt;more accurate&lt;/em&gt; than estimating in hours. Probably it isn't. But that was never the right question.&lt;/p&gt;

&lt;p&gt;The right question is: &lt;strong&gt;what kind of conversation do you want your team to have?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you estimate in hours, the conversation is individual and defensive. Whoever knows the most throws out a number, everyone nods, and the person with the least experience ends up trapped trying to hit a commitment they never had a real say in. Nobody learns. Nobody talks about the problem itself. Numbers just get distributed.&lt;/p&gt;

&lt;p&gt;When you estimate in complexity, the conversation is about the problem. Why is this a 5 and not a 3. What's hiding inside it. What risks it carries. Juniors learn by watching seniors reason. Seniors sometimes realize the "trivial" thing wasn't trivial. The team understands what they're about to do — together — before they do it.&lt;/p&gt;

&lt;p&gt;That's what hours don't give you, no matter how precise.&lt;/p&gt;




&lt;p&gt;If your team estimates in hours and it works for you, great — keep going. But if you find yourselves fighting estimates that never land, devs burning out, and PMs disappointed sprint after sprint, maybe the hours aren't being measured wrong.&lt;/p&gt;

&lt;p&gt;Maybe &lt;strong&gt;hours are just the wrong question.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What does estimation look like on your team? Do you fight with hours, swear by points, or have you found something else that works? I'd love to hear it in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agile</category>
      <category>productivity</category>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Your team isn’t slow — your WIP is too high (Little’s Law explained)</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Wed, 06 May 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/littles-law-the-math-that-explains-why-your-team-delivers-slowly-31j3</link>
      <guid>https://dev.to/mdenda/littles-law-the-math-that-explains-why-your-team-delivers-slowly-31j3</guid>
      <description>&lt;p&gt;A team with 20 open PRs is &lt;strong&gt;mathematically slower&lt;/strong&gt; than a team with 3.&lt;/p&gt;

&lt;p&gt;Not “feels slower.” Not “probably slower.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is slower.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your tickets take two weeks to cross the board while coding takes a few hours, the problem isn’t effort — it’s how much work your team keeps in flight at the same time.&lt;/p&gt;

&lt;p&gt;And there’s a 65-year-old law that explains exactly why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The equation
&lt;/h2&gt;

&lt;p&gt;Little’s Law, proven by MIT professor John Little in 1961, applies to any stable flow system over a long enough interval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lead Time = WIP / Throughput&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In plain English:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lead Time&lt;/strong&gt; — how long it takes for a ticket to go from started → merged → deployed
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WIP&lt;/strong&gt; — Work In Progress, the number of items currently in flight
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt; — how many items your team completes per unit time (usually per week)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not a metaphor. It governs factories, airport security lines, and fast-food drive-throughs.&lt;/p&gt;

&lt;p&gt;It also governs your Git workflow — whether you acknowledge it or not.&lt;/p&gt;




&lt;h2&gt;
  
  
  The example that breaks the illusion
&lt;/h2&gt;

&lt;p&gt;Let’s run the numbers on a real team.&lt;/p&gt;

&lt;p&gt;Priya’s team closes &lt;strong&gt;10 PRs per week&lt;/strong&gt; on average (measured over the last 8 weeks). At any given moment, they have &lt;strong&gt;20 PRs open&lt;/strong&gt; (in progress + in review).&lt;/p&gt;

&lt;p&gt;Their lead time is:&lt;/p&gt;

&lt;p&gt;20 WIP / 10 PRs per week = 2 weeks&lt;/p&gt;

&lt;p&gt;That means every ticket entering the system today is &lt;strong&gt;guaranteed&lt;/strong&gt; to take about two weeks to ship — even if the code itself takes 3 hours.&lt;/p&gt;

&lt;p&gt;When someone asks:&lt;/p&gt;

&lt;p&gt;“Why does this take two weeks if coding takes half a day?”&lt;/p&gt;

&lt;p&gt;This is the answer.&lt;/p&gt;

&lt;p&gt;The code is not slow.&lt;br&gt;&lt;br&gt;
The developers are not slow.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The queue is long.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The part that actually changes how you work
&lt;/h2&gt;

&lt;p&gt;If throughput is roughly stable (your team works at a certain pace), then:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lowering WIP lowers lead time proportionally.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Same team:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WIP 20 → 10 → lead time drops from 2 weeks to &lt;strong&gt;1 week&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;WIP 20 → 6 → lead time drops to &lt;strong&gt;~3 days&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing else changes.&lt;/p&gt;

&lt;p&gt;No overtime.&lt;br&gt;&lt;br&gt;
No process overhaul.  &lt;/p&gt;

&lt;p&gt;Just finishing work before starting new work.&lt;/p&gt;

&lt;p&gt;This is why WIP limits exist in Kanban — not as a constraint on developers, but as a way to &lt;strong&gt;shorten delivery time by physics&lt;/strong&gt;.&lt;/p&gt;
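&lt;p&gt;The whole section fits in one function. A quick sanity-check sketch:&lt;/p&gt;

```python
def lead_time_weeks(wip: int, throughput_per_week: float) -> float:
    """Little's Law for a stable system: Lead Time = WIP / Throughput."""
    return wip / throughput_per_week

# Priya's team: 20 PRs in flight, 10 merged per week.
print(lead_time_weeks(20, 10))  # 2.0 weeks

# Same team, same throughput, lower WIP:
print(lead_time_weeks(10, 10))  # 1.0 week
print(lead_time_weeks(6, 10))   # 0.6 weeks, roughly 3 working days
```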




&lt;h2&gt;
  
  
  “If I limit WIP, people will sit idle”
&lt;/h2&gt;

&lt;p&gt;This is the most common objection — and it’s backwards.&lt;/p&gt;

&lt;p&gt;A team with 10 open PRs is &lt;strong&gt;slower&lt;/strong&gt; than a team with 3.&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context switching is expensive&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Every open branch is cognitive load. Ten branches means ten partial states competing for attention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Half-done work has zero value&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A PR at 90% is still 0% delivered.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Waiting PRs are inventory, not progress&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
In manufacturing, inventory is waste. In software, a PR sitting in review for days is exactly that.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Limiting WIP forces a simple rule:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finish something before starting something new.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That alone compresses your delivery timeline.&lt;/p&gt;




&lt;h2&gt;
  
  
  You can measure this from Git (no fancy tools needed)
&lt;/h2&gt;

&lt;p&gt;You don’t need Jira dashboards or expensive analytics. You can get all three variables directly from Git.&lt;/p&gt;




&lt;h3&gt;
  
  
  Throughput — PRs merged per week
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh &lt;span class="nb"&gt;pr &lt;/span&gt;list &lt;span class="nt"&gt;--state&lt;/span&gt; merged &lt;span class="nt"&gt;--limit&lt;/span&gt; 200 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--json&lt;/span&gt; mergedAt &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'[.[] | .mergedAt | fromdate | strftime("%Y-W%V")] 
        | group_by(.) 
        | map({week: .[0], count: length})'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Average the last 8 weeks for a stable number.&lt;/p&gt;




&lt;h3&gt;
  
  
  WIP — what’s currently in flight
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# PRs in review&lt;/span&gt;
gh &lt;span class="nb"&gt;pr &lt;/span&gt;list &lt;span class="nt"&gt;--state&lt;/span&gt; open | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;

&lt;span class="c"&gt;# Branches not yet merged (likely in progress)&lt;/span&gt;
git fetch &lt;span class="nt"&gt;--prune&lt;/span&gt;
git branch &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;--no-merged&lt;/span&gt; origin/main | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add both — that’s your WIP. (Branches that already have an open PR show up in both counts; subtract the overlap if you want an exact number.)&lt;/p&gt;




&lt;h3&gt;
  
  
  Lead time — created → merged
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh &lt;span class="nb"&gt;pr &lt;/span&gt;list &lt;span class="nt"&gt;--state&lt;/span&gt; merged &lt;span class="nt"&gt;--limit&lt;/span&gt; 50 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--json&lt;/span&gt; number,createdAt,mergedAt &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'.[] | {n: .number, hours: (((.mergedAt|fromdate) - (.createdAt|fromdate))/3600|floor)}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Don’t just look at the average — look at the spread. A few PRs taking 10+ days can dominate the system.&lt;/p&gt;
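&lt;p&gt;For example, paste the hours column from that command into something like this (the numbers below are invented to show the effect):&lt;/p&gt;

```python
import statistics

# Hypothetical lead times in hours for 12 merged PRs; the two large
# values are PRs that sat in review for over a week.
hours = [3, 5, 6, 8, 9, 11, 14, 20, 26, 40, 250, 310]

print(statistics.mean(hours))    # 58.5 -- dragged up by the two stalled PRs
print(statistics.median(hours))  # 12.5 -- what a typical PR actually experiences
```

&lt;p&gt;A mean of ~58 hours against a median of ~12 means the problem isn’t the typical PR; it’s the handful that stall. Fix the stalls, not the team.&lt;/p&gt;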




&lt;h2&gt;
  
  
  What to do with the numbers
&lt;/h2&gt;

&lt;p&gt;Once you have all three:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the equation matches your lead time → your system is stable. Lower WIP to go faster.
&lt;/li&gt;
&lt;li&gt;If your lead time is &lt;strong&gt;longer&lt;/strong&gt; → you have bottlenecks (reviews, CI, environments). Fix those first.
&lt;/li&gt;
&lt;li&gt;If it’s &lt;strong&gt;shorter&lt;/strong&gt; → you’re likely measuring WIP wrong or hiding variance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re leading a team and this number is higher than expected, you’re not alone — most teams never measure it.&lt;/p&gt;




&lt;h2&gt;
  
  
  A practical starting point for WIP limits
&lt;/h2&gt;

&lt;p&gt;There’s no universal number, but these are solid defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;In Progress ≈ team size&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;In Review ≈ team size ÷ 2&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example (6 devs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In Progress = 6
&lt;/li&gt;
&lt;li&gt;In Review = 3
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From there, adjust slowly. Not weekly — &lt;strong&gt;quarterly&lt;/strong&gt;. The signal takes time.&lt;/p&gt;

&lt;p&gt;If lead time is too long:&lt;br&gt;
Don’t add people — lower WIP.&lt;/p&gt;




&lt;h2&gt;
  
  
  When perception and math disagree
&lt;/h2&gt;

&lt;p&gt;This is the uncomfortable part.&lt;/p&gt;

&lt;p&gt;A team with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WIP = 20
&lt;/li&gt;
&lt;li&gt;Throughput = 5/week
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Has a &lt;strong&gt;4-week lead time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It doesn’t matter if individual PRs “feel fast.”&lt;/p&gt;

&lt;p&gt;Users don’t feel individual PR speed.&lt;br&gt;&lt;br&gt;
They feel &lt;strong&gt;system lead time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When the board says “we’re delivering” but the math says 4 weeks — the business experiences 4 weeks.&lt;/p&gt;




&lt;h2&gt;
  
  
  The takeaway
&lt;/h2&gt;

&lt;p&gt;You don’t need a new process.&lt;/p&gt;

&lt;p&gt;You don’t need better estimates.&lt;/p&gt;

&lt;p&gt;You need a constraint:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop starting. Start finishing.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is adapted from &lt;a href="https://mdenda.gumroad.com/l/git-in-depth" rel="noopener noreferrer"&gt;Git in Depth: From Solo Developer to Engineering Teams&lt;/a&gt;, a 658-page book covering Git the way it’s actually used in real engineering teams.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Little’s Law looks simple — until you apply it and realize your entire workflow is shaped by it. In the book, I go deeper into how this connects to branching strategies, PR flows, and why some teams get faster as they scale while others slow down.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>productivity</category>
      <category>teams</category>
    </item>
    <item>
      <title>I shipped a feature in a language I barely know — thanks to AI (and not for the reason you think)</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Thu, 30 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/i-shipped-a-feature-in-a-language-i-barely-know-thanks-to-ai-and-not-for-the-reason-you-think-5hi5</link>
      <guid>https://dev.to/mdenda/i-shipped-a-feature-in-a-language-i-barely-know-thanks-to-ai-and-not-for-the-reason-you-think-5hi5</guid>
      <description>&lt;p&gt;Last week, I built a small service in Go.&lt;/p&gt;

&lt;p&gt;I don’t write Go.&lt;/p&gt;

&lt;p&gt;I don’t know its idioms. I don’t have muscle memory for its syntax. I’ve never used it in production.&lt;/p&gt;

&lt;p&gt;And yet, in a couple of hours, I had something working. Clean enough. Tested. Doing what it was supposed to do.&lt;/p&gt;

&lt;p&gt;AI did most of the typing. That part isn’t surprising anymore.&lt;/p&gt;

&lt;p&gt;What &lt;em&gt;was&lt;/em&gt; surprising is that the bottleneck wasn’t the language.&lt;/p&gt;

&lt;p&gt;It was me.&lt;/p&gt;

&lt;h2&gt;
  
  
  The part that feels like magic
&lt;/h2&gt;

&lt;p&gt;If you haven’t tried this yet, it does feel like a superpower at first.&lt;/p&gt;

&lt;p&gt;You describe what you want:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Build a REST endpoint that processes X, stores Y, and returns Z.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;project structure&lt;/li&gt;
&lt;li&gt;handlers&lt;/li&gt;
&lt;li&gt;models&lt;/li&gt;
&lt;li&gt;tests&lt;/li&gt;
&lt;li&gt;even some documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You iterate a bit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“this should be async”&lt;/li&gt;
&lt;li&gt;“add retries”&lt;/li&gt;
&lt;li&gt;“separate this into a service layer”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And you end up with a working feature in a language you barely know.&lt;/p&gt;

&lt;p&gt;No docs. No Stack Overflow. No long detours learning syntax.&lt;/p&gt;

&lt;p&gt;It’s tempting to conclude:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Languages don’t matter anymore.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s not what’s happening.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually made it work
&lt;/h2&gt;

&lt;p&gt;I didn’t know Go.&lt;/p&gt;

&lt;p&gt;But I knew what I wanted the system to do.&lt;/p&gt;

&lt;p&gt;I knew:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;this endpoint shouldn’t block on external calls&lt;/li&gt;
&lt;li&gt;this logic needed to be isolated from transport concerns&lt;/li&gt;
&lt;li&gt;this data access pattern would become a bottleneck if left naive&lt;/li&gt;
&lt;li&gt;this part would need caching if traffic increased&lt;/li&gt;
&lt;li&gt;this failure mode needed explicit handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of that is Go.&lt;/p&gt;

&lt;p&gt;That’s architecture. That’s design. That’s experience.&lt;/p&gt;

&lt;p&gt;AI was just translating those decisions into code.&lt;/p&gt;

&lt;p&gt;And that translation layer is exactly what AI is incredibly good at.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI is a universal translator for code — not a substitute for thinking.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Where things start to break
&lt;/h2&gt;

&lt;p&gt;Here’s the part that’s easy to ignore when everything is working.&lt;/p&gt;

&lt;p&gt;I could build the service.&lt;/p&gt;

&lt;p&gt;I couldn’t &lt;em&gt;own&lt;/em&gt; it.&lt;/p&gt;

&lt;p&gt;If something subtle went wrong, I wasn’t operating from intuition. I was guessing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this idiomatic Go, or just something that compiles?&lt;/li&gt;
&lt;li&gt;Is this memory-safe under load?&lt;/li&gt;
&lt;li&gt;Is this concurrency model correct, or just “seems fine”?&lt;/li&gt;
&lt;li&gt;Is the performance acceptable, or accidentally terrible?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I write in a language I know well, those questions don’t feel like questions. They feel like constraints I naturally design around.&lt;/p&gt;

&lt;p&gt;Here, they were blind spots.&lt;/p&gt;

&lt;p&gt;AI helped me move fast. It didn’t remove those blind spots.&lt;/p&gt;

&lt;p&gt;It just made them easier to miss.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real shift
&lt;/h2&gt;

&lt;p&gt;Before AI, learning a new language had a steep upfront cost.&lt;/p&gt;

&lt;p&gt;You had to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;learn syntax&lt;/li&gt;
&lt;li&gt;understand standard libraries&lt;/li&gt;
&lt;li&gt;internalize patterns&lt;/li&gt;
&lt;li&gt;build enough context to be productive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, that cost is dramatically lower.&lt;/p&gt;

&lt;p&gt;You can get to “working code” almost immediately.&lt;/p&gt;

&lt;p&gt;But something important didn’t change:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The quality of the system still depends on the quality of the decisions behind it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI removed the friction of &lt;em&gt;writing&lt;/em&gt; code.&lt;/p&gt;

&lt;p&gt;It did not remove the need to decide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what code should exist&lt;/li&gt;
&lt;li&gt;how components should interact&lt;/li&gt;
&lt;li&gt;what trade-offs are acceptable&lt;/li&gt;
&lt;li&gt;what will break at scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s still on you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this enables (and what it doesn’t)
&lt;/h2&gt;

&lt;p&gt;This is genuinely powerful.&lt;/p&gt;

&lt;p&gt;It means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you can explore new stacks without a huge upfront investment&lt;/li&gt;
&lt;li&gt;you can prototype across ecosystems quickly&lt;/li&gt;
&lt;li&gt;you can apply your knowledge in more places than before&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it does &lt;em&gt;not&lt;/em&gt; mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you understand the language&lt;/li&gt;
&lt;li&gt;you can debug deep issues&lt;/li&gt;
&lt;li&gt;you can reason about edge cases confidently&lt;/li&gt;
&lt;li&gt;you can make informed trade-offs inside that ecosystem&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;You can switch languages. You can’t switch fundamentals.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The uncomfortable implication
&lt;/h2&gt;

&lt;p&gt;Two developers can now build the same feature in a language neither of them knows.&lt;/p&gt;

&lt;p&gt;Both use AI. Both get working code.&lt;/p&gt;

&lt;p&gt;The difference isn’t in how well they prompt.&lt;/p&gt;

&lt;p&gt;It’s in what they &lt;em&gt;know to care about&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;One will ship something that works.&lt;/p&gt;

&lt;p&gt;The other will ship something that keeps working.&lt;/p&gt;

&lt;p&gt;From the outside, those look identical — at least at first.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed (and what didn’t)
&lt;/h2&gt;

&lt;p&gt;AI didn’t make programming languages irrelevant.&lt;/p&gt;

&lt;p&gt;It made them less of a bottleneck.&lt;/p&gt;

&lt;p&gt;The bottleneck moved somewhere else:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;system design&lt;/li&gt;
&lt;li&gt;understanding trade-offs&lt;/li&gt;
&lt;li&gt;anticipating failure modes&lt;/li&gt;
&lt;li&gt;reviewing and validating code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words: fundamentals.&lt;/p&gt;

&lt;p&gt;Not the kind you memorize.&lt;/p&gt;

&lt;p&gt;The kind you only get by building, breaking, and fixing systems over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I took away from this
&lt;/h2&gt;

&lt;p&gt;Using AI in a language I barely know wasn’t impressive.&lt;/p&gt;

&lt;p&gt;It was revealing.&lt;/p&gt;

&lt;p&gt;It showed me very clearly what parts of my skillset are portable — and what parts aren’t.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Syntax? Portable.&lt;/li&gt;
&lt;li&gt;Patterns? Mostly portable.&lt;/li&gt;
&lt;li&gt;Judgment? Not portable. Earned.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And AI amplifies all three.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’ll write about next
&lt;/h2&gt;

&lt;p&gt;This might look like AI is leveling the playing field.&lt;/p&gt;

&lt;p&gt;It’s not.&lt;/p&gt;

&lt;p&gt;If anything, it’s making the differences between developers harder to see — and more important.&lt;/p&gt;

&lt;p&gt;Two people can now produce similar-looking code at similar speed.&lt;/p&gt;

&lt;p&gt;That doesn’t mean they built the same system.&lt;/p&gt;

&lt;p&gt;I’ll dig into that in the next post.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I write about Git and engineering practice for working developers. My book &lt;strong&gt;Git in Depth&lt;/strong&gt; is 658 pages of the fundamentals AI assumes you already know.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're trying to build stronger foundations — not just ship faster — you can check it out here: &lt;a href="https://mdenda.gumroad.com/l/git-in-depth" rel="noopener noreferrer"&gt;https://mdenda.gumroad.com/l/git-in-depth&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Your Kanban board is lying to you (and Git knows it)</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Wed, 29 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/your-kanban-board-is-lying-to-you-and-git-knows-it-1hof</link>
      <guid>https://dev.to/mdenda/your-kanban-board-is-lying-to-you-and-git-knows-it-1hof</guid>
      <description>&lt;p&gt;Look at your team's board right now.&lt;/p&gt;

&lt;p&gt;How many tickets are in "In Review" that haven't been looked at in three days? How many are in "In QA" even though nobody's tested them? How many jumped straight from "In Progress" to "Done" without ever appearing in the intermediate columns?&lt;/p&gt;

&lt;p&gt;If you're like most teams, the answer is "a lot." Your board doesn't reflect reality. It's a fiction your team politely maintains.&lt;/p&gt;

&lt;p&gt;Here's the uncomfortable truth: &lt;strong&gt;your Git repository knows the real state of every ticket. Your board just hasn't caught up.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This article is about bridging that gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem: columns designed by process people, not by engineers
&lt;/h2&gt;

&lt;p&gt;Most boards are designed by someone who thinks about process, not about what actually happens in Git. That's why you end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"Ready for Development"&lt;/strong&gt; — a holding pen for tickets nobody wants to admit aren't ready. If it's prioritized and has detail, it's "To Do." If it doesn't, it's still "Backlog."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Ready for QA"&lt;/strong&gt; — this implies a manual handoff. If the code is merged and deployed to a place QA can access, it's in QA. The deployment is the handoff, not a ticket drag.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Deployed" as a column separate from "Done"&lt;/strong&gt; — if the code is in production and verified, it's done. If it's deployed but not verified, it's still in QA (in production). A column between them means nobody is verifying production deployments.
These columns feel orderly on a wiki diagram. In practice, they're fiction. Tickets skip them. Developers lie about their state. The board drifts from reality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The test: can someone verify the column from Git alone?
&lt;/h2&gt;

&lt;p&gt;Here's the rule that cuts through all the noise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Every column must correspond to a state that someone can verify, independently of anyone else's memory.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;"In Progress" passes this test — is there a branch with commits? Verifiable.&lt;/p&gt;

&lt;p&gt;"Code Review" passes — is there an open PR? Does it have an approving review? Verifiable via &lt;code&gt;gh pr view&lt;/code&gt; or the hosting platform's API.&lt;/p&gt;

&lt;p&gt;"Ready for Testing" fails — ready according to whom? Tested where? This is a human statement, not a Git state.&lt;/p&gt;

&lt;p&gt;If a column can't be verified from the repo, one of two things is happening:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The column represents a task-flow state&lt;/strong&gt;, not a code-flow state. That's fine — but it needs to be a &lt;em&gt;lateral exit&lt;/em&gt;, not part of the main flow. "Blocked" and "Needs Spec" are valid columns, but they're paused states, not progression states.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The column is fiction.&lt;/strong&gt; Remove it, or fix the process so it becomes real.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Two dimensions that most boards collapse into one
&lt;/h2&gt;

&lt;p&gt;The biggest insight I had while designing boards with teams: there are two independent dimensions of ticket state, and teams keep smashing them into a single horizontal flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code-flow columns&lt;/strong&gt; (left to right): &lt;code&gt;To Do → In Progress → In Review → In QA → Done&lt;/code&gt;. These map directly to Git states.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task-flow columns&lt;/strong&gt; (lateral exits): &lt;code&gt;Blocked&lt;/code&gt;, &lt;code&gt;On Hold&lt;/code&gt;, &lt;code&gt;Needs Spec&lt;/code&gt;, &lt;code&gt;Waiting for Client&lt;/code&gt;. These don't map to Git at all — they reflect the state of the work, not the state of the code.&lt;/p&gt;

&lt;p&gt;A ticket in "Blocked" still has a Git branch open. It hasn't moved backward in the flow — it's paused. When you separate these two dimensions, you prevent the common mistake of adding "Blocked" as a column between "In Progress" and "In Review," which breaks the flow and confuses everyone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3clnihdferw3iss1vo3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3clnihdferw3iss1vo3.png" alt="Diagram showing two independent dimensions of a board. The top row shows code-flow columns arranged horizontally with arrows: To Do, In Progress, In Review, In QA, Done — each labeled with its Git equivalent. The bottom row shows task-flow columns arranged laterally: Blocked, On Hold, Needs Spec, Waiting for Client — with dashed borders indicating they are lateral exits. Dashed arrows descend from code-flow states to task-flow states, showing that any code-flow state can derive to a lateral exit at any time." width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The reality: columns that get skipped
&lt;/h2&gt;

&lt;p&gt;On paper, tickets move through every column. In practice, developers skip half of them.&lt;/p&gt;

&lt;p&gt;Tickets jump from "In Progress" directly to "Done" — skipping "In Review" and "In QA" entirely. Why? Because the developer merged the PR, verified it themselves in production, and dragged the ticket in one motion.&lt;/p&gt;

&lt;p&gt;This isn't laziness. It's the board not matching reality. If the team doesn't have a dedicated QA person or a staging environment, the "In QA" column is fiction. Tickets skip it because the state it represents doesn't exist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The rule: if a column is consistently skipped, remove it or fix the process.&lt;/strong&gt; Either add the missing step (a real QA phase, a real staging environment) or remove the column that pretends it exists. A board with skipped columns is a board that lies.&lt;/p&gt;

&lt;h2&gt;
  
  
  The solution: let Git move the tickets
&lt;/h2&gt;

&lt;p&gt;The best boards are updated by Git events, not by humans dragging cards. Most tracking systems support this natively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Opening a PR&lt;/strong&gt; moves the ticket to "In Review" — automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review approval&lt;/strong&gt; can move it to "Approved" — automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merging the PR&lt;/strong&gt; moves it to "In QA" or "Done" — automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tagging a release&lt;/strong&gt; can move it to "Done" — automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a GitHub Actions workflow that moves a GitHub Projects card when a PR is opened or merged:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/board-automation.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Board automation&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;closed&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;move-card&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Move to In Review when PR opens&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event.action == 'opened'&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/github-script@v7&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;// Move the linked issue to "In Review" column&lt;/span&gt;
            &lt;span class="s"&gt;// Uses GitHub Projects V2 API&lt;/span&gt;
            &lt;span class="s"&gt;const query = `mutation($projectId: ID!, $itemId: ID!, $fieldId: ID!, $value: String!) {&lt;/span&gt;
              &lt;span class="s"&gt;updateProjectV2ItemFieldValue(input: {&lt;/span&gt;
                &lt;span class="s"&gt;projectId: $projectId, itemId: $itemId,&lt;/span&gt;
                &lt;span class="s"&gt;fieldId: $fieldId,&lt;/span&gt;
                &lt;span class="s"&gt;value: { singleSelectOptionId: $value }&lt;/span&gt;
              &lt;span class="s"&gt;}) { projectV2Item { id } }&lt;/span&gt;
            &lt;span class="s"&gt;}`;&lt;/span&gt;
            &lt;span class="s"&gt;// projectId, fieldId, and optionId come from your project's&lt;/span&gt;
            &lt;span class="s"&gt;// settings — query them once with the GraphQL explorer&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Move to In QA when PR merges&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.event.action == 'closed' &amp;amp;&amp;amp; github.event.pull_request.merged&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/github-script@v7&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;// Same mutation, different singleSelectOptionId&lt;/span&gt;
            &lt;span class="s"&gt;// pointing to the "In QA" column&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Jira, it's even simpler — built-in automation rules handle this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Rule 1:&lt;/strong&gt; Trigger: Pull request created. Condition: Issue status is 'In Progress'. Action: Transition to 'In Review'.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 2:&lt;/strong&gt; Trigger: Pull request merged. Condition: Issue status is 'In Review'. Action: Transition to 'In QA'.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Linear, Shortcut, and Azure DevOps have similar integrations. The specifics differ, but the pattern is universal: &lt;strong&gt;Git event → webhook → board transition&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need to make this work
&lt;/h2&gt;

&lt;p&gt;The automation is trivial. The discipline it requires is the hard part:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Branch names must include ticket IDs.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Good — automation can match on this&lt;/span&gt;
git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; feature/PROJ-247-search-filters

&lt;span class="c"&gt;# Bad — automation has nothing to match&lt;/span&gt;
git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; sam/search-thing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
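On the automation side, the rule typically matches that ticket ID with a regex. A minimal sketch of the extraction step, using the branch name above:

```shell
# Pull the ticket ID out of a branch name the way a board
# automation rule would (pattern: PROJECT-NUMBER)
branch="feature/PROJ-247-search-filters"
ticket=$(printf '%s' "$branch" | grep -oE '[A-Z]+-[0-9]+')
echo "$ticket"   # PROJ-247
```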



&lt;p&gt;&lt;strong&gt;2. PR descriptions must reference the ticket.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh &lt;span class="nb"&gt;pr &lt;/span&gt;create &lt;span class="nt"&gt;--title&lt;/span&gt; &lt;span class="s2"&gt;"feat(search): add filters"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
             &lt;span class="nt"&gt;--body&lt;/span&gt; &lt;span class="s2"&gt;"Closes PROJ-247"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Accept that some transitions stay manual.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"In Progress" is manual — and that's fine. When a developer picks up a ticket, they drag it and create a local branch. Nobody else knows about the branch until they push. Moving the ticket is faster than pushing a WIP commit just to trigger automation, and it avoids polluting Git history with empty commits.&lt;/p&gt;

&lt;p&gt;The goal isn't 100% automation. The goal is that &lt;strong&gt;the board reflects reality&lt;/strong&gt;, even when nobody has time to update it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reading the board from Git (when you don't trust the board)
&lt;/h2&gt;

&lt;p&gt;Once you accept that Git is the source of truth, you can answer board questions without even looking at the board.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"What's actually in progress right now?"&lt;/strong&gt; Remote branches ahead of main with no open PR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git fetch &lt;span class="nt"&gt;--prune&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;branch &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;git &lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="nt"&gt;-each-ref&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'%(refname:short)'&lt;/span&gt; refs/remotes/origin/ &lt;span class="se"&gt;\&lt;/span&gt;
                | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s1"&gt;'origin/main$'&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s1"&gt;'origin/HEAD'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nv"&gt;ahead&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git rev-list &lt;span class="nt"&gt;--count&lt;/span&gt; origin/main..&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ahead&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-gt&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="k"&gt;continue
  &lt;/span&gt;&lt;span class="nv"&gt;has_pr&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;gh &lt;span class="nb"&gt;pr &lt;/span&gt;list &lt;span class="nt"&gt;--head&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;branch&lt;/span&gt;&lt;span class="p"&gt;#origin/&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--state&lt;/span&gt; open &lt;span class="nt"&gt;--json&lt;/span&gt; number &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'length'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$has_pr&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"In Progress: &lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt; (&lt;/span&gt;&lt;span class="nv"&gt;$ahead&lt;/span&gt;&lt;span class="s2"&gt; commits ahead)"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;"What's in review?"&lt;/strong&gt; Open PRs — simplest query, most useful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh &lt;span class="nb"&gt;pr &lt;/span&gt;list &lt;span class="nt"&gt;--state&lt;/span&gt; open &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--json&lt;/span&gt; number,title,author,reviewDecision,createdAt &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'.[] | "\(.number)\t\(.author.login)\t\(.reviewDecision // "PENDING")\t\(.title)"'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;"What's merged but not deployed?"&lt;/strong&gt; Commits on main since the last production tag — the invisible column most teams don't track:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# What's on main that hasn't been tagged yet?&lt;/span&gt;
git log &lt;span class="nt"&gt;--oneline&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;git describe &lt;span class="nt"&gt;--tags&lt;/span&gt; &lt;span class="nt"&gt;--abbrev&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0&lt;span class="si"&gt;)&lt;/span&gt;..origin/main

&lt;span class="c"&gt;# Extract ticket IDs — these are "done but undelivered"&lt;/span&gt;
git log &lt;span class="nt"&gt;--pretty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;%s &lt;span class="si"&gt;$(&lt;/span&gt;git describe &lt;span class="nt"&gt;--tags&lt;/span&gt; &lt;span class="nt"&gt;--abbrev&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0&lt;span class="si"&gt;)&lt;/span&gt;..origin/main &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oE&lt;/span&gt; &lt;span class="s1"&gt;'[A-Z]+-[0-9]+'&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;"What's actually in production right now?"&lt;/strong&gt; Tickets referenced by commits reachable from the production tag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;CURRENT_PROD_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;git describe &lt;span class="nt"&gt;--tags&lt;/span&gt; &lt;span class="nt"&gt;--abbrev&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nt"&gt;--match&lt;/span&gt; &lt;span class="s1"&gt;'v*'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
git log &lt;span class="nt"&gt;--pretty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;%s &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CURRENT_PROD_TAG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oE&lt;/span&gt; &lt;span class="s1"&gt;'[A-Z]+-[0-9]+'&lt;/span&gt; | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run these for a week. Compare against what the board shows. If they disagree, &lt;strong&gt;the board is lying&lt;/strong&gt; — and now you have the data to fix either the board or the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  The mindset shift
&lt;/h2&gt;

&lt;p&gt;The board isn't the truth. It's a summary of the truth that drifts whenever someone forgets to drag a ticket.&lt;/p&gt;

&lt;p&gt;Git &lt;strong&gt;is&lt;/strong&gt; the truth. Every branch, every PR, every merge is a real event with a real timestamp. You can't forget to tell Git — it records everything automatically, as a side effect of doing the work.&lt;/p&gt;

&lt;p&gt;Teams that embrace this get three wins:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The board starts matching reality&lt;/strong&gt; (because Git drives it, not human memory)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;"Where is this ticket?" gets one answer, not three&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standup meetings stop being ticket-hunting exercises&lt;/strong&gt; and start being real conversations about blockers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's what a good board does. That's what yours could be doing, if you let Git update it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is adapted from &lt;a href="https://mdenda.gumroad.com/l/git-in-depth" rel="noopener noreferrer"&gt;Git in Depth: From Solo Developer to Engineering Teams&lt;/a&gt;, a 658-page book covering Git the way it's actually used in real engineering teams — including a full chapter on aligning board columns with Git states.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Related: &lt;a href="https://dev.to/mdenda/what-actually-happens-when-you-git-merge-no-ff-4il1"&gt;What actually happens when you git merge --no-ff&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;See all my articles on Git and engineering practice: &lt;a href="https://dev.to/mdenda"&gt;dev.to/mdenda&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>agile</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Hiding Data in Plain Sight: Building Anyhide in Rust</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Tue, 28 Apr 2026 14:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/hiding-data-in-plain-sight-building-anyhide-in-rust-3jnd</link>
      <guid>https://dev.to/mdenda/hiding-data-in-plain-sight-building-anyhide-in-rust-3jnd</guid>
      <description>&lt;p&gt;&lt;em&gt;This is the first post of a 6-part series where I unpack the design of &lt;a href="https://github.com/matutetandil/anyhide" rel="noopener noreferrer"&gt;Anyhide&lt;/a&gt; — a steganography tool I built in Rust. Each post tackles one feature, one decision, and the code behind it.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Classical steganography is simple: you take an image, flip the least-significant bits of some pixels, and smuggle your message inside. The image now carries a payload. You send the image. The recipient extracts the payload.&lt;/p&gt;

&lt;p&gt;It works. It's also, in 2026, kind of a disaster.&lt;/p&gt;

&lt;p&gt;LSB steganography is detectable by statistical analysis. Any carrier you modify is evidence. Any file you transmit is a file someone can analyze. If your threat model includes an adversary who can examine the files you send, then hiding data &lt;em&gt;inside&lt;/em&gt; those files is exactly the wrong move.&lt;/p&gt;

&lt;p&gt;So I spent the last few months building a steganography tool that works the other way around. The carrier is never modified. The carrier is never transmitted. What goes over the wire is a short encrypted code — a set of positions into a file both parties already have.&lt;/p&gt;

&lt;p&gt;It's called &lt;em&gt;Anyhide&lt;/em&gt;. It's written in Rust. The repo is &lt;a href="https://github.com/matutetandil/anyhide" rel="noopener noreferrer"&gt;here&lt;/a&gt;. This post is a walkthrough of the core idea.&lt;/p&gt;

&lt;h2&gt;
  
  
  The inversion
&lt;/h2&gt;

&lt;p&gt;The mental model of classical steganography is "hide the payload in the carrier." The mental model of Anyhide is "hide the &lt;em&gt;map&lt;/em&gt; to the payload in a shared reference."&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SENDER                         RECEIVER

carrier.mp4 ──┐                      ┌── carrier.mp4
              │                      │   (same file, unchanged)
secret.zip ───┼──► ANYHIDE CODE ─────┼──► secret.zip
              │   (only this         │
passphrase ───┘    is sent)          └── passphrase
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both parties hold the same file — any file. A video, a PDF, an MP3, a PNG, a Linux kernel tarball. It doesn't matter. The file is never touched and never transmitted.&lt;/p&gt;

&lt;p&gt;The sender encrypts their message and encodes it as a sequence of byte positions that reconstruct the ciphertext when read out of the shared carrier. Those positions, compressed and encrypted again, become the "Anyhide code" — a base64 string that fits in a tweet.&lt;/p&gt;

&lt;p&gt;That string is all that travels.&lt;/p&gt;
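To make the inversion concrete, here's a toy of the position-map idea in plain shell. It is not Anyhide's actual encoding (the real tool encrypts the message and compresses and encrypts the positions); it only shows the core trick: send offsets into a shared carrier instead of the bytes themselves.

```shell
# Toy only: map each character of the message to an offset in a
# carrier both parties already share, and "transmit" only the offsets.
# Assumes every message character occurs somewhere in the carrier.
carrier="the quick brown fox jumps over the lazy dog"
message="ox"

positions=""
i=0
while [ "$i" -lt "${#message}" ]; do
  ch="${message:$i:1}"
  prefix="${carrier%%"$ch"*}"       # carrier text before the first match
  positions="$positions ${#prefix}" # offset of this character
  i=$((i+1))
done
echo "code:$positions"              # this is all that would be sent

# Receiver side: read the offsets back out of the same carrier
decoded=""
for p in $positions; do
  decoded="$decoded${carrier:$p:1}"
done
echo "decoded: $decoded"            # decoded: ox
```

The carrier never changes and never travels; only the offset list does, and without the carrier the offsets mean nothing.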

&lt;h2&gt;
  
  
  Two things this gets you for free
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Plausible deniability.&lt;/em&gt; The string on the wire looks like any other base64 blob. An adversary can't even tell there's a payload in the carrier, because the carrier isn't involved. There's nothing to analyze.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;No forensic artifacts on the carrier.&lt;/em&gt; Traditional stego tools leave telltale statistical anomalies in the carrier — histograms that don't match natural images, file sizes that are off by exactly the right number of bits. Anyhide touches nothing. If law enforcement seizes your laptop and finds the MP4, they find an ordinary MP4.&lt;/p&gt;

&lt;h2&gt;
  
  
  The demo
&lt;/h2&gt;

&lt;p&gt;Here's what the actual flow looks like on the command line.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Both parties generate keypairs once&lt;/span&gt;
alice&lt;span class="nv"&gt;$ &lt;/span&gt;anyhide keygen &lt;span class="nt"&gt;-o&lt;/span&gt; alice
bob&lt;span class="nv"&gt;$ &lt;/span&gt;  anyhide keygen &lt;span class="nt"&gt;-o&lt;/span&gt; bob

&lt;span class="c"&gt;# They exchange public keys somehow — this is standard PKI&lt;/span&gt;
&lt;span class="c"&gt;# Alice has bob.pub, Bob has alice.pub&lt;/span&gt;

&lt;span class="c"&gt;# Alice hides a message using a shared carrier both of them have&lt;/span&gt;
alice&lt;span class="nv"&gt;$ &lt;/span&gt;anyhide encode &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; shared_video.mp4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"meet at the usual place, 8pm"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"correcthorse"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--their-key&lt;/span&gt; bob.pub

&lt;span class="c"&gt;# Output:&lt;/span&gt;
&lt;span class="c"&gt;# AwNhYmMxMjM0NTY3ODkwYWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXox... (~200 chars)&lt;/span&gt;

&lt;span class="c"&gt;# Alice sends that string to Bob. Any channel. SMS, email, a napkin.&lt;/span&gt;
&lt;span class="c"&gt;# Bob decodes using the same carrier + his private key + passphrase&lt;/span&gt;

bob&lt;span class="nv"&gt;$ &lt;/span&gt;anyhide decode &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--code&lt;/span&gt; &lt;span class="s2"&gt;"AwNhYmMxMjM0..."&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-c&lt;/span&gt; shared_video.mp4 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"correcthorse"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--my-key&lt;/span&gt; bob.key

&lt;span class="c"&gt;# Output: meet at the usual place, 8pm&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;shared_video.mp4&lt;/code&gt; never moved. The passphrase never moved. Only that one base64 string moved.&lt;/p&gt;

&lt;p&gt;If an adversary intercepts the string and doesn't have the carrier, they have nothing. If they have the carrier but not the passphrase, they have nothing. If they have everything but the wrong passphrase, they get deterministic garbage — not an error message. (More on that in Post 3.)&lt;/p&gt;

&lt;h2&gt;
  
  
  The Rust that makes it possible
&lt;/h2&gt;

&lt;p&gt;At the heart of Anyhide is a small, clean abstraction: the &lt;code&gt;Carrier&lt;/code&gt; enum. It's the kind of thing Rust lets you write that would be awkward in most languages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// src/text/carrier.rs&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;enum&lt;/span&gt; &lt;span class="n"&gt;Carrier&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cd"&gt;/// Text carrier - uses substring matching (case-insensitive)&lt;/span&gt;
    &lt;span class="nf"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CarrierSearch&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="cd"&gt;/// Binary carrier - uses byte-sequence matching&lt;/span&gt;
    &lt;span class="nf"&gt;Binary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BinaryCarrierSearch&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;impl&lt;/span&gt; &lt;span class="n"&gt;Carrier&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;from_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nn"&gt;Carrier&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;CarrierSearch&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;Self&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nn"&gt;Carrier&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;Binary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;BinaryCarrierSearch&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="cd"&gt;/// Detects carrier type from file extension and loads appropriately.&lt;/span&gt;
    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;path&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;io&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nb"&gt;Result&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;Self&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;extension&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="nf"&gt;.extension&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="nf"&gt;.and_then&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="nf"&gt;.to_str&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
            &lt;span class="nf"&gt;.unwrap_or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;.to_lowercase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="n"&gt;extension&lt;/span&gt;&lt;span class="nf"&gt;.as_str&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"txt"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"md"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"text"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"csv"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"json"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"xml"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"html"&lt;/span&gt; &lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="s"&gt;"htm"&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read_to_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Carrier&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;Carrier&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two variants, one trait-like surface. Text carriers use substring matching (so you can hide a message by pointing at character positions in Shakespeare). Binary carriers use byte-sequence matching (so you can hide a message inside a raw MP4). The encoder and decoder don't care which one they got — they pattern-match on the enum and dispatch.&lt;/p&gt;

&lt;p&gt;This is where Rust earns its keep. In a dynamic language I'd be doing runtime type checks and praying. Here the compiler forces every code path to handle both variants, and when I extend the enum later (a third &lt;code&gt;Chunked&lt;/code&gt; variant for very large files is on my list), exhaustive matching means the compiler will point me at every &lt;code&gt;match&lt;/code&gt; that needs to handle the new variant.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's under the hood
&lt;/h2&gt;

&lt;p&gt;Internally, encoding a message goes through this pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;message  →  compress (DEFLATE)
         →  sign (Ed25519, optional)
         →  symmetric encrypt (ChaCha20-Poly1305, key from HKDF(passphrase))
         →  asymmetric encrypt (X25519 ECDH + ChaCha20-Poly1305)
         →  find byte positions in carrier
         →  distribute positions via passphrase-seeded PRNG
         →  base64 encode
         →  ANYHIDE CODE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every stage is there for a reason. The symmetric layer protects the positions even if you leak the long-term keys (the passphrase is a second factor). The asymmetric layer makes it end-to-end so only the intended recipient can decode. The position distribution means the same message encoded twice with the same keys and passphrase still produces different outputs.&lt;/p&gt;
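&lt;p&gt;The "distribute positions via passphrase-seeded PRNG" stage is worth a toy illustration. The sketch below uses Python for brevity, with FNV-1a and an LCG standing in for the real KDF and PRNG; Anyhide's actual primitives are the ones listed in the pipeline, not these:&lt;/p&gt;

```python
# Toy model of passphrase-seeded position distribution. FNV-1a and an
# LCG are illustrative stand-ins, NOT the cryptography Anyhide uses.
M = 2 ** 64  # work in 64-bit modular arithmetic

def seed_from(passphrase):
    """FNV-1a: a tiny non-cryptographic hash, used only to seed the toy PRNG."""
    h = 0xCBF29CE484222325
    for b in passphrase.encode():
        h = ((h ^ b) * 0x100000001B3) % M
    return h

def step(state):
    """One LCG step: deterministic, so the same seed replays the same stream."""
    return (6364136223846793005 * state + 1442695040888963407) % M

def choose_positions(candidates, n, passphrase):
    """Pick one candidate carrier position per message byte, PRNG-driven."""
    state = seed_from(passphrase)
    out = []
    for _ in range(n):
        state = step(state)
        out.append(candidates[state % len(candidates)])
    return out
```

&lt;p&gt;The same passphrase replays the same stream, so the intended recipient lands on the same positions. A passphrase that is off by one character seeds a different stream and yields positions that look just as valid, which is the mechanical reason a wrong guess produces deterministic garbage instead of an error.&lt;/p&gt;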

&lt;p&gt;The whole thing is ~16,300 lines of Rust. There are 319 tests and they all pass. I know because I just ran them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;cargo &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;span class="gp"&gt;test result: ok. 260 passed;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;0 failed  &lt;span class="o"&gt;(&lt;/span&gt;unit&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="gp"&gt;test result: ok. 15 passed;&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;0 failed  &lt;span class="o"&gt;(&lt;/span&gt;chat&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="gp"&gt;test result: ok. 43 passed;&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;0 failed  &lt;span class="o"&gt;(&lt;/span&gt;integration&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="gp"&gt;test result: ok. 1 passed;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;0 failed  &lt;span class="o"&gt;(&lt;/span&gt;doctest&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What's coming in this series
&lt;/h2&gt;

&lt;p&gt;This is Post 1 of 6. The roadmap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;This post&lt;/em&gt; — What Anyhide is and why the carrier-is-never-sent model matters.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Multi-carrier encoding&lt;/em&gt; — Using &lt;em&gt;multiple&lt;/em&gt; carrier files together, where the order is itself a secret. N carriers → N! additional combinations for an adversary to brute-force.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Plausible deniability with duress passwords&lt;/em&gt; — How to encode &lt;em&gt;two&lt;/em&gt; messages under two different passphrases. Under coercion, reveal the decoy. The real message stays hidden and is cryptographically indistinguishable from a wrong guess.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Forward secrecy and the Double Ratchet&lt;/em&gt; — Per-message key rotation, ephemeral keypairs, and the three storage formats I ended up supporting because chat has different needs than file transfer.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;P2P chat over Tor&lt;/em&gt; — Building a chat client on top of &lt;code&gt;arti-client&lt;/code&gt; (Rust's native Tor implementation), with hidden services as the transport and the Double Ratchet doing end-to-end encryption.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;A multi-contact TUI with ratatui&lt;/em&gt; — The terminal UI: sidebar, tabs, a Doom-style command console, request/accept for incoming connections, and the UX work that goes into making cryptography usable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each post stands alone, and each one is grounded in the actual code — no hand-waving. If you want to skim the repo first, it lives at &lt;a href="https://github.com/matutetandil/anyhide" rel="noopener noreferrer"&gt;github.com/matutetandil/anyhide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Post 2 drops in two weeks.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you build privacy or security tooling in Rust, I'd love to hear what you're working on. If you think the "carrier is never sent" model is broken somewhere, I'd love to hear that too — drop a comment or open an issue on the repo.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>security</category>
      <category>privacy</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>SOLID isn't overrated, it's misapplied</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Mon, 27 Apr 2026 15:42:09 +0000</pubDate>
      <link>https://dev.to/mdenda/solid-isnt-overrated-its-misapplied-41lm</link>
      <guid>https://dev.to/mdenda/solid-isnt-overrated-its-misapplied-41lm</guid>
      <description>&lt;p&gt;Another article about SOLID. Original, I know.&lt;/p&gt;

&lt;p&gt;But I want to discuss something different from the usual letter-by-letter explanation: why entire ecosystems — Node.js, Kotlin/Android, parts of the Python world — treat SOLID as a relic of Java enterprise that's better avoided. And why I think that reading is wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem isn't SOLID, it's the cargo cult
&lt;/h2&gt;

&lt;p&gt;When someone says "SOLID is overrated," nine times out of ten, they're not attacking the principles. They're attacking the caricature: five layers of abstraction for a CRUD, an &lt;code&gt;IUserRepositoryFactory&lt;/code&gt; that returns a &lt;code&gt;UserRepositoryFactory&lt;/code&gt; that builds a &lt;code&gt;UserRepository&lt;/code&gt;, interfaces declared "just in case we need them someday," dependency injection containers to resolve a pure function.&lt;/p&gt;

&lt;p&gt;That's not SOLID. That's overengineering wearing SOLID vocabulary. The distinction matters because when you discard the principles along with their misapplication, you end up making the opposite mistakes: 2000-line files, classes that do seven things, and code that's impossible to test because business logic is coupled to the HTTP framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  The cultural bias by language
&lt;/h2&gt;

&lt;p&gt;This is where it gets interesting. The same principle — say, Single Responsibility — produces different reactions depending on the ecosystem, and that has less to do with the language itself than with the culture built around it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Node.js / Express.&lt;/em&gt; Open any mid-sized Express project, and you'll likely find a &lt;code&gt;routes.js&lt;/code&gt; with 800 lines mixing route definitions, validation, business logic, and database queries. It's not that SRP "doesn't apply" in JavaScript. It's that the "move fast" culture treats it as an unnecessary ceremony — until the file becomes unmaintainable and subtle bugs from inconsistently duplicated validation start appearing. A quick look at popular Express tutorials makes this pattern clear: many introduce route handlers with inline business logic and only mention separation of concerns as an advanced topic, if at all.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Kotlin / Android.&lt;/em&gt; A particularly interesting case. The language gives you &lt;code&gt;data class&lt;/code&gt;, sealed classes, extension functions, top-level functions — tools tailor-made for SRP and ISP without Java's verbosity. And yet, "God ViewModels" — ViewModels mixing UI logic, network calls, DTO mapping, and caching — are common enough that Google's own architecture guide for Android explicitly warns against them and recommends splitting responsibilities into dedicated classes. The language enables doing it well; many tutorials and starter templates don't model that.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Go (positive contrast).&lt;/em&gt; Go is one of the few ecosystems where the culture embraces ISP and SRP as idiomatic, without anyone calling it "SOLID." The proverb "the bigger the interface, the weaker the abstraction" — attributed to Rob Pike — is ISP in another language. Single-purpose packages, small interfaces defined on the consumer side, composition over inheritance — all SOLID, no ceremony.&lt;/p&gt;

&lt;h2&gt;
  
  
  Idiomatic SOLID ≠ SOLID in Java
&lt;/h2&gt;

&lt;p&gt;The central misunderstanding is assuming SOLID demands the format it was originally taught with: classes, explicit interfaces, and hierarchies. It doesn't.&lt;/p&gt;

&lt;p&gt;SRP doesn't require one class per responsibility. It requires one reason to change. In TypeScript, that can be a pure function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This satisfies SRP perfectly&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;calculateShippingCost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;rates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ShippingRates&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;Money&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// one responsibility, one reason to change&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DIP doesn't require a DI container with XML config. A function parameter is already dependency inversion:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// The dependency is a parameter, not an import. That's DIP.&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;processPayment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;amount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Money&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Receipt&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;OrderResult&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;receipt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;charge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;total&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;order&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;receipt&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OCP in Go looks like this, with no inheritance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Adding a new connector requires no changes to existing code.&lt;/span&gt;
&lt;span class="c"&gt;// Just implement the interface.&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;Connector&lt;/span&gt; &lt;span class="k"&gt;interface&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;([]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three languages, three syntaxes, same principles. What changes is the implementation; what stays is the intent.&lt;/p&gt;

&lt;h2&gt;
  
  
  A heuristic
&lt;/h2&gt;

&lt;p&gt;Before dismissing SOLID in your stack, ask yourself: Am I rejecting the principle, or a Java 2008 enterprise implementation that scarred me years ago?&lt;/p&gt;

&lt;p&gt;If your routes file has 800 lines mixing five responsibilities, you're not being pragmatic. You're being disorganized and calling it pragmatism. If you have an &lt;code&gt;AbstractFactoryBuilderStrategy&lt;/code&gt; to build a config object, you're not applying SOLID. You're cosplaying architecture.&lt;/p&gt;

&lt;p&gt;The middle ground exists, and it's where most code that ages well lives: functions with one clear responsibility, explicit dependencies as parameters, small interfaces defined where they're consumed, and modules that do one thing.&lt;/p&gt;
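&lt;p&gt;Python, one of the ecosystems mentioned at the top, has grown an idiom for exactly that middle ground: a &lt;code&gt;Protocol&lt;/code&gt; declared where it's consumed. A hedged sketch, with names invented for illustration:&lt;/p&gt;

```python
# Small interface defined at the consumer (ISP), dependency passed as a
# parameter (DIP), one responsibility per function (SRP). Names invented.
from typing import Protocol

class Notifier(Protocol):
    """One method, declared next to the code that needs it,
    not in a central interfaces module."""
    def notify(self, user, message):
        ...

def announce_shipment(order_id, user, notifier):
    """One reason to change: what we tell the user. How the message
    travels is the injected dependency's problem."""
    return notifier.notify(user, f"order {order_id} has shipped")

class ConsoleNotifier:
    """A concrete implementation; tests can hand in a silent fake."""
    def notify(self, user, message):
        print(f"to {user}: {message}")
        return True
```

&lt;p&gt;No container, no factory, no inheritance. Four ecosystems, same intent.&lt;/p&gt;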

&lt;p&gt;That's SOLID. You don't have to call it that.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>javascript</category>
      <category>kotlin</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Git worktree: the stash replacement nobody teaches you</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Fri, 24 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/git-worktree-the-stash-replacement-nobody-teaches-you-5akd</link>
      <guid>https://dev.to/mdenda/git-worktree-the-stash-replacement-nobody-teaches-you-5akd</guid>
      <description>&lt;p&gt;The scenario every developer knows: you're deep in a feature, 15 files modified, half-working tests, and the production alert hits. You need to fix a bug on &lt;code&gt;main&lt;/code&gt;, &lt;em&gt;now&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The standard advice: &lt;code&gt;git stash&lt;/code&gt;. Switch branches. Fix the bug. Come back. Unstash. Pray nothing conflicts.&lt;/p&gt;

&lt;p&gt;There's a better way, and it's been in Git since 2015.&lt;/p&gt;

&lt;h2&gt;
  
  
  Meet &lt;code&gt;git worktree&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;A Git worktree lets you check out multiple branches &lt;em&gt;simultaneously&lt;/em&gt;, each in its own directory, sharing the same underlying &lt;code&gt;.git&lt;/code&gt; repository. No stashing, no context switching, no rebuilding &lt;code&gt;node_modules&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fhta0nvzg4mm425h948.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fhta0nvzg4mm425h948.png" alt="One Git repo, two working directories, two branches, in parallel" width="800" height="378"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# You're in ~/work/app on feature/profile, deep in dirty changes&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;
/Users/you/work/app
&lt;span class="nv"&gt;$ &lt;/span&gt;git status
On branch feature/profile
Changes not staged &lt;span class="k"&gt;for &lt;/span&gt;commit:
    modified:   src/profile.js
    modified:   src/avatar.js
    &lt;span class="o"&gt;(&lt;/span&gt;10 more files...&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Production alert — open a worktree for the hotfix&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree add ../app-hotfix &lt;span class="nt"&gt;-b&lt;/span&gt; hotfix/payment-500 main
Preparing worktree &lt;span class="o"&gt;(&lt;/span&gt;new branch &lt;span class="s1"&gt;'hotfix/payment-500'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
HEAD is now at 2c9d4e1 chore: prepare release

&lt;span class="c"&gt;# Switch terminal to the new directory&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../app-hotfix
&lt;span class="nv"&gt;$ &lt;/span&gt;git status
On branch hotfix/payment-500
nothing to commit, working tree clean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. You now have two directories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app/&lt;/code&gt; — your feature work, untouched, 15 dirty files exactly where you left them&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;app-hotfix/&lt;/code&gt; — a clean checkout of &lt;code&gt;main&lt;/code&gt; in its own folder&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fix the bug in &lt;code&gt;app-hotfix/&lt;/code&gt;, commit, push, open the PR. When the hotfix merges, remove the worktree:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree remove ../app-hotfix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The branch lives on, both locally and on the remote. Only the extra directory is gone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this beats stash
&lt;/h2&gt;

&lt;p&gt;Stash works for small, short interruptions. But it has real problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stash assumes you can only run one environment at a time.&lt;/strong&gt; With a worktree, you have two &lt;code&gt;node_modules/&lt;/code&gt;, two running &lt;code&gt;npm run dev&lt;/code&gt; processes, two build caches. No conflict, no rebuild.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stash is fragile.&lt;/strong&gt; &lt;code&gt;git stash pop&lt;/code&gt; with a conflict can leave your working tree in a mess. A worktree is just a checkout — nothing special can go wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stash is invisible.&lt;/strong&gt; Three days later, did you &lt;code&gt;git stash pop&lt;/code&gt;? Did you have two stashes? Which one was which? Worktrees are directories — you can see them, &lt;code&gt;ls&lt;/code&gt; them, run them side by side.&lt;/p&gt;

&lt;h2&gt;
  
  
  The other big use case: reviewing PRs
&lt;/h2&gt;

&lt;p&gt;If you review a lot of PRs, you know the pain: to test someone else's branch locally you have to stash your work, check out their branch, rebuild, run it, switch back, pop the stash, and rebuild again.&lt;/p&gt;

&lt;p&gt;With worktrees:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree add ../app-review origin/coworker-branch
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ../app-review
&lt;span class="nv"&gt;$ &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run dev
&lt;span class="c"&gt;# Poke around, test, review, come back to your own work untouched&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Useful commands
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List all active worktrees&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree list
/Users/you/work/app           a3f1d22 &lt;span class="o"&gt;[&lt;/span&gt;feature/profile]
/Users/you/work/app-hotfix    7e4b9c1 &lt;span class="o"&gt;[&lt;/span&gt;hotfix/payment-500]

&lt;span class="c"&gt;# Add a worktree tracking an existing branch&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree add ../app-qa qa-branch

&lt;span class="c"&gt;# Remove a worktree (the branch stays)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree remove ../app-qa

&lt;span class="c"&gt;# If you deleted the directory manually, clean up the reference&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When NOT to use worktree
&lt;/h2&gt;

&lt;p&gt;Stash is still the right tool when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The interruption takes 5 minutes and you don't need to run anything&lt;/li&gt;
&lt;li&gt;You're on a disk-constrained machine (each worktree is a full checkout)&lt;/li&gt;
&lt;li&gt;Your build system assumes a single working directory (some old monorepo tooling has trouble)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The one gotcha
&lt;/h2&gt;

&lt;p&gt;You can't have the same branch checked out in two worktrees at once. If you try, Git refuses. This is a feature, not a bug — it prevents you from committing to the same branch from two places and creating a mess. If you need the same branch twice (to compare states, for example), check out a detached HEAD in the second worktree instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git worktree add &lt;span class="nt"&gt;--detach&lt;/span&gt; ../app-compare main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Make it a habit
&lt;/h2&gt;

&lt;p&gt;I have three muscle memories now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;5-minute interruption, nothing to run&lt;/strong&gt; → stash&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Longer interruption, different branch, needs full env&lt;/strong&gt; → worktree&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reviewing a PR locally&lt;/strong&gt; → always worktree&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After a year of doing this, I almost never stash anymore. And I never panic when production pages me while I'm mid-feature.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is adapted from &lt;a href="https://mdenda.gumroad.com/l/git-in-depth" rel="noopener noreferrer"&gt;Git in Depth: From Solo Developer to Engineering Teams&lt;/a&gt;, a 658-page book covering Git the way it's actually used in real engineering teams — from day-to-day commands to CI/CD, branching strategies, methodology alignment, and Git at organizational scale.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>What cave diving taught me about distributed systems</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Thu, 23 Apr 2026 16:32:38 +0000</pubDate>
      <link>https://dev.to/mdenda/what-cave-diving-taught-me-about-distributed-systems-2a83</link>
      <guid>https://dev.to/mdenda/what-cave-diving-taught-me-about-distributed-systems-2a83</guid>
      <description>&lt;h2&gt;
  
  
  What cave diving taught me about distributed systems
&lt;/h2&gt;

&lt;p&gt;I've been building backend systems for 14 years. I've also spent a decent chunk of the last decade underwater, mostly in caves.&lt;/p&gt;

&lt;p&gt;At some point I stopped being surprised by how often the two worlds rhyme. The deeper you go into either, the more you notice the same ideas showing up in different costumes. Here are a few that stuck with me.&lt;/p&gt;

&lt;h2&gt;
  
  
  You plan the dive, then you dive the plan
&lt;/h2&gt;

&lt;p&gt;In open water, if something goes wrong, you go up. That's it. The surface is always there, a few kicks away, a guaranteed exit.&lt;/p&gt;

&lt;p&gt;In a cave, there is no "up". There's a ceiling, and between you and air there's sometimes hundreds of meters of rock and a specific path you came in through. If something goes wrong at the end of a one-hour penetration, the solution is still one hour of swimming away — and you're the one who has to swim it, with whatever gas, light, and composure you have left.&lt;/p&gt;

&lt;p&gt;So technical divers plan &lt;em&gt;everything&lt;/em&gt; before getting in the water. Gas volumes for every phase, with reserves for the worst case and the worst case after that. Turn points. Decompression schedules. Equipment failures and who does what when they happen. Team positions, signals, lost-diver procedures. Murphy's law isn't a joke in this context — it's a design input.&lt;/p&gt;

&lt;p&gt;The rule is: &lt;em&gt;plan the dive, dive the plan.&lt;/em&gt; You don't improvise underwater. You execute what you already decided on land, when your brain had oxygen and no time pressure.&lt;/p&gt;

&lt;p&gt;Software has the same trap, and most teams fall into it. "We'll figure it out in production" is the engineering equivalent of "we'll figure it out at 80 meters." Sometimes you get lucky. Often you don't.&lt;/p&gt;

&lt;p&gt;The work that matters — capacity planning, failure mode analysis, runbooks, rollback procedures, on-call rotations, dependency mapping — happens &lt;em&gt;before&lt;/em&gt; the system is under load. Before the incident. Before anyone is stressed. Because the incident is not the time to start thinking. It's the time to execute what you already thought through.&lt;/p&gt;

&lt;p&gt;And just like diving, the planning doesn't eliminate failure. It just makes sure that when failure shows up, you've already met it on paper.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failures cascade. Plan for the second failure, not the first.
&lt;/h2&gt;

&lt;p&gt;The thing that kills divers isn't usually the first problem. It's the panic reaction to the first problem that causes the second one — and the second one is the one you weren't ready for.&lt;/p&gt;

&lt;p&gt;Same in distributed systems. The database slowdown isn't what takes you down. It's the retry storm from 400 service instances hammering the recovering database that takes you down.&lt;/p&gt;

&lt;p&gt;Good divers train for &lt;em&gt;compound&lt;/em&gt; failures: light out &lt;em&gt;and&lt;/em&gt; low on gas, lost line &lt;em&gt;and&lt;/em&gt; silted visibility. Good systems are designed for compound failures too: circuit breakers, exponential backoff with jitter, bulkheads, and graceful degradation. Not because the first failure is rare, but because the second one, triggered by your response to the first, is where the real damage happens.&lt;/p&gt;
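
&lt;p&gt;The "exponential backoff with jitter" piece is small enough to sketch. A minimal full-jitter version in Python (the names and constants are illustrative, not from any particular library):&lt;/p&gt;

```python
import random

def backoff_delay(attempt: int, base: float = 0.1, cap: float = 30.0) -> float:
    """Full-jitter backoff: wait a random amount between 0 and
    min(cap, base * 2**attempt). The randomness is the point: 400
    recovering clients spread out instead of retrying in lockstep."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Each client computes its own delay per attempt. The ceiling grows
# exponentially; the actual wait is decorrelated across clients.
delays = [backoff_delay(a) for a in range(8)]
```

&lt;p&gt;Real systems add a retry budget on top, so a client eventually gives up instead of backing off forever.&lt;/p&gt;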

&lt;h2&gt;
  
  
  Turn pressure is a circuit breaker
&lt;/h2&gt;

&lt;p&gt;Before a cave dive, you calculate your "turn pressure" — the tank pressure at which you stop going in and start coming out, regardless of how close you are to the thing you wanted to see. It's non-negotiable. You don't get to feel your way through it.&lt;/p&gt;

&lt;p&gt;Circuit breakers work the same way. You pick a threshold in advance, when you're calm and have a clear head. And when the threshold trips, the system doesn't get to argue with it. It just turns around.&lt;/p&gt;

&lt;p&gt;The hardest part of both is the same: &lt;em&gt;accepting the limit you set for yourself when you were thinking clearly, even when the situation makes you want to push past it.&lt;/em&gt;&lt;/p&gt;
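
&lt;p&gt;Stripped to its core, a circuit breaker is exactly that: a limit computed in advance plus a refusal to renegotiate it under load. A toy sketch (thresholds and names are mine, for illustration only):&lt;/p&gt;

```python
import time

class CircuitBreaker:
    """Open after N consecutive failures, refuse calls until a cooldown
    passes, then allow one probe. The thresholds are fixed up front,
    like turn pressure: the breaker never argues with them mid-incident."""

    def __init__(self, max_failures: int = 5, cooldown_s: float = 30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            # Half-open: let one call through to see if things recovered.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()
```
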

&lt;h2&gt;
  
  
  Checklists feel stupid until they save you
&lt;/h2&gt;

&lt;p&gt;Every cave diver I respect uses a pre-dive checklist. Not because they forget things — but because under stress, everyone forgets things. The checklist is what your past, calm self leaves behind to protect your future, stressed self.&lt;/p&gt;

&lt;p&gt;Runbooks are the same. The incident is not the time to remember the command. The deployment at 2 am is not the time to improvise the rollback procedure. Write it down when it's quiet. Read it when it's loud.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real lesson
&lt;/h2&gt;

&lt;p&gt;Both disciplines teach you the same uncomfortable thing: &lt;strong&gt;most disasters are built in advance, by people who assumed the happy path was the only path.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The habits that keep you alive in a cave are the same ones that keep systems running at 3 am on a Saturday. Redundancy, calm limits, planning for the compound failure, trusting your past self's checklist over your present self's instincts.&lt;/p&gt;

&lt;p&gt;The costume is different. The physics of failure is the same.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're into either distributed systems or cave diving, I'd love to hear what overlaps you've noticed. Always surprising how many fields converge on the same answers.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>softwareengineering</category>
      <category>backend</category>
      <category>career</category>
    </item>
    <item>
      <title>One TUI for RabbitMQ, Kafka, and MQTT: why I built queuepeek</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:00:00 +0000</pubDate>
      <link>https://dev.to/mdenda/one-tui-for-rabbitmq-kafka-and-mqtt-why-i-built-queuepeek-1ldn</link>
      <guid>https://dev.to/mdenda/one-tui-for-rabbitmq-kafka-and-mqtt-why-i-built-queuepeek-1ldn</guid>
      <description>&lt;h2&gt;
  
  
  The problem I was trying to solve
&lt;/h2&gt;

&lt;p&gt;I work across a few projects that all talk to message brokers, but never the same one. Some services are on RabbitMQ. The data pipeline runs on Kafka. A handful of IoT integrations use MQTT. Normal stuff.&lt;/p&gt;

&lt;p&gt;What wasn't normal was the amount of context-switching every time something went wrong in production. Three different web UIs, three different mental models, three different ways to peek at a message without accidentally consuming it.&lt;/p&gt;

&lt;p&gt;The RabbitMQ Management UI is fine, but it's a web app — and half the time I'm already in a terminal next to the logs. Kafka UIs are a whole can of worms (every company seems to use a different one). MQTT doesn't really have a good "just let me see what's retained on this topic" tool for free.&lt;/p&gt;

&lt;p&gt;So after one too many incidents where I wanted to diff two DLQ messages and ended up pasting JSON into an online diff tool, I started building what became &lt;strong&gt;queuepeek&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it is
&lt;/h2&gt;

&lt;p&gt;queuepeek is a terminal UI written in Rust (on top of &lt;a href="https://github.com/ratatui-org/ratatui" rel="noopener noreferrer"&gt;ratatui&lt;/a&gt;) that speaks RabbitMQ, Kafka, and MQTT from the same interface. You launch it, pick a profile, drill down through queues/topics, and land on individual messages.&lt;/p&gt;

&lt;p&gt;The entire thing is keyboard-driven and follows the same wizard flow regardless of the broker:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Profiles -&amp;gt; Queues/Topics -&amp;gt; Messages -&amp;gt; Message Detail&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Esc always pops one level up. &lt;code&gt;/&lt;/code&gt; always filters. &lt;code&gt;?&lt;/code&gt; always shows help contextual to where you are and which broker you're on.&lt;/p&gt;

&lt;p&gt;No mouse. No tabs. No switching apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design choices that paid off
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Non-destructive peek
&lt;/h3&gt;

&lt;p&gt;This was the whole point. A "queue inspector" that consumes messages while you're reading them is not an inspector — it's a silent bug waiting to happen.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;RabbitMQ&lt;/strong&gt;, queuepeek uses the Management HTTP API's &lt;code&gt;get&lt;/code&gt; endpoint with &lt;code&gt;ack_requeue_true&lt;/code&gt;. You read, the message stays.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Kafka&lt;/strong&gt;, every read session spins up an ephemeral consumer with a unique group ID. You're not stealing offsets from anyone.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;MQTT&lt;/strong&gt;, you're subscribing to a topic so it's inherently non-mutating, but retained message management has its own explicit screen with a clear "this will clear the retained payload" confirmation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds basic. A surprising number of tools don't do this.&lt;/p&gt;
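
&lt;p&gt;For the curious, the RabbitMQ variant boils down to a single Management API call. Something like this, assuming a local broker with default guest credentials and a hypothetical &lt;code&gt;orders&lt;/code&gt; queue on the default vhost:&lt;/p&gt;

```shell
# Peek at up to 5 messages; ack_requeue_true puts each one back
# on the queue after it has been read.
curl -s -u guest:guest \
  -H 'content-type: application/json' \
  -d '{"count":5,"ackmode":"ack_requeue_true","encoding":"auto"}' \
  http://localhost:15672/api/queues/%2F/orders/get
```

&lt;p&gt;Worth knowing: even this "non-destructive" read requeues the messages, which can affect ordering and sets their redelivered flag, so it's for inspection, not consumption.&lt;/p&gt;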

&lt;h3&gt;
  
  
  Same keybindings, different brokers
&lt;/h3&gt;

&lt;p&gt;Kafka doesn't have queues, it has topics. RabbitMQ has exchanges and bindings. MQTT has topic hierarchies. Under the hood these are very different beasts, but from the user's point of view, "show me what's in here" should feel the same.&lt;/p&gt;

&lt;p&gt;The footer at the bottom of every screen dynamically filters shortcuts to the backend you're connected to — &lt;code&gt;G:groups&lt;/code&gt; only shows on Kafka, &lt;code&gt;X:topology&lt;/code&gt; only on RabbitMQ, &lt;code&gt;H:retained&lt;/code&gt; only on MQTT. No dead keys, no "this doesn't work on your broker" errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-select bulk ops
&lt;/h3&gt;

&lt;p&gt;Checkboxes in a TUI sound fiddly, but they turned out to be one of the most useful features. &lt;code&gt;Space&lt;/code&gt; toggles selection on the current message, then any operation you trigger (delete, copy, move, export) applies to all selected messages — streamed, not loaded into memory.&lt;/p&gt;

&lt;p&gt;Want to delete 10,000 messages from a DLQ after confirming none of them match a pattern you care about? Filter, select all, delete. Done from the keyboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  A few features I'm particularly happy with
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Side-by-side message diff.&lt;/strong&gt; Select two messages, press &lt;code&gt;d&lt;/code&gt;, get a colored diff. Uses the &lt;a href="https://github.com/mitsuhiko/similar" rel="noopener noreferrer"&gt;&lt;code&gt;similar&lt;/code&gt;&lt;/a&gt; crate. Embarrassingly useful for "why does this one message fail and the other succeed?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schema Registry integration.&lt;/strong&gt; If you configure a Confluent-compatible Schema Registry URL, queuepeek auto-decodes Avro and raw Protobuf payloads using the Confluent wire format (magic byte &lt;code&gt;0x00&lt;/code&gt; + 4-byte schema ID + body). Toggle raw/decoded with &lt;code&gt;s&lt;/code&gt;.&lt;/p&gt;
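
&lt;p&gt;The framing itself is trivial to unpack, which is why this feature is mostly registry plumbing. A sketch of the split (my own minimal version, not queuepeek's actual code):&lt;/p&gt;

```python
import struct

def split_confluent_frame(payload: bytes) -> tuple[int, bytes]:
    """Confluent wire format: one magic byte (0x00), a 4-byte
    big-endian schema ID, then the encoded Avro/Protobuf body."""
    if not (payload.startswith(b"\x00") and len(payload) >= 5):
        raise ValueError("not a Confluent-framed payload")
    (schema_id,) = struct.unpack(">I", payload[1:5])
    return schema_id, payload[5:]

# A frame as a producer with schema ID 42 would emit it:
frame = b"\x00" + struct.pack(">I", 42) + b"avro-body"
```

&lt;p&gt;The schema ID is then looked up against the registry's &lt;code&gt;/schemas/ids/{id}&lt;/code&gt; endpoint to fetch the schema used for decoding.&lt;/p&gt;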

&lt;p&gt;&lt;strong&gt;Real concurrent benchmarking.&lt;/strong&gt; Press &lt;code&gt;F5&lt;/code&gt; on a queue to run a flood-publish benchmark with N worker threads (via &lt;code&gt;std::thread::scope&lt;/code&gt;), rendering a live gauge and p50/p95/p99 latency percentiles at the end. Useful for the capacity-planning conversations that usually start with "how much can this queue take?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Webhook alerts.&lt;/strong&gt; Configure a regex pattern in &lt;code&gt;config.toml&lt;/code&gt;, point it at a webhook URL, and queuepeek polls every 30 seconds and POSTs on match (deduplicated by message hash so you don't spam yourself). Good for catching a specific error pattern appearing in a queue before it becomes an incident.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Payload templates with interpolation.&lt;/strong&gt; &lt;code&gt;Ctrl+T&lt;/code&gt; to save the current message, &lt;code&gt;Ctrl+W&lt;/code&gt; to insert it. Supports variables like &lt;code&gt;{{timestamp}}&lt;/code&gt;, &lt;code&gt;{{uuid}}&lt;/code&gt;, &lt;code&gt;{{random_int}}&lt;/code&gt;, &lt;code&gt;{{counter}}&lt;/code&gt;, and &lt;code&gt;{{env.VAR}}&lt;/code&gt; for anything you export in your shell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DLQ reroute.&lt;/strong&gt; RabbitMQ x-death headers get parsed and displayed, and &lt;code&gt;L&lt;/code&gt; re-routes a message back to its original exchange. Mostly because I got tired of manually copy-pasting routing keys out of &lt;code&gt;x-death&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Interesting implementation bits
&lt;/h2&gt;

&lt;p&gt;A few things that weren't obvious going in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ratatui + crossterm + mpsc channels&lt;/strong&gt; is all you need for a responsive TUI. Background I/O (broker calls, file operations, webhook polling) runs in &lt;code&gt;std::thread&lt;/code&gt; workers and posts results back through a channel that the event loop drains each tick. No async runtime, no Tokio. The whole app feels instant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Auto-refresh is dumb and works.&lt;/strong&gt; The queue list refreshes every 5 seconds; the message list refreshes continuously while tail mode is on. No WebSockets, no push. The Management API is cheap enough that polling is fine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduled messages persist to disk.&lt;/strong&gt; &lt;code&gt;~/.config/queuepeek/scheduled.json&lt;/code&gt; with epoch seconds. If you schedule a publish and the app crashes or you close it, the schedule survives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;79 unit tests&lt;/strong&gt; across filters, comparison, operations, schema decoding, and config. TUI logic is hard to test end-to-end, but the pure functions underneath are easy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# From crates.io (needs cmake for librdkafka)&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;queuepeek
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or grab a prebuilt binary: &lt;a href="https://github.com/matutetandil/queuepeek/releases" rel="noopener noreferrer"&gt;https://github.com/matutetandil/queuepeek/releases&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Supported platforms: macOS (ARM/Intel), Linux (x86), Windows (x86/ARM). Linux ARM builds need cargo install because cross-compiling librdkafka is its own adventure.&lt;/p&gt;

&lt;p&gt;Minimal &lt;code&gt;~/.config/queuepeek/config.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[profiles.local]&lt;/span&gt;
&lt;span class="py"&gt;type&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"rabbitmq"&lt;/span&gt;
&lt;span class="py"&gt;host&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"localhost"&lt;/span&gt;
&lt;span class="py"&gt;port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;15672&lt;/span&gt;
&lt;span class="py"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"guest"&lt;/span&gt;
&lt;span class="py"&gt;password&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"guest"&lt;/span&gt;
&lt;span class="py"&gt;vhost&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Repo: &lt;a href="https://github.com/matutetandil/queuepeek" rel="noopener noreferrer"&gt;https://github.com/matutetandil/queuepeek&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docs: the &lt;code&gt;/docs&lt;/code&gt; folder covers configuration, keyboard shortcuts, backends, and architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd love feedback on
&lt;/h2&gt;

&lt;p&gt;It's a solo side project and I'm the main user. That means the keybindings reflect my own muscle memory, the defaults reflect my own workflows, and the rough edges are the ones I don't personally bump into.&lt;/p&gt;

&lt;p&gt;If you spend time in any of these brokers and something feels off — a missing shortcut, a feature you'd expect, a decoding format I'm not handling — please open an issue. Especially on MQTT, which I use least.&lt;/p&gt;

&lt;p&gt;MIT licensed. No telemetry. No signup. Just a binary that reads queues.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>devops</category>
      <category>showdev</category>
      <category>terminal</category>
    </item>
    <item>
      <title>What actually happens when you `git merge --no-ff`</title>
      <dc:creator>Matías Denda</dc:creator>
      <pubDate>Tue, 21 Apr 2026 14:48:28 +0000</pubDate>
      <link>https://dev.to/mdenda/what-actually-happens-when-you-git-merge-no-ff-4il1</link>
      <guid>https://dev.to/mdenda/what-actually-happens-when-you-git-merge-no-ff-4il1</guid>
      <description>&lt;p&gt;Most developers use &lt;code&gt;git merge&lt;/code&gt; without ever thinking about what's happening internally. Then one day they see &lt;code&gt;--no-ff&lt;/code&gt; in a team's workflow documentation, Google it, read three Stack Overflow answers, and walk away with a vague sense that "it creates a merge commit or something."&lt;/p&gt;

&lt;p&gt;This post is the version I wish I'd read earlier. Two diagrams, one clear distinction, and why it actually matters for your team.&lt;/p&gt;

&lt;h2&gt;
  
  
  The setup
&lt;/h2&gt;

&lt;p&gt;You're on &lt;code&gt;main&lt;/code&gt;. Your coworker merged their feature. You branched off, added two commits, and now it's time to merge your branch back. What happens next depends on one flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# You're here&lt;/span&gt;
git checkout main
git merge feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Git has two ways to integrate your feature branch into &lt;code&gt;main&lt;/code&gt;. The one it picks by default depends on whether the branches have &lt;em&gt;diverged&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Case 1: Fast-forward (the default, when possible)
&lt;/h2&gt;

&lt;p&gt;If &lt;code&gt;main&lt;/code&gt; hasn't moved since you branched off, Git doesn't create a new commit. It just moves the &lt;code&gt;main&lt;/code&gt; pointer forward to the tip of your feature branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bentvl8nu0tmo6qx8gp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bentvl8nu0tmo6qx8gp.png" alt="Fast-forward merge: main pointer simply moves forward, no new commit" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it. No merge commit. The feature branch and &lt;code&gt;main&lt;/code&gt; now point to the same commit. If you look at &lt;code&gt;git log&lt;/code&gt; on &lt;code&gt;main&lt;/code&gt;, it reads like D and E were always there. The branch effectively disappears from history.&lt;/p&gt;

&lt;h2&gt;
  
  
  Case 2: &lt;code&gt;--no-ff&lt;/code&gt; (always create a merge commit)
&lt;/h2&gt;

&lt;p&gt;With &lt;code&gt;--no-ff&lt;/code&gt;, Git creates an explicit merge commit even when a fast-forward was possible:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxei2tu2p244tldnal7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxei2tu2p244tldnal7j.png" alt="no-ff merge: a new merge commit M is created with two parents" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;M&lt;/code&gt; is a new commit whose parents are &lt;code&gt;C&lt;/code&gt; (the previous tip of main) and &lt;code&gt;E&lt;/code&gt; (the tip of feature). It has no code changes of its own — its diff is empty — but it records that &lt;em&gt;these commits were integrated together at this point&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this distinction matters
&lt;/h2&gt;

&lt;p&gt;The two histories above contain the same code. So does it matter? Yes, and here's where it bites real teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  It matters for &lt;code&gt;git bisect&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;git bisect&lt;/code&gt; helps you find which commit introduced a bug by doing a binary search through history. With fast-forward merges, the search descends into individual feature commits — you might land on a half-finished refactor where the bug is genuinely present but so is a broken test, making the bisect useless.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;--no-ff&lt;/code&gt;, you can run &lt;code&gt;git bisect --first-parent&lt;/code&gt; and bisect &lt;em&gt;merge commits only&lt;/em&gt;, treating each feature as an atomic unit. Found the regression? You know which feature to revert, not which arbitrary mid-feature commit to blame.&lt;/p&gt;
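
&lt;p&gt;Here's a self-contained sketch you can run to see it in action (the scratch repo, tag, and branch names are invented for the demo):&lt;/p&gt;

```shell
# Build a tiny history: a tagged release, then two --no-ff merges.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
g commit -q --allow-empty -m "release"
git branch -M main
git tag v1.0

for feat in feat-a feat-b; do
  git switch -q -c "$feat" main
  g commit -q --allow-empty -m "$feat work-in-progress"
  git switch -q main
  g merge -q --no-ff -m "merge $feat" "$feat"
done

# First-parent bisect: the candidates are the merge commits, never
# the work-in-progress commits inside the feature branches.
git bisect start --first-parent
git bisect bad HEAD
git bisect good v1.0
git log -1 --format=%s    # the checked-out candidate is a merge commit
git bisect reset
```

&lt;p&gt;In a real repo you'd run your test at each step, answer &lt;code&gt;git bisect good&lt;/code&gt; or &lt;code&gt;bad&lt;/code&gt;, and Git converges on the merge that introduced the regression.&lt;/p&gt;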

&lt;h3&gt;
  
  
  It matters for &lt;code&gt;git revert&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;If you merged with &lt;code&gt;--no-ff&lt;/code&gt; and need to roll back the feature, you revert the single merge commit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git revert &lt;span class="nt"&gt;-m&lt;/span&gt; 1 &amp;lt;merge-commit-hash&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That undoes all of D and E in one go. With fast-forward, you'd need to revert each commit individually — or figure out which commits belonged to the feature in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  It matters for reading history
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;git log --graph --first-parent main&lt;/code&gt; with &lt;code&gt;--no-ff&lt;/code&gt; merges shows you a clean list of features integrated into main, one per line. Without merge commits, the log is a flat stream of every individual commit ever made. For a large team, the difference is between "I can see what shipped last week" and "good luck."&lt;/p&gt;

&lt;h2&gt;
  
  
  What GitHub and GitLab do
&lt;/h2&gt;

&lt;p&gt;When you click "Merge pull request" on GitHub or GitLab, they default to creating a merge commit (&lt;code&gt;--no-ff&lt;/code&gt; behavior). The "Rebase and merge" and "Squash and merge" options exist too, but the default merge commit exists precisely because of the benefits above.&lt;/p&gt;

&lt;p&gt;This is why teams that use the GitHub/GitLab UI religiously often have cleaner history than teams that merge locally on the command line — the UI forces a pattern that the command line leaves optional.&lt;/p&gt;

&lt;h2&gt;
  
  
  When fast-forward is fine
&lt;/h2&gt;

&lt;p&gt;For throwaway branches, personal experiments, or single-commit fixes where the commit already tells the whole story, fast-forward is perfectly appropriate. The rule of thumb I use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single commit fix&lt;/strong&gt; → fast-forward is fine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature branch with 2+ commits&lt;/strong&gt; → &lt;code&gt;--no-ff&lt;/code&gt; preserves the grouping&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release branch merge&lt;/strong&gt; → always &lt;code&gt;--no-ff&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hotfix branch merge&lt;/strong&gt; → always &lt;code&gt;--no-ff&lt;/code&gt; (you want revertability)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Making it a team default
&lt;/h2&gt;

&lt;p&gt;If you want the team to use &lt;code&gt;--no-ff&lt;/code&gt; consistently, either set it at the repo level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; merge.ff &lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or — better — require it via branch protection rules on your hosting platform. That way nobody's local config can bypass it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is adapted from a chapter of &lt;em&gt;&lt;a href="https://mdenda.gumroad.com/l/git-in-depth" rel="noopener noreferrer"&gt;Git in Depth: From Solo Developer to Engineering Teams&lt;/a&gt;&lt;/em&gt;, a 658-page book I just released on Git for working developers — from day-to-day tools to CI/CD, branching strategies, and Git at organizational scale. Launch price $29 with code &lt;code&gt;EARLYBIRD&lt;/code&gt; (first 100 copies).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next week: &lt;em&gt;git worktree — the stash replacement nobody teaches you&lt;/em&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
