<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marius-Florin Cristian</title>
    <description>The latest articles on DEV Community by Marius-Florin Cristian (@mfc_keibisoft).</description>
    <link>https://dev.to/mfc_keibisoft</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3844364%2F3d4c1171-f666-46a9-81b9-157ec5f41c94.jpeg</url>
      <title>DEV Community: Marius-Florin Cristian</title>
      <link>https://dev.to/mfc_keibisoft</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mfc_keibisoft"/>
    <language>en</language>
    <item>
      <title>Can We Compute the Minimum Spanning Tree in Linear Time?</title>
      <dc:creator>Marius-Florin Cristian</dc:creator>
      <pubDate>Wed, 01 Apr 2026 11:43:03 +0000</pubDate>
      <link>https://dev.to/mfc_keibisoft/can-we-compute-the-minimum-spanning-tree-in-linear-time-5170</link>
      <guid>https://dev.to/mfc_keibisoft/can-we-compute-the-minimum-spanning-tree-in-linear-time-5170</guid>
<description>&lt;p&gt;&lt;em&gt;Easter egg at the end: how the cover photo relates to the post&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Given a connected graph with weighted edges, the minimum spanning tree (MST) is the cheapest set of edges that keeps all vertices connected. Borůvka gave the first algorithm in 1926. Kruskal, Prim, and others refined it. The best randomized algorithm runs in O(m) expected time (Karger-Klein-Tarjan, 1995). The best deterministic algorithm runs in O(m α(m,n)) (Chazelle, 2000), where α is the inverse Ackermann function.&lt;/p&gt;

&lt;p&gt;α(m,n) ≤ 4 for any graph that could physically exist. In practice, the problem is solved. In theory, the question "can we do O(m) deterministically?" has been open for over forty years.&lt;/p&gt;

&lt;p&gt;This post covers the three attack vectors from &lt;a href="https://keibisoft.com/blog/mst-road-to-linear-complexity.html" rel="noopener noreferrer"&gt;my 2018 thesis at DIKU&lt;/a&gt;, updated with connections to research published since then.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the bottleneck is
&lt;/h2&gt;

&lt;p&gt;Every MST algorithm uses some form of the disjoint-set (union-find) data structure. You process edges, and for each edge you ask: are these two vertices already connected? If not, merge their components.&lt;/p&gt;

&lt;p&gt;With path compression and union by rank, m operations on n elements cost O(m α(m,n)). That α factor is the bottleneck. It comes from the analysis of path compression on a pointer machine.&lt;/p&gt;
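
&lt;p&gt;For reference, the classic structure looks like this in Go (an illustrative sketch, not code from the thesis). Union by rank plus path compression gives the O(m α(m,n)) amortized bound:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import "fmt"

// dsu is a disjoint-set forest with union by rank and path
// compression: m operations on n elements cost O(m α(m,n)).
type dsu struct {
    parent []int
    rank   []int
}

func newDSU(n int) dsu {
    d := dsu{parent: make([]int, n), rank: make([]int, n)}
    for i := range d.parent {
        d.parent[i] = i
    }
    return d
}

// find returns the root of x and compresses the path on the way back.
func (d dsu) find(x int) int {
    if d.parent[x] != x {
        d.parent[x] = d.find(d.parent[x]) // path compression
    }
    return d.parent[x]
}

// union merges the components of x and y; it returns false when
// the edge (x, y) would close a cycle, i.e. it is not an MST edge.
func (d dsu) union(x, y int) bool {
    rx, ry := d.find(x), d.find(y)
    if rx == ry {
        return false
    }
    if d.rank[rx] &amp;lt; d.rank[ry] {
        rx, ry = ry, rx
    }
    d.parent[ry] = rx // attach the shallower tree under the deeper one
    if d.rank[rx] == d.rank[ry] {
        d.rank[rx]++
    }
    return true
}

func main() {
    d := newDSU(4)
    fmt.Println(d.union(0, 1)) // true: new MST edge
    fmt.Println(d.union(2, 3)) // true
    fmt.Println(d.union(1, 0)) // false: would close a cycle
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;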

&lt;p&gt;On a Word RAM, Gabow and Tarjan (1983) showed you can do O(m + n) — if you know the union tree structure in advance. You precompute lookup tables for small components and answer FIND queries in O(1).&lt;/p&gt;

&lt;p&gt;The catch: knowing the union tree means knowing the MST, which is what you're computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two computation models
&lt;/h2&gt;

&lt;p&gt;This matters because the bottleneck lives in the gap between them.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;pointer machine&lt;/strong&gt; can follow pointers, compare values, and create new nodes. No arithmetic on addresses, no random access. This is how most graph algorithms actually work. On this model, the α factor from path compression cannot be removed.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Word RAM&lt;/strong&gt; has O(1) random access and can do arithmetic and bitwise operations on words. This enables table lookups — precomputing answers for small inputs and reading them in constant time.&lt;/p&gt;

&lt;p&gt;The MST gap is a model gap. On a pointer machine, O(m α(m,n)) seems tight. On a Word RAM, O(m + n) is possible if the structure is known. The question is whether you can learn enough structure during computation to exploit the RAM model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attack vector 1: Approximate union-find
&lt;/h2&gt;

&lt;p&gt;What if you approximately know the union tree?&lt;/p&gt;

&lt;p&gt;Build a predicted tree T' using some fast heuristic (a few Borůvka phases, random sampling, anything that runs in O(m)). Use Gabow-Tarjan lookup tables based on T'. When a FIND query returns an answer that disagrees with the true structure, pay for a correction.&lt;/p&gt;

&lt;p&gt;If the prediction is close enough to the true MST, most queries hit the lookup tables and cost O(1). The faulty queries create "phantom cycles" — the algorithm temporarily believes two vertices are connected when they aren't, or vice versa. These phantoms are bounded by the prediction error and can be cleaned up in a linear-time verification pass using &lt;a href="https://dl.acm.org/doi/10.1007/BF01294131" rel="noopener noreferrer"&gt;King's MST verification algorithm&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This approach fits a framework that didn't exist when the thesis was written: &lt;strong&gt;learning-augmented algorithms&lt;/strong&gt; (Mitzenmacher and Vassilvitskii, 2020). The formal question is: what prediction quality guarantees O(m + n) total cost?&lt;/p&gt;

&lt;p&gt;The thesis proves that the disjoint-set operations can be bounded by O(m + n) on the RAM model when a bounded degree of prediction error is allowed. The precise bounds on prediction quality needed for this to work are stated but not tight.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attack vector 2: Density partition
&lt;/h2&gt;

&lt;p&gt;Different algorithms are linear at different densities. Prim is linear on dense graphs (m ≥ n^p for p &amp;gt; 1). Kruskal-style algorithms are nearly linear on sparse graphs. Neither covers all graphs.&lt;/p&gt;

&lt;p&gt;The density partition classifies vertices by degree:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HD&lt;/strong&gt; (high degree): degree &amp;gt; c · log³(n)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LD&lt;/strong&gt; (low degree): degree ≤ c · log³(n)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then by neighbor types: &lt;strong&gt;HDN&lt;/strong&gt; (high-degree neighbors), &lt;strong&gt;LDN&lt;/strong&gt; (low-degree neighbors), &lt;strong&gt;MDN&lt;/strong&gt; (mixed). This classification takes O(m + n) — two passes over the adjacency lists.&lt;/p&gt;
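
&lt;p&gt;The two passes can be sketched like this (my illustrative Go; the threshold t stands in for c · log³(n)):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import "fmt"

const (
    HD = iota // high degree
    LD        // low degree
)

// classify makes two passes over the adjacency lists. Pass one
// labels each vertex HD or LD against the threshold t. Pass two
// marks a vertex as mixed (MDN) when its neighborhood contains
// both labels. Each pass touches every edge once: O(m + n).
func classify(adj [][]int, t int) (degClass []int, mixed []bool) {
    n := len(adj)
    degClass = make([]int, n)
    mixed = make([]bool, n)
    for v := range adj { // pass 1: O(n)
        if len(adj[v]) &amp;gt; t {
            degClass[v] = HD
        } else {
            degClass[v] = LD
        }
    }
    for v, nbrs := range adj { // pass 2: O(m)
        for _, u := range nbrs {
            if degClass[u] != degClass[nbrs[0]] {
                mixed[v] = true // MDN: mixed-degree neighborhood
                break
            }
        }
    }
    return degClass, mixed
}

func main() {
    // Vertex 0 is a hub, vertices 1..3 are leaves; threshold 2.
    adj := [][]int{{1, 2, 3}, {0}, {0}, {0}}
    degClass, mixed := classify(adj, 2)
    fmt.Println(degClass, mixed)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;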

&lt;p&gt;For HD-HDN components (dense), run Prim. For LD-LDN components (sparse), run modified Kruskal with the approximate union-find from attack vector 1. The MDN boundary edges form a small graph — O(n / log³(n)) edges — that can be solved recursively.&lt;/p&gt;

&lt;p&gt;At each recursion level, total work is O(m + n). The recursion depth is bounded. This is the glue that combines the other two subproblems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Attack vector 3: Cycle hierarchy
&lt;/h2&gt;

&lt;p&gt;Consider a sparse graph that is almost a tree. Most vertices have degree 1 or 2. The MST must include all edges connecting degree-1 vertices — there is no alternative path. So you can trim those edges and contract the leaves.&lt;/p&gt;

&lt;p&gt;The problem: trimming cascades. Removing a leaf can create a new leaf, which creates another, propagating across the graph. Naive trimming is O(n²).&lt;/p&gt;

&lt;p&gt;If you know the cycle structure, you know where each cascade stops — at a vertex that participates in a cycle. With this information, trimming runs in O(n).&lt;/p&gt;
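
&lt;p&gt;A toy sketch of the guided trim (my illustrative Go, assuming an onCycle marker already computed from the hierarchy). Every vertex enters the queue at most once, so the pass is O(n):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import "fmt"

// trim peels degree-1 vertices, following each cascade with a
// queue until it reaches a vertex known to lie on a cycle.
// deg tracks current degrees as leaves are removed.
func trim(adj [][]int, onCycle []bool) (removed []int) {
    deg := make([]int, len(adj))
    var queue []int
    for v := range adj {
        deg[v] = len(adj[v])
        if deg[v] == 1 &amp;amp;&amp;amp; !onCycle[v] {
            queue = append(queue, v)
        }
    }
    for len(queue) &amp;gt; 0 {
        v := queue[0]
        queue = queue[1:]
        removed = append(removed, v)
        deg[v] = 0
        for _, u := range adj[v] {
            if deg[u] &amp;gt; 0 {
                deg[u]--
                // the cascade stops as soon as it hits a cycle
                if deg[u] == 1 &amp;amp;&amp;amp; !onCycle[u] {
                    queue = append(queue, u)
                }
            }
        }
    }
    return removed
}

func main() {
    // Path 3-4 hanging off a triangle 0-1-2.
    adj := [][]int{{1, 2}, {0, 2}, {0, 1, 3}, {2, 4}, {3}}
    onCycle := []bool{true, true, true, false, false}
    fmt.Println(trim(adj, onCycle)) // trims leaf 4, then 3, stops at 2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;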

&lt;p&gt;The cycle hierarchy organizes cycles by nesting depth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Level 0: triangles&lt;/li&gt;
&lt;li&gt;Level 1: 4-cycles not decomposable into triangles&lt;/li&gt;
&lt;li&gt;Level k: (k+3)-cycles that may contain lower-level cycles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building the hierarchy seems circular — you need a spanning tree to find cycles, but you're computing the MST. The escape: cycle detection doesn't require edge weights. Any spanning tree (from DFS or BFS, computed in O(m + n) without weight comparisons) reveals the cycle structure. Each non-tree edge closes exactly one fundamental cycle.&lt;/p&gt;
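
&lt;p&gt;The weight-free step is plain graph search. A sketch (my illustrative Go): BFS from any vertex builds a spanning tree in O(m + n) without a single weight comparison, and the leftover edges are exactly the ones that close fundamental cycles.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import "fmt"

// nonTreeEdges runs a BFS from vertex 0 and returns the edges
// left out of the spanning tree. Each such edge closes exactly
// one fundamental cycle with the tree path between its endpoints.
func nonTreeEdges(adj [][]int) [][2]int {
    n := len(adj)
    parent := make([]int, n)
    for i := range parent {
        parent[i] = -1
    }
    parent[0] = 0
    queue := []int{0}
    var extra [][2]int
    for len(queue) &amp;gt; 0 {
        v := queue[0]
        queue = queue[1:]
        for _, u := range adj[v] {
            if parent[u] == -1 {
                parent[u] = v // tree edge
                queue = append(queue, u)
            } else if u != parent[v] &amp;amp;&amp;amp; v &amp;lt; u {
                extra = append(extra, [2]int{v, u}) // closes a cycle
            }
        }
    }
    return extra
}

func main() {
    // Square 0-1-2-3-0 with one diagonal 0-2:
    // 5 edges, 3 in the tree, 2 fundamental cycles.
    adj := [][]int{{1, 3, 2}, {0, 2}, {1, 3, 0}, {2, 0}}
    fmt.Println(nonTreeEdges(adj))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;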

&lt;p&gt;You compute the topological hierarchy weight-free, then apply it to the weighted MST computation. The hierarchy tells trimming where to stop.&lt;/p&gt;

&lt;p&gt;This is the hardest subproblem. The construction is clear in outline, but the formal proof that it runs in O(m + n) for arbitrary graphs is incomplete.&lt;/p&gt;

&lt;h2&gt;
  
  
  How they compose
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subproblem 1                    Subproblem 3
(Approximate Union-Find)        (Cycle Hierarchy)
         \                          /
          v                        v
           Subproblem 2
           (Density Partition)
                  |
                  v
          Full MST Algorithm
              O(m + n)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Subproblems 1 and 3 are independent. Subproblem 2 combines their results. If all three are solved, they compose into a deterministic O(m + n) MST algorithm on the Word RAM.&lt;/p&gt;

&lt;p&gt;On a pointer machine, this approach does not help. The disjoint-set operations without table lookup are bounded by O(m α(m,n)), and that factor cannot be removed without random access.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's incomplete
&lt;/h2&gt;

&lt;p&gt;Subproblem 1 has a theorem and proof sketch. The formal bounds on prediction quality are not tight. Subproblem 3 has a clear construction but the O(m + n) proof for arbitrary graphs is missing. Subproblem 2 depends on both.&lt;/p&gt;

&lt;p&gt;Since 2018, each subproblem has connected to active research:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Approximate union-find → learning-augmented algorithms&lt;/li&gt;
&lt;li&gt;Density partition → local graph algorithms and graph sparsification&lt;/li&gt;
&lt;li&gt;Cycle hierarchy → dynamic connectivity and top trees&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The three subproblems remain open. Whether O(m) MST is achievable deterministically, or whether α(m,n) is fundamental, is one of the oldest open problems in computer science.&lt;/p&gt;




&lt;p&gt;The full technical writeup with proofs, definitions, and historical context is on my blog: &lt;a href="https://keibisoft.com/blog/mst-road-to-linear-complexity.html" rel="noopener noreferrer"&gt;The MST Problem: Three Subproblems to Linear Time&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The thesis was written at &lt;a href="https://di.ku.dk/" rel="noopener noreferrer"&gt;DIKU, University of Copenhagen&lt;/a&gt;, supervised by Jyrki Katajainen.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Easter egg time&lt;/em&gt;: Bernard Chazelle, the Princeton CS professor whose algorithm remains the fastest known deterministic one for the problem, is the father of Damien Chazelle, the director of Whiplash and La La Land.&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>complexity</category>
    </item>
    <item>
      <title>Optimizing encrypted P2P file transfer - from 225 to 441 MB/s</title>
      <dc:creator>Marius-Florin Cristian</dc:creator>
      <pubDate>Sun, 29 Mar 2026 08:36:21 +0000</pubDate>
      <link>https://dev.to/mfc_keibisoft/optimizing-encrypted-p2p-file-transfer-from-225-to-441-mbs-1mm1</link>
      <guid>https://dev.to/mfc_keibisoft/optimizing-encrypted-p2p-file-transfer-from-225-to-441-mbs-1mm1</guid>
      <description>&lt;p&gt;&lt;em&gt;Part of the &lt;a href="https://keibisoft.com/tools/keibidrop.html" rel="noopener noreferrer"&gt;KEIBIDROP&lt;/a&gt; development blog. KEIBIDROP is in active development. Release is coming soon.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;KEIBIDROP transfers files between two peers over encrypted gRPC. The full stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Disk I/O -&amp;gt; FUSE kernel -&amp;gt; FUSE daemon -&amp;gt; gRPC framing -&amp;gt; ChaCha20-Poly1305 -&amp;gt; TCP -&amp;gt; Peer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We built micro-benchmarks for each layer and measured throughput with 1GB files on an Intel MacBook Pro.&lt;/p&gt;

&lt;h2&gt;
  
  
  Baseline numbers
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;th&gt;Overhead&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Raw disk (SSD)&lt;/td&gt;
&lt;td&gt;~5 GB/s&lt;/td&gt;
&lt;td&gt;--&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Raw gRPC (no encryption)&lt;/td&gt;
&lt;td&gt;981 MB/s&lt;/td&gt;
&lt;td&gt;5x vs disk&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Encrypted gRPC (ChaCha20)&lt;/td&gt;
&lt;td&gt;437 MB/s&lt;/td&gt;
&lt;td&gt;2.2x vs raw gRPC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FUSE end-to-end&lt;/td&gt;
&lt;td&gt;225 MB/s&lt;/td&gt;
&lt;td&gt;1.9x vs encrypted gRPC&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The encryption layer costs 2.2x. FUSE adds another 1.9x.&lt;/p&gt;

&lt;h2&gt;
  
  
  Six optimizations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Cache the AEAD cipher.&lt;/strong&gt; The original code created a new ChaCha20-Poly1305 cipher for every message. Caching it in the constructor is safe because the nonce is a monotonic counter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// Before: creating cipher per-message&lt;/span&gt;
&lt;span class="n"&gt;aead&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;chacha20poly1305&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c"&gt;// expensive!&lt;/span&gt;

&lt;span class="c"&gt;// After: created once in constructor&lt;/span&gt;
&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;SecureWriter&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;aead&lt;/span&gt; &lt;span class="n"&gt;cipher&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AEAD&lt;/span&gt;  &lt;span class="c"&gt;// created once&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Single combined TCP write.&lt;/strong&gt; Two syscalls per message (header + payload) became one. At 512KB chunks, this halves the &lt;code&gt;write()&lt;/code&gt; call count.&lt;/p&gt;
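
&lt;p&gt;One way to get the single write (a sketch, not the KEIBIDROP code) is net.Buffers, which the Go runtime turns into one vectored writev call on a TCP connection:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "net"
)

// frame prepends a 4-byte length header to the payload and
// returns both as one vectored write. On a *net.TCPConn,
// net.Buffers.WriteTo becomes a single writev syscall, so the
// header and payload leave in one call instead of two.
func frame(payload []byte) net.Buffers {
    header := make([]byte, 4)
    binary.BigEndian.PutUint32(header, uint32(len(payload)))
    return net.Buffers{header, payload}
}

func main() {
    var sink bytes.Buffer // stand-in for the TCP connection
    bufs := frame([]byte("ciphertext..."))
    n, err := bufs.WriteTo(&amp;amp;sink)
    fmt.Println(n, err, sink.Len())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;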

&lt;p&gt;&lt;strong&gt;3. In-place decryption.&lt;/strong&gt; &lt;code&gt;aead.Open(ciphertext[:0], nonce, ciphertext, nil)&lt;/code&gt; reuses the ciphertext buffer instead of allocating a new one. ~2000 fewer allocations per 1GB transfer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Async cache writes in FUSE Read.&lt;/strong&gt; FUSE needs the data returned to the kernel. Writing that data to the local cache file can happen in a background goroutine. A WaitGroup ensures everything completes before file close. Cache write overhead dropped from 16% to 1.8%.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buff&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cacheFD&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CacheWg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CacheWg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Done&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;cacheFD&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteAt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cacheData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cacheOffset&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Push-based StreamFile RPC.&lt;/strong&gt; The original streaming used request-response per chunk. A new server-streaming RPC pushes all chunks without waiting. On-demand reads (random access) still use bidirectional streaming.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Silencing hot-path logs.&lt;/strong&gt; ~120 &lt;code&gt;slog.Info/Debug/Warn&lt;/code&gt; calls in FUSE handlers. Even structured logging allocates. Commenting out non-error logs on Read/Write/Getattr gave 2-6% throughput improvement.&lt;/p&gt;
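
&lt;p&gt;An alternative to deleting the statements (a sketch, not the KEIBIDROP code) is gating the hot path on Logger.Enabled. The check is not free, but it is cheaper than materializing the record:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "context"
    "fmt"
    "log/slog"
    "os"
)

func main() {
    // Handler at Info level: Debug records are dropped.
    logger := slog.New(slog.NewTextHandler(os.Stderr,
        &amp;amp;slog.HandlerOptions{Level: slog.LevelInfo}))

    // The guard skips building the attribute list entirely when
    // the level is disabled; a bare logger.Debug(...) would still
    // evaluate its arguments before discarding the record.
    if logger.Enabled(context.Background(), slog.LevelDebug) {
        logger.Debug("FUSE read", "path", "/some/file", "len", 4096)
    }
    fmt.Println(logger.Enabled(context.Background(), slog.LevelDebug))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;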

&lt;h2&gt;
  
  
  After optimizations
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Gain&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Encrypted gRPC&lt;/td&gt;
&lt;td&gt;437 MB/s&lt;/td&gt;
&lt;td&gt;441 MB/s&lt;/td&gt;
&lt;td&gt;+0.9%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FUSE end-to-end&lt;/td&gt;
&lt;td&gt;225 MB/s&lt;/td&gt;
&lt;td&gt;231 MB/s&lt;/td&gt;
&lt;td&gt;+2.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The encryption tax: 1.49x
&lt;/h2&gt;

&lt;p&gt;We built a controlled benchmark comparing identical gRPC transfers over plain TCP vs ChaCha20-Poly1305:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Transport&lt;/th&gt;
&lt;th&gt;MB/s&lt;/th&gt;
&lt;th&gt;Duration (100MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Plain TCP + gRPC&lt;/td&gt;
&lt;td&gt;657&lt;/td&gt;
&lt;td&gt;152ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Encrypted (ChaCha20-Poly1305)&lt;/td&gt;
&lt;td&gt;441&lt;/td&gt;
&lt;td&gt;227ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;33% of transfer time goes to encryption. Our earlier 2.2x estimate came from comparing different test runs under different system load; the controlled A/B benchmark shows the real cost is much lower.&lt;/p&gt;

&lt;h2&gt;
  
  
  The irreducible 48%
&lt;/h2&gt;

&lt;p&gt;The gap between encrypted gRPC (441 MB/s) and FUSE end-to-end (231 MB/s) is 48%. Every FUSE Read requires two kernel-to-userspace context switches. We confirmed this by measuring raw FUSE overhead (local file read through FUSE vs direct): ~48%, matching exactly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plain gRPC:     657 MB/s  (ceiling)
  -33% encryption
Encrypted gRPC: 441 MB/s
  -48% FUSE kernel overhead
FUSE E2E:       231 MB/s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The FUSE overhead is nearly twice the encryption cost. If we wanted to improve throughput, reducing FUSE transitions would have more impact than switching ciphers. But FUSE is the cost of running a userspace filesystem, and for secure P2P file sharing the tradeoff is worth it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Without isolating each layer, we would have optimized the wrong thing. The biggest win (async cache writes, 16% to 1.8%) was only visible when measured independently.&lt;/li&gt;
&lt;li&gt;Caching the AEAD cipher and in-place decryption are tiny code changes with measurable impact at 2000+ messages per second.&lt;/li&gt;
&lt;li&gt;Structured logging is not free on hot paths. Even checking the log level has overhead.&lt;/li&gt;
&lt;li&gt;FUSE has an inherent 48% overhead from context switches. No amount of userspace optimization can fix this.&lt;/li&gt;
&lt;li&gt;Server-streaming eliminates round-trip latency for sequential prefetch. Request-response is still needed for random access.&lt;/li&gt;
&lt;li&gt;Our initial 2.2x encryption estimate was wrong. A controlled A/B benchmark (same test, same system) showed 1.49x. Always compare apples to apples.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;More from the KEIBIDROP blog: &lt;a href="https://keibisoft.com/blog.html" rel="noopener noreferrer"&gt;full series&lt;/a&gt; | &lt;a href="https://keibisoft.com/tools/keibidrop.html" rel="noopener noreferrer"&gt;product page&lt;/a&gt; | &lt;a href="https://keibisoft.com/tools/keibidrop-faq.html" rel="noopener noreferrer"&gt;FAQ&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>performance</category>
      <category>networking</category>
      <category>filesystem</category>
    </item>
    <item>
      <title>Make git clone work between encrypted P2P FUSE peers (7 bugs)</title>
      <dc:creator>Marius-Florin Cristian</dc:creator>
      <pubDate>Sun, 29 Mar 2026 08:34:07 +0000</pubDate>
      <link>https://dev.to/mfc_keibisoft/make-git-clone-work-between-encrypted-p2p-fuse-peers-7-bugs-1pb0</link>
      <guid>https://dev.to/mfc_keibisoft/make-git-clone-work-between-encrypted-p2p-fuse-peers-7-bugs-1pb0</guid>
      <description>&lt;p&gt;&lt;em&gt;Part of the &lt;a href="https://keibisoft.com/tools/keibidrop.html" rel="noopener noreferrer"&gt;KEIBIDROP&lt;/a&gt; development blog. KEIBIDROP is in active development. Release is coming soon.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;KEIBIDROP mounts a virtual FUSE filesystem for each peer. When Peer A creates or modifies files, the changes sync to Peer B's mount in real-time through encrypted gRPC. We already made git work on a single FUSE mount (&lt;a href="https://keibisoft.com/blog/keibidrop-git-inside-fuse.html" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;). The question was: what happens when Peer A clones a repo into their mount, and Peer B tries to use it through theirs?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Bob (FUSE mount)                     Alice (FUSE mount)
+-- .git/                            +-- .git/          &amp;lt;- synced from Bob
|   +-- HEAD                         |   +-- HEAD
|   +-- config                       |   +-- config
|   +-- objects/pack/                |   +-- objects/pack/
|   |   +-- pack-abc123.pack (2MB)   |   |   +-- pack-abc123.pack
|   +-- refs/heads/main              |   +-- refs/heads/main
+-- go.mod                           +-- go.mod
+-- README.md                        +-- README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seven bugs appeared across three debugging sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug 1: The rename race
&lt;/h2&gt;

&lt;p&gt;Git uses atomic writes: create &lt;code&gt;config.lock&lt;/code&gt;, write content, close, rename to &lt;code&gt;config&lt;/code&gt;. KEIBIDROP sends &lt;code&gt;ADD_FILE&lt;/code&gt; on close and &lt;code&gt;RENAME_FILE&lt;/code&gt; on rename. The prefetch goroutine would download &lt;code&gt;config.lock&lt;/code&gt;, finish before the &lt;code&gt;RENAME_FILE&lt;/code&gt; arrived, and skip the disk rename because the in-memory path hadn't changed yet.&lt;/p&gt;

&lt;p&gt;Fix: the &lt;code&gt;RENAME_FILE&lt;/code&gt; handler now does &lt;code&gt;os.Rename&lt;/code&gt; on disk before updating maps. The prefetch cleanup becomes a redundant safety net.&lt;/p&gt;
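
&lt;p&gt;Condensed, the ordering fix looks like this (hypothetical names, not the real handler): disk first, maps second.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// handleRename applies a peer's RENAME_FILE: os.Rename runs
// before the in-memory path set is updated, so a prefetch that
// finishes in between already sees the final name on disk.
func handleRename(files map[string]bool, oldPath, newPath string) error {
    if err := os.Rename(oldPath, newPath); err != nil {
        return err
    }
    // Only after the disk is consistent do we retarget the map.
    delete(files, oldPath)
    files[newPath] = true
    return nil
}

func main() {
    dir, _ := os.MkdirTemp("", "keibidemo")
    defer os.RemoveAll(dir)
    lock := filepath.Join(dir, "config.lock")
    cfg := filepath.Join(dir, "config")
    os.WriteFile(lock, []byte("[core]"), 0o644)
    files := map[string]bool{lock: true}
    err := handleRename(files, lock, cfg)
    _, statErr := os.Stat(cfg)
    fmt.Println(err == nil, statErr == nil, files[cfg])
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;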

&lt;h2&gt;
  
  
  Bug 2: Pack file corruption (20 missing bytes)
&lt;/h2&gt;

&lt;p&gt;Alice's pack file was 2,182,186 bytes. Bob's was 2,182,206 bytes. Git's &lt;code&gt;index-pack&lt;/code&gt; appends a 20-byte SHA-1 checksum &lt;em&gt;after&lt;/em&gt; the file is closed but &lt;em&gt;before&lt;/em&gt; the rename. KEIBIDROP sent the &lt;code&gt;ADD_FILE&lt;/code&gt; notification at close time with the pre-checksum size.&lt;/p&gt;

&lt;p&gt;Fix: after renaming on disk, compare local file size with the size in the &lt;code&gt;RENAME_FILE&lt;/code&gt; notification. If they differ, trigger a re-download.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug 3: macFUSE cache poisoning
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;git status&lt;/code&gt; showed &lt;code&gt;deleted: go.mod&lt;/code&gt; even though the file existed on disk. macFUSE's &lt;code&gt;negative_vncache&lt;/code&gt; option caches ENOENT results. macOS probed Alice's mount (Spotlight, fsevents) before the file arrived, the kernel cached "doesn't exist", and subsequent reads never reached our FUSE handler.&lt;/p&gt;

&lt;p&gt;Fix: remove &lt;code&gt;negative_vncache&lt;/code&gt; from mount options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug 4: The 612-file hang
&lt;/h2&gt;

&lt;p&gt;Cloning a large repo (612 files) hung after checkout completed. Every FUSE &lt;code&gt;Release&lt;/code&gt; sent a gRPC notification. 612 files closing in rapid succession overwhelmed the connection.&lt;/p&gt;

&lt;p&gt;Three-part fix:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;BatchNotify&lt;/code&gt; RPC -- batch 64 notifications into a single gRPC call&lt;/li&gt;
&lt;li&gt;Per-path debounce (200ms deadline, reset on each update) on the sender side&lt;/li&gt;
&lt;li&gt;Prefetch semaphore (max 8 concurrent) on the receiver side
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight protobuf"&gt;&lt;code&gt;&lt;span class="k"&gt;rpc&lt;/span&gt; &lt;span class="n"&gt;BatchNotify&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BatchNotifyRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BatchNotifyResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;message&lt;/span&gt; &lt;span class="nc"&gt;BatchNotifyRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;repeated&lt;/span&gt; &lt;span class="n"&gt;NotifyRequest&lt;/span&gt; &lt;span class="na"&gt;notifications&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;uint64&lt;/span&gt; &lt;span class="na"&gt;seq&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;uint64&lt;/span&gt; &lt;span class="na"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
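
&lt;p&gt;The per-path debounce (part 2) can be sketched with one timer per path (my illustrative Go, not the real sender):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "fmt"
    "sync"
    "time"
)

// debouncer coalesces per-path notifications: each update resets
// that path's timer, so a burst of closes on one file results in
// a single notification once the path goes quiet.
type debouncer struct {
    mu      sync.Mutex
    pending map[string]*time.Timer
    send    func(path string)
    delay   time.Duration
}

func newDebouncer(delay time.Duration, send func(string)) *debouncer {
    return &amp;amp;debouncer{
        pending: make(map[string]*time.Timer),
        send:    send,
        delay:   delay,
    }
}

func (d *debouncer) notify(path string) {
    d.mu.Lock()
    defer d.mu.Unlock()
    if t, ok := d.pending[path]; ok {
        t.Reset(d.delay) // burst: push the deadline back
        return
    }
    d.pending[path] = time.AfterFunc(d.delay, func() {
        d.mu.Lock()
        delete(d.pending, path)
        d.mu.Unlock()
        d.send(path)
    })
}

func main() {
    var mu sync.Mutex
    sent := 0
    d := newDebouncer(50*time.Millisecond, func(path string) {
        mu.Lock()
        sent++
        mu.Unlock()
    })
    for i := 0; i &amp;lt; 100; i++ { // 100 rapid closes on one file
        d.notify(".git/config.lock")
    }
    time.Sleep(150 * time.Millisecond)
    mu.Lock()
    fmt.Println(sent) // one coalesced notification
    mu.Unlock()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;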



&lt;h2&gt;
  
  
  Bug 5: 190,000 log lines
&lt;/h2&gt;

&lt;p&gt;After fixing the notification flood, the clone still hung for 30 seconds. Turns out we had 207,000 getattr calls and 190,000 ENOENT error logs, all written synchronously. macOS probes hundreds of paths per directory (Spotlight, fsevents, &lt;code&gt;.DS_Store&lt;/code&gt;, resource forks) and without &lt;code&gt;negative_vncache&lt;/code&gt; every probe hits our handler.&lt;/p&gt;

&lt;p&gt;Fix: remove trace logs from hot-path handlers. Stop logging ENOENT as an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"FUSE getattr"&lt;/span&gt; Log_Bob.txt    207,137
&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"Failed to lstat"&lt;/span&gt; Log_Bob.txt  190,614
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bug 6: Hook permission denied
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;fatal: cannot exec '.git/hooks/post-checkout': Operation not permitted&lt;/code&gt;. macOS Gatekeeper blocks executing scripts from FUSE mounts.&lt;/p&gt;

&lt;p&gt;Fix: add &lt;code&gt;defer_permissions&lt;/code&gt; to mount options.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bug 7: LFS corruption (intermediate sizes + stale notifications)
&lt;/h2&gt;

&lt;p&gt;Git LFS downloads a 420MB XML file incrementally into &lt;code&gt;.git/lfs/incomplete/&amp;lt;hash&amp;gt;&lt;/code&gt;. Each intermediate close triggers &lt;code&gt;ADD_FILE&lt;/code&gt; with the current size. The peer starts prefetching at 100MB, restarts at 200MB, restarts at 300MB. Then LFS renames &lt;code&gt;incomplete/&amp;lt;hash&amp;gt;&lt;/code&gt; to &lt;code&gt;objects/&amp;lt;sha&amp;gt;/&amp;lt;hash&amp;gt;&lt;/code&gt;, but the debounced ADD_FILE for the old path fires after the rename, pointing to a file that no longer exists.&lt;/p&gt;

&lt;p&gt;Fix: per-path debounce with RENAME retargeting. When a RENAME arrives, the pending ADD_FILE for the old path gets its path rewritten to the new path instead of being deleted or sent stale.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="n"&gt;bindings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NotifyType_RENAME_FILE&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;old&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;exists&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;pending&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OldPath&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="n"&gt;exists&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pending&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OldPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;old&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;  &lt;span class="c"&gt;// retarget to new path&lt;/span&gt;
        &lt;span class="n"&gt;pending&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;old&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;immediate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;immediate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We tried three approaches: deleting the pending notification (broke content delivery), sending it stale (wrong path), retargeting (correct).&lt;/p&gt;

&lt;h2&gt;
  
  
  The result
&lt;/h2&gt;

&lt;p&gt;After all fixes, the full git workflow works between encrypted P2P FUSE peers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Bob clones a repo
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;MountBob &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git clone git@github.com:user/repo.git
&lt;span class="go"&gt;Cloning into 'repo'... done.

&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Alice sees it immediately
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;MountAlice/repo &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git status
&lt;span class="go"&gt;On branch main -- nothing to commit, working tree clean

&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Alice creates a branch and commits
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; test_me
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; tst.txt &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git add tst.txt &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Commit for test"&lt;/span&gt;
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;Bob sees the branch and commit
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;MountBob/repo &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git log
&lt;span class="gp"&gt;commit 38e23bc (HEAD -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;test_me&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;    Commit for test
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tested with a small repo (24 objects, 2MB, 8 files) and a large repo (2339 objects, 257MB + 3.71 GiB LFS, 612 files).&lt;/p&gt;

&lt;h2&gt;
  
  
  What we learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;FUSE handlers must rename on disk, not just in memory. Goroutines race with notification handlers, especially for tiny files that download in milliseconds.&lt;/li&gt;
&lt;li&gt;Git writes data, closes the file, then renames. Between close and rename, other processes can modify the file (index-pack appending checksums). The rename notification has the correct size. Use it.&lt;/li&gt;
&lt;li&gt;macFUSE's &lt;code&gt;negative_vncache&lt;/code&gt; silently caches ENOENT results. Our handler was never called. These bugs don't show up in FUSE logs because the kernel short-circuits the call.&lt;/li&gt;
&lt;li&gt;612 files = 612 goroutines = 612 gRPC streams = connection collapse. A bounded channel + batching worker gives backpressure without blocking the caller.&lt;/li&gt;
&lt;li&gt;190,000 synchronous log writes caused a 30-second hang. The logging seemed harmless with 20 files. 612 files with macOS probing turned it into 400,000 disk writes.&lt;/li&gt;
&lt;li&gt;For debounce + rename: you can't delete the pending notification (peer never gets content) and you can't send it stale (wrong path). Retarget it to the new path. Three attempts to get this right.&lt;/li&gt;
&lt;li&gt;None of these bugs are git-specific. Any application that creates files through one peer's mount and reads them through another would hit the same problems. Git just exercises every edge case.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;More from the KEIBIDROP blog: &lt;a href="https://keibisoft.com/blog.html" rel="noopener noreferrer"&gt;full series&lt;/a&gt; | &lt;a href="https://keibisoft.com/tools/keibidrop.html" rel="noopener noreferrer"&gt;product page&lt;/a&gt; | &lt;a href="https://keibisoft.com/tools/keibidrop-faq.html" rel="noopener noreferrer"&gt;FAQ&lt;/a&gt;&lt;/p&gt;

</description>
      <category>go</category>
      <category>git</category>
      <category>filesystems</category>
      <category>fml</category>
    </item>
    <item>
      <title>I built a peer-to-peer file transfer tool with post-quantum encryption</title>
      <dc:creator>Marius-Florin Cristian</dc:creator>
      <pubDate>Thu, 26 Mar 2026 08:58:20 +0000</pubDate>
      <link>https://dev.to/mfc_keibisoft/i-built-a-peer-to-peer-file-transfer-tool-with-post-quantum-encryption-2pc5</link>
      <guid>https://dev.to/mfc_keibisoft/i-built-a-peer-to-peer-file-transfer-tool-with-post-quantum-encryption-2pc5</guid>
      <description>&lt;p&gt;&lt;em&gt;KEIBIDROP is in active development. Release is coming soon.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I've been building &lt;a href="https://keibisoft.com/tools/keibidrop.html" rel="noopener noreferrer"&gt;KEIBIDROP&lt;/a&gt; for the past year. It's a file sharing tool where files go directly from one computer to another. No cloud, no accounts, no middleman.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Two people run KEIBIDROP, exchange a short code over Signal or Telegram, and connect. Both sides can browse each other's shared files in real time. With FUSE mode, the peer's files show up as a virtual folder in your file manager. You can open, copy, and read files as if they were local.&lt;/p&gt;

&lt;p&gt;The connection is encrypted with ChaCha20-Poly1305 over a hybrid ML-KEM + X25519 key exchange. ML-KEM is the NIST post-quantum standard (formerly Kyber). If quantum computers eventually break classical crypto, traffic recorded today still can't be decrypted retroactively ("harvest now, decrypt later" attacks).&lt;/p&gt;

&lt;h2&gt;
  
  
  How it's different from existing tools
&lt;/h2&gt;

&lt;p&gt;WeTransfer and Dropbox upload your files to their servers. AirDrop only works between Apple devices. croc and magic-wormhole are one-shot transfers. Syncthing is for background sync across your own devices.&lt;/p&gt;

&lt;p&gt;KEIBIDROP is for two people who want to share files right now, see each other's files live, and not trust a third party with their data. Full comparison: &lt;a href="https://keibisoft.com/tools/keibidrop-vs-alternatives.html" rel="noopener noreferrer"&gt;KEIBIDROP vs alternatives&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Go 1.24 for backend, networking, cryptography, FUSE&lt;/li&gt;
&lt;li&gt;Rust + Slint for the native desktop GUI (no Electron, 20MB binary)&lt;/li&gt;
&lt;li&gt;gRPC + Protocol Buffers for peer communication&lt;/li&gt;
&lt;li&gt;cgofuse for cross-platform FUSE (macOS, Linux, Windows)&lt;/li&gt;
&lt;li&gt;~11,000 lines of Go, ~5,000 lines of tests&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Three interfaces
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Desktop GUI&lt;/strong&gt; for drag-and-drop. &lt;strong&gt;Interactive CLI&lt;/strong&gt; for terminal users. &lt;strong&gt;Agent CLI (kd)&lt;/strong&gt; with JSON output over Unix socket, built for AI agents and automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  IPv6 only (for now)
&lt;/h2&gt;

&lt;p&gt;Both peers need IPv6. This keeps the design simple and avoids leaking IP metadata to STUN/TURN servers. Most ISPs support IPv6 today. NAT traversal for IPv4 is planned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://keibisoft.com/tools/keibidrop.html" rel="noopener noreferrer"&gt;Product page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/KeibiSoft/KeibiDrop" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://keibisoft.com/tools/keibidrop-faq.html" rel="noopener noreferrer"&gt;FAQ&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://keibisoft.com/blog.html" rel="noopener noreferrer"&gt;Technical blog series&lt;/a&gt; (13 posts covering FUSE deadlocks, post-quantum gRPC, cross-platform sync, and more)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Free and open source under MPL 2.0.&lt;/p&gt;

</description>
      <category>go</category>
      <category>security</category>
      <category>opensource</category>
      <category>networking</category>
    </item>
  </channel>
</rss>
