<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dmitrii Zatona</title>
    <description>The latest articles on DEV Community by Dmitrii Zatona (@dzatona).</description>
    <link>https://dev.to/dzatona</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2900631%2F6ab81cab-e305-462f-9e3f-b4d787538a8e.jpg</url>
      <title>DEV Community: Dmitrii Zatona</title>
      <link>https://dev.to/dzatona</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dzatona"/>
    <language>en</language>
    <item>
      <title>The Super-Tree: How One Merkle Tree Proves Another</title>
      <dc:creator>Dmitrii Zatona</dc:creator>
      <pubDate>Sun, 01 Mar 2026 02:10:02 +0000</pubDate>
      <link>https://dev.to/dzatona/the-super-tree-how-one-merkle-tree-proves-another-417c</link>
      <guid>https://dev.to/dzatona/the-super-tree-how-one-merkle-tree-proves-another-417c</guid>
      <description>&lt;p&gt;A transparency log that lives forever will eventually contain billions of entries. A single Merkle tree with a billion leaves has a depth of 30, which means inclusion proofs are 30 hashes long and every append touches 30 nodes. That is manageable. But the tree file on disk grows without bound, proof generation requires random access across the entire structure, and rotating keys or changing operational parameters means you are stuck with decisions you made on day one.&lt;/p&gt;

&lt;p&gt;ATL Protocol splits the problem into two levels: short-lived &lt;strong&gt;Data Trees&lt;/strong&gt; and an eternal &lt;strong&gt;Super-Tree&lt;/strong&gt;. Each Data Tree accumulates entries for a bounded period (configurable -- say, 24 hours or 100,000 entries). When the period ends, the tree is closed, its root hash becomes a leaf in the Super-Tree, and a fresh Data Tree starts accepting new entries. The Super-Tree is itself a Merkle tree -- it grows by one leaf every time a Data Tree closes.&lt;/p&gt;

&lt;p&gt;I want to walk through why this architecture exists, how the two levels connect cryptographically, and how two receipts held by two different people can independently prove that the entire log history was never rewritten -- without contacting the server.&lt;/p&gt;

&lt;h2&gt;Why Not One Big Tree&lt;/h2&gt;

&lt;p&gt;The obvious approach is a single append-only Merkle tree. Every entry is a leaf. The root hash represents the entire log state. This works, but it has operational problems.&lt;/p&gt;

&lt;p&gt;First, the tree file grows forever. ATL uses memory-mapped slab files where every node (leaves and intermediates) is stored at a fixed offset. A tree with 1 million leaves has approximately 2 million nodes, each 32 bytes -- that is 64 MB per slab. With a single tree, you eventually have a multi-gigabyte file that must remain memory-mapped for proof generation. Splitting into bounded Data Trees means each slab is a fixed, predictable size.&lt;/p&gt;
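
&lt;p&gt;The slab math is easy to sanity-check. A back-of-envelope sketch, assuming exactly one 32-byte slot per node and ignoring any file header (the real slab layout may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;/// A full binary tree over n leaves stores n leaf nodes plus
/// n - 1 internal nodes, each a 32-byte hash.
fn slab_bytes(leaves: u64) -&amp;gt; u64 {
    (2 * leaves - 1) * 32
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;One million leaves gives 63,999,968 bytes, just under 64 MB. Let a single tree grow to a billion leaves and that becomes roughly 64 GB that must stay mapped for proof generation.&lt;/p&gt;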

&lt;p&gt;Second, key rotation. If the log operator wants to rotate their Ed25519 signing key, a single-tree design forces a choice: sign new entries with the new key (breaking consistency with old checkpoints) or keep the old key forever (defeating the purpose of rotation). With Data Trees, each tree has its own checkpoint signed at close time. A tree boundary is a natural place to rotate keys.&lt;/p&gt;

&lt;p&gt;Third, anchoring granularity. In ATL Protocol v2.0, RFC 3161 timestamps anchor Data Tree roots, and Bitcoin OTS anchors the Super Root. The Super Root changes every time a Data Tree closes, so each new snapshot gets its own OTS anchor -- but each anchor commits to the entire log history up to that point, not just one tree. TSA timestamps give every Data Tree a seconds-level proof of time, while Bitcoin gives each Super Root snapshot permanent immutability. A single-tree design would force a choice between frequent, expensive Bitcoin anchoring and infrequent timestamps.&lt;/p&gt;

&lt;h2&gt;The Genesis Leaf&lt;/h2&gt;

&lt;p&gt;When a new Data Tree starts, its first entry is not user data. It is a &lt;strong&gt;genesis leaf&lt;/strong&gt; -- a cryptographic link to the previous tree.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;GENESIS_DOMAIN&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;b"ATL-CHAIN-v1"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;compute_genesis_leaf_hash&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prev_root_hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Hash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prev_tree_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Hash&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;hasher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Sha256&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;hasher&lt;/span&gt;&lt;span class="nf"&gt;.update&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;LEAF_PREFIX&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="n"&gt;hasher&lt;/span&gt;&lt;span class="nf"&gt;.update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;GENESIS_DOMAIN&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;hasher&lt;/span&gt;&lt;span class="nf"&gt;.update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prev_root_hash&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;hasher&lt;/span&gt;&lt;span class="nf"&gt;.update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prev_tree_size&lt;/span&gt;&lt;span class="nf"&gt;.to_le_bytes&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="n"&gt;hasher&lt;/span&gt;&lt;span class="nf"&gt;.finalize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The genesis leaf hash is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SHA256(0x00 || "ATL-CHAIN-v1" || prev_root_hash || prev_tree_size_le)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This binds the new tree to the exact state of the previous tree at close time -- not just its root hash, but also its size. If the operator tries to rewrite the previous tree (changing entries, adding entries, removing entries), the root hash or the size changes, and the genesis leaf in the next tree no longer matches. The chain is broken, and any verifier who holds a receipt from the previous tree will detect the inconsistency.&lt;/p&gt;

&lt;p&gt;The domain separator &lt;code&gt;ATL-CHAIN-v1&lt;/code&gt; prevents confusion between genesis leaves and regular data leaves. Without it, an attacker could craft a data entry whose serialized bytes form a valid genesis-leaf preimage, making a forged chain link indistinguishable from a real one during chain verification. The domain separator makes this a different hash domain entirely -- the preimage spaces for genesis leaves and data leaves do not overlap.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;0x00&lt;/code&gt; prefix is the RFC 6962 leaf prefix, shared with data leaves. This is intentional: the genesis leaf occupies a regular leaf slot in the Data Tree. The Merkle tree does not need special handling for it. It is leaf 0, hashed like any other leaf, included in proofs like any other leaf. The distinction between "genesis" and "data" exists only in the semantic layer, not in the tree structure.&lt;/p&gt;

&lt;h2&gt;The Super-Tree&lt;/h2&gt;

&lt;p&gt;When a Data Tree is closed, its root hash is appended to the Super-Tree as a new leaf. The Super-Tree is a standard RFC 6962 Merkle tree -- the same data structure, the same inclusion proofs, the same consistency proofs. The only difference is what its leaves represent: not individual documents, but entire Data Trees.&lt;/p&gt;

&lt;p&gt;The Super-Tree root (&lt;code&gt;super_root&lt;/code&gt;) is the single hash that represents the entire log history. Every Data Tree that ever existed, every entry that was ever written, is committed to by this one root. If you trust the &lt;code&gt;super_root&lt;/code&gt;, you trust every entry in every Data Tree.&lt;/p&gt;

&lt;p&gt;In ATL Protocol v2.0, Bitcoin OTS anchors the &lt;code&gt;super_root&lt;/code&gt;, not individual Data Tree roots. Each time the Super-Tree grows, the new Super Root gets its own OTS anchor -- and each anchor commits to every Data Tree in the log up to that point. TSA timestamps anchor individual Data Tree roots for faster, per-tree time proofs.&lt;/p&gt;

&lt;h2&gt;What a Receipt Contains&lt;/h2&gt;

&lt;p&gt;Every Receipt-Full in ATL Protocol v2.0 includes a &lt;code&gt;super_proof&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"genesis_super_root"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sha256:aabb..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"data_tree_index"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;150&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"super_tree_size"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;152&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"super_root"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sha256:ccdd..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"inclusion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"sha256:..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sha256:..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sha256:..."&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"consistency_to_origin"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"sha256:..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sha256:..."&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Six fields. Each one does something specific.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;genesis_super_root&lt;/code&gt; is the Super-Tree root when it contained exactly one leaf -- the first Data Tree ever closed. This is the identity of the log. Two receipts with the same &lt;code&gt;genesis_super_root&lt;/code&gt; claim to be from the same log instance.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;data_tree_index&lt;/code&gt; is the position of this receipt's Data Tree in the Super-Tree. Data Tree 0 is the first tree the log ever created. Data Tree 150 is the 151st.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;super_tree_size&lt;/code&gt; is how many Data Trees the Super-Tree contained at the time this receipt was issued.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;super_root&lt;/code&gt; is the Super-Tree root hash at that size.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;inclusion&lt;/code&gt; is a Merkle inclusion proof -- the sibling hashes needed to reconstruct the &lt;code&gt;super_root&lt;/code&gt; from the Data Tree root at position &lt;code&gt;data_tree_index&lt;/code&gt;. This proves that this specific Data Tree is part of the Super-Tree.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;consistency_to_origin&lt;/code&gt; is a Merkle consistency proof from Super-Tree size 1 to &lt;code&gt;super_tree_size&lt;/code&gt;. This proves that the current Super-Tree is an append-only extension of the genesis state -- no historical Data Trees were removed or modified.&lt;/p&gt;
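
&lt;p&gt;Put together, the six fields map onto a small struct. This is an illustrative shape only -- the field names come from the receipt JSON above, while the Rust types are my assumption, not the ATL source:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;/// Illustrative shape of a receipt's super_proof (types are assumed).
struct SuperProof {
    genesis_super_root: String,  // log identity: Super-Tree root at size 1
    data_tree_index: u64,  // position of this receipt's Data Tree
    super_tree_size: u64,  // Data Trees closed at issue time
    super_root: String,  // Super-Tree root at that size
    inclusion: Vec&amp;lt;String&amp;gt;,  // siblings from Data Tree root to super_root
    consistency_to_origin: Vec&amp;lt;String&amp;gt;,  // RFC 9162 path from size 1
}

impl SuperProof {
    /// The structural checks that precede any hashing: a non-empty
    /// Super-Tree, and an index strictly inside it.
    fn structurally_valid(&amp;amp;self) -&amp;gt; bool {
        self.super_tree_size &amp;gt; 0 &amp;amp;&amp;amp; self.data_tree_index &amp;lt; self.super_tree_size
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;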

&lt;h2&gt;Verifying Super-Tree Inclusion&lt;/h2&gt;

&lt;p&gt;The first verification step: is this Data Tree actually in the Super-Tree?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;verify_super_inclusion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data_tree_root&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Hash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;SuperProof&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;AtlResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InvalidTreeSize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"super_tree_size cannot be zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.data_tree_index&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;LeafIndexOutOfBounds&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.data_tree_index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;tree_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;expected_super_root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="nf"&gt;.super_root_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;inclusion_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="nf"&gt;.inclusion_path_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;inclusion_proof&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;InclusionProof&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;leaf_index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.data_tree_index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;tree_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inclusion_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nf"&gt;verify_inclusion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data_tree_root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;inclusion_proof&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;expected_super_root&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Data Tree root is the leaf. The Super-Tree is the tree. Standard Merkle inclusion verification -- the same algorithm used for verifying individual entries within a Data Tree. The Super-Tree does not need special proof algorithms. It reuses the same &lt;code&gt;verify_inclusion&lt;/code&gt; function because it is the same data structure.&lt;/p&gt;

&lt;p&gt;Two structural checks before any cryptographic work: the tree size cannot be zero (there is no Super-Tree with no Data Trees), and the index must be strictly less than the size (there is no position 150 in a tree with 100 leaves). These catch malformed proofs before any hashing happens.&lt;/p&gt;
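
&lt;p&gt;For readers who want the shared algorithm itself, here is a self-contained sketch of RFC 9162 inclusion verification. It substitutes the standard library's 64-bit SipHash for SHA-256 so it runs without dependencies -- a toy stand-in, not the production hash:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy 64-bit digest standing in for SHA-256.
fn digest(data: &amp;amp;[u8]) -&amp;gt; u64 {
    let mut s = DefaultHasher::new();
    data.hash(&amp;amp;mut s);
    s.finish()
}

fn leaf_hash(entry: &amp;amp;[u8]) -&amp;gt; u64 {
    let mut buf = vec![0x00u8]; // RFC 6962 leaf prefix
    buf.extend_from_slice(entry);
    digest(&amp;amp;buf)
}

fn node_hash(left: u64, right: u64) -&amp;gt; u64 {
    let mut buf = vec![0x01u8]; // RFC 6962 node prefix
    buf.extend_from_slice(&amp;amp;left.to_be_bytes());
    buf.extend_from_slice(&amp;amp;right.to_be_bytes());
    digest(&amp;amp;buf)
}

// RFC 9162, Section 2.1.3.2: fold the sibling path into a candidate root.
fn verify_inclusion(leaf: u64, index: u64, size: u64, path: &amp;amp;[u64], root: u64) -&amp;gt; bool {
    if index &amp;gt;= size {
        return false; // the structural check from the text
    }
    let (mut fnode, mut snode) = (index, size - 1);
    let mut r = leaf;
    for &amp;amp;p in path {
        if snode == 0 {
            return false; // path is longer than the tree is deep
        }
        if fnode &amp;amp; 1 == 1 || fnode == snode {
            r = node_hash(p, r);
            if fnode &amp;amp; 1 == 0 {
                while fnode &amp;amp; 1 == 0 &amp;amp;&amp;amp; fnode != 0 {
                    fnode &amp;gt;&amp;gt;= 1;
                    snode &amp;gt;&amp;gt;= 1;
                }
            }
        } else {
            r = node_hash(r, p);
        }
        fnode &amp;gt;&amp;gt;= 1;
        snode &amp;gt;&amp;gt;= 1;
    }
    snode == 0 &amp;amp;&amp;amp; r == root
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Nothing in the fold knows whether the leaf is a document entry or a Data Tree root -- which is exactly why the Super-Tree can reuse it.&lt;/p&gt;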

&lt;h2&gt;Verifying Consistency to Origin&lt;/h2&gt;

&lt;p&gt;The second verification step: is the Super-Tree an honest extension of the genesis state?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;verify_consistency_to_origin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;SuperProof&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;AtlResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;bool&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InvalidTreeSize&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"super_tree_size cannot be zero"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;genesis_super_root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="nf"&gt;.genesis_super_root_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;super_root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="nf"&gt;.super_root_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.consistency_to_origin&lt;/span&gt;&lt;span class="nf"&gt;.is_empty&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;use_constant_time_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;genesis_super_root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;super_root&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InvalidProofStructure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="s"&gt;"consistency_to_origin must be empty for super_tree_size 1, got {} hashes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.consistency_to_origin&lt;/span&gt;&lt;span class="nf"&gt;.len&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;consistency_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="nf"&gt;.consistency_to_origin_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;consistency_proof&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ConsistencyProof&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;from_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;to_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;super_proof&lt;/span&gt;&lt;span class="py"&gt;.super_tree_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;consistency_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nf"&gt;verify_consistency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;consistency_proof&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;genesis_super_root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;super_root&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an RFC 9162 consistency proof, but with a specific twist: &lt;code&gt;from_size&lt;/code&gt; is always 1. The proof says: "the Super-Tree at size 1 (containing just the first Data Tree) is a prefix of the Super-Tree at size N." If this holds, then Data Tree 0 is still at position 0, unchanged. Data Tree 1 is still at position 1. And so on, up to the current size.&lt;/p&gt;

&lt;p&gt;The genesis case (super_tree_size == 1) is a degenerate consistency proof: the only valid state is &lt;code&gt;genesis_super_root == super_root&lt;/code&gt; with an empty proof path. If someone provides a non-empty path for size 1, the proof is structurally invalid -- there is nothing to prove consistency between a tree of size 1 and itself except equality.&lt;/p&gt;

&lt;p&gt;Again, this reuses &lt;code&gt;verify_consistency&lt;/code&gt; -- the same RFC 9162 implementation I wrote about in a previous post. The Super-Tree does not reinvent consistency proofs. It parametrizes them differently (always from size 1), but the algorithm is identical.&lt;/p&gt;
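
&lt;p&gt;Pinning &lt;code&gt;from_size&lt;/code&gt; to 1 has a neat consequence: the consistency path degenerates into leaf 0's inclusion path, with the genesis root folded in as the left child at every level. Here is a self-contained sketch of just that special case, using a toy 64-bit hash in place of SHA-256 (helper names are mine, not ATL's):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy 64-bit digest standing in for SHA-256.
fn digest(data: &amp;amp;[u8]) -&amp;gt; u64 {
    let mut s = DefaultHasher::new();
    data.hash(&amp;amp;mut s);
    s.finish()
}

fn node_hash(left: u64, right: u64) -&amp;gt; u64 {
    let mut buf = vec![0x01u8]; // RFC 6962 node prefix
    buf.extend_from_slice(&amp;amp;left.to_be_bytes());
    buf.extend_from_slice(&amp;amp;right.to_be_bytes());
    digest(&amp;amp;buf)
}

// Consistency from size 1 to size n: leaf 0 is leftmost at every level,
// so the genesis root is always the left child on the way up.
fn verify_consistency_from_one(genesis_root: u64, size: u64, path: &amp;amp;[u64], super_root: u64) -&amp;gt; bool {
    if size == 0 {
        return false;
    }
    if size == 1 {
        // Degenerate case: empty path, roots must be equal.
        return path.is_empty() &amp;amp;&amp;amp; genesis_root == super_root;
    }
    // The path must cover every level above leaf 0: ceil(log2(size)) hashes.
    if path.len() as u32 != 64 - (size - 1).leading_zeros() {
        return false;
    }
    let mut r = genesis_root;
    for &amp;amp;sibling in path {
        r = node_hash(r, sibling);
    }
    r == super_root
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The general &lt;code&gt;verify_consistency&lt;/code&gt; handles arbitrary from-sizes; with the from-size fixed at 1 it collapses to this left-spine fold.&lt;/p&gt;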

&lt;h2&gt;Cross-Receipt Verification&lt;/h2&gt;

&lt;p&gt;This is the part that makes the two-level architecture worth the complexity.&lt;/p&gt;

&lt;p&gt;Suppose Alice receives a receipt today, and Bob receives a receipt next month. They have never communicated. The server might have been compromised in between. Can Alice and Bob independently determine whether the log was tampered with, without talking to the server?&lt;/p&gt;

&lt;p&gt;Yes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;verify_cross_receipts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;receipt_a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Receipt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;receipt_b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Receipt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;CrossReceiptVerificationResult&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;

    &lt;span class="c1"&gt;// Step 1: Both receipts must have super_proof&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;super_proof_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;receipt_a&lt;/span&gt;&lt;span class="py"&gt;.super_proof&lt;/span&gt;&lt;span class="nf"&gt;.as_ref&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;super_proof_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;receipt_b&lt;/span&gt;&lt;span class="py"&gt;.super_proof&lt;/span&gt;&lt;span class="nf"&gt;.as_ref&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// Step 2: Same genesis?&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;genesis_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof_a&lt;/span&gt;&lt;span class="nf"&gt;.genesis_super_root_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;genesis_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;super_proof_b&lt;/span&gt;&lt;span class="nf"&gt;.genesis_super_root_bytes&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;use_constant_time_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;genesis_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;genesis_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Different logs entirely&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Step 3: Both consistent with genesis?&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;consistency_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;verify_consistency_to_origin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;super_proof_a&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;consistency_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;verify_consistency_to_origin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;super_proof_b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;match&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;consistency_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;consistency_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="py"&gt;.history_consistent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;// ...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;result&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three checks, no server required:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Same genesis?&lt;/strong&gt; If &lt;code&gt;genesis_super_root&lt;/code&gt; differs, the receipts are from different log instances. Done.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Receipt A consistent with genesis?&lt;/strong&gt; If the consistency proof from size 1 to A's &lt;code&gt;super_tree_size&lt;/code&gt; verifies, then the Super-Tree at A's snapshot is an honest extension of genesis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Receipt B consistent with genesis?&lt;/strong&gt; Same check for B.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If both receipts are consistent with the same genesis, then by the transitive property of Merkle consistency, the history between them was not modified. The operator cannot have deleted Data Trees, reordered them, or substituted different content -- because both receipts independently prove that their respective Super-Tree snapshots are append-only extensions of the same origin.&lt;/p&gt;

&lt;p&gt;This works because consistency proofs are transitive. If the Super-Tree at size 50 is consistent with size 1, and the Super-Tree at size 100 is consistent with size 1, then size 100 is consistent with size 50. Any modification to the first 50 Data Trees would break at least one of the two consistency proofs.&lt;/p&gt;

&lt;p&gt;No communication between Alice and Bob. No server access. No trusted third party. Two receipts, one function call, and the math proves the rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why &lt;code&gt;from_size&lt;/code&gt; Is Always 1
&lt;/h2&gt;

&lt;p&gt;I could have designed the consistency proof to go from the previous receipt's Super-Tree size to the current one. That would be a shorter proof -- proving consistency from size 50 to size 100 requires fewer hashes than proving from size 1 to size 100.&lt;/p&gt;

&lt;p&gt;But it would also require the verifier to have the previous receipt. Cross-receipt verification would become sequential: to verify receipt C, you need receipt B, and to verify receipt B, you need receipt A, all the way back to the genesis.&lt;/p&gt;

&lt;p&gt;By always proving consistency from size 1, every receipt is self-contained. Each receipt independently proves its relationship to the origin. Verification requires O(1) receipts, not O(N). Any single receipt, in isolation, proves that the entire log history up to that point is an append-only extension of genesis.&lt;/p&gt;

&lt;p&gt;The cost is a slightly longer proof path. For a Super-Tree with 1000 Data Trees, the consistency proof from size 1 is at most &lt;code&gt;2 * ceil(log2(1000))&lt;/code&gt; = 20 hashes -- 640 bytes. For a Super-Tree with a million Data Trees, it is 40 hashes -- 1280 bytes. This is negligible compared to the value of self-contained verification.&lt;/p&gt;
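
&lt;p&gt;The arithmetic is easy to sanity-check. A minimal sketch (the function name is illustrative; 32 is the SHA-256 digest size):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;// Upper bound on an RFC 9162 consistency proof from size 1 to n leaves:
// at most 2 * ceil(log2(n)) hashes of 32 bytes each.
fn max_proof_bytes(n: u64) -&amp;gt; u64 {
    assert!(n &amp;gt;= 1);
    // ceil(log2(n)) via leading zeros: 0 for n = 1, 10 for n = 1000
    let ceil_log2 = u64::from(64 - (n - 1).leading_zeros());
    2 * ceil_log2 * 32
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For the sizes in the text: &lt;code&gt;max_proof_bytes(1000)&lt;/code&gt; gives 640 and &lt;code&gt;max_proof_bytes(1_000_000)&lt;/code&gt; gives 1280.&lt;/p&gt;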

&lt;h2&gt;
  
  
  The Two-Level Trust Chain
&lt;/h2&gt;

&lt;p&gt;Putting it all together, the verification chain for a single receipt is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Entry level:&lt;/strong&gt; Verify the document hash matches the receipt's &lt;code&gt;payload_hash&lt;/code&gt;. This proves "this document is what was anchored."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Tree level:&lt;/strong&gt; Verify the Merkle inclusion proof from the entry's leaf hash to the Data Tree root (&lt;code&gt;proof.root_hash&lt;/code&gt;). This proves "this entry is in this Data Tree."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Super-Tree level (inclusion):&lt;/strong&gt; Verify the Super-Tree inclusion proof from the Data Tree root to the Super Root (&lt;code&gt;super_proof.super_root&lt;/code&gt;). This proves "this Data Tree is in the Super-Tree."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Super-Tree level (consistency):&lt;/strong&gt; Verify the consistency proof from genesis to the current Super Root. This proves "the Super-Tree was never rewritten."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Anchor level:&lt;/strong&gt; Verify external anchors (TSA on the Data Tree root, Bitcoin OTS on the Super Root). This proves "these roots existed at these specific times, attested by parties outside the operator's control."&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each level builds on the one below it. Compromise any level and the chain breaks -- the verifier gets a definitive failure, not a silent pass.&lt;/p&gt;

&lt;p&gt;And because every level uses standard Merkle proofs (RFC 9162 inclusion and consistency), the entire verification stack is built from two primitives. One for "this leaf is in this tree." One for "this smaller tree is a prefix of this larger tree." Everything else is composition.&lt;/p&gt;




&lt;p&gt;The full implementation is open source: &lt;a href="https://github.com/evidentum-io/atl-core" rel="noopener noreferrer"&gt;github.com/evidentum-io/atl-core&lt;/a&gt; (Apache-2.0)&lt;/p&gt;

&lt;p&gt;Files discussed in this post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;src/core/verify/super_tree.rs&lt;/code&gt; -- Super-Tree verification: inclusion, consistency to origin, cross-receipt verification&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/core/merkle/crypto.rs&lt;/code&gt; -- genesis leaf hash with &lt;code&gt;ATL-CHAIN-v1&lt;/code&gt; domain separator&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/core/receipt.rs&lt;/code&gt; -- &lt;code&gt;SuperProof&lt;/code&gt; struct definition&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rust</category>
      <category>cryptography</category>
      <category>security</category>
      <category>opensource</category>
    </item>
    <item>
      <title>98 Bytes That Prove Your Document Existed</title>
      <dc:creator>Dmitrii Zatona</dc:creator>
      <pubDate>Sat, 21 Feb 2026 04:48:01 +0000</pubDate>
      <link>https://dev.to/dzatona/98-bytes-that-prove-your-document-existed-c5i</link>
      <guid>https://dev.to/dzatona/98-bytes-that-prove-your-document-existed-c5i</guid>
      <description>&lt;p&gt;A checkpoint in ATL Protocol is a signed snapshot of the transparency log state. It captures the root hash of the Merkle tree at a specific tree size, at a specific moment in time, from a specific log instance. If you have a checkpoint and the corresponding signature verifies, you know the exact state of the log at that point.&lt;/p&gt;

&lt;p&gt;The entire wire format is 98 bytes. Fixed size. No variable-length fields. No parser required.&lt;/p&gt;

&lt;p&gt;I want to walk through why it looks the way it does, byte by byte.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Layout
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Offset  Size   Field
0       18     Magic bytes: "ATL-Protocol-v1-CP" (ASCII)
18      32     Origin ID (SHA256 of instance UUID)
50      8      Tree size (u64 little-endian)
58      8      Timestamp (u64 little-endian, Unix nanoseconds)
66      32     Root hash (SHA256)
---
Total: 98 bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the complete wire format. The Ed25519 signature (64 bytes) and the key ID (32 bytes) are stored separately -- they are not part of the 98-byte blob. This separation is a deliberate design decision that I will explain below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixed Size, No Parser
&lt;/h2&gt;

&lt;p&gt;Everyone reaches for JSON first. Or Protobuf. Or CBOR. Or MessagePack. These are fine formats for APIs, configuration, and data interchange. They are not fine for the thing you cryptographically sign.&lt;/p&gt;

&lt;p&gt;The problem with variable-length encodings in a signed context is ambiguity. JSON has key ordering issues -- the same logical object can produce different byte sequences depending on which keys come first. This is well-known enough that RFC 8785 (JSON Canonicalization Scheme) exists specifically to address it. Protobuf has field ordering that is technically undefined in the spec; different implementations can serialize the same message into different bytes. CBOR has multiple valid encodings for the same value.&lt;/p&gt;

&lt;p&gt;When I sign a checkpoint, I need to know exactly what I signed. Not "a checkpoint with these fields" but "these exact 98 bytes." If the serialization is ambiguous, the verification is ambiguous. If two implementations serialize the same checkpoint differently, one of them will fail to verify a signature produced by the other.&lt;/p&gt;

&lt;p&gt;A fixed 98-byte blob has zero ambiguity. Any implementation in any language reads it the same way:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read 18 bytes of magic.&lt;/li&gt;
&lt;li&gt;Read 32 bytes of origin ID.&lt;/li&gt;
&lt;li&gt;Read 8 bytes, interpret as u64 little-endian -- that is the tree size.&lt;/li&gt;
&lt;li&gt;Read 8 bytes, interpret as u64 little-endian -- that is the timestamp.&lt;/li&gt;
&lt;li&gt;Read 32 bytes of root hash.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Done. No length prefixes to parse. No delimiters to scan for. No TLV encoding to decode. No canonical form debates, because the bytes ARE the canonical form.&lt;/p&gt;

&lt;p&gt;The serialization in Rust:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;to_bytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;CHECKPOINT_BLOB_SIZE&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0u8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;CHECKPOINT_BLOB_SIZE&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="nf"&gt;.copy_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CHECKPOINT_MAGIC&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="nf"&gt;.copy_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.origin&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="nf"&gt;.copy_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.tree_size&lt;/span&gt;&lt;span class="nf"&gt;.to_le_bytes&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;58&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="nf"&gt;.copy_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.timestamp&lt;/span&gt;&lt;span class="nf"&gt;.to_le_bytes&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;66&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;98&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="nf"&gt;.copy_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.root_hash&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;blob&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No allocations. No error paths. The return type is &lt;code&gt;[u8; 98]&lt;/code&gt;, not &lt;code&gt;Vec&amp;lt;u8&amp;gt;&lt;/code&gt; or &lt;code&gt;Result&amp;lt;Vec&amp;lt;u8&amp;gt;&amp;gt;&lt;/code&gt;. It cannot fail.&lt;/p&gt;
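
&lt;p&gt;Deserialization is the mirror image. A sketch of reading the blob back (the struct and error strings here are illustrative, not the atl-core definitions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;const CHECKPOINT_MAGIC: [u8; 18] = *b"ATL-Protocol-v1-CP";

// Illustrative field container; the real Checkpoint struct lives in atl-core.
pub struct CheckpointFields {
    pub origin: [u8; 32],
    pub tree_size: u64,
    pub timestamp: u64,
    pub root_hash: [u8; 32],
}

pub fn from_bytes(blob: [u8; 98]) -&amp;gt; Result&amp;lt;CheckpointFields, String&amp;gt; {
    if blob[0..18] != CHECKPOINT_MAGIC {
        return Err("invalid checkpoint magic".to_string());
    }
    let mut origin = [0u8; 32];
    origin.copy_from_slice(&amp;amp;blob[18..50]);
    let mut root_hash = [0u8; 32];
    root_hash.copy_from_slice(&amp;amp;blob[66..98]);
    Ok(CheckpointFields {
        origin,
        tree_size: u64::from_le_bytes(blob[50..58].try_into().unwrap()),
        timestamp: u64::from_le_bytes(blob[58..66].try_into().unwrap()),
        root_hash,
    })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Every read is at a fixed offset; the input length is enforced by the type, so the only failure mode is a magic mismatch in the first 18 bytes.&lt;/p&gt;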

&lt;h2&gt;
  
  
  The Signature Lives Outside
&lt;/h2&gt;

&lt;p&gt;This is the design decision that seems obvious in hindsight but that many implementations get wrong.&lt;/p&gt;

&lt;p&gt;The Ed25519 signature and the key ID are stored alongside the checkpoint, but they are not part of the 98-byte signed blob. The method &lt;code&gt;to_bytes()&lt;/code&gt; always produces the exact data that was signed. Always. No exceptions.&lt;/p&gt;

&lt;p&gt;Why does this matter? Consider the alternative: include the signature in the serialized format. Now the signature signs... what? The data plus the signature itself? That is a chicken-and-egg problem. You cannot compute a signature over data that includes the signature, because you do not have the signature yet when you need to hash the data.&lt;/p&gt;

&lt;p&gt;The common workaround is to define a "signing input" that differs from the "stored format" -- serialize the checkpoint without the signature for signing, then serialize it again with the signature for storage. Now you have two serialization formats for the same logical object, and every implementation must know which one to use when. Bugs creep in. Someone verifies the stored format instead of the signing input. The signature fails. Hours of debugging follow.&lt;/p&gt;

&lt;p&gt;I avoided this entirely. &lt;code&gt;to_bytes()&lt;/code&gt; returns what was signed. The signature is a separate field. Verification reconstructs the blob and checks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;verifier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;CheckpointVerifier&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;AtlResult&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.key_id&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;verifier&lt;/span&gt;&lt;span class="py"&gt;.key_id&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;InvalidSignature&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s"&gt;"key_id mismatch: checkpoint has {}, verifier has {}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nn"&gt;hex&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.key_id&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="nn"&gt;hex&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;verifier&lt;/span&gt;&lt;span class="py"&gt;.key_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;blob&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="nf"&gt;.to_bytes&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;verifier&lt;/span&gt;&lt;span class="nf"&gt;.verify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;blob&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;self&lt;/span&gt;&lt;span class="py"&gt;.signature&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the key ID check before the signature verification. &lt;code&gt;key_id&lt;/code&gt; is &lt;code&gt;SHA256(public_key)&lt;/code&gt;, and it is checked first. Ed25519 verification is not free -- it involves elliptic curve operations. If the key ID does not match, I reject immediately without touching the expensive crypto. This is a fast-rejection pattern: filter out obviously wrong keys with a cheap hash comparison before doing real work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Magic Bytes With a Version Baked In
&lt;/h2&gt;

&lt;p&gt;The first 18 bytes of every checkpoint are the ASCII string &lt;code&gt;"ATL-Protocol-v1-CP"&lt;/code&gt;. This serves two purposes.&lt;/p&gt;

&lt;p&gt;First, it identifies the format. If someone accidentally feeds a JPEG, a Protobuf message, or a random 98-byte buffer to a checkpoint parser, the magic bytes will not match. The error is &lt;code&gt;InvalidCheckpointMagic&lt;/code&gt;, not a confusing downstream failure from misinterpreted fields.&lt;/p&gt;

&lt;p&gt;Second, the &lt;code&gt;v1&lt;/code&gt; in the magic string bakes the wire format version into the data itself. If I ever need to change the checkpoint format -- add fields, change sizes, use a different hash function -- the new format gets different magic bytes (say, &lt;code&gt;"ATL-Protocol-v2-CP"&lt;/code&gt;). A v1 parser encountering a v2 checkpoint cleanly rejects it instead of silently misinterpreting bytes 18-50 as an origin ID when v2 might use those bytes for something else entirely.&lt;/p&gt;

&lt;p&gt;Eighteen bytes is generous for magic bytes. I chose a human-readable string over a shorter binary magic number because checkpoint blobs end up in hex dumps, log files, and debugging sessions. Seeing &lt;code&gt;41544C2D50726F746F636F6C2D76312D4350&lt;/code&gt; in a hex dump is recognizable. Seeing &lt;code&gt;89504E47&lt;/code&gt; requires you to remember that those are PNG magic bytes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nanosecond Timestamps
&lt;/h2&gt;

&lt;p&gt;The timestamp field is a u64 encoding Unix nanoseconds. Not seconds, not milliseconds -- nanoseconds.&lt;/p&gt;

&lt;p&gt;The range of a u64 in nanoseconds covers from 1970 to approximately the year 2554. That is sufficient.&lt;/p&gt;

&lt;p&gt;I chose nanosecond precision because a transparency log can process multiple entries within the same millisecond. If two checkpoints have the same millisecond-resolution timestamp, their ordering becomes ambiguous. With nanosecond resolution, even entries processed microseconds apart have distinct timestamps. The timestamp is generated by &lt;code&gt;current_timestamp_nanos()&lt;/code&gt; with clamping to &lt;code&gt;u64::MAX&lt;/code&gt; to handle the theoretical case where system time exceeds the representable range.&lt;/p&gt;
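
&lt;p&gt;A sketch of that clamping (the real helper lives in atl-core; this only shows the saturation from 128-bit nanoseconds down to &lt;code&gt;u64&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;use std::time::{SystemTime, UNIX_EPOCH};

// Unix nanoseconds as u64, saturating instead of panicking when the
// clock is before 1970 or beyond the u64 range (~year 2554).
fn current_timestamp_nanos() -&amp;gt; u64 {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_nanos())
        .unwrap_or(0);
    u64::try_from(nanos).unwrap_or(u64::MAX)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;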

&lt;h2&gt;
  
  
  Little-Endian Encoding
&lt;/h2&gt;

&lt;p&gt;The two u64 fields (tree size and timestamp) are encoded as little-endian. This is an explicit choice, not a default.&lt;/p&gt;

&lt;p&gt;Little-endian matches the native byte order of modern hardware: x86, ARM (in its default configuration), and RISC-V. On these architectures, encoding a u64 as little-endian is a no-op -- the in-memory representation is already little-endian. This eliminates an entire class of byte-swapping bugs on the most common platforms.&lt;/p&gt;

&lt;p&gt;I wrote an explicit test that verifies the encoding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test_endianness: 0x0102_0304_0506_0708 encodes as
[0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The test name is &lt;code&gt;test_endianness&lt;/code&gt;. It exists because byte order is the kind of thing that is trivially correct on one platform and silently wrong on another. The test makes the encoding a documented, verified property of the format, not an assumption.&lt;/p&gt;
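
&lt;p&gt;The same property, runnable anywhere Rust runs (helper names are mine, for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;// Little-endian: the least significant byte comes first on the wire.
fn encode_u64_le(x: u64) -&amp;gt; [u8; 8] {
    x.to_le_bytes()
}

fn decode_u64_le(bytes: [u8; 8]) -&amp;gt; u64 {
    u64::from_le_bytes(bytes)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;encode_u64_le(0x0102_0304_0506_0708)&lt;/code&gt; yields &lt;code&gt;[0x08, 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01]&lt;/code&gt;, matching the test output above.&lt;/p&gt;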

&lt;h2&gt;
  
  
  JSON Roundtrip, but Not for Crypto
&lt;/h2&gt;

&lt;p&gt;Checkpoints need a human-readable representation for APIs, debugging, and storage in systems that do not handle raw binary well. I implemented a &lt;code&gt;CheckpointJson&lt;/code&gt; struct with string encodings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hashes are encoded as &lt;code&gt;"sha256:&amp;lt;64 hex characters&amp;gt;"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Signatures are encoded as &lt;code&gt;"base64:&amp;lt;base64 string&amp;gt;"&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;to_json()&lt;/code&gt; and &lt;code&gt;from_json()&lt;/code&gt; methods convert between the binary checkpoint and this JSON representation. But -- and this is important -- cryptographic operations always operate on the 98-byte binary blob, never on JSON. The Ed25519 signature is computed over &lt;code&gt;to_bytes()&lt;/code&gt;, not over &lt;code&gt;serde_json::to_string()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To enforce this, the main &lt;code&gt;Checkpoint&lt;/code&gt; struct does not derive &lt;code&gt;Serialize&lt;/code&gt; or &lt;code&gt;Deserialize&lt;/code&gt;. There is no &lt;code&gt;#[derive(Serialize)]&lt;/code&gt; on it. This is intentional. If someone accidentally tries to JSON-serialize a &lt;code&gt;Checkpoint&lt;/code&gt; directly, the compiler stops them. They must go through &lt;code&gt;to_json()&lt;/code&gt; explicitly, which forces them to think about which representation they are working with.&lt;/p&gt;
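
&lt;p&gt;The prefixed string format is simple enough to sketch in a few lines (helper names are mine, not the atl-core API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;// "sha256:" followed by 64 lowercase hex characters.
fn encode_hash(hash: [u8; 32]) -&amp;gt; String {
    let hex: String = hash.iter().map(|b| format!("{b:02x}")).collect();
    format!("sha256:{hex}")
}

fn decode_hash(s: &amp;amp;str) -&amp;gt; Result&amp;lt;[u8; 32], String&amp;gt; {
    let hex = s.strip_prefix("sha256:").ok_or("missing sha256: prefix")?;
    if hex.len() != 64 {
        return Err("expected 64 hex characters".to_string());
    }
    let mut out = [0u8; 32];
    for (i, pair) in hex.as_bytes().chunks(2).enumerate() {
        let pair = std::str::from_utf8(pair).map_err(|e| e.to_string())?;
        out[i] = u8::from_str_radix(pair, 16).map_err(|e| e.to_string())?;
    }
    Ok(out)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The prefix makes the encoding self-describing: a reader seeing the string knows both the hash function and the expected length before decoding a single byte.&lt;/p&gt;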

&lt;h2&gt;
  
  
  Trust Model: Signature as Integrity Check
&lt;/h2&gt;

&lt;p&gt;Per ATL Protocol v2.0, the Ed25519 signature on a checkpoint is an integrity check, not a trust anchor. This is a deliberate departure from PKI-style trust.&lt;/p&gt;

&lt;p&gt;The signature proves: "this checkpoint was issued by the holder of this private key." It does not prove: "you should trust this key." The distinction matters.&lt;/p&gt;

&lt;p&gt;Trust in ATL Protocol comes from external anchors: RFC 3161 TSA timestamps (a trusted third-party timestamping authority attests to when the checkpoint existed) and Bitcoin OTS (the checkpoint hash is anchored in the Bitcoin blockchain, providing a timestamp that no single party can forge). The Ed25519 signature binds the checkpoint to a specific log instance. The external anchors prove the checkpoint existed at a specific point in time.&lt;/p&gt;

&lt;p&gt;This design means that compromising the Ed25519 signing key does not retroactively compromise past checkpoints. If a checkpoint was anchored via RFC 3161 or Bitcoin OTS before the key was compromised, that anchor remains valid regardless of what happens to the key afterwards. The key is not the trust root -- it is a consistency mechanism.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Test Suite
&lt;/h2&gt;

&lt;p&gt;The checkpoint implementation has a test suite that covers the wire format exhaustively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;test_checkpoint_blob_size&lt;/code&gt; -- verifies the blob is exactly 98 bytes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_magic_bytes&lt;/code&gt; -- verifies the first 18 bytes are &lt;code&gt;"ATL-Protocol-v1-CP"&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_endianness&lt;/code&gt; -- verifies little-endian encoding of u64 fields.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_wire_format_layout&lt;/code&gt; -- verifies every field is at the correct byte offset.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_sign_and_verify&lt;/code&gt; -- round-trip: create checkpoint, sign, verify.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_verify_wrong_key_fails&lt;/code&gt; -- signature from key A does not verify with key B.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_verify_tampered_data_fails&lt;/code&gt; -- flip a byte in the checkpoint, verification fails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_verify_tampered_signature_fails&lt;/code&gt; -- flip a byte in the signature, verification fails.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_json_roundtrip&lt;/code&gt; -- binary to JSON to binary produces identical bytes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;test_empty_tree_checkpoint&lt;/code&gt; -- a checkpoint for an empty tree (tree_size = 0) is valid.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each test name documents a specific property of the format. If a future change breaks any of these properties, the test name tells you exactly what broke and why it matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why 98 Bytes
&lt;/h2&gt;

&lt;p&gt;The number 98 is not arbitrary. It is the sum of the sizes of the minimum set of fields needed to identify a unique log state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;18 bytes to identify the format and version.&lt;/li&gt;
&lt;li&gt;32 bytes to identify which log instance (origin).&lt;/li&gt;
&lt;li&gt;8 bytes to identify how many entries are in the log (tree size).&lt;/li&gt;
&lt;li&gt;8 bytes to identify when this snapshot was taken (timestamp).&lt;/li&gt;
&lt;li&gt;32 bytes to cryptographically commit to the entire log contents (root hash).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There is nothing to remove. The magic bytes prevent misidentification. The origin prevents cross-log confusion. The tree size and timestamp locate the snapshot in the log's history. The root hash commits to every entry ever written. Remove any field and the checkpoint becomes ambiguous or forgeable.&lt;/p&gt;

&lt;p&gt;There is nothing to add, either. Anything beyond these fields -- the signature, the key ID, metadata, annotations -- belongs outside the signed blob. The 98 bytes are the statement. Everything else is commentary on the statement.&lt;/p&gt;




&lt;p&gt;The full implementation is open source: &lt;a href="https://github.com/evidentum-io/atl-core" rel="noopener noreferrer"&gt;github.com/evidentum-io/atl-core&lt;/a&gt; (Apache-2.0)&lt;/p&gt;

&lt;p&gt;The file discussed in this post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;src/core/checkpoint.rs&lt;/code&gt; -- checkpoint wire format, serialization, signing, verification (1080 lines)&lt;/li&gt;
&lt;li&gt;Ed25519 signatures via &lt;code&gt;ed25519-dalek&lt;/code&gt; crate&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rust</category>
      <category>cryptography</category>
      <category>security</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Shipped a Broken Consistency Proof Verifier. Here's How I Found Out.</title>
      <dc:creator>Dmitrii Zatona</dc:creator>
      <pubDate>Sat, 21 Feb 2026 03:52:45 +0000</pubDate>
      <link>https://dev.to/dzatona/i-shipped-a-broken-consistency-proof-verifier-heres-how-i-found-out-go8</link>
      <guid>https://dev.to/dzatona/i-shipped-a-broken-consistency-proof-verifier-heres-how-i-found-out-go8</guid>
      <description>&lt;p&gt;ATL Protocol needs append-only integrity. Every entry that was ever written to a transparency log must remain exactly where it is, exactly as it was. If an operator, a compromised server, or a software defect silently drops or mutates a past record, the entire trust model is gone. Not degraded -- gone.&lt;/p&gt;

&lt;p&gt;Consistency proofs enforce this. A consistency proof takes two snapshots of the same log -- say, one at size 4 and another at size 8 -- and proves that the first four entries in the larger log are byte-for-byte identical to the entries in the smaller log. No deletions. No substitutions. No reordering. The proof is a short sequence of hashes that lets any verifier independently confirm the relationship between the two tree roots.&lt;/p&gt;

&lt;p&gt;RFC 9162 (Certificate Transparency v2) specifies the exact algorithm for generating and verifying these proofs. I implemented it from scratch in Rust for ATL Protocol. Not a wrapper around an existing C library. Not a condensed version. The complete SUBPROOF algorithm from Section 2.1.4.&lt;/p&gt;

&lt;p&gt;Or at least, that was the plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Shortcut That Bit Me
&lt;/h2&gt;

&lt;p&gt;When I first read Section 2.1.4 of RFC 9162, the verification algorithm looked overengineered. Bit shifting, a boolean flag, nested loops, an alignment phase. I thought I understood the essence of what it was doing and could distill it to something simpler.&lt;/p&gt;

&lt;p&gt;So I wrote a simplified verifier. It did four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check that the proof path is not empty.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;from_size&lt;/code&gt; is a power of two, check that &lt;code&gt;path[0]&lt;/code&gt; matches &lt;code&gt;old_root&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Check that &lt;code&gt;path.len()&lt;/code&gt; does not exceed &lt;code&gt;2 * log2(to_size)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Return &lt;code&gt;true&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That last line is the problem. My simplified implementation never reconstructed the tree roots. It checked surface properties -- non-empty path, plausible length, matching first element in the power-of-two case -- and called it good. The tests I had at the time all passed, because valid proofs do have these properties. I moved on to other parts of the codebase.&lt;/p&gt;
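
&lt;p&gt;Reconstructed for illustration (not the shipped code verbatim), the whole verifier amounted to shape checks:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;type Hash = [u8; 32];

// Every check below inspects the shape of the proof.
// Nothing is ever hashed, and the new root is never consulted.
fn verify_consistency_simplified(
    from_size: u64,
    to_size: u64,
    old_root: Hash,
    _new_root: Hash, // accepted but unused -- the entire bug
    path: &amp;amp;[Hash],
) -&amp;gt; bool {
    if path.is_empty() {
        return false; // step 1: non-empty path
    }
    if from_size.is_power_of_two() &amp;amp;&amp;amp; path[0] != old_root {
        return false; // step 2: first hash matches old_root
    }
    let max_len = 2 * (64 - to_size.leading_zeros()) as usize;
    if path.len() &amp;gt; max_len {
        return false; // step 3: plausible length
    }
    true // step 4: return true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Any path that starts with the public &lt;code&gt;old_root&lt;/code&gt; and stays under the length bound sails through -- no matter what the claimed new root is.&lt;/p&gt;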

&lt;p&gt;I do not remember exactly when the doubt crept in. Probably while re-reading the RFC for an unrelated reason. The verification algorithm does two parallel root reconstructions from the same proof path, and my version did zero. That is not a minor difference. That is the entire security property missing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Attack
&lt;/h2&gt;

&lt;p&gt;I sat down and tried to break my own code. It took about five minutes.&lt;/p&gt;

&lt;p&gt;The old root is public -- anyone monitoring the log already has it. An attacker constructs a proof starting with &lt;code&gt;old_root&lt;/code&gt; (passing the "first hash matches" check), followed by arbitrary garbage. A proof length of 3 is well within any reasonable bound for an 8-leaf tree. My simplified verifier checks these surface properties, never reconstructs either root, and returns &lt;code&gt;true&lt;/code&gt;. The attacker has just "proved" that the log grew from 4 to 8 entries with content they control.&lt;/p&gt;

&lt;p&gt;The concrete attack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[test]&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test_regression_simplified_impl_vulnerability&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;leaves&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Vec&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Hash&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.map&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="nf"&gt;.collect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;old_root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;compute_root&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;leaves&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;new_root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;compute_root&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;leaves&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;attack_proof&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ConsistencyProof&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;from_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;to_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;old_root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// Passes simplified check&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0x00&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="c1"&gt;// Garbage&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0x00&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="c1"&gt;// Garbage&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nd"&gt;assert!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;verify_consistency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;attack_proof&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;old_root&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;new_root&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="s"&gt;"CRITICAL: Simplified implementation vulnerability not fixed!"&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This proof passes every check the simplified verifier performs. The path is non-empty. &lt;code&gt;from_size&lt;/code&gt; is 4 (a power of two), and &lt;code&gt;path[0]&lt;/code&gt; is indeed &lt;code&gt;old_root&lt;/code&gt;. The path length of 3 is well within &lt;code&gt;2 * log2(8) = 6&lt;/code&gt;. My old code would have returned &lt;code&gt;true&lt;/code&gt; and accepted a completely forged log extension.&lt;/p&gt;

&lt;p&gt;The test name is &lt;code&gt;test_regression_simplified_impl_vulnerability&lt;/code&gt;. The word "regression" is deliberate -- it means this is not a hypothetical attack I imagined for educational purposes. I wrote the broken code first. I found the hole. I rewrote the verifier. The test exists so that no future refactor can quietly reintroduce the same vulnerability. My mistake, immortalized in the test suite.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Consistency Proofs Actually Work
&lt;/h2&gt;

&lt;p&gt;A Merkle tree is a binary hash tree where leaves and internal nodes are domain-separated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;leaf_hash = SHA256(0x00 || data)
node_hash = SHA256(0x01 || left_hash || right_hash)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In an append-only log, new entries are added to the right side of the tree. The left subtrees are immutable once formed. A consistency proof leverages this structure: it provides the minimum set of intermediate hashes that a verifier needs to reconstruct both the old root (from the smaller tree) and the new root (from the larger tree) using a single walk over the proof path.&lt;/p&gt;

&lt;p&gt;The critical property -- the one my simplified version missed entirely -- is that the verifier does not simply check whether &lt;code&gt;old_root&lt;/code&gt; appears somewhere in the new tree. It must reconstruct both roots from the same proof path. If any hash in the path is incorrect -- even by a single bit -- both reconstructions produce wrong results, and the proof fails.&lt;/p&gt;

&lt;p&gt;The RFC 9162 algorithm works by decomposing the tree at the largest power-of-two boundary, then recursively collecting the sibling hashes that the verifier will need. The bit-level operations in the verification algorithm encode the tree structure implicitly: each bit of the size counters tells the verifier whether a proof hash belongs on the left or right at each level.&lt;/p&gt;

&lt;h2&gt;
  
  
  Five Structural Invariants
&lt;/h2&gt;

&lt;p&gt;After the rewrite, before the verification algorithm processes a single hash, my implementation enforces five structural invariants. Each invariant eliminates a category of malformed or malicious proofs with zero cryptographic work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invariant 1: Valid bounds.&lt;/strong&gt; &lt;code&gt;from_size&lt;/code&gt; must not exceed &lt;code&gt;to_size&lt;/code&gt;. A proof that claims the tree shrank is structurally impossible in an append-only log. Rejecting it immediately avoids nonsensical bit arithmetic downstream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;proof&lt;/span&gt;&lt;span class="py"&gt;.from_size&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;proof&lt;/span&gt;&lt;span class="py"&gt;.to_size&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InvalidConsistencyBounds&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;from_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;proof&lt;/span&gt;&lt;span class="py"&gt;.from_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;to_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;proof&lt;/span&gt;&lt;span class="py"&gt;.to_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Invariant 2: Same-size proofs require an empty path.&lt;/strong&gt; When &lt;code&gt;from_size == to_size&lt;/code&gt;, the only valid consistency proof is an empty one -- verification reduces to &lt;code&gt;old_root == new_root&lt;/code&gt;. If someone submits a non-empty path for equal sizes, they are attempting to inject hashes into the verification pipeline. Reject it without processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invariant 3: Zero old size requires an empty path.&lt;/strong&gt; Every tree is consistent with the empty tree by definition. A non-empty proof from size zero is an attempt to force the verifier to process attacker-controlled data for a case that requires no proof at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invariant 4: Non-trivial proofs need at least one hash.&lt;/strong&gt; When &lt;code&gt;from_size&lt;/code&gt; is not a power of two and &lt;code&gt;from_size != to_size&lt;/code&gt;, the proof must contain at least one hash. The RFC 9162 algorithm prepends &lt;code&gt;old_root&lt;/code&gt; to the proof path only when &lt;code&gt;from_size&lt;/code&gt; is a power of two. For non-power-of-two sizes, an empty path means the proof is incomplete -- the verifier has nothing to work with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invariant 5: Path length bounded by O(log n).&lt;/strong&gt; A Merkle tree of depth d requires at most O(d) hashes in a consistency proof. I compute the bound as twice the bit length of &lt;code&gt;to_size&lt;/code&gt; -- a slightly generous ceiling on &lt;code&gt;2 * log2(to_size)&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;max_proof_len&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;proof&lt;/span&gt;&lt;span class="py"&gt;.to_size&lt;/span&gt;&lt;span class="nf"&gt;.leading_zeros&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;usize&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.saturating_mul&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;proof&lt;/span&gt;&lt;span class="py"&gt;.path&lt;/span&gt;&lt;span class="nf"&gt;.len&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;max_proof_len&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;InvalidProofStructure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;...&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A 100-hash proof for an 8-leaf tree is rejected before any hashing occurs. Without this bound, an attacker could force the verifier into an arbitrarily expensive hash chain.&lt;/p&gt;

&lt;p&gt;These five checks are structural preconditions, not defense-in-depth extras. Every class of malformed proof that can be rejected without hashing should be rejected without hashing. My simplified version skipped this discipline entirely.&lt;/p&gt;
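&lt;p&gt;Condensed into a single guard, the five invariants look roughly like this (a sketch with string errors standing in for the real &lt;code&gt;AtlError&lt;/code&gt; variants):&lt;/p&gt;

```rust
// Sketch of the five structural invariants as one precondition check.
fn is_power_of_two(n: u64) -> bool {
    n != 0 && n & (n - 1) == 0
}

fn check_structure(from_size: u64, to_size: u64, path_len: usize) -> Result<(), &'static str> {
    // Invariant 1: an append-only log never shrinks.
    if from_size > to_size {
        return Err("from_size exceeds to_size");
    }
    // Invariants 2 and 3: equal sizes or an empty old tree need no hashes.
    if (from_size == to_size || from_size == 0) && path_len != 0 {
        return Err("non-empty path for a trivial proof");
    }
    // Invariant 4: a real extension from a non-power-of-two size needs hashes.
    if from_size != to_size && from_size != 0 && !is_power_of_two(from_size) && path_len == 0 {
        return Err("empty path for a non-trivial proof");
    }
    // Invariant 5: O(log n) bound -- twice the bit length of to_size.
    let max_len = 2 * (64 - to_size.leading_zeros() as usize);
    if path_len > max_len {
        return Err("path exceeds O(log n) bound");
    }
    Ok(())
}

fn main() {
    assert!(check_structure(4, 8, 3).is_ok());
    assert!(check_structure(8, 4, 0).is_err()); // shrinking tree
    assert!(check_structure(4, 4, 1).is_err()); // hash injection into trivial proof
    assert!(check_structure(5, 8, 0).is_err()); // missing hashes
    assert!(check_structure(4, 8, 100).is_err()); // length attack
}
```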

&lt;h2&gt;
  
  
  The Full Verification Algorithm
&lt;/h2&gt;

&lt;p&gt;The replacement verifier is a faithful implementation of RFC 9162. A single pass over the proof path, maintaining two running hashes and two bit-shifted size counters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Step 1: If from_size is a power of 2, prepend old_root to path&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;path_vec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;is_power_of_two&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;from_size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nd"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;old_root&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="nf"&gt;.extend_from_slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;v&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="nf"&gt;.to_vec&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Step 2: Initialize bit counters with checked arithmetic&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;from_size&lt;/span&gt;&lt;span class="nf"&gt;.checked_sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.ok_or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ArithmeticOverflow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;operation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"consistency verification: from_size - 1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;to_size&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Step 3: Align -- shift right while LSB(fn) is set&lt;/span&gt;
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Step 4: Initialize running hashes from the first proof element&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;fr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;path_vec&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;sr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;path_vec&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="c1"&gt;// Step 5: Process each subsequent proof element&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="n"&gt;path_vec&lt;/span&gt;&lt;span class="nf"&gt;.iter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.skip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Proof hash is a left sibling&lt;/span&gt;
        &lt;span class="n"&gt;fr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hash_children&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;fr&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;sr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hash_children&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;sr&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Proof hash is a right sibling (only affects new root)&lt;/span&gt;
        &lt;span class="n"&gt;sr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hash_children&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;sr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Step 6: Final check&lt;/span&gt;
&lt;span class="nf"&gt;Ok&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;use_constant_time_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;fr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;old_root&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nf"&gt;use_constant_time_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;sr&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_root&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;sn&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The bit operations encode the tree structure. &lt;code&gt;fn_&lt;/code&gt; tracks the position within the old tree boundary, &lt;code&gt;sn&lt;/code&gt; tracks the position within the new tree. When a proof hash is a left sibling (&lt;code&gt;fn_ &amp;amp; 1 == 1&lt;/code&gt; or &lt;code&gt;fn_ == sn&lt;/code&gt;), it contributes to both root reconstructions. When it is a right sibling, it only contributes to the new root -- the old tree did not extend that far.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;fn_ == sn&lt;/code&gt; condition handles the transition point where both trees share a common subtree root and then diverge. The alignment loop at the start skips tree levels where the old tree's boundary falls at an odd index, synchronizing the bit counters with the proof path.&lt;/p&gt;
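&lt;p&gt;A concrete trace of the alignment phase, for a hypothetical growth from 8 to 16 entries: &lt;code&gt;fn_&lt;/code&gt; starts at 7 (&lt;code&gt;0b111&lt;/code&gt;), so the loop shifts three times before the first proof element is touched.&lt;/p&gt;

```rust
fn main() {
    // from_size = 8, to_size = 16
    let (mut fn_, mut sn): (u64, u64) = (8 - 1, 16 - 1); // 0b111, 0b1111
    let mut shifts = 0;
    while fn_ & 1 == 1 {
        fn_ >>= 1;
        sn >>= 1;
        shifts += 1;
    }
    // Three levels skipped; both counters are now synchronized with the
    // level at which the proof path begins.
    assert_eq!(shifts, 3);
    assert_eq!((fn_, sn), (0, 1));
}
```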

&lt;p&gt;This is the part I tried to skip. Every bit operation matters. Getting any of it wrong means either accepting invalid proofs or rejecting valid ones.&lt;/p&gt;
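&lt;p&gt;To see the bit logic hold together end to end, the whole pipeline -- generation via RFC 9162's &lt;code&gt;SUBPROOF&lt;/code&gt; recursion plus a condensed version of the verifier above -- fits in one runnable sketch. This is an illustration, not the ATL code: a toy 64-bit hash (&lt;code&gt;DefaultHasher&lt;/code&gt;) stands in for SHA-256 so it runs on the standard library alone, and the structural invariants are folded into a few early returns.&lt;/p&gt;

```rust
// End-to-end sketch of RFC 9162 consistency proofs with a toy hash.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type H = u64;

fn leaf_hash(data: &[u8]) -> H {
    let mut h = DefaultHasher::new();
    (0x00u8, data).hash(&mut h); // domain-separated leaf
    h.finish()
}

fn node_hash(l: H, r: H) -> H {
    let mut h = DefaultHasher::new();
    (0x01u8, l, r).hash(&mut h); // domain-separated interior node
    h.finish()
}

// Largest power of two strictly less than n (requires n >= 2).
fn split_point(n: usize) -> usize {
    let mut k = 1;
    while k * 2 < n { k *= 2; }
    k
}

// Merkle tree head over pre-hashed leaves (RFC 9162 MTH).
fn mth(d: &[H]) -> H {
    if d.len() == 1 { return d[0]; }
    let k = split_point(d.len());
    node_hash(mth(&d[..k]), mth(&d[k..]))
}

// PROOF(m, D_n) = subproof(m, d, true), for 0 < m <= d.len().
fn subproof(m: usize, d: &[H], whole: bool) -> Vec<H> {
    if m == d.len() {
        return if whole { vec![] } else { vec![mth(d)] };
    }
    let k = split_point(d.len());
    if m <= k {
        let mut p = subproof(m, &d[..k], whole);
        p.push(mth(&d[k..]));
        p
    } else {
        let mut p = subproof(m - k, &d[k..], false);
        p.push(mth(&d[..k]));
        p
    }
}

fn verify(from: u64, to: u64, path: &[H], old_root: H, new_root: H) -> bool {
    // Trivial cases folded in; the real code rejects these via invariants.
    if from > to { return false; }
    if from == to { return path.is_empty() && old_root == new_root; }
    if from == 0 { return path.is_empty(); }
    let pv = if from.is_power_of_two() {
        let mut v = vec![old_root];
        v.extend_from_slice(path);
        v
    } else {
        path.to_vec()
    };
    if pv.is_empty() { return false; }
    let (mut fn_, mut sn) = (from - 1, to - 1);
    while fn_ & 1 == 1 { fn_ >>= 1; sn >>= 1; } // alignment
    let (mut fr, mut sr) = (pv[0], pv[0]);
    for &c in &pv[1..] {
        if sn == 0 { return false; }
        if fn_ & 1 == 1 || fn_ == sn {
            fr = node_hash(c, fr); // left sibling: feeds both roots
            sr = node_hash(c, sr);
            while fn_ & 1 == 0 && fn_ != 0 { fn_ >>= 1; sn >>= 1; }
        } else {
            sr = node_hash(sr, c); // right sibling: new root only
        }
        fn_ >>= 1;
        sn >>= 1;
    }
    fr == old_root && sr == new_root && sn == 0
}

fn main() {
    let leaves: Vec<H> = (0u8..8).map(|i| leaf_hash(&[i])).collect();
    let path = subproof(4, &leaves, true);
    assert!(verify(4, 8, &path, mth(&leaves[..4]), mth(&leaves)));

    // One flipped bit anywhere in the path kills the proof.
    let mut bad = path.clone();
    bad[0] ^= 1;
    assert!(!verify(4, 8, &bad, mth(&leaves[..4]), mth(&leaves)));

    // Non-power-of-two case: 5 -> 7. Nothing is prepended here.
    let p = subproof(5, &leaves[..7], true);
    assert!(verify(5, 7, &p, mth(&leaves[..5]), mth(&leaves[..7])));
}
```

&lt;p&gt;The &lt;code&gt;5 -&amp;gt; 7&lt;/code&gt; case exercises the non-power-of-two branch: &lt;code&gt;fn_&lt;/code&gt; starts at 4 (&lt;code&gt;0b100&lt;/code&gt;), so the alignment loop does nothing and the first proof element seeds both running hashes directly.&lt;/p&gt;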

&lt;h2&gt;
  
  
  Constant-Time Hash Comparison
&lt;/h2&gt;

&lt;p&gt;The final verification step compares reconstructed hashes against expected roots. A standard &lt;code&gt;==&lt;/code&gt; comparison on byte arrays short-circuits on the first differing byte, leaking timing information proportional to the position of the first mismatch.&lt;/p&gt;

&lt;p&gt;I use the &lt;code&gt;subtle&lt;/code&gt; crate for constant-time comparison:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;use_constant_time_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Hash&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Hash&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;subtle&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ConstantTimeEq&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="nf"&gt;.ct_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.into&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Root hashes are public in a transparency log, so timing side-channels here are less exploitable than in password verification. I use constant-time comparison anyway -- the cost is zero for 32 bytes, and if the function is ever reused in a context where the hash is not public, there is no latent vulnerability waiting to be discovered.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;sn == 0&lt;/code&gt; check in the final expression is part of the RFC specification: after processing all proof elements, the bit-shifted &lt;code&gt;to_size - 1&lt;/code&gt; counter must have reached zero. If it has not, the proof path was the wrong length for the claimed tree sizes. This catches a specific class of attack where the proof contains valid hashes but claims incorrect sizes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Checked Arithmetic
&lt;/h2&gt;

&lt;p&gt;If &lt;code&gt;from_size&lt;/code&gt; is 0 and the code computes &lt;code&gt;from_size - 1&lt;/code&gt; without checking, the result wraps to &lt;code&gt;u64::MAX&lt;/code&gt; on release builds. Every subsequent bit operation produces garbage.&lt;/p&gt;
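&lt;p&gt;The wrap is easy to demonstrate:&lt;/p&gt;

```rust
fn main() {
    let from_size: u64 = 0;
    // What release-mode subtraction would silently do:
    assert_eq!(from_size.wrapping_sub(1), u64::MAX);
    // What checked arithmetic does instead -- the failure is explicit:
    assert_eq!(from_size.checked_sub(1), None);
    assert_eq!(4u64.checked_sub(1), Some(3));
}
```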

&lt;p&gt;Every arithmetic operation uses Rust's checked arithmetic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="k"&gt;mut&lt;/span&gt; &lt;span class="n"&gt;fn_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;from_size&lt;/span&gt;&lt;span class="nf"&gt;.checked_sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;.ok_or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;AtlError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ArithmeticOverflow&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;operation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"consistency verification: from_size - 1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;wrapping_sub&lt;/code&gt;. No &lt;code&gt;unchecked_add&lt;/code&gt;. No silent truncation. If an operation would overflow, it returns an explicit error naming the specific operation. The structural invariants already prevent &lt;code&gt;from_size == 0&lt;/code&gt; from reaching this code path. The checked arithmetic is a second layer: if someone refactors the invariant checks, the arithmetic still will not silently produce wrong results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adversarial Test Suite
&lt;/h2&gt;

&lt;p&gt;After the simplified-implementation incident, I was not going to rely on happy-path tests alone. The adversarial test suite in &lt;code&gt;adversarial_tests.rs&lt;/code&gt; (344 lines) exists specifically to verify that incorrect, malicious, and boundary-case inputs produce correct rejections. Every test includes both a positive case (valid proof accepted) and a negative case (tampered proof rejected):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replay attacks across trees.&lt;/strong&gt; Generate a valid consistency proof for one tree, then attempt to verify it against a different tree with the same sizes but different leaf data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="nd"&gt;#[test]&lt;/span&gt;
&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;test_adversarial_replay_different_tree&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;proof_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generate_consistency_proof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;get_node_a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;old_root_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;compute_root&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;leaves_b&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;new_root_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;compute_root&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;leaves_b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Proof from tree A must not verify against tree B&lt;/span&gt;
    &lt;span class="nd"&gt;assert!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nf"&gt;verify_consistency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;proof_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;old_root_b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;new_root_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.unwrap&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The proof is cryptographically bound to specific leaf content. Same structure, different data, different hashes, verification fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Replay attacks across sizes.&lt;/strong&gt; Take a valid proof for &lt;code&gt;(4 -&amp;gt; 8)&lt;/code&gt; and relabel it as &lt;code&gt;(3 -&amp;gt; 7)&lt;/code&gt;. The bit operations in the verification algorithm are size-dependent -- each bit of the size counter determines left-vs-right sibling ordering. A proof path that is valid for one pair of sizes is invalid for another, even if the underlying data is the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boundary size testing.&lt;/strong&gt; Sizes at or near powers of two trigger different code paths in the bit manipulation logic. I test pairs around every boundary: 63/64, 64/65, 127/128, 128/129, 255/256. Off-by-one errors here are the most common failure mode in Merkle tree implementations because &lt;code&gt;is_power_of_two&lt;/code&gt; gates whether &lt;code&gt;old_root&lt;/code&gt; is prepended to the proof path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All-ones binary sizes.&lt;/strong&gt; Values like 7 (&lt;code&gt;0b111&lt;/code&gt;), 15 (&lt;code&gt;0b1111&lt;/code&gt;), and 31 (&lt;code&gt;0b11111&lt;/code&gt;) have every bit set, maximizing the iterations in the alignment loop and exercising every branch condition. These are the sizes most likely to expose off-by-one errors in the bit-shifting logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proof length attacks.&lt;/strong&gt; A proof with 100 elements for an 8-leaf tree is rejected by Invariant 5 before any hashing occurs. Without the O(log n) bound, an attacker could force the verifier into an extended hash computation.&lt;/p&gt;
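
&lt;p&gt;A pre-hash length gate in this spirit is cheap to sketch. The exact bound in Invariant 5 is the crate's own; here &lt;code&gt;2 * ceil(log2(new_size))&lt;/code&gt; stands in as an O(log n) over-approximation.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative length gate, checked before any hashing. The constant
// 2 * ceil(log2(n)) is an assumed stand-in for the crate's Invariant 5.
fn within_bound(proof_len: usize, new_size: u64) -&amp;gt; bool {
    assert!(new_size &amp;gt; 1);
    let depth = 64 - (new_size - 1).leading_zeros(); // ceil(log2(new_size))
    proof_len as u32 &amp;lt;= 2 * depth
}

fn main() {
    // A 100-element proof for an 8-leaf tree is rejected outright.
    assert!(!within_bound(100, 8));
    assert!(within_bound(3, 8));
}
&lt;/code&gt;&lt;/pre&gt;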

&lt;p&gt;&lt;strong&gt;Duplicate hash attacks.&lt;/strong&gt; A proof where every element is &lt;code&gt;old_root&lt;/code&gt; is rejected because the hash reconstruction deterministically produces wrong intermediate values. The algorithm's correctness does not depend on hash diversity.&lt;/p&gt;

&lt;p&gt;Each test is accompanied by single-bit-flip verification: flipping any single bit in any proof hash causes verification to fail. This confirms that the verification is sensitive to every bit of every hash in the proof path.&lt;/p&gt;

&lt;p&gt;The 415 lines of &lt;code&gt;consistency.rs&lt;/code&gt; and 344 lines of adversarial tests do not prove the implementation is correct in a formal sense -- that would require a proof assistant. But they do prove that every attack vector I could identify is covered, and they document those vectors permanently in the test names and assertions. Including the vector I accidentally created myself.&lt;/p&gt;




&lt;p&gt;The full implementation is open source: &lt;a href="https://github.com/evidentum-io/atl-core" rel="noopener noreferrer"&gt;github.com/evidentum-io/atl-core&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Files discussed in this post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;src/core/merkle/consistency.rs&lt;/code&gt; -- proof generation and verification (415 lines)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/core/merkle/tests/adversarial_tests.rs&lt;/code&gt; -- adversarial test suite (344 lines)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/core/merkle/crypto.rs&lt;/code&gt; -- RFC 6962 hash functions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/core/merkle/helpers.rs&lt;/code&gt; -- tree navigation utilities&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>rust</category>
      <category>cryptography</category>
      <category>security</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
