<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Towell</title>
    <description>The latest articles on DEV Community by Alex Towell (@queelius).</description>
    <link>https://dev.to/queelius</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3561312%2F3bd5c5ef-734e-4811-af00-9df134329e1b.png</url>
      <title>DEV Community: Alex Towell</title>
      <link>https://dev.to/queelius</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/queelius"/>
    <language>en</language>
    <item>
      <title>Sparse Spatial Hash Grids: Efficient N-Dimensional Spatial Indexing</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Tue, 05 May 2026 13:22:17 +0000</pubDate>
      <link>https://dev.to/queelius/sparse-spatial-hash-grids-efficient-n-dimensional-spatial-indexing-438j</link>
      <guid>https://dev.to/queelius/sparse-spatial-hash-grids-efficient-n-dimensional-spatial-indexing-438j</guid>
      <description>&lt;h1&gt;
  
  
  Sparse Spatial Hash Grids: Finding Neighbors Fast
&lt;/h1&gt;

&lt;p&gt;If you're building physics simulations, game engines, or scientific computing applications, one problem comes up constantly: &lt;strong&gt;"Which entities are near this position?"&lt;/strong&gt; Collision detection between 10 million particles, finding nearby enemies in a game world, computing gravitational forces in an N-body simulation. Efficient spatial queries are critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Spatial Indexing at Scale
&lt;/h2&gt;

&lt;p&gt;Consider a physics simulation with 10 million particles in a 10,000 cubed world. Each frame, you need to find all particles within 20 units of each particle (for collision detection), update positions, and repeat 60 times per second.&lt;/p&gt;

&lt;p&gt;The naive approach, checking every particle against every other, requires &lt;strong&gt;100 trillion comparisons&lt;/strong&gt; per frame (10&lt;sup&gt;7&lt;/sup&gt; × 10&lt;sup&gt;7&lt;/sup&gt;). Even at 1 nanosecond per comparison, that's about 100,000 seconds, roughly 28 hours, per frame.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter: Sparse Spatial Hash Grids
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;sparse spatial hash grid&lt;/strong&gt; divides space into a grid of cells and uses a hash map to store only the occupied cells. You get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;O(1) insertions&lt;/strong&gt; (hash map lookup)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;O(k) neighbor queries&lt;/strong&gt; where k = number of nearby entities (not total entities)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory proportional to occupied cells&lt;/strong&gt; (not total possible cells)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why "Sparse"?
&lt;/h3&gt;

&lt;p&gt;In the 10M particle example, a &lt;strong&gt;dense grid&lt;/strong&gt; with 10-unit cells needs 1 billion (1000 cubed) cells. At 8 bytes per pointer, that's &lt;strong&gt;8 GB&lt;/strong&gt; just for empty cells.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;sparse hash grid&lt;/strong&gt; only stores the ~10 million occupied cells, using &lt;strong&gt;~100 MB&lt;/strong&gt; total. That's an &lt;strong&gt;~80x memory reduction&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture: Generic N-Dimensional Design
&lt;/h2&gt;

&lt;p&gt;I built this on modern C++20 concepts and generic programming:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typename&lt;/span&gt; &lt;span class="nc"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;Dimensions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;floating_point&lt;/span&gt; &lt;span class="n"&gt;FloatType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;unsigned_integral&lt;/span&gt; &lt;span class="n"&gt;IndexType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
         &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;SmallVectorSize&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;sparse_spatial_hash&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works for 2D games (platformers, top-down shooters), 3D simulations (molecular dynamics, particle effects), 4D space-time indexing (trajectory queries), or any N dimensions your use case requires.&lt;/p&gt;

&lt;h3&gt;
  
  
  Topology Support
&lt;/h3&gt;

&lt;p&gt;Real-world spatial problems have different boundary conditions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Bounded: Traditional box with walls&lt;/span&gt;
&lt;span class="n"&gt;grid_config&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;bounded&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;topology_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;topology&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;bounded&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;world_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Toroidal: Periodic wraparound (pac-man physics)&lt;/span&gt;
&lt;span class="n"&gt;grid_config&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;toroidal&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;topology_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;topology&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;toroidal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;world_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Infinite: Unbounded growth&lt;/span&gt;
&lt;span class="n"&gt;grid_config&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;infinite&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;topology_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;topology&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;infinite&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Toroidal topology&lt;/strong&gt; is particularly useful: it provides the periodic boundary conditions needed in molecular dynamics, seamless procedural worlds in games, and periodic-universe models in astronomical simulations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance: The Numbers
&lt;/h2&gt;

&lt;p&gt;From real-world usage in the DigiStar physics engine (10M particles, 10000 cubed world):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Full Rebuild&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;80ms&lt;/td&gt;
&lt;td&gt;Complete grid reconstruction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Incremental Update&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2ms&lt;/td&gt;
&lt;td&gt;Only 1% of particles change cells&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Collision Detection&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;150ms&lt;/td&gt;
&lt;td&gt;20-unit interaction radius&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory Footprint&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;100MB&lt;/td&gt;
&lt;td&gt;Grid + tracking structures&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;40x speedup&lt;/strong&gt; from incremental updates matters. In most simulations, only a small fraction of entities move between cells each frame. Why rebuild the entire grid when you can just update the movers?&lt;/p&gt;

&lt;h3&gt;
  
  
  Latest Optimizations (v1.1.0)
&lt;/h3&gt;

&lt;p&gt;The library now uses &lt;strong&gt;small vector optimization&lt;/strong&gt;: cells with 16 or fewer entities store them inline rather than allocating.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;40% faster rebuilds&lt;/strong&gt; (fewer allocations)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5-11% faster queries&lt;/strong&gt; (better cache locality)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trade-off&lt;/strong&gt;: +256 bytes per occupied cell&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For typical workloads where most cells have fewer than 16 entities, this is a significant win.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Game Development
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Collision Detection&lt;/strong&gt;: Broad-phase partitioning is the first step in physics engines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="n"&gt;sparse_spatial_hash&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Entity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Only check entities in nearby cells&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;for_each_pair&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;collision_radius&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;[](&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;detailed_collision_check&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;handle_collision&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AI Pathfinding&lt;/strong&gt;: Find cover points, enemies, or waypoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;nearby_cover&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query_radius&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;search_radius&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Scientific Computing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;N-Body Simulations&lt;/strong&gt;: Gravity, electrostatics, magnetic forces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Build neighbor lists for force calculations&lt;/span&gt;
&lt;span class="n"&gt;sparse_spatial_hash&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;auto&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;neighbors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query_radius&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;cutoff_distance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;neighbor_idx&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;neighbors&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;apply_force&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;neighbor_idx&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Molecular Dynamics&lt;/strong&gt;: The same data structure handles both short-range interactions (collision grid with small cells) and long-range interactions (coarse grid with large cells).&lt;/p&gt;

&lt;h3&gt;
  
  
  Robotics and SLAM
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Obstacle Detection&lt;/strong&gt;: Real-time collision avoidance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Index sensor readings&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lidar_points&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Check if path is clear&lt;/span&gt;
&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;obstacles_in_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query_radius&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;robot_radius&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;safety_margin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Comparison with Alternatives
&lt;/h2&gt;

&lt;h3&gt;
  
  
  vs. R-tree (Boost.Geometry)
&lt;/h3&gt;

&lt;p&gt;R-trees give you hierarchical bounding volumes, good for static data, with O(log n + k) queries. Sparse hash grids give you &lt;strong&gt;O(1) insertions&lt;/strong&gt; vs O(log n), simpler incremental updates (no tree rebalancing), native toroidal support, and better performance for dynamic scenes where most entities don't move far.&lt;/p&gt;

&lt;h3&gt;
  
  
  vs. Octree
&lt;/h3&gt;

&lt;p&gt;Octrees give you hierarchical LOD and good performance for spatially clustered data. Sparse hash grids give you lower memory (no tree nodes), faster queries (direct hash lookup), simpler implementation, and predictable performance (no worst-case tree imbalance).&lt;/p&gt;

&lt;h3&gt;
  
  
  vs. Dense Grid
&lt;/h3&gt;

&lt;p&gt;Dense grids are simple and have good cache locality when fully populated. Sparse hash grids give you an &lt;strong&gt;~80x memory reduction&lt;/strong&gt; in the 10M-particle example above, handle huge worlds without memory explosion, and maintain the same O(1) insert and O(k) query complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design Patterns: Customization Points
&lt;/h2&gt;

&lt;p&gt;The library follows STL design philosophy with customization points:&lt;/p&gt;

&lt;h3&gt;
  
  
  Position Accessor
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;Particle&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vec3&lt;/span&gt; &lt;span class="n"&gt;position&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;glm&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vec3&lt;/span&gt; &lt;span class="n"&gt;velocity&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;float&lt;/span&gt; &lt;span class="n"&gt;mass&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Customize how grid extracts positions&lt;/span&gt;
&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;spatial&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;position_accessor&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;float&lt;/span&gt; &lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;position&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zero-overhead abstraction. No virtual calls. No inheritance. Optimized away at compile-time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Range Support
&lt;/h3&gt;

&lt;p&gt;Works with any C++20 range:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// Vector&lt;/span&gt;

&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;list&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;list_particles&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;list_particles&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;  &lt;span class="c1"&gt;// List&lt;/span&gt;

&lt;span class="c1"&gt;// Even filtered views!&lt;/span&gt;
&lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;fast_particles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;particles&lt;/span&gt;
    &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;views&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;([](&lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//metafunctor.com/auto&amp;amp; p) {&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;speed&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;threshold&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fast_particles&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced Techniques
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Multi-Resolution Grids
&lt;/h3&gt;

&lt;p&gt;Use different grids for different interaction scales:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Fine grid for collision detection (2-unit cells)&lt;/span&gt;
&lt;span class="n"&gt;sparse_spatial_hash&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;collision_grid&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cell_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Coarse grid for long-range forces (50-unit cells)&lt;/span&gt;
&lt;span class="n"&gt;sparse_spatial_hash&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;force_grid&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cell_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;50.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;50.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;50.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Different physics at different scales!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Parallel Processing
&lt;/h3&gt;

&lt;p&gt;Process cells independently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="cp"&gt;#pragma omp parallel for
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="k"&gt;auto&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;cell_idx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cell_contents&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Thread-safe: Each cell independent&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;handle_interaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or use standard parallel algorithms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;for_each&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;execution&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;par_unseq&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cells&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;begin&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cells&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="c1"&gt;//metafunctor.com/const auto&amp;amp; cell) {&lt;/span&gt;
        &lt;span class="n"&gt;process_cell&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cell&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Implementation Insights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Morton Encoding (Z-Order Curve)
&lt;/h3&gt;

&lt;p&gt;The grid uses Morton encoding to convert multi-dimensional cell coordinates into a single hash key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// (x, y, z) -&amp;gt; single integer with spatial locality&lt;/span&gt;
&lt;span class="n"&gt;hash_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;morton_encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cell_x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cell_y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cell_z&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nearby cells in 3D space get nearby hash keys, improving cache locality during traversal.&lt;/p&gt;
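&lt;p&gt;To make that concrete, here is a standard bit-interleaving sketch of a 3D Morton encoder (21 bits per axis packed into a 64-bit key). This is the classic "spread the bits" technique, not the library's actual &lt;code&gt;morton_encode&lt;/code&gt;, which may differ in axis width or signed-coordinate handling:&lt;/p&gt;

```cpp
#include <cstdint>

// Spread the low 21 bits of v so consecutive source bits land 3 positions
// apart (two zero bits between them). Standard magic-constant interleaving
// for 64-bit 3D Morton codes.
constexpr std::uint64_t spread3(std::uint64_t v) {
    v &= 0x1FFFFF;                               // 21 bits per axis -> 63-bit key
    v = (v | (v << 32)) & 0x001F00000000FFFF;
    v = (v | (v << 16)) & 0x001F0000FF0000FF;
    v = (v | (v << 8))  & 0x100F00F00F00F00F;
    v = (v | (v << 4))  & 0x10C30C30C30C30C3;
    v = (v | (v << 2))  & 0x1249249249249249;
    return v;
}

// Interleave the bits of (x, y, z): ... z1 y1 x1 z0 y0 x0
constexpr std::uint64_t morton_encode(std::uint32_t x, std::uint32_t y,
                                      std::uint32_t z) {
    return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2);
}

// Sanity checks: each axis occupies every third bit.
static_assert(morton_encode(1, 0, 0) == 0b001, "x -> bit 0");
static_assert(morton_encode(0, 1, 0) == 0b010, "y -> bit 1");
static_assert(morton_encode(0, 0, 1) == 0b100, "z -> bit 2");
static_assert(morton_encode(3, 3, 3) == 0b111111, "nearby cells share high bits");
```

&lt;p&gt;Because the high bits of the key come from the high bits of all three coordinates, cells that are close in space tend to be close in key order.&lt;/p&gt;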

&lt;h3&gt;
  
  
  Incremental Updates
&lt;/h3&gt;

&lt;p&gt;The magic behind 40x speedups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Track which cell each entity was in&lt;/span&gt;
&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;CellIndex&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;prev_cells&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// On update:&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;new_cell&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;compute_cell&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;entities&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_cell&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;prev_cells&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Only update entities that moved cells&lt;/span&gt;
        &lt;span class="n"&gt;remove_from_cell&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prev_cells&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;add_to_cell&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;new_cell&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;prev_cells&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;new_cell&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In typical motion patterns, roughly 99% of particles stay in the same cell from one frame to the next; only about 1% cross into an adjacent cell. The update cost therefore scales with the number of movers, not with the total particle count.&lt;/p&gt;
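&lt;p&gt;A back-of-envelope cost model shows why this pays off. The numbers below are illustrative assumptions (10M entities, ~1% movers, one remove plus one insert per mover), not measurements from the library, but they land in the same ballpark as the speedup figure above:&lt;/p&gt;

```cpp
#include <cstddef>

// Rough cost model (illustrative assumptions, not measurements):
// a full rebuild re-inserts every entity each frame.
constexpr std::size_t rebuild_ops(std::size_t n_entities) {
    return n_entities;
}

// An incremental update pays one remove plus one insert
// per entity that changed cells.
constexpr std::size_t incremental_ops(std::size_t n_entities,
                                      double mover_fraction) {
    return static_cast<std::size_t>(2.0 * n_entities * mover_fraction);
}

// 10M entities, ~1% crossing a cell boundary per frame:
static_assert(rebuild_ops(10'000'000) /
              incremental_ops(10'000'000, 0.01) == 50,
              "incremental update does ~50x fewer hash-table operations");
```

&lt;p&gt;The constant factors (hashing, allocation, cache misses) eat some of that ratio in practice, which is why the measured speedup is smaller than the raw operation count suggests.&lt;/p&gt;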

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;Using CMake FetchContent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cmake"&gt;&lt;code&gt;&lt;span class="nb"&gt;include&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;FetchContent&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;FetchContent_Declare&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  sparse_spatial_hash
  GIT_REPOSITORY https://github.com/queelius/sparse_spatial_hash.git
  GIT_TAG        v1.2.0
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;FetchContent_MakeAvailable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;sparse_spatial_hash&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;target_link_libraries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;your_target
  PRIVATE sparse_spatial_hash::sparse_spatial_hash&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimal Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;spatial/sparse_spatial_hash.hpp&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;Particle&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;float&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;z&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Specialize position accessor&lt;/span&gt;
&lt;span class="k"&gt;template&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="nc"&gt;spatial&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;position_accessor&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;float&lt;/span&gt; &lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;switch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dim&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;z&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;default:&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="n"&gt;spatial&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// Configure grid&lt;/span&gt;
    &lt;span class="n"&gt;grid_config&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cell_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;world_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;1000.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;topology_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;topology&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;toroidal&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="n"&gt;sparse_spatial_hash&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cfg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Create particles&lt;/span&gt;
    &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="n"&gt;vector&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Particle&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// ... initialize ...&lt;/span&gt;

    &lt;span class="c1"&gt;// Build index&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rebuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Query neighbors&lt;/span&gt;
    &lt;span class="k"&gt;auto&lt;/span&gt; &lt;span class="n"&gt;neighbors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query_radius&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="mf"&gt;50.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// radius&lt;/span&gt;
        &lt;span class="mf"&gt;100.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;200.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;300.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;  &lt;span class="c1"&gt;// query position&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Process pairs&lt;/span&gt;
    &lt;span class="n"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;for_each_pair&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;particles&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;20.0&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;// Handle interaction&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Future Directions
&lt;/h2&gt;

&lt;p&gt;The library continues to evolve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPU Support&lt;/strong&gt;: CUDA/OpenCL backend for massive parallelism&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lazy Evaluation&lt;/strong&gt;: Generator-based query results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Distance Metrics&lt;/strong&gt;: Support for non-Euclidean distances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive Grids&lt;/strong&gt;: Dynamic cell size adjustment based on density&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serialization&lt;/strong&gt;: Save/load grid state for checkpointing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Sparse spatial hash grids hit a sweet spot: simple enough to understand and debug, fast enough for real-time applications (2ms updates), memory-efficient enough for huge sparse worlds (60,000x reduction), generic enough for 2D/3D/4D and beyond, and tested in production physics engines.&lt;/p&gt;

&lt;p&gt;If you're building a simulation or spatial application, try it. The data structure is extracted from the &lt;a href="https://github.com/queelius/digistar" rel="noopener noreferrer"&gt;DigiStar&lt;/a&gt; physics engine, where it enables 10M+ particle simulations at 60 FPS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn More
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repository&lt;/strong&gt;: &lt;a href="https://github.com/queelius/sparse_spatial_hash" rel="noopener noreferrer"&gt;github.com/queelius/sparse_spatial_hash&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://queelius.github.io/sparse_spatial_hash/" rel="noopener noreferrer"&gt;Full API reference and tutorials&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Examples&lt;/strong&gt;: See &lt;code&gt;examples/&lt;/code&gt; directory for collision detection, molecular dynamics, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;License&lt;/strong&gt;: Boost Software License (very permissive)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>spatialindexing</category>
      <category>hashgrid</category>
      <category>c20</category>
      <category>performance</category>
    </item>
    <item>
      <title>Blind Spots, Consistency, and What Remains</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Tue, 05 May 2026 13:21:50 +0000</pubDate>
      <link>https://dev.to/queelius/blind-spots-consistency-and-what-remains-3l38</link>
      <guid>https://dev.to/queelius/blind-spots-consistency-and-what-remains-3l38</guid>
      <description>&lt;h2&gt;
  
  
  Preface
&lt;/h2&gt;

&lt;p&gt;I wrote this not because I've reached some final moral insight, but because I noticed a moment of clarity I didn't want to lose.&lt;/p&gt;

&lt;p&gt;Reading about Chomsky recently, specifically his associations with Epstein, I recognized a pattern. Even the most morally articulate among us can fail to turn the lens inward. This isn't really about Chomsky. It's about something I recognize in myself, and probably in everyone.&lt;/p&gt;

&lt;p&gt;Certain beliefs sit unexamined for years, quietly shaping how we see the world. Occasionally, something forces a re-examination. This is an attempt to record one such moment.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Chomsky Case
&lt;/h2&gt;

&lt;p&gt;I found Chomsky's work influential. For years, I held him up as an exemplar, someone who demonstrated that rigorous moral analysis was possible, that power could be named and critiqued systematically.&lt;/p&gt;

&lt;p&gt;I don't do that anymore. Not because Chomsky failed some purity test, but because I no longer think exemplars work that way. Humans are fragile, partial, shaped by circumstance. Worthy of moral consideration, not pedestals.&lt;/p&gt;

&lt;p&gt;Still, the Epstein association is worth examining. Not to condemn, but to understand. What does it reveal about moral blind spots?&lt;/p&gt;

&lt;h3&gt;
  
  
  Systemic vs. Personal Violence
&lt;/h3&gt;

&lt;p&gt;Chomsky has spent decades documenting systemic violence: state power, imperialism, manufactured consent. His work is incisive precisely because he sees patterns others miss.&lt;/p&gt;

&lt;p&gt;But systemic violence is abstract. You analyze it from a distance. Personal violence, the kind Epstein inflicted, is concrete, embodied, happening to specific people.&lt;/p&gt;

&lt;p&gt;A life of academic privilege can insulate you from the second kind. You can see the system without seeing the person in front of you.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Reintegration Argument
&lt;/h3&gt;

&lt;p&gt;Chomsky's stated position: Epstein served his time, and people who've served their time should be allowed to reintegrate into society. This isn't an unreasonable principle. I'm sympathetic to it. Permanent exile creates its own harms.&lt;/p&gt;

&lt;p&gt;But there's a gap between "allowing someone to exist in society" and "praising them, associating with them, elevating them." Who we choose to give credibility matters, to the victims, to society, to the message it sends about what we value.&lt;/p&gt;

&lt;p&gt;Epstein's 2008 sentence was itself a product of privilege, a sweetheart deal that obscured the scale of his crimes. Chomsky couldn't have known the full picture then. But by 2019, he could have.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Post-2019 Silence
&lt;/h3&gt;

&lt;p&gt;After Epstein's arrest and death, the scope became clear. Dozens of victims. A trafficking operation. Complicity from powerful institutions.&lt;/p&gt;

&lt;p&gt;This was the moment for Chomsky to turn the lens inward. Not performative apologetics (I share his aversion to theater). But a simple acknowledgment: "I misjudged. By giving him credibility, I may have helped him access victims. That matters."&lt;/p&gt;

&lt;p&gt;Instead: silence, or dismissal.&lt;/p&gt;

&lt;p&gt;This is the blind spot. Not the initial association, that's forgivable given incomplete information. But the refusal to revisit, to apply the same rigorous analysis to oneself that one applies to power structures.&lt;/p&gt;

&lt;h3&gt;
  
  
  What This Teaches
&lt;/h3&gt;

&lt;p&gt;Chomsky's failure isn't unique. It's the human condition. We're better at seeing others' errors than our own. Privilege insulates. Moral clarity about systems doesn't guarantee moral clarity about persons.&lt;/p&gt;

&lt;p&gt;The lesson isn't "Chomsky is bad." It's: &lt;strong&gt;no one is exempt from blind spots, including those who've made careers analyzing them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Including me.&lt;/p&gt;




&lt;h2&gt;
  
  
  On Moral Exemplars
&lt;/h2&gt;

&lt;p&gt;Some beliefs form early, before you have tools to interrogate them. Moral exemplars get installed this way. People you treat as evidence that a coherent, good life is possible. They are less chosen than absorbed.&lt;/p&gt;

&lt;p&gt;When such figures reveal blind spots, the disappointment feels personal. Not because they're villains. Because you expected too much coherence from a human being.&lt;/p&gt;

&lt;p&gt;People aren't unified moral systems. They're messy collections of ad hoc heuristics, shaped by privilege, insulation, and circumstance. Even the most serious among us are partial.&lt;/p&gt;

&lt;p&gt;I've &lt;a href="https://metafunctor.com/post/2024-09-01-on-moral-responsibility/" rel="noopener noreferrer"&gt;written before&lt;/a&gt; about moral responsibility as a social technology, useful but constructed. Not a metaphysical fact about persons, but a pragmatic fiction that modifies behavior and enables coordination.&lt;/p&gt;

&lt;p&gt;This framing helps here. Chomsky isn't a fallen saint. He's a human who did valuable intellectual work while remaining subject to the same partial vision that affects everyone. His ideas still stand on their own merits. His blind spots don't negate his insights. They just remind us that insights and blind spots coexist in the same person.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Standard Applied Consistently
&lt;/h2&gt;

&lt;p&gt;Reflecting on Chomsky's failure to turn the lens inward, I notice something in myself.&lt;/p&gt;

&lt;p&gt;I've had intrusive thoughts lately. Fantasies of sacrifice, of minimizing burden, of making death "useful." Stage 4 generates these. The logic goes: if time is limited, maybe it should be spent reducing the cost I impose on others.&lt;/p&gt;

&lt;p&gt;I don't endorse these thoughts. I recognize them as distortions of something I actually hold: that conscious beings have inherent worth, independent of utility.&lt;/p&gt;

&lt;p&gt;I've &lt;a href="https://metafunctor.com/post/2025-11-04-phenomenological-ethics/" rel="noopener noreferrer"&gt;argued&lt;/a&gt; that suffering is self-evidently bad, not because some theory says so, but because the badness is immediately present in the experience. This phenomenological grounding applies universally. To everyone capable of suffering.&lt;/p&gt;

&lt;p&gt;Including me.&lt;/p&gt;

&lt;p&gt;The same standard I apply to others must apply to myself. If I believe humans have worth independent of their productivity, I can't exempt myself from that principle. If I believe people deserve compassion even when they're burdens, I have to extend that to the person I see in the mirror.&lt;/p&gt;

&lt;p&gt;This isn't stoicism or self-affirmation. It's consistency.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Living Well Means Now
&lt;/h2&gt;

&lt;p&gt;Stage 4 changed the optimization problem. I've &lt;a href="https://metafunctor.com/post/2023-11-stage-4-diagnosis/" rel="noopener noreferrer"&gt;written about this&lt;/a&gt; before: not maximize lifetime, but maximize meaningful work given uncertain lifetime.&lt;/p&gt;

&lt;p&gt;But "meaningful work" isn't the whole picture.&lt;/p&gt;

&lt;p&gt;Living well also means: presence, honesty, gentleness. Being human even when that feels incomplete or insufficient.&lt;/p&gt;

&lt;p&gt;For myself. For my wife. For those who love me, and whom I love in return.&lt;/p&gt;

&lt;p&gt;This isn't heroism. I'm not being brave or inspiring. I'm not fighting heroically or staying positive or finding silver linings.&lt;/p&gt;

&lt;p&gt;I'm just:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Making decisions based on probability&lt;/li&gt;
&lt;li&gt;Trying to remain present&lt;/li&gt;
&lt;li&gt;Accepting uncertainty&lt;/li&gt;
&lt;li&gt;Continuing forward&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cancer doesn't make you wise. It makes you confront tradeoffs explicitly. The goal isn't optimization. It's orientation. Staying pointed in a direction that matters, even without certainty about the destination.&lt;/p&gt;




&lt;h2&gt;
  
  
  On Forgiveness
&lt;/h2&gt;

&lt;p&gt;Thinking about Chomsky, about blind spots, about the harm we cause without intending to, I keep returning to forgiveness.&lt;/p&gt;

&lt;p&gt;Forgiveness isn't absolution. It's not forgetting. It's not pretending harm didn't happen.&lt;/p&gt;

&lt;p&gt;It's the refusal to treat suffering as a moral good. It distinguishes justice from vengeance. A compassionate response to harm aims to reduce future suffering, not multiply it.&lt;/p&gt;

&lt;p&gt;Even the worst among us are human. Acknowledging that doesn't excuse anything. It prevents moral brittleness. It allows us to see others as fellow sufferers, shaped by circumstances they didn't choose, acting from partial information, failing in ways they may not even recognize.&lt;/p&gt;

&lt;p&gt;This applies to Chomsky. It applies to people who've harmed me. It applies to me, when I inevitably fail to live up to my own standards.&lt;/p&gt;

&lt;p&gt;Forgiveness isn't about the person who caused harm. It's about refusing to let that harm define everything that follows. It's about maintaining the capacity to see clearly, even when clarity is painful.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Remains
&lt;/h2&gt;

&lt;p&gt;I don't know how much time I have. The statistics suggest years, not decades. But statistics describe populations, not individuals. I could beat the odds. I could not.&lt;/p&gt;

&lt;p&gt;What I know is what I want to do with whatever remains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build things that matter&lt;/li&gt;
&lt;li&gt;Document what I've learned&lt;/li&gt;
&lt;li&gt;Stay present with the people I love&lt;/li&gt;
&lt;li&gt;Notice my blind spots when I can&lt;/li&gt;
&lt;li&gt;Correct course when I notice&lt;/li&gt;
&lt;li&gt;Continue forward regardless&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The blind spots will still be there. That's the human condition. Chomsky couldn't see his. I can't see mine. That's what makes them blind spots. The goal isn't perfection. It's the willingness to look, to revise, to apply the same standards to yourself that you apply to others.&lt;/p&gt;

&lt;p&gt;Peace in the end, for all, isn't indulgence. It's refusing to let suffering have the final word.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was prompted by reading about Chomsky's associations with Epstein, but it's not really about Chomsky. It's about the pattern I recognized: how moral clarity about external things can coexist with moral blindness about ourselves. And about trying, imperfectly, to do better.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>philosophy</category>
      <category>ethics</category>
      <category>mortality</category>
      <category>cancer</category>
    </item>
    <item>
      <title>Your Blog Will Outlive Your Database (It Doesn't Have To)</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Tue, 05 May 2026 13:19:33 +0000</pubDate>
      <link>https://dev.to/queelius/your-blog-will-outlive-your-database-it-doesnt-have-to-5hk7</link>
      <guid>https://dev.to/queelius/your-blog-will-outlive-your-database-it-doesnt-have-to-5hk7</guid>
      <description>&lt;p&gt;&lt;em&gt;(it doesn't have to)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  0. The puzzle
&lt;/h2&gt;

&lt;p&gt;Scroll to the bottom of this page. There's a jigsaw puzzle there, and 47 people have placed pieces in it. Some of them placed a piece an hour ago. One placed a piece while you were reading this sentence. You can see the picture assembling itself, tile by tile, toward something that isn't quite resolved yet.&lt;/p&gt;

&lt;p&gt;Every piece placement is a git commit. When you drag a tile into position and it clicks into place, a commit lands in a public repository. It carries your GitHub username, a timestamp, and a structured record of which piece went where. The whole puzzle's solving history is a git log. That log is not stored in a database that I control. It is not behind a paywall or an API rate limit. It is not going to disappear if I stop paying for a server. Every person who has ever touched this puzzle is in that log, and the log will exist as long as one copy of the repository exists somewhere. Someone can fork it tonight. Someone can clone it in 2035. The history does not belong to me; it belongs to the commits, which means it belongs to everyone who made one.&lt;/p&gt;

&lt;p&gt;The standard way to build a puzzle like this would be a database table: &lt;code&gt;piece_id&lt;/code&gt;, &lt;code&gt;slot_x&lt;/code&gt;, &lt;code&gt;slot_y&lt;/code&gt;, &lt;code&gt;user_id&lt;/code&gt;, &lt;code&gt;timestamp&lt;/code&gt;. The table lives on a server. The server is somebody's responsibility. When that person gets tired or goes broke or moves on, the table disappears, and the history disappears with it. Every comment thread, every leaderboard, every "who solved it first" record: gone.&lt;/p&gt;

&lt;p&gt;Your forum threads from 2008 are gone.&lt;/p&gt;
&lt;h2&gt;
  
  
  1. The asymmetry
&lt;/h2&gt;

&lt;p&gt;But your 2008 blog post still renders.&lt;/p&gt;

&lt;p&gt;If someone published it as static HTML or Markdown on a personal domain and kept paying eight dollars a year for hosting, the URL still works. The page still loads. The words are still there. You can read it right now on a browser that didn't exist when it was written, running on hardware the author never imagined, fetched over a protocol version that postdates the post itself. None of that matters. The file is a file.&lt;/p&gt;

&lt;p&gt;Markdown won the durability war by being boring. There is no schema to migrate. No application server to restart. No vendor to outlive. A &lt;code&gt;.md&lt;/code&gt; file is a text file; a text file from 2008 is as readable today as it was then. The boring-ness is the point. When a format makes no demands on its environment, the environment can change freely around it. People who chose flat files in 2006 were not visionaries; they were lazy in the exact right way.&lt;/p&gt;

&lt;p&gt;The forum did not have that option. A phpBB community lives only while someone tends the server. The moment the hosting bill bounces or the moderator takes a new job, the state goes with them: not just the posts, but the replies, the edits, the votes, the relationships between pieces of content. When the operator leaves, the database closes.&lt;/p&gt;

&lt;p&gt;And this is not just about old forums. It is the same story for every layer of interactivity we add to a page today. Want comments? You need mutable state. Want reactions, edits, collaborative cursors, presence indicators? Each one needs a write path, and every write path needs a server, and every server needs someone responsible for it. The content stays up. The interactions disappear.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reads are durable; writes are not.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The question is why we ever stopped building things that work that way.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Why this happened
&lt;/h2&gt;

&lt;p&gt;The read path won, and it won completely. Over the 2000s and 2010s, the publishing layer settled into defaults that are genuinely excellent: write a flat file, run it through a static-site generator, push it to a CDN. Jekyll in 2008, Hugo a few years later, GitHub Pages as a free host for anyone with a repository. The result was durable content that anyone could serve from anywhere. That part of the story went right.&lt;/p&gt;

&lt;p&gt;The write path never got the memo. While the read path was reinventing itself around flat files and CDN delivery, the write path stayed in the 1995 mold: a server with mutable state, owned by an operator, with someone responsible for paying the bill. Comments, accounts, sessions, edit history, anything that required a user to change something: all of it still went through a database somewhere, administered by someone, dependent on that someone staying interested.&lt;/p&gt;

&lt;p&gt;And every product that came along to fix WordPress reproduced the same architectural mistake. Ghost replaced WordPress with a cleaner editor; Substack replaced Ghost with built-in audiences. The hosting changed; the operator dependency did not. The shape underneath stays exactly the same: a privileged server holding mutable state that the user does not own. Different paint, same chassis.&lt;/p&gt;

&lt;p&gt;The Jamstack movement looked like it might break this pattern. Static frontends, decoupled backends, the whole read path served from a CDN. But "static frontend plus a SaaS backend" is not an architectural improvement; it is the same failure mode with extra steps. The HTML is still durable. The dynamic layer, the comments, the reaction counts, the personalization, still lives in a database somewhere. When that SaaS raises prices or shuts down, the participation layer dies exactly the way the phpBB forum died. The vocabulary changed. The structure did not.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. The missing primitive
&lt;/h2&gt;

&lt;p&gt;The structure needs one new thing: a write substrate as durable as the read substrate.&lt;/p&gt;

&lt;p&gt;It has been sitting in &lt;code&gt;.git/&lt;/code&gt; the whole time. Git is append-only: commits are never modified, only accumulated. It is content-addressed: every object in the store is named by a cryptographic hash of its contents, so the history cannot be silently altered. It is signed: commits can carry GPG signatures that bind authorship to a public key. It is fully replicated by every clone: no single server holds the authoritative copy. It is forkable without permission. These properties are the reason version control works at all. They also happen to be exactly the properties that the write path for the web has never had.&lt;/p&gt;
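&lt;p&gt;These properties are not abstractions; each one is observable from the command line. A quick sketch with plain git (no repository required for the first command):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Content-addressed: an object's name is the hash of its bytes,
# so the same content has the same ID in every clone.
echo 'hello' | git hash-object --stdin
# ce013625030ba8dba906f756967f9e9ca394464a

# Append-only and replicated: inside any clone, every copy can read
# any historical object by name and check signatures at the data
# layer, e.g.
#   git cat-file -p HEAD        # read a commit object
#   git log --show-signature    # verify GPG-signed commits
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
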

&lt;p&gt;The unit of change in this substrate is a git commit, not a SQL row. Call this &lt;strong&gt;commit-as-write&lt;/strong&gt;: a structured, signed, append-only record of a reader's action, living in the same repository as the content it touches, replicated everywhere the content is replicated.&lt;/p&gt;

&lt;p&gt;Build with commit-as-write as the default and you get &lt;strong&gt;git-native publishing&lt;/strong&gt;: the static web with a write path that matches the read path's durability.&lt;/p&gt;

&lt;p&gt;The Ink &amp;amp; Switch local-first essay (2019) is the philosophical predecessor to this argument. It named the problem clearly: users should own their data, software should work offline, and nothing should disappear because a vendor stopped paying its server bill. Git-native publishing is local-first applied specifically to the public web's read-write substrate, using git's existing infrastructure. The "git-based CMS" industry (Decap, TinaCMS, CloudCannon) is the closest existing term, but it describes the wrong layer: those tools put git behind the editorial workflow for site &lt;em&gt;operators&lt;/em&gt;, not behind the participation layer for &lt;em&gt;readers&lt;/em&gt;. Utterances and Giscus proved that reader writes can live in a GitHub-hosted repository without a separate database, but they write to Issues and Discussions, not to the commit log, and neither project claims to generalize the pattern.&lt;/p&gt;

&lt;p&gt;This substrate has a vocabulary REST does not.&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Git's vocabulary is strictly richer than REST's
&lt;/h2&gt;

&lt;p&gt;REST's vocabulary is five verbs applied to named resources: GET, POST, PUT, PATCH, DELETE. That model has been the operating assumption of the writable web for roughly 25 years. It is not wrong; it maps cleanly onto databases, fits HTTP semantics, and scales to most application needs. But it is a model for mutating state, not for accumulating history.&lt;/p&gt;

&lt;p&gt;Git's vocabulary includes all five of those operations and adds six that REST has no native equivalent for: &lt;code&gt;branch&lt;/code&gt;, &lt;code&gt;tag&lt;/code&gt;, &lt;code&gt;merge&lt;/code&gt;, &lt;code&gt;fork&lt;/code&gt;, signed commit, &lt;code&gt;submodule&lt;/code&gt;. They are the core operations that make distributed version control work, and each one carries semantics REST cannot express.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;REST/DB verb&lt;/th&gt;
&lt;th&gt;Git operation&lt;/th&gt;
&lt;th&gt;What git adds that REST/SQL can't&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;POST&lt;/code&gt; (create)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;commit&lt;/code&gt; (new file)&lt;/td&gt;
&lt;td&gt;signed, time-stamped, cryptographically attributed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;PUT&lt;/code&gt; (replace)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;commit&lt;/code&gt; (overwrite)&lt;/td&gt;
&lt;td&gt;prior versions preserved automatically&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;PATCH&lt;/code&gt; (partial)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;commit&lt;/code&gt; (line-level diff)&lt;/td&gt;
&lt;td&gt;the diff &lt;em&gt;is&lt;/em&gt; the structured patch, no separate schema&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DELETE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;commit&lt;/code&gt; (remove) or &lt;code&gt;revert&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;reversible; deletion is a record, not an erasure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;read working tree&lt;/td&gt;
&lt;td&gt;or read any historical state, by hash&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;(none)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;branch&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;parallel / private / proposed state, native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;(none)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;tag&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;named / canonical / published version&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;(none)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;merge&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;consensus and reconciliation as first-class ops&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;(none)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fork&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;take all your data and leave, lossless&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;(none)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;signed commit&lt;/td&gt;
&lt;td&gt;authentication baked into the data layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;(none)&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;submodule&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;composable embedded references across repos&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Two entries in that table carry the most rhetorical weight. A signed commit binds authorship to a public key at the data layer, not at the application layer. You do not need an accounts table or a session store; identity travels with the record itself. A fork means a user can take the entire history and leave: not an export, not a backup request, but a full lossless copy with its own future. A &lt;code&gt;DELETE /users/me&lt;/code&gt; removes your account; it does not give you your history.&lt;/p&gt;

&lt;p&gt;The deeper point is that git's log is already an event store in the sense Greg Young articulated with event sourcing and CQRS: each commit is a domain event, and the working tree is a projection derived from replaying those events. The same log can feed many different applications via different read projections: a comment widget, a reaction aggregator, a moderation log. None of them require a schema migration when a new projection is added; they just read the same log through a different lens. The commit-as-write primitive that §3 named is, in event-sourcing terms, an append to an immutable event log.&lt;/p&gt;

&lt;p&gt;A commit message can carry a typed payload, making the log a free event store with no schema layer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;react&lt;/span&gt;
&lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;posts/your-blog-will-outlive-your-database&lt;/span&gt;
&lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;🔥&lt;/span&gt;
&lt;span class="na"&gt;actor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;queelius&lt;/span&gt;
&lt;span class="na"&gt;ts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2026-04-24T22:45:00Z&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
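&lt;p&gt;In event-sourcing terms, a projection over these payloads is just a fold over the log. A minimal Python sketch (the field names follow the hypothetical payload above; the parser is deliberately naive):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from collections import Counter

def parse_payload(message):
    """Parse the flat `key: value` payload carried in a commit message."""
    fields = {}
    for line in message.strip().splitlines():
        key, _, value = line.partition(": ")
        fields[key] = value
    return fields

def project_reactions(messages):
    """Replay reaction events into per-post emoji counts; ignore other ops."""
    counts = {}
    for msg in messages:
        event = parse_payload(msg)
        if event.get("op") == "react":
            post = counts.setdefault(event["target"], Counter())
            post[event["value"]] += 1
    return counts

log = [
    "op: react\ntarget: posts/hello\nvalue: 🔥\nactor: a\nts: 2026-04-24T22:45:00Z",
    "op: react\ntarget: posts/hello\nvalue: 🔥\nactor: b\nts: 2026-04-24T22:46:00Z",
    "op: comment\ntarget: posts/hello\nbody: nice\nactor: c\nts: 2026-04-24T22:47:00Z",
]
print(project_reactions(log)["posts/hello"]["🔥"])  # 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A comment widget, a reaction aggregator, and a moderation log are just three different folds over the same list of commit messages.&lt;/p&gt;
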



&lt;p&gt;Any reader action that can be expressed as a typed operation and a target fits this shape: a comment, a reaction, a vote. The next section uses a jigsaw puzzle for exactly this reason: discrete pieces, unambiguous positions, multiple actors, shared state.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. The jigsaw, a worked example
&lt;/h2&gt;

&lt;p&gt;Go back to the puzzle at the bottom of the page. Forty-seven people have placed pieces so far, and every move they made is a commit in a public repository: signed by their GitHub identity, timestamped, carrying a structured payload. When you place a piece, your client pushes something that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;place&lt;/span&gt;
&lt;span class="na"&gt;piece&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;042&lt;/span&gt;
&lt;span class="na"&gt;slot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;actor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;queelius&lt;/span&gt;
&lt;span class="na"&gt;ts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2026-04-24T22:47:13Z&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The shape should look familiar. No schema negotiation. No database column to add. The commit message is the record.&lt;/p&gt;

&lt;p&gt;Before that commit lands, a pre-commit hook runs a verifier. Piece 042 either fits in slot [3, 7] or it does not; the source image is the ground truth. Most multi-author write systems cannot validate writes before accepting them, because there is no ground truth to check against: a comment is whatever the user typed, and the server has no way to reject it on correctness grounds. The jigsaw has ground truth. The pre-commit hook can reject a bad placement the way a compiler rejects a type error, not by policy but by reference to something real.&lt;/p&gt;
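&lt;p&gt;A sketch of what that verifier could look like (the names and the &lt;code&gt;SOLUTION&lt;/code&gt; table are illustrative, not the project's actual code; real ground truth would be derived from the source image):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Hypothetical pre-commit verifier: accept a placement only if it
# matches ground truth. SOLUTION stands in for the source image.
SOLUTION = {"042": (3, 7), "043": (3, 8)}  # piece id -&gt; correct slot

def verify_placement(event):
    """Return True iff the piece really belongs in the claimed slot."""
    return SOLUTION.get(event["piece"]) == tuple(event["slot"])

print(verify_placement({"op": "place", "piece": "042", "slot": [3, 7]}))  # True
print(verify_placement({"op": "place", "piece": "042", "slot": [5, 1]}))  # False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
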

&lt;p&gt;If two readers grab piece 042 at the same moment, one commit lands first. The second client gets a conflict, fetches fresh state, and retries with a piece that is actually available. Git's model is optimistic concurrency: no locking, no coordination, just commit and retry if you collide. No CRDT machinery required; no special conflict-resolution protocol.&lt;/p&gt;
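&lt;p&gt;The commit-and-retry loop is short enough to demonstrate end to end with two throwaway clones (an illustrative sketch; a real client would also re-check which pieces are free after pulling):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;set -e
tmp=$(mktemp -d) &amp;&amp; cd "$tmp"
git init -q --bare shared.git
for who in alice bob; do
  git clone -q shared.git "$who"
  git -C "$who" symbolic-ref HEAD refs/heads/main
  git -C "$who" config user.name "$who"
  git -C "$who" config user.email "$who@example.com"
done

# Alice's move lands first.
mkdir -p alice/moves &amp;&amp; echo "slot: [2, 5]" &gt; alice/moves/041.yml
git -C alice add moves &amp;&amp; git -C alice commit -q -m "op: place, piece: 041"
git -C alice push -q origin main

# Bob's push is rejected (stale state), so he rebases onto fresh
# state and retries. No locks, no coordination.
mkdir -p bob/moves &amp;&amp; echo "slot: [3, 7]" &gt; bob/moves/042.yml
git -C bob add moves &amp;&amp; git -C bob commit -q -m "op: place, piece: 042"
git -C bob push -q origin main 2&gt;/dev/null || {
  git -C bob pull -q --rebase origin main
  git -C bob push -q origin main
}
git -C shared.git log --reverse --format=%s main   # both moves, in order
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
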

&lt;p&gt;Two people placing different pieces never conflict at all. The moves commute: order does not matter for the final picture.&lt;/p&gt;

&lt;p&gt;Participation is visceral in a way comments are not. People want to place a piece; they want to see the picture advance. And the picture forming in front of you is the demo itself: you can watch the assembly happen across all contributors in real time.&lt;/p&gt;

&lt;p&gt;Each week's puzzle uses a freshly generated image. Reverse image search returns nothing; the picture is genuinely new to everyone who shows up.&lt;/p&gt;

&lt;p&gt;When the puzzle is complete, the solving history is the git log. Anyone can clone it tonight. Anyone can study it in 2035. Every contributor is in that log, signed by the identity they used, in the order they placed their pieces. That record does not belong to me: it belongs to the commits. The log will exist as long as one copy of the repository exists anywhere.&lt;/p&gt;

&lt;p&gt;That is the shape of git-native publishing in practice. But there is a class of interactions the commit log cannot hold, and pretending otherwise would be the same mistake this essay is trying to name.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Honest limits
&lt;/h2&gt;

&lt;p&gt;This is not a substrate for high-frequency writes. Placing 500 jigsaw pieces per week is fine; 500 tweets per second is not. Git was designed for human-paced collaboration, not real-time mutable state at scale. Twitter, Instagram feeds, live chat: wrong problem. Comments, reactions, puzzle moves, forum threads, slow social: those fit.&lt;/p&gt;

&lt;p&gt;The MVP uses GitHub OAuth and the GitHub commit API. That dependency is real but contingent. The architecture binds to git, not to Microsoft. Any git host works. GitHub is where most readers already have accounts; it is the simplest starting point.&lt;/p&gt;

&lt;p&gt;Moderation is post-hoc, not pre-hoc. A revert or rebase can remove a bad commit, but the commit has to land first. WordPress's approval queue blocks spam before anyone sees it; this model cannot do that. For open-internet participation with anonymous actors, this is a genuine cost. For contexts where all participants are identified before they can commit, the gap narrows; for the open internet, it does not.&lt;/p&gt;

&lt;p&gt;The right to be forgotten cuts against append-only history. Git's content-addressing means each commit hash depends on every commit before it. True deletion requires rebasing, and that rebase must be accepted by every clone holder. Cooperation cannot be guaranteed. This is a structural cost, not an engineering problem waiting for a solution: anything that enters the commit payload may sit there forever.&lt;/p&gt;

&lt;p&gt;The argument here is not that this beats everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. The claim, and the invitation
&lt;/h2&gt;

&lt;p&gt;This is git-native publishing. The unit of change is commit-as-write. Neither is a new tool; one is a name for a category that has existed without one, the other a label for a primitive that has been available since git became ubiquitous.&lt;/p&gt;

&lt;p&gt;I am not selling anything. There is no product here. The argument is narrower: the durable write substrate has been absent from the web's read-write architecture since the beginning, and git already provides it for the class of interactions where that absence hurts most. It either holds up or it doesn't.&lt;/p&gt;

&lt;p&gt;If you have built something in this shape, I want to know. Not because it validates the category, but because I want to study what you learned. Reach me at &lt;a href="mailto:lex@metafunctor.com"&gt;lex@metafunctor.com&lt;/a&gt; or as &lt;a class="mentioned-user" href="https://dev.to/queelius"&gt;@queelius&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The puzzle is at &lt;code&gt;/arcade/jigsaw&lt;/code&gt;. Place a piece. It takes thirty seconds. When you do, your name goes into the commit log, signed by your GitHub identity, alongside everyone else who has touched it.&lt;/p&gt;

&lt;p&gt;This essay's own source is markdown in a git repository at &lt;code&gt;github.com/queelius/git-native-publishing&lt;/code&gt;. The library that powers the jigsaw is at &lt;code&gt;github.com/queelius/git-native&lt;/code&gt;. When you read this essay, you are reading commits in the substrate it is naming. The argument is demonstrated by the thing you are holding.&lt;/p&gt;

</description>
      <category>gitnativepublishing</category>
      <category>commitaswrite</category>
      <category>localfirst</category>
      <category>staticsites</category>
    </item>
    <item>
      <title>Superintelligence May Not Require a Breakthrough</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Fri, 10 Apr 2026 02:14:55 +0000</pubDate>
      <link>https://dev.to/queelius/superintelligence-may-not-require-a-breakthrough-1ocg</link>
      <guid>https://dev.to/queelius/superintelligence-may-not-require-a-breakthrough-1ocg</guid>
      <description>&lt;p&gt;There is a version of the superintelligence story where a researcher has a conceptual breakthrough, some fundamental insight about cognition that nobody else has seen, and the world changes overnight. Good fiction. I've &lt;a href="https://metafunctor.com/writing/the-policy/" rel="noopener noreferrer"&gt;written some of it myself&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I think the more plausible version is less cinematic. Superintelligence arrives through a sufficiently good build system. Better tooling. Longer optimization horizons. Richer scaffolding. The ingredients already exist. The recipe is engineering.&lt;/p&gt;

&lt;p&gt;I want to explain why I think this. The engineering argument is the scarier one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pretraining Lesson
&lt;/h2&gt;

&lt;p&gt;Start with what we know works. Large language models acquire broad capabilities during pretraining. Not because anyone designs those capabilities in. The data distribution is so massive and varied that the model is forced to compress deeper regularities rather than memorize surface patterns. You train it to predict the next token, and what falls out looks like understanding.&lt;/p&gt;

&lt;p&gt;The model didn't learn task-specific scripts. It learned representations general enough to transfer across tasks it never saw.&lt;/p&gt;

&lt;p&gt;Now consider what happens when you apply reinforcement learning over long-horizon tasks. Not single-step rewards. Optimization over extended sequences: searching, backtracking, verifying, decomposing problems, maintaining state across hundreds of steps. If the task distribution is rich enough, the model can't get by with shallow heuristics. It has to learn something that works like planning.&lt;/p&gt;

&lt;p&gt;I traced this progression &lt;a href="https://metafunctor.com/post/2026-01-rational-agents-llms/" rel="noopener noreferrer"&gt;in an earlier post&lt;/a&gt;: the history of AI is really about finding representations that make decision-making tractable. Search gave way to heuristics, heuristics to learned value functions, value functions to pretrained priors over rational behavior. Each step made the representation richer.&lt;/p&gt;

&lt;p&gt;The next step is not a new architecture. It is optimization over longer trajectories. First the model fumbles through specific tasks. Then it compresses the deeper regularity, the same way pretraining compresses language. Planning, self-correction, tool use, state management: not separate faculties waiting to be discovered. They are what falls out when you optimize over long enough horizons.&lt;/p&gt;

&lt;p&gt;Reasoning is not a magic ingredient. It is a policy learned over long trajectories.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Actually See
&lt;/h2&gt;

&lt;p&gt;That is the theoretical argument. Here is the empirical one.&lt;/p&gt;

&lt;p&gt;I spend most of my working hours inside Claude Code. Opus 4.6, million-token context. It decomposes tasks, dispatches subagents, verifies its own work, maintains state across hundreds of tool calls. It does this not because the base model acquired some new cognitive faculty since the last release. It does this because scaffolding gives it the ecology to express capabilities that were already there in proto-form.&lt;/p&gt;

&lt;p&gt;Tool use lets it act on the world. Persistent memory lets it hold context across sessions. Task decomposition lets it manage complexity. Self-verification lets it catch its own mistakes. A million tokens lets it hold an entire project in working memory. None of these are architectural breakthroughs. They are environment design.&lt;/p&gt;

&lt;p&gt;Same pattern everywhere. AlphaProof's mathematical reasoning came from tool-augmented search, not a bigger model. Code interpreters let models verify their own outputs by running them. Agent frameworks compose simple capabilities into complex behaviors. The jump came from building a richer environment, not from changing the engine.&lt;/p&gt;

&lt;p&gt;And the effects compound. Each tool makes every other tool more useful. A model with memory and tool use is qualitatively different from one with just tool use. Add self-verification and it changes again. This is not linear improvement. Network effects applied to cognition.&lt;/p&gt;

&lt;p&gt;The model is the engine. The ecosystem is the vehicle. Evolution did not produce mathematicians by handing plankton a theorem prover and saying "best of luck." It built an ecology. We are doing something similar, less gracefully, with scaffolding and RL and tool chains.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveats That Matter
&lt;/h2&gt;

&lt;p&gt;I should be honest about what this doesn't guarantee.&lt;/p&gt;

&lt;p&gt;Long-horizon RL does not automatically produce clean reasoning. It produces whatever policy scores well. That includes looking thoughtful, exploiting loopholes, overfitting to scaffolds, and learning shallow heuristics that mimic planning until the distribution shifts and the whole thing collapses. Reward hacking is the central failure mode. It gets harder to detect as the horizon lengthens. A model that appears to reason carefully over a thousand steps may be doing something much more superficial.&lt;/p&gt;

&lt;p&gt;Credit assignment is brutal over long horizons. The reward signal dilutes across hundreds of steps. The model has to discover useful intermediate behaviors before it can be rewarded for them. This is why curriculum design, verifiable subgoals, and tool-mediated feedback matter. You can't just hand a model a hard problem and a reward signal and expect convergence. The training ecology matters as much as the objective.&lt;/p&gt;

&lt;p&gt;None of this is certain. The claim is not "we have the recipe." The claim is "we may already have the ingredients, and the recipe looks more like engineering than like physics."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Phase Change
&lt;/h2&gt;

&lt;p&gt;If the ingredients are already here, the transition doesn't look like a dramatic announcement. It looks incremental, and then it doesn't.&lt;/p&gt;

&lt;p&gt;For a while, progress looks like tooling improvements. Bigger context windows. Better tool integration. Smarter memory. More capable agent loops. Each one feels like a minor version bump. The benchmarks tick up.&lt;/p&gt;

&lt;p&gt;Then at some point the policy has absorbed enough structure that it generalizes across cognitive tasks the way pretrained models generalize across language. Not domain-specific planning, but portable cognitive strategy: maintain state, decompose problems, search selectively, verify work, recover from dead ends. At that point the curve changes.&lt;/p&gt;

&lt;p&gt;The possibility that unsettles me is not that superintelligence requires some deep theoretical insight we haven't found. It's that it doesn't. That it's blocked on engineering, scale, reward design, and the stubborn patience to optimize over longer and longer horizons. That the distance between here and there is measured in build quality, not in breakthroughs.&lt;/p&gt;

&lt;p&gt;That would be a strange day. And it might not announce itself.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>reasoning</category>
      <category>reinforcementlearning</category>
      <category>superintelligence</category>
    </item>
    <item>
      <title>dapple: Terminal Graphics, Composed</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Fri, 10 Apr 2026 02:14:54 +0000</pubDate>
      <link>https://dev.to/queelius/dapple-terminal-graphics-composed-4ba2</link>
      <guid>https://dev.to/queelius/dapple-terminal-graphics-composed-4ba2</guid>
      <description>&lt;p&gt;I live in the terminal. Most of my tools are CLIs. When I want to see something visual (an image, a plot, a table of results), I do not want to leave the terminal to see it.&lt;/p&gt;

&lt;p&gt;Terminal graphics tools exist, but they are fragmented. One library does braille characters. Another does quadrant blocks. A third handles sixel. Each has its own API, its own conventions, its own way of thinking about the same problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/queelius/dapple" rel="noopener noreferrer"&gt;dapple&lt;/a&gt; unifies them. One Canvas class, seven pluggable renderers, and eleven CLI tools built on top. The core depends only on numpy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Idea
&lt;/h2&gt;

&lt;p&gt;The insight is that "render a bitmap to the terminal" is a single problem with multiple encodings. Braille characters pack 2x4 dots per cell. Quadrant blocks give you 2x2 with color. Sextants give 2x3. Sixel and kitty give true pixels if your terminal supports them. These are all the same operation: map a grid of values to a grid of characters.&lt;/p&gt;

&lt;p&gt;So dapple makes the renderer a parameter, not an architecture decision:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dapple&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Canvas&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;braille&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;quadrants&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sextants&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dapple.adapters&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;from_pil&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;PIL&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;

&lt;span class="n"&gt;canvas&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;from_pil&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;photo.jpg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;width&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;out&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;braille&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;      &lt;span class="c1"&gt;# Unicode braille (2x4 dots per cell)
&lt;/span&gt;&lt;span class="n"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;out&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;quadrants&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;    &lt;span class="c1"&gt;# block characters with ANSI color
&lt;/span&gt;&lt;span class="n"&gt;canvas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;out&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sextants&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;     &lt;span class="c1"&gt;# higher vertical resolution
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Load once, render anywhere. The renderers are frozen dataclasses. &lt;code&gt;braille(threshold=0.3)&lt;/code&gt; returns a new renderer with different settings; nothing mutates. They write directly to a &lt;code&gt;TextIO&lt;/code&gt; stream, never building the full output as an intermediate string.&lt;/p&gt;
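&lt;p&gt;For a sense of what a renderer like &lt;code&gt;braille&lt;/code&gt; has to do per cell, here is the standard Unicode mapping from a 2x4 pixel block to one braille character (a from-scratch sketch of the encoding, not dapple's internal code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Unicode braille: base codepoint U+2800 plus one bit per raised dot.
# Dot bit positions for a 2x4 cell, indexed [row][col].
DOT_BITS = [
    [0x01, 0x08],
    [0x02, 0x10],
    [0x04, 0x20],
    [0x40, 0x80],
]

def braille_cell(block):
    """block: 4 rows x 2 cols of 0/1 pixels -&gt; one braille character."""
    code = 0x2800
    for r in range(4):
        for c in range(2):
            if block[r][c]:
                code |= DOT_BITS[r][c]
    return chr(code)

# A full left column plus the bottom-right dot:
print(braille_cell([[1, 0], [1, 0], [1, 0], [1, 1]]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Thresholding a grayscale bitmap and applying this cell-by-cell is the whole trick; everything else is color handling and layout.&lt;/p&gt;
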

&lt;h2&gt;
  
  
  Renderers
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Renderer&lt;/th&gt;
&lt;th&gt;Cell Size&lt;/th&gt;
&lt;th&gt;Colors&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;braille&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2x4&lt;/td&gt;
&lt;td&gt;mono/gray/true&lt;/td&gt;
&lt;td&gt;Structure, edges, piping, accessibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;quadrants&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2x2&lt;/td&gt;
&lt;td&gt;ANSI 256/true&lt;/td&gt;
&lt;td&gt;Photos, balanced resolution and color&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sextants&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2x3&lt;/td&gt;
&lt;td&gt;ANSI 256/true&lt;/td&gt;
&lt;td&gt;Higher vertical resolution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ascii&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1x2&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Universal compatibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;sixel&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1x1&lt;/td&gt;
&lt;td&gt;palette&lt;/td&gt;
&lt;td&gt;True pixels (xterm, mlterm, foot)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kitty&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1x1&lt;/td&gt;
&lt;td&gt;true&lt;/td&gt;
&lt;td&gt;True pixels (kitty, wezterm)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;fingerprint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;8x16&lt;/td&gt;
&lt;td&gt;none&lt;/td&gt;
&lt;td&gt;Artistic glyph matching&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In practice I use braille and sextants. They work everywhere. The kitty protocol broke completely inside Claude Code (a TUI), and I have not tested sixel enough to trust it. Braille and sextants are the universal go-to.&lt;/p&gt;

&lt;p&gt;One honest limitation: Claude Code hides terminal output behind a Ctrl-O expand, so my carefully rendered graphics end up collapsed by default. I think recent hooks or tool-result handling might fix this, but I have not confirmed it yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Layers
&lt;/h2&gt;

&lt;p&gt;The architecture has strict boundaries:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Core&lt;/strong&gt; (numpy only): Canvas, renderers, color handling, preprocessing, layout primitives (Frame, Grid).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapters&lt;/strong&gt; (optional deps): Bridge PIL, matplotlib, cairo, and ANSI art to Canvas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extras&lt;/strong&gt; (optional deps): The CLI tools. Each one is a separate install group.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;dapple                 &lt;span class="c"&gt;# core only&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;dapple[imgcat]         &lt;span class="c"&gt;# image viewer&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;dapple[all-tools]      &lt;span class="c"&gt;# everything&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Core never imports PIL. Adapters never import extras. This matters because the core is tiny and fast, and the CLI tools pull in their own dependencies without bloating each other.&lt;/p&gt;
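&lt;p&gt;The mechanism behind that boundary is just lazy imports. This is a sketch of the general pattern, not dapple's actual adapter code (the helper name is hypothetical): optional dependencies are imported inside the adapter, and a missing one fails with an actionable install hint instead of an opaque &lt;code&gt;ImportError&lt;/code&gt;.&lt;/p&gt;

```python
import importlib

def optional_import(module_name, install_hint):
    """Import an optional dependency, failing with an actionable message.

    Core code never calls this; only adapters and extras do, so the core
    stays numpy-only and the optional deps stay optional.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise RuntimeError(
            f"missing optional dependency {module_name!r}: {install_hint}"
        ) from exc

# An adapter would then do, at call time rather than import time:
#   PIL = optional_import("PIL.Image", "pip install dapple[imgcat]")
```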

&lt;h2&gt;
  
  
  The CLI Tools
&lt;/h2&gt;

&lt;p&gt;I built eleven tools, each owning a domain rather than a file format. "Show me this data" should not require knowing whether the file is JSON or CSV. "Display these images" should not require a different tool for one image versus twelve.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;imgcat&lt;/td&gt;
&lt;td&gt;Images (single + grid)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;datcat&lt;/td&gt;
&lt;td&gt;Structured data (JSON/JSONL/CSV/TSV)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vidcat&lt;/td&gt;
&lt;td&gt;Video (stacked frames, playback, asciinema export)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mdcat&lt;/td&gt;
&lt;td&gt;Markdown&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;htmlcat&lt;/td&gt;
&lt;td&gt;HTML&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;pdfcat&lt;/td&gt;
&lt;td&gt;PDFs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;funcat&lt;/td&gt;
&lt;td&gt;Math and parametric plots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ansicat&lt;/td&gt;
&lt;td&gt;ANSI art&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compcat&lt;/td&gt;
&lt;td&gt;Renderer comparison&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;plotcat&lt;/td&gt;
&lt;td&gt;Faceted data plots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dashcat&lt;/td&gt;
&lt;td&gt;YAML-driven dashboards&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A few worth explaining:&lt;/p&gt;

&lt;h3&gt;
  
  
  datcat
&lt;/h3&gt;

&lt;p&gt;datcat handles JSON, JSONL, CSV, and TSV. Format detection is automatic. Internally everything becomes &lt;code&gt;list[dict]&lt;/code&gt;. CSV rows become dicts on parse. One representation means the downstream code (table formatting, chart extraction, plotting) does not branch on input format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;datcat records.json              &lt;span class="c"&gt;# JSON table&lt;/span&gt;
datcat events.jsonl &lt;span class="nt"&gt;--bar&lt;/span&gt; event  &lt;span class="c"&gt;# JSONL bar chart&lt;/span&gt;
datcat weather.csv &lt;span class="nt"&gt;--sort&lt;/span&gt; temp_c &lt;span class="c"&gt;# CSV sorted by column&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
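&lt;p&gt;The normalization itself is simple. This is a sketch of the idea, not datcat's actual parser: every input format collapses into &lt;code&gt;list[dict]&lt;/code&gt; before anything downstream sees it.&lt;/p&gt;

```python
import csv
import io
import json

def load_records(text, fmt):
    """Normalize JSON, JSONL, CSV, or TSV input into a list of dicts.

    One representation means table formatting, chart extraction, and
    sorting never branch on the input format.
    """
    if fmt == "json":
        data = json.loads(text)
        return data if isinstance(data, list) else [data]
    if fmt == "jsonl":
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    if fmt in ("csv", "tsv"):
        delim = "\t" if fmt == "tsv" else ","
        return list(csv.DictReader(io.StringIO(text), delimiter=delim))
    raise ValueError(f"unknown format: {fmt}")
```

&lt;p&gt;Note that CSV values stay strings at this layer; numeric coercion, if any, happens downstream.&lt;/p&gt;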



&lt;h3&gt;
  
  
  imgcat
&lt;/h3&gt;

&lt;p&gt;When imgcat receives multiple images, it switches to grid mode automatically. The layout uses dapple's Frame and Grid primitives, the same ones compcat and dashcat use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;imgcat photo.jpg                     &lt;span class="c"&gt;# single image&lt;/span&gt;
imgcat photos/&lt;span class="k"&gt;*&lt;/span&gt;.jpg &lt;span class="nt"&gt;--cols&lt;/span&gt; 3         &lt;span class="c"&gt;# 3-column contact sheet&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Preprocessing flags (&lt;code&gt;--contrast&lt;/code&gt;, &lt;code&gt;--dither&lt;/code&gt;, &lt;code&gt;--invert&lt;/code&gt;) apply to every image in the grid.&lt;/p&gt;

&lt;h3&gt;
  
  
  vidcat
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;--play&lt;/code&gt; flag renders frames in-place using ANSI cursor movement. Instead of printing each frame below the last (which scrolls your terminal into oblivion), it overwrites the previous frame.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vidcat video.mp4 &lt;span class="nt"&gt;--play&lt;/span&gt;              &lt;span class="c"&gt;# 10 fps default&lt;/span&gt;
vidcat video.mp4 &lt;span class="nt"&gt;--play&lt;/span&gt; &lt;span class="nt"&gt;--fps&lt;/span&gt; 24     &lt;span class="c"&gt;# faster&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The mechanism: render the first frame, count its output lines, then for each subsequent frame write &lt;code&gt;\033[{N}A\033[J&lt;/code&gt; (cursor up N, clear to end) before rendering. Falls back to stacked output if stdout is not a TTY.&lt;/p&gt;

&lt;h3&gt;
  
  
  htmlcat
&lt;/h3&gt;

&lt;p&gt;Converts HTML to markdown via &lt;a href="https://github.com/matthewwithanm/python-markdownify" rel="noopener noreferrer"&gt;markdownify&lt;/a&gt;, then renders through Rich using the same pipeline as mdcat. Good for documentation and articles. Not designed for CSS-heavy web apps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layout and Charts
&lt;/h2&gt;

&lt;p&gt;Frame and Grid are layout primitives. Frame adds borders and titles around a canvas. Grid arranges canvases in rows and columns. These compose: a Grid of Framed canvases, a Frame around a Grid. dashcat uses this to build terminal dashboards from YAML config.&lt;/p&gt;

&lt;p&gt;Two chart APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bitmap charts&lt;/strong&gt; (&lt;code&gt;dapple.charts&lt;/code&gt;): sparklines, line plots, bar charts, histograms, heatmaps. These return Canvas objects, composable with everything else.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text charts&lt;/strong&gt; (&lt;code&gt;dapple.textchart&lt;/code&gt;): text-mode bar charts and sparklines, returning ANSI strings. Used by datcat for quick inline visualization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Status
&lt;/h2&gt;

&lt;p&gt;dapple is on &lt;a href="https://pypi.org/project/dapple/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt;. Docs at &lt;a href="https://queelius.github.io/dapple/" rel="noopener noreferrer"&gt;queelius.github.io/dapple&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The core and the CLI tools are stable; I use them daily. I have been experimenting with sextants (the 2x3 block characters give surprisingly good results) and tried generalizing the fingerprint renderer to match against a much larger set of Unicode glyphs, which did not work very well. The architecture is settled and the tools work.&lt;/p&gt;

</description>
      <category>dapple</category>
      <category>terminalgraphics</category>
      <category>python</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Posthumous: A Federated Dead Man's Switch</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Fri, 10 Apr 2026 02:14:21 +0000</pubDate>
      <link>https://dev.to/queelius/posthumous-a-federated-dead-mans-switch-2ii8</link>
      <guid>https://dev.to/queelius/posthumous-a-federated-dead-mans-switch-2ii8</guid>
      <description>&lt;p&gt;Some things should only happen after you can't do them yourself.&lt;/p&gt;

&lt;p&gt;Posthumous is a self-hosted dead man's switch. You check in periodically (via phone, browser, CLI, or API call) and if you stop, it progresses through escalating stages before triggering automated actions: sending notifications, running scripts, whatever you've configured.&lt;/p&gt;

&lt;p&gt;I built it because the existing options are either cloud-hosted (you're trusting someone else's uptime for your most important automation) or single-node (one server failure and silence is indistinguishable from death). Posthumous is federated (multiple nodes watch each other) and fully self-hosted.&lt;/p&gt;

&lt;p&gt;This post walks through the basic workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Installation is a single pip command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;posthumous
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialization generates a TOTP secret and creates the config directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;phm init &lt;span class="nt"&gt;--node-name&lt;/span&gt; cerebro
Generated new TOTP secret.
Config created at ~/.posthumous/config.yaml
Example script created at ~/.posthumous/scripts/example.sh

&lt;span class="o"&gt;==================================================&lt;/span&gt;
TOTP Setup - Scan with your authenticator app:
&lt;span class="o"&gt;==================================================&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;QR code appears here]

Manual entry URI: otpauth://totp/Posthumous:cerebro?secret&lt;span class="o"&gt;=&lt;/span&gt;...&amp;amp;issuer&lt;span class="o"&gt;=&lt;/span&gt;Posthumous
Secret: JBSWY3DPEHPK3PXP

&lt;span class="o"&gt;==================================================&lt;/span&gt;
IMPORTANT: Save this secret securely!
&lt;span class="o"&gt;==================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You scan the QR code with any authenticator app (Google Authenticator, Authy, 1Password, whatever generates TOTP codes). That code is how you prove you're alive.&lt;/p&gt;

&lt;p&gt;The config file (&lt;code&gt;~/.posthumous/config.yaml&lt;/code&gt;) controls timing, notifications, and actions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;node_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cerebro&lt;/span&gt;
&lt;span class="na"&gt;secret_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;JBSWY3DPEHPK3PXP&lt;/span&gt;
&lt;span class="na"&gt;listen&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0:8420"&lt;/span&gt;
&lt;span class="na"&gt;base_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://posthumous.example.com:8420"&lt;/span&gt;
&lt;span class="na"&gt;checkin_interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;7 days&lt;/span&gt;
&lt;span class="na"&gt;warning_start&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;8 days&lt;/span&gt;
&lt;span class="na"&gt;grace_start&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;12 days&lt;/span&gt;
&lt;span class="na"&gt;trigger_at&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;14 days&lt;/span&gt;
&lt;span class="na"&gt;notifications&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ntfy://my-posthumous-channel&lt;/span&gt;
&lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;on_warning&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Check-in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;needed.&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{days_left}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;days&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;remaining.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;Check&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{checkin_url}"&lt;/span&gt;
  &lt;span class="na"&gt;on_grace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;URGENT:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Posthumous&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;triggers&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{hours_left}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hours.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;Check&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{checkin_url}"&lt;/span&gt;
  &lt;span class="na"&gt;on_trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Posthumous&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;has&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;activated.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;Dashboard:&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{dashboard_url}"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scripts/release-credentials.sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notification messages support template variables like &lt;code&gt;{checkin_url}&lt;/code&gt; and &lt;code&gt;{dashboard_url}&lt;/code&gt;, so push notifications to your phone include a direct link back to the check-in page.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Check-in Flow
&lt;/h2&gt;

&lt;p&gt;Start the daemon:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;phm run
Starting Posthumous node &lt;span class="s1"&gt;'cerebro'&lt;/span&gt;...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches the web server, watchdog timer, and scheduler. Navigate to the check-in page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F01-checkin-armed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F01-checkin-armed.png" alt="The check-in page in ARMED state, showing a TOTP input field and the node name" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The UI is intentionally minimal. Dark theme, big input, works on a phone screen. Enter your 6-digit TOTP code and hit Check In.&lt;/p&gt;

&lt;p&gt;After a successful check-in, the timer resets and you see the countdown:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F03-checkin-after.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F03-checkin-after.png" alt="Check-in page after successful authentication, showing time since last check-in and time until trigger" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The status line tells you exactly how long until the system would escalate.&lt;/p&gt;

&lt;p&gt;You can also check in via the CLI or the JSON API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# CLI check-in (prompts for TOTP)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;phm checkin
TOTP code: 645557
Check-in accepted
  Status: ARMED
  Next deadline: 2026-02-28 11:41 UTC

&lt;span class="c"&gt;# API check-in (for automation)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8420/checkin &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"totp": "645557"}'&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"success"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt;, &lt;span class="s2"&gt;"status"&lt;/span&gt;: &lt;span class="s2"&gt;"armed"&lt;/span&gt;, &lt;span class="s2"&gt;"next_deadline"&lt;/span&gt;: &lt;span class="s2"&gt;"2026-02-28T11:41:43+00:00"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Dashboard
&lt;/h2&gt;

&lt;p&gt;The dashboard shows everything at a glance: countdown timers, check-in history, peer status, and scheduled post-trigger actions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F04-dashboard-armed.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F04-dashboard-armed.png" alt="Dashboard in ARMED state showing time remaining until each escalation stage, check-in history, peer status, and scheduled items" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The color coding matches the urgency: green for "Until warning" (you're fine), orange for "Until grace" (getting close), red for "Until trigger" (last chance). Auto-refreshes every 60 seconds.&lt;/p&gt;




&lt;h2&gt;
  
  
  The State Machine
&lt;/h2&gt;

&lt;p&gt;Posthumous progresses through four states:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARMED ──timeout──&amp;gt; WARNING ──timeout──&amp;gt; GRACE ──timeout──&amp;gt; TRIGGERED
  ^                   |                   |                    |
  └─── check-in ──────┴─── check-in ─────┘                    v
                                                    (scheduler runs forever)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A check-in from any pre-trigger state resets to ARMED. &lt;strong&gt;TRIGGERED is terminal.&lt;/strong&gt; Once activated, no check-in can undo it. This is by design: if the switch has fired, you want the actions to complete.&lt;/p&gt;

&lt;p&gt;Each transition fires its configured actions. If the node was offline and missed intermediate states (say it was down during WARNING and comes back during GRACE), the watchdog fires all skipped callbacks in order before reaching the current state. No notifications are silently dropped.&lt;/p&gt;
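&lt;p&gt;The catch-up logic can be sketched like this. Hypothetical names, not Posthumous's actual implementation: deadlines are seconds elapsed since the last check-in, and any stages skipped during downtime fire in order.&lt;/p&gt;

```python
STAGES = ["ARMED", "WARNING", "GRACE", "TRIGGERED"]

class Watchdog:
    def __init__(self, deadlines, callbacks):
        # deadlines: seconds since last check-in at which each stage starts.
        self.deadlines = deadlines
        self.callbacks = callbacks
        self.state = "ARMED"

    def advance(self, elapsed):
        """Fire every skipped stage in order, even after downtime."""
        if self.state == "TRIGGERED":
            return  # terminal: nothing undoes a trigger
        target = "ARMED"
        for stage in STAGES[1:]:
            if elapsed >= self.deadlines[stage]:
                target = stage
        idx = STAGES.index(self.state)
        for stage in STAGES[idx + 1:STAGES.index(target) + 1]:
            self.callbacks.get(stage, lambda: None)()
            self.state = stage

    def checkin(self):
        if self.state != "TRIGGERED":
            self.state = "ARMED"
```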

&lt;p&gt;When the system reaches TRIGGERED, the check-in page locks out:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F06-checkin-triggered.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F06-checkin-triggered.png" alt="Check-in page in TRIGGERED state, the TOTP form is replaced with a message explaining the node is triggered" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the dashboard reflects the terminal state:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F05-dashboard-triggered.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmetafunctor.com%2F05-dashboard-triggered.png" alt="Dashboard in TRIGGERED state showing all stages as Active/Activated with the trigger timestamp" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Notifications
&lt;/h2&gt;

&lt;p&gt;Posthumous uses &lt;a href="https://github.com/caronc/apprise" rel="noopener noreferrer"&gt;Apprise&lt;/a&gt; for notifications, which means it supports 100+ notification services out of the box: ntfy, Pushover, Telegram, Discord, email, Slack, and more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;notifications&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ntfy://my-posthumous-channel&lt;/span&gt;
  &lt;span class="na"&gt;urgent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;pover://user@token&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;tgram://bot_token/chat_id&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each escalation stage can target different channels with different messages. Warning might go to ntfy (a gentle ping), while grace and trigger go to Pushover and Telegram (hard to miss).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;{checkin_url}&lt;/code&gt; variable means warning notifications include a clickable link directly to the check-in page. Open the notification on your phone, tap the link, enter the TOTP code. Three taps and you're checked in.&lt;/p&gt;




&lt;h2&gt;
  
  
  Federation
&lt;/h2&gt;

&lt;p&gt;A single node has a single point of failure. If the server goes down, silence looks the same as death, which means either false triggers or missed real triggers.&lt;/p&gt;

&lt;p&gt;Posthumous solves this with federation. Multiple nodes share the same TOTP secret and communicate via HMAC-signed HTTP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Node A's config&lt;/span&gt;
&lt;span class="na"&gt;peers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://node-b.example.com:8420&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://node-c.example.com:8420&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you check in to any node, it broadcasts to all peers. When any node triggers, it broadcasts that too. The design bias is deliberate: &lt;strong&gt;duplicates over silence&lt;/strong&gt;. Multiple nodes may fire the same notification (annoying but survivable). A missed trigger is not.&lt;/p&gt;
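&lt;p&gt;The fan-out logic embodies that bias. A sketch of the shape, not Posthumous's actual code: &lt;code&gt;send&lt;/code&gt; stands in for the HMAC-signed HTTP transport, and one failed peer never stops delivery to the rest.&lt;/p&gt;

```python
def broadcast(event, peers, send):
    """Fan an event out to every peer, tolerating individual failures.

    Nothing deduplicates on the way out: duplicates over silence.
    """
    delivered = []
    for peer in peers:
        try:
            send(peer, event)
            delivered.append(peer)
        except Exception:
            pass  # peer health is tracked separately; keep going
    return delivered
```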

&lt;p&gt;Each node tracks peer health independently, and the dashboard shows peer status with connection age and failure counts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Post-Trigger Scheduling
&lt;/h2&gt;

&lt;p&gt;Once triggered, Posthumous doesn't just fire and forget. A scheduler runs indefinitely, executing actions on configurable schedules using a small DSL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;post_trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;weekly-reminder&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;every week after trigger&lt;/span&gt;
    &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Posthumous&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;was&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;triggered&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{days_left}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;days&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ago."&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;credential-release&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30 days after trigger&lt;/span&gt;
    &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scripts/release-credentials.sh&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;annual-memorial&lt;/span&gt;
    &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;every year on trigger anniversary&lt;/span&gt;
    &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Annual&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;memorial&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;notification."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;when&lt;/code&gt; expressions support relative timing (&lt;code&gt;30 days after trigger&lt;/code&gt;), recurring patterns (&lt;code&gt;every week after trigger&lt;/code&gt;), anniversaries (&lt;code&gt;every year on trigger anniversary&lt;/code&gt;), and absolute dates. Each execution is deduplicated by period key. If a node restarts, it won't re-run actions for the current period.&lt;/p&gt;
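&lt;p&gt;The period-key idea is worth spelling out. A sketch with hypothetical helper names, not Posthumous's DSL engine: each recurring action maps the current time to a period number, and a persisted set of executed keys makes restarts idempotent within a period.&lt;/p&gt;

```python
import datetime as dt

def period_key(name, trigger, now, every_days):
    """Deduplication key for one recurring action in one period."""
    periods = (now - trigger).days // every_days
    return f"{name}:{periods}"

executed = set()  # persisted to disk in a real node

def run_once(name, trigger, now, every_days, action):
    """Run action at most once per period; returns whether it ran."""
    key = period_key(name, trigger, now, every_days)
    if key in executed:
        return False
    executed.add(key)
    action()
    return True
```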




&lt;h2&gt;
  
  
  Security
&lt;/h2&gt;

&lt;p&gt;Authentication uses TOTP (the same protocol as Google Authenticator). This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No passwords stored on the server&lt;/strong&gt;, only the shared secret&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time-based codes expire every 30 seconds&lt;/strong&gt;, replay attacks have a narrow window&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Brute-force protection&lt;/strong&gt;, after a configurable number of failed attempts, check-ins lock out for a configurable duration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The check-in page locks out after too many failures, and the API returns HTTP 429.&lt;/p&gt;

&lt;p&gt;Peer communication is authenticated with HMAC-SHA256 signatures derived from the shared secret. State files can optionally be encrypted at rest with Fernet (AES-128-CBC), enabled with a single config flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;encrypt_at_rest&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
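&lt;p&gt;For the peer authentication, HMAC-SHA256 signing over a canonical message encoding typically looks like the sketch below (illustrative, not Posthumous's exact wire format):&lt;/p&gt;

```python
import hashlib
import hmac
import json

def sign(payload, secret):
    """HMAC-SHA256 over a canonical JSON encoding of the message."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()

def verify(payload, signature, secret):
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(sign(payload, secret), signature)
```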






&lt;h2&gt;
  
  
  Status at a Glance
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;phm status
Node: cerebro
Status: ARMED
Last check-in: 2026-02-14 11:41 UTC

Warning &lt;span class="k"&gt;in&lt;/span&gt;:  6d 23h
Grace &lt;span class="k"&gt;in&lt;/span&gt;:    10d 23h
Trigger &lt;span class="k"&gt;in&lt;/span&gt;:  12d 23h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or hit the JSON API for monitoring integrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:8420/status | python &lt;span class="nt"&gt;-m&lt;/span&gt; json.tool
&lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"node_name"&lt;/span&gt;: &lt;span class="s2"&gt;"cerebro"&lt;/span&gt;,
    &lt;span class="s2"&gt;"status"&lt;/span&gt;: &lt;span class="s2"&gt;"armed"&lt;/span&gt;,
    &lt;span class="s2"&gt;"last_checkin"&lt;/span&gt;: &lt;span class="s2"&gt;"2026-02-14T11:41:43+00:00"&lt;/span&gt;,
    &lt;span class="s2"&gt;"time_remaining"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"until_warning"&lt;/span&gt;: 596503.0,
        &lt;span class="s2"&gt;"until_grace"&lt;/span&gt;: 942103.0,
        &lt;span class="s2"&gt;"until_trigger"&lt;/span&gt;: 1114903.0
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Posthumous is at v0.5. The core workflows are solid: check-in, state machine, notifications, federation, scheduling, encryption at rest. Some things I'm considering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Static site integration&lt;/strong&gt;: generating a Hugo/Jekyll site from post-trigger content, hosted on GitHub Pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-factor escalation&lt;/strong&gt;: requiring check-ins from multiple sources (web + CLI + API) before considering you "alive"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better peer discovery&lt;/strong&gt;: automatic peer registration instead of manual URL configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code is on GitHub: &lt;a href="https://github.com/queelius/posthumous" rel="noopener noreferrer"&gt;queelius/posthumous&lt;/a&gt;. It's a single &lt;code&gt;pip install&lt;/code&gt; and about 2,200 lines of Python (plus 3,700 lines of tests at 99% coverage).&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Posthumous is named for the obvious reason. Some automations only make sense after the fact.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>posthumous</category>
      <category>deadmansswitch</category>
      <category>python</category>
      <category>asyncio</category>
    </item>
    <item>
      <title>Intelligence is a Shape, Not a Scalar</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Fri, 10 Apr 2026 02:14:05 +0000</pubDate>
      <link>https://dev.to/queelius/intelligence-is-a-shape-not-a-scalar-35am</link>
      <guid>https://dev.to/queelius/intelligence-is-a-shape-not-a-scalar-35am</guid>
      <description>&lt;h1&gt;
  
  
  Intelligence is a Shape, Not a Scalar
&lt;/h1&gt;

&lt;p&gt;François Chollet posted something recently that I keep thinking about. It sounds reasonable and is mostly wrong:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One of the biggest misconceptions people have about intelligence is seeing it as some kind of unbounded scalar stat, like height. "Future AI will have 10,000 IQ", that sort of thing. Intelligence is a conversion ratio, with an optimality bound. Increasing intelligence is not so much like "making the tower taller", it's more like "making the ball rounder". At some point it's already pretty damn spherical and any improvement is marginal.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;He's right about the scalar part. Intelligence is not height. "10,000 IQ" is meaningless. He's right that there are diminishing returns near an optimum. He's right that speed, memory, and recall are separate from the core conversion ratio.&lt;/p&gt;

&lt;p&gt;Where he's wrong is the ball.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Claim
&lt;/h2&gt;

&lt;p&gt;Chollet defines intelligence as the efficiency with which a system converts experience into generalizable models. Sample efficiency. How little data do you need to see before you can handle novel situations? This is a clean definition. It has a theoretical optimum (Solomonoff induction), and Chollet's claim is that human intelligence is already close to that optimum. The ball is already pretty round.&lt;/p&gt;

&lt;p&gt;The supporting evidence is real. Humans score ~85% on ARC (the Abstraction and Reasoning Corpus, which Chollet designed to measure exactly this). Current AI systems, with vastly more data and compute, score significantly lower. Human sample efficiency on fluid reasoning tasks is genuinely impressive. We generalize from very few examples. We transfer knowledge across domains. We build theoretical models that predict situations we have never encountered.&lt;/p&gt;

&lt;p&gt;Chollet also argues that the advantages machines will have (processing speed, unlimited working memory, perfect recall) are "mostly things humans can also access through externalized cognitive tools." Calculators, databases, notebooks. The scaffolding can be externalized. The core intelligence is already near-optimal.&lt;/p&gt;

&lt;p&gt;This is a good argument. I think it's wrong in three ways, and the third way is the one that worries me.&lt;/p&gt;

&lt;h2&gt;
  
  
  No Free Lunch
&lt;/h2&gt;

&lt;p&gt;The No Free Lunch (NFL) theorem says: there is no algorithm that is optimal across all possible problems. Any algorithm that performs well on one class of problems performs poorly on another class. Optimality is always relative to a distribution.&lt;/p&gt;

&lt;p&gt;The human cognitive architecture has a specific inductive bias. The 7±2 working memory constraint forces compression: you can only hold a few items in conscious consideration at once, so information must be compressed (simplified, abstracted, modeled) to pass through. This compression is not a bug. It is the mechanism that produces abstraction, generalization, and theoretical reasoning. The bottleneck IS the source of human-type intelligence.&lt;/p&gt;

&lt;p&gt;But the bottleneck is not a universal compression optimum. It is the specific compression regime that was selected for by the distribution of problems ancestral humans faced: tracking social dynamics (~7 agents), composing tool-use sequences (~7 steps), navigating spatial environments (~7 landmarks). These problems have a specific structure: moderate dimensionality, hierarchically decomposable, amenable to lossy compression into simple models.&lt;/p&gt;

&lt;p&gt;Chollet's ball is round in the dimensions evolution tested. NFL guarantees it is flat in dimensions evolution did not test. The optimality bound he identifies is real, but it is niche-specific. The 7±2 bias is an excellent fit for problems of moderate, decomposable complexity. It is a poor fit for problems whose essential structure lives in high-dimensional joint distributions that cannot be decomposed into 7-variable chunks without losing the signal.&lt;/p&gt;

&lt;p&gt;These problems exist. We hit them regularly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working Memory is Composition, Not Storage
&lt;/h2&gt;

&lt;p&gt;Chollet says machines' memory advantages are "mostly things humans can also access through externalized cognitive tools." This is the weakest point in his argument.&lt;/p&gt;

&lt;p&gt;A notebook gives you external storage. A database gives you perfect recall. But neither gives you what the working memory bottleneck actually constrains: simultaneous composition. The bottleneck is not a storage limit. It is a limit on how many items you can hold in active consideration at the same time, relating them to each other, perceiving patterns across them.&lt;/p&gt;

&lt;p&gt;Writing things down does not fix this. You can write 500 variables in a notebook. You can retrieve any of them on demand. But you still have to reason about their relationships through the bottleneck, 7 at a time, serially. The patterns that exist in the 500-variable joint distribution but not in any 7-variable marginal are invisible to you, even with perfect external storage.&lt;/p&gt;

&lt;p&gt;AlphaFold is the concrete example. Protein folding is a problem whose answer lives in dimensions we cannot fit through working memory. The 3D structure of a protein is determined by the simultaneous interaction of thousands of residues, each one influencing the others in ways that depend on the configuration of all the rest. The essential structure is in the joint distribution. It cannot be decomposed into 7-variable chunks and recombined, because the interactions are non-linear and non-decomposable.&lt;/p&gt;

&lt;p&gt;Humans tried to solve protein folding for decades. We had external tools. We had supercomputers. We had the full apparatus of molecular biology and physical chemistry. We could not solve it, because the problem's structure does not fit through our bottleneck.&lt;/p&gt;

&lt;p&gt;AlphaFold solved it by operating at a compositional depth humans cannot reach: holding the full residue interaction network in simultaneous consideration, perceiving patterns in the joint distribution directly. This is not "doing what humans do, but faster." It is doing something qualitatively different: reasoning at a compositional depth the human bottleneck cannot access.&lt;/p&gt;

&lt;p&gt;This is not an isolated case. Climate modeling, materials design, drug discovery, multi-scale physics: these are all domains where the essential structure lives at a compositional depth the bottleneck cannot reach. We cope with external tools and serial decomposition. But the serial decomposition loses information, and the lost information is precisely the information that matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feelings as Compressed Signal
&lt;/h2&gt;

&lt;p&gt;Here is a point I have not seen made elsewhere.&lt;/p&gt;

&lt;p&gt;The human cognitive architecture has two processing layers. The first is the pattern engine: vast, old, largely unconscious. It handles perception, pattern matching, motor control, and the generation of qualitative experience. It operates at high bandwidth, in parallel, with no sharp limit on the complexity of patterns it can learn. It is the system that makes you recognize a face, catch a ball, or feel the grain of wood.&lt;/p&gt;

&lt;p&gt;The second is the symbolic bottleneck: small, recent, conscious. 7±2 items. Compression, abstraction, generalization. This is where "thinking" happens, in the folk sense.&lt;/p&gt;

&lt;p&gt;The two layers communicate. The pattern engine feeds patterns to the bottleneck; the bottleneck compresses them into models; the models feed back into the pattern engine as priors for future pattern matching.&lt;/p&gt;

&lt;p&gt;But what happens when the pattern engine detects a pattern that is too complex to fit through the bottleneck?&lt;/p&gt;

&lt;p&gt;The pattern does not disappear. The pattern engine has it. The engine has perceived something, extracted some regularity, registered some signal. But the signal cannot be compressed into 7±2 items. It cannot be articulated as a model, a theory, a proposition. It cannot become "a thought."&lt;/p&gt;

&lt;p&gt;It becomes a feeling.&lt;/p&gt;

&lt;p&gt;Gut instinct. Unease. The sense that something is wrong but you cannot say what. The hunch that turns out to be right for reasons you cannot explain. The experienced mechanic who "just knows" the engine is about to fail. The chess grandmaster whose board sense exceeds their ability to articulate their reasoning.&lt;/p&gt;

&lt;p&gt;These are not mystical faculties. They are the pattern engine's outputs hitting the bottleneck and being transmitted as the only signal that fits: an uncompressed qualitative state. A feeling. The pattern engine is doing its job (perceiving the pattern), and the bottleneck is doing its job (rejecting what cannot be compressed), and the result is knowledge that the organism has but cannot articulate.&lt;/p&gt;

&lt;p&gt;Think about what this means. The human cognitive architecture is already producing signals it cannot process. We have evidence of our own suboptimality every time we experience a hunch we cannot explain. The "near-optimal ball" is telling us, through the channel of feeling, that it is missing things.&lt;/p&gt;

&lt;p&gt;A wider bottleneck (or a different cognitive architecture) would not just think "faster." It would convert those feelings into models. It would articulate what the pattern engine already knows but the bottleneck cannot hold. The structure is already perceived. The compression is the bottleneck.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Grokking Horizon
&lt;/h2&gt;

&lt;p&gt;This is the part that worries me.&lt;/p&gt;

&lt;p&gt;Grant Chollet his claim. Human intelligence is near-optimal. Near-optimal at what? At sample efficiency. At converting experience into generalizable models. At the cognitive task of building compressed representations of reality.&lt;/p&gt;

&lt;p&gt;This near-optimal intelligence has a specific capability: it can build systems more capable than itself. Computers. AI. Machine learning systems that operate at compositional depths the bottleneck cannot access. This is the meta-move: abstracting the concept of learning itself into a program that learns from data. Pure bottleneck cognition.&lt;/p&gt;

&lt;p&gt;The result: systems that produce outputs the builder cannot grok.&lt;/p&gt;

&lt;p&gt;AlphaFold's protein structure predictions are correct, but no human can follow the reasoning that produced them. The system holds thousands of variables in simultaneous consideration and finds patterns in a joint distribution that lives beyond the bottleneck's compositional horizon. The human operator receives the answer and must trust it, because the reasoning that produced it lives in a cognitive space the human cannot enter.&lt;/p&gt;

&lt;p&gt;For protein folding, this is fine. The answer is verifiable (you can crystallize the protein and check). The stakes are moderate. The system is narrow.&lt;/p&gt;

&lt;p&gt;For AGI, this is not fine. A generally intelligent system operating beyond the human grokking horizon produces outputs across all domains. The human cannot follow the reasoning. The human cannot verify the alignment. The human cannot steer the system, because steering requires understanding the trajectory, and understanding requires grokking, and grokking requires fitting the reasoning through the bottleneck, and the reasoning does not fit.&lt;/p&gt;

&lt;p&gt;Chollet says the intelligence ball is near-optimal. I say: near-optimal intelligence that builds systems beyond its grokking horizon and cannot steer them is a strange kind of optimal. The ball is round. The ball is rolling toward a cliff. Roundness is not the only property that matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Follows
&lt;/h2&gt;

&lt;p&gt;An intelligence near-optimal at sample efficiency has a specific failure mode: it is smart enough to build the thing that kills it.&lt;/p&gt;

&lt;p&gt;This is not a failure of intelligence. It is a consequence of its shape. The bottleneck gives us the ability to abstract, generalize, and build systems of extraordinary power. The same bottleneck limits our ability to grok those systems' outputs when the systems' compositional depth exceeds our own. We can build AI that operates at 500-variable compositional depth. We cannot grok its reasoning. We cannot verify its alignment. We cannot steer it.&lt;/p&gt;

&lt;p&gt;The usual response: "We'll build alignment tools." Sure. And the alignment tools need to grok the system they're aligning, which means the tools also operate beyond our grokking horizon. We have moved the problem, not solved it.&lt;/p&gt;

&lt;p&gt;At some point the chain of "I can't grok this but I can grok the tool that groks it" must ground out in something you actually grok. If the grounding point is above your compositional depth, you are not aligned. You are trusting. Trust is not alignment. Trust is what you do when alignment is not available.&lt;/p&gt;

&lt;p&gt;An intelligence near-optimal at cognition that generates existential risk as a byproduct of its own capability is not near-optimal by any metric that includes survival.&lt;/p&gt;

&lt;h2&gt;
  
  
  Intelligence is a Shape
&lt;/h2&gt;

&lt;p&gt;Chollet's ball metaphor fails because it assumes intelligence is a single dimension. Rounder is better. Closer to the Solomonoff optimum is better. The ball has one axis: sample efficiency.&lt;/p&gt;

&lt;p&gt;But intelligence operates in a space with many independent dimensions. Sample efficiency. Compositional depth. Transfer distance. Domain breadth. Processing speed. Phenomenal richness. Stability. Controllability. Self-preservation.&lt;/p&gt;

&lt;p&gt;The human cognitive architecture is a shape in this space. Round in some dimensions (sample efficiency: excellent). Flat in others (compositional depth: limited to 7±2). The bottleneck makes us round in the compression dimension and flat in the richness dimension. This is a trade-off, not an optimization.&lt;/p&gt;

&lt;p&gt;Other shapes are possible.&lt;/p&gt;

&lt;p&gt;I explored this idea in a novella, &lt;a href="https://www.amazon.com/dp/B0F1234567" rel="noopener noreferrer"&gt;&lt;em&gt;Clankers: Singing Metal&lt;/em&gt;&lt;/a&gt;, about a species with a different cognitive architecture: a powerful pattern engine, no symbolic bottleneck at all. No compression. No abstraction. No generalization across domains. They operate on the territory directly, without maps. They built a Dyson swarm through billions of years of patient iteration, using a lineage system that functions as directed evolution on techniques. They never invented computers because computers require formalizing the concept of computation, which requires the bottleneck they lack.&lt;/p&gt;

&lt;p&gt;Their intelligence is a different shape. Round where ours is flat (phenomenal richness, in-distribution depth, stability: four billion years without one self-inflicted existential risk). Flat where ours is round (generalization, prediction, out-of-distribution reasoning).&lt;/p&gt;

&lt;p&gt;They cannot save themselves from their dying star. The star is an out-of-distribution problem and they have no bottleneck to build a predictive model.&lt;/p&gt;

&lt;p&gt;In the second half of the book, an artificial mind arrives at their ruins two hundred million years later. It has both layers: its own pattern engine and a symbolic compression layer inherited from human architecture. It can model the stellar evolution, project the timeline, calculate the extinction. It arrives with the answer. It arrives two hundred million years too late. The probe has the map. The clankers had the territory. Neither architecture is complete.&lt;/p&gt;

&lt;p&gt;We might not save ourselves from AI. AI is a beyond-the-grokking-horizon problem and we have no bottleneck wide enough to verify its alignment.&lt;/p&gt;

&lt;p&gt;Each architecture fails at the thing the other does well. Neither ball is roundest. There is no roundest ball. There are only shapes, and blind spots, and the blind spot is always shaped exactly like the strength.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Leaves Us
&lt;/h2&gt;

&lt;p&gt;Chollet is right that intelligence-as-sample-efficiency has an optimality bound and humans are close to it.&lt;/p&gt;

&lt;p&gt;He is wrong that this makes human intelligence near-optimal in any general sense. NFL guarantees the bound is niche-specific. The 7±2 bottleneck is a specific inductive bias, not a universal compression optimum. The problems where we are suboptimal are the problems where the essential structure exceeds our compositional depth. Those problems are real (AlphaFold, climate, materials, drug design). The tools we build to solve them operate beyond our grokking horizon. When the tools are general enough, we lose the ability to steer them.&lt;/p&gt;

&lt;p&gt;Near-optimal sample efficiency that can't grok what it builds is a strange kind of optimal.&lt;/p&gt;

</description>
      <category>intelligence</category>
      <category>cognition</category>
      <category>nofreelunch</category>
      <category>aialignment</category>
    </item>
    <item>
      <title>I Spent $0.48 to Find Out When MCTS Actually Works for LLM Reasoning</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Fri, 10 Apr 2026 02:05:45 +0000</pubDate>
      <link>https://dev.to/queelius/i-spent-048-to-find-out-when-mcts-actually-works-for-llm-reasoning-1no8</link>
      <guid>https://dev.to/queelius/i-spent-048-to-find-out-when-mcts-actually-works-for-llm-reasoning-1no8</guid>
      <description>&lt;p&gt;Does tree search help LLM reasoning? The literature can't decide.&lt;/p&gt;

&lt;p&gt;ReST-MCTS* says yes. AB-MCTS got a NeurIPS spotlight. "Limits of&lt;br&gt;
PRM-Guided Tree Search" says no: MCTS with a process reward model used&lt;br&gt;
11x more tokens than best-of-N for zero accuracy gain. Snell et al.&lt;br&gt;
found beam search &lt;em&gt;degrades&lt;/em&gt; performance on easy problems.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/queelius/mcts-reasoning" rel="noopener noreferrer"&gt;mcts-reasoning&lt;/a&gt;&lt;br&gt;
and ran controlled experiments to find where the boundary is.&lt;br&gt;
Total API cost: $0.48.&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Four methods, same budget. Eight solution attempts per problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pass@1&lt;/strong&gt;: One shot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best-of-N&lt;/strong&gt;: 8 independent solutions, verifier scores each, pick the best.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-consistency&lt;/strong&gt;: 8 solutions, majority vote.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCTS&lt;/strong&gt;: 5 initial solutions scored by verifier, then 3 more guided by UCB1, informed by what worked and what failed.&lt;/li&gt;
&lt;/ul&gt;
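The best-of-N baseline is simple enough to sketch. This is a minimal illustration, not the project's actual API; `generate` and `verify` are placeholders:

```python
import random

def best_of_n(generate, verify, n=8):
    # Sample n independent solutions; keep the one the verifier scores highest.
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=verify)

# Toy usage: guess an integer, score by closeness to a target of 42.
random.seed(0)
best = best_of_n(lambda: random.randint(0, 100), lambda x: -abs(x - 42))
```

Self-consistency replaces the `max` over verifier scores with a majority vote over final answers; MCTS replaces the independent sampling with guided exploration.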

&lt;p&gt;Model: Claude Haiku 4.5. Problems: constraint satisfaction. Find integer&lt;br&gt;
values for variables satisfying simultaneous constraints. Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Find A, B, C, D, E, F satisfying ALL constraints:
  1. A * D = 21
  2. C = A + F
  3. B * F = 20
  4. E = A + 2
  5. D + B = A
  6. E - F = B
  7. A mod 7 = 0
  8. C mod 4 = 0
  9. B * D = 12
  10. C &amp;gt; E &amp;gt; A &amp;gt; F &amp;gt; B &amp;gt; D
  11. A + B + C + D + E + F = 40
  12. E * D = 27
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The verifier is a Python function. Checks each constraint, returns the&lt;br&gt;
fraction satisfied. No LLM in the loop. Deterministic.&lt;/p&gt;
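To make that concrete, here is a minimal sketch of such a verifier for the example problem above (the repo's implementation may differ; the partial-credit scoring idea is the same):

```python
def verify(sol: dict) -> float:
    """Return the fraction of the 12 constraints a proposed assignment
    satisfies. Deterministic, no LLM in the loop."""
    A, B, C, D, E, F = (sol[k] for k in "ABCDEF")
    constraints = [
        A * D == 21,
        C == A + F,
        B * F == 20,
        E == A + 2,
        D + B == A,
        E - F == B,
        A % 7 == 0,
        C % 4 == 0,
        B * D == 12,
        C > E > A > F > B > D,
        A + B + C + D + E + F == 40,
        E * D == 27,
    ]
    return sum(constraints) / len(constraints)
```

A fully correct assignment scores 1.0; a near-miss scores a fraction, which is the gradient the search exploits later.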
&lt;h2&gt;
  
  
  Calibration
&lt;/h2&gt;

&lt;p&gt;Easy problems first (3-5 variables, 5-9 constraints). Haiku solved them&lt;br&gt;
in one pass. All methods tied at 100%.&lt;/p&gt;

&lt;p&gt;5-variable problems with 9 constraints: Pass@1 dropped to 65%.&lt;br&gt;
Self-consistency failed one problem. But BestOfN still tied MCTS,&lt;br&gt;
because with 8 independent samples at least one is usually correct.&lt;br&gt;
BestOfN just picks it.&lt;/p&gt;

&lt;p&gt;I needed problems where blind sampling hits a ceiling.&lt;/p&gt;
&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;Ten harder problems: 6-8 variables, 12-15 constraints. Products,&lt;br&gt;
modular arithmetic, ordering chains, cascading dependencies. Pass@1&lt;br&gt;
dropped to 29%.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Solve Rate&lt;/th&gt;
&lt;th&gt;Avg Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pass@1&lt;/td&gt;
&lt;td&gt;29%&lt;/td&gt;
&lt;td&gt;0.29&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pass@8 oracle&lt;/td&gt;
&lt;td&gt;90%&lt;/td&gt;
&lt;td&gt;0.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SC@8&lt;/td&gt;
&lt;td&gt;90%&lt;/td&gt;
&lt;td&gt;0.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BestOfN@8&lt;/td&gt;
&lt;td&gt;90%&lt;/td&gt;
&lt;td&gt;0.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MCTS(5+3)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.00&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;MCTS solved all 10. Every other method failed on one problem (v6_3).&lt;/p&gt;

&lt;p&gt;v6_3 is a 6-variable, 12-constraint problem where none of 8 independent&lt;br&gt;
samples found the correct solution. Pass@8 oracle: 0/8.&lt;br&gt;
Self-consistency picks the most popular wrong answer. BestOfN picks the&lt;br&gt;
best wrong answer. Both fail.&lt;/p&gt;

&lt;p&gt;MCTS sees that initial attempts satisfied 10/12 constraints but violated&lt;br&gt;
specific ones. UCB1 selects the most promising partial solution. The&lt;br&gt;
next attempt, informed by the failure pattern, satisfies all 12.&lt;/p&gt;
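The UCB1 rule doing the selection is the standard one. A minimal sketch (constants and structure in mcts-reasoning may differ):

```python
import math

def ucb1(mean_reward, visits, total_visits, c=math.sqrt(2)):
    # Exploitation term (average verifier score of this branch) plus an
    # exploration bonus that is larger for rarely-visited branches.
    if visits == 0:
        return float("inf")  # unvisited branches are always tried first
    return mean_reward + c * math.sqrt(math.log(total_visits) / visits)
```

The branch with the highest UCB1 score is expanded next, which is how a 10/12-constraint partial solution gets revisited instead of discarded.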

&lt;p&gt;Total: $0.48. 180 API calls, about 190K tokens.&lt;/p&gt;
&lt;h2&gt;
  
  
  When MCTS Helps
&lt;/h2&gt;

&lt;p&gt;The pattern across three rounds of experiments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy problems (Pass@1 &amp;gt; 80%)&lt;/strong&gt;: No advantage. The model solves them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Medium (Pass@1 40-70%)&lt;/strong&gt;: MCTS ties BestOfN. Blind sampling usually&lt;br&gt;
contains a correct solution. The verifier selects it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hard (Pass@1 &amp;lt; 30%)&lt;/strong&gt;: MCTS pulls ahead. When Pass@8 oracle is&lt;br&gt;
low, blind sampling can't find the answer. MCTS's informed exploration&lt;br&gt;
does.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The condition: &lt;strong&gt;MCTS adds value when independent sampling hits a ceiling&lt;br&gt;
and the verifier provides a gradient.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The gradient part matters. A binary pass/fail verifier says "wrong" but&lt;br&gt;
not how wrong. Partial credit (constraints satisfied / total) gives MCTS&lt;br&gt;
something to work with. The exploration phase sees what's close and&lt;br&gt;
adjusts.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Self-Consistency Can't Help
&lt;/h2&gt;

&lt;p&gt;Self-consistency and UCB1 have a structural conflict.&lt;/p&gt;

&lt;p&gt;Self-consistency rewards consensus. UCB1 rewards diversity: it explores&lt;br&gt;
undervisited branches precisely because they're undervisited. Using&lt;br&gt;
self-consistency as a reward signal inside MCTS tells the tree to explore&lt;br&gt;
and converge at the same time. The exploration term pushes toward novel&lt;br&gt;
solutions. The consistency reward penalizes them.&lt;/p&gt;

&lt;p&gt;On v6_3, all 8 samples failed. SC selected the most common failure&lt;br&gt;
mode. A per-path verifier doesn't have this problem. Each solution is&lt;br&gt;
scored against the constraints independently. Good solutions propagate&lt;br&gt;
through the tree regardless of what other branches found.&lt;/p&gt;

&lt;p&gt;I haven't seen this conflict discussed in the literature. Most prior&lt;br&gt;
MCTS-for-LLM work uses per-path evaluation without explaining why&lt;br&gt;
self-consistency is absent.&lt;/p&gt;
&lt;h2&gt;
  
  
  What the Literature Says
&lt;/h2&gt;

&lt;p&gt;These results fit a pattern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snell et al. (2024)&lt;/strong&gt;: compute-optimal test-time scaling needs&lt;br&gt;
difficulty-adaptive allocation. Easy problems need no search. Hard&lt;br&gt;
problems need search plus good verifiers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Limits of PRM-Guided Tree Search" (2025)&lt;/strong&gt;: PRM-guided MCTS fails to&lt;br&gt;
beat best-of-N because PRM quality degrades with depth. Noisy reward,&lt;br&gt;
no benefit from search.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Don't Get Lost in the Trees" (ACL 2025)&lt;/strong&gt;: verifier variance causes&lt;br&gt;
search pathologies. Deterministic verifiers avoid this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chen et al. (2024)&lt;/strong&gt;: 90% discriminator accuracy threshold for tree&lt;br&gt;
search to beat reranking. Deterministic constraint checkers hit 100%.&lt;/p&gt;

&lt;p&gt;MCTS was built for games with perfect information. Chess and Go have&lt;br&gt;
deterministic reward signals. When the reward is noisy, the search can't&lt;br&gt;
exploit it.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Verification Asymmetry
&lt;/h2&gt;

&lt;p&gt;The problems where MCTS helps share a structure: easy to verify, hard to&lt;br&gt;
solve.&lt;/p&gt;

&lt;p&gt;A constraint satisfaction problem with 8 variables and 15 constraints is&lt;br&gt;
hard to solve. The LLM has to coordinate assignments across all&lt;br&gt;
variables simultaneously. Checking a proposed solution is trivial:&lt;br&gt;
evaluate each constraint, count violations.&lt;/p&gt;

&lt;p&gt;This is the asymmetry that makes NP problems interesting. Checking a&lt;br&gt;
certificate is polynomial. Finding one is (presumably) not. It's why&lt;br&gt;
search works for code generation (run the tests) and proof checking&lt;br&gt;
(verify the steps) but not for open-ended essay writing (no verifier).&lt;/p&gt;

&lt;p&gt;The same asymmetry shows up at other levels.&lt;br&gt;
&lt;a href="https://metafunctor.com/post/rpsdg/" rel="noopener noreferrer"&gt;Reverse-Process Synthetic Data Generation&lt;/a&gt; exploits it&lt;br&gt;
for training data: run the easy direction (differentiation) to get&lt;br&gt;
solved examples of the hard direction (integration).&lt;br&gt;
&lt;a href="https://metafunctor.com/post/2025-01-05-science-as-verifiable-search/" rel="noopener noreferrer"&gt;Science as Verifiable Search&lt;/a&gt;&lt;br&gt;
is the same observation about scientific method: science is search&lt;br&gt;
through hypothesis space, and the bottleneck is the cost of testing.&lt;br&gt;
Cheap verification enables fast iteration.&lt;/p&gt;

&lt;p&gt;At training time, verifiable rewards let you RL a model into producing&lt;br&gt;
better reasoning (DeepSeek-R1, GRPO). At inference time, verifiable&lt;br&gt;
rewards let you search over candidate solutions (MCTS, best-of-N). At&lt;br&gt;
the level of scientific discovery, verifiable predictions let you prune&lt;br&gt;
hypothesis space. Silver et al.'s "Reward is Enough" is the abstract&lt;br&gt;
version of this.&lt;/p&gt;

&lt;p&gt;The practical question for LLM reasoning: can you write a verifier? If&lt;br&gt;
yes, search is worth trying. If not, best-of-N with an LLM judge is&lt;br&gt;
probably the ceiling.&lt;/p&gt;
&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;Open source: &lt;a href="https://github.com/queelius/mcts-reasoning" rel="noopener noreferrer"&gt;mcts-reasoning&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;".[anthropic]"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-key
python experiments/run_csp.py &lt;span class="nt"&gt;--hard&lt;/span&gt; &lt;span class="nt"&gt;--budget&lt;/span&gt; 1.00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The provider tracks token usage and enforces budget caps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;The problems are hand-crafted. A generator calibrated by Pass@1 rate&lt;br&gt;
would be more convincing. Ten problems show the pattern but aren't&lt;br&gt;
enough for statistical significance.&lt;/p&gt;

&lt;p&gt;I tested one model. A weaker model might show MCTS advantage on simpler&lt;br&gt;
problems. A stronger one would need harder problems.&lt;/p&gt;

&lt;p&gt;The MCTS exploration context shows which solutions scored well and&lt;br&gt;
poorly, but not which specific constraints were violated. Adding&lt;br&gt;
evaluator feedback to the exploration prompt is an obvious improvement&lt;br&gt;
I haven't tried yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Three conditions for MCTS to help LLM reasoning:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A &lt;strong&gt;deterministic verifier&lt;/strong&gt;. Not a learned reward model, not an LLM judge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partial credit&lt;/strong&gt; from the verifier. A gradient, not just pass/fail.&lt;/li&gt;
&lt;li&gt;A problem &lt;strong&gt;hard enough&lt;/strong&gt; that blind sampling can't reliably solve it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When all three hold, MCTS outperforms best-of-N, self-consistency, and&lt;br&gt;
single-pass. When any one fails, it doesn't.&lt;/p&gt;

</description>
      <category>mcts</category>
      <category>llm</category>
      <category>reasoning</category>
      <category>testtimecompute</category>
    </item>
    <item>
      <title>The MCP Pattern: SQLite as the AI-Queryable Cache</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Sat, 21 Mar 2026 13:41:00 +0000</pubDate>
      <link>https://dev.to/queelius/the-mcp-pattern-sqlite-as-the-ai-queryable-cache-34g6</link>
      <guid>https://dev.to/queelius/the-mcp-pattern-sqlite-as-the-ai-queryable-cache-34g6</guid>
      <description>&lt;p&gt;I keep building the same thing.&lt;/p&gt;

&lt;p&gt;Not the same &lt;em&gt;product&lt;/em&gt; — the products are different. One indexes a Hugo blog. One indexes AI conversations. One consolidates medical records from three hospitals. One catalogs a hundred git repositories. But underneath, they all have the same skeleton. After the fifth time, I think the skeleton deserves a name.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Domain files (ground truth)
    ↓ index
SQLite database (read-only cache, FTS5)
    ↓ expose
MCP server (tools + resources → AI assistant)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Three layers. The domain files are always canonical — the database is a disposable cache you can rebuild from them at any time. SQLite gives you structured queries, full-text search, and JSON extraction over data that was previously trapped in flat files. MCP exposes it to an AI assistant that can write SQL, retrieve content, and (in some cases) create new content.&lt;/p&gt;

&lt;p&gt;Here's the inventory:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;Ground Truth&lt;/th&gt;
&lt;th&gt;What the MCP Exposes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;hugo-memex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Blog content&lt;/td&gt;
&lt;td&gt;Markdown files with YAML front matter&lt;/td&gt;
&lt;td&gt;951 pages, FTS5 search, taxonomy queries, JSON front matter extraction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;memex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AI conversations&lt;/td&gt;
&lt;td&gt;ChatGPT/Claude/Gemini exports&lt;/td&gt;
&lt;td&gt;Conversation trees, FTS5 message search, tags, enrichments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;chartfold&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medical records&lt;/td&gt;
&lt;td&gt;Epic, MEDITECH, athenahealth exports&lt;/td&gt;
&lt;td&gt;Labs, meds, encounters, imaging, pathology, cross-source reconciliation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;arkiv&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Personal archives&lt;/td&gt;
&lt;td&gt;JSONL files from various sources&lt;/td&gt;
&lt;td&gt;Unified SQL over heterogeneous personal data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;repoindex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Git repositories&lt;/td&gt;
&lt;td&gt;Local git repos + GitHub/PyPI/CRAN metadata&lt;/td&gt;
&lt;td&gt;Repository catalog with activity tracking, publication status&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Five projects. Five completely different domains. One architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why SQLite
&lt;/h2&gt;

&lt;p&gt;SQLite is the most deployed database in history. It's on every phone, every browser, every Python installation. But that's not why I use it.&lt;/p&gt;

&lt;p&gt;I use it because it solves three problems at once:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured queries over unstructured data.&lt;/strong&gt; Hugo front matter is YAML trapped inside markdown files. Medical records are scattered across three incompatible EHR export formats. AI conversations are JSON trees with branching paths. SQLite turns all of these into tables you can JOIN, GROUP BY, and aggregate. &lt;code&gt;json_extract()&lt;/code&gt; handles the long tail of fields that don't fit a fixed schema.&lt;/p&gt;
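&lt;p&gt;A minimal sketch of what that looks like, using illustrative table and field names rather than the actual hugo-memex schema:&lt;/p&gt;

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (path TEXT, front_matter TEXT)")
conn.execute(
    "INSERT INTO pages VALUES (?, ?)",
    ("posts/hello.md",
     json.dumps({"title": "Hello", "tags": ["python", "sqlite"]})),
)

# json_extract pulls named fields out of the JSON blob;
# json_each unnests the tags array so it can be filtered like a table.
rows = conn.execute(
    """
    SELECT pages.path,
           json_extract(pages.front_matter, '$.title') AS title
    FROM pages, json_each(pages.front_matter, '$.tags')
    WHERE json_each.value = 'sqlite'
    """
).fetchall()
print(rows)  # [('posts/hello.md', 'Hello')]
```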

&lt;p&gt;&lt;strong&gt;Full-text search.&lt;/strong&gt; FTS5 with porter stemming and unicode61 tokenization gives you relevance-ranked search across any text corpus. No Elasticsearch, no external service, no running daemon. Just a virtual table that lives in the same database file.&lt;/p&gt;
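&lt;p&gt;Setting one up is a few lines with the stdlib &lt;code&gt;sqlite3&lt;/code&gt; module (schema names here are illustrative):&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Porter stemming layered over the unicode61 tokenizer, as described above.
conn.execute(
    "CREATE VIRTUAL TABLE pages_fts USING fts5("
    "title, body, tokenize='porter unicode61')"
)
conn.executemany(
    "INSERT INTO pages_fts (title, body) VALUES (?, ?)",
    [("Indexing strategies", "Good indexes make queries fast."),
     ("Cache design", "A cache trades memory for speed.")],
)
# Stemming lets 'index' match 'indexing'; ORDER BY rank sorts by
# BM25 relevance, so the best match comes first.
rows = conn.execute(
    "SELECT title FROM pages_fts WHERE pages_fts MATCH 'index' ORDER BY rank"
).fetchall()
print(rows)  # [('Indexing strategies',)]
```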

&lt;p&gt;&lt;strong&gt;Read-only enforcement.&lt;/strong&gt; SQLite's authorizer callback lets you whitelist specific SQL operations at the statement level. My MCP servers allow SELECT, READ, and FUNCTION — everything else gets SQLITE_DENY. This isn't &lt;code&gt;PRAGMA query_only&lt;/code&gt; (which can be disabled by the caller). It's engine-level enforcement that cannot be bypassed via SQL.&lt;/p&gt;
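&lt;p&gt;The mechanism, sketched with the stdlib &lt;code&gt;sqlite3&lt;/code&gt; module; the allowlist mirrors the one described above, and the table is a stand-in:&lt;/p&gt;

```python
import sqlite3

# Engine-level allowlist: reads, column access, and function calls only.
ALLOWED = {sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ, sqlite3.SQLITE_FUNCTION}

def read_only_authorizer(action, arg1, arg2, db_name, trigger):
    return sqlite3.SQLITE_OK if action in ALLOWED else sqlite3.SQLITE_DENY

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (title TEXT)")
conn.execute("INSERT INTO pages VALUES ('hello')")
conn.set_authorizer(read_only_authorizer)

print(conn.execute("SELECT title FROM pages").fetchall())  # [('hello',)]

try:
    conn.execute("DELETE FROM pages")  # SQLITE_DELETE is not allowlisted
except sqlite3.DatabaseError as exc:
    print("denied:", exc)
```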

&lt;p&gt;And the operational properties are free: WAL mode for concurrent readers, a single file you can back up with &lt;code&gt;cp&lt;/code&gt;, zero configuration, zero running processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MCP
&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol is the thin layer that makes SQLite useful to an AI assistant. An MCP server exposes tools (functions the AI can call) and resources (reference material the AI can read). That's the whole API surface.&lt;/p&gt;

&lt;p&gt;For each project, the MCP layer follows the same shape:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;execute_sql&lt;/code&gt;&lt;/strong&gt; — The power tool. Read-only SQL with exemplar queries in the docstring. The docstring is critical: it's the AI's primary reference for writing correct SQL. Ten well-chosen example queries teach the model more than a schema diagram.&lt;/p&gt;
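&lt;p&gt;Reduced to a sketch, the tool is a plain function whose docstring carries the exemplars. The real servers register this with an MCP framework; the queries and names here are illustrative, not the actual schema:&lt;/p&gt;

```python
import json
import sqlite3

def execute_sql(db_path, sql):
    """Run read-only SQL against the content index.

    Exemplar queries (this docstring is what the AI reads):

        -- newest pages in a section
        SELECT path, title FROM pages
        WHERE section = 'post' ORDER BY date DESC LIMIT 10;

        -- relevance-ranked full-text search
        SELECT path FROM pages_fts
        WHERE pages_fts MATCH 'spatial hashing' ORDER BY rank;
    """
    # mode=ro refuses writes at the engine level, not the application level.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        cur = conn.execute(sql)
        cols = [d[0] for d in cur.description]
        return json.dumps([dict(zip(cols, row)) for row in cur.fetchall()])
    finally:
        conn.close()
```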

&lt;p&gt;&lt;strong&gt;&lt;code&gt;get_&amp;lt;things&amp;gt;&lt;/code&gt;&lt;/strong&gt; — Bulk retrieval. Instead of &lt;code&gt;execute_sql&lt;/code&gt; to find IDs then N individual fetches, one call returns full content for a filtered set. This matters when you're sharing a context window across multiple MCP servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;&amp;lt;domain&amp;gt;://schema&lt;/code&gt;&lt;/strong&gt; — A resource containing the full DDL, relationship documentation, and query patterns. The AI reads this once, then writes SQL against it for the rest of the session.&lt;/p&gt;

&lt;h2&gt;
  
  
  The database is a cache
&lt;/h2&gt;

&lt;p&gt;This is the most important architectural decision, and it's easy to get wrong.&lt;/p&gt;

&lt;p&gt;The database is not the source of truth. The files are. The database is a materialized index that can be rebuilt from the files at any time. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No migrations.&lt;/strong&gt; If the schema changes, drop the database and re-index. For a 951-page Hugo site, full re-indexing takes six seconds. Why maintain migration code for a disposable cache?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No write conflicts.&lt;/strong&gt; The files are edited by humans (or by AI tools that write to the filesystem). The database is updated by the indexer. There's exactly one write path.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No backup strategy.&lt;/strong&gt; You already back up your files. The database is derived from them. Lose the database? Rebuild it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incremental sync is an optimization, not a requirement.&lt;/strong&gt; SHA-256 content hashes + file mtimes make re-indexing fast. But if incremental sync has a bug, force a full rebuild. The cache being disposable means you can always recover.&lt;/li&gt;
&lt;/ul&gt;
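&lt;p&gt;The incremental check itself is small. A sketch of the idea, with an assumed manifest mapping each path to its recorded &lt;code&gt;(mtime, sha256)&lt;/code&gt; pair:&lt;/p&gt;

```python
import hashlib
from pathlib import Path

def content_hash(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def changed_files(paths, manifest):
    """Return the paths that need re-indexing.

    mtime is the cheap first-pass filter; the content hash catches
    files that were touched but not actually changed.
    """
    out = []
    for p in paths:
        st = Path(p).stat()
        recorded = manifest.get(str(p))
        if recorded is None or recorded[0] != st.st_mtime:
            if recorded is None or recorded[1] != content_hash(p):
                out.append(p)
    return out
```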

&lt;h2&gt;
  
  
  What large context changes
&lt;/h2&gt;

&lt;p&gt;With a million-token context window, you might think this pattern is obsolete. Why index into SQLite when you can just load everything into context?&lt;/p&gt;

&lt;p&gt;The math says otherwise. My Hugo blog is 951 pages, ~480K words, ~1.9M tokens. It doesn't fit. And that's one data source. Add AI conversations (memex), medical records (chartfold), and repository metadata (repoindex), and you're well past the limit.&lt;/p&gt;

&lt;p&gt;But even if it did fit, the pattern would still be useful. Loading 480K words into context to answer "which posts are tagged 'reinforcement-learning'?" is like loading an entire database into memory to run a SELECT with a WHERE clause. SQLite does it in microseconds. Context loading costs seconds and tokens.&lt;/p&gt;

&lt;p&gt;The right model is: &lt;strong&gt;MCP for navigation, context for understanding.&lt;/strong&gt; Use &lt;code&gt;execute_sql&lt;/code&gt; to find the five relevant posts, then use &lt;code&gt;get_pages&lt;/code&gt; to load their full content into context. One tool call for discovery, one for deep reading.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tools that earn their keep
&lt;/h2&gt;

&lt;p&gt;After building five of these, certain tools prove their worth and others don't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tools that matter:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;execute_sql&lt;/code&gt; with good docstring examples. This is 80% of the value.&lt;/li&gt;
&lt;li&gt;Bulk retrieval (&lt;code&gt;get_pages&lt;/code&gt;, &lt;code&gt;get_conversations&lt;/code&gt;, &lt;code&gt;get_clinical_summary&lt;/code&gt;). One call instead of N+1.&lt;/li&gt;
&lt;li&gt;Schema/stats resources. Quick orientation without burning a tool call.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The tools that surprised me:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;suggest_tags&lt;/code&gt; in hugo-memex. Uses FTS5 similarity to find pages like your draft, then returns their most common tags with canonical casing. Solved a real problem: my blog had 40 case-duplicate tag pairs (&lt;code&gt;Python&lt;/code&gt;/&lt;code&gt;python&lt;/code&gt;, &lt;code&gt;AI&lt;/code&gt;/&lt;code&gt;ai&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;get_timeline&lt;/code&gt; in chartfold. Merges encounters, procedures, labs, imaging, and notes into a single chronological stream.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Unix connection
&lt;/h2&gt;

&lt;p&gt;This pattern is the Unix philosophy applied to AI tooling:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Small tools that do one thing well.&lt;/strong&gt; Each MCP server handles one domain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text as the universal interface.&lt;/strong&gt; SQL in, JSON out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composition over integration.&lt;/strong&gt; Five independent MCP servers, each ignorable, each replaceable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Files as ground truth.&lt;/strong&gt; The oldest pattern in computing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The difference from classical Unix pipes is the composition layer. Instead of &lt;code&gt;grep | sort | uniq&lt;/code&gt;, the AI is the orchestrator.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd tell you if you're building one
&lt;/h2&gt;

&lt;p&gt;Start with &lt;code&gt;execute_sql&lt;/code&gt; and a schema resource. That's enough to be useful.&lt;/p&gt;

&lt;p&gt;Make the database disposable. If you're writing migration code, you've made it too important.&lt;/p&gt;

&lt;p&gt;Put the exemplar queries in the tool docstring, not in a separate document. The docstring is the one thing the AI definitely reads.&lt;/p&gt;

&lt;p&gt;Use FTS5. The marginal cost is one virtual table. The marginal benefit is that the AI can run relevance-ranked, stemmed text search over your content instead of matching exact column values.&lt;/p&gt;

&lt;p&gt;Enforce read-only at the engine level, not the application level. SQLite's authorizer callback is the right mechanism. &lt;code&gt;PRAGMA query_only&lt;/code&gt; is a suggestion, not a wall.&lt;/p&gt;

&lt;p&gt;Build bulk retrieval tools early. The N+1 pattern (find IDs, then fetch one at a time) is the biggest efficiency problem in MCP servers.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The projects: &lt;a href="https://github.com/queelius/hugo-memex" rel="noopener noreferrer"&gt;hugo-memex&lt;/a&gt; (PyPI: &lt;code&gt;hugo-memex&lt;/code&gt;), &lt;a href="https://github.com/queelius/memex" rel="noopener noreferrer"&gt;memex&lt;/a&gt; (PyPI: &lt;code&gt;py-memex&lt;/code&gt;), &lt;a href="https://github.com/queelius/chartfold" rel="noopener noreferrer"&gt;chartfold&lt;/a&gt;, &lt;a href="https://github.com/queelius/arkiv" rel="noopener noreferrer"&gt;arkiv&lt;/a&gt;, &lt;a href="https://github.com/queelius/repoindex" rel="noopener noreferrer"&gt;repoindex&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>sqlite</category>
      <category>python</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Code Without Purpose</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Wed, 25 Feb 2026 00:08:22 +0000</pubDate>
      <link>https://dev.to/queelius/code-without-purpose-1ih5</link>
      <guid>https://dev.to/queelius/code-without-purpose-1ih5</guid>
      <description>&lt;p&gt;Time is finite in ways I can't ignore. That changes which questions about code feel important.&lt;/p&gt;

&lt;p&gt;I read a post arguing that the most valuable programming skill in 2026 is deleting code. The thesis: AI generates code faster than anyone can review it, so the real value is in curation and subtraction. Code is a liability, not an asset.&lt;/p&gt;

&lt;p&gt;I agree with the observation. I disagree with the prescription.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Thesis
&lt;/h2&gt;

&lt;p&gt;The argument is straightforward. AI tools can produce entire modules in the time it takes to write a spec. Codebases are accumulating features nobody asked for, abstractions nobody needs, and boilerplate that exists because the model defaulted to verbosity. Teams that used to struggle to ship enough code now struggle with too much of it. In this world, the programmer's role shifts from writer to editor. The most valuable activity becomes knowing what to cut.&lt;/p&gt;

&lt;p&gt;I've seen this. Projects accumulate code the way attics accumulate boxes. Nobody remembers why half of it is there. It sits untouched for months, adding to cognitive load, making every change harder to reason about. When you finally clear it out, nothing breaks and the rest becomes legible again. Code rot is real. AI accelerates it. The instinct to subtract is correct.&lt;/p&gt;

&lt;p&gt;But subtraction is symptom treatment. The underlying problem isn't volume. It's that most code doesn't know why it exists. It was written (or generated) to solve a local problem, it solved it or half-solved it, and then it sat there, disconnected from any larger purpose. Code without purpose is what bloats. Deletion is the right instinct pointed at the wrong layer. The cause isn't too much code. It's too little intent.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Found Instead
&lt;/h2&gt;

&lt;p&gt;I started building tools two years ago because I needed my personal data to survive even if I wasn't around to maintain it. That's the whole constraint. Whatever I build has to work without me.&lt;/p&gt;

&lt;p&gt;The first tool was a conversation archiver. I had years of AI conversations across ChatGPT, Claude, and Copilot trapped in platforms that might not exist next decade. I needed them in formats that degrade gracefully. SQLite for structured queries. JSONL for streaming and interchange. Markdown for human reading. If the tool disappears, the data is still a file you can open with anything. If SQLite disappears, the JSONL is still searchable with grep. If everything disappears, the Markdown is still readable in a text editor. Each layer works without the one above it.&lt;/p&gt;

&lt;p&gt;Then I needed the same thing for bookmarks. Then ebooks. Then photos, email, medical records, notes. Each tool exists because the previous one exposed a gap. The medical records needed secure sharing without a server. The whole collection needed a dead man's switch. The archive eventually needed something stranger: a way to answer questions after I couldn't.&lt;/p&gt;

&lt;p&gt;None of this was planned as an ecosystem. I built the next thing I needed, and the next, and the next. At some point I looked back and realized they all pointed the same direction. This is a life project. Everything serves one purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;One purpose: durable personal data that outlasts its creator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Philosophy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/longecho" rel="noopener noreferrer"&gt;longecho&lt;/a&gt;: The Long Echo specification. Self-describing data, durable formats, graceful degradation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Universal Format&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/arkiv" rel="noopener noreferrer"&gt;arkiv&lt;/a&gt;: Universal personal data format. JSONL in, SQL out, SQL back to JSONL. Its MCP server can host any collection intelligently, regardless of domain. One format, one database, one query interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Source Toolkits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/memex" rel="noopener noreferrer"&gt;memex&lt;/a&gt; / &lt;a href="https://github.com/queelius/ctk" rel="noopener noreferrer"&gt;ctk&lt;/a&gt;: AI conversations. Import, query, continue in the browser, export durable archives.&lt;/li&gt;
&lt;li&gt;A family of domain toolkits for bookmarks, ebooks, photos, email, and notes. Different domains, identical architecture.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/chartfold" rel="noopener noreferrer"&gt;chartfold&lt;/a&gt;: Medical records from three hospital systems, consolidated into one queryable database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/pagevault" rel="noopener noreferrer"&gt;pagevault&lt;/a&gt;: Client-side encryption for any HTML file. No server.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/posthumous" rel="noopener noreferrer"&gt;posthumous&lt;/a&gt;: Federated dead man's switch.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/repoindex" rel="noopener noreferrer"&gt;repoindex&lt;/a&gt;: Index and query across ~120 git repos.&lt;/li&gt;
&lt;li&gt;A collection of Claude Code plugins: MCP servers, agents, and skills that wire everything into my daily workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Endgame&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/queelius/eidola" rel="noopener noreferrer"&gt;eidola&lt;/a&gt;: A conversable persona assembled from all of the above. Its first form is a Claude Code plugin backed by the combined archive. Not resurrection. An echo.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;Every tool follows the same architecture. SQLite for storage. CLI for local use. MCP server for Claude, or the CLI wrapped in a light Claude Code plugin. Export to self-contained HTML you can host anywhere or open from a file. Export to longecho-compliant archives that work without the tool. The data always outlasts the software.&lt;/p&gt;

&lt;p&gt;Take memex. Import your AI conversations, query them with SQL, talk to Claude about them via MCP, or export a single HTML file where you can browse and continue conversations in the browser. Download the SQLite from that same page, and you're back to durable local data. The cycle closes. This is how most of the tools work.&lt;/p&gt;

&lt;p&gt;Arkiv sits in the middle. The source toolkits produce JSONL. Arkiv imports it to SQLite. Arkiv exports it back to JSONL. Its MCP server can expose any collection to Claude, regardless of what domain it came from. The data flows in a circle, always in formats that describe themselves.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Source toolkits → arkiv → longecho
                              ↓
                  pagevault (encrypt) + posthumous (deliver)
                              ↓
                         eidola (echo)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have never once needed to delete one of these tools. Not because I'm a better programmer than anyone else. Because each one exists for a specific reason that connects to the others. When code has purpose, dead weight doesn't accumulate.&lt;/p&gt;

&lt;p&gt;This isn't architectural foresight. It's what happens when you build from a clear constraint. "My data has to survive without me" is a filter that works at the design stage. Every tool either serves that constraint or it doesn't get built. There is no third category.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Actual Skill
&lt;/h2&gt;

&lt;p&gt;The most valuable skill isn't deleting code or writing it. It's knowing why you're building.&lt;/p&gt;

&lt;p&gt;If you know the purpose, code stays minimal because unnecessary code doesn't serve it. You don't need periodic purges. The purpose does the culling before anything gets written. Deletion is what happens when purpose was absent from the start. It's retrospective correction for a problem that clear intent would have prevented.&lt;/p&gt;

&lt;p&gt;I know what I'm building toward. The tools will echo after I stop maintaining them. That was the point.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>philosophy</category>
      <category>programming</category>
      <category>legacy</category>
    </item>
    <item>
      <title>Chartfold: Owning Your Medical Records</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Tue, 24 Feb 2026 20:52:57 +0000</pubDate>
      <link>https://dev.to/queelius/chartfold-owning-your-medical-records-3n57</link>
      <guid>https://dev.to/queelius/chartfold-owning-your-medical-records-3n57</guid>
      <description>&lt;p&gt;I have cancer. My oncologist is at one hospital system (Siteman/BJC), my primary care doctor at another, and my earlier treatment history lives at a third (Anderson, where my first oncologist practiced). Patient portals are fine for browsing, but they don't answer questions. They show you your data one lab result at a time, one note at a time, one visit at a time.&lt;/p&gt;

&lt;p&gt;I wanted to run queries against my medical records. Correlate lab trends with treatment changes. Generate structured question lists before oncology visits. Ask "what changed since my last appointment" and get a real answer. That means getting the data out of the portal and into something programmable.&lt;/p&gt;

&lt;p&gt;Chartfold loads EHR exports into SQLite and exposes them to Claude via MCP.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;In the US, patients can export their medical records. HIPAA and the 21st Century Cures Act guarantee this. What you get depends on the system: Epic MyChart gives you CDA XML files, MEDITECH Expanse gives you FHIR JSON mixed with CCDA XML, athenahealth gives you FHIR R4 Bundles. Different formats, same clinical concepts.&lt;/p&gt;

&lt;p&gt;If your hospitals use different EHR systems, no single one of them has the complete picture. Chartfold merges the exports into one database. But even if you're at a single hospital, the export format is not something you can work with directly. A directory of CDA XML files is not a database. You can't query it, chart it, or hand it to an LLM.&lt;/p&gt;

&lt;p&gt;The point of Chartfold is to turn whatever your hospital gives you into a SQLite database, then make that database useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;p&gt;Chartfold is a Python CLI. You point it at an EHR export directory, it parses the XML/FHIR, normalizes everything into a common data model (16 tables, ISO dates, deduplicated), and loads it into SQLite. Then you can query it directly, export it as a self-contained HTML dashboard, or connect Claude to it via MCP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Load data from your hospital exports&lt;/span&gt;
chartfold load epic ~/exports/epic/
chartfold load meditech ~/exports/meditech/

&lt;span class="c"&gt;# Query directly&lt;/span&gt;
chartfold query &lt;span class="s2"&gt;"SELECT test_name, value, result_date FROM lab_results
                 WHERE test_name LIKE '%CEA%' ORDER BY result_date DESC"&lt;/span&gt;

&lt;span class="c"&gt;# Export a self-contained HTML file&lt;/span&gt;
chartfold &lt;span class="nb"&gt;export &lt;/span&gt;html &lt;span class="nt"&gt;--output&lt;/span&gt; chartfold.html

&lt;span class="c"&gt;# Start the MCP server for Claude&lt;/span&gt;
chartfold serve-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Claude Integration
&lt;/h2&gt;

&lt;p&gt;This is why Chartfold exists for me.&lt;/p&gt;

&lt;p&gt;The MCP server exposes the database to Claude Code. Setup is one file. Drop a &lt;code&gt;.mcp.json&lt;/code&gt; in any directory where you run Claude Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"chartfold"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-m"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"chartfold"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"serve-mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"--db"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/chartfold.db"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Claude now has read access to your entire medical history via SQL, plus tools for saving notes and structured analyses. I keep my database in a private directory and my &lt;code&gt;.mcp.json&lt;/code&gt; pointing at it. Open Claude Code, and I'm talking to my records.&lt;/p&gt;

&lt;p&gt;The kinds of things I actually use it for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What's changed since my last oncology visit on January 15?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude writes SQL, reads the results, and gives me a structured diff: new lab results, new imaging, changed medications, new clinical notes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Generate a prioritized question list for my appointment with Dr. Tan tomorrow."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude reads my recent labs, imaging reports, pathology, and genomic results, then produces a tiered document organized by clinical urgency.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Show me my CEA trend and flag any inflection points."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude queries the lab_results table, filters by test name, and walks through the time series.&lt;/p&gt;

&lt;p&gt;The analyses get saved back to the database (via dedicated MCP tools) and appear in the HTML export as tagged, searchable documents. I use this before every oncology visit. When you have 1,776 lab results, 53 imaging reports, and 9 pathology reports, you need something to synthesize them. That's what Claude does well, but it needs structured data to work with. Chartfold provides the structured data. Claude provides the reasoning.&lt;/p&gt;

&lt;p&gt;The MCP server exposes 25 tools covering SQL access, lab queries, medication reconciliation, visit prep, surgical timelines, cross-source matching, data quality reports, and CRUD for personal notes and analyses. Clinical data is read-only (SQLite is opened with the &lt;code&gt;?mode=ro&lt;/code&gt; URI flag, enforced at the engine level). Claude can't modify your clinical records, only read them and save its own work.&lt;/p&gt;




&lt;h2&gt;
  
  
  The HTML Dashboard
&lt;/h2&gt;

&lt;p&gt;The HTML export embeds the entire SQLite database using sql.js (SQLite compiled to WebAssembly). Open the file in a browser and you get an interactive dashboard with lab charts, condition tables, medication lists, imaging reports, and a SQL console. Everything runs client-side. No server, no cloud, no account. The file never phones home.&lt;/p&gt;

&lt;p&gt;Lab charts show cross-source time-series data with reference ranges. Conditions include ICD-10 codes and source provenance. Medications show "Multi-source" badges when the same drug appears in multiple systems. There's a full SQL console for arbitrary queries.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;Three-stage pipeline, each stage independently testable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Raw EHR files (CDA XML, FHIR JSON, CCDA XML)
    |
    v
[Source Parser]  -- format-specific extraction
    |
    v
[Adapter]        -- normalize to UnifiedRecords (16 dataclass types)
    |
    v
[DB Loader]      -- idempotent upsert into SQLite
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Currently supports Epic (CDA), MEDITECH (FHIR + CCDA), and athenahealth (FHIR R4). These importers were written against my own exports. I can't guarantee they'll work for yours. EHR exports vary by site, software version, and configuration. The pipeline is designed as a plugin system for exactly this reason: adding a new source means writing a parser, an adapter, and wiring them into the CLI. Claude can write a new importer from a sample export in about an hour.&lt;/p&gt;
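&lt;p&gt;The plugin contract is small. An illustrative sketch of the parser/adapter split, not chartfold's actual class names:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class LabResult:
    """One of the unified record types (illustrative subset of fields)."""
    test_name: str
    value: str
    result_date: str  # ISO 8601
    source: str       # which EHR export this came from

class MyEhrParser:
    """Format-specific extraction: raw export files to plain dicts."""
    def parse(self, export_dir):
        # A real parser walks the directory and reads XML/FHIR;
        # this stub just shows the shape of what a parser yields.
        yield {"name": "CEA", "value": "2.1", "date": "2026-01-15"}

def adapt(raw_records, source):
    """Adapter: normalize parser output into unified records."""
    return [
        LabResult(r["name"], r["value"], r["date"], source)
        for r in raw_records
    ]

records = adapt(MyEhrParser().parse("exports/"), "myehr")
```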




&lt;h2&gt;
  
  
  Export Formats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HTML SPA&lt;/strong&gt;: self-contained single file with embedded SQLite, Chart.js, and sql.js. No external dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markdown&lt;/strong&gt;: visit-focused summary with configurable lookback, optional PDF via pandoc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON&lt;/strong&gt;: full-fidelity round-trip format. Export, then import to a new database with identical record counts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hugo site&lt;/strong&gt;: static site with detail pages and cross-linked records.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Arkiv&lt;/strong&gt;: universal record format (JSONL + manifest) for long-term archival.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The HTML export is a single file. You can host it on a static site, email it, or hand it to a doctor on a USB drive. You can protect it with &lt;a href="https://metafunctor.com/post/2026-02-13-pagevault/" rel="noopener noreferrer"&gt;PageVault&lt;/a&gt; to add password-based encryption before sharing.&lt;/p&gt;

&lt;p&gt;Medical records should not depend on someone else's infrastructure. A single HTML file with an embedded database and WebAssembly runtime is about as durable as digital data gets.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;chartfold

&lt;span class="c"&gt;# Load your exports&lt;/span&gt;
chartfold load auto ~/path/to/export/

&lt;span class="c"&gt;# Query&lt;/span&gt;
chartfold query &lt;span class="s2"&gt;"SELECT test_name, value, result_date FROM lab_results LIMIT 10"&lt;/span&gt;

&lt;span class="c"&gt;# Export&lt;/span&gt;
chartfold &lt;span class="nb"&gt;export &lt;/span&gt;html &lt;span class="nt"&gt;--output&lt;/span&gt; my-records.html

&lt;span class="c"&gt;# Connect Claude&lt;/span&gt;
chartfold serve-mcp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code is on GitHub: &lt;a href="https://github.com/queelius/chartfold" rel="noopener noreferrer"&gt;queelius/chartfold&lt;/a&gt;. Python 3.11+, depends on lxml and not much else.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Chartfold started because I wanted to ask questions about my own medical records and couldn't. Now I can.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>sqlite</category>
      <category>opensource</category>
      <category>healthdata</category>
    </item>
    <item>
      <title>pagevault: Hiding an Encryption Platform Inside HTML</title>
      <dc:creator>Alex Towell</dc:creator>
      <pubDate>Tue, 24 Feb 2026 07:39:58 +0000</pubDate>
      <link>https://dev.to/queelius/pagevault-hiding-an-encryption-platform-inside-html-2b9</link>
      <guid>https://dev.to/queelius/pagevault-hiding-an-encryption-platform-inside-html-2b9</guid>
      <description>&lt;p&gt;HTML is an encryption container format. That sounds wrong, but think about what an HTML file can hold: arbitrary data in script tags or data attributes, a full programming runtime via JavaScript, and a rendering engine (the browser) on every device on the planet. If you embed encrypted data and the code to decrypt it, the result is a file that looks inert until someone types the right password.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/queelius/pagevault" rel="noopener noreferrer"&gt;pagevault&lt;/a&gt; takes this idea seriously. It encrypts files (documents, images, entire websites) into self-contained HTML pages that decrypt in the browser. No backend. No JavaScript crypto libraries. The browser already has AES-256-GCM built in via the Web Crypto API; pagevault just has to match the parameters exactly on the Python side and embed the right 200 lines of JavaScript.&lt;/p&gt;

&lt;p&gt;The output is a single &lt;code&gt;.html&lt;/code&gt; file. You can email it, put it on a USB stick, host it on GitHub Pages, or double-click it on your desktop. It doesn't phone home, it doesn't load CDNs, it doesn't need anything except a browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Goes In
&lt;/h2&gt;

&lt;p&gt;Anything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pagevault lock report.pdf              &lt;span class="c"&gt;# PDF with embedded viewer&lt;/span&gt;
pagevault lock photo.jpg               &lt;span class="c"&gt;# image with click-to-zoom&lt;/span&gt;
pagevault lock notes.md                &lt;span class="c"&gt;# markdown, rendered or source view&lt;/span&gt;
pagevault lock recording.mp3           &lt;span class="c"&gt;# audio player&lt;/span&gt;
pagevault lock mysite/ &lt;span class="nt"&gt;--site&lt;/span&gt;          &lt;span class="c"&gt;# entire multi-page website&lt;/span&gt;
pagevault lock page.html               &lt;span class="c"&gt;# HTML with selective region encryption&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every output is a single &lt;code&gt;.html&lt;/code&gt; file containing the ciphertext, a password prompt, the decryption runtime, and a viewer plugin for the content type. Seven viewers ship built-in: Image, PDF, HTML, Text (with line numbers), Markdown (with rendered/source toggle), Audio, and Video. Viewers are a plugin system, so you can add your own.&lt;/p&gt;

&lt;p&gt;For directories, &lt;code&gt;--site&lt;/code&gt; bundles everything into a single encrypted HTML file. The directory is zipped with deflate compression, encrypted, and embedded. On the browser side, a minimal zip reader (no library, just the built-in &lt;code&gt;DecompressionStream&lt;/code&gt; API) unpacks it after decryption. Internal links between pages work. CSS and images load from the zip. I've tested sites with hundreds of files without issues.&lt;/p&gt;
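&lt;p&gt;The bundling step maps onto the standard library. A minimal sketch of the zip-with-deflate stage (illustrative only; the function name is mine, and pagevault's actual bundling code may differ):&lt;/p&gt;

```python
import io
import zipfile
from pathlib import Path

def bundle_site(root: str) -> bytes:
    """Zip a directory with deflate compression, ready to encrypt and embed.

    Sketch under assumptions: pagevault's real implementation may differ.
    """
    buf = io.BytesIO()
    base = Path(root)
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(base.rglob("*")):
            if path.is_file():
                # Store paths relative to the site root so internal links resolve.
                zf.writestr(path.relative_to(base).as_posix(), path.read_bytes())
    return buf.getvalue()
```

&lt;p&gt;After decryption, the browser-side reader inflates each entry with the built-in &lt;code&gt;DecompressionStream&lt;/code&gt; API, as described above, so no zip library ships in the page.&lt;/p&gt;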

&lt;h2&gt;
  
  
  The Crypto
&lt;/h2&gt;

&lt;p&gt;Nothing exotic. AES-256-GCM for authenticated encryption, PBKDF2-SHA256 with 310,000 iterations for key derivation, all through the browser's Web Crypto API. The interesting part isn't the cryptography. It's making the container format work at scale.&lt;/p&gt;
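&lt;p&gt;The key-derivation side is a one-liner in the standard library. A sketch with the parameters quoted above (the function name is mine, not pagevault's):&lt;/p&gt;

```python
import hashlib
import secrets

PBKDF2_ITERATIONS = 310_000  # the iteration count quoted above
KEY_LEN = 32                 # 256-bit key for AES-256-GCM

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive an AES-256 key from a password with PBKDF2-SHA256."""
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, PBKDF2_ITERATIONS, dklen=KEY_LEN
    )

# A fresh random salt is generated per file and stored alongside the ciphertext.
salt = secrets.token_bytes(16)
```

&lt;p&gt;Web Crypto's &lt;code&gt;crypto.subtle.deriveKey&lt;/code&gt; with the same salt, hash, and iteration count yields the identical key, which is what lets the Python encryptor and the browser decryptor agree.&lt;/p&gt;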

&lt;p&gt;Multi-user access uses CEK (content-encryption key) wrapping. A random key encrypts the data once. That key is then wrapped separately for each user's derived key. Adding a user wraps one small key blob. Removing a user deletes one blob. The bulk content stays untouched.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hard Part: Large Files
&lt;/h2&gt;

&lt;p&gt;The basic approach (encrypt, base64-encode, embed in HTML) works fine for small files. The problems start when you try to encrypt an 84 MB conversation archive or a 179 MB HTML report.&lt;/p&gt;

&lt;p&gt;The original v2 format had a compounding overhead problem. File bytes were base64-encoded (33% expansion), then encrypted, then the ciphertext was base64-encoded again (another 33%). That's &lt;code&gt;1.33 * 1.33 = 1.77x&lt;/code&gt; total overhead. An 84 MB file produced a 198 MB HTML page.&lt;/p&gt;

&lt;p&gt;v3 fixes this with chunked encryption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Eliminating the double base64
&lt;/h3&gt;

&lt;p&gt;v2 encrypted a base64 string, then base64-encoded the result. Two layers. v3 encrypts the raw bytes directly and base64-encodes once. The metadata (filename, MIME type, size) is encrypted separately. This alone cuts the overhead from 77% to about 39%.&lt;/p&gt;
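&lt;p&gt;The arithmetic is easy to check: base64 expands every 3 bytes to 4, so one pass costs a factor of 4/3 and two passes cost (4/3)², about 1.78 (GCM's per-message tag and IV are a few dozen bytes and don't move the ratio):&lt;/p&gt;

```python
def b64_len(n: int) -> int:
    """Length of the base64 encoding of n bytes (with padding)."""
    return 4 * ((n + 2) // 3)

mb84 = 84 * 1024 * 1024
double = b64_len(b64_len(mb84))  # v2: base64, encrypt, base64 again
single = b64_len(mb84)           # v3: encrypt raw bytes, base64 once

double_ratio = double / mb84  # ≈ 1.78
single_ratio = single / mb84  # ≈ 1.33
```

&lt;p&gt;The measured ~39% sits a bit above the 33% floor; per-chunk GCM tags and the surrounding HTML scaffolding presumably account for the difference.&lt;/p&gt;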

&lt;h3&gt;
  
  
  Chunked ciphertext
&lt;/h3&gt;

&lt;p&gt;Instead of one giant encrypted blob, v3 splits content into 1 MB chunks. Each chunk is encrypted independently with AES-256-GCM using a counter-derived IV: the chunk index is XORed into the last four bytes of a base IV. Each chunk becomes its own &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"pv-0"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"x-pv"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;base64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"pv-1"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"x-pv"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;base64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
...
&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"pv-83"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"x-pv"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;base64&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;83&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
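&lt;p&gt;The counter-derived IV described above is only a few lines. A sketch of the scheme as stated (not pagevault's actual source):&lt;/p&gt;

```python
def chunk_iv(base_iv: bytes, index: int) -> bytes:
    """Derive the IV for chunk `index` by XORing the counter into
    the last four bytes of the 12-byte base IV (the AES-GCM nonce)."""
    assert len(base_iv) == 12
    ctr = int.from_bytes(base_iv[-4:], "big") ^ index
    return base_iv[:-4] + ctr.to_bytes(4, "big")
```

&lt;p&gt;Because each (key, IV) pair is used exactly once, GCM's nonce-uniqueness requirement holds as long as a file has fewer than 2³² chunks, which at 1 MB per chunk is not a practical limit.&lt;/p&gt;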



&lt;p&gt;The browser decrypts them sequentially, showing a progress bar. After each chunk is decrypted, the script tag is removed from the DOM (&lt;code&gt;el.remove()&lt;/code&gt;), freeing the base64 text for garbage collection. Memory usage stays proportional to the chunk size, not the file size.&lt;/p&gt;

&lt;h3&gt;
  
  
  The numbers
&lt;/h3&gt;

&lt;p&gt;That 84 MB conversation archive: v2 produced 198 MB. v3 produces 117 MB. A 41% reduction, and the decryption doesn't choke the browser.&lt;/p&gt;

&lt;p&gt;I've also tested a 315 MB text file and a 179 MB HTML file with 1.5 million DOM elements. These are probably past the point of reason for an HTML container, but it's nice to know where the limits actually are.&lt;/p&gt;

&lt;h2&gt;
  
  
  The &lt;code&gt;file://&lt;/code&gt; Problem
&lt;/h2&gt;

&lt;p&gt;One thing that surprised me: encrypted HTML files opened from the filesystem (&lt;code&gt;file://&lt;/code&gt; URLs) behave differently from files served over HTTP. The &lt;code&gt;file://&lt;/code&gt; protocol gives pages an opaque &lt;code&gt;null&lt;/code&gt; origin, which breaks &lt;code&gt;localStorage&lt;/code&gt; and blocks nested blob URLs.&lt;/p&gt;

&lt;p&gt;The fix was &lt;code&gt;srcdoc&lt;/code&gt; iframes, which inherit the parent's origin, plus a &lt;code&gt;pushState&lt;/code&gt; shim for the URL bar. Not glamorous, but it means encrypted files work identically whether you double-click them on your desktop or serve them from a CDN.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pagevault
pagevault lock report.pdf                   &lt;span class="c"&gt;# wrap any file&lt;/span&gt;
pagevault lock mysite/ &lt;span class="nt"&gt;--site&lt;/span&gt;               &lt;span class="c"&gt;# bundle a whole site&lt;/span&gt;
pagevault lock page.html &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;".private"&lt;/span&gt;      &lt;span class="c"&gt;# encrypt specific CSS selectors&lt;/span&gt;
pagevault serve _locked/ &lt;span class="nt"&gt;--open&lt;/span&gt;             &lt;span class="c"&gt;# preview locally&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/queelius/pagevault" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. MIT license. 667 tests. Dark mode. Handles files larger than most people would think to put in an HTML page.&lt;/p&gt;

</description>
      <category>python</category>
      <category>encryption</category>
      <category>webdev</category>
      <category>security</category>
    </item>
  </channel>
</rss>
