<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jonathan West</title>
    <description>The latest articles on DEV Community by Jonathan West (@unrealjon).</description>
    <link>https://dev.to/unrealjon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3760487%2F91a28204-6100-4bf2-b5d5-2266628fcca4.jpg</url>
      <title>DEV Community: Jonathan West</title>
      <link>https://dev.to/unrealjon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/unrealjon"/>
    <language>en</language>
    <item>
      <title>Distributed Transform Domain Representation (DTDR) for persistent storage of semantically-encoded high-dimensional data</title>
      <dc:creator>Jonathan West</dc:creator>
      <pubDate>Fri, 13 Feb 2026 13:36:54 +0000</pubDate>
      <link>https://dev.to/unrealjon/distributed-transform-domain-representation-for-persistent-storage-of-semantically-encoded-28d2</link>
      <guid>https://dev.to/unrealjon/distributed-transform-domain-representation-for-persistent-storage-of-semantically-encoded-28d2</guid>
      <description>&lt;p&gt;I’ve been experimenting with a representation I call DTDR (Distributed Transform-Domain Representation).The idea is to store vectors or model parameters as quantised coefficients of a structured orthogonal transform and treat that representation as the persistent form rather than a preprocessing step.&lt;br&gt;
Some interesting behaviours seem to emerge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;similarity relationships are often still usable directly in coefficient space (sketched below)&lt;/li&gt;
&lt;li&gt;approximate nearest-neighbour search sometimes improves due to coefficient dilution&lt;/li&gt;
&lt;li&gt;quantised representations can be reconstructed into standard FP tensors for normal inference&lt;/li&gt;
&lt;li&gt;the stored form often admits further lossless compression&lt;/li&gt;
&lt;/ul&gt;
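
&lt;p&gt;To make the idea concrete, here is a rough sketch of the kind of encode/decode path I mean, using a DCT as the orthogonal transform and int8 scalar quantisation. This is illustrative only: the transform choice, the scale factor, and the toy data are placeholders rather than the actual implementation linked below.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# DTDR-style sketch (illustrative): quantised DCT coefficients as the persistent form.
import zlib
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
raw = rng.standard_normal(512)
# Toy "embedding" with some smoothness, so the transform has structure to exploit.
x = np.convolve(raw, np.ones(16) / 16.0, mode="same").astype(np.float32)
y = (x + 0.02 * rng.standard_normal(512)).astype(np.float32)  # a near neighbour

SCALE = 32.0  # quantisation step; arbitrary here, tuned per dataset in practice

def encode(v):
    c = dct(v, norm="ortho")                     # structured orthogonal transform
    q = np.clip(np.round(c * SCALE), -127, 127)  # lossy scalar quantisation
    return q.astype(np.int8)                     # this is the stored form

def decode(q):
    return idct(q.astype(np.float32) / SCALE, norm="ortho")  # back to FP for inference

def cosine(a, b):
    a = a.astype(np.float32)
    b = b.astype(np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

qx, qy = encode(x), encode(y)
print("cosine, original space:   ", round(cosine(x, y), 4))
print("cosine, coefficient space:", round(cosine(qx, qy), 4))  # close, since the DCT is orthogonal
print("max reconstruction error: ", float(np.abs(decode(qx) - x).max()))
print("stored bytes:", qx.nbytes, " after zlib:", len(zlib.compress(qx.tobytes())))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the transform is orthogonal it preserves inner products, so cosine similarity in coefficient space matches the original space up to quantisation error, which is essentially why coefficient-space search stays usable.&lt;/p&gt;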

&lt;p&gt;I’m trying to understand whether this is just another way of viewing existing quantisation techniques, or whether it has genuinely different implications for storage/search pipelines.&lt;/p&gt;

&lt;p&gt;Constructive critique very welcome — especially pointers to prior work I may have missed.&lt;br&gt;
Code and demos: &lt;a href="https://github.com/UnrealJon/" rel="noopener noreferrer"&gt;https://github.com/UnrealJon/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>similarity</category>
      <category>vectordatabase</category>
    </item>
  </channel>
</rss>
