<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Will badr</title>
    <description>The latest articles on DEV Community by Will badr (@walebadr).</description>
    <link>https://dev.to/walebadr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3810745%2Fd3bd8c49-cf9e-4ad6-a7a5-7ff44fcaae09.jpg</url>
      <title>DEV Community: Will badr</title>
      <link>https://dev.to/walebadr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/walebadr"/>
    <language>en</language>
    <item>
      <title>I Built a Database That Never Forgets — Here's Why</title>
      <dc:creator>Will badr</dc:creator>
      <pubDate>Sat, 07 Mar 2026 00:26:51 +0000</pubDate>
      <link>https://dev.to/walebadr/tensordb-ai-native-bitemporal-ledger-database-with-full-sql-and-postgresql-wire-protocol-3eal</link>
      <guid>https://dev.to/walebadr/tensordb-ai-native-bitemporal-ledger-database-with-full-sql-and-postgresql-wire-protocol-3eal</guid>
      <description>&lt;p&gt;Last year, a financial services team I was working with had a nightmare scenario: a regulatory audit required them to prove exactly what their system showed on a specific Tuesday six months ago. Not what the data &lt;em&gt;currently&lt;/em&gt; says. What it said &lt;em&gt;then&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Their production Postgres had the current state. Their audit table had some breadcrumbs. Their application logs were partially rotated. Reconstructing the answer took two engineers three weeks of forensic archaeology through backups, WAL archives, and prayer.&lt;/p&gt;

&lt;p&gt;This is the problem that drove me to build &lt;a href="https://github.com/tensor-db/TensorDB" rel="noopener noreferrer"&gt;TensorDB&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With UPDATE
&lt;/h2&gt;

&lt;p&gt;Here's what most databases do when you update a row:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The old value is gone. Destroyed. Overwritten. If you need history, you build it yourself — trigger-based audit tables, event sourcing patterns, CDC pipelines feeding into a data lake. You end up with a Rube Goldberg machine of infrastructure just to answer &lt;em&gt;"what was this value last week?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bitemporal databases solve this at the storage layer.&lt;/strong&gt; Every write is an immutable fact. Nothing is ever overwritten or deleted. The database tracks two independent timelines for every record:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Timeline&lt;/th&gt;
&lt;th&gt;What it tracks&lt;/th&gt;
&lt;th&gt;Example question&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;System time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;When the database &lt;em&gt;recorded&lt;/em&gt; this fact&lt;/td&gt;
&lt;td&gt;"What did our system show last Tuesday?"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Business time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;When this fact was &lt;em&gt;true in the real world&lt;/em&gt;
&lt;/td&gt;
&lt;td&gt;"What was the contract price on Jan 1?"&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The distinction matters more than you'd think. A bank discovers today that a transaction from January had the wrong amount. With a bitemporal model, you correct the business-time record while preserving the system-time history of what you &lt;em&gt;previously believed&lt;/em&gt;. Both truths coexist. Auditors can see both.&lt;/p&gt;
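&lt;p&gt;The mechanics are easy to sketch. Here is a toy Python model of the two timelines (illustrative only, not TensorDB's actual storage format or API):&lt;/p&gt;

```python
from dataclasses import dataclass

# Toy bitemporal store: every assertion is an immutable fact tagged with
# system time (when the database recorded it) and business/valid time
# (when it was true in the real world). Nothing is ever overwritten.
@dataclass(frozen=True)
class Fact:
    key: str
    value: float
    valid_from: int   # business time the fact takes effect
    system_at: int    # system time the fact was recorded

class BitemporalStore:
    def __init__(self):
        self.facts = []          # append-only log of facts

    def assert_fact(self, key, value, valid_from, system_at):
        self.facts.append(Fact(key, value, valid_from, system_at))

    def as_of(self, key, system_time, valid_time):
        """What did we believe at `system_time` about the value at `valid_time`?"""
        visible = [f for f in self.facts
                   if f.key == key
                   and f.system_at <= system_time
                   and f.valid_from <= valid_time]
        # Latest business-time fact wins; ties broken by latest recording.
        return max(visible, key=lambda f: (f.valid_from, f.system_at)).value

db = BitemporalStore()
db.assert_fact("txn:17", 100.0, valid_from=1, system_at=10)  # January booking
db.assert_fact("txn:17", 95.0,  valid_from=1, system_at=50)  # correction, discovered later

assert db.as_of("txn:17", system_time=20, valid_time=5) == 100.0  # what we believed then
assert db.as_of("txn:17", system_time=60, valid_time=5) == 95.0   # the corrected view now
```

&lt;p&gt;Both queries run against the same append-only log: the correction never destroys the record of the earlier belief.&lt;/p&gt;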




&lt;h2&gt;
  
  
  See It in 30 Seconds
&lt;/h2&gt;

&lt;p&gt;You can have TensorDB running in under a minute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;tensordb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensordb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PyDatabase&lt;/span&gt;

&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PyDatabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create a table and insert data
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE TABLE accounts (id INT, owner TEXT, balance REAL)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INSERT INTO accounts VALUES (1, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Alice&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, 10000)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Update the balance
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;UPDATE accounts SET balance = 7500 WHERE id = 1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Time-travel: what was Alice's balance BEFORE the update?
&lt;/span&gt;&lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT * FROM accounts FOR SYSTEM_TIME ALL WHERE id = 1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# → Both versions: the 10000 AND the 7500, with timestamps
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No configuration. No schema migration for audit columns. No background workers. The history is automatic.&lt;/p&gt;

&lt;p&gt;Or if you prefer Rust:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo add tensordb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;tensordb&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;Database&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"./mydb"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="nf"&gt;.sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"CREATE TABLE events (id INT PRIMARY KEY, type TEXT, amount REAL)"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="nf"&gt;.sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"INSERT INTO events VALUES
    (1, 'deposit', 1000),
    (2, 'withdrawal', 250),
    (3, 'deposit', 500)"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// What did the ledger look like at any point in time?&lt;/span&gt;
&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;snapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="nf"&gt;.sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"SELECT * FROM events AS OF SYSTEM TIME '2026-03-07 12:00:00'"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Why Should You Care?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  It's Fast. Really Fast.
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;TensorDB&lt;/th&gt;
&lt;th&gt;SQLite (WAL)&lt;/th&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Point read&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;276 ns&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~400 ns&lt;/td&gt;
&lt;td&gt;1.4x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Point write&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.9 µs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~15 µs&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8x faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Batch insert (10k rows)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;18 ms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~35 ms&lt;/td&gt;
&lt;td&gt;2x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't synthetic benchmarks on a tuned cluster. This is single-node, embedded, with full durability guarantees. The write path uses lock-free atomic CAS — no mutexes, no channels, no actor messages on the hot path.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Speaks PostgreSQL
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start the server&lt;/span&gt;
tensordb-server &lt;span class="nt"&gt;--data-dir&lt;/span&gt; ./mydb &lt;span class="nt"&gt;--port&lt;/span&gt; 5433

&lt;span class="c"&gt;# Connect with literally anything that speaks Postgres&lt;/span&gt;
psql &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-p&lt;/span&gt; 5433 &lt;span class="nt"&gt;-d&lt;/span&gt; mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your existing tools work — psql, pgAdmin, DBeaver, SQLAlchemy, Prisma, any Postgres driver. You get standard SQL plus temporal queries that Postgres doesn't natively support:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Standard SQL&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="nb"&gt;REAL&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'acme'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;9999&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;RETURNING&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Temporal queries (the superpower)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="k"&gt;SYSTEM&lt;/span&gt; &lt;span class="nb"&gt;TIME&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-15'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="n"&gt;SYSTEM_TIME&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-01'&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="s1"&gt;'2026-03-01'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="k"&gt;VALID&lt;/span&gt; &lt;span class="k"&gt;AT&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="s1"&gt;'2026-02-15'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  It Embeds in Your Binary
&lt;/h3&gt;

&lt;p&gt;No daemon process. No Docker container. No ops overhead. One function call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Database&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"./path"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ship the database &lt;em&gt;inside&lt;/em&gt; your application. Ideal for edge deployments, CLI tools, desktop apps, or anywhere a full Postgres deployment is overkill.&lt;/p&gt;

&lt;h3&gt;
  
  
  The SQL Surface Is Complete
&lt;/h3&gt;

&lt;p&gt;This isn't a toy query language. It's a full SQL engine with a hand-written recursive descent parser, cost-based query planner, and vectorized execution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DDL/DML:&lt;/strong&gt; &lt;code&gt;CREATE TABLE&lt;/code&gt;, &lt;code&gt;ALTER TABLE&lt;/code&gt;, &lt;code&gt;INSERT ... ON CONFLICT&lt;/code&gt; (upsert), &lt;code&gt;UPDATE ... RETURNING&lt;/code&gt;, &lt;code&gt;DELETE ... RETURNING&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Queries:&lt;/strong&gt; JOINs (inner, left, right, full outer, cross), subqueries, CTEs (including &lt;code&gt;WITH RECURSIVE&lt;/code&gt;), window functions, &lt;code&gt;GROUP BY&lt;/code&gt;/&lt;code&gt;HAVING&lt;/code&gt;, &lt;code&gt;UNION&lt;/code&gt;/&lt;code&gt;INTERSECT&lt;/code&gt;/&lt;code&gt;EXCEPT&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Types:&lt;/strong&gt; &lt;code&gt;INTEGER&lt;/code&gt;, &lt;code&gt;REAL&lt;/code&gt;, &lt;code&gt;TEXT&lt;/code&gt;, &lt;code&gt;BOOLEAN&lt;/code&gt;, &lt;code&gt;DATE&lt;/code&gt;, &lt;code&gt;TIMESTAMP&lt;/code&gt;, &lt;code&gt;INTERVAL&lt;/code&gt;, &lt;code&gt;JSON&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Functions:&lt;/strong&gt; 50+ built-in (string, numeric, date/time, aggregate, window)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced:&lt;/strong&gt; foreign keys, materialized views, triggers, user-defined functions, generated columns, JSON operators (&lt;code&gt;-&amp;gt;&lt;/code&gt;, &lt;code&gt;-&amp;gt;&amp;gt;&lt;/code&gt;, &lt;code&gt;@&amp;gt;&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the error messages are actually helpful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR T2001: Table "ordres" not found. Did you mean "orders"?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  How It Works Under the Hood
&lt;/h2&gt;

&lt;p&gt;For those who like to understand the machinery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Immutable Key Encoding
&lt;/h3&gt;

&lt;p&gt;Every record gets this internal key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user_key || 0x00 || commit_ts (8B big-endian) || kind (1B)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;user_key&lt;/code&gt; prefix means prefix scans retrieve all versions. Big-endian timestamps give chronological ordering for free. The &lt;code&gt;kind&lt;/code&gt; byte distinguishes puts from tombstones. Updates don't modify anything — they append new facts with higher timestamps.&lt;/p&gt;
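&lt;p&gt;The layout is simple enough to sketch in a few lines of Python (a toy illustration of the encoding described above, not TensorDB's actual code):&lt;/p&gt;

```python
import struct

KIND_PUT, KIND_TOMBSTONE = 0, 1

def encode_version_key(user_key: bytes, commit_ts: int, kind: int) -> bytes:
    """Internal key layout: user_key || 0x00 || commit_ts (8B big-endian) || kind (1B).
    Big-endian timestamps make raw byte order match chronological order."""
    return user_key + b"\x00" + struct.pack(">Q", commit_ts) + bytes([kind])

k1 = encode_version_key(b"accounts/1", 100, KIND_PUT)
k2 = encode_version_key(b"accounts/1", 250, KIND_PUT)

# All versions of one user key share a prefix, so a prefix scan finds them all,
# and plain bytewise comparison sorts them oldest-first.
assert k1.startswith(b"accounts/1\x00")
assert k1 < k2
```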

&lt;h3&gt;
  
  
  LSM Storage Stack
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write ─→ WAL (CRC-framed) ─→ Memtable (BTreeMap)
                                     │ flush
                                     ▼
                            L0 SSTables (sorted)
                                     │ compaction
                                     ▼
                       L1 → L2 → ... → L6
                  (LZ4 for L0-L2, Zstd for L3+)
                  (bloom filters, block cache)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Lock-free writes:&lt;/strong&gt; &lt;code&gt;AtomicU64::compare_exchange&lt;/code&gt; claims a commit timestamp, then writes directly to memtable. No locks on the hot path.&lt;/p&gt;
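&lt;p&gt;The claim loop is conceptually simple. Here is a toy Python emulation of the compare-exchange pattern (Python has no hardware CAS; the lock inside &lt;code&gt;AtomicU64&lt;/code&gt; stands in for the atomic instruction, so this is a sketch of the shape, not the performance):&lt;/p&gt;

```python
import threading

class AtomicU64:
    """Toy stand-in for Rust's AtomicU64; the lock emulates the atomic
    compare-exchange instruction (illustrative only)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_exchange(self, expected, new):
        """Atomically set to `new` iff the current value equals `expected`.
        Returns (succeeded, observed_value)."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True, expected
            return False, self._value

def claim_commit_ts(clock):
    """Spin until this writer atomically claims the next commit timestamp."""
    current = clock.load()
    while True:
        ok, observed = clock.compare_exchange(current, current + 1)
        if ok:
            return current + 1
        current = observed  # another writer won the race; retry with its value
```

&lt;p&gt;Every concurrent writer ends up with a unique, monotonically increasing timestamp without ever blocking on a shared mutex in the real (atomic) version.&lt;/p&gt;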

&lt;p&gt;&lt;strong&gt;Direct reads:&lt;/strong&gt; &lt;code&gt;ShardReadHandle&lt;/code&gt; with &lt;code&gt;parking_lot::RwLock&lt;/code&gt; bypasses shard actors entirely. This is how reads hit 276 ns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batched durability:&lt;/strong&gt; A &lt;code&gt;DurabilityThread&lt;/code&gt; coalesces WAL fsyncs across shards on a 1ms interval. Individual writes don't pay fsync cost.&lt;/p&gt;
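&lt;p&gt;Group commit is the trick here. A toy sketch of the coalescing loop (the name and the 1&amp;nbsp;ms interval mirror the description above; everything else is illustrative):&lt;/p&gt;

```python
import queue
import threading
import time

class DurabilityThread:
    """Toy group-commit loop: coalesce pending WAL flush requests and
    acknowledge them as one batch, so each individual write does not
    pay the cost of a full fsync."""
    def __init__(self, interval=0.001):
        self.pending = queue.Queue()
        self.interval = interval
        self.fsync_count = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self):
        """Called by a writer; returns an event set once its batch is durable."""
        done = threading.Event()
        self.pending.put(done)
        return done

    def _run(self):
        while not self._stop.is_set():
            time.sleep(self.interval)       # coalescing window
            batch = []
            while True:
                try:
                    batch.append(self.pending.get_nowait())
                except queue.Empty:
                    break
            if batch:
                self.fsync_count += 1       # one (simulated) fsync covers the batch
                for done in batch:
                    done.set()

    def stop(self):
        self._stop.set()
        self._thread.join()
```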

&lt;h3&gt;
  
  
  Cost-Based Query Planner
&lt;/h3&gt;

&lt;p&gt;The planner evaluates plan variants — &lt;code&gt;PointLookup&lt;/code&gt;, &lt;code&gt;IndexScan&lt;/code&gt;, &lt;code&gt;FullScan&lt;/code&gt;, &lt;code&gt;HashJoin&lt;/code&gt; — using table statistics. A learned cost model tracks actual vs. estimated cardinalities and adjusts its estimates from observed query performance.&lt;/p&gt;
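&lt;p&gt;In miniature, cost-based plan selection looks like this (toy cost constants, not TensorDB's actual model):&lt;/p&gt;

```python
def choose_plan(table_rows, predicate_selectivity, has_pk_eq, has_index):
    """Pick the cheapest plan variant from rough cost estimates.
    Costs are arbitrary illustrative units."""
    candidates = {}
    if has_pk_eq:
        candidates["PointLookup"] = 1.0                          # a single key probe
    if has_index:
        candidates["IndexScan"] = 2.0 + table_rows * predicate_selectivity
    candidates["FullScan"] = 1.0 * table_rows                    # always available
    return min(candidates, key=candidates.get)

# Equality on a primary key beats everything.
assert choose_plan(1_000_000, 0.001, has_pk_eq=True, has_index=True) == "PointLookup"
# A selective predicate makes the index pay off on a big table.
assert choose_plan(1_000_000, 0.001, has_pk_eq=False, has_index=True) == "IndexScan"
# On a tiny table with an unselective predicate, scanning wins.
assert choose_plan(100, 0.99, has_pk_eq=False, has_index=True) == "FullScan"
```

&lt;p&gt;The learned part then feeds observed cardinalities back into these estimates, nudging the constants toward reality over time.&lt;/p&gt;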




&lt;h2&gt;
  
  
  Production-Ready Features
&lt;/h2&gt;

&lt;p&gt;Things you'll need when you go beyond prototyping:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; RBAC with users/roles/permissions, row-level security policies, mTLS on pgwire, column-level AES-256-GCM encryption, encryption key rotation without downtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit:&lt;/strong&gt; SHA-256 hash-chained audit log. Every DDL and DML event is recorded in a tamper-evident chain. Run &lt;code&gt;VERIFY AUDIT LOG&lt;/code&gt; to cryptographically verify integrity.&lt;/p&gt;
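&lt;p&gt;Hash chaining is a simple construction: each entry's hash commits to the previous entry's hash, so rewriting any historical record breaks verification of everything after it. A toy sketch (illustrative, not TensorDB's log format):&lt;/p&gt;

```python
import hashlib
import json

class AuditLog:
    """Toy SHA-256 hash chain. Each entry stores (event, hash), where the
    hash covers the previous entry's hash plus this event's payload."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, event: dict):
        prev = self.entries[-1][1] if self.entries else self.GENESIS
        payload = prev + json.dumps(event, sort_keys=True)
        self.entries.append((event, hashlib.sha256(payload.encode()).hexdigest()))

    def verify(self) -> bool:
        """Recompute the chain from genesis; any tampering shows up as a mismatch."""
        prev = self.GENESIS
        for event, h in self.entries:
            payload = prev + json.dumps(event, sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != h:
                return False
            prev = h
        return True

log = AuditLog()
log.append({"op": "CREATE TABLE", "table": "orders"})
log.append({"op": "INSERT", "table": "orders", "rows": 1})
assert log.verify()

log.entries[0][0]["table"] = "payments"   # tamper with history...
assert not log.verify()                   # ...and the chain no longer verifies
```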

&lt;p&gt;&lt;strong&gt;GDPR:&lt;/strong&gt; &lt;code&gt;FORGET KEY 'user:42'&lt;/code&gt; creates a cryptographic tombstone across all versions — satisfying right-to-erasure while preserving audit log structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Observability:&lt;/strong&gt; 8 diagnostic SQL commands — &lt;code&gt;SHOW STATS&lt;/code&gt;, &lt;code&gt;SHOW SLOW QUERIES&lt;/code&gt;, &lt;code&gt;SHOW ACTIVE QUERIES&lt;/code&gt;, &lt;code&gt;SHOW STORAGE&lt;/code&gt;, &lt;code&gt;SHOW COMPACTION STATUS&lt;/code&gt;, &lt;code&gt;SHOW WAL STATUS&lt;/code&gt;, &lt;code&gt;SHOW AUDIT LOG&lt;/code&gt;, &lt;code&gt;SHOW PLAN GUIDES&lt;/code&gt;. Plus an HTTP health endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specialized engines:&lt;/strong&gt; Full-text search (BM25), time-series (bucketing, gap fill, LOCF, interpolation), vector search (HNSW + IVF-PQ), event sourcing, graph queries.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Comparison
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Postgres&lt;/strong&gt; if you need a battle-tested, general-purpose OLTP database with 30 years of production hardening and a massive extension ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try TensorDB&lt;/strong&gt; if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bitemporality is your &lt;em&gt;primary requirement&lt;/em&gt;, not an afterthought bolted on with triggers&lt;/li&gt;
&lt;li&gt;You want to embed the database directly in your application&lt;/li&gt;
&lt;li&gt;You need structurally append-only storage for compliance (not just "we log changes")&lt;/li&gt;
&lt;li&gt;Sub-microsecond embedded reads matter to you&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TensorDB is younger software. It doesn't have Postgres's ecosystem depth. But for the specific problem it solves — immutable, bitemporal, embedded storage with full SQL — it's purpose-built.&lt;/p&gt;




&lt;h2&gt;
  
  
  Get Started in 60 Seconds
&lt;/h2&gt;

&lt;p&gt;Pick your language:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Rust — embed in your binary&lt;/span&gt;
cargo add tensordb

&lt;span class="c"&gt;# Python — pip install and go&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;tensordb

&lt;span class="c"&gt;# Any language — connect via PostgreSQL protocol&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;tensordb-server
tensordb-server &lt;span class="nt"&gt;--data-dir&lt;/span&gt; ./mydb &lt;span class="nt"&gt;--port&lt;/span&gt; 5433
&lt;span class="c"&gt;# Then: psql -h localhost -p 5433&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/tensor-db/TensorDB" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — star it if you find it useful&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://tensor-db.github.io/TensorDB/" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt; — quickstart, SQL reference, architecture guide&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/tensordb/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt; — &lt;code&gt;pip install tensordb&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://crates.io/crates/tensordb" rel="noopener noreferrer"&gt;crates.io&lt;/a&gt; — &lt;code&gt;cargo add tensordb&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you're building financial systems, compliance infrastructure, audit trails, healthcare records, or anything where &lt;em&gt;the history of data matters as much as the current state&lt;/em&gt; — give it a try and tell me what you think. I read every issue and discussion on GitHub.&lt;/p&gt;

&lt;p&gt;And if it breaks, file a bug. That's how it gets better.&lt;/p&gt;

&lt;h2&gt;
  
  
  I Built a Database That Speaks SQL, Vectors, Time-Series, and Natural Language — All at Once
&lt;/h2&gt;

&lt;p&gt;Every AI application I've built in the last two years had the same architecture smell: a Postgres for the relational data, a Pinecone or Qdrant for the vectors, an Elasticsearch for full-text search, a TimescaleDB or InfluxDB for metrics, and some Kafka-based event stream gluing it all together. Five databases. Five failure modes. Five sets of credentials, schemas, backup strategies, and 3 AM pages.&lt;/p&gt;

&lt;p&gt;One night, staring at a &lt;code&gt;docker-compose.yml&lt;/code&gt; that had more services than my application had features, I asked a question that wouldn't leave me alone:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What if one database could do all of it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a lowest-common-denominator compromise. A database purpose-built for the AI era — where vectors live next to relational rows, where time-series data flows in alongside event streams, where you can ask questions in English and get SQL back, and where every single write is an immutable, auditable, time-traveling fact.&lt;/p&gt;

&lt;p&gt;That's &lt;a href="https://github.com/tensor-db/TensorDB" rel="noopener noreferrer"&gt;TensorDB&lt;/a&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  What TensorDB Actually Is
&lt;/h2&gt;

&lt;p&gt;TensorDB is an &lt;strong&gt;open-source, AI-native, multi-model database&lt;/strong&gt; written in pure Rust. It's a single embedded library — no server process, no Docker container, no JVM — that gives you:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;What You'd Normally Need&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Relational SQL (full JOINs, CTEs, window functions)&lt;/td&gt;
&lt;td&gt;PostgreSQL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vector search (HNSW, IVF-PQ, hybrid search)&lt;/td&gt;
&lt;td&gt;Pinecone / Qdrant / Weaviate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Full-text search (BM25, stemming, highlighting)&lt;/td&gt;
&lt;td&gt;Elasticsearch / Meilisearch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time-series (bucketing, gap fill, rate calculations)&lt;/td&gt;
&lt;td&gt;TimescaleDB / InfluxDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Event sourcing (aggregates, snapshots, projections)&lt;/td&gt;
&lt;td&gt;EventStoreDB / custom Kafka&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Change data capture (durable cursors, consumer groups)&lt;/td&gt;
&lt;td&gt;Debezium / Kafka Connect&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bitemporal time travel (system + business time)&lt;/td&gt;
&lt;td&gt;Custom audit layer / prayer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;All of this runs in-process. &lt;code&gt;pip install tensordb&lt;/code&gt;. That's it.&lt;/p&gt;


&lt;h2&gt;
  
  
  Show Me the Code
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Vector Search + SQL in the Same Query
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Store documents with embeddings alongside regular columns&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;INTEGER&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="n"&gt;VECTOR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;384&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Create an HNSW index for fast approximate nearest neighbors&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;VECTOR&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;idx_docs_emb&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;HNSW&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ef_construction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metric&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'cosine'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- k-NN search using the &amp;lt;-&amp;gt; distance operator&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'[0.12, 0.45, ...]'&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;distance&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;distance&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;No sidecar vector database. No syncing embeddings between services. The vectors live in the same table as your relational data, indexed by the same engine that handles your JOINs.&lt;/p&gt;
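&lt;p&gt;For intuition, here is what a cosine-metric distance operator computes under the hood (assuming the common convention of distance = 1 - cosine similarity, matching the &lt;code&gt;metric = 'cosine'&lt;/code&gt; index option above):&lt;/p&gt;

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: 0 for parallel vectors, 1 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

query = [0.12, 0.45, 0.83]
docs = {1: [0.10, 0.44, 0.85],   # nearly parallel to the query
        2: [0.90, 0.10, 0.05]}   # pointing in a different direction

# ORDER BY distance LIMIT k boils down to sorting by this value;
# the HNSW index just avoids computing it against every row.
ranked = sorted(docs, key=lambda i: cosine_distance(query, docs[i]))
assert ranked[0] == 1
```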
&lt;h3&gt;
  
  
  Hybrid Search: Vectors Meet Full-Text
&lt;/h3&gt;

&lt;p&gt;This is where it gets interesting. Combine semantic similarity with keyword relevance in a single query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- BM25 full-text index on the body column&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;FULLTEXT&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;idx_body&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Hybrid scoring: 70% vector similarity, 30% BM25 text relevance&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;HYBRID_SCORE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;-&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'[0.12, 0.45, ...]'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="k"&gt;MATCH&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'quantum computing'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="k"&gt;MATCH&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'quantum computing'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One query. Two ranking signals. Zero infrastructure.&lt;/p&gt;
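&lt;p&gt;Under the hood, a hybrid score is just a weighted blend of two signals. A minimal Python sketch of the blending step (the distance-to-similarity normalization here is an assumption for illustration, not TensorDB's documented formula):&lt;/p&gt;

```python
def hybrid_score(vector_distance, bm25_score, w_vec=0.7, w_text=0.3):
    """Blend vector similarity and BM25 relevance into one score.

    Converts distance (lower = better) into a similarity in (0, 1],
    then takes a weighted sum. The two weights should sum to 1.0.
    """
    vec_sim = 1.0 / (1.0 + vector_distance)   # map [0, inf) -> (0, 1]
    return w_vec * vec_sim + w_text * bm25_score

# A document that is both semantically close and keyword-relevant
# outranks one that is neither
assert hybrid_score(0.1, 0.9) > hybrid_score(2.0, 0.1)
```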

&lt;h3&gt;
  
  
  Natural Language to SQL
&lt;/h3&gt;

&lt;p&gt;TensorDB ships with a &lt;strong&gt;pure-Rust inference engine&lt;/strong&gt; — no Python, no external API calls, no GPU required. A bundled Qwen3 0.6B model runs in-process with constrained SQL decoding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensordb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PyDatabase&lt;/span&gt;

&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PyDatabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/mydb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE TABLE sales (id INT, product TEXT, amount REAL, region TEXT)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INSERT INTO sales VALUES (1, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Widget&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, 99.99, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;US&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;), (2, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Gadget&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, 149.99, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;EU&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Ask in English, get SQL results
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;natural_language&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s the total sales amount by region?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Internally generates: SELECT region, SUM(amount) FROM sales GROUP BY region
# Returns: [{"region": "US", "amount": 99.99}, {"region": "EU", "amount": 149.99}]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model runs a tool-calling loop: it introspects table schemas, generates SQL, executes it, and returns structured results. All in-process. All at the speed of Rust.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time Travel That Just Works
&lt;/h3&gt;

&lt;p&gt;Every write in TensorDB is an immutable fact. Nothing is ever overwritten or deleted. You get automatic time travel for free:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Insert a record&lt;/span&gt;
&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'Alice'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;-- Update it&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;7500&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- Update it again&lt;/span&gt;
&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;balance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3200&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- What was Alice's balance at any point in time?&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="k"&gt;SYSTEM&lt;/span&gt; &lt;span class="nb"&gt;TIME&lt;/span&gt; &lt;span class="s1"&gt;'2026-03-01 12:00:00'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Show me EVERY version that ever existed&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="n"&gt;SYSTEM_TIME&lt;/span&gt; &lt;span class="k"&gt;ALL&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- Returns all three versions: 10000, 7500, 3200 — with timestamps&lt;/span&gt;

&lt;span class="c1"&gt;-- Business time: when was this fact true in the real world?&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;accounts&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="n"&gt;APPLICATION_TIME&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;OF&lt;/span&gt; &lt;span class="s1"&gt;'2026-01-15'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two independent timelines — &lt;strong&gt;system time&lt;/strong&gt; (when was it recorded?) and &lt;strong&gt;business time&lt;/strong&gt; (when was it true?) — tracked automatically on every row. This is SQL:2011 bitemporal, not a hack bolted on with triggers.&lt;/p&gt;
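&lt;p&gt;The two timelines are easiest to see as two independent lookups over the same version history. A toy Python model of the semantics (not TensorDB internals):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class RowVersion:
    balance: int
    recorded_at: int   # system time: when the database learned the fact
    valid_from: int    # business time: when the fact became true

# Three versions of Alice's account, never overwritten
history = [
    RowVersion(balance=10000, recorded_at=100, valid_from=100),
    RowVersion(balance=7500,  recorded_at=200, valid_from=150),
    RowVersion(balance=3200,  recorded_at=300, valid_from=250),
]

def as_of_system_time(history, ts):
    """What did the database *show* at system time ts?"""
    visible = [v for v in history if v.recorded_at <= ts]
    return max(visible, key=lambda v: v.recorded_at) if visible else None

def as_of_business_time(history, ts):
    """What was *true in the real world* at business time ts?"""
    valid = [v for v in history if v.valid_from <= ts]
    return max(valid, key=lambda v: v.valid_from) if valid else None

# The answers can differ: at system time 250 the database still showed
# 7500, even though the fact valid at business time 250 is 3200
assert as_of_system_time(history, 250).balance == 7500
assert as_of_business_time(history, 250).balance == 3200
```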

&lt;h3&gt;
  
  
  Time-Series, Natively
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;TIMESERIES&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sensor&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt; &lt;span class="nb"&gt;REAL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucket_size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'1h'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Downsample to hourly averages&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;TIME_BUCKET&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'1h'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sensor&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Fill gaps in sparse data&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;TIME_BUCKET_GAPFILL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'1h'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;LOCF&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;AVG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;  &lt;span class="c1"&gt;-- Last Observation Carried Forward&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;
&lt;span class="k"&gt;GROUP&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Rate of change per second&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;change_per_sec&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
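&lt;p&gt;The TIME_BUCKET plus LOCF semantics are worth spelling out. A small illustrative Python sketch, assuming integer-second timestamps:&lt;/p&gt;

```python
def time_bucket(bucket_secs, ts):
    """Truncate a timestamp down to its bucket boundary."""
    return ts - (ts % bucket_secs)

def gapfill_locf(points, bucket_secs, start, end):
    """Average each bucket; fill empty buckets with the last seen value."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(time_bucket(bucket_secs, ts), []).append(value)

    filled, last = [], None
    for b in range(start, end, bucket_secs):
        if b in buckets:
            last = sum(buckets[b]) / len(buckets[b])
        filled.append((b, last))  # carries `last` forward through gaps
    return filled

# Readings at 0s and 7200s; the 3600s bucket is a gap filled by LOCF
readings = [(0, 10.0), (7200, 30.0)]
assert gapfill_locf(readings, 3600, 0, 10800) == [
    (0, 10.0), (3600, 10.0), (7200, 30.0)
]
```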



&lt;h3&gt;
  
  
  Event Sourcing, Built In
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Aggregate-centric event streams with snapshots&lt;/span&gt;
&lt;span class="c1"&gt;-- Idempotency keys prevent duplicate processing&lt;/span&gt;
&lt;span class="c1"&gt;-- Cross-aggregate queries with full SQL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Why Should You Care About Performance?
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;TensorDB&lt;/th&gt;
&lt;th&gt;SQLite (WAL)&lt;/th&gt;
&lt;th&gt;sled&lt;/th&gt;
&lt;th&gt;redb&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Point read&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;273 ns&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1,145 ns&lt;/td&gt;
&lt;td&gt;278 ns&lt;/td&gt;
&lt;td&gt;570 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Point write&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.6 us&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;41.9 us&lt;/td&gt;
&lt;td&gt;4.3 us&lt;/td&gt;
&lt;td&gt;1,392 us&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput (reads/s)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.8M&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;4.6M&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;4.2x faster reads than SQLite. 11.6x faster writes.&lt;/strong&gt; And this isn't a stripped-down key-value store — it's a full SQL engine with JOINs, window functions, and a cost-based query planner.&lt;/p&gt;

&lt;p&gt;How?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lock-free writes&lt;/strong&gt;: &lt;code&gt;AtomicU64::compare_exchange&lt;/code&gt; claims a commit timestamp, and the write lands directly in the memtable. No mutexes on the hot path.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct shard reads&lt;/strong&gt;: &lt;code&gt;ShardReadHandle&lt;/code&gt; bypasses the actor channel entirely — 273ns from function call to result.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Group-commit WAL&lt;/strong&gt;: A &lt;code&gt;DurabilityThread&lt;/code&gt; coalesces fsyncs across shards. Individual writes don't pay the fsync penalty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vectorized execution&lt;/strong&gt;: Columnar 1024-row &lt;code&gt;RecordBatch&lt;/code&gt; engine for analytical queries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional SIMD&lt;/strong&gt;: AVX2/NEON-accelerated bloom probes and checksums behind &lt;code&gt;--features simd&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The AI-Native Difference
&lt;/h2&gt;

&lt;p&gt;TensorDB doesn't just store data for AI applications. It uses AI to make itself better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Learned Cost Model
&lt;/h3&gt;

&lt;p&gt;The query planner doesn't just use static heuristics. It trains a lightweight online model from observed query performance — tracking actual vs. estimated cardinalities and adjusting its cost estimates via SGD. Your queries get faster the more you run them.&lt;/p&gt;
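&lt;p&gt;The shape of that feedback loop can be sketched in a few lines. This is a generic per-operator correction learned by SGD on log-scale cardinality error, an assumed simplification rather than TensorDB's actual model:&lt;/p&gt;

```python
import math

class LearnedCostCorrector:
    """Learn a per-operator correction factor for cardinality estimates.

    Stores a log-space correction per operator and nudges it toward the
    observed actual/estimated ratio after each query (online SGD).
    """
    def __init__(self, lr=0.1):
        self.lr = lr
        self.log_correction = {}   # operator -> learned log multiplier

    def estimate(self, operator, planner_estimate):
        c = self.log_correction.get(operator, 0.0)
        return planner_estimate * math.exp(c)

    def observe(self, operator, planner_estimate, actual):
        # Gradient step on squared log-error: move toward log(actual/est)
        c = self.log_correction.get(operator, 0.0)
        error = math.log(actual / planner_estimate) - c
        self.log_correction[operator] = c + self.lr * error

model = LearnedCostCorrector()
# The planner keeps estimating 100 rows for a scan that returns ~1000
for _ in range(200):
    model.observe("seq_scan", 100, 1000)
# After enough observations the corrected estimate approaches reality
assert 900 < model.estimate("seq_scan", 100) < 1100
```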

&lt;h3&gt;
  
  
  Write-Rate Anomaly Detection
&lt;/h3&gt;

&lt;p&gt;Per-table EMA-based statistics flag unusual write patterns in real time. A sudden 10x spike in writes to your &lt;code&gt;payments&lt;/code&gt; table? TensorDB notices. Budget: &amp;lt;500ns per write — you won't feel it.&lt;/p&gt;
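&lt;p&gt;The mechanism is a classic EMA baseline with a spike threshold. A hypothetical Python sketch of the idea (parameter names and thresholds here are assumptions, not TensorDB's):&lt;/p&gt;

```python
class WriteRateMonitor:
    """Flag write-rate spikes against an exponential moving average.

    An inter-write interval far below the EMA of recent intervals means
    the table is suddenly being written much faster than usual.
    """
    def __init__(self, alpha=0.1, spike_factor=10.0):
        self.alpha = alpha
        self.spike_factor = spike_factor
        self.ema_interval = None

    def record(self, interval_ms):
        """Record time since the previous write; return True on anomaly."""
        if self.ema_interval is None:
            self.ema_interval = interval_ms
            return False
        anomaly = interval_ms < self.ema_interval / self.spike_factor
        self.ema_interval = (self.alpha * interval_ms
                             + (1 - self.alpha) * self.ema_interval)
        return anomaly

mon = WriteRateMonitor()
for _ in range(50):           # steady traffic: one write per second
    assert not mon.record(1000)
assert mon.record(5)          # sudden burst: ~200 writes/sec is flagged
```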

&lt;h3&gt;
  
  
  Query &amp;amp; Compaction Advisors
&lt;/h3&gt;

&lt;p&gt;The engine analyzes your access patterns and recommends optimizations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;AUTO&lt;/span&gt; &lt;span class="n"&gt;TUNE&lt;/span&gt; &lt;span class="n"&gt;RECOMMENDATIONS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- "Consider adding an index on orders.customer_id (hit 847 times in scans)"&lt;/span&gt;
&lt;span class="c1"&gt;-- "Cache hit rate for 'events' table is 23% — consider increasing block_cache_bytes"&lt;/span&gt;

&lt;span class="n"&gt;SUGGEST&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="k"&gt;FOR&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'pending'&lt;/span&gt; &lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- Recommends: CREATE INDEX idx_orders_status_total ON orders (status, total)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  In-Process Inference
&lt;/h3&gt;

&lt;p&gt;The embedded Qwen3 model isn't just for NL-to-SQL. It powers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Risk scoring&lt;/strong&gt;: Inline anomaly detection per write with fast pattern matching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto insights&lt;/strong&gt;: Pattern synthesis from write streams&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL template speculation&lt;/strong&gt;: Pre-generates candidate SQL templates to reduce query latency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All running in Rust. No Python runtime. No HTTP calls. No cold starts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Production-Ready, Not a Toy
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- RBAC with granular permissions&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;USER&lt;/span&gt; &lt;span class="n"&gt;analyst&lt;/span&gt; &lt;span class="k"&gt;WITH&lt;/span&gt; &lt;span class="n"&gt;PASSWORD&lt;/span&gt; &lt;span class="s1"&gt;'secure'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;analyst&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Row-level security&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;POLICY&lt;/span&gt; &lt;span class="n"&gt;region_filter&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;
    &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;CURRENT_USER_ATTRIBUTE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'region'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;-- Column-level AES-256-GCM encryption&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;patients&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;INTEGER&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ssn&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;ENCRYPTED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;diagnosis&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;ENCRYPTED&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Plus mTLS on the PostgreSQL wire protocol, encryption key rotation without downtime, and a &lt;strong&gt;SHA-256 hash-chained audit log&lt;/strong&gt; you can cryptographically verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;VERIFY&lt;/span&gt; &lt;span class="n"&gt;AUDIT&lt;/span&gt; &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;-- { verified: 12847, broken_at: null }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
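&lt;p&gt;Hash chaining is simple to verify: each entry's hash covers the previous entry's hash, so tampering anywhere breaks every link after it. A minimal Python sketch of the principle (TensorDB's actual record layout is not shown here):&lt;/p&gt;

```python
import hashlib
import json

def chain_hash(prev_hash, entry):
    """Hash this entry together with the previous entry's hash."""
    payload = prev_hash + json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest().encode()

def append(log, entry):
    prev = log[-1][1] if log else b"genesis"
    log.append((entry, chain_hash(prev, entry)))

def verify(log):
    """Return (verified_count, index_of_first_break_or_None)."""
    prev = b"genesis"
    for i, (entry, stored) in enumerate(log):
        if chain_hash(prev, entry) != stored:
            return i, i
        prev = stored
    return len(log), None

log = []
append(log, {"op": "INSERT", "table": "accounts"})
append(log, {"op": "UPDATE", "table": "accounts"})
assert verify(log) == (2, None)

# Tamper with the first entry: the chain breaks at index 0
log[0] = ({"op": "DELETE", "table": "accounts"}, log[0][1])
assert verify(log) == (0, 0)
```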



&lt;h3&gt;
  
  
  GDPR Erasure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;FORGET&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="s1"&gt;'user:42'&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cryptographic tombstone across all temporal versions. Right-to-erasure satisfied while preserving the integrity of the append-only ledger.&lt;/p&gt;
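&lt;p&gt;One standard way to square erasure with an immutable ledger is crypto-shredding: encrypt each subject's data under its own key, then destroy the key on erasure. A toy Python sketch of the principle; the stream cipher below is purely illustrative, standing in for real AES-256-GCM:&lt;/p&gt;

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 counter-mode stream cipher -- illustration only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

# Each data subject's rows are encrypted under that subject's own key
keys = {"user:42": secrets.token_bytes(32)}
ct = keystream_xor(keys["user:42"], b"alice@example.com")
assert keystream_xor(keys["user:42"], ct) == b"alice@example.com"

# Destroying only the key leaves every temporal version in the ledger
# as unreadable ciphertext, so the append-only hash chain stays intact
del keys["user:42"]
assert "user:42" not in keys
```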

&lt;h3&gt;
  
  
  PostgreSQL Wire Protocol
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tensordb-server &lt;span class="nt"&gt;--data-dir&lt;/span&gt; ./mydb &lt;span class="nt"&gt;--port&lt;/span&gt; 5433

&lt;span class="c"&gt;# Connect with any Postgres client&lt;/span&gt;
psql &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-p&lt;/span&gt; 5433 &lt;span class="nt"&gt;-d&lt;/span&gt; mydb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;psql, pgAdmin, DBeaver, SQLAlchemy, Prisma, any Postgres driver — they all work. Your existing tools connect without changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Observability
&lt;/h3&gt;

&lt;p&gt;Eight built-in diagnostic commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;STATS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;              &lt;span class="c1"&gt;-- Cache hit/miss, operation counts&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;SLOW&lt;/span&gt; &lt;span class="n"&gt;QUERIES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;       &lt;span class="c1"&gt;-- Queries exceeding threshold&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;ACTIVE&lt;/span&gt; &lt;span class="n"&gt;QUERIES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;     &lt;span class="c1"&gt;-- Currently running&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="k"&gt;STORAGE&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;            &lt;span class="c1"&gt;-- SSTable levels, file sizes&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;COMPACTION&lt;/span&gt; &lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;-- Background compaction state&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;WAL&lt;/span&gt; &lt;span class="n"&gt;STATUS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;         &lt;span class="c1"&gt;-- Write-ahead log health&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;AUDIT&lt;/span&gt; &lt;span class="n"&gt;LOG&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;          &lt;span class="c1"&gt;-- Tamper-evident event history&lt;/span&gt;
&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;PLAN&lt;/span&gt; &lt;span class="n"&gt;GUIDES&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;        &lt;span class="c1"&gt;-- Pinned query plans&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Plus a &lt;code&gt;/health&lt;/code&gt; HTTP endpoint on the server port + 1.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complete SQL
&lt;/h3&gt;

&lt;p&gt;This isn't a half-baked query parser. It's a full SQL engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;60+ built-in functions&lt;/strong&gt;: string, numeric, date/time, aggregate, window, vector, time-series, full-text&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JOINs&lt;/strong&gt;: inner, left, right, full outer, cross&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subqueries&lt;/strong&gt;: IN, EXISTS, scalar subqueries in WHERE&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CTEs&lt;/strong&gt;: including &lt;code&gt;WITH RECURSIVE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Window functions&lt;/strong&gt;: ROW_NUMBER, RANK, DENSE_RANK, LEAD, LAG&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upserts&lt;/strong&gt;: &lt;code&gt;INSERT ... ON CONFLICT DO UPDATE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transactions&lt;/strong&gt;: &lt;code&gt;BEGIN&lt;/code&gt;, &lt;code&gt;COMMIT&lt;/code&gt;, &lt;code&gt;ROLLBACK&lt;/code&gt;, &lt;code&gt;SAVEPOINT&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data interchange&lt;/strong&gt;: &lt;code&gt;COPY TO/FROM&lt;/code&gt; CSV, JSON, Parquet. &lt;code&gt;read_csv()&lt;/code&gt;, &lt;code&gt;read_json()&lt;/code&gt;, &lt;code&gt;read_parquet()&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And when you misspell something:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR T2001: Table "ordres" not found. Did you mean "orders"?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Architecture, for the Curious
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Client Layer:  Rust API │ Python (PyO3) │ Node.js (napi-rs) │ pgwire │ CLI
                            │
                   ┌────────┴────────┐
                   │   SQL Engine     │
                   │ Parser → Planner │──→ Vectorized Execution
                   │   → Executor     │    (1024-row RecordBatch)
                   └────────┬────────┘
                            │
            ┌───────┬───────┼───────┬───────┐
            │       │       │       │       │
        Relational  FTS   Vector  TimeSeries Event
        (typed SQL) (BM25) (HNSW)  (bucket)  Sourcing
            │       │       │       │       │
            └───────┴───────┼───────┴───────┘
                            │
                   ┌────────┴────────┐
                   │  Shard Engine    │
                   │ Lock-free writes │──→ CDC (durable cursors)
                   │ Direct reads     │
                   └────────┬────────┘
                            │
                   ┌────────┴────────┐
                   │  LSM Storage     │
                   │ WAL → Memtable → │
                   │ SSTables (L0-L6) │
                   │ Bloom │ LZ4/Zstd │
                   └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Immutable key encoding&lt;/strong&gt;: &lt;code&gt;user_key || 0x00 || commit_ts (8B) || kind (1B)&lt;/code&gt; — prefix scans retrieve all versions, big-endian timestamps give chronological order for free, the kind byte distinguishes puts from tombstones.&lt;/p&gt;
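&lt;p&gt;The encoding's properties are easy to check directly. A Python sketch of the layout as described (a real implementation would also escape 0x00 bytes inside user keys):&lt;/p&gt;

```python
import struct

PUT, TOMBSTONE = 0x01, 0x02   # the trailing kind byte

def encode_key(user_key: bytes, commit_ts: int, kind: int) -> bytes:
    """user_key || 0x00 || commit_ts (8B, big-endian) || kind (1B)."""
    return user_key + b"\x00" + struct.pack(">Q", commit_ts) + bytes([kind])

k1 = encode_key(b"accounts/1", 100, PUT)
k2 = encode_key(b"accounts/1", 200, PUT)
k3 = encode_key(b"accounts/1", 300, TOMBSTONE)

# Big-endian timestamps mean plain byte order *is* chronological order
assert k1 < k2 < k3
# A single prefix scan retrieves every version of the row
assert all(k.startswith(b"accounts/1\x00") for k in (k1, k2, k3))
```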

&lt;p&gt;&lt;strong&gt;LSM-tree storage&lt;/strong&gt;: WAL (CRC-framed, group-commit) → Memtable (BTreeMap) → SSTables with bloom filters, block cache, LZ4/Zstd compression, multi-level compaction from L0 through L6.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Epoch-Ordered Append-Only Concurrency (EOAC)&lt;/strong&gt;: A single &lt;code&gt;Arc&amp;lt;AtomicU64&amp;gt;&lt;/code&gt; global epoch counter unifies transactions, MVCC, point-in-time recovery, and incremental backup under one mechanism. No two-phase locking. No undo logs.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Compares — Honestly
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;TensorDB&lt;/th&gt;
&lt;th&gt;PostgreSQL&lt;/th&gt;
&lt;th&gt;SQLite&lt;/th&gt;
&lt;th&gt;Pinecone&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Relational SQL&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vector search&lt;/td&gt;
&lt;td&gt;HNSW + IVF-PQ&lt;/td&gt;
&lt;td&gt;pgvector (extension)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Full-text search&lt;/td&gt;
&lt;td&gt;BM25 native&lt;/td&gt;
&lt;td&gt;tsvector&lt;/td&gt;
&lt;td&gt;FTS5&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bitemporal&lt;/td&gt;
&lt;td&gt;Native (SQL:2011)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time-series&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;td&gt;TimescaleDB (ext)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Embedded (no server)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NL-to-SQL&lt;/td&gt;
&lt;td&gt;In-process&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Point read latency&lt;/td&gt;
&lt;td&gt;273 ns&lt;/td&gt;
&lt;td&gt;~5 us&lt;/td&gt;
&lt;td&gt;1,145 ns&lt;/td&gt;
&lt;td&gt;Network&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Immutable ledger&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Use PostgreSQL&lt;/strong&gt; if you need 30 years of production hardening and the deepest extension ecosystem on the planet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use TensorDB&lt;/strong&gt; if you're building an AI-native application that needs vectors + SQL + time travel + search in one place, especially if you want to embed it directly in your binary.&lt;/p&gt;




&lt;h2&gt;
  
  
  Get Started in 60 Seconds
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Python&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;tensordb

&lt;span class="c"&gt;# Rust&lt;/span&gt;
cargo add tensordb

&lt;span class="c"&gt;# Any language — PostgreSQL wire protocol&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;tensordb-server
tensordb-server &lt;span class="nt"&gt;--data-dir&lt;/span&gt; ./mydb &lt;span class="nt"&gt;--port&lt;/span&gt; 5433
psql &lt;span class="nt"&gt;-h&lt;/span&gt; localhost &lt;span class="nt"&gt;-p&lt;/span&gt; 5433
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;tensordb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PyDatabase&lt;/span&gt;

&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PyDatabase&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/tmp/mydb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Relational
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE TABLE users (id INT PRIMARY KEY, name TEXT, plan TEXT)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INSERT INTO users VALUES (1, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Alice&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pro&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;), (2, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Bob&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;free&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Vector search
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CREATE TABLE embeddings (id INT PRIMARY KEY, vec VECTOR(3))&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;INSERT INTO embeddings VALUES (1, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[0.1, 0.2, 0.3]&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT id, vec &amp;lt;-&amp;gt; &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[0.15, 0.25, 0.35]&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; AS dist FROM embeddings ORDER BY dist LIMIT 5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Time travel
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;UPDATE users SET plan = &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;enterprise&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; WHERE id = 1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT * FROM users FOR SYSTEM_TIME ALL WHERE id = 1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Returns both versions: 'pro' and 'enterprise'
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
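&lt;p&gt;For intuition, here's what the distance in that vector query works out to, assuming TensorDB's &lt;code&gt;&amp;lt;-&amp;gt;&lt;/code&gt; operator follows the pgvector convention of Euclidean (L2) distance. A quick sketch in plain Python:&lt;/p&gt;

```python
import math

# The vectors from the example above: the stored embedding and the query point.
stored = [0.1, 0.2, 0.3]
query = [0.15, 0.25, 0.35]

# Euclidean (L2) distance: square root of the sum of squared component
# differences. (math.dist(stored, query) computes the same thing on 3.8+.)
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(stored, query)))
print(round(dist, 4))  # 0.0866
```

&lt;p&gt;So the &lt;code&gt;ORDER BY dist LIMIT 5&lt;/code&gt; in the query simply returns the five stored vectors whose L2 distance to the query point is smallest.&lt;/p&gt;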






&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;The AI era needs a new kind of database. Not a relational database with vectors bolted on. Not a vector database that added SQL as an afterthought. Not five databases duct-taped together with Kafka.&lt;/p&gt;

&lt;p&gt;TensorDB is built from scratch for this world: where embeddings are first-class citizens, where every fact is immutable and auditable, where you can search by meaning AND by keyword AND by SQL — and where the database itself uses machine learning to optimize its own performance.&lt;/p&gt;

&lt;p&gt;It's open source, it's written in Rust, and it fits in a single binary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/tensor-db/TensorDB" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — star it if this resonates&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://tensor-db.github.io/TensorDB/" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; — SQL reference, architecture guide, interactive playground&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://pypi.org/project/tensordb/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt; — &lt;code&gt;pip install tensordb&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://crates.io/crates/tensordb" rel="noopener noreferrer"&gt;crates.io&lt;/a&gt; — &lt;code&gt;cargo add tensordb&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building something where AI meets data — RAG pipelines, agent memory, recommendation engines, compliance systems, real-time analytics — I'd love to hear what you think. Every issue and discussion on GitHub gets a response.&lt;/p&gt;

&lt;p&gt;And if it breaks, file a bug. That's how databases grow up.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>database</category>
      <category>sql</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
