<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: aldielshala</title>
    <description>The latest articles on DEV Community by aldielshala (@aldielshala).</description>
    <link>https://dev.to/aldielshala</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3894506%2F25fa41a0-692e-46f1-a10a-55e566396994.png</url>
      <title>DEV Community: aldielshala</title>
      <link>https://dev.to/aldielshala</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aldielshala"/>
    <language>en</language>
    <item>
      <title>llm.sql - Run a 640MB LLM on SQLite, with 210MB peak RSS and 7.4 tok/s</title>
      <dc:creator>aldielshala</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:59:06 +0000</pubDate>
      <link>https://dev.to/aldielshala/llmsql-run-a-640mb-llm-on-sqlite-with-210mb-peak-rss-and-74-toks-2eej</link>
      <guid>https://dev.to/aldielshala/llmsql-run-a-640mb-llm-on-sqlite-with-210mb-peak-rss-and-74-toks-2eej</guid>
      <description>&lt;p&gt;I built llm.sql, an LLM inference framework that reimagines the LLM execution pipeline as a series of structured SQL queries atop SQLite.&lt;/p&gt;

&lt;p&gt;The motivation: Edge LLMs are getting better, but hardware remains a bottleneck, especially RAM (size and bandwidth).&lt;/p&gt;

&lt;p&gt;When available memory is smaller than the model weights plus the KV cache, the OS incurs page faults and swaps pages using LRU-like heuristics, causing throughput degradation that's hard to notice and even harder to debug. But the memory access pattern during LLM inference is deterministic: we know exactly which weights are needed and when. That means Bélády's optimal page replacement algorithm, normally impractical because it requires knowing future accesses, is actually applicable here.&lt;/p&gt;
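
&lt;p&gt;To make that concrete: Bélády's rule evicts whatever will be needed furthest in the future. Here's a minimal sketch in plain Python (illustrative only, not llm.sql's actual code; the function names are mine):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def belady_victim(resident, future):
    """Pick the resident page whose next use is farthest away (or never)."""
    def next_use(page):
        try:
            return future.index(page)
        except ValueError:
            return float("inf")  # never reused: the ideal eviction victim
    return max(resident, key=next_use)

def simulate(trace, capacity):
    resident, faults = set(), 0
    for i, page in enumerate(trace):
        if page not in resident:
            faults += 1
            if len(resident) == capacity:
                resident.discard(belady_victim(resident, trace[i + 1:]))
            resident.add(page)
    return faults

print(simulate(list("abcab"), capacity=2))  # 4 faults; LRU would take 5
&lt;/code&gt;&lt;/pre&gt;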

&lt;p&gt;So instead of letting the OS manage memory, llm.sql takes over:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Model parameters are stored in SQLite BLOB tables (see the sketch after this list)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Computational logic is implemented as SQLite C extensions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Memory management is handled explicitly, not by the OS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Zero heavy dependencies. No PyTorch, no Transformers. Just Python, C, or C++&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
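
&lt;p&gt;A rough sketch of that storage/compute split, using Python's built-in sqlite3 module. The table layout and function names here are mine, not the project's, and the real kernels are C extensions; &lt;code&gt;create_function&lt;/code&gt; is just the closest stdlib analogue:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import sqlite3
import struct

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weights (name TEXT PRIMARY KEY, data BLOB)")

def put_tensor(name, values):
    # Store a tensor as a raw float32 BLOB (native byte order).
    blob = struct.pack(f"{len(values)}f", *values)
    con.execute("INSERT INTO weights VALUES (?, ?)", (name, blob))

def dot_f32(a, b):
    # A compute kernel exposed to SQL; llm.sql does this via C extensions.
    n = len(a) // 4
    xs = struct.unpack(f"{n}f", a)
    ys = struct.unpack(f"{n}f", b)
    return sum(x * y for x, y in zip(xs, ys))

con.create_function("dot_f32", 2, dot_f32)

put_tensor("w", [1.0, 2.0, 3.0, 4.0])
put_tensor("x", [1.0, 0.0, 1.0, 0.0])
row = con.execute(
    "SELECT dot_f32(a.data, b.data) FROM weights a JOIN weights b "
    "ON a.name = 'w' AND b.name = 'x'").fetchone()
print(row[0])  # 4.0
&lt;/code&gt;&lt;/pre&gt;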

&lt;p&gt;This gives us explicit, deterministic control over what's in memory at each step of inference.&lt;/p&gt;
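
&lt;p&gt;Continuing the sqlite3 sketch above (reusing &lt;code&gt;con&lt;/code&gt; and the &lt;code&gt;weights&lt;/code&gt; table), a hypothetical residency plan might look like this, with load/evict decisions derived offline from the deterministic trace via the Bélády rule:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical layer weights (names illustrative, not llm.sql's schema).
for name in ["layer0.attn", "layer0.mlp", "layer1.attn"]:
    put_tensor(name, [0.0] * 4)

cache = {}  # weight name to BLOB bytes, managed by us rather than the OS

# Per-step residency plan, computed before decoding starts.
plan = [
    {"load": ["layer0.attn", "layer0.mlp"], "evict": []},
    {"load": ["layer1.attn"], "evict": ["layer0.attn"]},
]

for step in plan:
    for name in step["evict"]:
        cache.pop(name, None)  # drop a BLOB we know won't be needed soon
    for name in step["load"]:
        cache[name] = con.execute(
            "SELECT data FROM weights WHERE name = ?", (name,)).fetchone()[0]
&lt;/code&gt;&lt;/pre&gt;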

&lt;p&gt;Results:&lt;/p&gt;

&lt;p&gt;Running Qwen2.5-0.5B-INT8 (a ~640MB model) at 7.40 tokens/s with a peak RSS of ~210MB.&lt;/p&gt;

&lt;p&gt;An alpha version is available on GitHub: &lt;a href="https://github.com/xuxianghong12/llm.sql" rel="noopener noreferrer"&gt;https://github.com/xuxianghong12/llm.sql&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm the developer, and I'm happy to answer any technical questions about the design and implementation.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpidfp83gzkf90z3cird.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmpidfp83gzkf90z3cird.png" alt=" " width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
  </channel>
</rss>
