DEV Community

aldielshala

llm.sql - Run a 640MB LLM on SQLite, with 210MB peak RSS and 7.4 tok/s

#ai

I built llm.sql, an LLM inference framework that reimagines the LLM execution pipeline as a series of structured SQL queries atop SQLite.

The motivation: Edge LLMs are getting better, but hardware remains a bottleneck, especially RAM (size and bandwidth).

When available memory is smaller than the model weights plus KV cache, the OS incurs page faults and evicts pages with LRU-like heuristics, causing throughput degradation that is hard to notice and even harder to debug. But the memory access pattern during LLM inference is deterministic: we know exactly which weights are needed, and when. That makes Bélády's optimal page replacement algorithm, normally impractical because it requires knowing future accesses, actually applicable here.
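To make that concrete, here is a toy sketch of Bélády's policy: evict the cached item whose next use lies farthest in the future. The page names and trace below are illustrative, not taken from llm.sql:

```python
# Sketch of Bélády's optimal replacement for a fully known access sequence.

def belady_victim(cache, future):
    """Pick the cached item whose next use is farthest in the future."""
    def next_use(item):
        try:
            return future.index(item)
        except ValueError:
            return float("inf")  # never used again: the ideal victim
    return max(cache, key=next_use)

def simulate(trace, capacity):
    """Count faults when replaying `trace` with Bélády's policy."""
    cache, faults = set(), 0
    for i, item in enumerate(trace):
        if item not in cache:
            faults += 1
            if len(cache) >= capacity:
                cache.discard(belady_victim(cache, trace[i + 1:]))
            cache.add(item)
    return faults

# Layer weights touched in a fixed order on every decoding step:
trace = ["wq", "wk", "wv", "wo", "ffn"] * 3
print(simulate(trace, capacity=3))  # → 9
```

An LRU cache has no idea that `wv` will not be needed again for a whole layer pass; with the full trace in hand, the optimal policy does.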

So instead of letting the OS manage memory, llm.sql takes over:

  • Model parameters are stored in SQLite BLOB tables

  • Computational logic is implemented as SQLite C extensions

  • Memory management is handled explicitly, not by the OS

  • No heavy dependencies: no PyTorch, no Transformers. Just Python, C, and C++

This gives us explicit, deterministic control over what's in memory at each step of inference.
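A minimal sketch of the first two points, using Python's stdlib `sqlite3` in place of a C extension (the schema and function names are illustrative, not llm.sql's actual ones): weights live in a BLOB table, and a compute kernel is registered as a SQL function so the query planner can invoke it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weights (name TEXT PRIMARY KEY, data BLOB)")

# Store a tiny INT8 weight vector as raw bytes.
con.execute("INSERT INTO weights VALUES (?, ?)",
            ("layer0.wq", bytes([1, 2, 3, 4])))

def dot_i8(w_blob, x_blob):
    """Dot product over two equal-length byte blobs."""
    return sum(a * b for a, b in zip(w_blob, x_blob))

# In llm.sql the kernel is a SQLite C extension; create_function is the
# in-process Python analogue of sqlite3_create_function().
con.create_function("dot_i8", 2, dot_i8)

# One weight tensor is pulled into memory exactly when the query needs it.
(result,) = con.execute(
    "SELECT dot_i8(data, ?) FROM weights WHERE name = 'layer0.wq'",
    (bytes([1, 1, 1, 1]),)).fetchone()
print(result)  # 1+2+3+4 = 10
```

For large tensors, SQLite's incremental BLOB I/O (`sqlite3_blob_open` in C, `Connection.blobopen` in Python 3.11+) lets you read sub-ranges of a blob without materializing the whole tensor, which is what makes explicit, fine-grained memory control possible.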

Results:

Running Qwen2.5-0.5B-INT8 (~640 MB of weights) at 7.40 tokens/s with a peak RSS of ~210 MB.

Alpha version is available on GitHub: https://github.com/xuxianghong12/llm.sql

I'm the developer, happy to answer any technical questions about the design and implementation.
