<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nitay Rabinovich</title>
    <description>The latest articles on DEV Community by Nitay Rabinovich (@nitay_rabinovich_d7cc35f5).</description>
    <link>https://dev.to/nitay_rabinovich_d7cc35f5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1862772%2F999c9a2a-662c-4e56-bc26-b6d7ec59b88e.jpg</url>
      <title>DEV Community: Nitay Rabinovich</title>
      <link>https://dev.to/nitay_rabinovich_d7cc35f5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nitay_rabinovich_d7cc35f5"/>
    <language>en</language>
    <item>
      <title>Bringing RLM to TypeScript: Building rllm</title>
      <dc:creator>Nitay Rabinovich</dc:creator>
      <pubDate>Tue, 06 Jan 2026 11:45:18 +0000</pubDate>
      <link>https://dev.to/nitay_rabinovich_d7cc35f5/bringing-rlm-to-typescript-building-rllm-20p8</link>
      <guid>https://dev.to/nitay_rabinovich_d7cc35f5/bringing-rlm-to-typescript-building-rllm-20p8</guid>
      <description>&lt;p&gt;Large Language Models struggle with very large contexts. Long documents or complex data structures quickly exceed token limits or degrade reasoning when everything is placed into a single prompt.&lt;/p&gt;

&lt;p&gt;Recursive Large Language Models (RLMs) address this by letting the model generate and execute code that recursively explores and processes context. Instead of seeing all the data at once, the model learns how to navigate it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This idea was originally described in Python-focused work:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://alexzhang13.github.io/blog/2025/rlm/" rel="noopener noreferrer"&gt;https://alexzhang13.github.io/blog/2025/rlm/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/alexzhang13/rlm" rel="noopener noreferrer"&gt;https://github.com/alexzhang13/rlm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently open-sourced &lt;code&gt;rllm&lt;/code&gt;, a TypeScript implementation of the RLM approach, designed for the JavaScript ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo: &lt;a href="https://github.com/code-rabi/rllm" rel="noopener noreferrer"&gt;https://github.com/code-rabi/rllm&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Letting LLMs run code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Allowing LLMs to write and execute code is already a widespread and powerful idea. We see it in systems like code mode, tool calling, and agent frameworks. RLMs build on this foundation by making code execution the core reasoning mechanism rather than a side feature.&lt;/p&gt;

&lt;p&gt;This approach opens the door to much richer interactions with large and structured data. Instead of summarizing raw text, models can iterate over trees, filter datasets, and decompose problems dynamically.&lt;/p&gt;
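&lt;p&gt;To make that concrete, here is a hedged sketch of the &lt;em&gt;kind&lt;/em&gt; of code a model might generate to navigate structured data (this is illustrative only, not actual &lt;code&gt;rllm&lt;/code&gt; output; the &lt;code&gt;Section&lt;/code&gt; type and &lt;code&gt;findSections&lt;/code&gt; helper are invented for the example):&lt;/p&gt;

```typescript
// Hypothetical model-generated exploration code (illustrative only).
// Rather than reading every section, it filters the tree first and
// descends only into the parts relevant to the question.
type Section = { heading: string; words: number; children: Section[] };

const doc: Section = {
  heading: "Report",
  words: 120,
  children: [
    { heading: "Methods", words: 4000, children: [] },
    {
      heading: "Key Findings",
      words: 800,
      children: [{ heading: "Result A", words: 300, children: [] }],
    },
  ],
};

// Recursively collect only the sections whose heading matches.
function findSections(node: Section, match: (h: string) => boolean): Section[] {
  const hits = node.children.flatMap((c) => findSections(c, match));
  return match(node.heading) ? [node, ...hits] : hits;
}

const relevant = findSections(doc, (h) => h.toLowerCase().includes("finding"));
console.log(relevant.map((s) => s.heading)); // only "Key Findings" matches
```

&lt;p&gt;The point is that the model reasons over a cheap skeleton of the data and only pays the token cost for the branches it actually needs.&lt;/p&gt;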

&lt;p&gt;&lt;strong&gt;Why TypeScript?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rllm&lt;/code&gt; is built to work naturally in Node, Bun, and Deno environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs model-generated code in V8 isolates for sandboxed execution&lt;/li&gt;
&lt;li&gt;Uses Zod schemas to describe structured context to the model&lt;/li&gt;
&lt;li&gt;Avoids Python subprocesses or external services&lt;/li&gt;
&lt;li&gt;Fits cleanly into existing TypeScript codebases&lt;/li&gt;
&lt;/ul&gt;
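&lt;p&gt;As a rough illustration of the sandboxing idea: &lt;code&gt;rllm&lt;/code&gt; itself uses V8 isolates, but Node's built-in &lt;code&gt;vm&lt;/code&gt; module (a much weaker mechanism, shown here only to demonstrate the principle) makes the restricted-scope idea easy to see. The &lt;code&gt;generated&lt;/code&gt; string stands in for model-written code:&lt;/p&gt;

```typescript
import { createContext, runInContext } from "node:vm";

// Illustration only: rllm executes model-generated code in V8 isolates.
// node:vm is a simpler stand-in that shows the core principle:
// the generated code can see only what we explicitly expose.
const generated = 'result = context.sections.filter((s) => s.includes("find"));';

const sandbox = {
  context: { sections: ["intro", "findings", "appendix"] },
  result: [] as string[],
};

createContext(sandbox); // contextify the object so it can back a VM scope
runInContext(generated, sandbox); // run the "model-generated" snippet inside it

console.log(sandbox.result); // only "findings" survives the filter
```

&lt;p&gt;Note that &lt;code&gt;node:vm&lt;/code&gt; is explicitly not a security boundary; true isolates give the generated code its own heap, which is why a dedicated sandbox matters for this pattern.&lt;/p&gt;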

&lt;p&gt;&lt;strong&gt;Minimal example&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createRLLM&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;rllm&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rlm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createRLLM&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;rlm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;What are the key findings in this document?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;hugeDocument&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key benefit is that the model can explore &lt;code&gt;hugeDocument&lt;/code&gt; step by step rather than consuming it all at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recursive execution models like RLMs feel like a natural next step for LLM systems. As models improve at writing code, giving them safe and structured execution environments enables more scalable reasoning over large data.&lt;/p&gt;

&lt;p&gt;If you are working with LLMs in TypeScript and want to experiment with this idea, I would love your feedback.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo: &lt;a href="https://github.com/code-rabi/rllm" rel="noopener noreferrer"&gt;https://github.com/code-rabi/rllm&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>llm</category>
      <category>rag</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
