<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ContextTree</title>
    <description>The latest articles on DEV Community by ContextTree (@contexttree).</description>
    <link>https://dev.to/contexttree</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3908923%2Fb9382438-6df9-471e-9d71-a879f9046e54.png</url>
      <title>DEV Community: ContextTree</title>
      <link>https://dev.to/contexttree</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/contexttree"/>
    <language>en</language>
    <item>
      <title>I built a visual LLM canvas where every branch has its own model, prompt, and context settings</title>
      <dc:creator>ContextTree</dc:creator>
      <pubDate>Sat, 02 May 2026 11:31:08 +0000</pubDate>
      <link>https://dev.to/contexttree/i-built-a-visual-llm-canvas-where-every-branch-has-its-own-model-prompt-and-context-settings-468g</link>
      <guid>https://dev.to/contexttree/i-built-a-visual-llm-canvas-where-every-branch-has-its-own-model-prompt-and-context-settings-468g</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19biwx9z1g6v0ojw2uue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19biwx9z1g6v0ojw2uue.png" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The problem I kept hitting&lt;/h2&gt;

&lt;p&gt;Every time I went deep on a topic with ChatGPT, one tangent would poison the whole thread. You ask a follow-up question, and suddenly your entire conversation context is contaminated with an irrelevant detour. The LLM loses the plot.&lt;/p&gt;

&lt;p&gt;The standard workaround? Open a new chat. Paste context manually. Repeat. That's not a solution; that's giving up.&lt;/p&gt;

&lt;p&gt;I wanted branches — real ones. Not tabs. Not separate threads you manage yourself. Branches that inherit the right context automatically and stay isolated from each other.&lt;/p&gt;

&lt;p&gt;So I built ContextTree.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;p&gt;ContextTree is a node-based visual canvas for LLM conversations. Every message is a node. Branching is a first-class action, not a workaround.&lt;/p&gt;

&lt;p&gt;The core invariant: &lt;strong&gt;a child node only inherits its direct parent lineage — never siblings, never cousins.&lt;/strong&gt; No cross-contamination.&lt;/p&gt;
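&lt;p&gt;The invariant is easy to state in code. Here's a minimal sketch (a hypothetical &lt;code&gt;Node&lt;/code&gt; class, not ContextTree's actual implementation): context assembly is just a walk up the parent pointers, so a sibling branch can never leak into another.&lt;/p&gt;

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Node:
    """A single message node; parent is None at the root."""
    text: str
    parent: Node | None = None

def lineage(node: Node) -> list[Node]:
    """Return the root-to-node path: the only context a child may see."""
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return list(reversed(path))

# Siblings share ancestors but never see each other's messages:
root = Node("general question")
a = Node("tangent about topic A", parent=root)
b = Node("tangent about topic B", parent=root)
assert lineage(b) == [root, b]
assert a not in lineage(b)
```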

&lt;p&gt;But the feature that surprised me most during development is what each node carries independently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Its own &lt;strong&gt;LLM model&lt;/strong&gt; (GPT-4o on one branch, Gemini Flash on another)&lt;/li&gt;
&lt;li&gt;Its own &lt;strong&gt;custom system prompt&lt;/strong&gt; (scoped to that node and its children)&lt;/li&gt;
&lt;li&gt;Its own &lt;strong&gt;advanced settings&lt;/strong&gt;: temperature, max output tokens, 
history mode, last K messages, context budget in tokens, 
external context chunk count&lt;/li&gt;
&lt;/ul&gt;
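&lt;p&gt;As a rough sketch, the per-node overrides look something like this (field names mirror the list above but are illustrative, not ContextTree's real schema):&lt;/p&gt;

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class NodeSettings:
    """Per-node overrides; illustrative field names, not ContextTree's schema."""
    model: str = "gpt-4o"
    system_prompt: str | None = None   # scoped to this node and its children
    temperature: float = 0.7
    max_output_tokens: int = 1024
    history_mode: str = "last_k"       # how parent history is folded in
    last_k_messages: int = 10
    context_budget_tokens: int = 4096
    similar_context_limit: int = 3     # external context chunk count

# Forking a low-temperature summarizer branch is just an override:
summarizer = NodeSettings(model="gemini-flash", temperature=0.1,
                          system_prompt="Summarize concisely.")
```

&lt;p&gt;Forking then amounts to cloning the parent's settings and overriding a few fields on the child.&lt;/p&gt;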

&lt;p&gt;This means on one canvas you can have a general assistant node, fork into a strict legal-persona branch with a lawyer system prompt and tight context budget, then fork again into a summarizer with low temperature. Three personalities, zero interference, one visual graph.&lt;/p&gt;




&lt;h2&gt;The hardest design decision: context inheritance&lt;/h2&gt;

&lt;p&gt;The honest rule in the codebase:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A child node never reads parent live state — no shared LangGraph state, no reads of the parent's current summary after the fork moment. Each node evolves independently.&lt;/p&gt;

&lt;p&gt;However, ancestry-scoped vector search lets a child retrieve relevant snippets from any ancestor's history, capped at the fork point. Branches inherit &lt;strong&gt;knowledge&lt;/strong&gt;, not &lt;strong&gt;state&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This distinction took a while to nail. "Knowledge, not state" is the mental model that made the architecture clean. If you want hard isolation, set &lt;code&gt;SIMILAR_CONTEXT_LIMIT=0&lt;/code&gt; per node.&lt;/p&gt;
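&lt;p&gt;Here's a toy sketch of that rule (names are stand-ins, and a word-overlap score stands in for ContextTree's real vector similarity): only ancestor snippets written before the fork point are eligible, and a limit of zero gives hard isolation.&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snippet:
    node_id: str
    seq: int          # position within that node's history
    text: str

def ancestry_search(query: str, store: list[Snippet],
                    ancestors: dict[str, int],  # node_id to fork-point cap
                    limit: int) -> list[Snippet]:
    """Knowledge, not state: only ancestor snippets written before the
    fork point are eligible; limit=0 means hard isolation."""
    if limit == 0:
        return []
    eligible = [s for s in store
                if s.node_id in ancestors and ancestors[s.node_id] > s.seq]
    # Toy relevance score: word overlap stands in for vector similarity.
    def score(s: Snippet) -> int:
        return len(set(query.lower().split()).intersection(s.text.lower().split()))
    return sorted(eligible, key=score, reverse=True)[:limit]
```

&lt;p&gt;The cap on &lt;code&gt;seq&lt;/code&gt; is what makes the fork moment a hard boundary: anything an ancestor writes after you fork is invisible to the child.&lt;/p&gt;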




&lt;h2&gt;What I'm still figuring out&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prompt stack order — should users be able to reorder layers?&lt;/li&gt;
&lt;li&gt;Is per-node system prompt enough, or do people want per-node RAG 
sources pinned differently?&lt;/li&gt;
&lt;li&gt;The multi-LLM branching UX — is it obvious enough what's happening?&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Try it&lt;/h2&gt;

&lt;p&gt;Demo: &lt;a href="https://www.contexttree.tech/" rel="noopener noreferrer"&gt;ContextTree&lt;/a&gt;&lt;br&gt;
Video walkthrough: &lt;a href="https://youtu.be/AqmICcc26VI" rel="noopener noreferrer"&gt;https://youtu.be/AqmICcc26VI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built solo. Early stage. Brutal feedback welcome — especially from anyone who's built multi-agent or prompt engineering tooling.&lt;/p&gt;

</description>
      <category>conversation</category>
      <category>canvas</category>
      <category>chatgpt</category>
      <category>conversationcanvas</category>
    </item>
  </channel>
</rss>
