<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: enigmaticsloth</title>
    <description>The latest articles on DEV Community by enigmaticsloth (@enigmaticsloth).</description>
    <link>https://dev.to/enigmaticsloth</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3829536%2F9a0b6b2b-9129-48e4-8065-83dcafaf3295.jpeg</url>
      <title>DEV Community: enigmaticsloth</title>
      <link>https://dev.to/enigmaticsloth</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/enigmaticsloth"/>
    <language>en</language>
    <item>
      <title>I built an AI workspace in 4 days with zero frameworks — the LLM controls the entire UI</title>
      <dc:creator>enigmaticsloth</dc:creator>
      <pubDate>Tue, 17 Mar 2026 13:28:57 +0000</pubDate>
      <link>https://dev.to/enigmaticsloth/i-built-an-ai-workspace-in-4-days-with-zero-frameworks-the-llm-controls-the-entire-ui-8fc</link>
      <guid>https://dev.to/enigmaticsloth/i-built-an-ai-workspace-in-4-days-with-zero-frameworks-the-llm-controls-the-entire-ui-8fc</guid>
      <description>&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://enigmaticsloth.github.io/sloth-space/index.html#home" rel="noopener noreferrer"&gt;Sloth Space&lt;/a&gt; is an AI-native workspace with three modes — slides, docs, and sheets. You describe what you need in natural language, and the AI drafts it in seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source:&lt;/strong&gt; &lt;a href="https://github.com/enigmaticsloth/sloth-space" rel="noopener noreferrer"&gt;github.com/enigmaticsloth/sloth-space&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The interesting part: unified architecture&lt;/h2&gt;

&lt;p&gt;The three modes aren't three separate apps bolted together. They share one state object, one intent router, one context menu system, one save pipeline, and one interaction model (single-click select, double-click edit). The AI sees the entire app as one system, not three.&lt;/p&gt;

&lt;p&gt;~30 ES module files, pure vanilla JS, zero build step, zero frameworks.&lt;/p&gt;
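&lt;p&gt;A minimal sketch of what "one state object, one interaction model" can look like (field names here are illustrative, not the actual Sloth Space schema):&lt;/p&gt;

```javascript
// Hypothetical sketch of the single shared state object; field names are
// assumptions for illustration, not the real Sloth Space schema.
const S = {
  mode: 'doc',      // 'slides' | 'doc' | 'sheet'
  projects: [],     // one project list, visible to every mode
  files: [],
  selection: null,  // single-click select, identical in all modes
  editing: null,    // double-click edit, identical in all modes
};

// One entry point for mode switching: every mode flips the same field,
// so the AI (and the UI) never has to reconcile three separate stores.
function switchMode(next) {
  S.mode = next;
  S.selection = null; // selection semantics are shared, so the reset is too
  return S.mode;
}
```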

&lt;h2&gt;The AI controls the UI&lt;/h2&gt;

&lt;p&gt;This is the part I'm most excited about. The LLM doesn't just generate text — it operates the application interface directly.&lt;/p&gt;

&lt;p&gt;27 whitelisted JS functions are exposed to the model. It can switch modes, create projects, organize files, link documents together, and navigate — all from a single chat input.&lt;/p&gt;
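&lt;p&gt;A whitelist registry for this could be sketched roughly like so (function names and return shapes are my illustration, not the project's actual API):&lt;/p&gt;

```javascript
// Hypothetical whitelist registry: the model may only call what is listed
// here. Names and return shapes are assumptions for illustration.
const UI_ACTIONS = {
  switchMode:    (args) => ({ ok: true, mode: args.mode }),
  createProject: (args) => ({ ok: true, name: args.name }),
  linkFile:      (args) => ({ ok: true, fileId: args.fileId }),
};

function invoke(name, args) {
  // Object.hasOwn blocks prototype tricks like name = 'toString'.
  if (!Object.hasOwn(UI_ACTIONS, name)) {
    throw new Error(name + ' is not a whitelisted action');
  }
  return UI_ACTIONS[name](args);
}
```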

&lt;p&gt;Try typing: &lt;em&gt;"Create a project called Q2, write a budget report and put it in there"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The system runs this as a multi-step sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create project "Q2"&lt;/li&gt;
&lt;li&gt;Switch to doc mode&lt;/li&gt;
&lt;li&gt;Generate budget report&lt;/li&gt;
&lt;li&gt;Auto-link the doc to the project&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Monet-orange overlay shows each step in real time.&lt;/p&gt;
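&lt;p&gt;For the example above, the plan the router hands back could look like this (the real JSON schema isn't shown in the post, so treat the shape as a guess):&lt;/p&gt;

```javascript
// One plausible shape for the router's multi-step plan for that prompt.
// Function names and argument keys are assumptions for illustration.
const plan = [
  { fn: 'createProject', args: { name: 'Q2' } },
  { fn: 'switchMode',    args: { mode: 'doc' } },
  { fn: 'generateDoc',   args: { topic: 'budget report' } },
  { fn: 'linkFile',      args: { project: 'Q2', fileRef: 'latest' } },
];

// The executor walks the plan in order, updating the overlay per step.
const stepNames = plan.map((step) => step.fn);
```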

&lt;h3&gt;How it works under the hood&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An intent router (Llama 8B on Groq's free tier) classifies every user input into one of 11 intents&lt;/li&gt;
&lt;li&gt;The router returns JSON with function names + arguments&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;executeUIActions()&lt;/code&gt; does whitelist check → schema validation → execution&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;resolveActionRefs()&lt;/code&gt; handles fuzzy name-to-ID resolution — "open that file" finds the right file&lt;/li&gt;
&lt;li&gt;Destructive actions require user confirmation before execution&lt;/li&gt;
&lt;li&gt;Context memory lets the AI resolve "that file" or "open it" from recent actions&lt;/li&gt;
&lt;/ul&gt;
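&lt;p&gt;The whitelist → schema → execute pipeline can be sketched like this (schemas, action names, and the validation style are my assumptions, not the real &lt;code&gt;executeUIActions()&lt;/code&gt;):&lt;/p&gt;

```javascript
// Minimal sketch of the whitelist check → schema validation → execution
// pipeline described above. All names and schemas are assumptions.
const SCHEMAS = {
  createProject: { name: 'string' },
  switchMode:    { mode: 'string' },
};
const ACTIONS = {
  createProject: (a) => ({ created: a.name }),
  switchMode:    (a) => ({ mode: a.mode }),
};

function executeUIActions(actions) {
  return actions.map(({ fn, args }) => {
    if (!Object.hasOwn(ACTIONS, fn)) {           // 1. whitelist check
      throw new Error('not whitelisted: ' + fn);
    }
    for (const [key, type] of Object.entries(SCHEMAS[fn])) {
      if (typeof args[key] !== type) {           // 2. schema validation
        throw new Error('bad argument: ' + key);
      }
    }
    return ACTIONS[fn](args);                    // 3. execution
  });
}
```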

&lt;h2&gt;Cross-file context injection&lt;/h2&gt;

&lt;p&gt;Link files to a project, and the AI reads ALL of them when generating new content.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Summarize this project" → reads every linked doc and sheet&lt;/li&gt;
&lt;li&gt;"Create slides from the research" → cross-references all project files&lt;/li&gt;
&lt;li&gt;Mention any file by name in chat → AI pulls it into the prompt automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No copy-pasting between files. No manual context-setting.&lt;/p&gt;
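&lt;p&gt;The core of that injection is a small context builder, something along these lines (data shapes are illustrative, not the project's actual storage format):&lt;/p&gt;

```javascript
// Hypothetical context builder: pull every file linked to a project into
// one prompt preamble. File records here are made up for illustration.
const files = [
  { name: 'research.doc', project: 'Q2', text: 'Survey findings' },
  { name: 'budget.sheet', project: 'Q2', text: 'A1: 100' },
  { name: 'notes.doc',    project: 'Q1', text: 'Old notes' },
];

function buildContext(project) {
  return files
    .filter((f) => f.project === project)
    .map((f) => '## ' + f.name + '\n' + f.text)
    .join('\n\n');
}
```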

&lt;h2&gt;Tech stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pure vanilla JS ES modules — no React, no Vue, no bundler&lt;/li&gt;
&lt;li&gt;All state in one &lt;code&gt;S&lt;/code&gt; object (no state management library)&lt;/li&gt;
&lt;li&gt;All module exports auto-bound to &lt;code&gt;window&lt;/code&gt; via an &lt;code&gt;Object.entries&lt;/code&gt; loop&lt;/li&gt;
&lt;li&gt;CSS split into 14 module files&lt;/li&gt;
&lt;li&gt;Supabase for optional cloud sync + GitHub OAuth&lt;/li&gt;
&lt;li&gt;BYO API key — works with Groq, OpenAI, Grok, Ollama, Claude&lt;/li&gt;
&lt;li&gt;PPTX export, 5 slide themes, 18 spreadsheet formulas&lt;/li&gt;
&lt;/ul&gt;
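&lt;p&gt;The auto-binding trick from the list above amounts to a few lines; a sketch, assuming &lt;code&gt;mod&lt;/code&gt; stands in for an &lt;code&gt;import * as mod&lt;/code&gt; namespace object:&lt;/p&gt;

```javascript
// Sketch of the auto-binding loop: copy a module's exports onto the global
// object so inline onclick handlers (and the AI action layer) can reach
// them with no bundler. `mod` is a stand-in for a real module namespace.
const mod = {
  openFile: () => 'opened',
  saveAll:  () => 'saved',
};

const g = typeof window === 'undefined' ? globalThis : window;
for (const [name, fn] of Object.entries(mod)) {
  g[name] = fn;
}
```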

&lt;h2&gt;Built solo in 4 days&lt;/h2&gt;

&lt;p&gt;Would love to hear thoughts — especially on the "LLM as UI controller" pattern. Is giving the model direct (whitelisted) control over the interface the right direction, or is there a better way to sandbox it?&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>ai</category>
      <category>webdev</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
