<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Odmar Goyvaerts</title>
    <description>The latest articles on DEV Community by Odmar Goyvaerts (@elketron).</description>
    <link>https://dev.to/elketron</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1139247%2F01efd4db-9eff-4362-9b9d-1c62b9a2ae47.png</url>
      <title>DEV Community: Odmar Goyvaerts</title>
      <link>https://dev.to/elketron</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/elketron"/>
    <language>en</language>
    <item>
      <title>Rendering is Prompting: Stop Inventing Things Your LLM Already Knows</title>
      <dc:creator>Odmar Goyvaerts</dc:creator>
      <pubDate>Sun, 05 Apr 2026 09:00:34 +0000</pubDate>
      <link>https://dev.to/elketron/rendering-is-prompting-stop-inventing-things-your-llm-already-knows-3fef</link>
      <guid>https://dev.to/elketron/rendering-is-prompting-stop-inventing-things-your-llm-already-knows-3fef</guid>
      <description>&lt;p&gt;There's a mindset trap that most people fall into when building LLM agents: they treat the prompt as something special. A carefully engineered control surface. A contract. Something you have to design from scratch.&lt;/p&gt;

&lt;p&gt;It's just text.&lt;/p&gt;

&lt;p&gt;And once you really internalize that, something shifts. Because if the prompt is just text, then it can &lt;em&gt;render&lt;/em&gt; things. And the model already has incredibly dense associations with environments that humans have been using for 30+ years.&lt;/p&gt;

&lt;p&gt;That's the core insight I want to share: &lt;strong&gt;rendering is prompting&lt;/strong&gt;. You don't have to describe a world to your agent — you can just show it one it already knows.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tax of Invention
&lt;/h2&gt;

&lt;p&gt;Every time you invent custom syntax, custom instructions, or a custom abstraction for your agent, you pay a tax. You spend tokens explaining it. The model has weaker associations with it. Behavior is less predictable.&lt;/p&gt;

&lt;p&gt;Most agent frameworks are full of this. Custom tool schemas. Elaborate system prompts describing what the agent "can do". Invented DSLs for multi-step reasoning.&lt;/p&gt;

&lt;p&gt;Compare these two approaches to web search:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invented:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;web_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest transformer research&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Existing:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://www.google.com/search?q&lt;span class="o"&gt;=&lt;/span&gt;latest+transformer+research
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second one is just a shell command. But the model already knows &lt;code&gt;curl&lt;/code&gt; deeply — it implies HTTP, headers, response codes, piping output. It composes naturally with everything else in a shell environment. You didn't have to document anything.&lt;/p&gt;

&lt;p&gt;Every tool you can map to an existing Unix command is a tool you don't have to explain.&lt;/p&gt;

&lt;p&gt;That said — this isn't a rule against ever inventing anything. Sometimes a custom action is the right call, especially when the operation has no natural existing equivalent. The point is to reach for existing conventions &lt;em&gt;first&lt;/em&gt;, and only invent when you genuinely have to. The less you invent, the less you explain.&lt;/p&gt;




&lt;h2&gt;
  
  
  Shell Agents: Flexibility Through Familiarity
&lt;/h2&gt;

&lt;p&gt;A shell agent is simple: the prompt renders like a real terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;(master) [~/project/src] write_file utils.py
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That one line gives the agent its git branch and its current directory, and the same prompt can show an active virtual environment too: all the situational awareness a developer would have. You didn't describe any of it. You just rendered it.&lt;/p&gt;

&lt;p&gt;The model pattern-matches immediately to "I am a developer in a terminal" and pulls in decades of implicit knowledge about conventions, available operations, and expected output format. The rendering primes not just what the agent &lt;em&gt;does&lt;/em&gt; but how it &lt;em&gt;communicates&lt;/em&gt; — shell output is terse, precise, no fluff. The model picks that up naturally.&lt;/p&gt;
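&lt;p&gt;As a minimal sketch of how such a prompt line might be produced (the function and its parameters are illustrative, not from any particular framework):&lt;/p&gt;

```python
# Hypothetical helper: build a terminal-style prompt line from whatever
# state the agent loop tracks. All names here are illustrative assumptions.
def render_prompt(branch, cwd, venv=None):
    # Virtualenvs prefix a real shell prompt the same way, e.g. "(.venv) "
    venv_part = f"({venv}) " if venv else ""
    return f"{venv_part}({branch}) [{cwd}] "

print(render_prompt("master", "~/project/src"))  # prints "(master) [~/project/src] "
```

The point is that the output imitates a prompt format the model has seen millions of times, so no explanation of its parts is needed.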

&lt;p&gt;MCP calls? Just commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;(master) [~/project] mcp filesystem read_file config.json
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sub-shells, REPLs, switching contexts — all of it just falls out of the shell metaphor. &lt;code&gt;python3&lt;/code&gt; shifts the prompt to &lt;code&gt;&amp;gt;&amp;gt;&amp;gt;&lt;/code&gt;. &lt;code&gt;exit&lt;/code&gt; brings you back. The model already knows what those mean.&lt;/p&gt;
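&lt;p&gt;One way to track those nested modes is a plain stack of prompt strings. A hypothetical sketch, with invented names (no actual framework API is being quoted here):&lt;/p&gt;

```python
# Assumed design: entering a REPL pushes a new prompt string,
# "exit" pops back to the previous one.
class PromptStack:
    def __init__(self, base):
        self.stack = [base]

    def push(self, prompt):
        self.stack.append(prompt)

    def pop(self):
        if len(self.stack) != 1:  # never pop the base shell prompt
            self.stack.pop()

    def current(self):
        return self.stack[-1]

# chr(62) is the greater-than sign; three of them form the Python REPL prompt
PY_REPL = chr(62) * 3 + " "

modes = PromptStack("(master) [~/project] ")
modes.push(PY_REPL)     # the agent ran python3
modes.pop()             # the agent ran exit
print(modes.current())  # prints "(master) [~/project] "
```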




&lt;h2&gt;
  
  
  DOM Agents: Structure Through Rendering
&lt;/h2&gt;

&lt;p&gt;When you need more rigidity — structured multi-step execution, parallel actions, inspectable state — you can render a different kind of environment: a document.&lt;/p&gt;

&lt;p&gt;The action syntax is minimal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;^-- &amp;lt;action_name&amp;gt; [optional parameter]
[body]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent can emit multiple actions in a single response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;^-- think
I need to fetch both files before writing the output.

^-- read_file config.json

^-- read_file user.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A parser sweeps the response, dispatches all actions (potentially in parallel), and injects results back inline after the node that generated them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;^--&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;read_file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;config.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;^--&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;result&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"theme"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"dark"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"lang"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;^--&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;read_file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;user.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;^--&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;result&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alex"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"role"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"admin"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Think of it like a webpage for the model. The document &lt;em&gt;is&lt;/em&gt; the state. The agent reads it, acts, the document updates, it reads again. No external memory to sync. No state object to maintain. The model always has an unambiguous read of what happened and in what order just by reading top to bottom.&lt;/p&gt;

&lt;p&gt;And because results are injected locally — immediately after the action that triggered them — the model always sees cause and effect together.&lt;/p&gt;
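&lt;p&gt;The sweep itself can be very small. A toy version, assuming single-line actions and a handler table you define yourself (both are simplifications of what a real dispatcher would need):&lt;/p&gt;

```python
import re

# Matches the post's action marker: "^-- name [optional parameter]".
ACTION_RE = re.compile(r"\^-- (\w+)(?: (.*))?$")

def sweep(response, handlers):
    # Walk the response line by line; after each recognized action,
    # inject a "^-- result" node with the handler's output.
    out = []
    for line in response.splitlines():
        out.append(line)
        m = ACTION_RE.match(line)
        if m and m.group(1) in handlers:
            out.append("^-- result")
            out.append(handlers[m.group(1)](m.group(2)))
    return "\n".join(out)

files = {"config.json": '{"theme": "dark"}'}
doc = sweep("^-- read_file config.json", {"read_file": files.get})
print(doc)
```

Parallel dispatch would replace the direct handler call with an async gather, but the inject-after-the-action shape stays the same.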




&lt;h2&gt;
  
  
  Combining Them
&lt;/h2&gt;

&lt;p&gt;The shell and DOM aren't separate systems. They're just different render modes. The DOM is the default environment, and the agent can drop into a shell whenever it needs to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;^-- open_shell
ls -la &amp;amp;&amp;amp; cat config.json

^-- result
drwxr-xr-x  src/
-rw-r--r--  config.json
{"theme": "dark"}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then &lt;code&gt;exit&lt;/code&gt; re-renders back to the document. The agent doesn't context-switch mentally — it's still reading one document, it just has a shell widget embedded in it. Like a terminal pane in an IDE.&lt;/p&gt;

&lt;p&gt;Which raises an interesting point.&lt;/p&gt;




&lt;h2&gt;
  
  
  Buffers All the Way Down
&lt;/h2&gt;

&lt;p&gt;Here's a realization that took me a while to land on: the DOM and shell models are just different buffers in a prompt. Different regions of text with different render modes.&lt;/p&gt;

&lt;p&gt;An IDE is also just multiple buffers with different roles — editor, file tree, terminal, status bar. If the prompt is just text, and different sections are different buffers, then an IDE-like environment for a model is just... a layout problem.&lt;/p&gt;

&lt;p&gt;For the editor buffer specifically, something like &lt;code&gt;ed&lt;/code&gt; is a natural fit. It's line-addressed, text-based, and the model knows it deeply. No GUI needed — the entire editing interface is just text in a buffer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;master&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;~/&lt;/span&gt;&lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="n"&gt;ed&lt;/span&gt; &lt;span class="n"&gt;utils&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt;
&lt;span class="mi"&gt;156&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;foo&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;pass&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The status buffer handles ambient context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[utils.py:24] [errors: 2] [git: 3 changes]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That one line gives the model file position, error state, and git status — without explaining any of it.&lt;/p&gt;

&lt;p&gt;One important clarification: when I say the model "already knows" these commands, that doesn't mean &lt;code&gt;curl&lt;/code&gt; or &lt;code&gt;ed&lt;/code&gt; literally execute. These are still custom actions implemented by the developer — &lt;code&gt;curl&lt;/code&gt; dispatches to whatever HTTP handler you've written, &lt;code&gt;ed&lt;/code&gt; dispatches to your file editing logic. The key is that the &lt;em&gt;names and conventions&lt;/em&gt; carry dense associations. The model already knows what &lt;code&gt;curl&lt;/code&gt; implies, what valid usage looks like, what output to expect. You're borrowing the semantics, not the binary.&lt;/p&gt;
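&lt;p&gt;In code, that borrowing is just a dispatch table keyed by familiar names. A hypothetical sketch (the handlers are stubs, not real implementations of these tools):&lt;/p&gt;

```python
# Assumed design: the model emits "curl ..." or "ed ...", and the
# developer's own handlers run. Only the names are borrowed.
def handle_curl(args):
    # A real handler would perform the HTTP request itself.
    return f"[fetched {args}]"

def handle_ed(args):
    # A real handler would open an editing session on the file.
    return f"[editing {args}]"

DISPATCH = {"curl": handle_curl, "ed": handle_ed}

def run(command_line):
    name, _, args = command_line.partition(" ")
    handler = DISPATCH.get(name)
    return handler(args) if handler else f"{name}: command not found"

print(run("curl https://example.com"))  # prints "[fetched https://example.com]"
```

Even the fallback message imitates a real shell, so failure modes stay inside the rendered world too.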

&lt;p&gt;This is the direction I'm taking the framework next: buffers as a first-class abstraction. Named, independently updatable, composable into layouts. Control over which buffers appear in the prompt at any given time to manage context window pressure.&lt;/p&gt;
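&lt;p&gt;To make that direction concrete, here is one possible shape for the abstraction. This is a speculative sketch of an API that does not exist yet, not the framework's actual design:&lt;/p&gt;

```python
# Speculative sketch: named buffers with independent content and
# visibility, composed into a single prompt string.
class Buffer:
    def __init__(self, name, text="", visible=True):
        self.name = name
        self.text = text
        self.visible = visible

def render(buffers):
    # Hidden buffers stay updatable but cost zero prompt tokens.
    return "\n\n".join(b.text for b in buffers if b.visible)

editor = Buffer("editor", "def foo():\n    pass")
status = Buffer("status", "[utils.py:24] [errors: 2] [git: 3 changes]")
shell = Buffer("shell", "(master) [~/project] ", visible=False)

prompt = render([editor, status, shell])  # shell is held out of the prompt
```

Toggling visibility is the context-window-pressure valve: a buffer drops out of the rendering without losing its state.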




&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;The model absorbed 30+ years of human-computer interaction during training. Terminals, browsers, REPLs, document formats, editors — it knows these environments with extraordinary depth.&lt;/p&gt;

&lt;p&gt;You're not locked into treating the prompt as a blank canvas you have to fill with instructions. You can render an environment the model already inhabits.&lt;/p&gt;

&lt;p&gt;Stop describing worlds to your agents. Start rendering them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
