<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gianna</title>
    <description>The latest articles on DEV Community by Gianna (@gianna).</description>
    <link>https://dev.to/gianna</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3046933%2Fc54904b8-c142-4215-8150-d0a9a36cea7b.png</url>
      <title>DEV Community: Gianna</title>
      <link>https://dev.to/gianna</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gianna"/>
    <language>en</language>
    <item>
      <title>How to Build a Prompt-Friendly UI with React &amp; TypeScript</title>
      <dc:creator>Gianna</dc:creator>
      <pubDate>Wed, 14 May 2025 04:09:24 +0000</pubDate>
      <link>https://dev.to/gianna/how-to-build-a-prompt-friendly-ui-with-react-typescript-2766</link>
      <guid>https://dev.to/gianna/how-to-build-a-prompt-friendly-ui-with-react-typescript-2766</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Designing composable, maintainable, and developer-oriented interfaces for LLM apps&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introduction: Rethinking Interfaces for Prompt-Driven Workflows
&lt;/h2&gt;

&lt;p&gt;Large language models (LLMs) introduce a fundamentally different interaction model. Instead of submitting static form data to deterministic APIs, users compose flexible, evolving instructions (prompts) to drive behavior. The UI is no longer just a form. It becomes a live environment for crafting, executing, and refining these prompts.&lt;/p&gt;

&lt;p&gt;This shift introduces new engineering challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unstructured, contextual input&lt;/strong&gt;: prompts resemble natural language, code, or hybrids&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Probabilistic, variable outputs&lt;/strong&gt;: repeated prompts yield different results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exploratory iteration patterns&lt;/strong&gt;: success depends on trying, comparing, adjusting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To support this, frontend engineers must rethink how they model state, compose components, and structure control flows. This article presents a practical architecture for building scalable, maintainable prompt interfaces with &lt;strong&gt;React + TypeScript&lt;/strong&gt;, focusing on modular composition, layered state separation, clean component boundaries, production-ready UX patterns, and the UI as an execution surface.&lt;/p&gt;




&lt;h2&gt;
  
  
  System Breakdown: A Prompt-Centric Architecture
&lt;/h2&gt;

&lt;p&gt;Consider the system as a composition of modular parts. A well-structured LLM interface supports experimentation, parameter tuning, and iterative refinement. To enable this, the UI should be decomposed into clear and focused modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[PromptTemplateForm] ↘
                     [PromptCompiler] → [LLM API Caller] → [ResponseRenderer]
[RawPromptEditor]   ↗         ↘                ↘
              [PromptHistoryManager]      [StreamingHandler]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each module serves a distinct function:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PromptTemplateForm&lt;/strong&gt;: Structured input controls that compile into a reusable prompt template.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RawPromptEditor&lt;/strong&gt;: Freeform editor for power users and debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PromptCompiler&lt;/strong&gt;: Central utility that assembles runtime-ready prompt strings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM API Caller&lt;/strong&gt;: Handles API request lifecycle, including streaming and error management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PromptHistoryManager&lt;/strong&gt;: Tracks session history and supports prompt reuse.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ResponseRenderer&lt;/strong&gt;: Displays model output with proper formatting and UX affordances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;StreamingHandler&lt;/strong&gt;: Manages low-latency output rendering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This modular breakdown enables separation of concerns, testability, and progressive enhancement.&lt;/p&gt;
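
&lt;p&gt;As a concrete sketch of the &lt;strong&gt;PromptCompiler&lt;/strong&gt; role, a minimal template interpolator might look like the following. The &lt;code&gt;compilePrompt&lt;/code&gt; name and the &lt;code&gt;{{variable}}&lt;/code&gt; placeholder syntax are illustrative assumptions, not a fixed API:&lt;/p&gt;

```typescript
// Hypothetical PromptCompiler utility: fills {{name}} placeholders in a
// template with user-supplied variables. Unknown placeholders are left
// untouched so they stay visible during debugging.
export interface PromptTemplate {
  template: string;
  variables: { [key: string]: string };
}

export function compilePrompt(input: PromptTemplate): string {
  return input.template.replace(/\{\{(\w+)\}\}/g, (match, name) => {
    const value = input.variables[name];
    return value !== undefined ? value : match;
  });
}
```

&lt;p&gt;For example, &lt;code&gt;compilePrompt({ template: "Write a {{tone}} post about {{topic}}.", variables: { tone: "friendly", topic: "React hooks" } })&lt;/code&gt; yields &lt;code&gt;"Write a friendly post about React hooks."&lt;/code&gt;&lt;/p&gt;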

&lt;p&gt;To improve long-term maintainability and scalability, isolate &lt;strong&gt;presentational components&lt;/strong&gt; (UI rendering only) from &lt;strong&gt;container components&lt;/strong&gt; (stateful logic). For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PromptFormContainer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  ↳ &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PromptTemplateForm&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
  ↳ &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PromptPreview&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;PromptFormContainer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HistoryPanelContainer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  ↳ &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;HistoryList&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
  ↳ &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;OutputComparison&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;HistoryPanelContainer&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows business logic (e.g. retry, compile, fork) to live in container layers while UI layers remain reusable and declarative.&lt;/p&gt;




&lt;h2&gt;
  
  
  Core Components &amp;amp; Hooks
&lt;/h2&gt;

&lt;p&gt;This section details not just the technical roles of each component, but also how they collaborate within a real-world LLM interface, and why their design matters for clarity, testability, and iterative development. Below is a complete collaboration walkthrough:&lt;/p&gt;

&lt;h3&gt;
  
  
  ⛓ Example: A User Types a New Prompt Using the UI
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[PromptFormContainer] -- owns --&amp;gt; (topic, tone)
       |
       v
[PromptTemplateForm] -- controlled input --&amp;gt; [TextInput, Select]
       |
       v
[PromptFormContainer] -- compile --&amp;gt; [PromptCompiler]
       |
       v
       -- call --&amp;gt; [LLMApiCaller] -- fetch --&amp;gt; /api/llm
                                          |
                                          v
                                stream response via ReadableStream
                                          |
                                          v
[PromptFormContainer] -- updates --&amp;gt; (responseText)
       |
       v
[ResponseRenderer] -- render --&amp;gt; formatted LLM output
       |
       v
[PromptFormContainer] -- save --&amp;gt; [usePromptHistory().add()]
       |
       v
[HistoryPanelContainer] -- render --&amp;gt; [HistoryList]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;PromptFormContainer&lt;/code&gt;&lt;/strong&gt; owns the local state of form fields (e.g. &lt;code&gt;topic&lt;/code&gt;, &lt;code&gt;tone&lt;/code&gt;) and passes them to:&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;PromptTemplateForm&lt;/code&gt;&lt;/strong&gt; which renders &lt;code&gt;&amp;lt;TextInput&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;Select&amp;gt;&lt;/code&gt; components. These are purely presentational.&lt;/li&gt;
&lt;li&gt;When the user submits, the container uses &lt;strong&gt;&lt;code&gt;PromptCompiler&lt;/code&gt;&lt;/strong&gt; to transform the filled-in variables into a complete string prompt.&lt;/li&gt;
&lt;li&gt;This prompt is passed to &lt;strong&gt;&lt;code&gt;LLMApiCaller&lt;/code&gt;&lt;/strong&gt;, which triggers a fetch request to your &lt;code&gt;/api/llm&lt;/code&gt; endpoint. It also optionally sets up a streaming reader.&lt;/li&gt;
&lt;li&gt;As tokens arrive, &lt;strong&gt;&lt;code&gt;PromptFormContainer&lt;/code&gt;&lt;/strong&gt; pipes them into local state (e.g. &lt;code&gt;responseText&lt;/code&gt;), which gets passed to:&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ResponseRenderer&lt;/code&gt;&lt;/strong&gt;, a presentational component that renders the output using &lt;code&gt;&amp;lt;Markdown /&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;When the generation completes, the container calls &lt;strong&gt;&lt;code&gt;usePromptHistory().add()&lt;/code&gt;&lt;/strong&gt; to store the prompt/output pair.&lt;/li&gt;
&lt;li&gt;The output now appears inside &lt;strong&gt;&lt;code&gt;HistoryList&lt;/code&gt;&lt;/strong&gt;, rendered by another container: &lt;strong&gt;&lt;code&gt;HistoryPanelContainer&lt;/code&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;If the user clicks "retry" on a previous generation, &lt;code&gt;retry(id)&lt;/code&gt; is called, and the whole flow replays from step 3.&lt;/li&gt;
&lt;/ol&gt;
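
&lt;p&gt;Steps 4 and 5 above can be sketched as a single streaming call. The &lt;code&gt;/api/llm&lt;/code&gt; endpoint comes from the diagram; the &lt;code&gt;callLlm&lt;/code&gt; name, payload shape, and plain-text streaming format are assumptions for illustration:&lt;/p&gt;

```typescript
// Hypothetical LLMApiCaller + StreamingHandler sketch: POSTs a compiled
// prompt and forwards decoded chunks to a callback as they arrive, so a
// container can pipe them into responseText state.
export async function callLlm(prompt: string, onToken: (chunk: string) => void) {
  const res = await fetch("/api/llm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok || res.body === null) {
    throw new Error("LLM request failed with status " + res.status);
  }
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    full += chunk;
    onToken(chunk); // incremental render hook for the container
  }
  return full; // complete response, e.g. for history storage
}
```

&lt;p&gt;In production you would likely also thread an &lt;code&gt;AbortController&lt;/code&gt; signal through &lt;code&gt;fetch&lt;/code&gt; so the container can cancel an in-flight generation.&lt;/p&gt;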

&lt;p&gt;This flow illustrates how each module contributes &lt;strong&gt;just enough logic&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Presentational components do no data fetching or state control&lt;/li&gt;
&lt;li&gt;Hooks encapsulate shared logic (prompt execution, history management)&lt;/li&gt;
&lt;li&gt;Containers coordinate data + orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This layered separation enables the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit testing of stateless UI components in isolation&lt;/li&gt;
&lt;li&gt;Reuse of prompt-related logic across templates and editors&lt;/li&gt;
&lt;li&gt;Swappable output renderer implementations (streamed, animated, minimal)&lt;/li&gt;
&lt;li&gt;Optional expansion toward multi-agent flows, auto-reply, or collaborative editing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each layer does one job:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PromptTemplateForm&lt;/code&gt; → render dynamic prompt inputs&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PromptCompiler&lt;/code&gt; → compile from schema&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LLMApiCaller&lt;/code&gt; → abstract API transport&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ResponseRenderer&lt;/code&gt; → format output&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;usePromptHistory()&lt;/code&gt; → model iteration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding these interactions helps scale small prototypes into reliable AI interfaces — with full visibility and traceability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Application Architecture Layer
&lt;/h2&gt;

&lt;p&gt;A well-architected prompt interface benefits greatly from a clear separation between global application state and UI rendering logic. This is especially important in LLM apps where the same data may affect multiple components at different layers.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔧 State Management Strategy
&lt;/h3&gt;

&lt;p&gt;Use a combination of React context + custom hooks, or a small store library like Zustand, to model the following top-level state. A simple example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;AppState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;currentPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;history&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PromptSession&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="nl"&gt;selectedHistoryId&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;isStreaming&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
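
&lt;p&gt;To make the coupling concrete, here is one possible reducer over a trimmed-down version of this state, showing how a single streamed generation walks through it. The action names are assumptions, not a prescribed protocol:&lt;/p&gt;

```typescript
// Hypothetical reducer over a trimmed AppState, showing how one
// generation touches input, streaming flag, output, and history.
export interface PromptSession {
  id: string;
  prompt: string;
  output: string;
}

export interface State {
  currentPrompt: string;
  output: string;
  isStreaming: boolean;
  history: PromptSession[];
}

export type Action =
  | { type: "start"; prompt: string }
  | { type: "token"; chunk: string }
  | { type: "done"; id: string };

export function reduce(state: State, action: Action): State {
  switch (action.type) {
    case "start": // new generation begins: reset output, mark streaming
      return { ...state, currentPrompt: action.prompt, output: "", isStreaming: true };
    case "token": // a streamed chunk arrives: append to the visible output
      return { ...state, output: state.output + action.chunk };
    case "done": // generation finished: stop streaming, record the session
      return {
        ...state,
        isStreaming: false,
        history: [
          { id: action.id, prompt: state.currentPrompt, output: state.output },
          ...state.history,
        ],
      };
  }
}
```

&lt;p&gt;The same reducer can back &lt;code&gt;useReducer&lt;/code&gt;, a context provider, or a Zustand store; the point is that every state transition is explicit and testable.&lt;/p&gt;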



&lt;p&gt;Encapsulate updates in domain-specific hooks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nf"&gt;usePromptExecution&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;      &lt;span class="c1"&gt;// handles LLMApiCaller + StreamingHandler&lt;/span&gt;
&lt;span class="nf"&gt;usePromptHistory&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;        &lt;span class="c1"&gt;// stores and retrieves past sessions&lt;/span&gt;
&lt;span class="nf"&gt;usePromptVariables&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;      &lt;span class="c1"&gt;// manages input field state&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
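
&lt;p&gt;The storage behind &lt;code&gt;usePromptHistory()&lt;/code&gt; can be sketched as a plain store that a thin hook wraps. All names here are assumptions; swap in your own persistence:&lt;/p&gt;

```typescript
// Hypothetical store behind usePromptHistory(): keeps sessions newest
// first and supports the retry/fork lookups the containers dispatch.
export interface StoredSession {
  id: string;
  prompt: string;
  output: string;
}

export class PromptHistoryStore {
  private sessions: StoredSession[] = [];
  private nextId = 1;

  // Record a finished prompt/output pair; returns its id for later retry.
  add(prompt: string, output: string): string {
    const id = String(this.nextId);
    this.nextId += 1;
    this.sessions = [{ id, prompt, output }, ...this.sessions];
    return id;
  }

  // Look up a past session, e.g. when the user clicks "retry".
  find(id: string): StoredSession | undefined {
    return this.sessions.find((s) => s.id === id);
  }

  all(): StoredSession[] {
    return this.sessions;
  }
}
```

&lt;p&gt;A hook such as &lt;code&gt;usePromptHistory&lt;/code&gt; would then expose this store through React state or &lt;code&gt;useSyncExternalStore&lt;/code&gt;, keeping the domain logic testable without a renderer.&lt;/p&gt;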



&lt;h3&gt;
  
  
  🧱 Component Role Alignment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PromptFormContainer&lt;/code&gt; reads from &lt;code&gt;usePromptVariables&lt;/code&gt;, compiles prompt, calls execution hook&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PromptTemplateForm&lt;/code&gt; renders inputs, receives value+onChange as props&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PromptPreview&lt;/code&gt; reflects current &lt;code&gt;compiledPrompt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ResponseRenderer&lt;/code&gt; reflects latest &lt;code&gt;output&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;HistoryPanelContainer&lt;/code&gt; subscribes to &lt;code&gt;usePromptHistory&lt;/code&gt; and dispatches &lt;code&gt;retry&lt;/code&gt; / &lt;code&gt;fork&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistent updates across containers&lt;/li&gt;
&lt;li&gt;No prop-drilling or tight coupling between siblings&lt;/li&gt;
&lt;li&gt;Hooks are testable and traceable in isolation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🎯 Why It Matters
&lt;/h3&gt;

&lt;p&gt;A single LLM interaction affects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input state (form variables)&lt;/li&gt;
&lt;li&gt;Execution flow (stream/cancel)&lt;/li&gt;
&lt;li&gt;Output view (response + error)&lt;/li&gt;
&lt;li&gt;History trace (storage + versioning)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without a clear data layer, coordinating these becomes error-prone and hard to reason about. With scoped state hooks and container/presenter separation, each module handles only the logic relevant to its role.&lt;/p&gt;

&lt;p&gt;This enables confident refactoring, feature growth (e.g., adding &lt;code&gt;preset templates&lt;/code&gt; or &lt;code&gt;multi-agent threads&lt;/code&gt;), and team scalability.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;A well-designed prompt interface should function as a complete execution environment. It requires more than a form: it requires state models, clear separation of concerns, and feedback-aware architecture.&lt;/p&gt;

&lt;p&gt;From controlled inputs to prompt compilation, API orchestration, streaming output, and session history, each layer must be deliberately modeled to remain inspectable and extendable.&lt;/p&gt;

&lt;p&gt;Each component should reflect its intent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inputs should be inspectable.&lt;/li&gt;
&lt;li&gt;Outputs should be traceable.&lt;/li&gt;
&lt;li&gt;History should be restorable.&lt;/li&gt;
&lt;li&gt;Interactions should be reversible.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This design mindset ensures engineers can build with clarity, test with confidence, and evolve features without losing control. Prompt interfaces deserve the same rigor as any other developer tool.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>react</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
