<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paton Wong</title>
    <description>The latest articles on DEV Community by Paton Wong (@patonw).</description>
    <link>https://dev.to/patonw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864318%2F068dcd88-97a9-456a-9716-5df582e834c6.png</url>
      <title>DEV Community: Paton Wong</title>
      <link>https://dev.to/patonw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/patonw"/>
    <language>en</language>
    <item>
      <title>Agentic workflows with Aerie</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:06:39 +0000</pubDate>
      <link>https://dev.to/patonw/agentic-workflows-with-aerie-1724</link>
      <guid>https://dev.to/patonw/agentic-workflows-with-aerie-1724</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Aerie
&lt;/h2&gt;

&lt;p&gt;This is an introduction to Aerie, a new open-source tool for creating and running AI-powered workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use workflows?
&lt;/h3&gt;

&lt;p&gt;You hire a brilliant intern and give them unrestricted access to your company's systems, along with high-level instructions to complete a complex task. The first time, they do a reasonably good job without breaking anything important. Should it be a surprise when a minor misunderstanding of the next task cascades into complete disaster? Yet this is commonly how we manage AI agents. For all their impressive capabilities, language models do not learn from experience, no matter how carefully you "engineer" a prompt &lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Software agents are systems that make decisions and operate independently from&lt;br&gt;
human supervision on behalf of users. AI agents replace deterministic program&lt;br&gt;
logic with language models. These models, however, are inherently&lt;br&gt;
probabilistic. The more autonomy we give them, the greater the opportunity for&lt;br&gt;
surprises.&lt;/p&gt;

&lt;p&gt;No matter how well-tuned prompts are during development, there are&lt;br&gt;
countless ways for things to go wrong in the wild. The more detailed you&lt;br&gt;
make the prompt to account for pitfalls, the less attention the model can pay&lt;br&gt;
to the core task. Furthermore, failure-retry loops can balloon the context,&lt;br&gt;
confusing the model even further.&lt;/p&gt;

&lt;p&gt;AI-powered workflows provide a more reliable alternative to purely&lt;br&gt;
agent-driven systems. A workflow breaks a task down into discrete, well-defined&lt;br&gt;
steps. AI plays a specific but limited role in some of those steps, allowing it&lt;br&gt;
to concentrate on what it excels at without extraneous distractions.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Aerie?
&lt;/h3&gt;

&lt;p&gt;Aerie&lt;sup id="fnref2"&gt;2&lt;/sup&gt; is a graphical tool for building agentic workflows. Programming expertise&lt;br&gt;
is helpful, but not necessary. Here, graphical is an overloaded term: aside&lt;br&gt;
from the user interface of the visual editor, workflows themselves are&lt;br&gt;
structured as node graphs. Each node represents an agent, a data&lt;br&gt;
transformation, a decision, and so on. Outputs of a node can be connected to&lt;br&gt;
inputs of other nodes, so data flows predictably from one node to the next.&lt;/p&gt;

&lt;p&gt;With this visual approach it's easier to build, debug, explain and iterate on&lt;br&gt;
workflows -- making Aerie appropriate for prototyping and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mazkcpuc7310vluq69q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mazkcpuc7310vluq69q.png" alt="graph legend" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Aerie can be run from &lt;a href="https://github.com/patonw/aerie" rel="noopener noreferrer"&gt;source&lt;/a&gt; or a binary AppImage available on the &lt;a href="https://github.com/patonw/aerie/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The AppImage can be run directly under Linux without installation. However, you&lt;br&gt;
will usually need to set the correct permissions after downloading the file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +rx aerie-x86_64.AppImage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can also be run under Windows with &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps" rel="noopener noreferrer"&gt;WSL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Building and running from source is recommended, however. The development stack&lt;br&gt;
provides a uniform and predictable environment for the application. On the&lt;br&gt;
other hand, it requires far more disk space and time for the initial build. For&lt;br&gt;
instructions on building the source, see the &lt;a href="https://patonw.github.io/aerie/dev_start.html" rel="noopener noreferrer"&gt;Development Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can also install from source using the &lt;a href="https://nixos.org/download/" rel="noopener noreferrer"&gt;nix tool&lt;/a&gt;: &lt;a href="https://patonw.github.io/aerie/user_start.html#installation" rel="noopener noreferrer"&gt;Installation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To get a small taste of the potential of this approach, we will start by&lt;br&gt;
building a trivial workflow. It will pass a user's prompt and conversation&lt;br&gt;
history to an LLM and then rewrite its response as a haiku. Almost every modern&lt;br&gt;
language model can handle this in a single step, but we'll use two agents for&lt;br&gt;
didactic purposes.&lt;/p&gt;

&lt;p&gt;In later articles we'll explore topics like data extraction, tool use and&lt;br&gt;
iteration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqrkge6fc99oyxyg0l9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqrkge6fc99oyxyg0l9r.png" alt="create button" width="265" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the Create button on the command palette to create a new workflow with&lt;br&gt;
default nodes. Rather than an empty document, it will contain a basic chat&lt;br&gt;
agent which you can choose to integrate into your workflow or discard. We'll do&lt;br&gt;
the former this time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m849dvf9lpfy1gp936y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m849dvf9lpfy1gp936y.png" alt="finish node" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, disconnect the &lt;code&gt;conversation&lt;/code&gt; pin on the Finish node for now. You can&lt;br&gt;
do this by right-clicking on the wire itself, or on the pin of either the&lt;br&gt;
source or destination node.&lt;/p&gt;

&lt;p&gt;Normally, this would send the completed conversation of the&lt;br&gt;
workflow to the chat session, viewable in the Chat tab. In the meantime, we'll&lt;br&gt;
be working only in the workflow editor.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Pins on the right side of a node are output pins while pins on the left side&lt;br&gt;
are inputs. Information flows in only one direction along a wire from the&lt;br&gt;
output pin of a source node to an input pin of the destination node.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Normal Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g7nlx6exnexbx6vu2vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g7nlx6exnexbx6vu2vc.png" alt="agent wires" width="764" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll use the existing agent to generate a normal response.&lt;br&gt;
Disconnect the &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;input&lt;/code&gt; wires between the &lt;em&gt;Start&lt;/em&gt; and &lt;em&gt;Agent&lt;/em&gt; nodes.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Start&lt;/em&gt; node is the entry point into the workflow, gathering settings and&lt;br&gt;
inputs from the execution environment and exposing them to the other nodes in&lt;br&gt;
the workflow. These values are only available from the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F911fflvdse09ivdhsxu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F911fflvdse09ivdhsxu3.png" alt="agent settings" width="438" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Agent&lt;/em&gt; nodes define parameters for invoking LLMs via Chat and Structured&lt;br&gt;
nodes. An &lt;em&gt;Agent&lt;/em&gt; node does not generate content by itself. Rather, it holds&lt;br&gt;
settings that differentiate it from other agents, and it can be re-used at&lt;br&gt;
different stages of the workflow by content-generating nodes.&lt;/p&gt;

&lt;p&gt;Set the LLM model using the format &lt;code&gt;{provider}/{model}&lt;/code&gt;. Examples:&lt;br&gt;
&lt;code&gt;ollama/devstral:latest&lt;/code&gt; or &lt;a href="https://openrouter.ai/openrouter/free" rel="noopener noreferrer"&gt;&lt;code&gt;openrouter/openrouter/free&lt;/code&gt;&lt;/a&gt;. Most providers will have a list or database of models they provide (e.g. &lt;a href="https://openrouter.ai/models" rel="noopener noreferrer"&gt;https://openrouter.ai/models&lt;/a&gt; &amp;amp; &lt;a href="https://docs.mistral.ai/getting-started/models" rel="noopener noreferrer"&gt;https://docs.mistral.ai/getting-started/models&lt;/a&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Local providers like Ollama don't require authentication, but services like&lt;br&gt;
OpenRouter, Anthropic, etc. usually require an API key. See API Keys for details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Set the temperature low (~0.25).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The temperature can be set between 0.0 and 1.0. It controls how words&lt;br&gt;
are selected from a range of possibilities during generation and is loosely&lt;br&gt;
correlated with creativity. Higher temperatures make improbable outputs more&lt;br&gt;
likely, while lower temperatures tend to produce drier, more generic responses.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72ahsbw96wthj4xy7fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72ahsbw96wthj4xy7fe.png" alt="chat node" width="385" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the agent is configured, let's take a look at the &lt;em&gt;Chat&lt;/em&gt; node. This is the node&lt;br&gt;
that actually interacts with the language model provider to generate content.&lt;/p&gt;

&lt;p&gt;It takes configuration values from an &lt;em&gt;Agent&lt;/em&gt; node and optionally a&lt;br&gt;
conversation history -- an ongoing list of user prompts and agent responses.&lt;br&gt;
In this instance, the conversation is supplied by the &lt;em&gt;Start&lt;/em&gt; node, since this&lt;br&gt;
is the first &lt;em&gt;Chat&lt;/em&gt; in our workflow.&lt;/p&gt;

&lt;p&gt;Finally, it takes a prompt, which you can supply from a text value like the&lt;br&gt;
&lt;code&gt;input&lt;/code&gt; pin of the &lt;em&gt;Start&lt;/em&gt; node as we saw earlier with the default workflow. In&lt;br&gt;
this instance, however, leave the pin unwired and type the text prompt directly&lt;br&gt;
into the node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Saving
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9i3c7132mzeyebgqml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9i3c7132mzeyebgqml.png" alt="autosave" width="680" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we continue, it's a good idea to enable &lt;code&gt;autosave&lt;/code&gt; in the Settings tab.&lt;br&gt;
This will write any changes you make to disk automatically. Otherwise, you&lt;br&gt;
will need to click the Save button in the command palette manually for each&lt;br&gt;
changed workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
If there are unsaved changes to workflows other than the one displayed, they&lt;br&gt;
may be lost. The app will not warn about discarding unsaved changes when&lt;br&gt;
exiting.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Previews
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh41umd4rbji5sn88t3tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh41umd4rbji5sn88t3tf.png" alt="create preview" width="729" height="693"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far we've modified existing nodes, but now let's create a new node for&lt;br&gt;
examining wire values during a run. The &lt;em&gt;Preview&lt;/em&gt; node will show intermediate&lt;br&gt;
values when the workflow is run from the editor but has no effect otherwise.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The &lt;em&gt;Preview&lt;/em&gt; node can accept any wire value and will change its display&lt;br&gt;
format according to the type.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Right click on the canvas in the area you want the new node to appear. A&lt;br&gt;
context menu appears with nodes that can be added to this graph. Select the&lt;br&gt;
Preview item to create a new node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Workflows
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrctsfle8d0qpcqkyp5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrctsfle8d0qpcqkyp5i.png" alt="running workflow" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect the &lt;code&gt;response&lt;/code&gt; pin of the &lt;em&gt;Chat&lt;/em&gt; node to the &lt;em&gt;Preview&lt;/em&gt;'s input and &lt;strong&gt;Run&lt;/strong&gt; the&lt;br&gt;
workflow using the button in the command palette.&lt;/p&gt;

&lt;p&gt;As the workflow runs, nodes that have finished will be marked with a green&lt;br&gt;
check.&lt;/p&gt;

&lt;p&gt;Nodes that are actively running will have a spinning circle in the corner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oy78906rk6mwts5j851.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oy78906rk6mwts5j851.png" alt="finished workflow" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the workflow has run, the &lt;em&gt;Preview&lt;/em&gt; node will show a standard response to&lt;br&gt;
our prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Poetic Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtv5ija67nqch37grino.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtv5ija67nqch37grino.png" alt="agent two" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a first agent generating normal (boring) responses, it's time&lt;br&gt;
to create a second agent to generate poetry. This agent has a distinct purpose&lt;br&gt;
and personality from the previous one, so we'll configure it with different&lt;br&gt;
settings.&lt;/p&gt;

&lt;p&gt;Create a second agent from the context menu &lt;em&gt;LLM › Agent&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Connect it to the first agent. It will take configuration values from the first&lt;br&gt;
agent unless you override them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Different language models have differing proficiencies at various tasks. Some&lt;br&gt;
focus more on generating program code while others are better at&lt;br&gt;
writing long-form text. It can be beneficial to experiment with different&lt;br&gt;
combinations in a workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Override the temperature and set it higher (&amp;gt;0.75).&lt;/p&gt;

&lt;p&gt;You can also override the system message (currently blank) to add personality&lt;br&gt;
or specific instructions for the current task. Instructions can range from&lt;br&gt;
formatting requirements to strategies for executing a task or admonitions&lt;br&gt;
about avoiding particular pitfalls.&lt;/p&gt;

&lt;p&gt;We won't provide any instructions this time. However, let's give the agent a&lt;br&gt;
role to play, to impart some flavor to the generated result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9x5ymvyuyjh1zcj3esf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9x5ymvyuyjh1zcj3esf.png" alt="chat two" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a second &lt;em&gt;LLM › Chat&lt;/em&gt; node and connect it to the new agent.&lt;/p&gt;

&lt;p&gt;Since we are asking it to act on prior responses, you will need to connect its&lt;br&gt;
conversation input to the previous &lt;em&gt;Chat&lt;/em&gt; node, &lt;strong&gt;NOT&lt;/strong&gt; the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Connecting it to the &lt;em&gt;Start&lt;/em&gt; node would create a parallel conversation that&lt;br&gt;
omits the previous agent's response.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, connect it to another Preview node so we can compare the results&lt;br&gt;
side-by-side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Incremental Execution
&lt;/h2&gt;

&lt;p&gt;Notice that the new nodes do not yet have status indicators, in contrast to&lt;br&gt;
the old nodes. This shows which nodes will be executed during an incremental&lt;br&gt;
run. Nodes that already have a status indicator will be skipped, saving time&lt;br&gt;
and avoiding extra API fees. This allows you to quickly try variations on node&lt;br&gt;
parameters or different combinations of nodes without redundant work.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Selected nodes and the node under the cursor are also re-executed during an&lt;br&gt;
incremental run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can trigger an incremental run with the shortcut &lt;code&gt;Ctrl+R&lt;/code&gt; (see shortcuts&lt;br&gt;
with the &lt;code&gt;?&lt;/code&gt; key).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The Run button in the command palette will trigger a full re-run of every&lt;br&gt;
node in the workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frielwltmj8sy7s24hynf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frielwltmj8sy7s24hynf.png" alt="run two" width="780" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the "normal" &lt;em&gt;Chat&lt;/em&gt; node does not rerun incrementally (assuming you&lt;br&gt;
haven't changed, selected or hovered over it).&lt;/p&gt;

&lt;p&gt;Try changing the second prompt (e.g. haiku → sonnet) and notice the status&lt;br&gt;
indicator disappears.&lt;/p&gt;

&lt;p&gt;Another incremental run should only re-execute that node.&lt;/p&gt;

&lt;p&gt;If you change the second Agent node, one of two things will happen, depending&lt;br&gt;
on whether the &lt;code&gt;cascade&lt;/code&gt; setting is enabled. When &lt;code&gt;cascade&lt;/code&gt; is enabled, a&lt;br&gt;
status reset will propagate from the changed node to all of its&lt;br&gt;
descendants.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Without &lt;code&gt;cascade&lt;/code&gt; only the Agent node's status is cleared. To have the Chat&lt;br&gt;
node re-run incrementally, you will need to hover over or select it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Chat Sessions
&lt;/h2&gt;

&lt;p&gt;We've been using the Workflow tab exclusively so far. If you go to the Chat&lt;br&gt;
tab, notice that none of the messages appear. That's because the workflow&lt;br&gt;
hasn't added anything to the session. The fix is simple: connect the last Chat&lt;br&gt;
node to the Finish node.&lt;/p&gt;

&lt;p&gt;Why didn't we do this from the beginning? Try another incremental run. You&lt;br&gt;
should get an error about unrelated histories. This is because the&lt;br&gt;
incremental state has an old copy of the conversation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Internally, replacing the stale copy with the current conversation would&lt;br&gt;
invalidate the entire workflow state. Rewinding and using the stale&lt;br&gt;
conversation is not permitted either, since workflows are not allowed to make&lt;br&gt;
destructive changes to the session: they can only append content, and&lt;br&gt;
overwriting the newer messages in the session with the stale ones would remove&lt;br&gt;
existing history.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This restriction only applies to workflows. From the Session tab you can perform&lt;br&gt;
various changes to the conversation history.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Why aren't my chats saved? See the Troubleshooting section below on setting an&lt;br&gt;
active session.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While what we've seen here isn't particularly groundbreaking or useful, now&lt;br&gt;
you should be comfortable with using the editor to build workflows. Next, we'll&lt;br&gt;
explore generating and manipulating structured data, before moving on to tools,&lt;br&gt;
subgraphs and iteration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Workflow gets stuck on Chat node
&lt;/h3&gt;

&lt;h4&gt;
  
  
  API Keys
&lt;/h4&gt;

&lt;p&gt;API keys specific to each provider must be defined in the &lt;a href="https://github.com/0xPlaygrounds/rig/blob/main/skills/rig/references/providers.md" rel="noopener noreferrer"&gt;environment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately, &lt;a href="https://rig.rs/" rel="noopener noreferrer"&gt;rig&lt;/a&gt;, the underlying library used to connect to AI&lt;br&gt;
providers, usually halts the execution thread instead of triggering a&lt;br&gt;
recoverable error.&lt;/p&gt;

&lt;p&gt;Changes to the environment will not take effect until the application restarts.&lt;/p&gt;
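&lt;p&gt;As a minimal sketch, keys can be exported in the shell before launching the app. The variable names below follow common provider conventions (check your provider's documentation), and the values are placeholders:&lt;/p&gt;

```shell
# Placeholder values -- substitute real keys from each provider's dashboard.
# Variable names follow common provider conventions; check your provider's docs.
export OPENROUTER_API_KEY="sk-or-v1-0000"
export ANTHROPIC_API_KEY="sk-ant-0000"

# Launch Aerie from this same shell so it inherits the variables, e.g.:
#   ./aerie-x86_64.AppImage
```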

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
While it is common practice to use system/account-wide environment variables,&lt;br&gt;
this raises security concerns. One alternative is to use&lt;br&gt;
&lt;a href="https://direnv.net/" rel="noopener noreferrer"&gt;direnv&lt;/a&gt; to limit their scope by directory. However, this&lt;br&gt;
still requires API keys to be stored as plain text.&lt;/p&gt;

&lt;p&gt;A more secure option is to use a password manager/vault application with&lt;br&gt;
console integration, like &lt;a href="https://bitwarden.com" rel="noopener noreferrer"&gt;Bitwarden&lt;/a&gt;,  &lt;a href="https://www.hashicorp.com/en/products/vault" rel="noopener noreferrer"&gt;vault&lt;/a&gt;,  &lt;a href="https://www.passwordstore.org/" rel="noopener noreferrer"&gt;pass&lt;/a&gt;, etc. Some will allow you to launch  applications with environment variables pulled from secure storage.&lt;/p&gt;
&lt;/blockquote&gt;
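&lt;p&gt;For instance, a &lt;code&gt;.envrc&lt;/code&gt; for direnv can pull the key from &lt;code&gt;pass&lt;/code&gt; at load time, keeping the secret itself out of plain text. This is a sketch: the &lt;code&gt;api/openrouter&lt;/code&gt; entry name is hypothetical and depends on how your password store is organized.&lt;/p&gt;

```shell
# .envrc -- loaded by direnv inside this directory (after `direnv allow`).
# The key is fetched from pass on demand; "api/openrouter" is a
# hypothetical entry name in your password store.
export OPENROUTER_API_KEY="$(pass show api/openrouter)"
```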

&lt;h4&gt;
  
  
  Enable streaming
&lt;/h4&gt;

&lt;p&gt;In some cases, a provider may be actively generating a response, but the&lt;br&gt;
response itself is large, taking minutes to complete. Most providers support&lt;br&gt;
streaming individual tokens, allowing you to see the response as it is&lt;br&gt;
generated rather than waiting for it to finish.&lt;/p&gt;

&lt;h4&gt;
  
  
  Change providers/models
&lt;/h4&gt;

&lt;p&gt;Some providers have high latency or unreliable connections. If one does not&lt;br&gt;
respond in a reasonable amount of time, try another.&lt;/p&gt;

&lt;p&gt;Be aware that some providers (&lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;openrouter&lt;/a&gt; for instance) proxy to other providers. Different models may run on different providers.&lt;/p&gt;

&lt;p&gt;Even on a single provider, models may be allocated different hardware resources&lt;br&gt;
to handle different requirements or due to popularity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow gets stuck elsewhere
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Check console logs
&lt;/h4&gt;

&lt;p&gt;This application is still under active development. Most errors will trigger an&lt;br&gt;
error dialog, but some may cause the run to fail silently. The console may&lt;br&gt;
provide warnings or other indications of what has failed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can't edit node
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Workflow is running or frozen
&lt;/h4&gt;

&lt;p&gt;The workflow can't be edited while it is running. Wait for it to complete or&lt;br&gt;
use the &lt;strong&gt;Stop&lt;/strong&gt; button to interrupt it.&lt;/p&gt;

&lt;p&gt;The editor can be frozen/unfrozen manually or while examining edit history.&lt;br&gt;
This prevents unintended changes when browsing through the Undo stack.&lt;/p&gt;

&lt;p&gt;To unfreeze the workflow, toggle the button on the control palette.&lt;/p&gt;

&lt;h4&gt;
  
  
  (Dis)connect input pins
&lt;/h4&gt;

&lt;p&gt;Some fields can take values from controls on the node as well as input wires.&lt;/p&gt;

&lt;p&gt;The controls will not be visible unless the wire is disconnected.&lt;/p&gt;

&lt;h4&gt;
  
  
  Toggle optional controls
&lt;/h4&gt;

&lt;p&gt;Some node fields are optional. For example, fields that might override a&lt;br&gt;
previous value will need to be enabled to be edited.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chat history disappears on restarting app
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Set active session
&lt;/h4&gt;

&lt;p&gt;By default, no session is active. When no session is active (denoted by an&lt;br&gt;
empty value in the session selection), chats are discarded when the app exits.&lt;br&gt;
To save an ongoing chat, rename the session. The active session is reloaded&lt;br&gt;
the next time you start the app.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Fine-tuning models is a different matter, with steep data and resource requirements. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;A nest of a bird of prey perched high on a cliff or tree top. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>automation</category>
      <category>rust</category>
    </item>
  </channel>
</rss>
