<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Paton Wong</title>
    <description>The latest articles on DEV Community by Paton Wong (@patonw).</description>
    <link>https://dev.to/patonw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864318%2F068dcd88-97a9-456a-9716-5df582e834c6.png</url>
      <title>DEV Community: Paton Wong</title>
      <link>https://dev.to/patonw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/patonw"/>
    <language>en</language>
    <item>
      <title>Subgraphs and Iteration: Once more, from the top</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Mon, 13 Apr 2026 16:34:26 +0000</pubDate>
      <link>https://dev.to/patonw/subgraphs-and-iteration-once-more-from-the-top-78n</link>
      <guid>https://dev.to/patonw/subgraphs-and-iteration-once-more-from-the-top-78n</guid>
      <description>&lt;p&gt;In the previous example we explored how to use an LLM to extract factual claims and how to check an individual claim against the rest of the context.&lt;/p&gt;

&lt;p&gt;To check all claims there are two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask the LLM to check all claims in one shot&lt;/li&gt;
&lt;li&gt;Check each claim individually and combine the responses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the first option, many LLMs end up skipping claims or mixing up evidence. It is often no more effective than asking someone to glance over the material and rely on intuition to "feel" whether the claims are grounded. Reasoning models can do better by breaking down the list during reasoning, but this still lacks rigor.&lt;/p&gt;

&lt;p&gt;To check claims individually, however, we need to overcome a particular hurdle. By design, each node in a workflow is executed at most once per run. This ensures workflows complete within a finite amount of time, bounded by the total number of nodes.&lt;/p&gt;
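&lt;p&gt;The run-once guarantee can be pictured as a toy scheduler: each node fires exactly once, after its inputs, so the run length is bounded by the node count. This is a minimal illustrative sketch, not Aerie's actual engine, and it assumes an acyclic graph:&lt;/p&gt;

```python
def run_workflow(nodes, deps):
    """Execute each node at most once, in dependency order (assumes a DAG)."""
    done, order = set(), []

    def visit(n):
        if n in done:           # a node never runs twice
            return
        for d in deps.get(n, []):
            visit(d)            # run dependencies first
        done.add(n)
        order.append(n)

    for n in nodes:
        visit(n)
    return order

order = run_workflow(["start", "agent", "finish"],
                     {"agent": ["start"], "finish": ["agent"]})
```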

&lt;p&gt;Rather than adding control structures analogous to for/while loops, workflows use a special node that acts on entire lists: Iterative Subgraph.&lt;/p&gt;

&lt;p&gt;From an outside perspective, this node takes a list as input and is executed once during the run, producing a new list. Internally, it repeats a subtask over every item in the list and gathers the results into an output list.&lt;/p&gt;

&lt;p&gt;You determine the subtask it performs on the list items by creating a nested graph -- a subgraph.&lt;/p&gt;

&lt;p&gt;Aerie supports two kinds of subgraphs: simple and iterative. Simple graphs do little more than hold nodes to simplify the parent workflow. Even without additional behavior they are useful for organization. Iterative subgraphs execute their contents on each item of a list.&lt;/p&gt;
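&lt;p&gt;Conceptually, an iterative subgraph behaves like mapping a subtask over a list. A rough Python sketch (the function and names here are illustrative, not Aerie's API):&lt;/p&gt;

```python
def run_iterative_subgraph(items, subtask):
    """Run the subgraph body once per list item and gather the results."""
    results = []
    for item in items:              # the subgraph executes once per item
        results.append(subtask(item))
    return results                  # the parent sees a single output list

claims = ["claim A", "claim B"]
checked = run_iterative_subgraph(claims, lambda c: f"checked: {c}")
```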

&lt;h2&gt;
  Subgraphs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8bps1kip0qlr5j2zfa1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8bps1kip0qlr5j2zfa1.png" alt="subgraph node" width="351" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are some common rules governing all types of subgraph nodes. To edit the contents of a subgraph, double-click the icon in the body of the node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06pyv637rgsrdcidt4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06pyv637rgsrdcidt4o.png" alt="subgraph controls" width="201" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will open the subgraph editor, which has a modified control palette. The top part shows the subgraph stack: subgraphs can contain their own subgraphs, and each button in the stack represents a level in this hierarchy. Clicking on one of them switches the editor to the corresponding ancestor graph.&lt;/p&gt;

&lt;p&gt;Inside a subgraph, the editor contains a set of nodes and wires, including its own Start and Finish nodes.&lt;/p&gt;

&lt;p&gt;The inputs and outputs to the subgraph node are determined by the pins on its internal Start and Finish nodes. You can customize these pins within the subgraph editor, unlike the Start and Finish nodes of top-level workflows.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
During a workflow run, when a subgraph node is executed, the entirety of its contents is executed before the parent resumes. When a subgraph node is finished, none of its internal nodes will run in the future. The execution of nodes inside the subgraph does not interleave with nodes of the parent.&lt;/p&gt;

&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
At the moment, subgraphs do not run incrementally. Any changes inside the subgraph reset the entire node.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Continuing the claims checking example, let's create a &lt;em&gt;Subgraph › Simple&lt;/em&gt; node to hold the first agent. This isn't strictly necessary for such a small number of nodes, but it will let us get familiar with the subgraph editor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4wp70ii5rm76fo3p922.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4wp70ii5rm76fo3p922.png" alt="selection" width="753" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the nodes containing the agent, schema and structured output and copy them into the subgraph.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqoo47q368rcpf67h2oc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqoo47q368rcpf67h2oc.png" alt="sub-input" width="800" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The default text input can be wired directly into the prompt pin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk7jzagu0m2ndq71yeo4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwk7jzagu0m2ndq71yeo4.png" alt="output trash" width="336" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The output needs to be JSON, however, so we must replace the default one. First, double-click the label to edit the pin on the Finish node, then click the trash icon to remove it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b3csuckhzlvgchgyfn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2b3csuckhzlvgchgyfn4.png" alt="add output" width="340" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then add a JSON output using the &lt;code&gt;+new&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8owxt0fsmgvazw1676f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8owxt0fsmgvazw1676f.png" alt="simple-complete" width="687" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect it to the Structured Output node to complete the subgraph.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblsygfcwbekjnq112osn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fblsygfcwbekjnq112osn.png" alt="subgraph replace" width="759" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can replace the nodes in the parent with the subgraph by rewiring the upstream and downstream nodes. You can rename the subgraph by double-clicking its title to document its purpose.&lt;/p&gt;

&lt;p&gt;What did that buy us? Instead of a tangle of nodes near each other, we have a single node that represents a logical unit of work. The agent and schema are neatly hidden away.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra3k3cgvm2ybapxgp3k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra3k3cgvm2ybapxgp3k1.png" alt="alternative" width="504" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the flip side, if you want to compare or modify the agents, for instance, you now have to navigate into a subgraph. Alternatively, you could add an input pin for the agent and pass it in from the top-level workflow.&lt;/p&gt;

&lt;p&gt;Deciding what to put inside the subgraph vs what to pass as inputs is a balancing act that will change depending on the situation.&lt;/p&gt;

&lt;h2&gt;
  Iteration
&lt;/h2&gt;

&lt;p&gt;Iterative subgraphs operate on list items. This leads to changes in how inputs and outputs are handled.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node&lt;/th&gt;
&lt;th&gt;Subgraph&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbg9dm2x0df8j5gll8jpr.png" alt="outside" width="346" height="467"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvumq3qpyoqi69zc4nps.png" alt="inside" width="510" height="298"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Inside the subgraph, the Start node exposes individual values (e.g. text, message, number). On the Iterative Subgraph node in the parent, however, these are represented as lists. The parent sends in a list of text strings, for instance, and the subgraph runs once for each string.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
You can distinguish between single item (circle) and list (square) pins by shape.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Similarly, the Finish node inside the subgraph can receive individual values which the subgraph node collects and presents to the parent as a list of values.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnb8r6u20revouicbd6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnb8r6u20revouicbd6p.png" alt="create iterator" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;em&gt;Subgraph › Iterative&lt;/em&gt; node. Copy the agent, context, schema and structured nodes from the workflow and paste them into the subgraph editor. Connect the input to the Structured node.&lt;/p&gt;

&lt;p&gt;Similarly to the Simple Subgraph, replace the text output pin with a JSON output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cwaxzji76931ul0b9eb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0cwaxzji76931ul0b9eb.png" alt="input doc" width="689" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may have noticed that the context is now blank. Add a new text field to the Start node and rename it "document". Connect it to the context.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
You could also reorder pins to minimize crossing wires if you wish. We'll leave them as is during this workflow, however.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Back in the parent workflow, we can connect the Wikipedia article to the &lt;code&gt;document&lt;/code&gt; pin, even though it is a single item, while the subgraph takes a list. In this instance, the subgraph will broadcast the item into every iteration such that all executions of the subgraph will see the same value for that pin.&lt;/p&gt;
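&lt;p&gt;The broadcasting behavior can be sketched in Python. The helper below is hypothetical, not Aerie's implementation: a single (non-list) input is simply repeated so every iteration sees the same value, while list inputs are consumed item by item.&lt;/p&gt;

```python
def broadcast(value, n):
    """Repeat a single item so every iteration sees the same value."""
    return [value] * n

claims = ["bees pollinate crops", "hives are man-made"]
documents = broadcast("full article text", len(claims))

# Each iteration receives one claim paired with the same document.
pairs = list(zip(claims, documents))
```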

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqk8fcrg1dcpszu8cqba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqk8fcrg1dcpszu8cqba.png" alt="unwrap claims" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, the claims are currently wrapped inside JSON, and automatic conversion isn't supported yet. We can use a &lt;em&gt;JSON › Unwrap JSON&lt;/em&gt; node, however. Changing the Transform filter to &lt;code&gt;[ .claims[1] ]&lt;/code&gt; extracts the second claim but places it inside a new list, letting us test a single value while still supplying a list.&lt;/p&gt;
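&lt;p&gt;The effect of the two filters can be approximated in Python, using a small hypothetical parsed response:&lt;/p&gt;

```python
data = {"claims": ["first claim", "second claim", "third claim"]}

# jq: .claims[1]     -> a single string (arrays are 0-indexed)
single = data["claims"][1]

# jq: [ .claims[1] ] -> the same string wrapped in a one-element list,
# which is the shape the iterative subgraph expects as input
wrapped = [data["claims"][1]]
```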

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Developing the workflow against a single item saves a lot of time and API credits.&lt;/p&gt;

&lt;p&gt;Adjust the agent settings and run the workflow until you are satisfied that the new subgraph works correctly using only one or two list items before expanding it to the full list.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh26p7gpnom0dt54vat7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh26p7gpnom0dt54vat7q.png" alt="progress" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once everything is in order, change the filter to just &lt;code&gt;.claims&lt;/code&gt; to iterate over the whole list. Now when you run the workflow, notice how the progress bar on the Subgraph node tracks each iteration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjojcjjzxu969yr62stdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjojcjjzxu969yr62stdc.png" alt="done" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it finishes, you should have a list of JSON values, or an error, depending on how well the grounding agent behaves.&lt;/p&gt;

&lt;p&gt;You might have to go back and adjust the agent settings again until it produces satisfactory results.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ You may have noticed the &lt;code&gt;parallel&lt;/code&gt; option on the Iterative Subgraph. This will speed up the workflow, but might cause issues with rate limiting. Hold off on using it until we cover rate-limiting via tools.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With iteration in place, the claims checking workflow is complete. Given an article, the workflow will systematically verify that its most important claims are supported by citations in the text. More broadly, this pattern of extracting, iterating, then combining will be a useful building block for many tasks.&lt;/p&gt;

&lt;p&gt;However, there are still ways to improve this workflow. One issue is that its results are only visible when running in the editor application. Another problem can occur if we try to process larger collections: the workflow can overwhelm the LLM provider and get blocked.&lt;/p&gt;

&lt;p&gt;In the next installments, we'll see how to overcome these limitations by rate limiting via tool calls and creating outputs.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
    <item>
      <title>Citation Needed: Structured data extraction workflows</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Fri, 10 Apr 2026 19:04:13 +0000</pubDate>
      <link>https://dev.to/patonw/citation-needed-structured-data-extraction-workflows-15dm</link>
      <guid>https://dev.to/patonw/citation-needed-structured-data-extraction-workflows-15dm</guid>
      <description>&lt;p&gt;In the previous article we explored how to generate and use structured data in a workflow. Now, let's take it a step further.&lt;/p&gt;

&lt;p&gt;We'll build a workflow that checks whether an article provides evidence to support its claims (but not whether the evidence itself is valid). Rather than using this to fact check articles in the wild, this might be useful for critiquing your own writing before submission or checking generated text for hallucinations.&lt;/p&gt;

&lt;p&gt;This task is impractical to automate without generative language models. Natural language processing pipelines might be able to extract or categorize entities and phrases from a text, but this task requires a degree of reading comprehension not available without larger language models.&lt;/p&gt;

&lt;p&gt;Furthermore, while many language models are capable of performing individual steps, the overall process requires more rigor and discipline than they are trained for. Frontier models might handle moderately complex tasks, but verifying that they haven't hallucinated the results requires additional work on par with this workflow.&lt;/p&gt;

&lt;p&gt;What we can do instead is split the task into distinct steps: extracting claims then checking each of them. In this article we'll look into the first part using our old friend the &lt;em&gt;LLM › Structured&lt;/em&gt; node.&lt;/p&gt;

&lt;h2&gt;
  Claims Schema
&lt;/h2&gt;

&lt;p&gt;In the Structured Generation tutorial we saw how to generate a single structured entry from scratch. LLMs are capable of handling much more complexity. This time we will ask the model to determine which phrases in a text are factual claims and place them into a list. Furthermore, we ask the model to rank the importance of each claim, holistically, when deciding whether to include it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1vjwbsap766reih42do.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1vjwbsap766reih42do.png" alt="structured" width="800" height="591"&gt;&lt;/a&gt; Like before, create a new workflow and swap out the normal &lt;em&gt;Chat&lt;/em&gt; for a &lt;em&gt;Structured&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;Create a &lt;em&gt;Parse JSON&lt;/em&gt; node and connect it to the &lt;code&gt;schema&lt;/code&gt; input of the &lt;em&gt;Structured&lt;/em&gt; node. Fill it with this schema conveniently generated by an LLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"$schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://json-schema.org/draft-07/schema#"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ClaimsList"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"object"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"claims"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"array"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"minItems"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"maxItems"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A list of claim strings. The list must contain at least one and at most five items."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"required"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"claims"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"additionalProperties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Technically, an array at the top-level would be a valid schema.&lt;/p&gt;

&lt;p&gt;However, many models have trouble generating data with that format. To ensure compatibility between providers, wrap the array in an object. Then extract the list later using JSON transformations.&lt;/p&gt;
&lt;/blockquote&gt;
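&lt;p&gt;The difference between the two shapes, and recovering the inner list afterwards, looks like this in Python (sample data only):&lt;/p&gt;

```python
import json

# A bare top-level array is technically a valid schema target...
bare = json.loads('["claim A", "claim B"]')

# ...but wrapping it in an object is more reliable across providers.
wrapped = json.loads('{"claims": ["claim A", "claim B"]}')

# The list is recovered later with a JSON transform (jq: .claims).
recovered = wrapped["claims"]
```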

&lt;h2&gt;
  Instructions
&lt;/h2&gt;

&lt;p&gt;In the previous example we combined instructions with dynamic data into the prompt. This time we'll reserve the system message for instructions and inject the data in a separate step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmgrfctro64z7l89lgxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmgrfctro64z7l89lgxj.png" alt="instructions" width="521" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By partitioning the instructions and the data it becomes much easier to reuse the workflow on new inputs. We can use the &lt;code&gt;system&lt;/code&gt; message field of the &lt;em&gt;Agent&lt;/em&gt; node for instructions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Follow these instructions exactly.
Do not respond directly to the user.
Do not hallucinate the final answer.

## Instructions

Extract the key factual claims in the user's statement and format them into a list (5 items or fewer).
Ensure that each claim can stand alone without additional context to make sense of it.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
You should experiment with variations on the instructions, particularly the preamble to optimize it for your preferred language model. I find this combination effective with the nemotron family and various other open models.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The system message is sent once at the beginning of each request. Theoretically, the LLM should pay special attention to it. Regardless, this avoids repeating the instructions in every user prompt of a conversation, even though the entire conversation is sent with every request. &lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;
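&lt;p&gt;In chat-completion style APIs, a request is a list of messages; the system message appears once at the head even though the whole conversation is resent each turn. A schematic sketch (the field names follow the common OpenAI-style shape, which your provider may vary):&lt;/p&gt;

```python
SYSTEM = "Follow these instructions exactly. ..."

def build_request(history, new_prompt):
    """Assemble one request: a single system message plus the conversation."""
    return [{"role": "system", "content": SYSTEM}] + history + [
        {"role": "user", "content": new_prompt}
    ]

req = build_request(
    [{"role": "user", "content": "turn 1"},
     {"role": "assistant", "content": "reply 1"}],
    "turn 2",
)
```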

&lt;h2&gt;
  Input Document
&lt;/h2&gt;

&lt;p&gt;The input document for a workflow will typically be supplied by the runner. While developing a workflow, however, it's convenient to create a node for a predefined text to take advantage of iterative execution. In the final version of the workflow we can delete this node and connect to the &lt;code&gt;input&lt;/code&gt; of the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahzomg9ga581m0gd66p3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahzomg9ga581m0gd66p3.png" alt="input doc" width="759" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a &lt;em&gt;Value › Plain Text&lt;/em&gt; node to hold the article content.&lt;/p&gt;

&lt;p&gt;Connect it to the &lt;code&gt;prompt&lt;/code&gt; input of the &lt;em&gt;Structured&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;Paste the contents of an article into the text field. I'm using a Wikipedia article about &lt;a href="https://en.wikipedia.org/w/index.php?title=Apiary&amp;amp;action=edit" rel="noopener noreferrer"&gt;apiaries&lt;/a&gt; (artificial beehives).&lt;/p&gt;

&lt;h2&gt;
  Claim Checking
&lt;/h2&gt;

&lt;p&gt;We now have a workflow that generates a list of claims from a text. Our eventual goal is to have each claim checked individually against the original text, which will be supplied to the language model in a context document.&lt;/p&gt;

&lt;p&gt;However, before learning how to check every item, we should first explore how to check a single item.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap4i8oraquzc30i6l5in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap4i8oraquzc30i6l5in.png" alt="list indexing" width="621" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, let's pull a single claim out of the structured generation using &lt;em&gt;JSON › Transform JSON&lt;/em&gt;. This node uses a &lt;a href="https://gedenkt.at/jaq/manual/" rel="noopener noreferrer"&gt;jq filter&lt;/a&gt; to manipulate JSON.&lt;/p&gt;

&lt;p&gt;The filter &lt;code&gt;.claims[1]&lt;/code&gt; tells it to access the "claims" field and return the second element (0-indexed).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Ask your favorite frontier LLM for help writing &lt;code&gt;jq&lt;/code&gt; filters from sample data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Add a second &lt;em&gt;Agent&lt;/em&gt; node with these instructions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Follow these instructions exactly.
Do not respond directly to the user.
Do not hallucinate the final answer.

## Instructions

Help the user analyze the article in the context file.
The user is examining individual claims that the article makes.

Determine whether the context provides supporting evidence for the claim stated by the user.
List the reference or citation provided by the article.

DO NOT interpret the article as evidence for a claim made by the user.
The user is simply examining a claim made by the article.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri7s64nkknp53edizqty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fri7s64nkknp53edizqty.png" alt="context documents" width="739" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How can we provide the article as context for the LLM? There are several ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inject it into the system message using templating&lt;/li&gt;
&lt;li&gt;Provide it as a user message in the conversation&lt;/li&gt;
&lt;li&gt;Use a &lt;em&gt;LLM › Context&lt;/em&gt; node&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The third option is cleanest since it provides a clear demarcation between instructions, context and prompt. The &lt;em&gt;Context&lt;/em&gt; node sits between the agent and a chat node, augmenting the agent by injecting its contents into requests made by the agent.&lt;/p&gt;

&lt;p&gt;Connect the &lt;em&gt;Plain Text&lt;/em&gt; node containing the article to the &lt;code&gt;context&lt;/code&gt; input. In the final version of the workflow, this should be connected to the &lt;code&gt;input&lt;/code&gt; pin of the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12a9poubk5ta1j516smm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12a9poubk5ta1j516smm.png" alt="unstructured check" width="676" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can use a simple &lt;em&gt;Chat&lt;/em&gt; node to do a quick spot check on how the context affects the language model response. However, to facilitate checking the entire collection, the responses for each item should be structured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structured Check
&lt;/h2&gt;

&lt;p&gt;Replace the &lt;em&gt;Chat&lt;/em&gt; node with a &lt;em&gt;Structured&lt;/em&gt; node, connecting it to the &lt;em&gt;Context&lt;/em&gt; and &lt;em&gt;Transform&lt;/em&gt; nodes.&lt;/p&gt;

&lt;p&gt;Use this schema for the claims checking &lt;em&gt;Structured&lt;/em&gt; node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"$schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://json-schema.org/draft/2020-12/schema"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A factual claim with evidence from citations or references"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"object"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"required"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"claim"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"grounding"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"claim"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"the original claim made by the article"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"grounding"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"enum"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"not a claim"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"unsupported"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"fully supported"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The level of support for the claim provided by citations and references. If the provided text is actually a definition or something other than a claim, then &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;not a claim&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"evidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"array"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The citations and references that support the claim. Empty if the claim is not supported."&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
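A conforming response might look like the hypothetical instance below; the claim and citation are illustrative, not real output, and the asserts spell out what the schema guarantees:

```python
# Illustrative response shaped by the claims-checking schema above.
result = {
    "claim": "Regular exercise reduces the risk of heart disease.",
    "grounding": "fully supported",
    "evidence": ["Smith et al. 2019, cited in paragraph 3"],
}

# The schema marks "claim" and "grounding" as required...
assert "claim" in result and "grounding" in result
# ...and constrains "grounding" to one of three enum values.
assert result["grounding"] in ["not a claim", "unsupported", "fully supported"]
```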



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncp74sdo4w5erwaj4aat.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncp74sdo4w5erwaj4aat.png" alt="structured check" width="748" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect the unwrapped claim to the prompt and run.&lt;/p&gt;

&lt;p&gt;By changing the claim index we can see how it handles different claims and statements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial we've explored using language models to extract structured data from plain text, then transforming that data for further processing. The workflow is still incomplete, since we've only checked a single claim.&lt;/p&gt;

&lt;p&gt;Before we can go any further, we'll need to learn about iterating over lists using subgraphs. This will allow us to check every claim individually, then draw a conclusion by combining all results.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Some LLM providers support caching portions of the request. However, since this behavior isn't standardized across providers yet, aerie does not support it. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
    <item>
      <title>Agentic tool use in Aerie workflows</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Thu, 09 Apr 2026 19:21:12 +0000</pubDate>
      <link>https://dev.to/patonw/agentic-tool-use-in-aerie-workflows-4l4b</link>
      <guid>https://dev.to/patonw/agentic-tool-use-in-aerie-workflows-4l4b</guid>
<description>&lt;p&gt;A key defining feature of agents is the ability to use tools to interact with their surrounding environment. Interaction can include gathering information from external sources or triggering actions.&lt;/p&gt;

&lt;p&gt;With respect to software agents, the environment does not necessarily refer to a physical environment or the entire world. Rather, it often refers to the host computer or a set of remote services like reservation systems, knowledge bases, and e-commerce platforms.&lt;/p&gt;

&lt;p&gt;An agent uses some kind of program logic to automatically decide which tools to use and how to call them. In the case of AI agents, we use language models as the decision mechanism.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh3w69o5qc7yio02u2wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh3w69o5qc7yio02u2wj.png" alt="range step" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In workflows, we can expose tools to language models through &lt;em&gt;Chat&lt;/em&gt; and &lt;em&gt;Structured&lt;/em&gt; nodes. &lt;em&gt;Agent&lt;/em&gt; nodes supply a set of tools, along with other parameters that are relevant to the specific subtask.&lt;/p&gt;

&lt;p&gt;In this example, we will create simple weather agents using live weather services over the Internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool Providers
&lt;/h2&gt;

&lt;p&gt;AI applications can implement and expose tools directly to language models. However, translating program functions into tool calls, invoking the implementations, and transforming the results can be tedious and error-prone.&lt;/p&gt;

&lt;p&gt;Instead, modern agentic systems use tools via a &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; interoperability layer. AI applications employ client libraries to connect to MCP servers.&lt;/p&gt;

&lt;p&gt;MCP servers bundle related tools and provide a model/language agnostic interface. They can be run locally or hosted by remote services. Client applications can connect to them through a local STDIO transport, or over an HTTP connection to internal network resources or to the Internet.&lt;/p&gt;

&lt;p&gt;A wide variety of existing MCP servers can be found in online directories. The majority of them are focused on wrapping a particular application or service, but many general-purpose, utility-centric servers exist.&lt;/p&gt;

&lt;p&gt;Some MCP server directories: &lt;a href="https://github.com/modelcontextprotocol/servers" rel="noopener noreferrer"&gt;https://github.com/modelcontextprotocol/servers&lt;/a&gt;, &lt;a href="https://mcpservers.com/" rel="noopener noreferrer"&gt;https://mcpservers.com/&lt;/a&gt;, &lt;a href="https://mcpindex.net/en" rel="noopener noreferrer"&gt;https://mcpindex.net/en&lt;/a&gt;, &lt;a href="https://mcpserverdirectory.org/" rel="noopener noreferrer"&gt;https://mcpserverdirectory.org/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F238ql9us7a5myg4srhbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F238ql9us7a5myg4srhbx.png" alt="tool-tab" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In aerie, tool providers are managed from the Tools tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyhb0fx65v5obcxterd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyhb0fx65v5obcxterd5.png" alt="add-tools" width="239" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can create or import tool providers using the buttons at the top.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
There are many tool provider configurations in &lt;code&gt;examples/tools/nix&lt;/code&gt; that might be of use. Use the import button to add them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;STDIO servers&lt;/strong&gt; are local executables that are launched and closed with the host application. Multiple copies in different applications are independent instances, but may use shared files or services. For instance, most database MCP servers would connect to a running database engine, rather than launching an embedded database instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTP servers&lt;/strong&gt; are typically remote services hosted by a SaaS company or cloud platform. You will need individual credentials to use these services, usually in the form of an API key.&lt;/p&gt;
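As a rough illustration, a STDIO provider definition could look something like this (the field names and the `${...}` placeholder syntax are hypothetical, not aerie's actual configuration format; see the bundled examples for the real shape):

```json
{
  "name": "open-meteo",
  "transport": "stdio",
  "command": "uvx",
  "args": ["mcp_weather_server"],
  "env": {
    "API_KEY": "${WEATHER_API_KEY}"
  }
}
```

Note the credential is referenced from an environment variable rather than stored inline.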

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr003swx8ts35dvdbb6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjr003swx8ts35dvdbb6o.png" alt="edit provider" width="329" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the provider from the list to edit its configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3bk3zjp17ar6l834wot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3bk3zjp17ar6l834wot.png" alt="provider export" width="336" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the context menu on a provider, you can remove it or export its settings.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
Do not include API keys, passwords, and other personal information in the tool provider configuration, since those will be included in the export. Instead, set environment variables and reference them in the configuration.&lt;/p&gt;

&lt;p&gt;In a sensitive environment or with high-value credentials, do not set environment variables globally. Rather, use a secrets manager to decrypt them from a vault and set them only for the process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d4tg9px4vizna37umc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2d4tg9px4vizna37umc0.png" alt="inspect tool" width="339" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select one of a provider's tools to inspect its schema.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weather Tools
&lt;/h3&gt;

&lt;p&gt;For these example workflows, we will fetch weather data from &lt;a href="https://open-meteo.com/" rel="noopener noreferrer"&gt;open-meteo&lt;/a&gt;, since it does not require API keys. There is no official MCP server, so we will use a third-party one: &lt;a href="https://github.com/isdaniel/mcp_weather_server" rel="noopener noreferrer"&gt;MCP Weather Server&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the Tools tab, import &lt;code&gt;examples/tools/nix/open-meteo.mcp&lt;/code&gt;. By default this will use the &lt;a href="https://nixos.org/" rel="noopener noreferrer"&gt;nix&lt;/a&gt; package manager to load and run &lt;a href="https://docs.astral.sh/uv/guides/tools/" rel="noopener noreferrer"&gt;uvx&lt;/a&gt;. Alternatively, you can invoke &lt;code&gt;uvx&lt;/code&gt; directly with the sole argument &lt;code&gt;mcp_weather_server&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
For other capabilities, explore other included example tools like tavily for web search (requires an account) or qdrant for vector indexing/search.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Weather Chat
&lt;/h2&gt;

&lt;p&gt;When an agent supplies tools to a &lt;em&gt;Chat&lt;/em&gt; node, the language model must decide if a tool is necessary to complete a particular request. If it is, it must also decide which tool to call and what arguments to pass to it.&lt;/p&gt;

&lt;p&gt;The results of the tool call are sent back to the model for a follow-up request to interpret the results. This generally happens behind the scenes automatically. Based on the results, the model might decide to call additional tools or finish its response to the user.&lt;/p&gt;

&lt;p&gt;For example, in planning a short trip for a user, an LLM might first look up the current date, search for activities and events in a certain city, then check the weather for that city to ensure that conditions will be favorable.&lt;/p&gt;
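Conceptually, the host-side loop behaves something like this sketch (the model and tool here are stubs, and `run_chat` and `stub_model` are made-up names; a real host talks to an LLM API and an MCP server):

```python
def run_chat(model, tools, messages):
    """Minimal sketch of the tool-call loop a Chat node performs."""
    while True:
        reply = model(messages)  # ask the LLM for the next step
        if reply.get("tool_call") is None:
            return reply["content"]  # final answer for the user
        call = reply["tool_call"]
        # The host, not the model, executes the tool...
        result = tools[call["name"]](**call["args"])
        # ...then feeds the result back for a follow-up request.
        messages.append({"role": "tool", "content": result})

# Stub model: first requests the weather tool, then interprets the result.
def stub_model(messages):
    if messages[-1]["role"] == "tool":
        return {"content": f"Currently: {messages[-1]['content']}"}
    return {"tool_call": {"name": "get_current_weather",
                          "args": {"city": "Vienna"}}}

answer = run_chat(stub_model,
                  {"get_current_weather": lambda city: "12C, clear"},
                  [{"role": "user", "content": "How is the weather?"}])
print(answer)  # Currently: 12C, clear
```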

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The ability to break down a prompt into distinct steps and execute them in sequence largely depends on the model's complexity and training. There is no guarantee the model will complete the task on its own. Codifying these steps is the primary use case of workflows.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn08xc06edi86c26jyod0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn08xc06edi86c26jyod0.png" alt="weather-agent" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the workflow, we'll only focus on checking the weather. Let's start with current conditions.&lt;/p&gt;

&lt;p&gt;Create a new workflow and update the Agent node's system message, replacing the blank with your city:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You are a personal assistant for someone living in ______&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Disconnect the &lt;code&gt;prompt&lt;/code&gt; input of the &lt;em&gt;Chat&lt;/em&gt; node and override it with a question requiring knowledge of the current weather.&lt;/p&gt;

&lt;p&gt;Even though we added tool providers to the application earlier, we need to tell individual agents which tools they are allowed to use.&lt;/p&gt;

&lt;p&gt;Add &lt;em&gt;Tools › Select Tools&lt;/em&gt; and check &lt;code&gt;get_current_weather&lt;/code&gt; under open-meteo. Wire it into the Agent node.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
You can also select additional tools to check whether the language model picks the correct one on its own.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxcsokkz9bfnplovbj5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxcsokkz9bfnplovbj5k.png" alt="current weather" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;em&gt;Preview&lt;/em&gt; node and &lt;strong&gt;run&lt;/strong&gt; the workflow.&lt;/p&gt;

&lt;p&gt;Between the user's question and the final answer, there are extra messages. The LLM first responds in JSON with parameters for a tool call. It does not call the tool itself; rather, the host application is responsible for calling the tool and supplying the results back to the LLM, which we can see as the follow-up message.&lt;/p&gt;

&lt;p&gt;This keeps the responsibility for third-party accounts, permissions, security, and so on away from the LLM. All it needs to deal with is language and logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-step
&lt;/h2&gt;

&lt;p&gt;So far so good, but what about getting forecasts for upcoming days?&lt;/p&gt;

&lt;p&gt;open-meteo has &lt;code&gt;get_weather_details&lt;/code&gt; that can return hourly forecasts over the next day. To get something further out, we'll need to use &lt;code&gt;get_weather_byDateTimeRange&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The latter call requires knowing the current date and time to calculate a range in the future (the API does not provide historical data). That might seem trivial at first glance; however, language models have no information about the current state of the world, including the current date or time.&lt;/p&gt;

&lt;p&gt;Fortunately, the weather MCP server provides &lt;code&gt;get_current_datetime&lt;/code&gt; which the agent can call before checking the forecast (however, it does not include day-of-week labels).&lt;/p&gt;
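The date arithmetic for step 2 is straightforward once the current date is known; a minimal sketch (`forecast_range` is an illustrative helper, not part of aerie or the MCP server):

```python
from datetime import date, timedelta

def forecast_range(today, days_ahead=3):
    """Compute a future date range for a forecast query.
    Both endpoints must lie in the future, since the API
    does not provide historical data."""
    start = today + timedelta(days=1)
    end = start + timedelta(days=days_ahead - 1)
    return start.isoformat(), end.isoformat()

# Suppose the get_current_datetime tool reported 2026-04-09.
print(forecast_range(date(2026, 4, 9)))  # ('2026-04-10', '2026-04-12')
```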

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Transform JSON&lt;/em&gt; can also inject &lt;a href="https://gedenkt.at/jaq/manual/#date-time" rel="noopener noreferrer"&gt;date-time&lt;/a&gt; information into a request, though zone handling is limited.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw523i37owbf2xxtpkab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw523i37owbf2xxtpkab.png" alt="multi-step-good" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a language model behaves as expected, it queries the current date before fetching the forecast, and the overall workflow succeeds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw916w8dxf3b4rtkt54y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmw916w8dxf3b4rtkt54y.png" alt="multi-step-fail" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the other hand, some smaller models that are capable of planning or using tools may refuse to do both simultaneously.&lt;/p&gt;

&lt;p&gt;Even if you explain the steps in the instructions, they may still fail to call the correct (or any) tool. In many cases they will hallucinate the current date or even the entire forecast rather than failing in an obvious manner (i.e. refusing to generate an incorrect answer or returning an error response).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
You cannot rely on a successful &lt;em&gt;Chat&lt;/em&gt; response to determine that the model has actually executed the necessary steps. When it matters, you will need to check the message history using another agent or conditional logic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The more tools an agent has to choose from, the more potential for it to get confused. A universal catch-all agent using a small model to route between dozens of tools is likely to fail or hallucinate frequently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Structured
&lt;/h2&gt;

&lt;p&gt;If the specific purpose of a workflow is known, rather than relying on a general-purpose chat agent we can specify a sequence of steps to complete the task. We can also be more selective about the tools exposed to an agent.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
You can use language models to route general prompts to task-specific branches within a workflow or route between workflows. See the examples and tutorials for more information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's assume that we know answering a prompt requires knowing the current date for a location and the weather forecast.&lt;/p&gt;

&lt;p&gt;With that knowledge, we can break down the workflow into phases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;getting the current time for the city&lt;/li&gt;
&lt;li&gt;calculating a future date range&lt;/li&gt;
&lt;li&gt;getting the forecast&lt;/li&gt;
&lt;li&gt;answering the original prompt&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4j7p726jkejqncwt5hz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4j7p726jkejqncwt5hz.png" alt="extract zone" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get the current time for step 1, we use a language model to extract the zone in much the same way we generated data from a schema in the previous chapter: using the &lt;em&gt;Structured&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;Instead of passing in a schema, we pass in an agent that has tools. The LLM will understand that it has to find a city in the input and infer the time zone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61jx9w77hpjk24f5xaiq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61jx9w77hpjk24f5xaiq.png" alt="invoke tool" width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unlike the Chat node, the Structured node does not automatically call the tool; it only returns the tool name and arguments. To call the tool in a workflow, use &lt;em&gt;Tools › Invoke Tool&lt;/em&gt;. This allows the workflow to modify or transform the tool arguments before the tool implementation is executed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefucgcrc1cy2mnsd6q7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefucgcrc1cy2mnsd6q7y.png" alt="final chat" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the results in hand, we have several options for sending the date-time to the LLM for additional steps. One option is to emulate the prompt/tool call/tool results loop used by the Chat node by simply passing the conversation to a new agent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh3w69o5qc7yio02u2wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh3w69o5qc7yio02u2wj.png" alt="range step" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we determine during testing that the model does not perform steps 2 and 3 reliably, we can add them to the workflow in the same way as the first step. Here, we've used a model to calculate the date range as a structured object, leaving the forecast call to the &lt;em&gt;Chat&lt;/em&gt; node.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chat History
&lt;/h2&gt;

&lt;p&gt;To finish, we need to reconnect the Start and Finish nodes so that the workflow acts on the user prompt and displays messages in the Chat tab. However, there are multiple ways to organize messages in the final history.&lt;/p&gt;

&lt;p&gt;If the &lt;em&gt;Chat&lt;/em&gt; node extends the conversation, then all intermediate messages will appear as a flattened sequence in the history. We might not necessarily want to see messages in between the original question and the final answer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopz3g4keztmwtzhwa4b9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopz3g4keztmwtzhwa4b9.png" alt="extend history" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One option is to take only the user prompt and final response and add them manually using &lt;em&gt;History › Extend History&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hmhlyvkf4xl7csxev30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hmhlyvkf4xl7csxev30.png" alt="extended results" width="596" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The resulting session is now very simple and clean. However, we lose all the intermediate information that was used to arrive at the conclusion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxl1u0qjxec12hmtqwsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxl1u0qjxec12hmtqwsi.png" alt="side chat" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An alternative that preserves the data without cluttering the session is to use &lt;em&gt;History › Side Chat&lt;/em&gt;. The wiring is less complex than using an &lt;em&gt;Extend History&lt;/em&gt; node.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;collapsed&lt;/th&gt;
&lt;th&gt;expanded&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffec2d0jlr4ltx2emon81.png" alt="side results 1" width="592" height="361"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tul7yrgz1jjnk3fyosj.png" alt="side results 2" width="591" height="404"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The chat view still shows just the first and last messages by default, but now the intermediate data is accessible from the expandable "details" section.&lt;/p&gt;

&lt;p&gt;Once you are satisfied with the workflow, remember to reconnect the &lt;code&gt;input&lt;/code&gt; pin of the &lt;em&gt;Start&lt;/em&gt; node to the &lt;em&gt;Chat&lt;/em&gt; node.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article we've developed some contrived examples to demonstrate basic tool integration.&lt;/p&gt;

&lt;p&gt;While these are too narrow in scope for a general-purpose chat bot, workflows are a perfect fit for specialist tasks. Larger systems can be built by composing smaller workflows using subgraphs and chaining.&lt;/p&gt;

&lt;p&gt;In the following set of tutorials we will explore using subgraphs to organize more complex workflows before covering chained workflows.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
    <item>
      <title>Structured Generation: teaching AI agents to color inside the lines</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Wed, 08 Apr 2026 16:23:08 +0000</pubDate>
      <link>https://dev.to/patonw/structured-generation-taming-ai-agents-with-aerie-workflows-gf6</link>
      <guid>https://dev.to/patonw/structured-generation-taming-ai-agents-with-aerie-workflows-gf6</guid>
      <description>&lt;p&gt;In the previous article, we explored generating free-form text in a workflow, as well as dividing responsibility for different parts of a task among agents. This time, let's look into generating machine-readable structured data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
 Skip to the action if you're already familiar with structured data and schemas.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;Why would we want data to be structured? First, it is easier to filter, transform and combine documents with automated tools when we know ahead of time the shape of responses and what properties they can contain.&lt;/p&gt;

&lt;p&gt;For instance, if we had to sort and organize thousands of profiles in unstructured text:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"John was born twenty five years ago and programs Python"&lt;/p&gt;

&lt;p&gt;"Alice is a cryptography expert born in 1998"&lt;/p&gt;

&lt;p&gt;etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Using traditional text-based tools, there are countless permutations, phrasings, exceptions and edge cases to consider. Instead, by using a language model to transform the texts into structured data, we could use simple operations to fill in missing data and categorize each entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"occupation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"software engineer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"dob"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1998"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"occupation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cryptographer"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, many external applications and services require structured inputs. If we can construct structured data, our agents will be able to interact with these systems, bridging natural language and programmatic logic. In the parlance of AI agents, these are referred to as "tools" or "functions".&lt;/p&gt;

&lt;p&gt;Generative language models are exceptionally good at translating between unstructured and structured data. Even many small models can extract structured data from paragraphs reliably. Medium-sized models with long context windows can often handle larger documents while following specific instructions about what to find.&lt;/p&gt;

&lt;h2&gt;
  
  
  JSON Schema
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/JSON" rel="noopener noreferrer"&gt;JSON&lt;/a&gt; (JavaScript Object Notation) is the de-facto standard for structured data across modern services and applications. Not only can programs easily parse JSON, but since it is a self-describing format, even an untrained human user can glean meaning from a JSON document without needing a deep understanding of its syntax. Most modern LLMs can generate JSON reliably when creating examples for a user or for invoking remote tools.&lt;/p&gt;

&lt;p&gt;To instruct language models on the specific structure desired, we can use &lt;a href="https://json-schema.org/overview/what-is-jsonschema" rel="noopener noreferrer"&gt;JSON Schema&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Schemas themselves are written as JSON documents. They dictate which fields are required in the target documents, along with type restrictions and more. A schema can describe a JSON document with strict precision, maximum flexibility, or anything in between.&lt;/p&gt;
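&lt;p&gt;As a purely illustrative sketch (this is not the schema used in this tutorial), a minimal schema requiring a name and permitting an optional age could look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["name"],
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "integer", "minimum": 0 }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A document missing &lt;code&gt;name&lt;/code&gt;, or with a negative &lt;code&gt;age&lt;/code&gt;, would fail validation against this schema.&lt;/p&gt;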

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
 While you can write schemas from scratch, it may be quicker to either use a language model to generate one or a specialized schema editing tool (e.g. &lt;a href="https://json.ophir.dev" rel="noopener noreferrer"&gt;JSONJoy&lt;/a&gt;). By leveraging LLMs you don't need to know the rules for building schemas &lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For this tutorial, however, we'll use one of the canonical examples: &lt;a href="https://json-schema.org/learn/json-schema-examples#user-profile" rel="noopener noreferrer"&gt;User Profile&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating Data
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frekgo9kwl4ubkbt4sysu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frekgo9kwl4ubkbt4sysu.png" alt="rename workflow" width="169" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start by creating a new workflow from the command palette.&lt;/p&gt;

&lt;p&gt;Use the rename button to replace the automatic name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3gg2a5r5rcj5xsclhoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3gg2a5r5rcj5xsclhoq.png" alt="schema contents" width="649" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remove the Chat node using either the node context menu or the Delete key.&lt;/p&gt;

&lt;p&gt;Replace it with an &lt;em&gt;LLM › Structured&lt;/em&gt; node. Conversation history is not needed this time, but make sure to connect the Agent.&lt;/p&gt;

&lt;p&gt;Use a &lt;em&gt;JSON › Parse JSON&lt;/em&gt; node to provide the schema to the Structured node. Copy the schema contents from &lt;a href="https://json-schema.org/learn/json-schema-examples#user-profile" rel="noopener noreferrer"&gt;User Profile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This will force the &lt;em&gt;Structured&lt;/em&gt; node to generate data in the specified format. If the model fails to produce JSON or does not follow the schema, we can set the node to retry a number of times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e7qxc7lem2eb3cnqfe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e7qxc7lem2eb3cnqfe3.png" alt="generated" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the prompt to describe a user or character and tell the model to follow the schema.&lt;/p&gt;

&lt;p&gt;Attach a &lt;em&gt;Preview&lt;/em&gt; node to the &lt;code&gt;data&lt;/code&gt; pin of the &lt;em&gt;Structured&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;Depending on the model and temperature this may work the first time or it may fail.&lt;/p&gt;

&lt;p&gt;You can try switching models, adjusting the temperature, or experimenting with the &lt;code&gt;retry&lt;/code&gt; and &lt;code&gt;extract&lt;/code&gt; options.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
 The &lt;code&gt;retry&lt;/code&gt; and &lt;code&gt;extract&lt;/code&gt; options on the &lt;em&gt;Structured&lt;/em&gt; node also provide mechanisms for coping with different failure modes of weaker models. Often when retrying the model will understand its mistake and correct it. Other times, the model will get stuck explaining or apologizing while also producing correct structured data. For the latter case, the &lt;code&gt;extract&lt;/code&gt; option will attempt to find structured data embedded within the response.&lt;/p&gt;

&lt;p&gt;Together, they can prevent most common errors. Sometimes, however, you will still want to handle failure recovery within the workflow. Refer to the documentation for details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Templating
&lt;/h2&gt;

&lt;p&gt;Now that we have a JSON document with a known structure, there are many things we can do with it. Some examples are request routing, database updates, and content filtering. However, for this tutorial, we will only use it to generate unstructured text via a template. At a larger scale, this pattern could also be used to generate reports from longer documents or collections of items.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 This pattern of generating structured data and then immediately formatting it is not strictly necessary. LLMs can mostly follow formatting instructions directly, though they often surround replies with unwanted verbiage. However, this is just a stand-in for more useful transformations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9604g6ikpttzfvvuj8ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9604g6ikpttzfvvuj8ja.png" alt="templating" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;em&gt;Value › Template&lt;/em&gt; node to the workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 The &lt;em&gt;Template&lt;/em&gt; node uses &lt;a href="https://docs.rs/minijinja/latest/minijinja/syntax/index.html" rel="noopener noreferrer"&gt;jinja-like syntax&lt;/a&gt; which supports conditionals, filters, iteration and more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The node takes a template string which may contain variables. On execution, the node substitutes the variables with concrete values provided by a JSON object via the &lt;code&gt;variables&lt;/code&gt; input. Variables can be simple strings, arrays or dictionaries.&lt;/p&gt;

&lt;p&gt;Attach the &lt;code&gt;variables&lt;/code&gt; input to the &lt;code&gt;data&lt;/code&gt; output of the &lt;em&gt;Structured&lt;/em&gt; node and use this template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jinja"&gt;&lt;code&gt;&lt;span class="c"&gt;## Profile ##&lt;/span&gt;

name: &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;username&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;
e-mail: &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;email&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;
Interests:
  &lt;span class="cp"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nv"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;interests&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
    - &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;item&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;
  &lt;span class="cp"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;endfor&lt;/span&gt; &lt;span class="cp"&gt;%}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
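&lt;p&gt;For a generated profile with, say, username &lt;code&gt;jdoe&lt;/code&gt;, email &lt;code&gt;jdoe@example.com&lt;/code&gt; and interests &lt;code&gt;["reading", "hiking"]&lt;/code&gt; (purely illustrative values), the rendered text would resemble:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Profile ##

name: jdoe
e-mail: jdoe@example.com
Interests:
    - reading
    - hiking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The exact indentation depends on the whitespace-control markers (&lt;code&gt;-%}&lt;/code&gt;) in the loop tags.&lt;/p&gt;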



&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 If the provided context is not a key-value map (e.g. a text value, message, etc.) it will be exposed to the template as the variable &lt;code&gt;value&lt;/code&gt;. This can be handy when wrapping a simple value or using a list-valued input, without resorting to a &lt;em&gt;Transform JSON&lt;/em&gt; node to wrap the item in a JSON object.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In addition to generating data directly, we can use the &lt;em&gt;Structured&lt;/em&gt; node to extract structured data from existing text as we'll see in upcoming articles.&lt;/p&gt;

&lt;p&gt;Beyond simple transformations and templating, we could also use structured data to control the flow of execution with conditional branching, iteration or workflow routing, which will be covered later.&lt;/p&gt;

&lt;p&gt;Before delving into that, however, we will first cover how to work with external tools to create proper AI agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Transformations
&lt;/h2&gt;

&lt;p&gt;As mentioned in the main article, structured data can be merged and transformed into new structures.&lt;/p&gt;

&lt;p&gt;Examples of things you could do include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exclude or combine fields&lt;/li&gt;
&lt;li&gt;merge multiple objects&lt;/li&gt;
&lt;li&gt;group elements of a list by field values&lt;/li&gt;
&lt;li&gt;exclude list entries based on value&lt;/li&gt;
&lt;li&gt;remove duplicate entries from a list&lt;/li&gt;
&lt;li&gt;convert a list of entries into a lookup table by name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One popular tool for such transformations is the command-line utility &lt;a href="https://jqlang.org/manual/" rel="noopener noreferrer"&gt;jq&lt;/a&gt;. The &lt;em&gt;JSON&lt;/em&gt; sub-menu contains nodes that can be used together to provide analogous functionality.&lt;/p&gt;

&lt;p&gt;For instance, to replicate how &lt;em&gt;Template&lt;/em&gt; automatically wraps single values, you can use &lt;em&gt;JSON › Transform JSON&lt;/em&gt; with a simple filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ value: . }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
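&lt;p&gt;For example, applying this filter to the bare string &lt;code&gt;"hello"&lt;/code&gt; produces:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "value": "hello" }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;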



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vakh7purjic06f8g1pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vakh7purjic06f8g1pu.png" alt="transform" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also combine data from multiple branches of the workflow using &lt;em&gt;JSON › Gather JSON&lt;/em&gt;. This node takes multiple inputs and combines them into a single JSON array. The inputs can be existing JSON values, texts, numbers or more. By itself, a heterogeneous list of assorted data can be useful, but confusing to debug. Instead we will transform it into an object with descriptive keys.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 The &lt;em&gt;JSON › Transform JSON&lt;/em&gt; node uses an optimized implementation called &lt;a href="https://gedenkt.at/jaq/manual/" rel="noopener noreferrer"&gt;jaq&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;jq&lt;/code&gt; syntax can be difficult to comprehend at first. Fortunately, many LLMs are capable of generating filters from a prompt and/or examples.&lt;/p&gt;

&lt;p&gt;With the prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Write a jq filter that takes a list of user entries and creates an object keyed by the username field, removing the username field in the process.&lt;/p&gt;
&lt;/blockquote&gt;
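&lt;p&gt;For a hypothetical input like the following (the usernames and fields here are only illustrative), the goal is to turn the array into an object keyed by username:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {"username": "john", "age": 25},
  {"username": "alice", "age": 27}
]

→

{
  "john": {"age": 25},
  "alice": {"age": 27}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;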

&lt;p&gt;Some models might produce this filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;reduce .[] as $u ({}; .[$u.username] = $u | del(.username))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While others might produce:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ .[] 
  | { key: ( .username ), value: ( . | del(.username) ) } 
]  | from_entries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Depending on the complexity of the ask, you may need to iterate with the LLM to fix any problems encountered.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;You can describe the desired structure, providing examples and counter-examples, to a language model which will generate a schema. For instance: "Generate a JSON schema for a user containing a name, login, department and an optional role." ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
    <item>
      <title>Agentic workflows with Aerie</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:06:39 +0000</pubDate>
      <link>https://dev.to/patonw/agentic-workflows-with-aerie-1724</link>
      <guid>https://dev.to/patonw/agentic-workflows-with-aerie-1724</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Aerie
&lt;/h2&gt;

&lt;p&gt;This is an introduction to a new open-source tool for creating and running AI-powered workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use workflows?
&lt;/h3&gt;

&lt;p&gt;You hire a brilliant intern and give them unrestricted access to your company's systems with high-level instructions to complete a complex task. The first time, they do a reasonably good job without breaking anything important. Should it be a surprise when a minor misunderstanding of the next task cascades into complete disaster? Yet this is commonly how we manage AI agents. For all their impressive capabilities, language models do not learn from experience as you "engineer" a prompt &lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Software agents are systems that make decisions and operate independently from&lt;br&gt;
human supervision on behalf of users. AI agents replace deterministic program&lt;br&gt;
logic with language models. These models, however, are inherently&lt;br&gt;
probabilistic. The more autonomy we give them, the greater the opportunity for&lt;br&gt;
surprises.&lt;/p&gt;

&lt;p&gt;No matter how well-tuned prompts are during development, there are&lt;br&gt;
uncountably many ways for things to go wrong in the wild. The more detailed you&lt;br&gt;
make the prompt to account for pitfalls, the less attention the model can pay&lt;br&gt;
to the core task. Furthermore, failure-retry loops can balloon the context,&lt;br&gt;
confusing the model even further.&lt;/p&gt;

&lt;p&gt;AI-powered workflows provide a more reliable alternative to purely&lt;br&gt;
agent-driven systems. A workflow breaks a task down into discrete, well-defined&lt;br&gt;
steps. AI plays a specific but limited role in some of those steps, allowing it&lt;br&gt;
to concentrate on what it excels at without extraneous distractions.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Aerie?
&lt;/h3&gt;

&lt;p&gt;Aerie&lt;sup id="fnref2"&gt;2&lt;/sup&gt; is a graphical tool for building agentic workflows. Programming expertise&lt;br&gt;
is helpful, but not necessary. In this instance, graphical is an overloaded term:&lt;br&gt;
aside from the user interface of the visual editor, workflows are structured as&lt;br&gt;
node graphs. Each node represents an agent, a data transformation, a decision, etc.&lt;br&gt;
Outputs of a node can be connected to inputs of other nodes.&lt;br&gt;
Data flows predictably from one node to the next.&lt;/p&gt;

&lt;p&gt;With this visual approach it's easier to build, debug, explain and iterate on&lt;br&gt;
workflows -- making Aerie appropriate for prototyping and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mazkcpuc7310vluq69q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mazkcpuc7310vluq69q.png" alt="graph legend" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Aerie can be run from &lt;a href="https://github.com/patonw/aerie" rel="noopener noreferrer"&gt;source&lt;/a&gt; or a binary AppImage available on the &lt;a href="https://github.com/patonw/aerie/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The AppImage can be run directly under Linux without installation. However, you&lt;br&gt;
will usually need to set the correct permissions after downloading the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +rx aerie-x86_64.AppImage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can also be run under Windows with &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps" rel="noopener noreferrer"&gt;WSL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Building and running from source is recommended, however. The development stack&lt;br&gt;
provides a uniform and predictable environment for the application. On the&lt;br&gt;
other hand, it requires far more disk space and time for the initial start. For&lt;br&gt;
instructions on building the source, see the &lt;a href="https://patonw.github.io/aerie/dev_start.html" rel="noopener noreferrer"&gt;Development Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can also install from source using the &lt;a href="https://nixos.org/download/" rel="noopener noreferrer"&gt;nix tool&lt;/a&gt;: &lt;a href="https://patonw.github.io/aerie/user_start.html#installation" rel="noopener noreferrer"&gt;Installation&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To get a small taste of the potential of this approach, we will start by&lt;br&gt;
building a trivial workflow. It will pass a user's prompt and conversation&lt;br&gt;
history to an LLM and then rewrite its response as a haiku. Almost every modern&lt;br&gt;
language model can handle this in a single step, but we'll use two agents for&lt;br&gt;
didactic purposes.&lt;/p&gt;

&lt;p&gt;In later articles we'll explore topics like data extraction, tool use and&lt;br&gt;
iteration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqrkge6fc99oyxyg0l9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqrkge6fc99oyxyg0l9r.png" alt="create button" width="265" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the Create button on the command palette to create a new workflow with&lt;br&gt;
default nodes. Rather than an empty document, it will contain a basic chat&lt;br&gt;
agent which you can choose to integrate into your workflow or discard. We'll do&lt;br&gt;
the former this time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m849dvf9lpfy1gp936y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m849dvf9lpfy1gp936y.png" alt="finish node" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, disconnect the &lt;code&gt;conversation&lt;/code&gt; pin on the Finish node, for now.  You can&lt;br&gt;
do this by right-clicking on the wire itself or on the pin at either the source or&lt;br&gt;
destination node.&lt;/p&gt;

&lt;p&gt;Normally, this would send the completed conversation of the&lt;br&gt;
workflow to the chat session, viewable in the Chat tab. In the meantime, we'll&lt;br&gt;
be working only in the workflow editor.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Pins on the right side of a node are output pins while pins on the left side&lt;br&gt;
are inputs. Information flows in only one direction along a wire from the&lt;br&gt;
output pin of a source node to an input pin of the destination node.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Normal Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g7nlx6exnexbx6vu2vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g7nlx6exnexbx6vu2vc.png" alt="agent wires" width="764" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll use the existing agent to generate a normal response.&lt;br&gt;
Disconnect the &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;input&lt;/code&gt; wires between the &lt;em&gt;Start&lt;/em&gt; and &lt;em&gt;Agent&lt;/em&gt; nodes.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Start&lt;/em&gt; node is the entry point into the workflow, gathering settings and&lt;br&gt;
inputs from the execution environment and exposing them to the other nodes in&lt;br&gt;
the workflow. These values are only available from the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F911fflvdse09ivdhsxu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F911fflvdse09ivdhsxu3.png" alt="agent settings" width="438" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Agent&lt;/em&gt; nodes define parameters for invoking LLMs via Chat and Structured&lt;br&gt;
nodes. An &lt;em&gt;Agent&lt;/em&gt; node does not generate content by itself. Rather, it holds&lt;br&gt;
settings that distinguish it from other agents and can be re-used in&lt;br&gt;
different stages of the workflow by content-generating nodes.&lt;/p&gt;

&lt;p&gt;Set the LLM model using the format &lt;code&gt;{provider}/{model}&lt;/code&gt;. Examples:&lt;br&gt;
&lt;code&gt;ollama/devstral:latest&lt;/code&gt; or &lt;a href="https://openrouter.ai/openrouter/free" rel="noopener noreferrer"&gt;&lt;code&gt;openrouter/openrouter/free&lt;/code&gt;&lt;/a&gt;. Most providers will have a list or database of models they provide (e.g. &lt;a href="https://openrouter.ai/models" rel="noopener noreferrer"&gt;https://openrouter.ai/models&lt;/a&gt; &amp;amp; &lt;a href="https://docs.mistral.ai/getting-started/models" rel="noopener noreferrer"&gt;https://docs.mistral.ai/getting-started/models&lt;/a&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Local providers like Ollama don't require authentication, but services like &lt;br&gt;
OpenRouter, Anthropic, etc. usually require an API key. See API Keys for details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Set the temperature low (~0.25).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The temperature can be set between 0.0 and 1.0. It controls how words&lt;br&gt;
are selected from a range of possibilities during generation and is loosely&lt;br&gt;
correlated with creativity. Higher temperatures make improbable outputs more&lt;br&gt;
likely, while lower temperatures tend to produce drier, more generic responses.&lt;/p&gt;
&lt;/blockquote&gt;
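&lt;p&gt;To make the note above concrete, here is a minimal sketch of temperature-scaled sampling -- the general technique, which providers implement server-side, not this app's code. Logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution and high temperatures flatten it:&lt;/p&gt;

```python
import math
import random

def temperature_probs(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_with_temperature(logits, temperature):
    """Sample one token index from the temperature-scaled distribution."""
    probs = temperature_probs(logits, temperature)
    return random.choices(range(len(probs)), weights=probs)[0]

# Low temperature concentrates probability on the most likely token;
# high temperature spreads it out, so improbable tokens appear more often.
```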

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72ahsbw96wthj4xy7fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72ahsbw96wthj4xy7fe.png" alt="chat node" width="385" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the agent is configured let's take a look at the &lt;em&gt;Chat&lt;/em&gt; node. This is the node&lt;br&gt;
that actually interacts with the language model provider to generate content.&lt;/p&gt;

&lt;p&gt;It takes configuration values from an &lt;em&gt;Agent&lt;/em&gt; node and optionally a&lt;br&gt;
conversation history -- an ongoing list of user prompts and agent responses.&lt;br&gt;
In this instance, the conversation is supplied by the &lt;em&gt;Start&lt;/em&gt; node, since this&lt;br&gt;
is the first &lt;em&gt;Chat&lt;/em&gt; in our workflow.&lt;/p&gt;
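&lt;p&gt;A conversation history like this is commonly represented as an append-only list of role-tagged messages. A sketch of the general shape (the app's internal format may differ):&lt;/p&gt;

```python
# One common shape for a chat history: an ordered list of
# {"role", "content"} messages, alternating user prompts and agent replies.
conversation = [
    {"role": "user", "content": "Write a short greeting."},
    {"role": "assistant", "content": "Hello there!"},
]

# A downstream Chat node appends its prompt (and later the model's
# response) rather than modifying earlier messages.
conversation.append({"role": "user", "content": "Now make it rhyme."})
```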

&lt;p&gt;Finally, it takes a prompt, which you can supply from a text value like the&lt;br&gt;
&lt;code&gt;input&lt;/code&gt; pin of the &lt;em&gt;Start&lt;/em&gt; node as we saw earlier with the default workflow. In&lt;br&gt;
this instance, however, leave the pin unwired and type the text prompt directly&lt;br&gt;
into the node.&lt;/p&gt;
&lt;h3&gt;
  
  
  Saving
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9i3c7132mzeyebgqml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9i3c7132mzeyebgqml.png" alt="autosave" width="680" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we continue, it's a good idea to enable &lt;code&gt;autosave&lt;/code&gt; in the Settings tab.&lt;br&gt;
This will write any changes you make to disk automatically. Otherwise, you&lt;br&gt;
will need to click the Save button in the command palette manually for every&lt;br&gt;
changed workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
If there are unsaved changes to workflows other than the one displayed, they&lt;br&gt;
may be lost. The app will not warn about discarding unsaved changes when&lt;br&gt;
exiting.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Previews
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh41umd4rbji5sn88t3tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh41umd4rbji5sn88t3tf.png" alt="create preview" width="729" height="693"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far we've modified existing nodes, but now let's create a new node for&lt;br&gt;
examining wire values during a run. The &lt;em&gt;Preview&lt;/em&gt; node will show intermediate&lt;br&gt;
values when the workflow is run from the editor but has no effect otherwise.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The &lt;em&gt;Preview&lt;/em&gt; node can accept any wire value and will change its display&lt;br&gt;
format according to the type.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Right click on the canvas in the area you want the new node to appear. A&lt;br&gt;
context menu appears with nodes that can be added to this graph. Select the&lt;br&gt;
Preview item to create a new node.&lt;/p&gt;
&lt;h3&gt;
  
  
  Running Workflows
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrctsfle8d0qpcqkyp5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrctsfle8d0qpcqkyp5i.png" alt="running workflow" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect the &lt;code&gt;response&lt;/code&gt; pin of the &lt;em&gt;Chat&lt;/em&gt; node to the &lt;em&gt;Preview&lt;/em&gt;'s input and &lt;strong&gt;Run&lt;/strong&gt; the&lt;br&gt;
workflow using the button in the command palette.&lt;/p&gt;

&lt;p&gt;As the workflow runs, nodes that have finished will be marked with a green&lt;br&gt;
check.&lt;/p&gt;

&lt;p&gt;Nodes that are actively running will have a spinning circle in the corner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oy78906rk6mwts5j851.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oy78906rk6mwts5j851.png" alt="finished workflow" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the workflow has run, the &lt;em&gt;Preview&lt;/em&gt; node will show a standard response to&lt;br&gt;
our prompt.&lt;/p&gt;
&lt;h3&gt;
  
  
  Poetic Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtv5ija67nqch37grino.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtv5ija67nqch37grino.png" alt="agent two" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a first agent generating normal (boring) responses, it's time&lt;br&gt;
to create a second agent to generate poetry. This has a distinct purpose and&lt;br&gt;
personality from the previous agent, so we'll configure it with different&lt;br&gt;
settings.&lt;/p&gt;

&lt;p&gt;Create a second agent from the context menu &lt;em&gt;LLM › Agent&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Connect it to the first agent. It will take configuration values from the first&lt;br&gt;
agent unless you override them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Different models will have differing proficiencies at various tasks. Some&lt;br&gt;
will focus more on generating program code while others will be better at&lt;br&gt;
writing long-form text. It can be beneficial to experiment with different&lt;br&gt;
combinations in a workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Override the temperature and set it higher (&amp;gt;0.75).&lt;/p&gt;

&lt;p&gt;You can also override the system message (currently blank) to add personality&lt;br&gt;
or add specific instructions for the current task. Instructions can vary&lt;br&gt;
between formatting requirements, strategies for executing a task or admonitions&lt;br&gt;
about avoiding particular pitfalls.&lt;/p&gt;

&lt;p&gt;We won't provide any instructions this time. However, let's give the agent a&lt;br&gt;
role to play, to impart some flavor to the generated result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9x5ymvyuyjh1zcj3esf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9x5ymvyuyjh1zcj3esf.png" alt="chat two" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a second &lt;em&gt;LLM › Chat&lt;/em&gt; node and connect it to the new agent.&lt;/p&gt;

&lt;p&gt;Since we are asking it to act on prior responses, you will need to connect the&lt;br&gt;
conversation to the previous &lt;em&gt;Chat&lt;/em&gt; node &lt;strong&gt;NOT&lt;/strong&gt; the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Connecting it to the &lt;em&gt;Start&lt;/em&gt; node would create a parallel conversation that&lt;br&gt;
omits the previous agent's response.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, connect it to another Preview node so we can compare the results&lt;br&gt;
side-by-side.&lt;/p&gt;
&lt;h2&gt;
  
  
  Incremental Execution
&lt;/h2&gt;

&lt;p&gt;Notice that the new nodes do not yet have status indicators, in contrast to the&lt;br&gt;
old nodes. This shows which nodes will be executed during an incremental&lt;br&gt;
run. Nodes that already have a status indicator will be skipped, saving time&lt;br&gt;
and avoiding extra API fees. This allows you to quickly try variations on node&lt;br&gt;
parameters or different combinations of nodes without redundant work.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Selected nodes and the node under the cursor are also re-executed during an&lt;br&gt;
incremental run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can trigger an incremental run with the shortcut &lt;code&gt;Ctrl+R&lt;/code&gt; (see shortcuts&lt;br&gt;
with the &lt;code&gt;?&lt;/code&gt; key).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The Run button in the command palette will trigger a full re-run of every&lt;br&gt;
node in the workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frielwltmj8sy7s24hynf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frielwltmj8sy7s24hynf.png" alt="run two" width="780" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the "normal" &lt;em&gt;Chat&lt;/em&gt; node does not rerun incrementally (assuming you&lt;br&gt;
haven't changed, selected or hovered over it).&lt;/p&gt;

&lt;p&gt;Try changing the second prompt (e.g. haiku → sonnet) and notice the status&lt;br&gt;
indicator disappears.&lt;/p&gt;

&lt;p&gt;Another incremental run should only re-execute that node.&lt;/p&gt;

&lt;p&gt;If you change the second Agent node, one of two things will happen, depending&lt;br&gt;
on whether the &lt;code&gt;cascade&lt;/code&gt; setting is enabled. When &lt;code&gt;cascade&lt;/code&gt; is enabled, a&lt;br&gt;
status reset will propagate from the node to all of its descendants.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Without &lt;code&gt;cascade&lt;/code&gt; only the Agent node's status is cleared. To have the Chat&lt;br&gt;
node re-run incrementally, you will need to hover over or select it.&lt;/p&gt;
&lt;/blockquote&gt;
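&lt;p&gt;The cascading reset described above amounts to a simple graph traversal: clear the changed node's status, then the status of every reachable descendant. A minimal sketch assuming an adjacency-list graph -- the node names and data structures here are hypothetical, not the app's internals:&lt;/p&gt;

```python
from collections import deque

def cascade_reset(children, status, start):
    """Clear `status` for `start` and every node reachable from it."""
    queue = deque([start])
    seen = set()
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        status.pop(node, None)  # clear the completion mark, if any
        queue.extend(children.get(node, []))

# Example: Agent2 feeds Chat2, which feeds Preview2.
children = {"Agent2": ["Chat2"], "Chat2": ["Preview2"]}
status = {"Agent2": "done", "Chat2": "done", "Preview2": "done", "Chat1": "done"}
cascade_reset(children, status, "Agent2")
# Only "Chat1" keeps its status.
```

&lt;p&gt;Nodes not reachable from the changed node keep their status, which is why the unrelated &lt;em&gt;Chat&lt;/em&gt; node is still skipped on the next incremental run.&lt;/p&gt;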
&lt;h2&gt;
  
  
  Chat Sessions
&lt;/h2&gt;

&lt;p&gt;We've been using the Workflow tab exclusively so far. If you go to the Chat&lt;br&gt;
tab, notice that none of the messages appear. That's because the workflow&lt;br&gt;
hasn't added anything to the session. The fix is simple: connect the last Chat&lt;br&gt;
node to the Finish node.&lt;/p&gt;

&lt;p&gt;Why didn't we do this from the beginning? Try another incremental run. You&lt;br&gt;
should get an error about unrelated histories. This is because the&lt;br&gt;
incremental state has an old copy of the conversation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Internally, replacing the stale conversation with the current one would&lt;br&gt;
invalidate the entire workflow state. Rewinding and using the stale&lt;br&gt;
conversation is not permitted either, since workflows are not allowed to make&lt;br&gt;
destructive changes to the session: they can only append content, and&lt;br&gt;
overwriting the session's newer messages with stale ones would remove&lt;br&gt;
existing history.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This restriction only applies to workflows. From the Session tab you can make&lt;br&gt;
various changes to the conversation history.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Why aren't my chats saved? See the Troubleshooting section below.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While what we've seen here isn't particularly groundbreaking or useful, now&lt;br&gt;
you should be comfortable with using the editor to build workflows. Next, we'll&lt;br&gt;
explore generating and manipulating structured data, before moving on to tools,&lt;br&gt;
subgraphs and iteration.&lt;/p&gt;
&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Workflow gets stuck on Chat node
&lt;/h3&gt;
&lt;h4&gt;
  
  
  API Keys
&lt;/h4&gt;

&lt;p&gt;API keys specific to each provider must be defined in the &lt;a href="https://github.com/0xPlaygrounds/rig/blob/main/skills/rig/references/providers.md" rel="noopener noreferrer"&gt;environment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Platforms (Windows, Mac, Linux) have different ways of defining variables at the system level, per account, or for a terminal session.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Changes to the environment will not take effect until the application restarts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While it is common practice to use system- or account-wide environment variables, doing so raises security concerns. One alternative is to use &lt;a href="https://direnv.net/" rel="noopener noreferrer"&gt;direnv&lt;/a&gt; to limit their scope by directory. However, this still requires API keys to be stored as plain text.&lt;/p&gt;

&lt;p&gt;Example using direnv:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/Projects/xyz

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; .envrc
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENROUTER_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;****&lt;/span&gt;
^D

&lt;span class="nv"&gt;$ &lt;/span&gt;direnv allow &lt;span class="nb"&gt;.&lt;/span&gt;
direnv: reloading
direnv: loading .envrc
direnv &lt;span class="nb"&gt;export&lt;/span&gt;: +OPENROUTER_API_KEY

&lt;span class="nv"&gt;$ &lt;/span&gt;aerie
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
Do not use system/account-wide environment variables for high-value secrets (production/admin/etc tokens/keys/passwords). Session-level variables may also be insecure.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A more secure option is to use a password manager/vault application with console integration, like &lt;a href="https://bitwarden.com" rel="noopener noreferrer"&gt;Bitwarden&lt;/a&gt;, &lt;a href="https://www.hashicorp.com/en/products/vault" rel="noopener noreferrer"&gt;vault&lt;/a&gt;, &lt;a href="https://www.passwordstore.org/" rel="noopener noreferrer"&gt;pass&lt;/a&gt;, etc.&lt;/p&gt;

&lt;p&gt;Example using the Bitwarden CLI (the note is &lt;a href="https://www.dotenv.org/docs/security/env.html" rel="noopener noreferrer"&gt;dotenv&lt;/a&gt;-formatted):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;aerie &lt;span class="nt"&gt;--env&lt;/span&gt; &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;bw get notes &lt;span class="s2"&gt;"API Keys"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
? Master password: &lt;span class="k"&gt;*****&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Enable streaming
&lt;/h4&gt;

&lt;p&gt;In some cases, providers may actively generate a response, but the response&lt;br&gt;
itself will be large, taking minutes to complete. Most providers support&lt;br&gt;
streaming individual tokens, allowing you to see the response as it is&lt;br&gt;
generated, rather than waiting for it to finish.&lt;/p&gt;

&lt;h4&gt;
  
  
  Change providers/models
&lt;/h4&gt;

&lt;p&gt;Some providers have high latency or unreliable connections. If one does not&lt;br&gt;
respond in a reasonable amount of time, try another.&lt;/p&gt;

&lt;p&gt;Be aware that some providers (&lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;openrouter&lt;/a&gt; for instance) proxy to other providers. Different models may run on different providers.&lt;/p&gt;

&lt;p&gt;Even on a single provider, models may be allocated different hardware resources&lt;br&gt;
to handle different requirements or due to popularity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow gets stuck elsewhere
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Check console logs
&lt;/h4&gt;

&lt;p&gt;This application is still under active development. Most errors will trigger an&lt;br&gt;
error dialog, but some may cause the run to fail silently. The console may&lt;br&gt;
provide warnings or other indications of what has failed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can't edit node
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Workflow is running or frozen
&lt;/h4&gt;

&lt;p&gt;The workflow can't be edited while it is running. Wait for it to complete or&lt;br&gt;
use the &lt;strong&gt;Stop&lt;/strong&gt; button to interrupt it.&lt;/p&gt;

&lt;p&gt;The editor can be frozen/unfrozen manually or while examining edit history.&lt;br&gt;
This prevents unintended changes when browsing through the Undo stack.&lt;/p&gt;

&lt;p&gt;To unfreeze the workflow, toggle the button on the control palette.&lt;/p&gt;

&lt;h4&gt;
  
  
  (Dis)connect input pins
&lt;/h4&gt;

&lt;p&gt;Some fields can take values from controls on the node as well as input wires.&lt;/p&gt;

&lt;p&gt;The controls will not be visible unless the wire is disconnected.&lt;/p&gt;

&lt;h4&gt;
  
  
  Toggle optional controls
&lt;/h4&gt;

&lt;p&gt;Some node fields are optional. For example, fields that might override a&lt;br&gt;
previous value will need to be enabled to be edited.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chat history disappears on restarting app
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Set active session
&lt;/h4&gt;

&lt;p&gt;By default no session is active. When no session is active (denoted by an&lt;br&gt;
empty value in the session selection) chats are discarded when the app exits.&lt;br&gt;
To save an ongoing chat, rename the session. The active session is reloaded&lt;br&gt;
the next time you start the app.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Fine-tuning models is a different matter, with steep data and resource requirements. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;A nest of a bird of prey perched high on a cliff or tree top. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
