<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: shi jizhi</title>
    <description>The latest articles on DEV Community by shi jizhi (@shi_jizhi).</description>
    <link>https://dev.to/shi_jizhi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3727887%2F2438b754-de42-4d27-805a-ba77361cce37.png</url>
      <title>DEV Community: shi jizhi</title>
      <link>https://dev.to/shi_jizhi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shi_jizhi"/>
    <language>en</language>
    <item>
      <title>I Built a Tiny Async DSL for AI Agent Workflows (and Why I Avoided Graph Frameworks)</title>
      <dc:creator>shi jizhi</dc:creator>
      <pubDate>Fri, 23 Jan 2026 08:22:14 +0000</pubDate>
      <link>https://dev.to/shi_jizhi/i-built-a-tiny-async-dsl-for-ai-agent-workflows-and-why-i-avoided-graph-frameworks-51kl</link>
      <guid>https://dev.to/shi_jizhi/i-built-a-tiny-async-dsl-for-ai-agent-workflows-and-why-i-avoided-graph-frameworks-51kl</guid>
      <description>&lt;p&gt;When building LLM-powered agents, I kept running into the same problem:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Most frameworks feel heavier than the actual workflow logic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Graphs, nodes, planners, routers, memory managers…&lt;br&gt;&lt;br&gt;
All useful, but for many real projects, I just wanted to express:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;do step A
&lt;/li&gt;
&lt;li&gt;then step B
&lt;/li&gt;
&lt;li&gt;maybe loop
&lt;/li&gt;
&lt;li&gt;maybe branch
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I tried a different direction:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;what if an agent workflow is just normal async Python functions, composed together?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That idea became &lt;strong&gt;PicoFlow&lt;/strong&gt; — a tiny, async-first DSL for AI agent workflows.&lt;/p&gt;


&lt;h2&gt;
  
  
  Why I Didn’t Want Graph-Based Frameworks
&lt;/h2&gt;

&lt;p&gt;Frameworks like LangChain and CrewAI are powerful, but they come with tradeoffs:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Problem&lt;/th&gt;
&lt;th&gt;What I experienced&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Heavy abstractions&lt;/td&gt;
&lt;td&gt;You think in framework concepts, not code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debugging friction&lt;/td&gt;
&lt;td&gt;Stack traces jump across internal layers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Overkill for small agents&lt;/td&gt;
&lt;td&gt;Simple flows still require complex setup&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For small and medium-sized workflows, that overhead felt unnecessary.&lt;/p&gt;

&lt;p&gt;I wanted something closer to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;normal async functions&lt;/li&gt;
&lt;li&gt;explicit control flow&lt;/li&gt;
&lt;li&gt;minimal runtime magic&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  Design Principle: Workflow = Function Composition
&lt;/h2&gt;

&lt;p&gt;In PicoFlow, each step is just an async function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;picoflow&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;flow&lt;/span&gt;

&lt;span class="nd"&gt;@flow&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;step_a&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;with_output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hello&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@flow&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;step_b&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;with_output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; world&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And composition is just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;step_a&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;step_b&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No nodes. No graphs. No planners.&lt;/p&gt;

&lt;p&gt;Just functions.&lt;/p&gt;
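&lt;p&gt;To make that concrete, here is a minimal standalone sketch of the idea — not PicoFlow's actual internals, just tiny stand-in &lt;code&gt;Ctx&lt;/code&gt; and &lt;code&gt;Flow&lt;/code&gt; classes showing how &lt;code&gt;&amp;gt;&amp;gt;&lt;/code&gt; can chain async functions via &lt;code&gt;__rshift__&lt;/code&gt;:&lt;/p&gt;

```python
import asyncio
from dataclasses import dataclass, replace

# Minimal stand-ins for illustration only -- not PicoFlow's real classes.
@dataclass(frozen=True)
class Ctx:
    output: str = ""

class Flow:
    def __init__(self, fn):
        self.fn = fn  # an async function Ctx -> Ctx

    def __rshift__(self, other):
        # a >> b runs a, then feeds its resulting context into b
        async def chained(ctx):
            return await other.fn(await self.fn(ctx))
        return Flow(chained)

async def step_a(ctx):
    return replace(ctx, output="hello")

async def step_b(ctx):
    return replace(ctx, output=ctx.output + " world")

pipeline = Flow(step_a) >> Flow(step_b)
result = asyncio.run(pipeline.fn(Ctx()))
print(result.output)  # hello world
```

&lt;p&gt;The composed pipeline is itself a &lt;code&gt;Flow&lt;/code&gt;, so chains nest arbitrarily.&lt;/p&gt;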




&lt;h2&gt;
  
  
  LLM Is Just Another Step
&lt;/h2&gt;

&lt;p&gt;Calling an LLM is also just a flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;picoflow&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;

&lt;span class="n"&gt;LLM_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&amp;amp;api_key_env=OPENAI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;step_a&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Answer in one sentence: {output}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm_adapter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;LLM_URL&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key ideas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt is a template&lt;/li&gt;
&lt;li&gt;Context is explicit&lt;/li&gt;
&lt;li&gt;LLM backend is configured via URL-style adapter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So switching providers means changing a URL, not your workflow code.&lt;/p&gt;
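&lt;p&gt;For example, pointing the same pipeline at a different backend should only mean swapping the adapter URL. The first URL below matches the format shown above; the second is hypothetical, just to illustrate the shape for an OpenAI-compatible local server:&lt;/p&gt;

```python
# Same workflow, different backend -- only the adapter URL changes.
OPENAI_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"

# Hypothetical: a local OpenAI-compatible endpoint, same URL scheme.
LOCAL_URL = "llm+openai://localhost:11434/v1/chat/completions?model=llama3"
```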




&lt;h2&gt;
  
  
  Control Flow Without Framework Magic
&lt;/h2&gt;

&lt;p&gt;Because everything is Python, you can express loops and conditions naturally.&lt;/p&gt;

&lt;p&gt;Example: repeat until done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;picoflow&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flow&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;repeat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Flow&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;done&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;step&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;acall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Flow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;repeat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thinking_step&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;acting_step&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No custom DSL.&lt;br&gt;&lt;br&gt;
No hidden schedulers.&lt;br&gt;&lt;br&gt;
Just async code.&lt;/p&gt;
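&lt;p&gt;Branching works the same way. A sketch using plain async functions as stand-ins for flows (in PicoFlow each would be a &lt;code&gt;@flow&lt;/code&gt; step; the names here are illustrative):&lt;/p&gt;

```python
import asyncio

def branch(predicate, if_true, if_false):
    # Choose between two async steps based on the current context.
    async def run(ctx):
        step = if_true if predicate(ctx) else if_false
        return await step(ctx)
    return run

# Stand-in steps: each is just an async function ctx -> ctx.
async def short_answer(ctx):
    return {**ctx, "output": "short"}

async def long_answer(ctx):
    return {**ctx, "output": "long"}

router = branch(lambda ctx: len(ctx["input"]) < 20, short_answer, long_answer)
result = asyncio.run(router({"input": "hi"}))
print(result["output"])  # short
```

&lt;p&gt;The condition is an ordinary Python predicate, so it shows up in stack traces like any other function.&lt;/p&gt;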




&lt;h2&gt;
  
  
  LangChain vs PicoFlow: A Concrete Comparison
&lt;/h2&gt;

&lt;p&gt;Let’s compare a very simple task:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Take user input, ask the LLM to summarize it, and return the result.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  LangChain (simplified)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.prompts&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PromptTemplate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chains&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LLMChain&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize in one sentence: {text}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Even this simple chain already requires understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLM wrappers&lt;/li&gt;
&lt;li&gt;prompt objects&lt;/li&gt;
&lt;li&gt;chain abstractions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When workflows grow, routers, memory, and tools quickly add more layers.&lt;/p&gt;




&lt;h3&gt;
  
  
  PicoFlow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;picoflow&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;flow&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;

&lt;span class="nd"&gt;@flow&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;input_step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;with_input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input_step&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize in one sentence: {input}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm_adapter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;LLM_URL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What’s different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no separate chain objects&lt;/li&gt;
&lt;li&gt;prompt inline where it is used&lt;/li&gt;
&lt;li&gt;workflow is explicit Python composition&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Debugging is also straightforward because stack traces remain inside your own code.&lt;/p&gt;




&lt;h2&gt;
  
  
  What PicoFlow Is (and Is Not)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  It is:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;async-first&lt;/li&gt;
&lt;li&gt;minimal abstractions&lt;/li&gt;
&lt;li&gt;explicit data flow&lt;/li&gt;
&lt;li&gt;easy to debug&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  It is not:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;a full agent operating system&lt;/li&gt;
&lt;li&gt;a prompt management platform&lt;/li&gt;
&lt;li&gt;a graph orchestration engine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want large-scale multi-agent coordination, LangGraph may be a better fit.&lt;/p&gt;

&lt;p&gt;If you want &lt;strong&gt;simple, readable, hackable workflows&lt;/strong&gt;, PicoFlow is designed for that space.&lt;/p&gt;




&lt;h2&gt;
  
  
  When This Approach Works Best
&lt;/h2&gt;

&lt;p&gt;I’ve found PicoFlow useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CLI agents&lt;/li&gt;
&lt;li&gt;backend service pipelines&lt;/li&gt;
&lt;li&gt;tool-using agents&lt;/li&gt;
&lt;li&gt;local LLM workflows&lt;/li&gt;
&lt;li&gt;RAG prototypes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Basically: when you want to stay close to normal Python.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Open-Sourced It
&lt;/h2&gt;

&lt;p&gt;This project started as personal tooling while experimenting with agent design.&lt;/p&gt;

&lt;p&gt;But I kept rewriting the same patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;flow composition&lt;/li&gt;
&lt;li&gt;retries&lt;/li&gt;
&lt;li&gt;loops&lt;/li&gt;
&lt;li&gt;tracing&lt;/li&gt;
&lt;/ul&gt;
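
&lt;p&gt;A retry wrapper, for instance, follows the same compositional shape as &lt;code&gt;repeat&lt;/code&gt; above. A self-contained sketch with plain async steps — not PicoFlow's actual helper:&lt;/p&gt;

```python
import asyncio

def retry(step, attempts=3, delay=0.0):
    # Re-run an async step until it succeeds or attempts are exhausted.
    async def run(ctx):
        for i in range(attempts):
            try:
                return await step(ctx)
            except Exception:
                if i == attempts - 1:
                    raise  # out of attempts: surface the real error
                await asyncio.sleep(delay)
    return run

# Stand-in step that fails twice before succeeding.
calls = {"n": 0}

async def flaky(ctx):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return {**ctx, "output": "ok"}

result = asyncio.run(retry(flaky)({}))
print(result["output"], calls["n"])  # ok 3
```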

&lt;p&gt;So I turned it into a small library instead of another private utility module.&lt;/p&gt;

&lt;p&gt;The goal is not to replace big frameworks, but to offer a &lt;strong&gt;simpler option when you don’t need all that machinery.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Repository:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/the-picoflow/picoflow" rel="noopener noreferrer"&gt;https://github.com/the-picoflow/picoflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s small, readable, and designed to be easy to modify if needed.&lt;/p&gt;

&lt;p&gt;Feedback and design discussions are very welcome — especially around:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DSL ergonomics&lt;/li&gt;
&lt;li&gt;control-flow helpers&lt;/li&gt;
&lt;li&gt;tracing and debugging hooks&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Agent frameworks are getting more powerful, but also more complex.&lt;/p&gt;

&lt;p&gt;I think there’s still room for tools that prioritize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;readability&lt;/li&gt;
&lt;li&gt;composability&lt;/li&gt;
&lt;li&gt;low cognitive overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes, &lt;strong&gt;less framework is more agent.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
    </item>
  </channel>
</rss>
