<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fumio SAGAWA</title>
    <description>The latest articles on DEV Community by Fumio SAGAWA (@albatrosary).</description>
    <link>https://dev.to/albatrosary</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F420242%2Fe71c659c-a5af-47cd-9127-2cfd3faf65b3.jpg</url>
      <title>DEV Community: Fumio SAGAWA</title>
      <link>https://dev.to/albatrosary</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/albatrosary"/>
    <language>en</language>
    <item>
      <title>Stopped writing code myself. multi-ai-cli is better.
Multi-LLM orchestration in terminal:
• Blackboard (files)
• @sequence (parallel/sequential)
• @pause for drift control
Lightweight, no heavy frameworks. End tab hell!</title>
      <dc:creator>Fumio SAGAWA</dc:creator>
      <pubDate>Tue, 24 Mar 2026 06:51:25 +0000</pubDate>
      <link>https://dev.to/albatrosary/stopped-writing-code-myself-multi-ai-cli-is-better-multi-llm-orchestration-in-terminal--bl7</link>
      <guid>https://dev.to/albatrosary/stopped-writing-code-myself-multi-ai-cli-is-better-multi-llm-orchestration-in-terminal--bl7</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/albatrosary/i-built-multi-ai-cli-to-kill-browser-tab-hell-true-multi-llm-orchestration-blackboard-memory-in-3mc6" class="crayons-story__hidden-navigation-link"&gt;I Built multi-ai-cli to Kill Browser Tab Hell: True Multi-LLM Orchestration + Blackboard Memory in Your Terminal (v0.13.0)&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/albatrosary" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F420242%2Fe71c659c-a5af-47cd-9127-2cfd3faf65b3.jpg" alt="albatrosary profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/albatrosary" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Fumio SAGAWA
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Fumio SAGAWA
                
              
              &lt;div id="story-author-preview-content-3336573" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/albatrosary" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F420242%2Fe71c659c-a5af-47cd-9127-2cfd3faf65b3.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Fumio SAGAWA&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/albatrosary/i-built-multi-ai-cli-to-kill-browser-tab-hell-true-multi-llm-orchestration-blackboard-memory-in-3mc6" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Mar 11&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/albatrosary/i-built-multi-ai-cli-to-kill-browser-tab-hell-true-multi-llm-orchestration-blackboard-memory-in-3mc6" id="article-link-3336573"&gt;
          I Built multi-ai-cli to Kill Browser Tab Hell: True Multi-LLM Orchestration + Blackboard Memory in Your Terminal (v0.13.0)
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag crayons-tag--filled  " href="/t/showdev"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;showdev&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/productivity"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;productivity&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/python"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;python&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
            &lt;a href="https://dev.to/albatrosary/i-built-multi-ai-cli-to-kill-browser-tab-hell-true-multi-llm-orchestration-blackboard-memory-in-3mc6#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            6 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>ai</category>
      <category>productivity</category>
      <category>python</category>
      <category>showdev</category>
    </item>
    <item>
      <title>I Built multi-ai-cli to Kill Browser Tab Hell: True Multi-LLM Orchestration + Blackboard Memory in Your Terminal (v0.13.0)</title>
      <dc:creator>Fumio SAGAWA</dc:creator>
      <pubDate>Wed, 11 Mar 2026 01:26:30 +0000</pubDate>
      <link>https://dev.to/albatrosary/i-built-multi-ai-cli-to-kill-browser-tab-hell-true-multi-llm-orchestration-blackboard-memory-in-3mc6</link>
      <guid>https://dev.to/albatrosary/i-built-multi-ai-cli-to-kill-browser-tab-hell-true-multi-llm-orchestration-blackboard-memory-in-3mc6</guid>
      <description>&lt;p&gt;GPT-4, Claude 3.5, Gemini... Are you still keeping 10 AI tabs open in your browser, endlessly copy-pasting code between your IDE and chat UIs? &lt;/p&gt;

&lt;p&gt;It's time to end the "Tab Hell."&lt;/p&gt;

&lt;p&gt;Let me give you a slightly hot take: Just changing a system prompt to say "You are a reviewer" while hitting the exact same expensive model backend is &lt;strong&gt;not a true "Multi-Agent" system.&lt;/strong&gt; That’s why I built &lt;strong&gt;&lt;a href="https://github.com/ashiras/multi-ai-cli" rel="noopener noreferrer"&gt;multi-ai-cli&lt;/a&gt;&lt;/strong&gt;. It’s a lightweight, Python-powered orchestrator designed to turn your terminal into a true multi-model battlefield.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g7irbxnkm26ng0azoqp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g7irbxnkm26ng0azoqp.gif" alt="multi-ai-cli terminal demonstration showing parallel AI execution with @sequence" width="600" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The v0.13.0 Highlight: Agent / Engine Separation
&lt;/h2&gt;

&lt;p&gt;Instead of just swapping prompts, we’ve completely separated the physical AI providers (Engines) from their logical roles (Agents). &lt;/p&gt;

&lt;p&gt;The tool auto-generates &lt;code&gt;~/.multi-ai/config.ini&lt;/code&gt; on first run; configure it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example: Mapping Engines to Agents
&lt;/span&gt;&lt;span class="py"&gt;ENGINE.openai_main&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;gpt-4o&lt;/span&gt;
&lt;span class="py"&gt;ENGINE.local_coder&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;qwen2.5-coder:14b  # Yes, Ollama works perfectly!&lt;/span&gt;

&lt;span class="py"&gt;AGENT.gpt.architect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;openai_main&lt;/span&gt;
&lt;span class="py"&gt;AGENT.local.code&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;local_coder&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can route heavy architectural tasks to the smartest cloud models, and offload simple coding tasks to local models. &lt;strong&gt;Spin up Ollama’s &lt;code&gt;qwen2.5-coder:14b&lt;/code&gt; locally, and you get a fully offline, API-key-free multi-AI experience!&lt;/strong&gt; No vendor lock-in. You can freely switch to the optimal backend for each specific role.&lt;/p&gt;
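&lt;p&gt;&lt;em&gt;For illustration, here is how the two agents defined above might be addressed in a session. The agent names come from the config; the &lt;code&gt;@agent.role&lt;/code&gt; call style is inferred from the examples later in this post, so check the README for the exact syntax:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Heavy design work goes to the cloud engine (ENGINE.openai_main = gpt-4o)
% @gpt.architect "Design the module layout for a small CLI orchestrator"

# Routine coding goes to the local engine (ENGINE.local_coder = qwen2.5-coder:14b, via Ollama)
% @local.code "Write a helper that parses config.ini into a dict"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;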

&lt;h2&gt;
  
  
  Core Concept: "Blackboard" Memory &amp;amp; Avoiding API Bankruptcy
&lt;/h2&gt;

&lt;p&gt;When orchestrating multiple AIs, if you keep feeding a bloated conversation history to the APIs, &lt;strong&gt;you will face API bankruptcy in no time.&lt;/strong&gt; To solve this, I split the memory into two layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🧠 Short-Term Memory (&lt;code&gt;@scrub&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My most-used command. It flushes the messy conversation history instantly while keeping the Persona (system prompt) intact. It stops the AI from hallucinating on old context and saves your wallet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% @sequence &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Editor prompt captured &lt;span class="o"&gt;(&lt;/span&gt;182 chars, 13 lines&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="nt"&gt;---&lt;/span&gt; Preview &lt;span class="nt"&gt;---&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  @gpt Remember exactly this token: ZEBRA-9182
&lt;span class="o"&gt;||&lt;/span&gt;
  @gemini Remember exactly this token: ZEBRA-9182
&lt;span class="o"&gt;]&lt;/span&gt;
-&amp;gt; @scrub gpt -&amp;gt;
&lt;span class="o"&gt;[&lt;/span&gt;
  @gpt What is the token?
&lt;span class="o"&gt;||&lt;/span&gt;
  @gemini What is the token?
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt; End Preview &lt;span class="nt"&gt;---&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Sequence Execution: 3 steps detected.
&lt;span class="o"&gt;==================================================&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 1/3 &lt;span class="o"&gt;[&lt;/span&gt;PARALLEL: 2 tasks]...
    Task 1: @gpt Remember exactly this token: ZEBRA-9182
    Task 2: @gemini Remember exactly this token: ZEBRA-9182

&lt;span class="nt"&gt;---&lt;/span&gt; GPT &lt;span class="nt"&gt;---&lt;/span&gt;
Okay — I’ll remember this token exactly:

ZEBRA-9182


&lt;span class="nt"&gt;---&lt;/span&gt; Gemini &lt;span class="nt"&gt;---&lt;/span&gt;
I have memorized the token exactly: &lt;span class="k"&gt;**&lt;/span&gt;ZEBRA-9182&lt;span class="k"&gt;**&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; 

Just &lt;span class="nb"&gt;let &lt;/span&gt;me know whenever you need me to recall it!

&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 1/3 completed &lt;span class="o"&gt;(&lt;/span&gt;all parallel tasks &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="nt"&gt;--------------------------------------------------&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 2/3...
    Command: @scrub gpt
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; GPT memory scrubbed.
&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 2/3 completed successfully.
&lt;span class="nt"&gt;--------------------------------------------------&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 3/3 &lt;span class="o"&gt;[&lt;/span&gt;PARALLEL: 2 tasks]...
    Task 1: @gpt What is the token?
    Task 2: @gemini What is the token?

&lt;span class="nt"&gt;---&lt;/span&gt; GPT &lt;span class="nt"&gt;---&lt;/span&gt;
Which token &lt;span class="k"&gt;do &lt;/span&gt;you mean?

If you mean:
- API token: I can’t see your secrets or account tokens.
- A “token” &lt;span class="k"&gt;in &lt;/span&gt;text/LLMs: it’s a chunk of text a model processes, often a word or part of a word.
- Auth/session token &lt;span class="k"&gt;in &lt;/span&gt;an app: it’s a credential used to prove identity.

Tell me the context and I’ll answer precisely.


&lt;span class="nt"&gt;---&lt;/span&gt; Gemini &lt;span class="nt"&gt;---&lt;/span&gt;
The token is ZEBRA-9182.

&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 3/3 completed &lt;span class="o"&gt;(&lt;/span&gt;all parallel tasks &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;==================================================&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✓] Sequence Execution complete. All 3 steps succeeded.
% 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;💾 Long-Term Blackboard Memory (&lt;code&gt;-r&lt;/code&gt; / &lt;code&gt;-w&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Save the AI's output to local files (&lt;code&gt;-w&lt;/code&gt;), and feed them into different models later (&lt;code&gt;-r&lt;/code&gt;). The real magic here is &lt;strong&gt;State Recovery&lt;/strong&gt;. If an automated pipeline fails halfway through, you don't have to start over. You just read the last saved file and design a new flow to recover from that exact point.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% @sequence &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Editor prompt captured &lt;span class="o"&gt;(&lt;/span&gt;225 chars, 7 lines&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="nt"&gt;---&lt;/span&gt; Preview &lt;span class="nt"&gt;---&lt;/span&gt;

@sh &lt;span class="s2"&gt;"echo '&amp;lt;p&amp;gt;Hello World&amp;lt;/p&amp;gt;'"&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; raw.html
-&amp;gt;
@gpt &lt;span class="s2"&gt;"Extract the text from this HTML"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; raw.html &lt;span class="nt"&gt;-w&lt;/span&gt; text.txt
-&amp;gt;
@claude &lt;span class="s2"&gt;"Translate this text into Japanese"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; text.txt
-&amp;gt;
@gemini &lt;span class="s2"&gt;"Translate this text into French"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; text.txt
&lt;span class="nt"&gt;---&lt;/span&gt; End Preview &lt;span class="nt"&gt;---&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Sequence Execution: 4 steps detected.
&lt;span class="o"&gt;==================================================&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 1/4...
    Command: @sh &lt;span class="s1"&gt;'echo '&lt;/span&gt;&lt;span class="s2"&gt;"'"&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;p&amp;gt;Hello World&amp;lt;/p&amp;gt;'&lt;/span&gt;&lt;span class="s2"&gt;"'"&lt;/span&gt;&lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; raw.html
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; @sh: Executing: &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;p&amp;gt;Hello World&amp;lt;/p&amp;gt;'&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✓] @sh: SUCCESS &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;code: 0, 12.0ms&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt; stdout &lt;span class="nt"&gt;---&lt;/span&gt;
&amp;lt;p&amp;gt;Hello World&amp;lt;/p&amp;gt;
&lt;span class="nt"&gt;---&lt;/span&gt; end stdout &lt;span class="nt"&gt;---&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; @sh: Artifact saved to &lt;span class="s1"&gt;'raw.html'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;format: text&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 1/4 completed successfully.
&lt;span class="nt"&gt;--------------------------------------------------&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 2/4...
    Command: @gpt &lt;span class="s1"&gt;'Extract the text from this HTML'&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; raw.html &lt;span class="nt"&gt;-w&lt;/span&gt; text.txt
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Result saved to &lt;span class="s1"&gt;'text.txt'&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;mode: raw&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;                                                                            
&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 2/4 completed successfully.
&lt;span class="nt"&gt;--------------------------------------------------&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 3/4...
    Command: @claude &lt;span class="s1"&gt;'Translate this text into Japanese'&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; text.txt

&lt;span class="nt"&gt;---&lt;/span&gt; Claude &lt;span class="nt"&gt;---&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;File: text.txt] &lt;span class="nt"&gt;---&lt;/span&gt;
こんにちは世界
&lt;span class="nt"&gt;---&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;End of File: text.txt] &lt;span class="nt"&gt;---&lt;/span&gt;

&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 3/4 completed successfully.
&lt;span class="nt"&gt;--------------------------------------------------&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="k"&gt;*&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt; Executing Step 4/4...
    Command: @gemini &lt;span class="s1"&gt;'Translate this text into French'&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; text.txt

&lt;span class="nt"&gt;---&lt;/span&gt; Gemini &lt;span class="nt"&gt;---&lt;/span&gt;
Bonjour le monde

&lt;span class="o"&gt;[&lt;/span&gt;✓] Step 4/4 completed successfully.
&lt;span class="o"&gt;==================================================&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;✓] Sequence Execution complete. All 4 steps succeeded.
% 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
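
&lt;p&gt;&lt;em&gt;To sketch the State Recovery idea: suppose the run above had died at the translation steps. The intermediate artifact &lt;code&gt;text.txt&lt;/code&gt; is already on disk, so a new, shorter flow can pick up from it without re-running the shell and extraction steps (hypothetical recovery flow, not tool output):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# text.txt survived the failed run, so resume from the translations only
@sequence
-&amp;gt;
@claude "Translate this text into Japanese" -r text.txt
-&amp;gt;
@gemini "Translate this text into French" -r text.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;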



&lt;h2&gt;
  
  
  The Workflows: Terminal Superiority
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Parallel Orchestration with HAN Syntax (&lt;code&gt;@sequence&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;Write human-AI hybrid workflows like code. Use &lt;code&gt;-&amp;gt;&lt;/code&gt; for sequential steps, and &lt;code&gt;[ A || B ]&lt;/code&gt; for parallel execution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Fetch code from GitHub, run parallel reviews, and merge the results&lt;/span&gt;
@sequence
-&amp;gt; @github.file &lt;span class="nt"&gt;--repo&lt;/span&gt; &lt;span class="s2"&gt;"myproj/repo"&lt;/span&gt; &lt;span class="nt"&gt;--path&lt;/span&gt; &lt;span class="s2"&gt;"app.py"&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; code.md
-&amp;gt; &lt;span class="o"&gt;[&lt;/span&gt; @claude.review &lt;span class="s2"&gt;"Find bugs"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; code.md &lt;span class="nt"&gt;-w&lt;/span&gt; claude_review.md
  &lt;span class="o"&gt;||&lt;/span&gt; @gemini.plan &lt;span class="s2"&gt;"Optimization ideas"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; code.md &lt;span class="nt"&gt;-w&lt;/span&gt; gemini_opt.md
  &lt;span class="o"&gt;||&lt;/span&gt; @gpt.code &lt;span class="s2"&gt;"Add test cases"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; code.md &lt;span class="nt"&gt;-w&lt;/span&gt; tests.py &lt;span class="o"&gt;]&lt;/span&gt;
-&amp;gt; @gpt &lt;span class="s2"&gt;"Merge the 3 reviews above and create the final version"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
   &lt;span class="nt"&gt;-r&lt;/span&gt; claude_review.md &lt;span class="nt"&gt;-r&lt;/span&gt; gemini_opt.md &lt;span class="nt"&gt;-r&lt;/span&gt; tests.py &lt;span class="nt"&gt;-w&lt;/span&gt; final.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;(💡 Pro Tip: Open another terminal and run &lt;code&gt;tail -f logs/chat.log&lt;/code&gt;. You get a real-time HUD monitoring all AI conversations as they happen! Debugging is an absolute breeze.)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Feed External Commands to AI (&lt;code&gt;@sh&lt;/code&gt;)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example 1: Preprocess text before passing it to an AI agent&lt;/span&gt;
@sh &lt;span class="s2"&gt;"cat raw.html | sed 's/&amp;lt;[^&amp;gt;]*&amp;gt;//g'"&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; text.txt

&lt;span class="c"&gt;# Example 2: Inspect local project files or command output&lt;/span&gt;
@sh &lt;span class="s2"&gt;"ls -la src"&lt;/span&gt;
@sh &lt;span class="s2"&gt;"git diff --stat"&lt;/span&gt;

&lt;span class="c"&gt;# Example 3: Run local scripts or test suites seamlessly within your workflow&lt;/span&gt;
@sh &lt;span class="s2"&gt;"python scripts/build_index.py"&lt;/span&gt;
@sh &lt;span class="s2"&gt;"pytest tests/"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the ultimate terminal advantage that browsers can't touch. Pipe the output of any CLI tool directly into the AI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example 4: Run linters and let Claude fix the errors&lt;/span&gt;
@sh &lt;span class="s2"&gt;"flake8 app.py"&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; lint.md
-&amp;gt;
@claude &lt;span class="s2"&gt;"Fix all these lint errors"&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; lint.md &lt;span class="nt"&gt;-r&lt;/span&gt; app.py &lt;span class="nt"&gt;-w&lt;/span&gt; fixed.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The "Read-Only" Philosophy: Preventing Repo Disasters
&lt;/h2&gt;

&lt;p&gt;Even with all this power, our adapters (like GitHub and Figma) are strictly &lt;strong&gt;Read-Only&lt;/strong&gt;. This is a deliberate safety-by-design choice.&lt;/p&gt;

&lt;p&gt;We all dread the nightmare of an autonomous agent going rogue and &lt;code&gt;git push&lt;/code&gt;-ing broken code while you're getting coffee.&lt;br&gt;
&lt;strong&gt;Analysis and generation are the AI's job. The final Write (commit) is your responsibility.&lt;/strong&gt;&lt;br&gt;
By keeping the human in the loop, you maintain absolute control over your codebase. This makes it a tool you can actually trust in a real-world workflow.&lt;/p&gt;
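&lt;p&gt;&lt;em&gt;In practice the "final Write" is just ordinary Git hygiene: the AI leaves its output in a file, and you inspect and commit it yourself (illustrative commands, not part of the tool):&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The AI wrote fixed.py (see the lint example above); the human reviews and commits
diff -u app.py fixed.py                  # read every change yourself
mv fixed.py app.py
git add app.py
git commit -m "Apply AI-suggested lint fixes (reviewed)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;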
&lt;h2&gt;
  
  
  Try It Out! 🚀
&lt;/h2&gt;

&lt;p&gt;multi-ai-cli is currently at v0.13.0. To avoid registry bloat and keep things blazingly fast, you have two hacker-friendly ways to get started:&lt;/p&gt;
&lt;h3&gt;
  
  
  Option 1: The Lightning-Fast Source Way (Recommended for Python users)
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:ashiras/multi-ai-cli.git
&lt;span class="nb"&gt;cd &lt;/span&gt;multi-ai-cli
uv &lt;span class="nb"&gt;sync&lt;/span&gt;                     &lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
uv run multi-ai &lt;span class="nt"&gt;--version&lt;/span&gt;   &lt;span class="c"&gt;# Verify the installation&lt;/span&gt;

&lt;span class="c"&gt;# Run it like this from now on!&lt;/span&gt;
&lt;span class="c"&gt;# uv run multi-ai "@gpt Hello world"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Option 2: The Clean Binary Way (No Python Required)
&lt;/h3&gt;

&lt;p&gt;Don't want to mess with environments at all? You can download the latest pre-built binary directly from our GitHub Releases (macOS / Linux / Windows supported). It's a zero-dependency single file!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example for macOS / Linux&lt;/span&gt;
curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; multi-ai https://github.com/ashiras/multi-ai-cli/releases/download/v0.13.0/multi-ai
&lt;span class="nb"&gt;chmod&lt;/span&gt; +x multi-ai
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;multi-ai /usr/local/bin/   &lt;span class="c"&gt;# Or to ~/bin/ etc.&lt;/span&gt;
multi-ai &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;(Note: Please check the Releases page for the exact URL for your specific OS!)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I can never go back to juggling 10 AI browser tabs. Upgrade your terminal into the ultimate multi-AI battlefield today!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/ashiras/multi-ai-cli" rel="noopener noreferrer"&gt;ashiras/multi-ai-cli&lt;/a&gt; (Stars and issues are highly appreciated!)&lt;/li&gt;
&lt;li&gt;Feedback: What adapters do you want to see next? Jira? Notion? Let me know in the comments below! 👇&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>python</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
