<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Srirajasekhar Koritala</title>
    <description>The latest articles on DEV Community by Srirajasekhar Koritala (@srirajasekhar_koritala_4c).</description>
    <link>https://dev.to/srirajasekhar_koritala_4c</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3740563%2Ffe877d38-37c7-469b-87b4-4ef36b8ab872.png</url>
      <title>DEV Community: Srirajasekhar Koritala</title>
      <link>https://dev.to/srirajasekhar_koritala_4c</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/srirajasekhar_koritala_4c"/>
    <language>en</language>
    <item>
      <title>Govern Any AI Agent in 5 Minutes: A Technical Guide</title>
      <dc:creator>Srirajasekhar Koritala</dc:creator>
      <pubDate>Tue, 03 Mar 2026 16:53:39 +0000</pubDate>
      <link>https://dev.to/srirajasekhar_koritala_4c/govern-any-ai-agent-in-5-minutes-a-technical-guide-19ep</link>
      <guid>https://dev.to/srirajasekhar_koritala_4c/govern-any-ai-agent-in-5-minutes-a-technical-guide-19ep</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf8uqsmo0pic95qm22m8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf8uqsmo0pic95qm22m8.png" alt="Govern Any AI Agent in 5 Minutes" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Unlock Enterprise AI Automation in 5 Minutes
&lt;/h2&gt;

&lt;p&gt;Your team is using AI agents — OpenClaw, Claude Code, LangChain, custom tools. They're automating incredible things: writing code, managing infrastructure, processing data, driving workflows.&lt;/p&gt;

&lt;p&gt;The capability is real. Now make it enterprise-ready.&lt;/p&gt;

&lt;p&gt;This guide shows you how to connect any AI agent to the AICtrlNet Runtime Gateway in 5 minutes — so your team keeps the automation power, and your enterprise gets the visibility, control, and audit trails it needs to say yes.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You'll Get
&lt;/h2&gt;

&lt;p&gt;After completing this guide, your AI agents will be enterprise-ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Every agent action&lt;/strong&gt; is evaluated before execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk scores&lt;/strong&gt; (0.0-1.0) prioritize what needs human attention&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ALLOW/DENY/ESCALATE&lt;/strong&gt; decisions — AI keeps moving, governance runs inline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete audit trail&lt;/strong&gt; — the answer to every compliance question&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-click suspend&lt;/strong&gt; — immediate control when you need it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: your team automates faster because governance removes the objections that block deployment.&lt;/p&gt;
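
&lt;p&gt;As a mental model, each evaluated action produces an audit entry roughly like the one below. The field names here are illustrative assumptions, not the gateway's actual schema:&lt;/p&gt;

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative audit-trail entry. Field names are assumptions,
# not the Runtime Gateway's real record schema.
@dataclass
class AuditRecord:
    action_type: str   # e.g. "shell_command"
    payload: str       # what the agent asked to do
    risk_score: float  # 0.0 (benign) to 1.0 (critical)
    decision: str      # "ALLOW", "DENY", or "ESCALATE"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord("shell_command", "ls -la /tmp", 0.1, "ALLOW")
print(record.decision)  # ALLOW
```

&lt;p&gt;Every ALLOW, DENY, and ESCALATE decision lands in the trail with its risk score and timestamp, which turns a compliance question into a lookup rather than an investigation.&lt;/p&gt;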




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An autonomous AI agent running (OpenClaw, Claude Code, custom agent — any of them)&lt;/li&gt;
&lt;li&gt;Python 3.9+&lt;/li&gt;
&lt;li&gt;An AICtrlNet account (&lt;a href="https://hitlai.net/trial" rel="noopener noreferrer"&gt;free trial&lt;/a&gt; works)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Install the SDK (30 seconds)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;aictrlnet-runtime-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Get Your API Credentials (60 seconds)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log into &lt;a href="https://hitlai.net" rel="noopener noreferrer"&gt;hitlai.net&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Settings → API Keys&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create API Key&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Copy the key (you won't see it again)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Set it as an environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AICTRLNET_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-api-key-here"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AICTRLNET_API_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://api.aictrlnet.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Register Your Agent (60 seconds)
&lt;/h2&gt;

&lt;p&gt;Create a file called &lt;code&gt;register_agent.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aictrlnet_runtime_sdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;AsyncAICtrlNetClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;RuntimeRegistrationRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;AICtrlNetConfig&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AICtrlNetConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_env&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AsyncAICtrlNetClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Register your agent — works with any type
&lt;/span&gt;    &lt;span class="n"&gt;registration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;RuntimeRegistrationRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;runtime_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openclaw&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# or "claude_code", "cursor", "langchain", "custom"
&lt;/span&gt;        &lt;span class="n"&gt;instance_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-dev-machine&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;engineering-team&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;environment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;development&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Registered! Runtime ID: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;registration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;runtime_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.aictrlnet_runtime_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;registration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;runtime_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python register_agent.py
&lt;span class="c"&gt;# Registered! Runtime ID: rt_abc123...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;runtime_type&lt;/code&gt; tells the gateway what kind of agent it's governing — but the governance pipeline is the same regardless. ALLOW/DENY/ESCALATE works identically whether the action came from OpenClaw or your custom Python script.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Wrap Agent Actions with Governance (120 seconds)
&lt;/h2&gt;

&lt;p&gt;This is the key part. Wrap your agent's action execution with the governance gateway.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;governed_agent.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aictrlnet_runtime_sdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;AsyncAICtrlNetClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;GovernanceGateway&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;AICtrlNetConfig&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AICtrlNetConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_env&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AsyncAICtrlNetClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.aictrlnet_runtime_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;runtime_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Create governance gateway
&lt;/span&gt;    &lt;span class="n"&gt;gateway&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GovernanceGateway&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;runtime_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;runtime_id&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Your actual execution logic (whatever your agent does)
&lt;/span&gt;    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute_command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shell&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;capture_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stdout&lt;/span&gt;

    &lt;span class="c1"&gt;# Wrap with governance — every call now gets evaluated
&lt;/span&gt;    &lt;span class="n"&gt;governed_execute&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wrap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;action_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shell_command&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;execute_command&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# This command will be evaluated BEFORE execution
&lt;/span&gt;    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;governed_execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ls -la /tmp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Result: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ActionDenied&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Denied: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reason&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ActionEscalated&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Needs approval: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;approval_url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Every action your agent takes now passes through the Runtime Gateway before executing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: View in Dashboard (30 seconds)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://hitlai.net" rel="noopener noreferrer"&gt;hitlai.net&lt;/a&gt; and open the Runtime Gateway&lt;/li&gt;
&lt;li&gt;You'll see your registered agent instance&lt;/li&gt;
&lt;li&gt;Click on it to see:

&lt;ul&gt;
&lt;li&gt;All actions evaluated&lt;/li&gt;
&lt;li&gt;Risk scores&lt;/li&gt;
&lt;li&gt;Decisions (ALLOW / DENY / ESCALATE)&lt;/li&gt;
&lt;li&gt;Full audit trail&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What Just Happened
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Agent wants to run: ls -la /tmp
                │
                ▼
  ┌─────────────────────────────────┐
  │   AICtrlNet Runtime Gateway     │
  │                                 │
  │   1. Receive action request     │
  │   2. Evaluate through pipeline  │
  │      (Quality, Governance,      │
  │       Security, Monitoring)     │
  │   3. Calculate risk score       │
  │   4. Apply policy               │
  │   5. Log to audit trail         │
  └─────────────────────────────────┘
                │
        ┌───────┼───────┐
        ▼       ▼       ▼
     ALLOW    DENY   ESCALATE
   (execute) (block) (route to
                      human)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is tool-agnostic. The gateway doesn't know or care which agent generated the action. It evaluates the action itself — what it does, what it touches, what the risk level is.&lt;/p&gt;




&lt;h2&gt;
  
  
  Default Policies
&lt;/h2&gt;

&lt;p&gt;Out of the box, the gateway uses sensible defaults:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Action Type&lt;/th&gt;
&lt;th&gt;Default Policy&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Read operations&lt;/td&gt;
&lt;td&gt;ALLOW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Write to temp directories&lt;/td&gt;
&lt;td&gt;ALLOW&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Write elsewhere&lt;/td&gt;
&lt;td&gt;ESCALATE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Network requests&lt;/td&gt;
&lt;td&gt;ALLOW with logging&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Destructive commands (rm, drop, delete)&lt;/td&gt;
&lt;td&gt;ESCALATE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Credential access&lt;/td&gt;
&lt;td&gt;DENY&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Customize these in &lt;strong&gt;Settings → Governance Policies&lt;/strong&gt; — per department, per team, per agent type, per risk level.&lt;/p&gt;




&lt;h2&gt;
  
  
  Scaling to Your Whole Team
&lt;/h2&gt;

&lt;p&gt;To roll this out to multiple developers or multiple agent types, register each agent in a loop:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;aictrlnet_runtime_sdk&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AsyncAICtrlNetClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AICtrlNetConfig&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;register_team&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AICtrlNetConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_env&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AsyncAICtrlNetClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alice-openclaw&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openclaw&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alice@company.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bob-claude-code&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude_code&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bob@company.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;carol-custom&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;custom&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;carol@company.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ci-langchain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langchain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;devops@company.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;reg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;RuntimeRegistrationRequest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;runtime_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;instance_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;owner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;department&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;engineering&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Registered &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;reg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;runtime_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;register_team&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Different agents, different owners, same governance pipeline. One dashboard to see everything.&lt;/p&gt;




&lt;h2&gt;
  
  
  Autonomy Levels per Department
&lt;/h2&gt;

&lt;p&gt;The Runtime Gateway supports per-department autonomy policies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Engineering&lt;/strong&gt;: Near-full autonomy for dev environments, supervised for production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal&lt;/strong&gt;: AI-assisted only — AI drafts, humans approve everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Marketing&lt;/strong&gt;: Full automation for content workflows, supervised for budget decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support&lt;/strong&gt;: Full automation for Tier 1 tickets, supervised for enterprise customers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Configure this in the dashboard or via the policy API. Each department gets the autonomy level that matches their risk tolerance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set up team policies&lt;/strong&gt; — define what should ALLOW, DENY, or ESCALATE per team&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure notifications&lt;/strong&gt; — get Slack/email alerts for ESCALATE decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable ML risk scoring&lt;/strong&gt; — let the system learn your patterns (Business tier)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connect more agents&lt;/strong&gt; — the same gateway works for every tool your team adopts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explore three-layer reach&lt;/strong&gt; — Platform adapters (10,000+ tools), self-extending agents (any API), browser automation (any web app)&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  About AICtrlNet
&lt;/h2&gt;

&lt;p&gt;AICtrlNet provides AI-powered universal automation with governance built in. Three layers of automation reach — 10,000+ tools through platform adapters, any API through self-extending agents, any web app through browser automation. Whether you're running OpenClaw, Claude Code, or custom agents, the Runtime Gateway gives you the governance that lets your enterprise say yes.&lt;/p&gt;

&lt;p&gt;AI that automates anything. Governance for everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start your free 14-day trial&lt;/strong&gt;: &lt;a href="https://hitlai.net/trial" rel="noopener noreferrer"&gt;hitlai.net/trial&lt;/a&gt;&lt;/p&gt;




&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open Source&lt;/strong&gt;: &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;github.com/Bodaty/aictrlnet-community&lt;/a&gt; — Runtime Gateway, MIT licensed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Free Trial&lt;/strong&gt;: &lt;a href="https://hitlai.net/trial" rel="noopener noreferrer"&gt;hitlai.net/trial&lt;/a&gt; — 14 days, full governance features&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://docs.aictrlnet.com" rel="noopener noreferrer"&gt;docs.aictrlnet.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Questions? Open a discussion on &lt;a href="https://github.com/Bodaty/aictrlnet-community/discussions" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or reach out to &lt;a href="mailto:support@aictrlnet.com"&gt;support@aictrlnet.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>governance</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The OpenClaw Moment Has Evolved</title>
      <dc:creator>Srirajasekhar Koritala</dc:creator>
      <pubDate>Tue, 03 Mar 2026 16:39:19 +0000</pubDate>
      <link>https://dev.to/srirajasekhar_koritala_4c/the-openclaw-moment-has-evolved-38d</link>
      <guid>https://dev.to/srirajasekhar_koritala_4c/the-openclaw-moment-has-evolved-38d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xkq3zt0adjbwfspsd43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xkq3zt0adjbwfspsd43.png" alt="The OpenClaw Moment Has Evolved" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Something remarkable happened over the past few months. An Austrian engineer named Peter Steinberger built a hobby project called "Clawdbot" in November 2025. By late January 2026, it had evolved into OpenClaw — and amassed over 200,000 GitHub stars.&lt;/p&gt;

&lt;p&gt;OpenClaw isn't just another chatbot. It has "hands." It can execute shell commands, manage local files, and navigate messaging platforms like WhatsApp and Slack with persistent, root-level permissions.&lt;/p&gt;

&lt;p&gt;For the first time, autonomous AI agents have proven they can automate almost anything a developer can do. The capability is real. The productivity gains are extraordinary.&lt;/p&gt;

&lt;p&gt;And now the question isn't whether AI agents can automate your work. It's how your enterprise harnesses that power responsibly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Adoption Explosion — and the Governance Gap
&lt;/h2&gt;

&lt;p&gt;"It's not an isolated, rare thing; it's happening across almost every organization," says Pukar Hamal, CEO of SecurityPal. "There are companies finding engineers who have given OpenClaw access to their devices."&lt;/p&gt;

&lt;p&gt;Cisco's AI Threat &amp;amp; Security Research team called OpenClaw "groundbreaking" from a capability perspective. The productivity gains are real — developers report 10x acceleration on routine tasks.&lt;/p&gt;

&lt;p&gt;But here's the gap: employees are adopting AI automation faster than enterprises can govern it. No visibility into what agents are doing. No audit trails. No way for IT or security teams to know what's happening.&lt;/p&gt;

&lt;p&gt;This isn't a reason to block AI agents. It's a reason to govern them — so your teams get the automation power they want, and your enterprise gets the visibility it needs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Five Takeaways from the OpenClaw Moment
&lt;/h2&gt;

&lt;p&gt;VentureBeat recently published an analysis of what this means for enterprises&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. Here's what stood out — and what it means for anyone building or deploying AI systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. You Need Less Preparation Than You Think
&lt;/h3&gt;

&lt;p&gt;The prevailing wisdom suggested enterprises needed massive infrastructure overhauls and perfectly curated data sets before AI could be useful. OpenClaw shattered that myth.&lt;/p&gt;

&lt;p&gt;"There is a surprising insight there: you actually don't need to do too much preparation," says Tanmai Gopal, Co-founder &amp;amp; CEO at PromptQL. "Everybody thought we needed new software and new AI-native companies to come and do things. It will catalyze more disruption as leadership realizes that we don't actually need to prep so much to get AI to be productive."&lt;/p&gt;

&lt;p&gt;Modern AI models can navigate messy, uncurated data by treating intelligence as a service. The barrier to entry just collapsed.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Governance Enables Adoption, Not the Opposite
&lt;/h3&gt;

&lt;p&gt;Without governance, AI automation stalls at the pilot stage. Without audit trails, compliance blocks deployment. Without risk scoring, every action needs human review — defeating the purpose of automation.&lt;/p&gt;

&lt;p&gt;Organizations like AIUC are already providing certification standards (AIUC-1) that enterprises can run their agents through to qualify for insurance coverage. Governance isn't a tax on AI automation — it's the permission slip that lets enterprises deploy it at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The Security Model Is Broken
&lt;/h3&gt;

&lt;p&gt;Itamar Golan, founder of Prompt Security, put it bluntly: "Treat agents as production infrastructure, not a productivity app: least privilege, scoped tokens, allowlisted actions, strong authentication on every integration, and auditability end-to-end."&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The old security model assumed humans were the actors. When AI agents become the actors — with persistent permissions and autonomous decision-making — everything changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. SaaS Is Being Disrupted (Again)
&lt;/h3&gt;

&lt;p&gt;The 2026 "SaaSpocalypse" saw massive value erased from software indices as investors realized agents could disrupt traditional SaaS models. If an agent can navigate any interface, why pay for specialized software?&lt;/p&gt;

&lt;p&gt;The platforms that survive will be the ones that provide value agents can't replicate: governance, compliance, trust, and human oversight.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. You Can't Stop Your Employees
&lt;/h3&gt;

&lt;p&gt;Brianne Kimmel of Worklife Ventures frames this as a talent retention issue: "People are trying these on evenings and weekends, and it's hard for companies to ensure employees aren't trying the latest technologies."&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Your best engineers will use the best tools. Blocking them doesn't work — they'll find workarounds or leave for companies that enable them.&lt;/p&gt;

&lt;p&gt;The answer isn't blocking. It's governing.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Enterprises Actually Need
&lt;/h2&gt;

&lt;p&gt;Here's what the OpenClaw moment revealed about enterprise requirements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visibility&lt;/strong&gt;: Know what agents are running, what they're doing, and what permissions they have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk Scoring&lt;/strong&gt;: Not all actions are equal. Deleting a test file is different from emailing a client. ML-powered risk assessment helps prioritize human attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-Action Governance&lt;/strong&gt;: Evaluate actions &lt;em&gt;before&lt;/em&gt; they execute, not after. The difference between logging and governance is the difference between knowing what happened and preventing what shouldn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit Trails&lt;/strong&gt;: When compliance asks "who approved this?" you need an answer. Every action, every decision, every override — documented.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Control Spectrum&lt;/strong&gt;: Not every department needs the same level of autonomy. Marketing might run at full speed while Legal stays fully supervised. One size doesn't fit all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suspend and Override&lt;/strong&gt;: When something goes wrong, you need the ability to suspend an agent immediately — across your entire fleet if necessary.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Path Forward
&lt;/h2&gt;

&lt;p&gt;OpenClaw proved that AI can automate anything. The technology is here. The productivity gains are real. The genie isn't going back in the bottle.&lt;/p&gt;

&lt;p&gt;The enterprises that thrive in the agentic era won't be the ones who block AI agents. They'll be the ones who govern them — and deploy them faster because of it.&lt;/p&gt;

&lt;p&gt;They'll give employees the AI automation tools they want — with the visibility, risk management, and audit trails the organization needs.&lt;/p&gt;

&lt;p&gt;They'll treat AI agents as production infrastructure, not toys.&lt;/p&gt;

&lt;p&gt;And they'll recognize that governance isn't the brake on AI automation.&lt;/p&gt;

&lt;p&gt;It's the accelerator — the thing that gets AI past compliance, past legal, past the CTO's desk, and into production.&lt;/p&gt;




&lt;h2&gt;
  
  
  About AICtrlNet
&lt;/h2&gt;

&lt;p&gt;AICtrlNet is AI-powered universal automation with governance built in. Three layers of automation reach — 10,000+ tools through platform adapters, any API through self-extending agents, any web app through browser automation. All governed.&lt;/p&gt;

&lt;p&gt;Whether you're running OpenClaw, Claude Code, LangChain agents, or custom autonomous systems, the Runtime Gateway evaluates every action before execution:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pre-action evaluation&lt;/strong&gt;: ALLOW, DENY, or ESCALATE every action&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ML-powered risk scoring&lt;/strong&gt;: Prioritize human attention where it matters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fleet management&lt;/strong&gt;: Visibility across all agents in your organization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Six phases of autonomy&lt;/strong&gt;: From AI-assisted to fully autonomous — you choose&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suspend and override&lt;/strong&gt;: Immediate control when you need it&lt;/li&gt;
&lt;/ul&gt;
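&lt;p&gt;The pre-action pattern above can be sketched in a few lines of Python. Here &lt;code&gt;evaluate&lt;/code&gt; stands in for a Runtime Gateway call and &lt;code&gt;request_approval&lt;/code&gt; for a human review queue; both interfaces are my illustrative sketches, not the product's actual API:&lt;/p&gt;

```python
def governed_execute(action, evaluate, execute, request_approval):
    """Run one agent action through a pre-action governance check.
    evaluate stands in for a Runtime Gateway call and request_approval for a
    human review queue; both interfaces are hypothetical sketches."""
    decision = evaluate(action)
    if decision == "ALLOW":
        return execute(action)
    if decision == "ESCALATE":
        return request_approval(action)    # a human decides; may run later
    raise PermissionError("denied by policy: {}".format(action))
```

&lt;p&gt;Only the ALLOW branch executes immediately; ESCALATE hands the action to a human, and anything else fails closed.&lt;/p&gt;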

&lt;p&gt;AI that automates anything. Governance for everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with a free 14-day trial&lt;/strong&gt; of the Business edition. The Community Edition is also available as open source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start your free trial&lt;/strong&gt;: &lt;a href="https://aictrlnet.com/openclaw" rel="noopener noreferrer"&gt;aictrlnet.com/openclaw&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Bobby Koritala is the founder of AICtrlNet and holds multiple AI patents. He's spent 9 years building AI systems in healthcare, finance, and logistics.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;VentureBeat. (2026). "What the OpenClaw moment means for enterprises: 5 big takeaways." &lt;a href="https://venturebeat.com/technology/what-the-openclaw-moment-means-for-enterprises-5-big-takeaways" rel="noopener noreferrer"&gt;venturebeat.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;VentureBeat. (2026). "OpenClaw proves agentic AI works. It also proves your security model doesn't." &lt;a href="https://venturebeat.com/security/openclaw-agentic-ai-security-risk-ciso-guide" rel="noopener noreferrer"&gt;venturebeat.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>governance</category>
      <category>enterprise</category>
    </item>
    <item>
      <title>OpenAI Just Validated the Autonomous Agent Category — Here's What It Means</title>
      <dc:creator>Srirajasekhar Koritala</dc:creator>
      <pubDate>Fri, 20 Feb 2026 19:36:17 +0000</pubDate>
      <link>https://dev.to/srirajasekhar_koritala_4c/openai-just-validated-the-autonomous-agent-category-heres-what-it-means-ebh</link>
      <guid>https://dev.to/srirajasekhar_koritala_4c/openai-just-validated-the-autonomous-agent-category-heres-what-it-means-ebh</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on the &lt;a href="https://aictrlnet.com/blog/2026/02/openai-validates-autonomous-agent-category/" rel="noopener noreferrer"&gt;AICtrlNet blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Two days ago, Peter Steinberger — the creator of OpenClaw, the fastest-growing open source project in GitHub history — &lt;a href="https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/" rel="noopener noreferrer"&gt;joined OpenAI&lt;/a&gt;&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. Sam Altman personally recruited him. Mark Zuckerberg had already reached out via WhatsApp.&lt;/p&gt;

&lt;p&gt;OpenClaw, which went from zero to over 200,000 stars in under three months&lt;sup id="fnref2"&gt;2&lt;/sup&gt;, is transitioning to an independent open source foundation with OpenAI's backing. Steinberger's new role: driving "the next generation of personal agents."&lt;/p&gt;

&lt;p&gt;This isn't just a talent acquisition. It's a signal — and a warning.&lt;/p&gt;

&lt;h2&gt;
  
  
  What OpenAI Is Really Saying
&lt;/h2&gt;

&lt;p&gt;Sam Altman has been saying it for months: "The future is extremely multi-agent"&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. But hiring Steinberger makes it concrete. OpenAI isn't just building models — they're betting that autonomous agents, the kind that have root-level access to your machine and can execute shell commands, browse the web, and manage files on your behalf, are the next platform shift.&lt;/p&gt;

&lt;p&gt;And they're right. The AI agent market is projected to grow from $7.8 billion in 2025 to $52.6 billion by 2030 — a 46.3% CAGR&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. Gartner predicts 40% of enterprise applications will feature task-specific AI agents by end of 2026, up from less than 5% today&lt;sup id="fnref4"&gt;4&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The agents are coming. The question is what comes next.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Governance Gap Nobody's Closing
&lt;/h2&gt;

&lt;p&gt;Here's the uncomfortable truth that the Steinberger hire exposes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The industry is investing billions in making agents more capable. Almost nobody is investing in making them governable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The numbers tell the story. Microsoft's Cyber Pulse report, published just five days before the Steinberger announcement, found that &lt;strong&gt;over 80% of Fortune 500 companies are already running active AI agents&lt;/strong&gt; — but 29% of employees admit to using unsanctioned agents, and fewer than half of enterprises have implemented specific AI security safeguards&lt;sup id="fnref5"&gt;5&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Gravitee's State of AI Agent Security survey made it even more concrete: only &lt;strong&gt;14.4% of organizations&lt;/strong&gt; report that all their AI agents go live with full security and IT approval. More than half of all agents operate without any security oversight or logging. And 88% of organizations have confirmed or suspected security incidents related to AI agents&lt;sup id="fnref6"&gt;6&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Read that again: The vast majority of Fortune 500 companies have AI agents in production, and almost none of them have adequate governance in place.&lt;/p&gt;

&lt;p&gt;This is Shadow AI at scale. And unlike Shadow IT — where the worst case was an unauthorized SaaS subscription — Shadow AI agents can read your codebase, send emails on your behalf, execute system commands, and access sensitive data. With root permissions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Market Already Knows This Is Real
&lt;/h2&gt;

&lt;p&gt;Three days before Steinberger joined OpenAI, Proofpoint acquired Acuvity — a startup focused on AI security and governance for the "agentic workspace"&lt;sup id="fnref7"&gt;7&lt;/sup&gt;. The deal explicitly cited governance for tools like OpenClaw and MCP servers.&lt;/p&gt;

&lt;p&gt;This wasn't a speculative acquisition. This was a major cybersecurity company saying: the governance market for autonomous agents is real, it's urgent, and it's big enough to acquire for.&lt;/p&gt;

&lt;p&gt;And they're not alone. The Agentic AI Foundation (AAIF) recently formed under the Linux Foundation to provide vendor-neutral governance for MCP, A2A, and other agent protocols. When foundations start forming, it means the category is no longer experimental — it's infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for Every Enterprise
&lt;/h2&gt;

&lt;p&gt;Here's what the OpenClaw-to-OpenAI pipeline means in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Autonomous agents are about to get corporate backing.&lt;/strong&gt; OpenClaw was already the fastest-growing project in GitHub history as one developer's side project. Now it has OpenAI's resources behind it. Expect adoption to accelerate, not slow down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. "Block it" is not a strategy.&lt;/strong&gt; As Worklife Ventures' Brianne Kimmel noted, employees are already "trying these on evenings and weekends, and it's hard for companies to ensure employees aren't trying the latest technologies"&lt;sup id="fnref8"&gt;8&lt;/sup&gt;. Blocking doesn't work — they'll find workarounds or leave for companies that let them move fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The security model needs to be rebuilt.&lt;/strong&gt; As Prompt Security's Itamar Golan put it: "Treat agents as production infrastructure, not a productivity app: least privilege, scoped tokens, allowlisted actions, strong authentication on every integration, and auditability end-to-end"&lt;sup id="fnref9"&gt;9&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Pre-action governance is the new standard.&lt;/strong&gt; Logging what agents did after the fact isn't governance — it's forensics. Real governance means evaluating every action &lt;em&gt;before&lt;/em&gt; it executes.&lt;/p&gt;
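&lt;p&gt;The difference is easy to see in code: a logger records a call after it runs, while a pre-action check refuses it up front. A generic sketch of the latter in Python (the policy engine is injected and hypothetical):&lt;/p&gt;

```python
import functools

def governed(evaluate):
    """Decorator that evaluates a tool call BEFORE it executes.
    evaluate is a stand-in for whatever policy engine you run; this shows
    the pattern, not a specific product's API."""
    def wrap(tool):
        @functools.wraps(tool)
        def checked(*args, **kwargs):
            decision = evaluate(tool.__name__, args, kwargs)
            if decision != "ALLOW":
                raise PermissionError("{} blocked: {}".format(tool.__name__, decision))
            return tool(*args, **kwargs)   # only reached on ALLOW
        return checked
    return wrap
```

&lt;p&gt;The same decorator can wrap every tool an agent exposes, so no call path bypasses the check.&lt;/p&gt;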

&lt;h2&gt;
  
  
  The Tool-Agnostic Imperative
&lt;/h2&gt;

&lt;p&gt;Here's the thing most people miss: &lt;strong&gt;the governance challenge isn't OpenClaw-specific.&lt;/strong&gt; OpenClaw is one tool. Claude Code is another. LangChain, CrewAI, AutoGen, Semantic Kernel — the frameworks are multiplying. Custom internal agents are proliferating even faster.&lt;/p&gt;

&lt;p&gt;Any governance solution that's built for one tool is already obsolete. What enterprises need is a governance layer that sits between the agent and the action — regardless of which framework, model, or tool generated it.&lt;/p&gt;

&lt;p&gt;That's the architecture we've been building at AICtrlNet since before OpenClaw went viral. Our Runtime Gateway evaluates every agent action through Quality, Governance, Security, and Monitoring dimensions before execution. It doesn't care whether the action came from OpenClaw, Claude Code, a LangChain workflow, or a custom Python script.&lt;/p&gt;

&lt;p&gt;This isn't a pitch deck. It's a shipping product:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;171 conversation tools&lt;/strong&gt; across 11 categories&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;29 adapters&lt;/strong&gt; connecting AI frameworks, messaging platforms, databases, and compliance systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;183 workflow templates&lt;/strong&gt; across 20+ industries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;43 AI agents&lt;/strong&gt; with graduated autonomy — our &lt;a href="https://aictrlnet.com/blog/2026/02/11/your-ai-demo-is-lying-to-you/" rel="noopener noreferrer"&gt;Control Spectrum&lt;/a&gt; defines 6 phases from "AI suggests, human decides" to full automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;6 messaging channels&lt;/strong&gt;: Slack, Discord, Telegram, WhatsApp, SMS, Email&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-extending agents&lt;/strong&gt; that research, generate, and validate new integrations at runtime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dry-run mode&lt;/strong&gt; to test any workflow without side effects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Community Edition is &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;open source&lt;/a&gt;. The Business Edition is live with ML-enhanced risk scoring, fleet management, and our Done-With-You expert guidance model.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Window Is Closing
&lt;/h2&gt;

&lt;p&gt;If OpenAI is investing in making agents more autonomous, someone needs to invest in making them governable.&lt;/p&gt;

&lt;p&gt;The enterprises that thrive in the agentic era won't be the ones who block AI agents or the ones who let them run unchecked. They'll be the ones who govern them — with visibility, risk management, audit trails, and human oversight built into the execution layer.&lt;/p&gt;

&lt;p&gt;The window between "agents are useful" and "agents caused a compliance incident" is closing fast. The 88% incident rate in Gravitee's survey&lt;sup id="fnref6"&gt;6&lt;/sup&gt; tells you it's already closing for most organizations.&lt;/p&gt;

&lt;p&gt;OpenAI just placed their bet on the future of autonomous agents. The question for every enterprise is: &lt;strong&gt;who's placing the bet on governing them?&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Ready to add governance to your AI agents?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open Source&lt;/strong&gt;: &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;github.com/Bodaty/aictrlnet-community&lt;/a&gt; — Runtime Gateway, MIT licensed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://docs.aictrlnet.com" rel="noopener noreferrer"&gt;docs.aictrlnet.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Trial&lt;/strong&gt;: &lt;a href="https://hitlai.net/trial" rel="noopener noreferrer"&gt;hitlai.net/trial&lt;/a&gt; — 14 days, no credit card&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The OpenClaw governance challenge&lt;/strong&gt;: &lt;a href="https://aictrlnet.com/openclaw" rel="noopener noreferrer"&gt;aictrlnet.com/openclaw&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;TechCrunch. (2026, February 15). "OpenClaw creator Peter Steinberger joins OpenAI." &lt;a href="https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/" rel="noopener noreferrer"&gt;techcrunch.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Willison, S. (2026, February 15). "Three months of OpenClaw." &lt;a href="https://simonwillison.net/2026/Feb/15/openclaw/" rel="noopener noreferrer"&gt;simonwillison.net&lt;/a&gt;. OpenClaw's first commit was November 25, 2025; reached 200K+ stars by mid-February 2026, including 25,310 stars in a single day on January 26. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;MarketsandMarkets. (2025). "AI Agents Market — Global Forecast to 2030." USD $7.84 billion in 2025 to USD $52.62 billion by 2030, CAGR of 46.3%. &lt;a href="https://www.marketsandmarkets.com/PressReleases/ai-agents.asp" rel="noopener noreferrer"&gt;marketsandmarkets.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Gartner. (2025, August 26). "Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026." &lt;a href="https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025" rel="noopener noreferrer"&gt;gartner.com/en/newsroom&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Microsoft. (2026, February 10). "80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier." Microsoft Security Blog. &lt;a href="https://www.microsoft.com/en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/" rel="noopener noreferrer"&gt;microsoft.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Gravitee. (2026). "State of AI Agent Security 2026." Survey of 919 participants across 5 industries. Only 14.4% report all AI agents going live with full security/IT approval; 88% confirmed or suspected security incidents. &lt;a href="https://www.gravitee.io/state-of-ai-agent-security" rel="noopener noreferrer"&gt;gravitee.io&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;Proofpoint. (2026, February 12). "Proofpoint Acquires Acuvity to Deliver AI Security and Governance Across the Agentic Workspace." &lt;a href="https://www.proofpoint.com/us/newsroom/press-releases/proofpoint-acquires-acuvity-deliver-ai-security-and-governance-across" rel="noopener noreferrer"&gt;proofpoint.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;VentureBeat. (2026). "What the OpenClaw moment means for enterprises: 5 big takeaways." Kimmel, B. (Worklife Ventures): "People are trying these on evenings and weekends, and it's hard for companies to ensure employees aren't trying the latest technologies." &lt;a href="https://venturebeat.com/technology/what-the-openclaw-moment-means-for-enterprises-5-big-takeaways" rel="noopener noreferrer"&gt;venturebeat.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;VentureBeat. (2026). "OpenClaw proves agentic AI works. It also proves your security model doesn't." Golan, I. (Prompt Security). &lt;a href="https://venturebeat.com/security/openclaw-agentic-ai-security-risk-ciso-guide" rel="noopener noreferrer"&gt;venturebeat.com&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>opensource</category>
      <category>agents</category>
    </item>
    <item>
      <title>The Missing Piece in Every AI Agent Framework: Humans</title>
      <dc:creator>Srirajasekhar Koritala</dc:creator>
      <pubDate>Wed, 18 Feb 2026 21:04:40 +0000</pubDate>
      <link>https://dev.to/srirajasekhar_koritala_4c/the-missing-piece-in-every-ai-agent-framework-humans-3f2f</link>
      <guid>https://dev.to/srirajasekhar_koritala_4c/the-missing-piece-in-every-ai-agent-framework-humans-3f2f</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on the &lt;a href="https://aictrlnet.com/blog/2026/01/missing-piece-humans/" rel="noopener noreferrer"&gt;AICtrlNet blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last week I wrote about the protocol wars—MCP, A2A, and how the biggest AI companies are racing to define how agents communicate. The response was overwhelming, and one theme kept coming up:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Okay, I see the gap. But what does human-in-the-loop actually look like in practice?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let me show you.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Demo That Always Fails
&lt;/h2&gt;

&lt;p&gt;I've sat through dozens of AI demos over the years. They all follow the same script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"Watch this AI agent analyze your data..."&lt;/li&gt;
&lt;li&gt;"Now it's generating recommendations..."&lt;/li&gt;
&lt;li&gt;"And here it executes the action automatically!"&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Applause&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then someone asks: "What if the recommendation is wrong?"&lt;/p&gt;

&lt;p&gt;The presenter pauses. "Well, you could... add an approval step?"&lt;/p&gt;

&lt;p&gt;And that's where the demo ends, because what comes next isn't pretty. In real implementations, "add an approval step" means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building a custom notification system&lt;/li&gt;
&lt;li&gt;Creating a UI for reviewing AI decisions&lt;/li&gt;
&lt;li&gt;Figuring out how to preserve context so the reviewer understands what they're approving&lt;/li&gt;
&lt;li&gt;Handling timeouts, escalations, and edge cases&lt;/li&gt;
&lt;li&gt;Maintaining audit trails for compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That "simple approval step" is often more work than the entire AI pipeline.&lt;/p&gt;

&lt;p&gt;According to Gartner's 2024 AI adoption survey, organizations spend an average of 40% of their AI project budget on "integration and operationalization"—which includes human oversight mechanisms&lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Human-in-the-Loop Is So Hard
&lt;/h2&gt;

&lt;p&gt;After building AI systems for nine years, I've identified three reasons why HITL (human-in-the-loop) is consistently underestimated:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Context Collapse
&lt;/h3&gt;

&lt;p&gt;When an AI hands off to a human, context collapses.&lt;/p&gt;

&lt;p&gt;The AI "knows" why it made a decision—it has the full chain of reasoning, the data it analyzed, the alternatives it considered. But surfacing that to a human in a useful way? That's a different problem entirely.&lt;/p&gt;

&lt;p&gt;Most systems show humans the output ("Recommend: Approve loan") without the reasoning ("Based on credit score 720, debt-to-income 0.3, similar approved applications, confidence 87%").&lt;/p&gt;

&lt;p&gt;The human is asked to approve something they don't fully understand. So they either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rubber-stamp everything (defeating the purpose)&lt;/li&gt;
&lt;li&gt;Reject everything out of caution (defeating the purpose)&lt;/li&gt;
&lt;li&gt;Waste time investigating each decision manually (defeating the efficiency gains)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A 2023 study from Carnegie Mellon's Human-Computer Interaction Institute found that users who received AI recommendations &lt;em&gt;without&lt;/em&gt; explanations agreed with the AI 89% of the time—but only 61% of their agreements were actually correct&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. They were rubber-stamping bad decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Context must be a first-class citizen in the handoff. Not an afterthought. Not a log file. Structured, relevant, actionable context.&lt;/p&gt;
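&lt;p&gt;One way to make context a first-class citizen is to give the handoff its own type, so the reviewer always receives the recommendation, the confidence, and the reasoning together. Field names below are illustrative, not a standard schema:&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class HandoffContext:
    """Structured context an AI passes to a human reviewer.
    Field names are illustrative, not a standard schema."""
    task: str
    recommendation: str
    confidence: float
    reasons: list = field(default_factory=list)        # why the AI decided this
    alternatives: list = field(default_factory=list)   # what it considered and ruled out

    def summary(self):
        """One line a reviewer can act on without digging through logs."""
        return "{} (confidence {:.0%}) because: {}".format(
            self.recommendation, self.confidence, "; ".join(self.reasons))
```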

&lt;h3&gt;
  
  
  2. Workflow Impedance Mismatch
&lt;/h3&gt;

&lt;p&gt;AI operates in milliseconds. Humans operate in minutes, hours, or days.&lt;/p&gt;

&lt;p&gt;When you insert a human into an AI workflow, you create an impedance mismatch. The workflow was designed for speed; now it has to wait. And waiting creates problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What if the data changes while waiting for approval?&lt;/li&gt;
&lt;li&gt;What if the human never responds?&lt;/li&gt;
&lt;li&gt;What if the human is on vacation?&lt;/li&gt;
&lt;li&gt;What if the decision is time-sensitive?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Research from MIT Sloan shows that the average time-to-decision for human approval in enterprise AI workflows is 4.2 hours—but 23% of requests take longer than 24 hours&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. Most AI orchestration tools can't handle this gracefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Human steps need different primitives. Timeouts. Escalation paths. Delegation. Async execution with callbacks. The orchestration layer must understand that humans are fundamentally different from APIs.&lt;/p&gt;
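&lt;p&gt;One scheduling pass over a pending human step might look like this. The primitives (response poll, deadline check, escalation target) are injected callables; the names are mine, not a specific framework's API:&lt;/p&gt;

```python
def resolve_human_step(get_response, deadline_passed, escalate):
    """One scheduling pass over a pending human approval step.
    get_response returns the human's decision or None; deadline_passed and
    escalate are injected callables, so the sketch stays runtime-agnostic.
    (These primitive names are mine, not a specific framework's API.)"""
    response = get_response()
    if response is not None:
        return ("decided", response)       # human answered in time
    if deadline_passed():
        return ("escalated", escalate())   # timed out: route to a backup approver
    return ("waiting", None)               # still pending; poll again later
```

&lt;p&gt;An orchestrator repeats this pass until the step leaves the "waiting" state, which is exactly the async-with-callbacks behavior most AI-first tools lack.&lt;/p&gt;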

&lt;h3&gt;
  
  
  3. The Audit Paradox
&lt;/h3&gt;

&lt;p&gt;Here's a paradox I've encountered repeatedly:&lt;/p&gt;

&lt;p&gt;The more you automate with AI, the more you need to prove humans were involved.&lt;/p&gt;

&lt;p&gt;Regulators, compliance teams, and auditors don't trust "the AI decided." They want to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who was accountable for this decision?&lt;/li&gt;
&lt;li&gt;Could a human have intervened?&lt;/li&gt;
&lt;li&gt;Why didn't they?&lt;/li&gt;
&lt;li&gt;If they did intervene, what did they decide and why?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Federal Reserve's 2023 guidance on AI in financial services explicitly states: "Financial institutions must be able to demonstrate appropriate human oversight of AI-driven decisions"&lt;sup id="fnref4"&gt;4&lt;/sup&gt;. Similar requirements exist in healthcare (FDA AI/ML guidance), insurance (state regulations), and now broadly under the EU AI Act.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fix:&lt;/strong&gt; Audit trails must capture human involvement (or deliberate non-involvement) at every decision point. This needs to be built into the orchestration layer, not bolted on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Real Human-in-the-Loop Looks Like
&lt;/h2&gt;

&lt;p&gt;Let me describe what I think proper HITL should look like. This isn't theoretical—it's based on systems I've built that are running in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Humans as Nodes, Not Exceptions
&lt;/h3&gt;

&lt;p&gt;In a well-designed system, a human is just another node type.&lt;/p&gt;

&lt;p&gt;The human node has the same interface as an AI node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It receives structured input&lt;/li&gt;
&lt;li&gt;It produces structured output&lt;/li&gt;
&lt;li&gt;It can succeed, fail, or timeout&lt;/li&gt;
&lt;li&gt;Its decision is logged and auditable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The orchestration layer doesn't care whether a node is AI or human. It just routes work to the right place and handles the response.&lt;/p&gt;
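&lt;p&gt;A minimal sketch of that shared interface (hypothetical names, shown in Python for concreteness):&lt;/p&gt;

```python
from abc import ABC, abstractmethod

# Sketch of a uniform node contract: AI and human nodes both take
# structured input and return structured output. Names are illustrative.

class Node(ABC):
    @abstractmethod
    def execute(self, payload: dict) -> dict: ...

class AINode(Node):
    def execute(self, payload):
        # Stand-in for a model call.
        return {"status": "ok", "result": f"summary of {payload['doc']}"}

class HumanNode(Node):
    def execute(self, payload):
        # Stand-in for routing to a person. A real node could also
        # return {"status": "timeout"} if nobody responds in time.
        return {"status": "ok", "result": "approved"}

def run_workflow(nodes, payload):
    # The orchestrator never checks whether a node is AI or human.
    for node in nodes:
        out = node.execute(payload)
        if out["status"] != "ok":
            return out
        payload = {**payload, "prev": out["result"]}
    return payload

print(run_workflow([AINode(), HumanNode()], {"doc": "refund-4521"}))
```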

&lt;h3&gt;
  
  
  Context That Travels
&lt;/h3&gt;

&lt;p&gt;When work arrives at a human, they should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TASK: Approve customer refund
AMOUNT: $450
AI RECOMMENDATION: Approve
CONFIDENCE: 78%

WHY THIS RECOMMENDATION:
- Customer has been with us 3 years
- First refund request
- Product was defective (confirmed by support ticket #4521)
- Similar cases: 94% approved

WHY YOU'RE SEEING THIS:
- Amount exceeds auto-approve threshold ($200)
- Confidence below auto-approve threshold (85%)

OPTIONS:
[Approve] [Reject] [Request More Info] [Escalate]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The human has everything they need to make a decision. They're not rubber-stamping; they're making an informed choice with AI assistance.&lt;/p&gt;

&lt;p&gt;Research from Stanford HAI shows that presenting AI recommendations with structured explanations increases human decision accuracy by 31% compared to recommendations alone&lt;sup id="fnref5"&gt;5&lt;/sup&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confidence-Based Routing
&lt;/h3&gt;

&lt;p&gt;Not every decision needs human review. The system should route intelligently:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Confidence&lt;/th&gt;
&lt;th&gt;Routing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;95%+&lt;/td&gt;
&lt;td&gt;Auto-execute, log for audit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;80-95%&lt;/td&gt;
&lt;td&gt;Auto-execute, notify human, allow override window&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;60-80%&lt;/td&gt;
&lt;td&gt;Require human approval&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt;60%&lt;/td&gt;
&lt;td&gt;Require senior human approval&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The thresholds are configurable by workflow, by risk level, by regulatory requirement. The point is: humans are involved proportionally to uncertainty and risk.&lt;/p&gt;
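&lt;p&gt;In code, the table above reduces to a few lines. The thresholds shown are the table's; in practice they would come from per-workflow configuration:&lt;/p&gt;

```python
# The routing table as a function. Thresholds are illustrative and
# would be loaded from per-workflow, per-risk-level configuration.

def route(confidence: float) -> str:
    if confidence >= 0.95:
        return "auto_execute"
    if confidence >= 0.80:
        return "auto_execute_with_override_window"
    if confidence >= 0.60:
        return "human_approval"
    return "senior_human_approval"

for c in (0.97, 0.85, 0.70, 0.40):
    print(c, route(c))
```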

&lt;p&gt;This approach is supported by research from the Harvard Business School, which found that "tiered autonomy" models—where AI handles routine decisions and humans handle edge cases—increased both throughput and accuracy compared to either full automation or full manual review&lt;sup id="fnref6"&gt;6&lt;/sup&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Escalation That Works
&lt;/h3&gt;

&lt;p&gt;When a human doesn't respond, the system shouldn't just... wait forever.&lt;/p&gt;

&lt;p&gt;Every escalation is logged. At any point, auditors can see exactly what happened and why.&lt;/p&gt;

&lt;h3&gt;
  
  
  Audit by Design
&lt;/h3&gt;

&lt;p&gt;Every decision point captures:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-01-17T10:23:45Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"workflow_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"refund-4521"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"human-validation"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"assigned_to"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"agent@company.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"decision"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"approved"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Customer's explanation matches support ticket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"time_to_decision"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3m 42s"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"context_viewed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ai_recommendation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"approve"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ai_confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.78&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"override"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Six months later, when someone asks "why did we approve this refund?", you have a complete answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gap in Current Tools
&lt;/h2&gt;

&lt;p&gt;I've evaluated most of the AI orchestration tools on the market. They fall into two categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code-first frameworks&lt;/strong&gt; (LangChain, CrewAI, AutoGen): Powerful, but humans are DIY. You can build HITL, but you're building it from scratch every time.&lt;/p&gt;

&lt;p&gt;LangChain's 2024 survey of enterprise users found that 72% had built custom human-in-the-loop functionality, and 58% cited it as "the most time-consuming part of deployment"&lt;sup id="fnref7"&gt;7&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual automation tools&lt;/strong&gt; (n8n, Zapier, Make): Easy to use, but humans are an afterthought. You can add a "wait for webhook" step, but that's not the same as proper HITL.&lt;/p&gt;

&lt;p&gt;Neither category treats humans as first-class workflow participants. Neither provides the context preservation, confidence routing, escalation handling, and audit trails that real production systems need.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I've Been Building
&lt;/h2&gt;

&lt;p&gt;For the past several years, I've been working on this problem. Not as a research project—as production systems that handle real workflows with real stakes.&lt;/p&gt;

&lt;p&gt;The patterns I described above? They're not hypothetical. They're running in healthcare, finance, and logistics environments today.&lt;/p&gt;

&lt;p&gt;I've taken those patterns and built them into something more general. A framework where humans and AI are equal participants in workflows. Where context travels across the human boundary. Where governance and audit trails are built in, not bolted on.&lt;/p&gt;

&lt;p&gt;I'm almost ready to share it publicly.&lt;/p&gt;

&lt;p&gt;If you're building AI systems that need human involvement—and in my experience, that's most AI systems worth building—I think you'll find it useful.&lt;/p&gt;

&lt;p&gt;More soon.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you struggled with human-in-the-loop in your AI systems? What patterns have you found that work? I'd love to hear—reply or reach out directly.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Srirajasekhar "Bobby" Koritala is the founder of Bodaty. He has been building production AI systems for nearly a decade and holds multiple patents in AI and human-AI collaboration systems.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you found this useful, drop a reaction and follow &lt;a href="https://dev.to/bobbykoritala"&gt;@bobbykoritala&lt;/a&gt; for updates on AICtrlNet development.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Star us: &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;github.com/Bodaty/aictrlnet-community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read more: &lt;a href="https://aictrlnet.com/blog" rel="noopener noreferrer"&gt;aictrlnet.com/blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join the conversation: &lt;a href="https://github.com/Bodaty/aictrlnet-community/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Try it: &lt;code&gt;pip install aictrlnet&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Gartner. (2024). "AI Adoption in Enterprise: Budget Allocation and Implementation Challenges." &lt;a href="https://www.gartner.com/en/information-technology/insights/artificial-intelligence" rel="noopener noreferrer"&gt;gartner.com/en/documents/ai-adoption-2024&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Carnegie Mellon HCII. (2023). "Understanding Human Reliance on AI Recommendations." &lt;a href="https://www.hcii.cmu.edu/research" rel="noopener noreferrer"&gt;hcii.cmu.edu/research/ai-decision-support&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;MIT Sloan Management Review. (2024). "The Hidden Costs of Human-AI Handoffs." &lt;a href="https://sloanreview.mit.edu/" rel="noopener noreferrer"&gt;sloanreview.mit.edu/article/human-ai-handoffs&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Federal Reserve. (2023). "Supervisory Guidance on Artificial Intelligence." &lt;a href="https://www.federalreserve.gov/supervisionreg.htm" rel="noopener noreferrer"&gt;federalreserve.gov/supervisionreg/ai-guidance&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Stanford HAI. (2024). "Explainable AI and Human Decision Quality." &lt;a href="https://hai.stanford.edu/research" rel="noopener noreferrer"&gt;hai.stanford.edu/research/explainable-ai&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Harvard Business School. (2024). "Tiered Autonomy in AI-Assisted Decision Making." &lt;a href="https://www.hbs.edu/research/" rel="noopener noreferrer"&gt;hbs.edu/research/ai-tiered-autonomy&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;LangChain. (2024). "State of LLM Applications: Enterprise Survey Results." &lt;a href="https://www.langchain.com/" rel="noopener noreferrer"&gt;langchain.com/state-of-llm-apps-2024&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>python</category>
    </item>
    <item>
      <title>The Protocol Wars Are Missing the Point</title>
      <dc:creator>Srirajasekhar Koritala</dc:creator>
      <pubDate>Wed, 18 Feb 2026 20:49:54 +0000</pubDate>
      <link>https://dev.to/srirajasekhar_koritala_4c/the-protocol-wars-are-missing-the-point-28bk</link>
      <guid>https://dev.to/srirajasekhar_koritala_4c/the-protocol-wars-are-missing-the-point-28bk</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on the &lt;a href="https://aictrlnet.com/blog/2026/01/protocol-wars/" rel="noopener noreferrer"&gt;AICtrlNet blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The Protocol Wars: MCP, A2A, and Why Humans Are Still the Missing Piece
&lt;/h1&gt;




&lt;p&gt;Something interesting is happening in AI right now. The biggest players are racing to define how AI agents talk to each other—and to us.&lt;/p&gt;

&lt;p&gt;Anthropic has MCP. Google has A2A. OpenAI has their Agents SDK. Everyone's building protocols.&lt;/p&gt;

&lt;p&gt;But after nine years of building AI systems—including several patented ones—I keep noticing what's missing from these conversations: humans.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Protocol Landscape
&lt;/h2&gt;

&lt;p&gt;Let me break down what's actually happening.&lt;/p&gt;

&lt;h3&gt;
  
  
  MCP: Anthropic's Model Context Protocol
&lt;/h3&gt;

&lt;p&gt;MCP (Model Context Protocol) is Anthropic's attempt to standardize how AI models share context. Announced in November 2024, it's designed as an open protocol that creates a universal way for AI assistants to connect with data sources and tools&lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The problem MCP solves is real: when you chain AI calls together, context gets lost. The second model doesn't know what the first model was thinking. MCP creates a structured way to pass that context along.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What MCP does well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standardized context format across models&lt;/li&gt;
&lt;li&gt;Clean handoffs between AI components&lt;/li&gt;
&lt;li&gt;Works across different AI providers (not just Claude)&lt;/li&gt;
&lt;li&gt;Open specification that anyone can implement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What MCP doesn't address:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens when a human needs to intervene?&lt;/li&gt;
&lt;li&gt;How does human context get preserved and passed?&lt;/li&gt;
&lt;li&gt;Who decides when AI should stop and ask for help?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  A2A: Google's Agent-to-Agent Protocol
&lt;/h3&gt;

&lt;p&gt;Google's A2A, announced in April 2025, takes a different angle. Instead of focusing on context, it focuses on how autonomous agents communicate and coordinate with each other&lt;sup id="fnref2"&gt;2&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Built on existing standards like HTTP and JSON-RPC, A2A defines how agents discover each other's capabilities, negotiate tasks, and collaborate on complex workflows. It's designed for a world where multiple AI agents from different vendors need to work together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What A2A does well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-agent coordination across vendors&lt;/li&gt;
&lt;li&gt;Capability discovery and negotiation&lt;/li&gt;
&lt;li&gt;Task delegation between specialized agents&lt;/li&gt;
&lt;li&gt;Built on proven web standards&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What A2A doesn't address:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Same gap: where do humans fit?&lt;/li&gt;
&lt;li&gt;When Agent A hands off to Agent B, what if a human should have been Agent B?&lt;/li&gt;
&lt;li&gt;How do you audit decisions that were never meant to be audited?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  OpenAI's Agents SDK
&lt;/h3&gt;

&lt;p&gt;OpenAI released their Agents SDK in March 2025, providing a production-ready framework for building multi-agent systems&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. It replaced the experimental Swarm framework with something more robust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it does well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean developer experience&lt;/li&gt;
&lt;li&gt;Good defaults for common patterns&lt;/li&gt;
&lt;li&gt;Tight integration with OpenAI's models&lt;/li&gt;
&lt;li&gt;Production-ready tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What it doesn't address:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vendor lock-in (it's OpenAI-first)&lt;/li&gt;
&lt;li&gt;The human question, again&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Pattern I Keep Seeing
&lt;/h2&gt;

&lt;p&gt;Every major protocol focuses on AI-to-AI communication. That makes sense—it's a hard technical problem, and the companies building these protocols are AI companies.&lt;/p&gt;

&lt;p&gt;But here's what I've learned from building AI systems in healthcare, finance, and logistics: &lt;strong&gt;the hardest part isn't AI talking to AI. It's AI talking to humans, and knowing when it should.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Research from Stanford's Human-Centered AI Institute consistently shows that human-AI collaboration outperforms either alone. Their 2024 study on AI-assisted decision making found that humans with AI support made 23% better decisions than AI alone—but only when the handoff between human and AI was well-designed&lt;sup id="fnref4"&gt;4&lt;/sup&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Handoff Problem
&lt;/h3&gt;

&lt;p&gt;Consider a common workflow: an AI agent analyzes customer data, generates a recommendation, and takes action.&lt;/p&gt;

&lt;p&gt;With current protocols, the AI-to-AI parts work great:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data agent extracts information ✓&lt;/li&gt;
&lt;li&gt;Analysis agent processes it ✓&lt;/li&gt;
&lt;li&gt;Recommendation agent generates options ✓&lt;/li&gt;
&lt;li&gt;Action agent executes ✓&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But what if step 3 should have been "Human reviews options before action"?&lt;/p&gt;

&lt;p&gt;Current protocols don't have a clean answer for this. You're left bolting on custom solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Slack notification that someone might miss&lt;/li&gt;
&lt;li&gt;An email that sits in an inbox&lt;/li&gt;
&lt;li&gt;A dashboard nobody checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The context that made the AI's recommendation make sense? Often lost by the time a human sees it.&lt;/p&gt;

&lt;p&gt;A 2024 McKinsey study on AI in enterprise workflows found that 67% of failed AI implementations cited "poor human-AI handoff design" as a primary factor&lt;sup id="fnref5"&gt;5&lt;/sup&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Confidence Problem
&lt;/h3&gt;

&lt;p&gt;Here's another gap: AI doesn't know what it doesn't know.&lt;/p&gt;

&lt;p&gt;When an AI agent is uncertain, it should probably ask a human. But current protocols don't standardize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to express uncertainty&lt;/li&gt;
&lt;li&gt;When uncertainty should trigger human involvement&lt;/li&gt;
&lt;li&gt;How to preserve context for the human handoff&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Research from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) found that LLMs are often confidently wrong—expressing high certainty on incorrect answers 31% of the time&lt;sup id="fnref6"&gt;6&lt;/sup&gt;. Without confidence-based routing to humans, these errors propagate through automated workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Audit Problem
&lt;/h3&gt;

&lt;p&gt;Regulations are catching up to AI. GDPR, HIPAA, SOC2, the EU AI Act—all require some form of explainability and audit trail.&lt;/p&gt;

&lt;p&gt;The EU AI Act, whose obligations began phasing in during 2025, specifically requires "meaningful human oversight" for high-risk AI systems&lt;sup id="fnref7"&gt;7&lt;/sup&gt;. Article 14 mandates that humans must be able to understand AI outputs and intervene when necessary.&lt;/p&gt;

&lt;p&gt;Current protocols focus on what happened between agents. But auditors ask different questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who made this decision?&lt;/li&gt;
&lt;li&gt;Was a human involved?&lt;/li&gt;
&lt;li&gt;Could a human have intervened?&lt;/li&gt;
&lt;li&gt;Why wasn't a human involved?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your protocol doesn't have humans as first-class citizens, you're going to struggle with these questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Would Human-Aware Protocols Look Like?
&lt;/h2&gt;

&lt;p&gt;I've been thinking about this for years. Here's what I believe is needed:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Humans as First-Class Participants
&lt;/h3&gt;

&lt;p&gt;A human shouldn't be a "fallback" or an "escalation path." They should be a valid node type, just like an AI agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Workflow:
  [AI: Analyze] → [Human: Validate] → [AI: Execute]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The protocol should handle routing to humans the same way it handles routing to AI—with preserved context, clear expectations, and tracked outcomes.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Context Preservation Across the Human Boundary
&lt;/h3&gt;

&lt;p&gt;When AI hands off to a human, the human should understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What the AI was trying to do&lt;/li&gt;
&lt;li&gt;Why it stopped&lt;/li&gt;
&lt;li&gt;What options it considered&lt;/li&gt;
&lt;li&gt;What it recommends&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And when the human hands back to AI, the AI should understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What the human decided&lt;/li&gt;
&lt;li&gt;Why they decided it&lt;/li&gt;
&lt;li&gt;Any additional context they provided&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MCP is great for AI-to-AI context. We need the same rigor for AI-to-human and human-to-AI.&lt;/p&gt;
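&lt;p&gt;Concretely, the handoff in each direction could be an envelope like this. Field names are illustrative, not MCP or any published schema:&lt;/p&gt;

```python
# Sketch of a bidirectional handoff envelope. Field names are
# illustrative, not taken from MCP or any published schema.

ai_to_human = {
    "goal": "Issue refund for order 4521",
    "stopped_because": "amount exceeds auto-approve threshold",
    "options_considered": ["approve", "reject", "partial refund"],
    "recommendation": {"action": "approve", "confidence": 0.78},
}

human_to_ai = {
    "decision": "approve",
    "reasoning": "matches support ticket evidence",
    "extra_context": {"ticket": "4521", "customer_tenure_years": 3},
}

# The orchestrator merges the human's answer back into workflow state,
# so downstream AI steps see not just what was decided but why.
state = {"handoff": ai_to_human, "resolution": human_to_ai}
print(state["resolution"]["decision"], state["resolution"]["reasoning"])
```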

&lt;h3&gt;
  
  
  3. Confidence-Based Routing
&lt;/h3&gt;

&lt;p&gt;Protocols should support routing decisions based on confidence.&lt;/p&gt;
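&lt;p&gt;A sketch of what that could look like, with thresholds carried as declarative configuration alongside the task (values here are illustrative):&lt;/p&gt;

```python
# Confidence thresholds as data a protocol could carry with the task,
# rather than logic buried in application code. Values are illustrative.

ROUTING_RULES = [
    (0.95, "auto_execute"),
    (0.80, "notify_human_allow_override"),
    (0.60, "require_human_approval"),
    (0.00, "require_senior_approval"),
]

def route(confidence):
    # Rules are ordered from highest threshold down; first match wins.
    for threshold, action in ROUTING_RULES:
        if confidence >= threshold:
            return action

print(route(0.91))
```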

&lt;p&gt;This isn't just a nice-to-have. For regulated industries, it's becoming mandatory.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Native Audit Trails
&lt;/h3&gt;

&lt;p&gt;Every decision point should be auditable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What information was available&lt;/li&gt;
&lt;li&gt;What decision was made (by AI or human)&lt;/li&gt;
&lt;li&gt;Why that decision was made&lt;/li&gt;
&lt;li&gt;What happened next&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This needs to be built into the protocol, not bolted on after.&lt;/p&gt;
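&lt;p&gt;At minimum, that means every decision point emits a structured event, in the same shape whether the decider was a model or a person. A sketch, with illustrative field names:&lt;/p&gt;

```python
import json
import time

# Sketch: one audit event per decision point, same shape whether the
# decider was AI or human. Field names are illustrative.

def audit_event(workflow_id, node, decider, decision, reasoning):
    return json.dumps({
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "workflow_id": workflow_id,
        "node": node,
        "decider": decider,  # "ai" or a human identity like "agent@company.com"
        "decision": decision,
        "reasoning": reasoning,
    })

print(audit_event("refund-4521", "human-validation", "agent@company.com",
                  "approved", "customer explanation matches support ticket"))
```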

&lt;h2&gt;
  
  
  The Opportunity
&lt;/h2&gt;

&lt;p&gt;The companies building MCP, A2A, and other protocols are solving real problems. I'm not criticizing their work—I'm building on it.&lt;/p&gt;

&lt;p&gt;But there's a gap in the market for human-aware AI orchestration. Something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speaks MCP, A2A, and other protocols&lt;/li&gt;
&lt;li&gt;Treats humans as first-class workflow participants&lt;/li&gt;
&lt;li&gt;Preserves context across the human boundary&lt;/li&gt;
&lt;li&gt;Provides native governance and audit capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI orchestration market is projected to reach $42.8 billion by 2032, growing at 23.4% CAGR&lt;sup id="fnref8"&gt;8&lt;/sup&gt;. Most of that will go to enterprise use cases. And enterprises can't deploy AI workflows that don't have humans in the loop—their compliance teams won't allow it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm Working On
&lt;/h2&gt;

&lt;p&gt;I've spent nine years building AI systems that work with humans, not around them. Several of those systems are now patented.&lt;/p&gt;

&lt;p&gt;The common thread across all of them: &lt;strong&gt;the most powerful AI systems don't replace humans. They collaborate with us.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm now working on bringing this approach to AI orchestration more broadly. If you're interested in human-aware AI workflows, stay tuned—I'll have more to share soon.&lt;/p&gt;

&lt;p&gt;In the meantime, I'm curious: what are the biggest gaps you see in current AI protocols? Where do humans fit in your AI workflows? I'd love to hear your perspective.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Srirajasekhar "Bobby" Koritala is the founder of Bodaty. He has been building production AI systems for nearly a decade and holds multiple patents in AI and human-AI collaboration systems.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you found this useful, drop a reaction and follow &lt;a href="https://dev.to/bobbykoritala"&gt;@bobbykoritala&lt;/a&gt; for updates on AICtrlNet development.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Star us: &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;github.com/Bodaty/aictrlnet-community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read more: &lt;a href="https://aictrlnet.com/blog" rel="noopener noreferrer"&gt;aictrlnet.com/blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join the conversation: &lt;a href="https://github.com/Bodaty/aictrlnet-community/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Try it: &lt;code&gt;pip install aictrlnet&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;







&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Anthropic. (2024). "Introducing the Model Context Protocol." &lt;a href="https://www.anthropic.com/news/model-context-protocol" rel="noopener noreferrer"&gt;anthropic.com/news/model-context-protocol&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Google Cloud. (2025). "Agent2Agent: An Open Protocol for AI Agent Interoperability." &lt;a href="https://cloud.google.com/blog/products/ai-machine-learning/a2a-protocol" rel="noopener noreferrer"&gt;cloud.google.com/blog/products/ai-machine-learning/a2a-protocol&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;OpenAI. (2025). "Introducing the Agents SDK." &lt;a href="https://openai.com/blog/agents-sdk" rel="noopener noreferrer"&gt;openai.com/blog/agents-sdk&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Stanford HAI. (2024). "Human-AI Collaboration in High-Stakes Decision Making." &lt;a href="https://hai.stanford.edu/research" rel="noopener noreferrer"&gt;hai.stanford.edu/research/human-ai-collaboration&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;McKinsey &amp;amp; Company. (2024). "The State of AI in 2024: Generative AI's Breakout Year." &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai" rel="noopener noreferrer"&gt;mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;MIT CSAIL. (2024). "Calibrating Large Language Model Confidence." &lt;a href="https://www.csail.mit.edu/research" rel="noopener noreferrer"&gt;csail.mit.edu/research/llm-calibration&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;European Commission. (2024). "The EU Artificial Intelligence Act." &lt;a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai" rel="noopener noreferrer"&gt;digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;Grand View Research. (2024). "AI Orchestration Market Size Report, 2024-2032." &lt;a href="https://www.grandviewresearch.com/industry-analysis/ai-orchestration-market" rel="noopener noreferrer"&gt;grandviewresearch.com/industry-analysis/ai-orchestration-market&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>agents</category>
      <category>governance</category>
    </item>
    <item>
      <title>AICtrlNet: Visual AI Orchestration with Native Human-in-the-Loop (MIT Licensed)</title>
      <dc:creator>Srirajasekhar Koritala</dc:creator>
      <pubDate>Thu, 29 Jan 2026 21:34:48 +0000</pubDate>
      <link>https://dev.to/srirajasekhar_koritala_4c/aictrlnet-visual-ai-orchestration-with-native-human-in-the-loop-mit-licensed-49i3</link>
      <guid>https://dev.to/srirajasekhar_koritala_4c/aictrlnet-visual-ai-orchestration-with-native-human-in-the-loop-mit-licensed-49i3</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was originally published on the &lt;a href="https://aictrlnet.com/blog/2026/01/introducing-aictrlnet/" rel="noopener noreferrer"&gt;AICtrlNet blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We're excited to announce AICtrlNet, an open-core AI orchestration platform that treats humans and AI as equal participants in workflows—not afterthoughts.&lt;/p&gt;

&lt;p&gt;The Community Edition is MIT licensed and available today on &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and &lt;a href="https://pypi.org/project/aictrlnet/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem We Kept Running Into
&lt;/h2&gt;

&lt;p&gt;Over the past few years, we've built AI systems for enterprises across healthcare, finance, and legal. Every project hit the same wall: &lt;strong&gt;AI workflows that work in demos fail in production because they ignore humans.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The existing tools fell into two camps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code-first frameworks&lt;/strong&gt; (LangChain, CrewAI, AutoGen) are powerful but assume developers will handle everything programmatically. There's no visual way to design workflows, no built-in governance, and adding human approval steps means building custom infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visual automation tools&lt;/strong&gt; (n8n, Zapier, Dify) make it easy to connect things, but AI is bolted on—not native. When you need a human to review an AI decision before it executes, you're back to building custom solutions.&lt;/p&gt;

&lt;p&gt;We needed something that didn't exist: &lt;strong&gt;visual workflow design with native human-in-the-loop capabilities and real governance controls.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So we built it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AICtrlNet Does
&lt;/h2&gt;

&lt;p&gt;AICtrlNet is an orchestration engine that coordinates AI agents, human workers, and external systems into auditable workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visual Workflow Design
&lt;/h3&gt;

&lt;p&gt;Design workflows visually with HitLai (our React-based UI), or programmatically via API. Your choice.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[AI: Generate Report] → [Human: Review &amp;amp; Approve] → [AI: Distribute] → [Human: Confirm Delivery]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every node can be an AI model, a human task, or an external service. The engine handles routing, state management, and execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Human-in-the-Loop Native
&lt;/h3&gt;

&lt;p&gt;This is the core differentiator. Humans aren't a fallback for when AI fails—they're first-class workflow participants.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define approval workflows with escalation paths&lt;/li&gt;
&lt;li&gt;Set up human validation checkpoints&lt;/li&gt;
&lt;li&gt;Route tasks based on confidence scores&lt;/li&gt;
&lt;li&gt;Track human decisions with full audit trails
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example: AI generates content, human reviews before publishing
&lt;/span&gt;&lt;span class="n"&gt;workflow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;nodes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nc"&gt;AINode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;generate_content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nc"&gt;HumanNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;approve_or_reject&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nc"&gt;ConditionalNode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;if_approved&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;AINode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;publish&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;if_rejected&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;AINode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;revise&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
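&lt;p&gt;The confidence-score routing from the list above can be sketched the same way. A minimal illustration, assuming a hypothetical &lt;code&gt;confidence&lt;/code&gt; field and threshold rather than AICtrlNet's actual routing API:&lt;/p&gt;

```python
# Illustrative only: route an AI result to auto-publish or human review
# based on a confidence score. The 0.85 threshold and field names are
# hypothetical, not part of the AICtrlNet API.

def route_by_confidence(result, threshold=0.85):
    """Return the next node for a given AI result."""
    if result["confidence"] >= threshold:
        return {"type": "ai", "task": "publish"}          # high confidence: proceed
    return {"type": "human", "role": "content_reviewer"}  # low confidence: review

high = route_by_confidence({"text": "draft A", "confidence": 0.93})
low = route_by_confidence({"text": "draft B", "confidence": 0.40})
```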



&lt;h3&gt;
  
  
  AI Governance Built-In
&lt;/h3&gt;

&lt;p&gt;We've seen what happens when AI workflows run without guardrails. AICtrlNet includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5-layer AI Workflow Security Gateway&lt;/li&gt;
&lt;li&gt;Bias detection and monitoring&lt;/li&gt;
&lt;li&gt;Complete audit trails (who did what, when, why)&lt;/li&gt;
&lt;li&gt;Compliance framework support (HIPAA, GDPR, SOC2)&lt;/li&gt;
&lt;/ul&gt;
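&lt;p&gt;The "who did what, when, why" audit trail can be pictured as one structured record per decision. A minimal sketch, with hypothetical field names rather than AICtrlNet's actual audit schema:&lt;/p&gt;

```python
# Illustrative sketch of an audit record capturing who/what/when/why.
# Field names are hypothetical, not AICtrlNet's actual audit schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    actor: str   # who: human role or AI model id
    action: str  # what: e.g. "approve", "reject", "generate"
    reason: str  # why: free-text justification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

trail = []
trail.append(AuditRecord(actor="content_reviewer", action="approve",
                         reason="meets style guide"))

record = asdict(trail[0])
```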

&lt;p&gt;This isn't an enterprise upsell—it's in the Community Edition.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model Context Protocol (MCP) Support
&lt;/h3&gt;

&lt;p&gt;Native MCP integration for standardized AI model communication. Connect any MCP-compatible model or service without writing custom adapters.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Editions
&lt;/h2&gt;

&lt;p&gt;We're releasing AICtrlNet as open core:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Edition&lt;/th&gt;
&lt;th&gt;What You Get&lt;/th&gt;
&lt;th&gt;License&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Community&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Core orchestration engine, essential adapters, governance controls&lt;/td&gt;
&lt;td&gt;MIT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Business&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+ HitLai visual UI, ML-enhanced features, RAG, 43 industry packs&lt;/td&gt;
&lt;td&gt;Commercial&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Enterprise&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+ Multi-tenancy, federation, SSO, white-label&lt;/td&gt;
&lt;td&gt;Commercial&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Community Edition is genuinely MIT licensed. Not "fair-code," not "source-available with restrictions." MIT. Fork it, modify it, build commercial products on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Compares
&lt;/h2&gt;

&lt;p&gt;We're not trying to replace your existing tools—we integrate with them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vs. LangChain&lt;/strong&gt;: We use LangChain under the hood for AI execution. AICtrlNet adds visual orchestration, HITL, and governance on top. Use both.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vs. n8n&lt;/strong&gt;: n8n is great for traditional automation. We integrate with n8n for its 400+ connectors. AICtrlNet handles the AI-native workflows where governance and human oversight matter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vs. Dify/Flowise&lt;/strong&gt;: Great visual AI builders. We go deeper on governance and human-in-the-loop where they go wider on accessibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vs. CrewAI&lt;/strong&gt;: CrewAI orchestrates AI teams. We orchestrate AI + humans. Different problem spaces.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker (recommended)&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Bodaty/aictrlnet-community.git
&lt;span class="nb"&gt;cd &lt;/span&gt;aictrlnet-community
docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;span class="c"&gt;# API at http://localhost:8000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;pip&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;aictrlnet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Your first workflow&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8000/api/v1/workflows &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "name": "Review Workflow",
    "nodes": [
      {"type": "ai", "model": "gpt-4", "task": "analyze"},
      {"type": "human", "role": "reviewer", "action": "approve"}
    ]
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
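&lt;p&gt;The same request can be issued from Python using only the standard library. The payload shape mirrors the curl call above; the actual send is commented out so the snippet runs without a live server.&lt;/p&gt;

```python
# Build the same workflow-creation request as the curl example, in Python.
# Uses only the standard library; the send is commented out so this runs
# without a server listening on localhost:8000.
import json
import urllib.request

payload = {
    "name": "Review Workflow",
    "nodes": [
        {"type": "ai", "model": "gpt-4", "task": "analyze"},
        {"type": "human", "role": "reviewer", "action": "approve"},
    ],
}

req = urllib.request.Request(
    "http://localhost:8000/api/v1/workflows",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment to send against a running instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```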



&lt;p&gt;Full documentation: &lt;a href="https://aictrlnet.com/docs" rel="noopener noreferrer"&gt;aictrlnet.com/docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We're actively developing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More adapters (community requests welcome)&lt;/li&gt;
&lt;li&gt;Enhanced MCP capabilities&lt;/li&gt;
&lt;li&gt;Visual workflow designer improvements&lt;/li&gt;
&lt;li&gt;True multi-tenant SaaS (Enterprise)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have an idea? &lt;a href="https://github.com/Bodaty/aictrlnet-community/issues/new?template=feature_request.md" rel="noopener noreferrer"&gt;Open a feature request&lt;/a&gt; and let us know what matters most.&lt;/p&gt;




&lt;p&gt;If you found this useful, drop a reaction and follow &lt;a href="https://dev.to/bobbykoritala"&gt;@bobbykoritala&lt;/a&gt; for updates on AICtrlNet development.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Star us: &lt;a href="https://github.com/Bodaty/aictrlnet-community" rel="noopener noreferrer"&gt;github.com/Bodaty/aictrlnet-community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Join the conversation: &lt;a href="https://github.com/Bodaty/aictrlnet-community/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Try it: &lt;code&gt;pip install aictrlnet&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>python</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
