<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohammad Abu Jafar</title>
    <description>The latest articles on DEV Community by Mohammad Abu Jafar (@moeadnan).</description>
    <link>https://dev.to/moeadnan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3872163%2F00c8c040-bb87-4783-a00b-57c8b4e41da2.png</url>
      <title>DEV Community: Mohammad Abu Jafar</title>
      <link>https://dev.to/moeadnan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/moeadnan"/>
    <language>en</language>
    <item>
      <title>AI Agents Are Economic Actors. We're Treating Them Like Chatbots.</title>
      <dc:creator>Mohammad Abu Jafar</dc:creator>
      <pubDate>Fri, 10 Apr 2026 17:04:36 +0000</pubDate>
      <link>https://dev.to/moeadnan/ai-agents-are-economic-actors-were-treating-them-like-chatbots-23j9</link>
      <guid>https://dev.to/moeadnan/ai-agents-are-economic-actors-were-treating-them-like-chatbots-23j9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpx8yfjk68oo7wies0rz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpx8yfjk68oo7wies0rz.png" alt=" " width="800" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The Invoice Problem&lt;/h2&gt;

&lt;p&gt;Your agent just approved a $47,000 invoice to a vendor it has never seen before. At 2 AM. On a Saturday.&lt;/p&gt;

&lt;p&gt;The model that powered the decision passed all safety checks — the output was not toxic, not biased, not hallucinated. The function call was syntactically correct. The tool executed successfully. By every standard metric in the AI safety ecosystem, nothing went wrong.&lt;/p&gt;

&lt;p&gt;Except that the agent had a $5,000 financial limit. The vendor was not in the approved supplier list. The time-of-day risk profile was elevated. And the person who delegated authority to this agent explicitly excluded wire transfers from its scope.&lt;/p&gt;

&lt;p&gt;None of these constraints exist in the model. They exist in the organization. And today, almost nobody is enforcing them.&lt;/p&gt;

&lt;h2&gt;The Gap Nobody Talks About&lt;/h2&gt;

&lt;p&gt;The AI safety conversation has been dominated by model-level concerns: alignment, jailbreaks, hallucination, content policy. These are real problems with real teams working on them. OpenAI, Anthropic, Google, and Meta all invest heavily in making model outputs safe.&lt;/p&gt;

&lt;p&gt;But there is a different category of problem that emerges the moment an agent gets a credit card, a database connection, a deployment key, or an email account. The question shifts from "is this output harmful?" to a set of questions that no model provider can answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this agent authorized to perform this action?&lt;/li&gt;
&lt;li&gt;Who delegated that authority, and do they have it to give?&lt;/li&gt;
&lt;li&gt;Does this action violate organizational policy?&lt;/li&gt;
&lt;li&gt;What is the quantified risk, given the agent's track record?&lt;/li&gt;
&lt;li&gt;Does this conflict with something another agent is already doing?&lt;/li&gt;
&lt;li&gt;Can we prove all of the above to an auditor?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not AI problems. These are institutional governance problems. They existed for human employees — hiring policies, delegation of authority, separation of duties, spending limits, audit trails. We built entire professions (compliance, internal audit, risk management) to handle them.&lt;/p&gt;

&lt;p&gt;When agents become economic actors — entities that commit real resources on behalf of real principals — we need the same infrastructure, rebuilt for machines.&lt;/p&gt;

&lt;h2&gt;Why I Built This&lt;/h2&gt;

&lt;p&gt;I spent 15 years designing the financial and operational control systems institutions run on — Delegation of Authority structures, procurement controls, approval chains, separation-of-duty frameworks. I have worked across government entities, publicly listed corporate groups, and the world's first university dedicated to artificial intelligence. I project-managed a UAE Federal Government mandate reengineering 200+ institutional processes. I contributed to a $70M+ ERP deployment spanning 36 subsidiaries and 30,000 employees. My work has spanned 500+ institutional policies and governance frameworks.&lt;/p&gt;

&lt;p&gt;The pipeline inside AgentCTRL — authority graphs, policy evaluation, delegation chains, risk scoring — is not an invention. It is a digitization of the control systems I built my career on. Those systems were built for humans. AgentCTRL rebuilds them for AI agents.&lt;/p&gt;

&lt;p&gt;When I looked at what the AI industry was building for agent governance, I saw prompt-level instructions, content filters, and simple guardrails. I did not see Delegation of Authority. I did not see separation of duty. I did not see the institutional infrastructure that every enterprise already requires for human employees. That gap is what AgentCTRL exists to close.&lt;/p&gt;

&lt;h2&gt;Why Model Providers Cannot Ship This&lt;/h2&gt;

&lt;p&gt;The argument against a standalone governance layer is: "Won't OpenAI / Anthropic just build this?" The short answer is no, for the same structural reasons that Nvidia does not build Stripe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-model governance.&lt;/strong&gt; OpenAI cannot govern an agent running on Claude. Cross-model enforcement requires an independent layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Organization-specific authority.&lt;/strong&gt; "VP of Finance approves up to $50K, CFO above that, agents inherit from their creating user" is not AI capability. It is organizational structure. Every customer's authority graph is different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regulatory audit trails.&lt;/strong&gt; Auditors do not trust the system being governed to audit itself. The enforcement layer and the execution layer must be separate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-agent policy composition.&lt;/strong&gt; When 15 agents across procurement, finance, and operations are executing concurrently, the governance question is not single-action safety. It is composite organizational risk.&lt;/p&gt;

&lt;p&gt;AWS ships IAM. Enterprises still buy SailPoint. Oracle ships audit logging. Enterprises still buy Imperva. Cloud providers ship security groups. Enterprises still buy Wiz. Platform vendors always ship basic governance as a feature. Dedicated governance layers survive when compliance requirements exceed the built-in, customers are multi-vendor, and governance must be independent of the thing being governed.&lt;/p&gt;

&lt;h2&gt;What Structural Enforcement Looks Like&lt;/h2&gt;

&lt;p&gt;AgentCTRL is a Python library — zero dependencies, framework-agnostic — that implements a 5-stage sequential decision pipeline. Every action proposed by an agent is checked against autonomy constraints, policy rules, delegated authority limits, quantified risk, and cross-agent conflicts. Each stage can short-circuit the pipeline. The result is always one of three: &lt;strong&gt;ALLOW&lt;/strong&gt; (execute it), &lt;strong&gt;ESCALATE&lt;/strong&gt; (ask a human), or &lt;strong&gt;BLOCK&lt;/strong&gt; (stop it). If any stage throws an error, the pipeline defaults to BLOCK. Fail-closed, always.&lt;/p&gt;
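&lt;p&gt;As a rough sketch of the structure described above — using hypothetical names, not AgentCTRL's actual API — a fail-closed sequential pipeline can be as small as this:&lt;/p&gt;

```python
# Hypothetical sketch of a fail-closed, short-circuiting decision pipeline.
# Names (Decision, Action, Stage, run_pipeline) are illustrative assumptions,
# not the AgentCTRL API.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List

class Decision(Enum):
    ALLOW = "allow"        # execute it
    ESCALATE = "escalate"  # ask a human
    BLOCK = "block"        # stop it

@dataclass
class Action:
    agent_id: str
    action_type: str
    params: dict

# Each stage (autonomy, policy, authority, risk, conflicts) maps an action
# to a Decision; anything other than ALLOW short-circuits the pipeline.
Stage = Callable[[Action], Decision]

def run_pipeline(action: Action, stages: List[Stage]) -> Decision:
    for stage in stages:
        try:
            decision = stage(action)
        except Exception:
            return Decision.BLOCK  # fail-closed: an error never falls through to ALLOW
        if decision is not Decision.ALLOW:
            return decision        # short-circuit on ESCALATE or BLOCK
    return Decision.ALLOW

def policy_stage(action: Action) -> Decision:
    # Example organizational rule: invoices above $5,000 require escalation.
    if action.action_type == "invoice.approve" and action.params.get("amount", 0) > 5000:
        return Decision.ESCALATE
    return Decision.ALLOW
```

&lt;p&gt;The key design choice is that the default path is BLOCK, not ALLOW: a stage has to positively clear an action for execution to proceed.&lt;/p&gt;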

&lt;p&gt;Back to the opening scenario. That $47,000 invoice to an unknown vendor at 2 AM on a Saturday? Here is what happens when the action passes through the pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Autonomy:&lt;/strong&gt; The agent is level 2 — cleared for &lt;code&gt;invoice.approve&lt;/code&gt;. Passes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy:&lt;/strong&gt; The organization has a rule: invoices above $5,000 require escalation. The amount is $47,000. &lt;strong&gt;ESCALATE.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pipeline short-circuits. The invoice never gets approved. A human reviews it. The audit trail records exactly what happened, why, and who delegated the authority that the agent was operating under.&lt;/p&gt;

&lt;p&gt;But it goes deeper. If the policy had allowed amounts up to $50,000, the pipeline would have continued:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Authority:&lt;/strong&gt; The agent's delegation chain gives it a $10,000 financial limit from the VP of Finance. $47,000 exceeds this. &lt;strong&gt;ESCALATE.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk scoring:&lt;/strong&gt; Novel vendor (+20%), off-hours (+10%), high-value (+25%). Combined risk: 0.75 (CRITICAL). &lt;strong&gt;ESCALATE.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
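&lt;p&gt;The risk stage in the walkthrough can be sketched as additive factor scoring. The factor weights come from the bullets above; the 0.20 base risk and the 0.7 CRITICAL threshold are illustrative assumptions (the article does not specify AgentCTRL's base score or combination rule) chosen so the factors sum to the 0.75 in the walkthrough:&lt;/p&gt;

```python
# Hypothetical additive risk-scoring sketch. Factor weights are taken from the
# walkthrough; the 0.20 base and 0.7 threshold are assumptions for illustration.
CRITICAL_THRESHOLD = 0.7

def score_risk(base: float, factors: dict) -> float:
    """Sum a base risk with named additive factors, capped at 1.0."""
    return min(1.0, base + sum(factors.values()))

risk = score_risk(0.20, {
    "novel_vendor": 0.20,  # vendor never seen before
    "off_hours": 0.10,     # 2 AM on a Saturday
    "high_value": 0.25,    # $47,000 invoice
})
# risk is roughly 0.75, above the assumed CRITICAL threshold of 0.7
```

&lt;p&gt;Because the score is a deterministic function of named factors, the audit trail can record exactly which factors fired and why.&lt;/p&gt;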

&lt;p&gt;Three independent layers, each capable of catching the problem. This is not prompt engineering. Policies cannot be jailbroken. Authority limits cannot be talked around. Risk scores are deterministic, not probabilistic. The tool call does not happen unless the pipeline approves it.&lt;/p&gt;

&lt;h2&gt;Both Directions&lt;/h2&gt;

&lt;p&gt;Most governance conversations focus on outbound: controlling what your agents do. But there is an equally important inbound question: when someone else's agent calls your API, accesses your MCP tools, or triggers your webhooks, who decides whether to let them in?&lt;/p&gt;

&lt;p&gt;AgentCTRL handles both. The same pipeline, configured with different policies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outbound:&lt;/strong&gt; Your finance agent wants to approve an invoice. Is the amount within its limit? Does policy allow it? Is the risk acceptable?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inbound:&lt;/strong&gt; An external agent calls your &lt;code&gt;/v1/customers&lt;/code&gt; endpoint. Is it verified? Does it have the right credentials? Is it allowed to access PII? Same five stages. Same ALLOW / ESCALATE / BLOCK.&lt;/p&gt;
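&lt;p&gt;A minimal sketch of that symmetry — one evaluation function, two policy sets, all names hypothetical rather than AgentCTRL's actual API:&lt;/p&gt;

```python
# Illustrative sketch: the same evaluate() governs both directions, driven
# by different policy sets. Policy names and fields are assumptions.
def evaluate(action: dict, policies: list) -> str:
    for policy in policies:
        verdict = policy(action)
        if verdict != "ALLOW":
            return verdict  # first non-ALLOW verdict wins
    return "ALLOW"

# Outbound: is the amount within organizational limits?
outbound_policies = [
    lambda a: "ESCALATE" if a.get("amount", 0) > 5000 else "ALLOW",
]

# Inbound: is the calling agent verified, and may it touch PII?
inbound_policies = [
    lambda a: "BLOCK" if not a.get("verified") else "ALLOW",
    lambda a: "ESCALATE" if a.get("touches_pii") else "ALLOW",
]
```

&lt;p&gt;The same verdict vocabulary on both sides is what makes the audit trail uniform: an escalated outbound invoice and an escalated inbound PII request land in the same review queue.&lt;/p&gt;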

&lt;p&gt;This matters because the agent-to-agent economy is coming. When your agent negotiates with a vendor's agent, both sides need governance. Not just safety — institutional governance with audit trails.&lt;/p&gt;

&lt;h2&gt;Trust as Credit&lt;/h2&gt;

&lt;p&gt;Static permission systems assume you know what an agent will do before it does it. The real world does not work that way.&lt;/p&gt;

&lt;p&gt;AgentCTRL's trust calibration system treats agent trust like credit. New agents start with no track record — everything escalates. As an agent accumulates governed actions (50+ actions with &amp;gt;90% success rate), it earns a risk discount. The pipeline becomes more permissive for agents that demonstrate reliability, and tightens for agents that do not.&lt;/p&gt;
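&lt;p&gt;Using the thresholds named above — 50+ governed actions at better than 90% success — the calibration can be sketched as a risk multiplier. The 0.8 discount factor is an illustrative assumption, not AgentCTRL's actual value:&lt;/p&gt;

```python
# Hypothetical trust-calibration sketch based on the thresholds in the text.
# The 0.8 discount multiplier is an assumption for illustration.
def risk_multiplier(actions: int, successes: int) -> float:
    """Trusted agents earn a risk discount; everyone else pays full price."""
    if actions >= 50 and successes / actions > 0.90:
        return 0.8  # established track record: 20% risk discount
    return 1.0      # new or unreliable agents: no discount, everything escalates per policy
```

&lt;p&gt;This is the credit-scoring shape: autonomy is earned from a track record of governed actions, not granted up front.&lt;/p&gt;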

&lt;p&gt;This is the architecture for dynamic autonomy — not a feature we claim is complete, but the structural foundation. The pipeline already evaluates trust context. The risk engine already applies calibration discounts. What comes next is making those thresholds adaptive.&lt;/p&gt;

&lt;h2&gt;The Real Scarcity&lt;/h2&gt;

&lt;p&gt;Here is the question nobody in AI is asking yet: what is the cost of a human looking at something?&lt;/p&gt;

&lt;p&gt;The cost of AI compute is approaching zero. A token costs fractions of a cent. An agent can evaluate a thousand invoices in the time it takes a human to review one. But human judgment — the ability to assess a novel situation, weigh organizational context, and make the call — is finite. There are 8 billion humans. Each has roughly 10 productive hours a day. That number does not change regardless of how many agents exist.&lt;/p&gt;

&lt;p&gt;Today, governance systems ask: "Is this allowed?" That question produces a binary answer. A blanket $5,000 approval threshold wastes human attention on routine transactions that the agent handles perfectly, while missing novel $500 risks that actually need a human eye.&lt;/p&gt;

&lt;p&gt;The better question is: "Is this worth consuming human attention?" That reframes governance from static rule enforcement to attention-cost optimization. The expected loss from an autonomous agent error, weighed against the cost of interrupting a human, is the calculation that matters.&lt;/p&gt;
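&lt;p&gt;That calculation — expected loss versus the cost of an interruption — fits in one line. The numbers below are illustrative, not a shipped feature:&lt;/p&gt;

```python
# Sketch of the attention-cost framing described above. Escalate only when
# the expected loss from an autonomous error exceeds the cost of pulling a
# human into the loop. All numbers are illustrative assumptions.
def worth_escalating(p_error: float, loss_if_wrong: float, review_cost: float) -> bool:
    return p_error * loss_if_wrong > review_cost

# A routine $3,000 invoice at a 0.1% error rate: expected loss $3 against a
# $50 review cost, so it stays autonomous. A novel $500 action at a 40%
# error rate: expected loss $200 against $50, so it earns human attention.
```

&lt;p&gt;Under this framing, the blanket $5,000 threshold from the previous paragraph is exactly backwards: it prices every transaction by amount when the quantity that matters is expected loss per unit of human attention.&lt;/p&gt;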

&lt;p&gt;We are not building this yet. But it is the direction the architecture supports, and it is the question that will define the next generation of governance systems. Static rules were built for a world where humans did the work and needed to be checked. In a world where agents do the work, the governance system's job is to protect the scarcest resource: human judgment.&lt;/p&gt;

&lt;h2&gt;What This Is Not&lt;/h2&gt;

&lt;p&gt;AgentCTRL is not a prompt filter. It does not look at model outputs.&lt;/p&gt;

&lt;p&gt;It is not an orchestration framework. It does not run agents or manage workflows.&lt;/p&gt;

&lt;p&gt;It is not a model-level safety tool. It does not compete with RLHF, constitutional AI, or content classifiers.&lt;/p&gt;

&lt;p&gt;It is the layer that answers: "Given that this agent wants to take this action, with these parameters, at this time — should the action actually execute?" That question is orthogonal to whether the model is aligned. A perfectly aligned model can still produce an action that violates organizational policy.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agentctrl
python &lt;span class="nt"&gt;-m&lt;/span&gt; agentctrl          &lt;span class="c"&gt;# see the pipeline demo&lt;/span&gt;
agentctrl validate &lt;span class="s1"&gt;'{"agent_id": "analyst", "action_type": "invoice.approve", "action_params": {"amount": 6000}}'&lt;/span&gt;
agentctrl init               &lt;span class="c"&gt;# scaffold starter policies + authority config&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;74 tests. Zero dependencies. Apache 2.0.&lt;/p&gt;

&lt;p&gt;The code is at &lt;a href="https://github.com/moeintel/AgentCTRL" rel="noopener noreferrer"&gt;github.com/moeintel/AgentCTRL&lt;/a&gt;. The PyPI package is &lt;a href="https://pypi.org/project/agentctrl/" rel="noopener noreferrer"&gt;&lt;code&gt;agentctrl&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I built this because AI agents need the same institutional controls that human employees have had for decades. Not because agents are dangerous — because agents are economic actors, and economic actors need institutional infrastructure.&lt;/p&gt;

&lt;p&gt;Those systems were built for humans. AgentCTRL rebuilds them for AI agents.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;AgentCTRL is built by &lt;a href="https://moeintel.ai" rel="noopener noreferrer"&gt;MoeIntel&lt;/a&gt;. Created by &lt;a href="https://github.com/moeadnan" rel="noopener noreferrer"&gt;Mohammad Abu Jafar&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>governance</category>
      <category>python</category>
    </item>
    <item>
      <title>I've been designing control systems for 15 years. AI agents just made my career relevant in a way I didn't expect.</title>
      <dc:creator>Mohammad Abu Jafar</dc:creator>
      <pubDate>Fri, 10 Apr 2026 16:54:29 +0000</pubDate>
      <link>https://dev.to/moeadnan/ive-been-designing-control-systems-for-15-years-ai-agents-just-made-my-career-relevant-in-a-way-i-jnh</link>
      <guid>https://dev.to/moeadnan/ive-been-designing-control-systems-for-15-years-ai-agents-just-made-my-career-relevant-in-a-way-i-jnh</guid>
      <description>&lt;p&gt;Hey everyone. First post here, figured I'd start with an introduction.&lt;/p&gt;

&lt;p&gt;I'm Mohammad. I spent 15 years building the boring-but-critical systems that keep institutions from imploding — approval chains, spending limits, delegation of authority, separation of duty. The stuff that means a junior accountant can't wire $50K to a random vendor at 2 AM.&lt;/p&gt;

&lt;p&gt;Then AI agents showed up with API keys and credit cards. And none of those controls exist for them.&lt;/p&gt;

&lt;p&gt;An agent running on a service account has no spending limit. No scope restriction. No audit trail that says who authorized it to act. If a human employee operated that way, they'd be escorted out by security.&lt;/p&gt;

&lt;p&gt;So I built the thing I couldn't find: AgentCTRL — an open-source Python library that puts real institutional controls on agent actions. Not prompt-level "please don't do bad things." Structural enforcement. The tool call doesn't happen unless the pipeline says yes.&lt;/p&gt;

&lt;p&gt;pip install agentctrl — zero deps, Apache 2.0, 74 tests, works with LangChain/OpenAI/CrewAI or anything else.&lt;/p&gt;

&lt;p&gt;Writing my first real article here soon about why AI agents are economic actors and why we keep governing them like chatbots.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/moeintel/AgentCTRL" rel="noopener noreferrer"&gt;https://github.com/moeintel/AgentCTRL&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
