<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joe Carpenter</title>
    <description>The latest articles on DEV Community by Joe Carpenter (@dingdawg).</description>
    <link>https://dev.to/dingdawg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864764%2Ffe9a52a0-34cf-49ad-92d5-7ed24191d948.png</url>
      <title>DEV Community: Joe Carpenter</title>
      <link>https://dev.to/dingdawg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dingdawg"/>
    <language>en</language>
    <item>
      <title>How an AI Agent Ran Up a $47,000 Bill in 11 Days (And How to Stop It)</title>
      <dc:creator>Joe Carpenter</dc:creator>
      <pubDate>Sat, 25 Apr 2026 07:52:13 +0000</pubDate>
      <link>https://dev.to/dingdawg/how-an-ai-agent-ran-up-a-47000-bill-in-11-days-and-how-to-stop-it-1fk</link>
      <guid>https://dev.to/dingdawg/how-an-ai-agent-ran-up-a-47000-bill-in-11-days-and-how-to-stop-it-1fk</guid>
      <description>&lt;h1&gt;
  
  
  How an AI Agent Ran Up a $47,000 Bill in 11 Days (And How to Stop It)
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Published by Innovative Systems Global — April 2026&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;In November 2025, four AI agents entered an infinite retry loop.&lt;/p&gt;

&lt;p&gt;Nobody noticed for 11 days.&lt;/p&gt;

&lt;p&gt;When the bill arrived, it was $47,000. All of it from LLM API calls. All of it preventable. The team had logging. They had monitoring. They did not have a hard limit.&lt;/p&gt;

&lt;p&gt;This is not a unique incident. It's becoming a rite of passage for engineering teams running agents in production.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why this keeps happening
&lt;/h2&gt;

&lt;p&gt;Every major LLM provider — OpenAI, Anthropic, Google — charges per token. The more your agent runs, the more you pay. This is the correct model. The problem is that &lt;strong&gt;agents don't know how much they're spending&lt;/strong&gt;, and &lt;strong&gt;nothing stops them when they exceed a budget&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Current "solutions":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spend alerts&lt;/strong&gt; — fire after the damage is done. An alert at $1,000 doesn't help when an agent burns roughly $4,300 per day.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API rate limits&lt;/strong&gt; — these throttle requests per minute, not total spend.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability platforms&lt;/strong&gt; (Helicone, LangSmith) — they show you what happened. They don't prevent it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud billing alerts&lt;/strong&gt; — by the time AWS or OpenAI sends an alert, the loop has been running for days.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's missing: a hard gate that runs &lt;strong&gt;before&lt;/strong&gt; the LLM call, checks the budget, and refuses to proceed if the limit is exceeded.&lt;/p&gt;
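
&lt;p&gt;The gate pattern itself is small. Here is a minimal sketch of a pre-call budget check — an illustration of the pattern, not the &lt;code&gt;dingdawg-governance&lt;/code&gt; API; the &lt;code&gt;BudgetGate&lt;/code&gt; and &lt;code&gt;BudgetExceeded&lt;/code&gt; names are made up:&lt;/p&gt;

```python
# Illustrative pre-call budget gate -- a sketch of the pattern, NOT the
# dingdawg-governance API. BudgetGate and BudgetExceeded are made-up names.

class BudgetExceeded(Exception):
    """Raised when a call would push spend past the hard limit."""

class BudgetGate:
    def __init__(self, limit_usd: float):
        self.limit_usd = limit_usd
        self.spent_usd = 0.0

    def charge(self, estimated_cost_usd: float) -> None:
        """Runs BEFORE the LLM call: refuse if the limit would be breached."""
        if self.spent_usd + estimated_cost_usd > self.limit_usd:
            raise BudgetExceeded(
                f"${self.spent_usd + estimated_cost_usd:.2f} would exceed "
                f"the ${self.limit_usd:.2f} limit"
            )
        self.spent_usd += estimated_cost_usd

gate = BudgetGate(limit_usd=50.00)
gate.charge(0.018)  # a cheap call passes the gate
```

&lt;p&gt;The point is the ordering: the check runs before the request leaves your process, so an infinite loop hits the exception on the first over-budget iteration instead of the monthly invoice.&lt;/p&gt;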




&lt;h2&gt;
  
  
  The two-line problem
&lt;/h2&gt;

&lt;p&gt;Here's what most agent code looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is no cost tracking here. No budget check. No receipt. If this code runs 50,000 times in an infinite loop, you find out when the bill arrives.&lt;/p&gt;




&lt;h2&gt;
  
  
  The fix: meter every call, enforce every limit
&lt;/h2&gt;

&lt;p&gt;We built &lt;code&gt;dingdawg-governance&lt;/code&gt; to solve this. Three new MCP tools in v2.1.0:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;meter_llm_call&lt;/code&gt;&lt;/strong&gt; — call this after every LLM response. Pass the model, tokens in, tokens out, and your agent ID. Get back the cost, your cumulative spend, and your budget status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"receipt_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mtr_abc123_def456"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agent_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-research-agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"openai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"model"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gpt-4o"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"prompt_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"completion_tokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cost_usd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.018&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cumulative_spend_usd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;12.43&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"budget_status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ok"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"budget_limit_usd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;50.00&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;set_llm_budget&lt;/code&gt;&lt;/strong&gt; — set a hard limit for any agent. Daily or monthly. Warning fires at 80% by default.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agent_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my-research-agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"limit_usd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;50.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"period"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"daily"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;get_spend_report&lt;/code&gt;&lt;/strong&gt; — query spend by agent, model, and date range. See exactly which agents cost what.&lt;/p&gt;




&lt;h2&gt;
  
  
  How this prevents the $47K incident
&lt;/h2&gt;

&lt;p&gt;With &lt;code&gt;dingdawg-governance&lt;/code&gt; wired:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Day 1: Agent starts loop. &lt;code&gt;meter_llm_call&lt;/code&gt; tracks each call.&lt;/li&gt;
&lt;li&gt;Day 1, ~$40 in: &lt;code&gt;budget_status&lt;/code&gt; flips to &lt;code&gt;"warning"&lt;/code&gt;. Your code can log, alert, or throttle.&lt;/li&gt;
&lt;li&gt;Day 1, $50 in: &lt;code&gt;budget_status&lt;/code&gt; flips to &lt;code&gt;"exceeded"&lt;/code&gt;. Your code stops the agent.&lt;/li&gt;
&lt;li&gt;Total damage: $50, not $47,000.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The enforcement is in YOUR code — you decide what to do when the budget is exceeded. The meter gives you the signal.&lt;/p&gt;
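
&lt;p&gt;That wiring can be sketched as a loop. &lt;code&gt;make_meter&lt;/code&gt; below is a hypothetical local stand-in for &lt;code&gt;meter_llm_call&lt;/code&gt; — real calls go through your MCP client — with the limit and per-call cost taken from the example receipt above:&lt;/p&gt;

```python
# Illustrative agent loop wired to a meter. make_meter is a hypothetical
# local stand-in for meter_llm_call; real metering goes through MCP.

def make_meter(limit_usd: float, warn_at: float = 0.8):
    state = {"spend": 0.0}

    def meter(cost_usd: float) -> dict:
        state["spend"] += cost_usd
        if state["spend"] >= limit_usd:
            status = "exceeded"
        elif state["spend"] >= warn_at * limit_usd:
            status = "warning"
        else:
            status = "ok"
        return {"cumulative_spend_usd": round(state["spend"], 4),
                "budget_status": status}

    return meter

meter = make_meter(limit_usd=50.00)

# Inside the agent loop: meter after each call, stop on "exceeded".
for _ in range(10_000):
    # response = client.chat.completions.create(...)  # the real LLM call
    receipt = meter(cost_usd=0.018)                   # meter every call
    if receipt["budget_status"] == "exceeded":
        break                                          # damage capped near $50
```

&lt;p&gt;The &lt;code&gt;break&lt;/code&gt; is the enforcement — swap in whatever your runtime needs: raise, page someone, or park the task for human review.&lt;/p&gt;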




&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# As an MCP server (Claude Desktop, Cursor, any MCP-compatible client)&lt;/span&gt;
npx dingdawg-governance

&lt;span class="c"&gt;# Claude Code&lt;/span&gt;
claude mcp add dingdawg-governance npx dingdawg-governance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Free tier: unlimited &lt;code&gt;meter_llm_call&lt;/code&gt; and &lt;code&gt;set_llm_budget&lt;/code&gt; calls. Local filesystem storage. No API key required.&lt;/p&gt;

&lt;p&gt;Paid tier ($19/month): cloud receipt storage, team dashboards, cross-session spend history, PDF export. API key at dingdawg.com/developers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Price table
&lt;/h2&gt;

&lt;p&gt;Built in. Covers 30+ models across OpenAI, Anthropic, Google, Groq, Mistral, Cohere, and DeepSeek. Updated with each release.&lt;/p&gt;

&lt;p&gt;If your model isn't in the table, it returns &lt;code&gt;cost_usd: 0&lt;/code&gt; with a note — it never silently miscalculates.&lt;/p&gt;
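
&lt;p&gt;The fallback behavior can be illustrated with a toy lookup. The prices below are invented for the example — they are not &lt;code&gt;dingdawg-governance&lt;/code&gt;'s actual table:&lt;/p&gt;

```python
# Toy price lookup with an explicit unknown-model fallback.
# Prices are made-up examples, not the package's real table.
PRICE_PER_1M = {                    # (input_usd, output_usd) per 1M tokens
    "gpt-4o": (2.50, 10.00),
    "claude-sonnet": (3.00, 15.00),
}

def cost_usd(model: str, prompt_tokens: int, completion_tokens: int) -> dict:
    if model not in PRICE_PER_1M:
        # Never silently miscalculate: return zero plus a note.
        return {"cost_usd": 0, "note": f"unknown model: {model}"}
    p_in, p_out = PRICE_PER_1M[model]
    cost = (prompt_tokens * p_in + completion_tokens * p_out) / 1_000_000
    return {"cost_usd": round(cost, 6)}
```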




&lt;h2&gt;
  
  
  Works with any agent framework
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;dingdawg-governance&lt;/code&gt; is an MCP server. Any agent that can call MCP tools can use it — LangChain, AutoGen, CrewAI, custom agents, Claude Code, Cursor. No SDK required. No framework lock-in.&lt;/p&gt;




&lt;h2&gt;
  
  
  The broader problem
&lt;/h2&gt;

&lt;p&gt;The $47K incident is the visible symptom. The real problem is that enterprises are deploying agents with no spend governance at all. Every dollar an agent spends is invisible until it's gone.&lt;/p&gt;

&lt;p&gt;As agents become more autonomous — running overnight, chaining into other agents, operating without human supervision — the spend problem compounds. A single misconfigured retry policy can turn a $50 research job into a $50,000 infrastructure incident.&lt;/p&gt;
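
&lt;p&gt;The retry failure mode is also easy to cap at the source. A sketch of a retry policy with hard limits on both attempts and spend — every name and number here is illustrative:&lt;/p&gt;

```python
# Illustrative retry policy with two hard caps: attempts and spend.
# Without either cap, a persistent failure retries forever.
import time

def call_with_caps(call, max_attempts=5, max_spend_usd=1.00,
                   cost_per_call_usd=0.018, base_delay_s=0.0):
    spend = 0.0
    for attempt in range(max_attempts):
        if spend + cost_per_call_usd > max_spend_usd:
            raise RuntimeError(f"retry budget exhausted at ${spend:.3f}")
        spend += cost_per_call_usd
        try:
            return call()
        except Exception:
            time.sleep(base_delay_s * 2 ** attempt)  # exponential backoff
    raise RuntimeError(f"gave up after {max_attempts} attempts (${spend:.3f})")
```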

&lt;p&gt;&lt;strong&gt;Budget enforcement isn't a nice-to-have. It's the seatbelt.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx dingdawg-governance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source: &lt;a href="https://github.com/dingdawg/governance-sdk" rel="noopener noreferrer"&gt;github.com/dingdawg/governance-sdk&lt;/a&gt;&lt;br&gt;
Pricing: &lt;a href="https://dingdawg.com/developers" rel="noopener noreferrer"&gt;dingdawg.com/developers&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Innovative Systems Global builds AI governance infrastructure for teams running agents in production. Based in the Rio Grande Valley, Texas.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>I built a governance layer for AI agents after watching them fail silently in production</title>
      <dc:creator>Joe Carpenter</dc:creator>
      <pubDate>Tue, 07 Apr 2026 15:03:33 +0000</pubDate>
      <link>https://dev.to/dingdawg/i-built-a-governance-layer-for-ai-agents-after-watching-them-fail-silently-in-production-3li3</link>
      <guid>https://dev.to/dingdawg/i-built-a-governance-layer-for-ai-agents-after-watching-them-fail-silently-in-production-3li3</guid>
      <description>&lt;p&gt;Picture this: a healthcare AI agent is triaging patient intake. It's running on a solid model, well-prompted, tested in staging. In production, a patient describes symptoms that match two possible care pathways — one urgent, one routine. The agent picks routine. No error is thrown. No log entry flags it. No human is notified. The patient waits three days for a callback that should have been a same-day referral.&lt;/p&gt;

&lt;p&gt;Nobody finds out until a follow-up call two weeks later.&lt;/p&gt;

&lt;p&gt;I'm not describing a real incident. But I've talked to enough people shipping agents into healthcare, fintech, and legal workflows to know this scenario isn't hypothetical — it's a near-miss waiting in every ungoverned production agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual problem
&lt;/h2&gt;

&lt;p&gt;When we started shipping AI agents into regulated environments, the agents themselves weren't the problem. The problem was what surrounded them. Or didn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No audit trail.&lt;/strong&gt; When something went wrong, we had inference logs at best — token inputs and outputs, no semantic record of &lt;em&gt;why&lt;/em&gt; a decision was made or what policy it touched.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No rollback.&lt;/strong&gt; If an agent executed a bad action — sent a message, wrote a record, triggered a workflow — we had no native mechanism to undo it or even flag it for review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No explainability.&lt;/strong&gt; When a compliance officer asked "why did your agent do that?", the honest answer was "we don't know, here's the prompt."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No governance gate.&lt;/strong&gt; Actions executed the moment intent matched. There was no intercept layer that could say: &lt;em&gt;this action requires human review before proceeding&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In consumer apps, that's bad UX. In regulated industries, that's liability.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we built
&lt;/h2&gt;

&lt;p&gt;DingDawg is a governance layer that wraps any AI agent and intercepts every action before it executes. It's MCP-native, which means it slots directly into Claude Code, Codex, and Cursor without custom middleware. It also works with any Python agent via a two-line install.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;dingdawg-loop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dingdawg&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;schedule_governed&lt;/span&gt;

&lt;span class="nf"&gt;schedule_governed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;@hipaa-intake&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cron&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0 9 * * *&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Every action the agent takes is now routed through a governance gate before execution.&lt;/p&gt;
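
&lt;p&gt;The intercept pattern itself is small. A toy sketch of routing an action through a policy check before it executes — hypothetical names throughout, not the DingDawg SDK:&lt;/p&gt;

```python
# Minimal sketch of the intercept pattern -- hypothetical names,
# not the DingDawg SDK. Every action passes a policy check first.
from typing import Callable

def governed(policy: Callable[[str, dict], str]):
    """Decorator: route an action through a policy gate before it runs."""
    def wrap(action: Callable):
        def inner(**kwargs):
            verdict = policy(action.__name__, kwargs)
            if verdict == "BLOCKED":
                return {"status": "blocked", "action": action.__name__}
            return {"status": "ok", "result": action(**kwargs)}
        return inner
    return wrap

def require_review_for_routing(name: str, args: dict) -> str:
    # Toy policy: block routine routing when the urgency score is high.
    if (name == "route_patient"
            and args.get("urgency", 0) > 0.8
            and args.get("pathway") == "routine"):
        return "BLOCKED"
    return "ALLOWED"

@governed(require_review_for_routing)
def route_patient(pathway: str, urgency: float):
    return f"routed to {pathway}"
```

&lt;p&gt;The decorator never mutates the action; it only decides whether the action runs, which is what makes the gate auditable.&lt;/p&gt;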

&lt;h2&gt;
  
  
  What the governance receipt looks like
&lt;/h2&gt;

&lt;p&gt;Every governed action produces a receipt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"action_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"act_9f3a21bc"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"agent_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@hipaa-intake"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-04-06T09:00:14Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"route_patient"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"policy_result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"BLOCKED"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"lnn_trace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"features"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"symptom_urgency_score"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"weight"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.84&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ESCALATE"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"prior_visit_flag"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"weight"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.61&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ESCALATE"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"routing_decision"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"weight"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;-0.91&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"direction"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"CONFLICT"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"explanation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Agent routing conflicts with urgency signal at 0.84 confidence. Human review required before execution."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ipfs_cid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"policy_version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"hipaa-v2.1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The LNN causal trace is not a black-box score. It's a weighted feature explanation — you can see exactly which signals triggered the block and why. The &lt;code&gt;ipfs_cid&lt;/code&gt; is a content-addressed, immutable proof stored on IPFS. Your regulator can verify it. You cannot alter it after the fact.&lt;/p&gt;
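
&lt;p&gt;Reading such a trace mechanically is straightforward. A toy interpretation of the weighted features above — this mimics the shape of the output, it is not the LNN engine itself:&lt;/p&gt;

```python
# Toy reading of a weighted-feature trace -- not the LNN engine itself.
# A CONFLICT entry alongside a strong ESCALATE signal marks a block.
features = [
    {"name": "symptom_urgency_score", "weight": 0.84, "direction": "ESCALATE"},
    {"name": "prior_visit_flag",      "weight": 0.61, "direction": "ESCALATE"},
    {"name": "routing_decision",      "weight": -0.91, "direction": "CONFLICT"},
]

def explain(features, threshold=0.8):
    escalate = max((f["weight"] for f in features
                    if f["direction"] == "ESCALATE"), default=0.0)
    conflict = any(f["direction"] == "CONFLICT" for f in features)
    if conflict and escalate >= threshold:
        return ("BLOCKED", f"routing conflicts with urgency signal "
                           f"at {escalate:.2f} confidence")
    return ("ALLOWED", "no blocking conflict")
```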

&lt;h2&gt;
  
  
  The open-core model
&lt;/h2&gt;

&lt;p&gt;The SDK, governance primitives, LNN trace engine, and MCP integration are Apache 2.0. Free. Open on GitHub at &lt;a href="https://github.com/dingdawg/governance-sdk" rel="noopener noreferrer"&gt;github.com/dingdawg/governance-sdk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The cloud tier adds multi-agent orchestration, managed IPFS pinning, enterprise policy management, and a creator marketplace where governance plugins can be published and monetized. We think the core infrastructure should be auditable. You shouldn't have to take our word for it on something this critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  The regulatory window is closing
&lt;/h2&gt;

&lt;p&gt;EU AI Act enforcement starts &lt;strong&gt;August 2026&lt;/strong&gt;. It requires audit trails, explainability, and human oversight mechanisms for high-risk AI systems — healthcare, hiring, credit, law enforcement, critical infrastructure.&lt;/p&gt;

&lt;p&gt;Colorado SB 205 hits &lt;strong&gt;June 30, 2026&lt;/strong&gt;. Narrower but sharper — specifically targeting consequential automated decisions with a right-to-explanation requirement.&lt;/p&gt;

&lt;p&gt;If you're shipping agents in any of these domains and you don't have governance infrastructure in place, you're building technical debt that will be expensive to retrofit under deadline pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;Free harness score — 2 minutes, shows exactly where your agent governance gaps are: &lt;strong&gt;&lt;a href="https://dingdawg.com/harness" rel="noopener noreferrer"&gt;dingdawg.com/harness&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Free compliance scan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;dingdawg-compliance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/dingdawg/governance-sdk" rel="noopener noreferrer"&gt;github.com/dingdawg/governance-sdk&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/dingdawg-governance" rel="noopener noreferrer"&gt;npmjs.com/package/dingdawg-governance&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're shipping agents in regulated environments, I'd genuinely like to hear what you're running into. The governance problem is underspecified and we're building in public.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>compliance</category>
      <category>mcp</category>
    </item>
  </channel>
</rss>
