<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dan Evans</title>
    <description>The latest articles on DEV Community by Dan Evans (@dan_evans_1).</description>
    <link>https://dev.to/dan_evans_1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842185%2F6acb0bbd-9399-4d8b-81b1-1319a4a0f72c.png</url>
      <title>DEV Community: Dan Evans</title>
      <link>https://dev.to/dan_evans_1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dan_evans_1"/>
    <language>en</language>
    <item>
      <title>Execution Is the Risk: Why AI Governance Must Live at the Boundary</title>
      <dc:creator>Dan Evans</dc:creator>
      <pubDate>Tue, 31 Mar 2026 03:08:08 +0000</pubDate>
      <link>https://dev.to/dan_evans_1/execution-is-the-risk-why-ai-governance-must-live-at-the-boundary-4f1a</link>
      <guid>https://dev.to/dan_evans_1/execution-is-the-risk-why-ai-governance-must-live-at-the-boundary-4f1a</guid>
      <description>&lt;p&gt;Most AI governance conversations are still missing the point.&lt;/p&gt;

&lt;p&gt;The risk does not come from what the model says. It comes from what the system does next.&lt;/p&gt;

&lt;p&gt;There is a moment in every AI system where a proposed action turns into a real state change. A record is written. A payment is sent. An account is modified. That moment is the execution boundary. And right now, most systems treat it as an assumption, not a control point.&lt;/p&gt;

&lt;p&gt;They check policy before execution. They log what happened after execution. Some even add approvals in the middle. But none of that guarantees that the action that was evaluated is the same action that actually committed.&lt;/p&gt;

&lt;p&gt;That gap is where failures live.&lt;/p&gt;

&lt;p&gt;If anything changes between evaluation and execution (identity, inputs, system state, timing), then the original decision is no longer valid. But most systems carry that decision forward as if it still applies. That is not governance. That is hope.&lt;/p&gt;

&lt;p&gt;Real control requires something stricter.&lt;/p&gt;

&lt;p&gt;At the moment of execution, authority has to be re-resolved against the current state. Not earlier. Not assumed. Not inferred. Proven. And the decision has to be bound to the action itself so that what executes is exactly what was authorized, nothing more, nothing less.&lt;/p&gt;

&lt;p&gt;That means no drift between evaluation and commit. No silent changes. No second interpretation. The decision and the execution have to become the same thing.&lt;/p&gt;

&lt;p&gt;And when that happens, you can do something most systems cannot do today. You can prove it.&lt;/p&gt;

&lt;p&gt;You can produce a verifiable record that shows exactly what was proposed, what was evaluated, what policy applied, what conditions existed, and why the system allowed or blocked the action. Not as a log. As a sealed artifact that can be independently verified and replayed.&lt;/p&gt;
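&lt;p&gt;A minimal sketch of that binding in Python. The names here are illustrative, not PrimeFormCalculus&#8217;s actual API: the decision is sealed to a digest of the exact action payload, and commit re-derives the digest and re-checks the recorded conditions against current state.&lt;/p&gt;

```python
import hashlib
import json


def seal_decision(action: dict, policy_id: str, conditions: dict) -> dict:
    """Bind an authorization decision to the exact action payload."""
    payload = json.dumps(
        {"action": action, "policy": policy_id, "conditions": conditions},
        sort_keys=True,
    )
    return {
        "digest": hashlib.sha256(payload.encode()).hexdigest(),
        "policy": policy_id,
        "conditions": conditions,
    }


def commit(action: dict, decision: dict, state: dict) -> bool:
    """Re-derive the digest at commit time; any drift fails verification."""
    payload = json.dumps(
        {"action": action, "policy": decision["policy"],
         "conditions": decision["conditions"]},
        sort_keys=True,
    )
    if hashlib.sha256(payload.encode()).hexdigest() != decision["digest"]:
        return False  # the action drifted from what was evaluated
    # every condition recorded in the decision must still hold right now
    return all(state.get(k) == v for k, v in decision["conditions"].items())
```

&lt;p&gt;If the action payload is altered after evaluation, or any recorded condition no longer matches live state, verification fails and nothing executes.&lt;/p&gt;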

&lt;p&gt;This is the shift that needs to happen.&lt;/p&gt;

&lt;p&gt;Governance cannot live in guidelines. It cannot live in logs. It cannot live in approvals. It has to live at the execution boundary, where actions become real.&lt;/p&gt;

&lt;p&gt;The model proposes.&lt;/p&gt;

&lt;p&gt;The system commits.&lt;/p&gt;

&lt;p&gt;Control exists only if authority is resolved at that exact moment, and the system can prove that what executed is exactly what was allowed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://primeformcalculus.com/" rel="noopener noreferrer"&gt;PrimeFormCalculus.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aigovernance</category>
      <category>aiagents</category>
    </item>
    <item>
      <title>What Is AI Execution Risk? Why AI Governance Fails at the Execution Boundary</title>
      <dc:creator>Dan Evans</dc:creator>
      <pubDate>Sun, 29 Mar 2026 22:49:05 +0000</pubDate>
      <link>https://dev.to/dan_evans_1/what-is-ai-execution-risk-why-ai-governance-fails-at-the-execution-boundary-3ek6</link>
      <guid>https://dev.to/dan_evans_1/what-is-ai-execution-risk-why-ai-governance-fails-at-the-execution-boundary-3ek6</guid>
      <description>&lt;p&gt;Most discussions about AI governance miss where real failures actually happen. The problem isn’t what AI systems think. It’s what they execute.&lt;/p&gt;

&lt;p&gt;This is what’s known as AI execution risk.&lt;/p&gt;

&lt;p&gt;AI execution risk happens when a system performs an action that was approved earlier, but is no longer valid at the moment it runs. In many AI and machine learning systems, decisions are made upstream and executed later. By the time execution happens, the context may have changed, but the system continues anyway.&lt;/p&gt;

&lt;p&gt;That gap between reasoning and execution is where things break.&lt;/p&gt;

&lt;p&gt;In real-world software engineering, this shows up in simple ways. An agent skips steps but still reports success. A workflow runs on outdated data. A system performs the correct action at the wrong time. These are not hallucinations. They are execution failures.&lt;/p&gt;

&lt;p&gt;From a security perspective, this is where the real risk lives. Once AI systems can take action, they become part of your execution layer. If there is no control at that point, you are trusting earlier reasoning instead of verifying what is true now.&lt;/p&gt;

&lt;p&gt;That’s why most approaches to AI governance fall short. Policies, monitoring, and audits happen before or after execution, but not at the moment the action actually occurs.&lt;/p&gt;

&lt;p&gt;AI execution risk is the failure that occurs when an AI-driven action is executed without being checked against current conditions.&lt;/p&gt;

&lt;p&gt;Most AI governance frameworks focus on model behavior, compliance policies, and monitoring outputs. They do not control execution itself.&lt;/p&gt;

&lt;p&gt;The shift is to treat execution as a boundary.&lt;/p&gt;

&lt;p&gt;Every action needs to be checked again at the moment it runs. Not based on what was decided earlier, but based on what is valid now. That turns governance from something abstract into something that actually controls behavior.&lt;/p&gt;
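&lt;p&gt;As a rough sketch (the structure is hypothetical, not a real framework), the re-check looks like this: the earlier decision carries an expiry and a callable that re-tests its live preconditions, and both are evaluated at the moment of execution.&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone


class StaleDecisionError(Exception):
    """The earlier approval is no longer valid at run time."""


def run_if_still_valid(decision: dict, execute, now=None):
    """Re-validate a prior decision at the moment of execution.

    `decision` carries an expiry and a callable that re-checks its live
    preconditions; `execute` performs the side effect.
    """
    now = now or datetime.now(timezone.utc)
    if now > decision["valid_until"]:
        raise StaleDecisionError("approval expired before execution")
    if not decision["still_true"]():
        raise StaleDecisionError("preconditions changed since approval")
    return execute()
```

&lt;p&gt;The point is not the mechanism but the timing: validity is established when the action runs, not when it was decided.&lt;/p&gt;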

&lt;p&gt;If AI is going to operate in real systems, governance can’t stop at reasoning. It has to exist at execution.&lt;/p&gt;

&lt;p&gt;Full breakdown here:&lt;br&gt;
&lt;a href="https://primeformcalculus.com/ai-execution-risk" rel="noopener noreferrer"&gt;PrimeFormCalculus.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Curious how others are handling AI execution risk in production systems?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>security</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The Most Dangerous Failures Aren’t Wrong Decisions: They’re Unchecked Actions</title>
      <dc:creator>Dan Evans</dc:creator>
      <pubDate>Thu, 26 Mar 2026 18:52:56 +0000</pubDate>
      <link>https://dev.to/dan_evans_1/the-most-dangerous-failures-arent-wrong-decisions-theyre-unchecked-actions-1dhg</link>
      <guid>https://dev.to/dan_evans_1/the-most-dangerous-failures-arent-wrong-decisions-theyre-unchecked-actions-1dhg</guid>
      <description>&lt;p&gt;Most systems don’t fail because they are wrong.&lt;br&gt;
They fail because they act without proving they are allowed to act.&lt;/p&gt;

&lt;p&gt;That problem exists in every domain.&lt;/p&gt;

&lt;p&gt;Take healthcare.&lt;/p&gt;

&lt;p&gt;A clinical AI flags a patient as “low risk” and recommends discharge.&lt;br&gt;
The model is accurate based on the data it saw.&lt;br&gt;
The workflow moves forward.&lt;br&gt;
The system updates the record and releases the patient.&lt;/p&gt;

&lt;p&gt;But something changed.&lt;/p&gt;

&lt;p&gt;A late lab result came in.&lt;br&gt;
A nurse entered a new symptom.&lt;br&gt;
A medication interaction wasn’t accounted for.&lt;/p&gt;

&lt;p&gt;The recommendation was valid when it was generated.&lt;br&gt;
It is no longer valid when it is executed.&lt;/p&gt;

&lt;p&gt;And yet the system still commits it.&lt;/p&gt;

&lt;p&gt;Because nothing re-checks authority at the moment of action.&lt;/p&gt;

&lt;p&gt;PFC (PrimeFormCalculus) fixes this by inserting a hard boundary before anything becomes real.&lt;/p&gt;

&lt;p&gt;Every action must carry proof.&lt;br&gt;
Who authorized it.&lt;br&gt;
What policy allowed it.&lt;br&gt;
What conditions must still be true.&lt;/p&gt;

&lt;p&gt;And at execution, that proof is verified against the current state.&lt;/p&gt;

&lt;p&gt;If anything has changed, the action is blocked.&lt;/p&gt;

&lt;p&gt;Not flagged.&lt;br&gt;
Not logged.&lt;br&gt;
Not reviewed later.&lt;/p&gt;

&lt;p&gt;Stopped.&lt;/p&gt;
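&lt;p&gt;In toy form, for the discharge scenario above (the field names and shapes are made up for illustration): the action carries a proof of the conditions it was approved under, and commit fails if the current chart no longer matches.&lt;/p&gt;

```python
def verify_and_commit(action: str, proof: dict, chart: dict) -> dict:
    """Commit only if every condition in the action's proof still holds."""
    for field, expected in proof["conditions"].items():
        if chart.get(field) != expected:
            return {"status": "blocked", "reason": f"{field} changed"}
    return {"status": "committed", "action": action}
```

&lt;p&gt;A late lab result changes the chart, the proof no longer matches, and the discharge is blocked rather than flagged for later review.&lt;/p&gt;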

&lt;p&gt;This is why PFC works across all domains.&lt;/p&gt;

&lt;p&gt;Because the problem is not the model.&lt;br&gt;
It is the lack of control between decision and execution.&lt;/p&gt;

&lt;p&gt;Healthcare.&lt;br&gt;
Finance.&lt;br&gt;
Infrastructure.&lt;br&gt;
Security.&lt;/p&gt;

&lt;p&gt;Different systems.&lt;br&gt;
Same failure point.&lt;/p&gt;

&lt;p&gt;PFC governs that moment.&lt;/p&gt;

&lt;p&gt;The moment where a decision becomes reality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.primeformcalculus.com" rel="noopener noreferrer"&gt;https://www.primeformcalculus.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aigovernance</category>
      <category>ai</category>
      <category>healthcareai</category>
      <category>patientsafety</category>
    </item>
    <item>
      <title>I built a system that stops AI actions before they execute</title>
      <dc:creator>Dan Evans</dc:creator>
      <pubDate>Tue, 24 Mar 2026 20:49:24 +0000</pubDate>
      <link>https://dev.to/dan_evans_1/i-built-a-system-that-stops-ai-actions-before-they-execute-1bd4</link>
      <guid>https://dev.to/dan_evans_1/i-built-a-system-that-stops-ai-actions-before-they-execute-1bd4</guid>
      <description>&lt;p&gt;Most AI systems today don’t just generate text. They actually take actions. They send emails, trigger workflows, move data, and even change infrastructure. And almost all of them follow the same pattern. The model decides, the action executes, and only then do you deal with what happened.&lt;/p&gt;

&lt;p&gt;That always felt backwards to me.&lt;/p&gt;

&lt;p&gt;I kept running into situations where AI wasn’t just suggesting things. It was doing things. Calling APIs, kicking off jobs, modifying state. And the only safeguards in place were logs, retries, or alerts after the fact. Which means if something goes wrong, it has already happened.&lt;/p&gt;

&lt;p&gt;So I started thinking about what it would look like to move control earlier in the process.&lt;/p&gt;

&lt;p&gt;Instead of letting AI act directly, every action gets evaluated first. Before anything executes, the system checks whether the action is allowed, whether it matches policy, and whether it is within scope. It returns a simple result: allow or deny. If it is denied, the action never happens.&lt;/p&gt;

&lt;p&gt;That one shift changes a lot. You are no longer reacting to bad outcomes. You are preventing them. Decisions become consistent, explainable, and enforceable across systems instead of scattered across logs and edge cases.&lt;/p&gt;

&lt;p&gt;This matters most in places where actions are hard to undo. Financial operations, infrastructure changes, and automated workflows all fall into this category. Once those happen, rollback is often messy or incomplete.&lt;/p&gt;

&lt;p&gt;I could not find a clean way to do this, so I built a small API around the idea. Evaluate first, execute second.&lt;/p&gt;
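&lt;p&gt;The core of &#8220;evaluate first, execute second&#8221; fits in a few lines. This is a minimal sketch of the pattern, not the actual API; the policy here is a hypothetical example.&lt;/p&gt;

```python
def evaluate(action: dict, policies) -> str:
    """Return "allow" only if every policy passes; otherwise "deny"."""
    return "allow" if all(p(action) for p in policies) else "deny"


def gated_execute(action: dict, policies, execute):
    """Evaluate first, execute second; a denied action never runs."""
    if evaluate(action, policies) == "deny":
        return {"executed": False, "verdict": "deny"}
    return {"executed": True, "verdict": "allow", "result": execute(action)}


def within_scope(action: dict) -> bool:
    # hypothetical policy: amounts above 500 are out of scope
    return 500 >= action.get("amount", 0)
```

&lt;p&gt;The important property is that &lt;code&gt;execute&lt;/code&gt; is only ever reached through the gate, so a deny is a guarantee, not a log entry.&lt;/p&gt;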

&lt;p&gt;Curious how others are thinking about this. Are you relying on safeguards after execution, or putting something in place before actions happen? &lt;a href="https://www.primeformcalculus.com" rel="noopener noreferrer"&gt;https://www.primeformcalculus.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>security</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
