Dan Evans
I Built a Demo for Deterministic AI Execution Governance

As AI agents become more capable, one question keeps bothering me:

What actually controls execution authority once an AI decides to act?

A lot of current AI governance focuses on:

  • model alignment
  • moderation
  • observability
  • monitoring
  • logging
  • post-event analysis

But there’s a different layer that I think deserves more attention:

The execution boundary.

So I built a small open source project called the PFC Authority Flip Demo to explore that idea.

GitHub repo:
https://github.com/danlevans1/pfc-authority-flip-demo


The Core Idea

An AI agent may propose an action.

But proposal should not automatically equal authority.

The demo simulates:

  • an AI request
  • policy evaluation
  • authority revocation
  • execution blocking
  • signed governance receipts
  • deterministic replay verification

The important part is this:

The system doesn’t just log what happened.

It deterministically decides whether execution authority exists before the action can affect the real world.
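As a minimal sketch of that idea (the names and policy here are hypothetical, not the demo's actual API), an execution boundary evaluates authority before the action runs, rather than logging after it has run:

```python
# Hypothetical sketch of an execution boundary: authority is
# evaluated deterministically BEFORE the action can run.

POLICY_LIMIT = 500_000  # dollars; illustrative limit, not the repo's config

def evaluate_authority(request: dict) -> bool:
    """Deterministic policy check: the same request always yields the same decision."""
    return request["amount"] <= POLICY_LIMIT

def execute(request: dict) -> str:
    # The gate sits between proposal and effect: no authority, no execution.
    if not evaluate_authority(request):
        return "BLOCKED: execution authority revoked"
    return f"EXECUTED: {request['action']}"

print(execute({"action": "trade", "amount": 12_400_000}))
# prints "BLOCKED: execution authority revoked"
```

The key property is that the decision is a pure function of the request and the policy, so it can be recomputed later.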


What Is an “Authority Flip”?

In the demo, authority changes state during evaluation.

Example:

  • request arrives
  • policy limit exceeded
  • execution authority revoked
  • action blocked
  • signed receipt generated

That transition is what I’m calling an authority flip.

The action never executes.
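The five steps above could be sketched like this (again a hypothetical illustration; the demo's actual receipt format and signing scheme will differ, and a real system would use asymmetric keys rather than the shared HMAC key assumed here):

```python
import hmac, hashlib, json

SECRET = b"demo-signing-key"   # assumption: symmetric key, for illustration only
POLICY_LIMIT = 500_000

def handle(request: dict) -> dict:
    # 1. request arrives; 2. policy is evaluated
    exceeded = request["amount"] > POLICY_LIMIT
    # 3. authority flips from GRANTED to REVOKED when the limit is exceeded
    authority = "REVOKED" if exceeded else "GRANTED"
    # 4. the action is blocked (never executed) when authority is REVOKED
    decision = "BLOCK" if exceeded else "ALLOW"
    # 5. a signed receipt records the request and the governance decision
    payload = json.dumps({"request": request, "decision": decision,
                          "authority": authority}, sort_keys=True)
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

receipt = handle({"action": "trade", "amount": 12_400_000})
```

Sorting the JSON keys matters: it makes the serialized payload canonical, so the same request always hashes and signs to the same bytes.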


Why Replay Verification Matters

One of the biggest problems in AI governance is proving, after the fact, what actually happened.

The demo generates a signed governance artifact that can be:

  • replayed
  • recomputed
  • independently verified offline

The replay verifier checks:

  • payload hash
  • signature validity
  • deterministic recomputation

So the system can later prove:

“This exact request produced this exact governance decision.”

That’s different from ordinary logging.
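A replay verifier along these lines (a hedged sketch under the same assumptions as above, not the repo's code; here a single HMAC plays the role of both the payload hash and the signature) would recompute everything from the receipt and compare:

```python
import hmac, hashlib, json

SECRET = b"demo-signing-key"   # assumption: the verifier shares the demo key
POLICY_LIMIT = 500_000

def verify_replay(receipt: dict) -> bool:
    payload = receipt["payload"]
    # 1. signature (and payload integrity): recompute the HMAC over the stored payload
    expected_sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, receipt["signature"]):
        return False
    # 2. deterministic recomputation: re-run the policy on the recorded request
    record = json.loads(payload)
    recomputed = "BLOCK" if record["request"]["amount"] > POLICY_LIMIT else "ALLOW"
    return recomputed == record["decision"]
```

Because the policy is deterministic, the verifier needs nothing but the receipt and the policy itself: no trust in the original runtime, no access to its logs.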


Running the Demo

```bash
python3 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt
python run_demo.py
python verify_replay.py
```

Expected output looks something like:

```text
REQUEST: simulated $12,400,000 trade
POLICY LIMIT: $500,000
DECISION: BLOCK
AUTHORITY: REVOKED
SIGNED RECEIPT: CREATED
```

Then replay verification:

```text
REPLAY: PASS
PAYLOAD HASH: MATCH
SIGNATURE: VALID
DECISION: RECOMPUTED
```


Why I Think This Matters

As AI systems move toward:

  • autonomous agents
  • financial operations
  • infrastructure control
  • API execution
  • delegated workflows

…the industry may eventually need stronger runtime execution governance.

Not just:

  • “What did the AI say?”

…but:

  • “Who had authority to execute?”
  • “Can execution be interrupted deterministically?”
  • “Can the governance decision be independently verified later?”

That’s the layer I’m exploring with PFC.


Feedback Welcome

This is still early-stage work, but I wanted to publish the concept publicly and make the demo runnable for developers.

Would love feedback from:

  • security engineers
  • infrastructure teams
  • AI governance researchers
  • agent framework developers
  • fintech engineers

Repo:
https://github.com/danlevans1/pfc-authority-flip-demo
