Chandan Galani

I’m building a deterministic policy firewall for AI systems — looking for technical feedback

I’ve been working on a small but opinionated system and would love technical feedback from people who’ve dealt with AI in regulated or high-risk environments.

The core idea is simple:

AI systems can propose actions.

Something else must decide whether those actions are allowed to execute.

This project is not about “understanding intent” perfectly.

Intent normalization is deliberately lossy; it may come from regex, an LLM, or upstream systems.

The invariant is a deterministic policy layer that:

  • blocks unsafe or illegal execution
  • fails closed when inputs are ambiguous
  • produces a tamper-evident audit trail

Think of it as an execution firewall or control plane for AI agents.
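
To make that concrete, here is a minimal sketch of the behavior I mean: deterministic rule evaluation, fail-closed handling of anything ambiguous or unrecognized, and a hash-chained (tamper-evident) audit log. The names (`Decision`, `AuditLog`, `evaluate`) and the specific rules are illustrative only, not the actual API in the repo:

```python
import hashlib
import json
import time
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"  # ambiguous or borderline -> human review
    BLOCK = "block"


@dataclass
class AuditLog:
    """Append-only log; each entry hashes the previous one, so edits are detectable."""
    entries: list = field(default_factory=list)
    _last_hash: str = "0" * 64

    def append(self, record: dict) -> None:
        record = {**record, "prev_hash": self._last_hash, "ts": time.time()}
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._last_hash = digest


def evaluate(action: dict, audit: AuditLog) -> Decision:
    """Deterministic policy check for a proposed action.
    Anything not explicitly recognized fails closed (never ALLOW)."""
    required = {"kind", "amount", "actor"}
    if not required.issubset(action):
        decision = Decision.ESCALATE   # ambiguous input -> fail closed
    elif action["kind"] == "loan_approval" and action["amount"] > 50_000:
        decision = Decision.BLOCK      # hard policy limit (illustrative threshold)
    elif action["kind"] in {"loan_approval", "refund"}:
        decision = Decision.ALLOW
    else:
        decision = Decision.ESCALATE   # unknown action kinds are never allowed

    audit.append({"action": action, "decision": decision.value})
    return decision
```

The property that matters is that no branch falls through to ALLOW by default: missing fields and unknown action kinds both fail closed.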

I’ve tested it across:

  • fintech (loan approvals, AML-style constraints)
  • healthtech (prescription safety, controlled substances, pregnancy)
  • legal (M&A, antitrust thresholds)
  • insurance, e-commerce, and government scenarios

These tests included unstructured natural-language inputs.

This is early-stage and intentionally conservative.

False positives escalate; false negatives are unacceptable.
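
In terms of the sketch above, that means an ambiguous request surfaces as an escalation in the audit trail rather than silently executing or silently disappearing (again, hypothetical names):

```python
audit = AuditLog()

# Missing "amount": the policy layer cannot prove the action is safe, so it fails closed.
print(evaluate({"kind": "loan_approval", "actor": "agent-7"}, audit))
# -> Decision.ESCALATE

# The escalation itself is recorded in the tamper-evident log.
print(audit.entries[-1]["decision"], audit.entries[-1]["hash"][:12])
```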

Repo: https://github.com/LOLA0786/Intent-Engine-Api

I’m not looking for product feedback; I’m mainly after architectural criticism:

  • Where does this break down?
  • What would you challenge if you were deploying this?
  • What’s missing at the execution boundary?

Happy to clarify assumptions.
