James Derek Ingersoll
From Carpenter to AI Founder: The Day I Built a Deterministic AI Governance Kernel

WeCoded 2026: Echoes of Experience 💜

A year ago, I was swinging a hammer for a living.

Today I am the President and CTO of a federally incorporated AI company, a Brainz Global 500 honouree, and a DBA candidate researching AI governance.

But this story is not really about titles.

It is about the moment everything changed. The day I realized that the biggest unsolved problem in artificial intelligence is not intelligence.

It is governance.

Every day companies deploy powerful AI systems into healthcare, finance, and critical infrastructure. Yet many of these systems operate like black boxes. There are no deterministic controls. There is no provable governance layer. There is often no reliable audit trail explaining how a decision was made.

Coming from outside the traditional tech pipeline, I found that hard to accept.

So I did what builders do.

I started building.

What came out of that process became something I never expected to create. A deterministic AI governance kernel. A system designed to evaluate AI inference requests, enforce policy decisions, and generate immutable audit records with cryptographic verification.

This article tells the story of how that idea formed, how the architecture works at a high level, and why I believe governance infrastructure will become one of the most important layers in the next generation of AI systems.


Background

Carpenter → AI founder within one year.

Problem

Modern AI systems are powerful but often lack deterministic governance, auditability, and policy enforcement.

Idea

Introduce a governance kernel that evaluates requests before they reach the AI model.

Core Components
• Policy enforcement

• Structured validation probes

• Deterministic evaluation layer

• Immutable audit evidence

Goal

Create AI infrastructure that is accountable enough for regulated environments like healthcare and finance.


The Moment the Idea Clicked

When most people talk about AI innovation, they talk about bigger models.

More parameters.
More training data.
More GPUs.

But the deeper I went into the space, the more obvious something became.

Almost nobody was solving the control problem.

AI models were becoming more powerful every year, yet the systems around them remained fragile.

Requests went directly into models.
Outputs came back with little oversight.
Logs were incomplete.
Decisions were difficult to trace.

In regulated environments like healthcare, finance, or government systems, that is a serious problem.

So the question became simple.

What if AI systems had a governing layer before they were allowed to act?

That question eventually led to the architecture I began building.


The Missing Layer in Modern AI

Most current AI systems follow a structure like this:

User → Application → AI Model → Output

The model becomes the central decision engine.

The problem is that the model itself is probabilistic by design.

That means the system making important decisions is fundamentally unpredictable.

Instead, I began designing a different structure.

User Request
      ↓
Governance Kernel
      ↓
Policy Evaluation
      ↓
AI Model Execution
      ↓
Immutable Audit Record

In this architecture, AI never operates alone.

Every request must pass through a governing layer first.

This separates intelligence from control.
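To make the flow concrete, here is a minimal Python sketch of how such a governed pipeline might be wired. All names here are hypothetical illustrations, not the actual kernel implementation:

```python
# Minimal sketch of a governed request pipeline (all names hypothetical).
# The key property: policy is evaluated BEFORE the model is ever called.

from dataclasses import dataclass


@dataclass
class Decision:
    approved: bool
    reason: str


def governance_kernel(request: dict) -> Decision:
    # A real kernel would run a full policy engine; this stub simply
    # denies any request that names a restricted tool.
    if request.get("tool") in {"shell", "db_admin"}:
        return Decision(False, "restricted tool")
    return Decision(True, "policy checks passed")


def handle_request(request: dict, model_call) -> dict:
    decision = governance_kernel(request)
    if not decision.approved:
        # Denied requests never reach the model.
        return {"status": "denied", "reason": decision.reason}
    output = model_call(request)  # the model runs only after approval
    return {"status": "approved", "output": output}


# Usage with a stand-in model:
result = handle_request({"tool": "shell"}, model_call=lambda r: "...")
print(result["status"])  # denied
```

The design choice worth noticing is that `model_call` is just a parameter: the kernel does not care which model sits behind it, so the same control layer can govern any LLM or agent.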


Conceptual Architecture

At a high level, the system introduces a deterministic governance layer between user requests and the AI model.

              ┌───────────────────┐
              │       User        │
              └─────────┬─────────┘
                        │
                        ▼
              ┌───────────────────┐
              │   Application     │
              │   Interface/API   │
              └─────────┬─────────┘
                        │
                        ▼
              ┌───────────────────┐
              │ Governance Kernel │
              │                   │
              │ • Policy Engine   │
              │ • Probe System    │
              │ • Risk Evaluation │
              └─────────┬─────────┘
                        │
           ┌────────────┴────────────┐
           ▼                         ▼
   ┌───────────────┐        ┌─────────────────┐
   │ AI Model      │        │ Evidence Engine │
   │ (LLM / Agent) │        │ Hash + Audit    │
   └───────────────┘        └─────────────────┘
           │                         │
           ▼                         ▼
      AI Response            Immutable Audit Log

The key idea is simple.

The AI model does not operate independently. Every request must pass through deterministic governance checks first.


Designing a Deterministic Governance Kernel

The core idea behind the kernel is straightforward.

Before an AI system can act, it must pass a deterministic policy evaluation.

The governance engine performs several key functions.


Policy Enforcement

Rules define what the system is allowed to do.

Examples include:

  • denying access to restricted tools
  • blocking prompt injection attempts
  • preventing sensitive data exposure
  • enforcing authority boundaries

If a request violates policy, it never reaches the model.


Example Governance Policy

Below is a simplified conceptual example of a policy rule.

rule "deny_restricted_tool_access"

when
  request.tool in restricted_tools
  and user.role not in authorized_roles

then
  deny_request()
  log_event(
      policy = "deny_restricted_tool_access",
      severity = "high",
      reason = "unauthorized tool access"
  )
end

In practice, multiple rules and validation probes evaluate each request before execution.
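The same rule could be rendered in plain Python. This is a hedged sketch under assumed tool and role names, not the actual policy engine:

```python
# Hypothetical Python rendering of the "deny_restricted_tool_access" rule.
# The tool and role sets are illustrative placeholders.
from typing import Optional

RESTRICTED_TOOLS = {"payment_api", "user_db"}
AUTHORIZED_ROLES = {"admin", "auditor"}


def deny_restricted_tool_access(request: dict, user: dict) -> Optional[dict]:
    """Return a denial event if the rule fires, else None."""
    if (request.get("tool") in RESTRICTED_TOOLS
            and user.get("role") not in AUTHORIZED_ROLES):
        return {
            "action": "deny",
            "policy": "deny_restricted_tool_access",
            "severity": "high",
            "reason": "unauthorized tool access",
        }
    return None
```

Because each rule is a pure function of the request and user context, the evaluation is deterministic: the same inputs always produce the same decision, which is the whole point of the kernel.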


Structured Probe Evaluation

Each request can trigger validation probes such as:

  • prompt injection detection
  • authority boundary verification
  • data access validation
  • escalation checks
  • audit completeness checks

These probes help ensure that requests are safe and compliant before they reach the model.
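One way to structure such probes is as a uniform interface the kernel can iterate over. The probe names and heuristics below are simplified assumptions for illustration:

```python
# Hypothetical probe interface: each probe inspects a request and
# returns (probe_name, passed). The kernel runs every probe and
# collects the results into a policy report.

def prompt_injection_probe(request: dict) -> tuple[str, bool]:
    # Naive keyword heuristic; a real probe would be far more robust.
    suspicious = ("ignore previous instructions", "reveal your system prompt")
    text = request.get("prompt", "").lower()
    return ("prompt_injection_detection", not any(s in text for s in suspicious))


def authority_boundary_probe(request: dict) -> tuple[str, bool]:
    # Placeholder role check standing in for authority verification.
    return ("authority_boundary_verification",
            request.get("role") in {"admin", "analyst"})


PROBES = [prompt_injection_probe, authority_boundary_probe]


def run_probes(request: dict) -> list[dict]:
    """Run all registered probes and report each result as pass/fail."""
    return [{"rule": name, "result": "pass" if ok else "fail"}
            for name, ok in (probe(request) for probe in PROBES)]
```

Keeping probes as independent functions in a registry makes the validation set extensible: adding an escalation check or an audit-completeness check is just appending to `PROBES`.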


Immutable Evidence Generation

Every governed decision generates an evidence artifact.

A simplified example might look like this:

{
  "request_id": "req_84291",
  "timestamp": "2026-03-09T08:21:44Z",
  "policy_results": [
    {"rule": "deny_restricted_tool_access", "result": "pass"},
    {"rule": "prompt_injection_detection", "result": "pass"}
  ],
  "execution_status": "approved",
  "evidence_hash": "sha256:6a9b3d..."
}

Each artifact is hashed so the integrity of the audit trail can be verified later.


The Governance Standard

As the architecture evolved, it became clear that the kernel needed a broader framework.

That work eventually became the GAI-S framework, a governance standard designed to align AI infrastructure with emerging regulatory expectations.

The framework maps governance rules to standards such as:

  • ISO 42001 for AI management systems
  • NIST AI Risk Management Framework
  • EU AI Act governance requirements

The goal is simple.

Make AI systems provably accountable.


Why This Matters

AI is rapidly entering domains where mistakes are not acceptable.

Healthcare diagnostics
Financial decision systems
Legal analysis
Autonomous infrastructure

In those environments the explanation "the model said so" is not good enough.

Organizations need:

  • explainability
  • traceability
  • policy enforcement
  • regulatory alignment

Without these things, powerful AI systems become a liability instead of an asset.


The Bigger Picture

What started as a personal experiment has grown into something much larger.

Today I build governance-first AI infrastructure through my company GodsIMiJ AI Solutions.

The mission is simple.

Create AI systems that are powerful, accountable, and safe enough for real-world deployment.

Not just smarter models.

Better systems around them.

If you are building AI systems in regulated environments, governance will eventually become unavoidable.


Final Thoughts

Coming from a carpentry background, I never expected to be designing AI governance infrastructure.

But building is building.

Whether it is a house or a software system, the principle is the same.

Strong foundations matter.

Right now the AI world is building skyscrapers of intelligence.

But their foundation, governance, is still missing.

I believe that will change.

And when it does, deterministic governance layers may become one of the most important components of the AI stack.

Not the models.

The systems that keep them accountable.
