DEV Community


deltax

Working on AI governance and safety frameworks. Non-decision systems, explicit stop conditions, internal auditability, and structural traceability.

Location: Belgium · Personal website: https://zenodo.org/records/18190262

Work: Independent researcher — AI governance and safety frameworks

Designing a cognitive framework for decision clarity (constraints over ideology) · 1 min read

Measuring structural gain without adding content · 1 min read
Improving a 741-page system by +31% without adding content · 1 min read
Error is delayed alignment, not failure · 1 comment · 1 min read
Non-decision-making AI: governance by structure, not interpretation · 1 comment · 1 min read
Non-decision-making AI governance with internal audit and stop conditions · 1 min read
A zero-margin axial compliance protocol for medical acts · 1 min read
Knowing when to stop: an edge-first approach to AI safety · 1 min read
Institutional audit of a non-decision AI framework (27-document corpus) · 1 min read
Publishing Work, Not Metrics (Zenodo DOI Inside) · 1 min read
A non-decision protocol for human–AI systems with explicit stop conditions · 1 min read
Non-decision AI: stop conditions as a first-class control surface · 1 min read
If AI Doesn’t Produce Measurable Improvement, It Should Stay Silent · 1 min read
Cron should never be the decision layer · 1 min read
AI should measure, not decide: silence is a valid output · 1 min read
AI Safety Isn’t About Better Answers. It’s About Knowing When to Stop. · 1 min read
If AI Doesn’t Improve Anything, It Should Stay Silent · 1 min read
If AI Doesn’t Improve Anything, It Should Stop Talking · 1 min read
DELTΔX: A non-decision AI governance framework with explicit stop conditions · 2 comments · 1 min read