deltax

A non-decision protocol for human–AI systems with explicit stop conditions

I’m sharing a technical note proposing a non-decision protocol for human–AI systems.

The core idea is simple:

AI systems should not decide. They should clarify, trace, and stop — explicitly.

The protocol formalizes:

  • Human responsibility as non-transferable
  • Explicit stop conditions
  • Traceability of AI outputs
  • Prevention of decision delegation to automated systems

This work is positioned as a structural safety layer rather than a model, a policy, or a governance framework.
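
To make the shape of such a layer concrete, here is a minimal, hypothetical sketch in Python. The names (`run_non_decision_step`, `ProtocolResult`, the `Outcome` enum) and the clarification heuristic are my own illustration, not the protocol defined in the archived document. The point it tries to show: the AI-facing layer can ask for clarification, record a trace, or stop explicitly, but it exposes no decision field, so the decision and the responsibility stay with a named human.

```python
# Hypothetical sketch of a non-decision protocol wrapper.
# Names and heuristics are illustrative, not the archived protocol itself.

from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum, auto
from typing import Callable, List, Optional


class Outcome(Enum):
    CLARIFY = auto()   # the AI asks the human for more information
    STOP = auto()      # an explicit stop condition was met
    HANDOFF = auto()   # everything is traced; the human decides


@dataclass
class TraceRecord:
    """Append-only record so every AI output can be traced back."""
    timestamp: str
    prompt: str
    output: str


@dataclass
class ProtocolResult:
    outcome: Outcome
    trace: List[TraceRecord]
    note: str
    # Deliberately no `decision` field: decisions cannot be delegated
    # to this layer; responsibility stays with the named human.
    responsible_human: str


def run_non_decision_step(
    prompt: str,
    model: Callable[[str], str],
    stop_conditions: List[Callable[[str], bool]],
    responsible_human: str,
    trace: Optional[List[TraceRecord]] = None,
) -> ProtocolResult:
    trace = trace if trace is not None else []
    output = model(prompt)
    trace.append(TraceRecord(datetime.now(timezone.utc).isoformat(), prompt, output))

    # Explicit stop conditions are checked first and halt the step loudly.
    for condition in stop_conditions:
        if condition(output):
            return ProtocolResult(Outcome.STOP, trace, "stop condition met", responsible_human)

    # Placeholder heuristic: treat a trailing question as a request for clarification.
    if output.strip().endswith("?"):
        return ProtocolResult(Outcome.CLARIFY, trace, output, responsible_human)

    # Otherwise hand the traced material to the human, who decides.
    return ProtocolResult(Outcome.HANDOFF, trace, "review and decide", responsible_human)


if __name__ == "__main__":
    fake_model = lambda p: "Options traced: A or B. Which constraint matters more?"
    result = run_non_decision_step(
        prompt="Should we roll back the deployment?",
        model=fake_model,
        stop_conditions=[lambda out: "irreversible" in out.lower()],
        responsible_human="on-call engineer",
    )
    print(result.outcome, "->", result.note)
```

The design choice the sketch tries to capture is structural: delegation is prevented by the absence of any decision-returning interface, not by a policy that asks operators to refrain from relying on one.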

The full document is archived with a DOI on Zenodo:
https://doi.org/10.5281/zenodo.18100154

I’m interested in feedback from people working on:

  • AI safety
  • Human-in-the-loop systems
  • Decision theory
  • Critical system design

This is not a product and not a startup pitch — just a protocol-level contribution.
