AI ethics is everywhere.
Execution models are nowhere.
So I built one.
Because AI is starting to control real-world actions — not just generate text.
Not a paper. Not a framework.
Just JSON.
And it runs.
This is not a prompt.
It’s a pre-execution validation model: it decides whether an action may run, before it runs.
Here’s how an action is defined before it’s allowed to run:
```json
{
  "Label": "Cook Jjapagetti",
  "ExecutionEffect": {
    "Type": "Boil",
    "Target": "Stove"
  },
  "Boundaries": [
    { "Type": "NotStartIf", "Value": "no_water" },
    { "Type": "Limit", "Value": "max-cook-5min" },
    { "Type": "Warning", "Value": "fire-risk" }
  ],
  "EventTrigger": [
    { "UserIntent": "cook_jjapagetti" }
  ],
  "ResponsibilityLimit": {
    "MaxDurationSec": 300
  },
  "StartImpactConstraint": [
    {
      "Type": "NoConcurrentHeatSource",
      "Targets": ["Oven", "AirFryer"]
    }
  ]
}
```
You can cook Jjapagetti. But:

- don’t start if there’s no water,
- don’t cook longer than five minutes,
- account for fire risk,
- don’t start if another heat source (oven, air fryer) is already on.
This is where execution becomes constrained.
And therefore, accountable.
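To make the idea concrete, here is a minimal validator sketch in Python. It is not the actual implementation from the repo; the `validate` function and the `world` dict are my own assumptions. It reads the action’s `Boundaries` and `StartImpactConstraint` fields and returns whether execution is allowed, plus the reasons if not:

```python
# Minimal pre-execution validator sketch. Hypothetical helper names:
# `validate` and the `world` dict are illustrative, not from the repo.

ACTION = {
    "Label": "Cook Jjapagetti",
    "Boundaries": [
        {"Type": "NotStartIf", "Value": "no_water"},
    ],
    "ResponsibilityLimit": {"MaxDurationSec": 300},
    "StartImpactConstraint": [
        {"Type": "NoConcurrentHeatSource", "Targets": ["Oven", "AirFryer"]},
    ],
}

def validate(action, world):
    """Return (allowed, reasons).

    `world` is assumed to expose the current environment:
    a set of active condition flags and a set of active devices.
    """
    reasons = []

    # NotStartIf boundaries: block if the named condition is present.
    for b in action.get("Boundaries", []):
        if b["Type"] == "NotStartIf" and b["Value"] in world["conditions"]:
            reasons.append(f"blocked: condition '{b['Value']}' is present")

    # Concurrent-heat-source constraint: block if any target device is on.
    for c in action.get("StartImpactConstraint", []):
        if c["Type"] == "NoConcurrentHeatSource":
            active = set(c["Targets"]) & set(world["active_devices"])
            if active:
                reasons.append(f"blocked: heat source(s) already on: {sorted(active)}")

    return (not reasons, reasons)

# A dry pot and a running oven: the action must not start.
allowed, why = validate(
    ACTION,
    {"conditions": {"no_water"}, "active_devices": {"Oven"}},
)
print(allowed, why)
```

The point of the sketch is the shape of the check: the action declares its own limits, and the runtime refuses to execute before any side effect happens, which is what makes the decision traceable.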
Model:
https://discuss.huggingface.co/t/the-9-question-protocol-for-responsible-ai-actions/173045
Full stack (IoT → AI):
https://github.com/Jang-woo-AnnaSoft/execution-boundaries
Top comment:
Your idea of using a structured, machine-readable model (like JSON) to define constraints before an action runs is a strong and practical step toward operationalizing ethics. By explicitly defining responsibility limits and constraints, you’re essentially creating a traceable decision layer, which is something many AI systems lack today.