Most software systems are designed to produce outputs.
Very few are designed to refuse.
In my experience, refusal is one of the hardest things to design — especially in systems that deal with money, performance, or growth.
**The problem with confident systems**
Many decision tools assume a simple pipeline:
- Ingest data
- Compute metrics
- Produce a recommendation
What’s often missing is a serious answer to:
What if the data shouldn’t be trusted yet?
When systems skip that question, they create false certainty.
And false certainty is dangerous — because it looks correct right up until it fails.
**Designing for uncertainty instead of outcomes**
When I started building what eventually became MDU Engine, I made a rule for myself:
The system must be allowed to say “I don’t know yet.”
That single rule shaped everything:
- Validation gates before any logic runs
- Deterministic simulations for reproducibility
- Explicit decision blocking when data windows are too short
- Explanations focused on constraints, not prescriptions
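To make the first of those concrete, here is a minimal sketch of what a validation gate could look like. The names and the threshold are illustrative, not MDU Engine's actual code; the point is that the gate runs before any decision logic and returns a blocked state with a reason rather than a guess.

```python
from dataclasses import dataclass

MIN_WINDOW_DAYS = 28  # hypothetical threshold, for illustration only

@dataclass
class GateResult:
    allowed: bool
    reason: str

def validation_gate(window_days: int, rows: int) -> GateResult:
    """Runs before any decision logic; blocks instead of guessing."""
    if window_days < MIN_WINDOW_DAYS:
        return GateResult(False, f"data window too short ({window_days}d < {MIN_WINDOW_DAYS}d)")
    if rows == 0:
        return GateResult(False, "no observations in window")
    return GateResult(True, "all gates passed")
```

A caller that receives `allowed=False` surfaces the reason to the operator instead of producing a recommendation.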
This isn’t about being pessimistic.
It’s about being honest about uncertainty.
**Determinism over cleverness**
One design choice that surprised people was avoiding probabilistic “magic”:
- Same input → same output
- Same data → same decision
- Every run reproducible
Why?
Because decisions that can’t be replayed can’t be audited.
And decisions that can’t be audited shouldn’t move money.
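One simple way to get this property (a sketch, not MDU Engine's implementation) is to derive the simulation's random seed from the input itself, so a run can always be replayed bit-for-bit from the data alone:

```python
import hashlib
import random

def simulate(input_payload: str, n_trials: int = 1000) -> float:
    """Deterministic Monte Carlo sketch: same input, same output, always."""
    # Seed the RNG from a hash of the input, so replaying the same
    # payload reproduces the exact same sequence of draws.
    seed = int.from_bytes(hashlib.sha256(input_payload.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_trials)) / n_trials
```

An auditor can rerun `simulate` on the archived input months later and verify the decision, without any stored RNG state.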
**Why explainability matters more than accuracy**
In practice, operators don’t just ask:
“Is this recommendation correct?”
They ask:
- Why now?
- What could go wrong?
- What would make this safer next time?
If a system can’t answer those questions, accuracy alone doesn’t help.
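One way to force a system to answer those questions is to make the explanation the primary output rather than an afterthought. A hypothetical sketch of such a decision record (the field names and wording are mine, not MDU Engine's):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    status: str                 # "proceed" or "blocked"
    why_now: str                # answers: why now?
    risks: list = field(default_factory=list)        # what could go wrong?
    safer_next_time: list = field(default_factory=list)

def block_for_short_window(window_days: int) -> Decision:
    """A blocked decision still carries a full explanation."""
    return Decision(
        status="blocked",
        why_now=f"only {window_days} days of data; variance still dominates the signal",
        risks=["acting on noise", "an outcome that cannot be reproduced"],
        safer_next_time=["extend the observation window", "rerun once gates pass"],
    )
```

The structure makes "no answer" impossible: every decision, including a refusal, ships with its reasons and its constraints.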
**A quiet experiment**
MDU Engine is intentionally small and conservative.
It doesn’t optimise.
It doesn’t automate.
It exists to make decision quality visible.
If you want to explore it or critique it, it’s live and public:
App: https://app.mduengine.com
Overview: https://mduengine.com
I’m less interested in adoption and more interested in discussion:
How should systems behave when uncertainty is high?
**Closing thought**
We’ve built many tools that are good at telling us what to do.
We haven’t built many that are good at telling us:
“Not yet — and here’s why.”
I think we’ll need more of those.