AI decision systems break without strategy stability

Most developers use LLMs for content generation.
Some use them for analysis.

Things get tricky when we ask LLMs to support decisions.


The problem shows up fast

Ask an LLM a decision question twice.
Keep the conditions the same.

You’ll often get different answers.

This isn’t a bug.
It’s a design issue.

The decision state was never fixed.

So every request rebuilds the decision context from scratch.

That’s fine for brainstorming.
It’s a problem for decision systems.
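You can see the failure mode with a rough sketch like the one below. It is not the demo's code: ask_llm stands in for whatever model client you use, ideally with fixed sampling settings. Nothing pins the decision state down, so each call re-imagines the situation.

# Sketch only: `ask_llm` is a placeholder for your model call.
QUESTION = (
    "We have a sales opportunity with changing requirements, no clear "
    "decision maker, a tight timeline, and limited resources. "
    "Should we aggressively pursue it?"
)

def show_drift(ask_llm, runs: int = 3) -> None:
    # No explicit state is fixed, so each run can rebuild the context
    # differently and the recommended strategy can flip between runs.
    answers = [ask_llm(QUESTION) for _ in range(runs)]
    for i, answer in enumerate(answers, 1):
        print(f"run {i}: {answer[:80]}")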


What decision systems actually need

In traditional software systems, one rule is obvious:

If state doesn’t change, decisions shouldn’t change.

Without this property, you can’t:

  • backtest outcomes
  • audit behavior
  • automate execution
  • rely on results

LLMs don’t violate this rule by default.
We violate it by how we use them.
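To make the rule concrete, here is a minimal sketch of a decision treated as a pure function of explicit, frozen state. The type and field names are mine, not from the demo: identical state in, identical decision out.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the state cannot be mutated after creation
class OpportunityState:
    requirements_stable: bool
    decision_maker_known: bool
    timeline_tight: bool
    resources_limited: bool

def decide(state: OpportunityState) -> str:
    # Pure function of state: no hidden inputs, no randomness.
    risk_flags = [
        not state.requirements_stable,
        not state.decision_maker_known,
        state.timeline_tight,
        state.resources_limited,
    ]
    return "conservative" if sum(risk_flags) >= 2 else "aggressive"

state = OpportunityState(False, False, True, True)
assert decide(state) == decide(state)  # same state, same decision

This is the property to preserve when an LLM sits where decide() does.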


A minimal experiment

I built a small demo to test one thing only:

Strategy stability under unchanged conditions

No optimization.
No extra intelligence.
Just constraints.


Input

There is a sales opportunity with these conditions:

- Customer requirements frequently change
- No clear decision maker
- Tight timeline
- Limited available resources

Question:
Should this opportunity be aggressively pursued?

Output (first run)

Strategy:
Do not aggressively pursue. Use a conservative approach.

Reasons:
- Requirements are unstable
- Decision authority is unclear
- Time and resource constraints increase risk

Action:
Maintain basic communication, limit upfront investment,
re-evaluate only if conditions change.

Output (same input, repeated)

Strategy:
Conditions unchanged. Strategy remains conservative.

Note:
Re-evaluate only if key conditions change.

The strategy does not drift.
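The demo's exact prompt isn't reproduced here, but the interaction pattern can be sketched like this, assuming an ask_llm wrapper around your model API (ideally with deterministic sampling): the conditions are serialized the same way every time and passed as explicit, fixed state.

import json

CONDITIONS = {
    "requirements": "frequently change",
    "decision_maker": "none identified",
    "timeline": "tight",
    "resources": "limited",
}

PROMPT = """Decision state (fixed; do not infer anything beyond it):
{state}

Question: Should this opportunity be aggressively pursued?
Derive the strategy only from the state above.
If the state is unchanged from a prior answer, keep the strategy unchanged."""

def build_prompt(conditions: dict) -> str:
    # Canonical serialization: identical conditions -> identical prompt.
    return PROMPT.format(state=json.dumps(conditions, sort_keys=True, indent=2))

def run_experiment(ask_llm, runs: int = 2) -> list[str]:
    # Same frozen state, same prompt, repeated calls.
    prompt = build_prompt(CONDITIONS)
    return [ask_llm(prompt) for _ in range(runs)]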


Why this works

The model didn’t become smarter.

The interaction pattern changed:

  • Conditions were explicit
  • Strategy was treated as a function of state
  • Explanation followed strategy

Once state is frozen,
the model stops guessing.
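One way to make "frozen state" operational (a sketch under my own naming, not the demo's implementation) is to key the strategy on a hash of the canonical state, so the model is only consulted again when the state actually changes.

import hashlib
import json

_strategy_cache: dict[str, str] = {}

def state_key(state: dict) -> str:
    # Canonical serialization -> stable hash for identical state.
    blob = json.dumps(state, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def strategy_for(state: dict, ask_llm) -> str:
    # Ask the model only when the decision state changes;
    # otherwise reuse the strategy already derived for this state.
    key = state_key(state)
    if key not in _strategy_cache:
        _strategy_cache[key] = ask_llm(json.dumps(state, sort_keys=True))
    return _strategy_cache[key]

This turns stability from a hope about model behavior into a property of the system.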


This applies beyond sales

Any AI-assisted decision workflow needs this:

  • risk evaluation
  • go / no-go decisions
  • operational planning
  • approval flows
  • policy enforcement

If strategy can drift without state change,
you don’t have a decision system — just a text generator.


A practical takeaway

If you want to use LLMs for decisions:

  1. Make decision state explicit
  2. Freeze state before asking for strategy
  3. Treat strategy stability as a requirement, not an optimization

Bigger models won’t fix this.

Better interaction design will.
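If stability is a requirement, it belongs in your test suite. A minimal pytest-style sketch, again with assumed helpers (ask_llm and build_prompt, provided here as fixtures), might look like this:

FROZEN_STATE = {
    "requirements": "frequently change",
    "decision_maker": "none identified",
    "timeline": "tight",
    "resources": "limited",
}

def extract_strategy(answer: str) -> str:
    # Hypothetical: collapse free text into a coarse strategy label.
    return "conservative" if "conservative" in answer.lower() else "aggressive"

def test_strategy_stable_under_unchanged_state(ask_llm, build_prompt):
    prompt = build_prompt(FROZEN_STATE)
    strategies = {extract_strategy(ask_llm(prompt)) for _ in range(3)}
    assert len(strategies) == 1, f"strategy drifted across runs: {strategies}"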


If you’re building tools or agents on top of LLMs,
this constraint is worth enforcing early —
before “AI decision-making” becomes a liability.

