Enterprises increasingly rely on AI assistants to support research, procurement, product comparisons, competitive intelligence, and communication tasks. These systems are commonly assumed to behave like stable analysts: consistent, predictable, and aligned with factual sources. Our findings demonstrate that this assumption is incorrect.
Across 200 controlled tests involving GPT, Gemini, and Claude, we observe substantial instability:
- 61 percent of identical runs produce materially different answers
- 48 percent shift their reasoning
- 27 percent contradict themselves
- 34 percent disagree with competing models
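
As a rough illustration of the kind of run-to-run comparison behind these figures (not the study's actual harness), the sketch below repeats an identical prompt and counts materially different answers. The `query_model` wrapper, the text-similarity metric, and the 0.85 threshold are all assumptions introduced here for clarity.

```python
from typing import Callable, List
from difflib import SequenceMatcher

# Hypothetical wrapper around a vendor API; not part of the study's tooling.
QueryFn = Callable[[str], str]

def materially_different(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two answers as materially different when their textual
    similarity falls below an (assumed) threshold."""
    return SequenceMatcher(None, a, b).ratio() < threshold

def instability_rate(query_model: QueryFn, prompt: str, runs: int = 10) -> float:
    """Fraction of repeated identical runs whose answer differs
    materially from the first run's answer."""
    answers: List[str] = [query_model(prompt) for _ in range(runs)]
    baseline = answers[0]
    diverging = sum(materially_different(baseline, a) for a in answers[1:])
    return diverging / max(len(answers) - 1, 1)

if __name__ == "__main__":
    # Stand-in model function used only to make the sketch runnable.
    import random
    def fake_model(prompt: str) -> str:
        return random.choice(["Vendor A is cheaper.", "Vendor B offers better terms."])
    print(f"instability: {instability_rate(fake_model, 'Compare vendors A and B'):.0%}")
```

In practice the comparison would use a task-appropriate notion of equivalence (numeric tolerance, entity matching, or human review) rather than raw string similarity; the point of the sketch is only the repeat-and-compare structure.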
This behaviour is structural, not incidental. It arises from silent model updates, a lack of stability thresholds, missing audit trails, and optimisation for plausibility rather than reproducibility.
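
To make the missing controls concrete, the sketch below shows one possible shape for a stability gate with an append-only audit trail at the enterprise integration layer. The record fields, the JSONL log, and the 90 percent agreement threshold are illustrative assumptions, not part of the framework proposed later in the paper.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import List

AGREEMENT_THRESHOLD = 0.90  # assumed policy value, not taken from the study

def audit_record(prompt: str, answers: List[str], agreement: float) -> dict:
    """Audit entry capturing what was asked, what came back,
    and whether the stability gate passed."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "answer_sha256s": [hashlib.sha256(a.encode()).hexdigest() for a in answers],
        "agreement": agreement,
        "passed_stability_gate": agreement >= AGREEMENT_THRESHOLD,
    }

def stability_gate(prompt: str, answers: List[str], agreement: float,
                   log_path: str = "ai_audit.jsonl") -> bool:
    """Append the audit record and signal whether the answer set is
    stable enough to act on without human review."""
    record = audit_record(prompt, answers, agreement)
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record["passed_stability_gate"]
```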
This paper presents the evidence, explains why the volatility cannot be resolved by model vendors, outlines the financial and regulatory consequences for enterprises, and proposes a governance framework for prevention and remediation. The analysis is designed for CFOs, CROs, GCs, CIOs, board members, and executive decision makers.