DEV Community

AI-Driven Decision Making: How Algorithms Outperform Boards

By Dirk Roethig | CEO, VERDANTIS Impact Capital | March 3, 2026

Companies with AI-savvy boards outperform their peers by nearly 11 percentage points in return on equity. An MIT study confirms what many executives are reluctant to hear: in defined decision contexts, algorithms produce better judgments than humans. What this means for corporate governance -- and how forward-thinking boards combine human and machine intelligence into superior decision systems.

"Companies with AI-savvy boards outperform their peers by nearly 11 percentage points in return on equity." — **Dirk Roethig*, CEO of VERDANTIS Impact Capital

Tags: AI Decision Making, Algorithms, Corporate Governance, Strategic Leadership, Artificial Intelligence

The Uncomfortable Truth About Human Decision Making

Every day, billions of dollars are moved by decisions resting on a surprisingly fragile foundation: the human brain. This organ, evolutionarily optimised for survival on the savannah, is poorly equipped for the demands of modern corporate governance. It is prone to confirmation bias, guided by emotional states, systematically overestimates its own competence, and is susceptible to groupthink.

This is not a critique of individual executives. It is the sober assessment of decades of cognitive science. Nobel laureate Daniel Kahneman demonstrated in his landmark research that even highly qualified experts judge inconsistently in structured decision situations in ways that are genuinely alarming (Kahneman et al., 2021). What does this mean for organisations whose strategic directions depend on human governance bodies?

The answer that research and practice are increasingly providing is not simple. But it is clear: artificial intelligence, when deployed correctly, does not merely compensate for structural weaknesses in human decision-making; it can systematically overcome them.

The MIT Finding: 10.9 Percentage Points of Outperformance

The most compelling evidence for the value of AI-competent governance bodies comes from the MIT Center for Information Systems Research. In a study of companies whose boards have actively developed digital and AI competence, Weill and Woerner (2025) found an average lead of 10.9 percentage points in return on equity compared to industry peers. Companies without AI-savvy boards lagged an average of 3.8 percentage points below their industry average.

The total spread is 14.7 percentage points. This figure is not trivial. It corresponds to the difference between a successful and a failing investment portfolio, between a growing and a shrinking business.

Particularly instructive is the distribution across sectors. While information services (68 per cent) and professional services (52 per cent) already show a high proportion of AI-savvy boards, healthcare lags at a mere 8 per cent. Mining (4 per cent), construction (6 per cent), and retail-automotive (11 per cent) trail furthest behind (Weill and Woerner, 2025). In these sectors, the competitive advantage for early adopters remains largely unclaimed.

What explains this performance gap? The MIT finding suggests it is not solely about technological capability. The critical factor is whether governance bodies ask the right questions, can critically contextualise algorithmic recommendations, and actively redesign their own decision architecture.

Where Algorithms Structurally Outperform Humans

Before addressing how humans and algorithms should collaborate, an honest examination of where machine systems genuinely outperform human cognition is warranted.

Consistency: Algorithms judge identical cases identically. An AI credit risk model assigns the same score to an applicant with identical metrics today as it does tomorrow. Humans do not. Research by Kahneman, Sibony, and Sunstein shows that experts in seemingly objective tasks render shockingly inconsistent judgments depending on time of day, weather, or the sequence in which cases are presented (Kahneman et al., 2021).
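
This consistency claim can be made concrete with a toy sketch. The model below is purely illustrative (the weights and the `credit_score` function are invented for this example, not a real scoring system): because the function is deterministic, the same applicant metrics always produce the same score, regardless of when the assessment runs.

```python
import math

def credit_score(income: float, debt_ratio: float, late_payments: int) -> float:
    """Toy logistic credit model; weights are illustrative, not a real system."""
    z = 0.00005 * income - 3.0 * debt_ratio - 0.8 * late_payments + 1.5
    return 1.0 / (1.0 + math.exp(-z))  # estimated probability of repayment

applicant = dict(income=60_000, debt_ratio=0.35, late_payments=1)
today = credit_score(**applicant)
tomorrow = credit_score(**applicant)
assert today == tomorrow  # identical metrics, identical score -- every time
```

A human assessor, by contrast, introduces noise: the "function" they apply shifts with fatigue, mood, and case ordering, which is exactly the inconsistency Kahneman and colleagues documented.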

Processing large datasets: The human brain can hold roughly seven units of information in working memory at once (Miller, 1956). Modern AI systems process millions of variables in real time. When analysing market data, supply chain signals, or customer behavioural patterns, this superiority is both quantitative and qualitative.

Eliminating emotional distortions: Confirmation bias -- the tendency to interpret information in ways that confirm existing beliefs -- is one of the most dangerous cognitive traps in executive decision-making. A systematic review of 156 empirical studies from 2018 to 2024 shows that big data analytics and AI systems can demonstrably reduce this error by presenting decision-makers with explicitly contradictory data points (Khatri et al., 2025).

Pattern recognition across dimensions: Machine learning models identify correlations that remain invisible to human analysts -- not because humans are unintelligent, but because the human brain reaches its limits when processing more than three to four simultaneous variables.

The Paradox: Why More AI Sometimes Produces Worse Decisions

Here is a finding that frequently goes overlooked: AI does not automatically improve decisions. Under certain conditions, it worsens them.

A study published in Harvard Business Review involving nearly 300 executives found that those who used ChatGPT for stock price forecasting produced significantly more optimistic and less accurate predictions than a control group that consulted with peers (Riedl, 2025). The mechanism is subtle: the authoritative voice of the AI system created an excessive sense of assurance that suppressed critical counter-thinking.

This phenomenon -- called automation bias -- describes the tendency to follow algorithmic recommendations uncritically, even when personal expertise or contextual information suggests otherwise (Cummings, 2004). It is one of the central risk factors in integrating AI into decision processes.

Equally problematic: when AI models train on biased data, they can not only reproduce but systematically amplify existing injustices. Research on "compound human-AI bias" shows that overrepresented groups in training data can lead to a deterioration of human judgments over time (Bansal et al., 2025).

These findings are not arguments against AI. They are arguments for a thoughtful, competency-based deployment.

The Architecture of Superior Decision Systems

What does a decision system look like that optimally combines the strengths of human and algorithmic intelligence? McKinsey identifies in its analysis of "superagency" organisations a clear pattern: high-performance companies that attribute at least five per cent of their EBIT to AI redesign their decision architecture from the ground up (McKinsey, 2025).

Three principles characterise these organisations:

First: Clear separation by decision type. Not every decision benefits equally from AI support. High-frequency, structured decisions with clear success criteria -- credit assessment, inventory management, pricing, fraud detection -- are well suited to algorithmic systems. Strategic decisions requiring deep contextual knowledge, political dimensions, or ethical considerations demand human judgment, supported by data intelligence.

Second: Institutionalised dissent. Superior decision organisations build structural objection to AI recommendations. This might be a "red team" that systematically challenges algorithmic scenarios, or a decision protocol that explicitly requires documenting divergent signals. The goal is to structurally counteract automation bias.
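
Such a protocol can be sketched as a simple data structure. Everything here is hypothetical (the `DecisionRecord` class and its method names are invented for illustration): the point is that acceptance of an AI recommendation is structurally blocked until at least one divergent signal has been documented.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """Illustrative decision protocol enforcing documented dissent."""
    recommendation: str
    dissenting_signals: list = field(default_factory=list)
    accepted: bool = False

    def log_dissent(self, signal: str) -> None:
        self.dissenting_signals.append(signal)

    def accept(self) -> None:
        # Structural guard against automation bias: no dissent, no decision.
        if not self.dissenting_signals:
            raise ValueError("Document at least one divergent signal first.")
        self.accepted = True

record = DecisionRecord("Exit market X, per model forecast")
record.log_dissent("Regional sales team reports recovering demand")
record.accept()  # only possible once a structured objection is on file
```

The mechanism is deliberately crude; in practice the guard might require sign-off from a designated red team rather than a single logged signal.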

Third: Feedback loops with measurable outcomes. AI systems improve only when their predictions are consistently benchmarked against actual results. Companies that operate these feedback loops systematically see their algorithmic decision quality improve steadily over time.
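
One standard way to run such a benchmark is a proper scoring rule. The sketch below uses the Brier score (mean squared error between predicted probabilities and realised 0/1 outcomes, lower is better); the quarterly numbers are invented to illustrate the loop, not drawn from any cited study.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Each quarter: log what the model predicted and what actually happened,
# then compare scores across retraining cycles (illustrative data).
q1 = brier_score([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 1])
q2 = brier_score([0.95, 0.1, 0.8, 0.6], [1, 0, 1, 1])
assert q2 < q1  # the retrained model is measurably better calibrated
```

Boards do not need to read the code; they need to insist that a number like this exists, is tracked per decision domain, and is reported alongside the recommendations themselves.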

What Kazakhstan's Sovereign Wealth Fund and S&P 500 CEOs Share

In October 2025, Kazakhstan's national wealth fund appointed "SKAI" as a voting director -- an AI system. This decision reflects a broader conviction expressed in a survey of 500 global CEOs: 94 per cent believe AI could offer better counsel than at least one of their current board members (Cxotoday, 2025).

This figure is less an expression of enthusiasm for technology than of sober self-assessment. Boards consist of humans with limited capacities, specific blind spots, and a natural self-interest in maintaining consensus. An algorithmic system does not share these limitations. It does not sleep poorly before important decisions, is not dominated by strong personalities, and brings no personal career interests into the vote.

At the same time, McKinsey's analysis shows that even among companies deploying AI, only 39 per cent report measurable EBIT improvement -- and in most cases it is less than five per cent (McKinsey, 2025). The majority adopts AI without fundamentally redesigning the decision architecture. This is like installing a Formula 1 engine while keeping road tyres.

The Implementation Gap in Mid-Market Business

The mid-market business community faces a paradoxical situation. On one hand, awareness has grown: 88 per cent of companies use AI in at least one business function (McKinsey, 2025). On the other, only a third has begun scaling AI at the enterprise level.

What is missing is not technology. What is missing is a leadership philosophy that treats algorithmic decision support as a strategic asset, not an IT project. This distinction determines whether AI investments generate returns or fade into irrelevance.

From my advisory work at VERDANTIS Impact Capital, I see this pattern regularly: companies purchase expensive AI systems without redefining the decision processes those systems are meant to support. The result is frustration on both sides -- the technology underdelivers because the human side of the system was not redesigned.

A Governance Framework for Algorithmic Decisions

For companies seeking to strategically introduce AI-supported decision-making, a structured governance framework addressing three levels is advisable:

Strategy: The identification of AI opportunities and risks belongs within board competency. Which decision processes offer the greatest improvement potential? What risks arise from algorithmic systems? This requires no technical expertise -- but it requires the willingness to ask intelligent questions.

Defence: Cybersecurity, regulatory compliance, and ethical guardrails for algorithmic systems must be institutionally anchored. The EU AI Act creates binding requirements from 2026 to which companies must begin responding today.

Oversight: Value creation through AI must be measurable. Boards that do not manage AI through concrete metrics forfeit their primary steering mechanism.

The California Management Review (2025) has developed an "AI Governance Maturity Matrix" for this purpose, offering boards a structured path from basic AI awareness to genuine algorithmic competence. This is a useful starting point for companies seeking to systematically develop their governance capabilities.

Conclusion: The Next Evolutionary Stage of Corporate Leadership

The question is not whether AI can improve decisions. The evidence is clear: in defined decision contexts, it does. The real question is whether governance bodies are willing to critically re-examine and redesign their own decision architecture.

Companies that anchor AI competence in their boards demonstrably achieve superior results -- not because algorithms are smarter than humans, but because they can decide more consistently, more quickly, and free from cognitive distortion. The art lies in harnessing this strength without undermining the human judgment that remains indispensable for context, ethics, and strategic creativity.

The 10.9-percentage-point advantage is no accident. It is the result of a deliberate decision to augment human intelligence with algorithmic precision. Those who do not make that decision are ceding the field to those who already have.


References

  • Bansal, G. et al. (2025). Compound Human-AI Bias: How Algorithmic Recommendations Shape Human Judgment. ACM Conference on Human Factors in Computing Systems.
  • California Management Review (2025). AI Governance Maturity Matrix: A Roadmap for Smarter Boards. Haas School of Business, University of California Berkeley.
  • Cummings, M.L. (2004). Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 1st Intelligent Systems Technical Conference.
  • Cxotoday (2025). What Happens When AI Sits on Company Boards? CXO Today Special Reports.
  • Kahneman, D., Sibony, O. and Sunstein, C.R. (2021). Noise: A Flaw in Human Judgment. Little, Brown Spark, New York.
  • Khatri, V. et al. (2025). Cognitive Bias Mitigation in Executive Decision-Making: A Data-Driven Approach Integrating Big Data Analytics, AI, and Explainable Systems. Electronics, 14(19), 3930.
  • McKinsey & Company (2025). The State of AI: How Organizations Are Rewiring to Capture Value. McKinsey Global Institute, March 2025.
  • McKinsey & Company (2025). The AI Reckoning: How Boards Can Evolve. McKinsey Technology, December 2025.
  • Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81–97.
  • Riedl, C. (2025). Research: Executives Who Used Gen AI Made Worse Predictions. Harvard Business Review, July 2025.
  • Weill, P. and Woerner, S.L. (2025). AI-Savvy Boards Drive Superior Performance. MIT Sloan Management Review / MIT CISR, March 2025.

About the Author

Dirk Roethig is CEO of VERDANTIS Impact Capital, advising companies at the intersection of technology and sustainable value creation. With over 20 years of international executive experience, he combines strategic thinking with practical AI expertise. His focus areas include digital transformation, impact investing, and the question of how algorithmic systems can improve human decision quality over the long term.

Contact: LinkedIn | VERDANTIS Impact Capital




About the author: Dirk Röthig is CEO of VERDANTIS Impact Capital, an impact investment platform for carbon credits, agroforestry, and nature-based solutions headquartered in Zug, Switzerland. His work focuses on AI in business, sustainable agriculture, and demographic challenges.

Contact and further articles: verdantiscapital.com | LinkedIn


Also available on: Hashnode | WordPress | Tumblr | Blogger | Telegraph
