Deepfakes, Disinformation and Digital Ethics: AI Risks Every CEO Must Know
By Dirk Roethig | CEO, VERDANTIS Impact Capital | March 3, 2026
Deepfake fraud cost companies over $1.1 billion in 2025. A single employee wired $25 million after a video call with a synthetic "CFO". What CEOs must know about AI risks, digital ethics and the EU AI Act -- before they become the next target.
The Day the CFO Was Not the CFO
It was an ordinary workday in Hong Kong. An experienced finance controller at British engineering firm Arup joined a video call. On screen: his CFO. Familiar colleagues. Normal meeting atmosphere. The instruction: execute 15 wire transfers totalling 200 million Hong Kong dollars -- approximately $25.6 million US.
The controller executed the transfers. Only days later, after checking with the UK head office, did he learn the truth: none of the video call participants had been real. The CFO, the colleagues, the entire conference round -- all AI-generated deepfakes, deceptively authentic in both image and voice (CNN, 2024).
This case is not a dystopian scenario from a thriller. It is documented reality. And it is symptomatic of a threat landscape that escalated dramatically throughout 2025.
The Numbers: What Deepfakes Cost Today
The macroeconomic dimension of AI-enabled fraud has crossed a threshold that business leaders can no longer afford to ignore.
According to analysis by DeepStrike, deepfake fraud incidents in the United States alone reached $1.1 billion in damages in 2025 -- more than three times the previous year's figure. Globally, documented losses attributable to deepfakes have already reached $1.56 billion (Surfshark Research, 2025). Projections from the Deloitte Center for Financial Services show that AI-enabled fraud losses will grow to $40 billion annually by 2027 -- at a compound annual growth rate of 32 percent.
Even more alarming is the velocity of proliferation: from 500,000 deepfakes in 2023 to over eight million in 2025 -- a sixteenfold increase (DeepStrike, 2025). In Germany, the deepfake fraud rate surged by 1,100 percent in Q1 2025 compared to the prior-year period (Fraunhofer ISI, 2025).
CEO fraud has become a mass phenomenon: at least 400 companies per day are targeted with CEO fraud variants. Voice cloning fraud -- the deceptively accurate simulation of an executive over the telephone -- rose by 680 percent within a single year (Keepnet Labs, 2026).
Why Companies Are So Vulnerable
The frightening truth behind these numbers lies not in the sophistication of attackers alone. It lies in the unpreparedness of defenders.
A Gartner survey of 302 cybersecurity leaders reveals: 43 percent reported at least one deepfake incident in audio calls, 37 percent in video conferences (Gartner, 2025). Yet corporate response remains alarmingly weak: only 13 percent of companies worldwide have implemented anti-deepfake protocols. Only 32 percent of executives believe their organization is even prepared to handle a deepfake incident.
A further structural problem: 25 percent of executives have little or no familiarity with deepfakes whatsoever (Programs.com, 2026). Those who do not know a threat exists cannot guard against it.
The democratization of the technology has reduced entry barriers for attackers to near zero. Voice and facial synthesis tools that three years ago required expensive specialist expertise are today accessible for a few dollars per month. What was once the preserve of state actors or highly organized cybercriminals is now the tool of the average fraudster.
Disinformation as a Strategic Business Risk
Deepfakes are merely one manifestation of a larger phenomenon: the systematic use of AI to produce and distribute disinformation. For businesses, the consequences extend well beyond direct financial fraud.
Reputation deepfakes: Fabricated videos depicting a CEO making controversial statements, announcing false strategies, or simulating scandalous behavior. Even when a fake is exposed within hours, the reputational damage can cost millions -- and send stock prices tumbling.
Market manipulation through AI-generated disinformation: Fake press releases, synthetic "leaks" about product defects, or staged regulatory decisions can be deployed deliberately to manipulate share prices. The US Securities and Exchange Commission has issued repeated warnings about this development (SEC, 2025).
Employee targeting via synthetic identities: HR professionals report job applications in which video interviews were conducted by synthetic persons -- in some cases, applicants were fully onboarded before being exposed as hackers seeking access to corporate networks.
Political and regulatory interference: In an era of stakeholder capitalism, companies are increasingly targets of politically motivated disinformation campaigns. Fabricated documents suggesting corruption or environmental violations can trigger regulatory investigations, destroy partnerships and deter investors.
UNESCO has described this development as a fundamental crisis of epistemic trust -- an erosion of the collective capacity to distinguish truth from fabrication (UNESCO, 2024).
The Legal Framework: What the EU AI Act Requires
Europe's response to this threat is the most comprehensive AI regulatory framework in the world. The EU AI Act, in force since August 2024, enters full application for most enterprise requirements on 2 August 2026.
For CEOs, the transparency obligations are particularly significant:
Article 50(2): Providers of generative AI systems must ensure that AI-generated outputs are marked in a machine-readable format, detectable as artificially generated or manipulated.
Article 50(4): Enterprises deploying AI systems to create deepfakes are required to explicitly label such content as synthetic -- except in legally authorized exceptions such as law enforcement.
The consequences of non-compliance are substantial: up to €35 million or 7 percent of global annual revenue -- whichever is higher. Additionally, the EU has created a new criminal offense for the unauthorized dissemination of AI-generated deepfakes, punishable by one to five years imprisonment (EU AI Act, Article 50; Blackbird.AI, 2025).
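What "marked in a machine-readable format" means in practice is still being settled; the Act does not prescribe a specific technical standard, and industry efforts such as the C2PA content-provenance specification are the likely direction of travel. As a purely illustrative sketch (not a compliant implementation), the core idea can be reduced to binding an explicit AI-generated disclosure to the exact bytes of a media file -- here via a hypothetical JSON sidecar:

```python
import hashlib
import json
from datetime import datetime, timezone

def write_provenance_sidecar(media_path: str, generator: str) -> dict:
    """Attach a machine-readable 'AI-generated' label to a media file.

    Illustrative only: the EU AI Act prescribes no format. A real
    deployment would use an emerging standard such as C2PA rather
    than this hypothetical sidecar layout.
    """
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    manifest = {
        "content_sha256": digest,   # binds the label to this exact file
        "ai_generated": True,       # the Article 50(2) disclosure itself
        "generator": generator,     # which system produced the content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

    # Write the manifest next to the media file as <name>.provenance.json
    with open(media_path + ".provenance.json", "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```

The design point survives any change of format: the label must travel with the content and be verifiable against it, so that stripping or forging it is detectable downstream.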
In December 2025, the European Commission published a first draft Code of Practice for labeling AI-generated content. The final version is expected in June 2026 (Kirkland & Ellis, 2026). Companies that have not yet begun building compliance structures are under acute time pressure.
Digital Ethics as a Leadership Responsibility
The technical and regulatory dimensions of the problem are one thing. The other is more fundamental: what ethical responsibility does executive leadership bear in an era of synthetic realities?
In my advisory work at VERDANTIS Impact Capital, I repeatedly encounter the same misperception: digital ethics is treated as a departmental matter -- delegated to compliance, IT, or a newly appointed Chief AI Officer. In fact, it is an intrinsic leadership responsibility.
KPMG's 2025 AI Governance study finds that the decisive success factors for responsible AI are not technical but cultural. First and foremost is the posture of executive leadership: Does the board define clear ethical guardrails? Does an AI Governance Board exist? Are responsibilities unambiguously assigned? (KPMG, 2025)
43 percent of DAX-40 corporations have now appointed dedicated AI ethics officers. Among mid-market companies, such roles remain rare. And it is precisely there -- in organizations with flatter hierarchies, less specialized teams and tighter resources -- that governance gaps are largest.
The BVDW has formulated six core ethical principles for AI deployment: fairness, transparency, explainability, data protection, security and robustness (BVDW, 2025). These principles are not bureaucratic checklists. They are the foundation upon which trust -- in technology, in companies, in institutions -- is built.
What CEOs Must Do: A Five-Point Framework
The analysis of the threat landscape and the requirements of the EU AI Act yields a practical framework for business leaders:
1. Build a threat inventory. Which communication processes in the organization rely on trust in voice or image? Where are wire transfer instructions issued by phone or video? Which executives could be targets of impersonation? This inventory is the starting point for every protective measure.
2. Introduce verification protocols for critical transactions. The Arup case was preventable. A simple protocol -- every transfer above a defined threshold requires counter-confirmation via a second, verified channel -- would have sufficed. Such protocols cost little yet prevent million-dollar losses.
3. Make employee training mandatory. Only 34.3 percent of Germans know what deepfakes even are (Fraunhofer ISI, 2025). Employees who cannot recognize deepfakes are the weakest link in the defensive chain. Regular simulations -- similar to phishing tests -- for all staff with financial access are not optional but obligatory.
4. Build AI governance structures. An AI Governance Board, clear guidelines for internal AI use, a process for evaluating new AI applications -- these structures simultaneously prepare organizations for EU AI Act compliance and reduce the risk of unintended ethical violations.
5. Create an EU AI Act compliance roadmap. The transparency obligations must be fulfilled by August 2026 at the latest. Companies deploying generative AI need a system inventory, risk assessment and implementation plan now. Those who wait until the deadline approaches will miss it.
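Point 2's dual-channel rule is simple enough to express as policy logic. The sketch below is a minimal illustration, not a payments system: the threshold figure and channel names are invented for the example, and a real control would sit inside the firm's treasury workflow.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative threshold only -- each firm sets its own limit.
CALLBACK_THRESHOLD_USD = 50_000

@dataclass
class TransferRequest:
    amount_usd: float
    requested_via: str            # channel the instruction arrived on, e.g. "video_call"
    confirmed_via: Optional[str]  # second channel used for call-back, e.g. "known_phone"

def may_execute(req: TransferRequest) -> bool:
    """Approve only if below threshold, or confirmed on a different,
    pre-verified channel than the one the instruction came in on."""
    if req.amount_usd < CALLBACK_THRESHOLD_USD:
        return True
    return (
        req.confirmed_via is not None
        and req.confirmed_via != req.requested_via
    )
```

Under this rule, the Arup transfers -- instructed over a video call with no independent call-back -- would have been blocked, which is precisely why such protocols cost little yet prevent million-dollar losses.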
The Paradox of Trust Erosion
There is a deeper dimension to this topic that extends beyond corporate risk. Deepfakes and AI-generated disinformation undermine not only trust in individual pieces of content. They undermine trust in institutions, in democratic processes, in the very reliability of perception itself.
LSE International Development writes of a "Deepfake Blindspot in AI Governance" -- a dangerous gap between the pace of technological development and the capacity of regulation and society to keep pace (LSE, 2025). This gap has consequences that far exceed individual fraud cases.
In the economy of trust, CEOs are not passive observers. They are active shapers. Companies that demonstrate transparency proactively, visibly embody ethical AI governance, and equip their employees and customers to recognize synthetic content build a competitive advantage that is difficult to replicate: trust capital.
As I argued in my analysis of AI transformation in business, technological change is always also cultural change. That is true of the opportunities of AI -- and it is equally true of its risks.
Conclusion: Ethical AI Leadership Is Not a Luxury
The message is unambiguous: deepfakes are not a future threat. They are the present. Eight million deepfakes in 2025. $1.1 billion in damage in the United States alone. 400 CEO fraud attacks per day. And a corporate landscape where 87 percent of firms have no anti-deepfake protocols.
The EU AI Act creates binding parameters from August 2026. But regulation alone does not protect organizations. What protects them is the combination of technical safeguards, trained employees, clear governance and -- fundamentally -- executive leadership that understands digital ethics as a core responsibility.
The question every CEO should answer today is not: "Could my company be the target of a deepfake attack?" The answer to that is: yes. The relevant question is: "What do we do when it happens -- and what are we doing to ensure the damage remains minimal?"
Those who do not answer this question today will answer it tomorrow under considerably less favorable circumstances.
References
- Blackbird.AI (2025). Deepfake Detection Now Required Under EU AI Act Rules. Blackbird.AI Research.
- BVDW (2025). Six Ethical Principles for the Development and Use of AI. Bundesverband Digitale Wirtschaft.
- CNN (2024). Arup revealed as victim of $25 million deepfake scam involving Hong Kong employee. CNN Business, 16 May 2024.
- DeepStrike (2025). Deepfake Statistics 2025: The Data Behind the AI Fraud Wave. DeepStrike Research Report.
- Deloitte Center for Financial Services (2025). AI-Enabled Fraud: Projections to 2027. Deloitte Insights.
- EU AI Act (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council. Article 50: Transparency Obligations. Official Journal of the European Union.
- Fraunhofer ISI (2025). Deepfakes: Opportunities and Risks for Politics, Business and Society. Fraunhofer Institute for Systems and Innovation Research.
- Gartner (2025). Why CIOs Cannot Ignore the Rising Tide of Deepfake Attacks. Gartner Newsroom, 2 September 2025.
- Keepnet Labs (2026). Deepfake Statistics & Trends 2026: Key Data & Insights. Keepnet Security Research.
- Kirkland & Ellis (2026). Illuminating AI: The EU's First Draft Code of Practice on Transparency for AI-Generated Content. Kirkland Alert, February 2026.
- KPMG (2025). AI Governance: The Key Success Factors. KPMG Germany.
- LSE International Development (2025). The Deepfake Blindspot in AI Governance. London School of Economics Blog, 4 December 2025.
- Programs.com (2026). The Latest Deepfake Facts & Statistics (2026). Programs.com Research.
- SEC (2025). AI, Deepfakes, and the Future of Financial Deception. Statement of Perry Carpenter, KnowBe4, SEC, March 2025.
- Surfshark Research (2025). AI Drives Deepfake Losses to $1.56 Billion. Surfshark Data Chart.
- UNESCO (2024). Deepfakes and the Crisis of Knowing. UNESCO Digital Regulation.
About the Author
Dirk Roethig is CEO of VERDANTIS Impact Capital and advises companies at the intersection of technology, sustainable value creation and digital resilience. With over 20 years of experience in international executive leadership, he combines strategic thinking with deep AI expertise. His focus areas include digital transformation, impact investing, and how organizations can convert technological risks into strategic opportunities.
Contact: LinkedIn | VERDANTIS Impact Capital