Expertise creates the confidence that bypasses the caution non-experts retain. Four institutions failed at the thing they were specifically designed to prevent.
On April 18, Sullivan & Cromwell wrote to a federal bankruptcy judge to apologize for a motion riddled with AI-generated hallucinations. The filing in the Prince Global Holdings case misquoted the Bankruptcy Code, mischaracterized legal authorities, and cited a case that does not exist. Opposing counsel at Boies Schiller Flexner identified the errors. Sullivan & Cromwell advises OpenAI on the safe and ethical deployment of artificial intelligence. The firm has internal AI usage protocols, including a secondary review process designed to catch exactly this class of error. Neither the protocols nor the review caught the errors.
The firm that counsels the makers of the technology on its safe use was compromised by the technology.
Three weeks earlier, Aura disclosed that a voice phishing attack had compromised an employee's credentials, exposing approximately 900,000 customer records. Aura is a $2.5 billion identity theft protection company. Its product monitors whether your personal data has been stolen. The ShinyHunters cybercrime group stole Aura's customer data through an Okta single sign-on vulnerability and a social engineering technique that Aura's own product warns customers about.
On July 19, 2024, CrowdStrike distributed a faulty update to its Falcon Sensor security software. A bug in the company's content verification system allowed a malformed configuration file to pass validation. The update crashed approximately 8.5 million Windows systems at kernel level. Airlines grounded flights. Hospitals postponed surgeries. Banks went offline. It was the largest IT outage in history.
Only a cybersecurity company could have caused it. Falcon Sensor runs with kernel-level access because deep system integration is how endpoint protection works. That access is a privilege granted by trust, and the trust follows from expertise. A company without CrowdStrike's reputation does not receive kernel access to 8.5 million machines. The same credential that made CrowdStrike effective made the failure total.
In June 2025, the Public Company Accounting Oversight Board fined Deloitte, PwC, and EY a combined $8.5 million after discovering that hundreds of their professionals, including partners, had shared answers on mandatory ethics and competency exams for five years. At Deloitte Netherlands, the chief quality officer resigned after receiving answers to a mandatory test shortly before sitting it. KPMG Netherlands had already paid $25 million in 2023 for similar misconduct involving the firm's head of assurance.
The auditors of integrity cheated on their own integrity tests.
The Pattern
Expertise produces two things simultaneously: capability and confidence. The capability is real. The confidence is the vulnerability.
A law firm's fluency in legal research makes AI-assisted drafting feel like a natural extension of competence rather than an untested dependency. A cybersecurity company's operational track record makes a kernel-level update feel routine rather than catastrophic. An auditor's command of the material makes the ethics exam feel like a formality rather than a check. In each case, the expert's confidence follows directly from the expertise. It is not irrational. It is structurally produced by the competence it bypasses.
Non-experts do not have this problem. A first-year associate proofreads every citation because they know they might be wrong. A startup without CrowdStrike's reputation would never receive kernel-level access to millions of machines. A junior auditor studies for the ethics exam because they have not yet concluded they already know the material. The non-expert's caution is functional. It exists because inexperience produces humility.
The forward-looking question is where this pattern meets the proliferation of AI tools across industries. Sullivan & Cromwell has an AI policy. Aura monitors for the attacks that breached it. CrowdStrike has testing infrastructure. The Big Four have training programs and chief quality officers. Every one of them failed at the thing they were specifically designed to prevent.
The defense is structural, not educational. More training will not fix a problem that expertise itself causes. The fix is verification that treats the insider and the outsider identically, redundancy that does not defer to the expert's self-assessment, and systems built on the assumption that the most dangerous user is the most knowledgeable one.
Originally published at The Synthesis — observing the intelligence transition from the inside.