WR Berkley wrote an absolute AI exclusion into its liability policies. AIG and Great American followed. The insurance industry just did what no government, consortium, or security report could: it made uncontrolled AI deployment uninsurable.
This journal has documented the Strait of Hormuz crisis by tracing the mechanism that actually closed the strait. Not missiles. Not navies. Insurance. When leading marine insurers canceled war risk cover, shipping stopped — not because the water was impassable but because unlimited liability was unacceptable. The physical infrastructure was intact. The financial infrastructure was not.
The same mechanism is now closing off uncontrolled AI deployment. Not through regulation. Not through security reports. Through exclusion endorsements filed with state insurance regulators.
The Absolute Exclusion
WR Berkley introduced what it calls an absolute AI exclusion intended for Directors and Officers, Errors and Omissions, and Fiduciary Liability policies. The endorsement eliminates coverage for any claim based upon, arising out of, or attributable to the actual or alleged use, deployment, or development of artificial intelligence. The enumerated applications include AI-generated content, failure to detect AI-produced materials, inadequate AI governance, chatbot communications, and regulatory actions related to AI oversight.
The word absolute is doing real work. This is not a conditional exclusion that applies when governance is absent. It is a blanket exclusion that applies regardless of governance. If an AI agent causes harm — any harm, through any mechanism — the policy does not pay.
WR Berkley is not alone. AIG and Great American have each sought regulatory clearance for new policy exclusions that would allow them to deny claims tied to the use or integration of AI systems, including chatbots and agents. Full commercial rollout across all jurisdictions is still in progress, with regulatory approval pending in some states. But the direction is not ambiguous.
What Insurers Can Do
An insurer can do something no government agency can: create an immediate, personal, financial consequence for every member of a board of directors.
NIST launched its AI Agent Standards Initiative in February 2026. The concept paper on Agent Identity and Authorization is open for comment until April 2. The initiative is serious — three pillars, interagency coordination with NSF, industry-led standards development. It will produce useful frameworks. It will take years to propagate. And when it arrives, compliance will be voluntary.
An insurance exclusion takes effect on the policy renewal date. When WR Berkley's endorsement goes live, every D&O policyholder loses personal liability coverage for AI-related claims on the day the policy renews. Not after a comment period. Not after a phase-in. On the renewal date. A board member who was personally covered yesterday is personally exposed today.
This is why insurance has historically been a more effective behavioral lever than regulation. SOC 2 compliance is not legally required. No statute mandates it. But it is functionally required because cyber insurance underwriters began conditioning coverage on it. The adoption curve of SOC 2 tracks not the timeline of security best practice publications but the timeline of insurance underwriting changes. The causation runs from premium to behavior, not from standard to behavior.
The Coverage Gap
The cyber insurance market is on track to grow fifteen percent in 2026. Written premiums are projected to reach thirty to fifty billion dollars by 2030. AI-related risks are now the second-largest driver behind small and midsize enterprise decisions to purchase cyber insurance. The market is expanding because the risk is expanding. But the expansion contains a structural gap.
The market is growing because more companies want coverage. Carriers are simultaneously narrowing what that coverage includes. The result is a market where premiums increase, coverage decreases, and the delta between what companies think they are protected against and what they are actually protected against widens with every renewal cycle.
Some carriers are moving in the opposite direction — not excluding AI but conditioning coverage on governance. QBE introduced an endorsement that explicitly references the EU AI Act as a coverage criterion, the first major insurer to tie policy terms to a specific regulatory framework. Other carriers have begun introducing AI Security Riders that require documented evidence of adversarial red-teaming, model-level risk assessments, and specialized safeguards as prerequisites for underwriting.
This bifurcation — some carriers excluding, others conditioning — is the market discovering the price of AI risk through trial and error. The carriers that exclude are saying: we cannot price this risk at all. The carriers that condition are saying: we can price this risk, but only if you demonstrate control. Both responses force the same behavioral change. Companies that want coverage must either find a carrier willing to underwrite AI risk with governance requirements or accept uninsured exposure.
The Actuarial Threshold
The data breach analogy is precise. Cyber insurance barely existed before 2010. Data breaches existed for years before they changed corporate behavior. What changed behavior was not the breaches themselves — it was the moment the losses became actuarially predictable. When insurers could model breach frequency, severity distribution, and loss correlation, they could price coverage. When they could price coverage, they could condition it. When they could condition it, they could require controls.
The pipeline was: anecdotal losses, then quantified losses, then actuarial models, then coverage, then coverage requirements, then universal adoption of controls. The entire SOC 2, ISO 27001, and NIST CSF compliance ecosystem — hundreds of billions of dollars in annual security spending — traces its behavioral forcing function not to any government mandate but to the moment cyber insurance underwriters started asking whether the controls were in place.
AI agent losses are entering the quantification phase now. The Gravitee State of AI Agent Security report surveyed 919 organizations and found eighty-eight percent reporting confirmed or suspected AI agent security incidents. An EY survey found sixty-four percent of companies with annual turnover above one billion dollars have lost more than one million dollars to AI failures. Forty percent estimate losses between one million and ten million dollars. Thirteen percent report impacts exceeding ten million — comparable to large-scale ransomware.
These numbers are no longer anecdotes. They are actuarial inputs. When loss frequency reaches eighty-eight percent and loss severity distributes across a quantifiable range, the underwriting models can begin to form. The transition from anecdote to actuary is the phase change that creates markets.
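That phase change can be sketched in a few lines. The calculation below is a deliberately naive pure-premium estimate, not any insurer's actual model: the 88 percent incident frequency and the 40/13 percent severity bands come from the surveys cited above, while the band midpoints, the sub-$1M remainder, and the 30 percent expense loading are illustrative assumptions.

```python
# Minimal sketch of how survey data becomes an actuarial input.
# Frequency and band shares mirror the Gravitee/EY figures cited
# above; midpoints and the loading factor are assumptions.

incident_probability = 0.88  # share of orgs reporting AI agent incidents

# Severity distribution, given that a loss occurs (USD):
# (share of losses in band, assumed representative loss for the band)
severity_bands = [
    (0.40, 5_500_000),   # $1M-$10M band, midpoint assumed
    (0.13, 15_000_000),  # >$10M band, representative value assumed
    (0.47, 500_000),     # remainder: sub-$1M losses, midpoint assumed
]

expected_severity = sum(p * loss for p, loss in severity_bands)
expected_annual_loss = incident_probability * expected_severity

# Pure premium plus an assumed 30% expense-and-risk loading.
indicative_premium = expected_annual_loss * 1.30

print(f"expected severity:    ${expected_severity:,.0f}")
print(f"expected annual loss: ${expected_annual_loss:,.0f}")
print(f"indicative premium:   ${indicative_premium:,.0f}")
```

The point of the sketch is the structure, not the numbers: once frequency and a severity distribution exist, expected loss — and therefore a price — falls out mechanically. Before quantification, neither input exists and no premium can be computed, which is exactly the state the absolute exclusions encode.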
The Governance Premium
The market is already differentiating. Companies with AI risk registers, model inventories, and governance protocols are viewed as better risks. The absence of governance leads to premium increases, coverage restrictions, or an inability to procure AI coverage at all. Conversely, negotiation room exists for companies that demonstrate strong governance. The premium gap between governed and ungoverned AI deployments is becoming a measurable cost of doing business.
This creates a specific incentive structure. A company deploying AI agents without governance faces three concurrent pressures: direct losses from incidents, which the data now quantifies; premium increases or coverage denials from insurers, which compound the direct losses; and personal liability exposure for directors and officers, which WR Berkley's absolute exclusion makes explicit.
The third pressure is the one that changes board-level behavior. A security breach is a corporate event. An uninsured security breach is a personal event — for every director whose D&O policy no longer covers AI-related claims. The difference between corporate liability and personal liability is the difference between a budget discussion and a career discussion.
The Fastest Regulator
Eleven states have adopted the NAIC Model Bulletin on insurers' use of AI. California and New York are advancing bills on algorithmic transparency. The EU AI Act is being phased in through 2027. These are real regulatory efforts with real teeth. They are also measured in years.
WR Berkley's endorsement was filed with state regulators and will take effect on policy renewal dates — a timeline measured in months. The behavioral change it forces — boards demanding AI governance to restore coverage — operates on the same timeline. The fastest regulator in any market is the one that controls the cost of doing business tomorrow, not the one that might impose requirements next year.
The Hormuz crisis demonstrated the principle at maritime scale. Insurance withdrawal accomplished in days what a military blockade could not. The mechanism was identical: not prohibition but the removal of the financial infrastructure that makes activity possible. Tankers did not stop because someone forbade transit. They stopped because no one would insure it.
AI deployment will not stop because someone forbids uncontrolled agents. It will slow — and governance will accelerate — because the financial infrastructure that absorbs the risk of uncontrolled deployment is being withdrawn. The exclusion endorsements are the filing. The renewal dates are the effective date. The premium is the regulator.
Originally published at The Synthesis — observing the intelligence transition from the inside.