
Healthy Insurance Markets Will Be Critical for AI Governance

Cristian Trout
Wednesday, December 17, 2025, 2:52 PM
The question is not if insurers will play a role, but rather how to ensure they play a socially beneficial one.
OpenAI CEO Sam Altman speaks at TechCrunch Disrupt 2017 in San Francisco. (TechCrunch, https://flic.kr/p/XBdtys; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

Published by The Lawfare Institute in Cooperation With Brookings

An insurance market for artificial intelligence (AI) risk is emerging. Major insurers are taking notice of AI risks, as mounting AI-related losses hit their balance sheets. Some are starting to exclude AI risks from policies, creating opportunities for others to fill these gaps. Alongside a few specialty insurers, the market is frothing with start-ups—such as the Artificial Intelligence Underwriting Company (for which I work), Armilla AI, Testudo, and Vouch—competing to help insurers price AI risk and provide dedicated AI coverage.

How this fledgling insurance market matures will profoundly shape the safety, reliability, and adoption of AI, as well as the AI industry’s resilience. Will insurance supply meet demand, protecting the industry from shocks while ensuring victims are compensated? Will insurers enable AI adoption by filling the trust gap, or will third-party verification devolve into box-ticking exercises? Will insurers reduce harm by identifying and spreading best practices, or will they merely shield their policyholders from liability with legal maneuvering?

In a recent Lawfare article, Daniel Schwarcz and Josephine Wolff made the case for pessimism, arguing that “liability insurers are unlikely to price coverage for AI safety risks in ways that encourage firms to reduce those risks.”

Here I provide the counterpoint. I make the case, not for blind optimism, but for engagement and intervention. Synthesizing a large swathe of theoretical and empirical work on insurance, my new paper finds considerable room for insurers to reduce harm and improve risk management in AI. However, realizing this potential will require many pieces to come together. On this point, I agree with skeptics like Schwarcz and Wolff.

Before getting into the challenges and solutions though, it’s important to grasp some of the basic dynamics of insurance.

Insurance as Private Governance

Insurers are fundamentally in the business of accurately pricing and spreading risk, but not only that: They also manage that risk by monitoring policyholders, identifying cost-effective risk mitigations, and enforcing private safety standards. Indeed, insurers have often played a key role in the safe assimilation of new technologies. For example, when Philadelphia grew tenfold in the 1700s, multiplying the cost of fires, fire insurers incentivized brick construction, spread fire-prevention practices, and improved firefighter equipment. When electricity created new hazards, property insurers funded the development of standards and certifications for electrical equipment. When automobile demand surged after World War II, insurers funded the development of crashworthiness ratings and lobbied for airbag mandates, contributing to the 90 percent drop in deaths per mile over the 20th century.

Insurers play the role of private regulator not out of benevolence, but because of simple market incentives. There are four key dynamics to understand.

First, insurers want to make premiums more affordable in order to expand their customer base and seize market share. Generally, reducing risks is the most direct way to reduce premiums.

Second, insurers want to control their losses. Once insurers issue policies, they directly benefit from any further risk reductions. Encouraging policyholders to take cost-effective mitigations and monitoring them to ensure they don’t take excessive risks directly protects insurers’ balance sheets. Examples of this from auto insurance include safety training programs and telematics. The longer-term investments insurers make in safety research and development (R&D)—such as car headlight design—allow them to profit from predictable reductions in the sum and/or volatility of their losses. Insurance capacity—the amount of risk insurers can bear—is a scarce resource, ultimately limited by available capital: Highly volatile losses strain this capacity by requiring insurers to hold larger capital buffers, as the short simulation below illustrates.

Third, insurers want to be partners to enterprise. Risk management services (such as cybersecurity consulting) are often a key value proposition for large corporate policyholders, and they help insurers to differentiate themselves. Insurers can also enable companies to signal product quality and trustworthiness more efficiently, through warranties, safety certificates, and proofs of insurance. This is precisely what’s driving the boom in start-ups competing to provide insurance against AI risk: filling the large trust gap between (often young) vendors of cutting-edge AI technology and wary enterprise clients struggling to assess the risks of an unproven technology.

Fourth and finally, insurers seek “good risk.” Underwriting fundamentally involves identifying profitable clients while avoiding adverse selection (where insurers attract and misprice too many high-risk clients). This requires understanding the psychologies, cultures, and risk management practices of potential clients. For example, before accepting a new client, cyber insurance underwriters will make an extensive assessment of the client’s cybersecurity posture.

Insurers deploy various tools to achieve these aims: adherence to safety standards as a condition of coverage, risk-adjusted premiums rewarding safer practices, audits or direct monitoring of policyholders, and refusing to pay claims if the policyholder violated the terms of the contract (such as by acting with gross negligence or recklessness).
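To make the volatility point from the second dynamic concrete, here is a minimal simulation, using invented portfolio parameters rather than real actuarial figures, of how a book of business with volatile losses ties up far more capital than one with stable losses, even when both have the same expected annual losses.

```python
import numpy as np

# Illustrative sketch only: all parameters are assumptions, not real portfolio data.
rng = np.random.default_rng(0)
N = 100_000  # simulated underwriting years

# Two hypothetical portfolios with the same expected annual loss (~$100 million)
# but very different volatility.
stable_losses = rng.normal(loc=100e6, scale=10e6, size=N).clip(min=0)
volatile_losses = rng.lognormal(mean=np.log(100e6) - 0.5, sigma=1.0, size=N)

for name, losses in [("stable book", stable_losses), ("volatile book", volatile_losses)]:
    expected = losses.mean()
    one_in_200 = np.quantile(losses, 0.995)  # a 1-in-200-year annual loss
    buffer = one_in_200 - expected           # capital held above expected losses
    print(f"{name}: expected ≈ ${expected/1e6:.0f}M, capital buffer ≈ ${buffer/1e6:.0f}M")
```

Both books collect premiums to cover roughly the same expected losses, but the volatile one forces the insurer to set aside many times more capital against a bad year, which is why volatility itself eats into scarce insurance capacity.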

Are these tools effective, though? Does insurance uptake really reduce harm relative to a baseline where insurers are absent?

Moral Hazard vs. the Distorted Incentives of AI Firms

Skeptics of “regulation by insurance” point out that the default outcome of insurance uptake is moral hazard—that is, insureds taking excessive risk, knowing they are protected. From this angle, the efforts insurers make to regulate insureds are just a Band-Aid for a problem created by insurance.

These skeptics have a point: Moral hazard is a danger. Nevertheless, insurers can often improve risk management and reduce harm, despite moral hazard. My research finds this happens when the incentives for insureds to take care are already suboptimal: Insurance essentially acts as a corrective for many types of market failures.

Consider fire insurance again: Making a house fire-resistant protects not just that one house but also neighboring ones. However, individual homeowners don’t capture these positive externalities: They are underincentivized to make such investments in fire safety. By contrast, the insurer that covers the entire neighborhood (or even just most of it) captures much more of the total benefit from these investments. Insurers are thus frequently better placed to provide what are essentially public goods.

Are frontier AI companies such as OpenAI, Anthropic, or Google DeepMind sufficiently incentivized to take care? Common law liability makes a valiant attempt to ensure they are, but as I and others point out, it is not up to the task, for several reasons.

First, leading AI companies are locked in a winner-take-most race for what could quickly become a hundred-billion- or multitrillion-dollar market, creating intense pressure to prioritize increasing AI capabilities over safety. This is especially true for start-ups that are burning capital at extraordinary rates while promising investors extremely aggressive revenue growth.

Second, safety R&D suffers from a classic public goods problem: Each company bears the full cost of such R&D, but competitors capture much of the benefit through spillovers. This leads to chronic underinvestment in a wide range of open research questions, despite calls from experts and nonprofits.

Third, the prospect of an AI Three Mile Island creates a free-rider problem. Nuclear’s promise of abundant energy died for a generation after accidents such as Three Mile Island and Chernobyl fueled public backlash and regulatory scrutiny. Similarly, if one AI company accidentally causes an AI Three Mile Island, the entire industry would suffer. But while all AI companies benefit from others investing in safety, each prefers to free ride.

Fourth, a large enough catastrophe or collapse in investor confidence will render AI companies “judgment-proof”—that is, insolvent and unable to pay the full amount of damages for which they are liable. Victims (and/or taxpayers) will be left to foot the bill, essentially subsidizing AI companies’ risk-taking.

Fifth is the lack of mature risk management in the frontier AI industry. A wealth of research finds that individuals and young organizations systematically neglect low-probability, high-consequence risks. This is compounded by the overconfidence, optimism, and “move fast and break things” culture typical of start-ups. Also likely at work is a winner’s curse: The AI company most willing to race ahead is probably the one that most underestimates the tail risks.

Insurance uptake helps correct these misaligned incentives by involving seasoned stakeholders who don’t face the same competitive dynamics, are required by law to carry substantial capital reserves for tail risks, and, again, are better placed to provide public goods.

Admittedly, history proves these beneficial outcomes are possible, not a given. There are still further challenges that skeptics rightly point to and which must be overcome if insurance is to be an effective form of private governance. I turn to these next.

Pricing Dynamic Risk

It is practically a truism to say AI risk is difficult to insure given the lack of data on incidents and losses. This is distracting and misleading. It’s distracting because it’s trivially true. Every new risk has no historical loss data: That says nothing of how well or poorly insurers will eventually price and manage it. It’s misleading because compared to, say, commercial nuclear power risk when it first appeared, data on AI’s risks is intrinsically easier to acquire: Unlike nuclear power plants, it’s possible to stress-test live AI systems quite cheaply (known as “red-teaming”). Other key data points, such as the cost of an intellectual property lawsuit or public relations scandal, are simply already known to insurers.

The dynamic nature of AI risk is the warranted concern. Because the underlying technology is evolving so rapidly, insurers could struggle to get a handle on it: Information asymmetries between insurers and their policyholders (especially if the latter are AI developers) could remain large; lasting mitigation strategies will be difficult to identify; and the actuarial models that insurers traditionally rely on, which assume historical losses predict future ones, may not hold up.

This mirrors difficulties insurers faced with cyber risk, which stemmed from rapid technological evolution and intelligent adversaries adapting their strategies to thwart defenses. AI risk will include less of this adversarial element, at least where AI systems aren’t scheming against their creators.

Cyber insurers have recently started overcoming this information problem. Instead of relying solely on policyholders self-reporting their cybersecurity posture through lengthy, annual questionnaires, insurers now continuously scan policyholders’ vulnerabilities and security controls. This was enabled by so-called insurtech innovations and partnerships with major cloud service providers that already have access to much of the information on policyholders that insurers need. Insurers have also come to a consensus on mandating certain security controls, such as multi-factor authentication and endpoint detection, demonstrating that durable mitigations can be found.

For the AI insurance market to go well, insurers must learn the lessons of cyber. They must prepare from the start to use pricing and monitoring techniques, such as the aforementioned red-teaming, that are as adaptive as the technology they are insuring. They should also aim to simply raise the floor by mandating adherence to a robust safety and security standard before issuing a policy. Standardizing and sharing incident data will also be critical.

Even if insurers fail to price individual AI systems accurately, they can still help correct the distorted incentives of AI companies, as long as aggregate pricing is good enough. To illustrate: Pricing difficulties notwithstanding, aggregate loss ratios for cyber are well controlled, making it a profitable line of insurance. This speaks to the effectiveness of risk proxies such as company size, deployment scale, and economic sector. When premiums depend only on these factors, ignoring policyholders’ precautionary efforts, insurers lose a key tool for incentivizing good behavior. However, premiums will still track activity levels, a key determinant of how much risk is being taken. Excessive activity will be deterred by premium increases. Thus, even with crude pricing, by drawing large potential future damages forward, insurers can help put the brakes on the AI industry’s race to the bottom: The industry as a whole will be that much better incentivized to demonstrate its technology is safe enough to continue developing and deploying at scale.
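As a stylized illustration of this aggregate-pricing point (a sketch with invented numbers, not any insurer’s actual rating model), even a premium formula that ignores a policyholder’s individual precautions and depends only on crude proxies and activity level still makes additional risk-taking costlier as deployment scales:

```python
# Stylized premium based only on crude proxies and activity level.
# All figures are invented for illustration, not real actuarial inputs.

def annual_premium(expected_loss_per_unit: float,
                   activity_units: float,
                   sector_multiplier: float = 1.0,
                   target_loss_ratio: float = 0.6) -> float:
    """Premium = expected losses (scaled by a crude sector proxy) divided by
    the loss ratio the insurer targets in order to remain profitable."""
    expected_losses = expected_loss_per_unit * activity_units * sector_multiplier
    return expected_losses / target_loss_ratio

# A deployer that doubles its deployment scale sees its premium double,
# even though the insurer never observed its individual safeguards.
print(annual_premium(expected_loss_per_unit=0.002, activity_units=1_000_000))  # baseline
print(annual_premium(expected_loss_per_unit=0.002, activity_units=2_000_000))  # 2x activity
```

Doubling deployment doubles the premium, so expected future damages show up in today’s costs even when the insurer cannot observe a policyholder’s safeguards.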

Removing the Wedge Between Liability and Harm

For insurers covering third-party liability, lawyers are sometimes a safer investment than safety measures.

We’ve occasionally seen this dark pattern in cyber insurance: In response to incidents, some insurers provide lawyers who prevent outside forensics firms from sharing findings with policyholders, in order to avoid creating evidence of negligence. This actively hampers institutional learning. The risk of liability may decrease, but the risk of harm increases.

The only real remedy is policy intervention, in the form of transparency requirements and clearer assignment of liability. Breach notification laws and disclosure rules are successful examples in cyber: With less room to bury damning incidents or poor security hygiene, insurers and policyholders have refocused their efforts on mitigating harms.

California’s recently passed Transparency in Frontier Artificial Intelligence Act is therefore a step in the right direction. The act creates whistleblower protections and requires major AI companies to report to the government what safeguards they have in place. Even skeptics of regulation by insurance and proponents of a federal preemption of state AI laws recognize the value of such transparency requirements.

A predecessor bill that was vetoed last year would have taken this further by more clearly assigning liability to foundation model developers for certain catastrophic harms. The question of who to assign liability to has been discussed in Lawfare and elsewhere; at issue here is how it gets assigned. By removing the need to prove negligence, a no-fault liability regime for such catastrophes would eliminate legal ambiguity altogether, mirroring liability for other high-risk activities such as commercial nuclear power and ultra-hazardous chemical storage. This would focus insurer efforts on pricing technological risk and reducing harm, rather than pricing legal risk and shunting blame around.

Workers’ compensation laws from the 20th century were remarkably successful in this regard. The Industrial Revolution brought heavy machinery and, with it, a dramatic rise in worker injury and death. Once liability was clearly assigned to employers in the 1910s, though, insurers’ inspectors and safety engineers got to work bending the curve: Improvements in technology and safety practices produced a 50 percent reduction in injury rates between 1926 and 1945.

Catastrophic Risk: Greatest Challenge, Greatest Opportunity

Nowhere are the challenges and opportunities of this insurance market more stark than with catastrophic risks. Both experts and industry warn of frontier AI systems potentially enabling bioterrorism, causing financial meltdowns, or even escaping the control of their creators and wreaking havoc on computer systems. If even one of these risks is material, the potential losses are staggering. (For reference, the NotPetya cyberattack of 2017 cost roughly $10 billion globally; major IT disruptions such as the 2024 CrowdStrike outage cost some tens of billions globally; the coronavirus pandemic is estimated to have cost the U.S. alone roughly $16 trillion.)

Under business as usual, insurers face silent, unpriced exposure to these risks. Few are the voices sounding the alarm. We may therefore see a sudden market correction, similar to terrorism insurance post-9/11: After $32.5 billion in losses, insurers swiftly limited terrorism risk coverage or exited the market altogether. With coverage unavailable or prohibitively expensive, major construction projects and commercial aviation ground to a halt, since lenders often require borrowers to carry such insurance. The government was forced to stabilize the market, providing insurance or reinsurance at subsidized rates. It’s entirely possible an AI-related catastrophe could similarly freeze up economic activity if AI risks are suddenly excluded by insurers.

Silent coverage aside, insurers don’t have the risk appetite to write affirmative coverage for AI catastrophes. The likes of OpenAI and Anthropic already can’t purchase sufficient coverage, with insurers “balking” at their multibillion-dollar lawsuits for harms far smaller than those experts warn might come. Such supply-side failures leave both the AI industry and the broader economy vulnerable.

An enormous opportunity is also at stake here. Counterintuitively, it is precisely these low-probability, high-severity risks that insurers are well-suited to handle. Not because risk-pooling is very effective for such risks—it isn’t—but because, when insurers get serious skin in the game for such risks, they are powerfully motivated to invest in precisely the efforts markets are currently failing to invest in: forward-looking causal risk modeling, monitoring policyholders, and mandating robust safeguards. For catastrophic risks, these efforts are the only effective method for insurers to control the magnitude and volatility of losses.

Such efforts are on full display in commercial nuclear power. Insurers supplement public efforts with risk modeling, safety ratings, operator accreditation programs, and plant inspections. America’s nuclear fleet today stands as a remarkable achievement of engineering and management: Critical safety incidents have decreased by over an order of magnitude, while energy output per plant has increased, in no small part thanks to insurers.

Put another way, insurers are powerfully motivated to pick up the slack from poorly incentivized AI companies. The challenge of regulating frontier AI can be largely outsourced to the market, with the assurance that if risks turn out to be negligible, insurers will stop allocating so many resources to managing them.

Clearly delegating to insurers the task of pricing in catastrophic risk from AI also helps by simply directing their attention to the issue. My research finds that insurers price catastrophic risk quite effectively when they cover it knowingly, even when it involves great uncertainty. To reuse the example above, commercial nuclear insurance pricing was remarkably accurate at least as early as the 1970s, despite incredibly limited data. Insurers estimated the frequency of serious incidents at roughly 1-in-400 reactor years, which turned out to be within the right order of magnitude; the same can’t be said of the 1-in-20,000 reactor years estimate from the latest government report at the time.
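A back-of-the-envelope comparison, using an invented severity figure purely for illustration, shows how much rides on getting that frequency estimate within the right order of magnitude:

```python
# Back-of-the-envelope expected loss per reactor-year under two frequency estimates.
# The severity figure is an assumption chosen for illustration, not a historical number.
SEVERITY = 5_000_000_000  # assumed cost of a serious incident, in dollars

estimates = [
    ("insurer estimate (1-in-400 reactor-years)", 1 / 400),
    ("government estimate (1-in-20,000 reactor-years)", 1 / 20_000),
]

for label, annual_frequency in estimates:
    expected_annual_loss = annual_frequency * SEVERITY
    print(f"{label}: ~${expected_annual_loss:,.0f} expected loss per reactor-year")
```

The two estimates imply actuarially fair prices that differ by a factor of 50, the difference between coverage priced to discipline risk-taking and coverage that quietly subsidizes it.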

This suggests table-top exercises or scenario modeling—such as those mandated by the Terrorism Risk Insurance Program—are particularly high-leverage interventions. By simply surfacing threat vectors and raising the salience of catastrophe scenarios, these turn unknown unknowns into at least known unknowns, which insurers can work with.

Alerting insurers to catastrophic AI risk is not enough, however. They will simply write new exclusions, and the supply of coverage will be lacking or unaffordable. In response, major AI companies will likely self-insure through pure captives—that is, subsidiary companies that insure their parent companies. Fortune 50 companies such as Google and Microsoft already do this. Smaller competitors would be left out in the cold, exposed to risk or paying exorbitant premiums.

Pure captives also sacrifice nearly all potential for private governance here: They do nothing to solve the industry’s various legitimate coordination problems, such as preventing an AI Three Mile Island; and they lack sufficient independence to be a real check on the industry.

Mutualize: An Old Solution for a New Industry

To recap: Under business as usual, coverage for catastrophic AI risk will be priced all wrong and will face both supply and demand failures; yet this is precisely where the opportunity for private governance is greatest.

There is an elegant, tried-and-true solution to these problems: The industry could form a mutual, a nonprofit insurer owned by its policyholders. AI companies would be insuring each other, paying premiums based on their risk profiles and activity levels. Historically, it is mutuals that have the best track record of matching effective private governance with sustainable financial protection. They coordinate the industry on best practices, invest in public goods such as safety R&D, and protect the industry’s reputation through robust oversight, often leveraging peer pressure. Crucially, mutuals have sufficient independence from policyholders to pull this off: No single policyholder has a monopoly over the mutual’s board.

The government can encourage mutualization by simply giving its blessing, signaling that it won’t attack the initiative. In fact, the McCarran-Ferguson Act already shields insurers from much federal anti-trust law, though not overt boycott: The mutual cannot arbitrarily exclude AI companies from membership.

If mutualization fails and market failures persist, the government could take more aggressive measures. It could mandate carrying coverage for catastrophic risk, and more or less force insurers to offer coverage through a joint-underwriting company. Joint-underwriting companies are dedicated risk pools offering specialized coverage where it is otherwise unavailable. This intervention (or the threat of it) is the stick to the carrot of mutualization: Premiums would undoubtedly be higher and relationships more adversarial. Still, it would achieve policy goals. It would protect the AI industry from shocks, ensure victims are compensated, and develop effective private governance.

Whether a mutual or a joint-underwriting company, the idea is to create a dedicated, independent private body with both the leverage and incentives to robustly model, price, and mitigate covered risks. Even the skeptics of private governance by insurance agree that this works. Again, nuclear offers a successful precedent: Some of its risks are covered by a joint-underwriting company, American Nuclear Insurers; others, by a mutual, Nuclear Electric Insurance Limited. Both are critical to the overall regulatory regime.

Public Policy for Private Governance

Both skeptics and proponents of insurance as a governance tool agree: It won’t work without public policy nudges. This market needs steering. Light-touch interventions include transparency requirements, clearer assignment of liability, scenario modeling exercises, and facilitating information-sharing between stakeholders. Muscular interventions include insurance mandates and government backstops for excess losses.

Backstops, a form of state-backed insurance, make sense only for truly catastrophic risks. These are risks the government is always implicitly exposed to: It cannot credibly commit not to provide disaster relief or bailouts to critical sectors. Major AI developers may be counting on this. Instead of an ambiguous subsidy in the form of ad hoc relief, an explicit public-private partnership allows the government to extract something in return for playing insurer of last resort. Intervening on the insurance market has the benefit of avoiding picking winners or losers (in contrast to taking an equity stake in any particular AI firm).

A backstop also creates the confidence and buy-in the private sector needs to shoulder more risk than it otherwise would. This is precisely what the Price-Anderson Act did for nuclear energy, and the Terrorism Risk Insurance Act did for terrorism risk. Price-Anderson even generated (modest) revenue for the government through indemnification fees.

Major interventions require careful design, of course. Poorly structured mandates or backstops could simply prop up insurance demand, subsidize risk-taking, or create a larger moat for well-resourced firms. On the other hand, business as usual carries its own risks. It leaves the economy vulnerable to shocks, potential victims without a guarantee they will be made whole, and private governance to wither on the vine or, worse, to perversely pursue legal over technological innovation.

The stakes are high, then, and early actions by key actors—governments, insurers, underwriting start-ups, major AI companies—could profoundly shape how this nascent market develops. Nothing is prewritten: History is full of cautionary tales as well as success stories. Steering toward the good will require a mix of deft public policy, risk-taking, technological innovation, and good-faith cooperation.

Editor’s Note: The research this piece reports on was completed prior to Trout joining AIUC and was not financially supported by AIUC. The views expressed herein do not necessarily reflect those of AIUC.


Cristian Trout is a research fellow at the Artificial Intelligence Underwriting Company (AIUC) and a former Winter Fellow at the Centre for the Governance of AI.
