Why Liability and Insurance Won’t Save AI: Lessons From Cyber Insurance
Holding AI developers responsible for any harm their systems cause may not be the most effective path to promoting AI safety.
In 2024, California’s state legislature considered a bill, SB 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, that would have, among other provisions, imposed liability on some AI companies for harm caused by their machine learning models. Ultimately, California Gov. Gavin Newsom vetoed the bill, but its proponents contend that imposing legal liability is the best way to force AI companies to make their machine learning models safer and more secure.
That’s not an uncommon viewpoint these days—that the most effective path to promoting AI safety is to make sure that the people developing it are held responsible for any harms that their systems cause. But the history of data breaches, cybersecurity, and cyber insurance offers some cautionary lessons about just how useful liability is likely to be for making AI safer given the key role that liability insurers will inevitably play in pricing and covering this risk.
In a new paper, we argue that liability insurers are unlikely to price coverage for AI safety risks in ways that encourage firms to reduce those risks. Instead, as in the cyber insurance market, insurers will tend to base premiums on crude measures such as firm size and industry sector. These factors may suffice to keep insurers solvent, but they will do little to incentivize meaningful improvements in AI safety. In fact, they will signal to firms that investing in AI safety is unlikely to lower premiums. Insurers will lean on such blunt metrics because more refined, risk-based pricing is extraordinarily difficult in a field like AI safety, where historical loss data is limited and quickly outdated, where assessing the safety of individual firms’ AI systems requires specialized technical expertise that most insurers lack, and where insurers competing in a fast-growing line of coverage will tend to forgo the substantial expenditures that such assessments demand.
We reach these conclusions because the same dynamics have already played out in cybersecurity and cyber insurance. This history dates back to 2002, when California passed the first data breach notification law in the United States, SB 1386, requiring companies to report breaches of personal information that affected California residents. Data breaches were already a known problem—the law was motivated in part by a 2002 breach of the state’s Stephen P. Teale Data Center, leading to the theft of information about thousands of state employees. But breach notification laws like California’s opened the floodgates for people to learn about all the times their information was stolen—and to sue the companies responsible.
Paying for those class-action lawsuits was often the largest direct cost of data breaches in the early 2000s, so insurers began offering coverage to help companies pay for the legal costs and settlement fees associated with data breaches. For instance, a 2015 Ponemon Institute study of the costs of data breaches found that, on average, the legal expenditures for a breached firm were $1.07 million, compared to $170,000 in notification costs, and $990,000 for detection and forensic investigation.
Early data breach insurance policies designed to help alleviate these legal costs spurred the development of the cyber insurance industry, which today covers risks ranging from data breaches and denial-of-service attacks to ransomware and regulatory investigations. To qualify for these policies, companies often have to respond to lengthy questionnaires about their security practices and posture, and are sometimes required to implement specific security controls, such as multi-factor authentication. Even so, insurers have struggled with figuring out how to ensure these controls are implemented and maintained properly, and even with identifying which controls are most essential for mitigating cyber risks.
If, as we argue, liability coverage for AI safety risks follows the same pattern, then liability will prove to be a limited instrument for mitigating safety risks. That’s because, whether or not proposals about AI liability explicitly acknowledge it, insurance will play a major role in any attempt to impose liability on AI systems. Just as retailers and other companies that collected payment card numbers and personal information started buying cyber insurance after breach notification laws were passed, companies that develop and use AI models will look to their insurers to help them cover the costs of any new liability imposed on them and their use of AI.
In some cases, that liability may already be covered by existing insurance policies, including commercial general liability insurance, cyber insurance, and tech errors and omissions (E&O) coverage. Some insurers are already looking to exclude certain types of AI risk from their coverage, while others are offering new policies tailored specifically for AI liability, and those new policies may play a useful role in helping AI companies absorb the costs associated with lawsuits and limit their liability moving forward.
What insurance almost certainly won’t do, however, is make AI safer or more secure. To understand why, it’s helpful to return to the example of cyber insurance, and how much hope the public and private sectors once had that it would help solve the problem of cybersecurity.
The Unfulfilled Promise of the Cyber Insurance Industry
In October 2012, the Department of Homeland Security convened a roundtable of experts in Washington, D.C., and charged them with figuring out what obstacles had slowed the growth of the cyber insurance market and “how to move the cybersecurity insurance market forward.” The government cared about growing the cyber insurance market because it believed that was the best thing for cybersecurity. The idea was that a robust cyber insurance market would force companies to implement cybersecurity best practices as a condition of their insurance coverage and as a way to drive down their premium rates, in the same way that people install smoke detectors and sprinkler systems so that they can buy fire insurance, or drive more carefully to lower their car insurance premiums.
Importantly, both the public and the private sector believed at the time that the insurance industry was much better positioned to do this than the government. It could collect data about cyber risks because of its access to information about past claims, could use that data to identify the most effective security controls and preventive measures by seeing which security tools actually stopped cybersecurity incidents, and could incentivize companies to use those tools by offering them discounts on their insurance premiums. Moreover, the insurance industry could keep pace with the changing nature of cyber threats because insurers updated their customers’ policies every year—unlike the government, which could take years to pass a single regulation.
It was a hopeful vision for private-sector cybersecurity—one in which the private sector could effectively regulate itself without the need for interference by slow, bureaucratic, and technologically inept regulators. The only problem was that it didn’t happen. The cyber insurance market did grow—and even expanded to cover a range of threats besides data breaches, such as ransomware—but it never identified a set of cybersecurity best practices, never offered customers discounts for implementing them, and therefore never succeeded in spreading effective, regularly updated security controls across the country. Far from it: The insurance industry struggled mightily—and still struggles today—to figure out how to model and price cyber risk. What has emerged is a cyber insurance industry that helps companies cover the costs associated with cybersecurity incidents, and helps them avoid liability for those incidents, but does little to actually strengthen their cybersecurity or better protect their computer systems.
The failure of the cyber insurance industry is instructive in thinking about whether policymakers need to impose liability to make AI safer and more secure because many of the reasons that insurance and liability failed to make computer systems more secure also apply to AI. Taken together, those reasons suggest that while liability may have some limited benefits in changing the behavior of AI companies, only ex ante regulation is likely to motivate significant shifts in how firms audit, test, and protect their AI systems.
Challenges of Using Insurance to Mitigate Cyber Risk
To understand the limitations of liability and insurance when it comes to trying to address AI safety risks, it’s helpful to look at why that same approach floundered in the cybersecurity domain.
First, insurers struggled tremendously to collect consistent, complete data about cybersecurity risks when they set out to model and price cyber risk. This was because many cybersecurity incidents went unreported (other than very specific incidents, like breaches of personal information, which were required to be reported by law). Additionally, cybersecurity incidents were perpetrated by smart, adaptable adversaries who could change tactics as their targets implemented new protections. So last year’s data on cybersecurity incidents—even if you could collect it—wouldn’t necessarily tell you anything about what those adversaries were going to do this year or next, making it even harder for actuaries to build reliable models. These challenges came to a head in 2019 and 2020, when a massive, unpredicted spike in ransomware claims took the cyber insurance industry by surprise, leading to large premium hikes that revealed just how little insurers knew about how to predict ransomware, much less mitigate it.
AI suffers from many of these same challenges when it comes to collecting data and building good risk models. So far, very little data is available about the harms AI has caused, and there is every reason to believe those harms will continue to evolve in ways that are difficult to model or predict. The data that is available comes from voluntary, public reports about AI harms, which are presumably a small portion of the incidents that actually occur, since companies have few incentives to publicize their failures and every reason to try to hide them. We therefore learn about a small, nonrepresentative set of AI harms—primarily those that can’t be hidden, either because of their direct impacts or because of resulting litigation—and we often don’t know enough about them and how they occurred to draw meaningful conclusions about how best to prevent them in the future.
It’s possible that, over time, this data collection will improve—but probably only if regulators require it. As we have seen in cyber insurance, insurers may not collect detailed data about incidents resulting in claims if they fear that doing so could lead to greater liability for their policyholders because plaintiffs may be able to get access to those incident reports and use them to build a stronger case.
Another major obstacle for the cyber insurance industry has been figuring out how to assess policyholders’ cybersecurity posture. Computer systems at large companies are typically complicated, sprawling networks that involve lots of different software and hardware, and identifying all the potential vulnerabilities and ways they could be compromised requires extensive testing and technical expertise. Rather than attempt in-depth, technical assessments of would-be customers’ systems, insurers rely primarily on lengthy questionnaires that ask policyholders dozens of questions about their security practices: Do they use encryption and multi-factor authentication? Do they have an incident response plan? Do they patch their software regularly? But these questionnaires can’t always capture the nuance involved in securing computer systems—a company may encrypt its data but implement that encryption poorly, or use multi-factor authentication but forget to enable it for some legacy accounts. In other words, asking the question is not the same as verifying how the security controls are actually configured and monitoring them over time.
As a result of how hard it has been for insurers to assess their customers’ cybersecurity, most of them simply price coverage based on a company’s revenue and industry sector, rather than tying it to their security practices. So instead of incentivizing companies to invest in better cybersecurity, insurers send the message that it doesn’t really matter what they do to secure their data and networks—the premiums will stay the same.
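To make the incentive problem concrete, consider a minimal, purely hypothetical sketch in Python of revenue-and-sector rating (the rate bands, sectors, and multipliers below are invented for illustration, not drawn from any actual insurer’s rating model). Because the premium is a function of size and sector alone, a policyholder that invests heavily in security controls pays exactly the same as one that ignores them:

```python
# Hypothetical illustration only: premiums keyed to revenue and sector.
# All numbers and categories are invented for the sake of the example.

SECTOR_MULTIPLIER = {
    "healthcare": 1.6,
    "retail": 1.3,
    "manufacturing": 1.0,
}

def revenue_band_rate(annual_revenue_usd: float) -> float:
    """Base rate chosen solely by revenue band (a stand-in for firm size)."""
    if annual_revenue_usd < 50_000_000:
        return 5_000.0
    if annual_revenue_usd < 500_000_000:
        return 25_000.0
    return 100_000.0

def crude_premium(annual_revenue_usd: float, sector: str,
                  security_controls: dict) -> float:
    """Premium based only on size and sector; the security questionnaire
    answers are accepted but never used, mirroring pricing that ignores
    security posture."""
    _ = security_controls  # collected on the application, ignored in the rating
    return revenue_band_rate(annual_revenue_usd) * SECTOR_MULTIPLIER.get(sector, 1.0)

if __name__ == "__main__":
    weak = {"mfa": False, "encryption": False, "incident_response_plan": False}
    strong = {"mfa": True, "encryption": True, "incident_response_plan": True}
    # Same revenue and sector yield the same premium, regardless of security investment.
    print(crude_premium(200_000_000, "retail", weak))    # 32500.0
    print(crude_premium(200_000_000, "retail", strong))  # 32500.0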
AI insurance poses similar challenges because assessing the security and safety of AI systems is, if anything, even harder and more technologically complex than assessing the cybersecurity of a standard computer system. One 2025 survey by Accenture found that only 20 percent of companies are confident in their ability to secure their generative AI models against cyber risks. A 2025 IBM study found that 13 percent of surveyed companies had experienced a breach of an AI model or application and that 97 percent of the breached companies did not have access controls in place for their AI systems. Even the people developing AI are not confident that it can be designed to be fully safe or secure against serious threats. OpenAI CEO Sam Altman has publicly warned of the dangers of bad actors abusing “superintelligence” to attack critical infrastructure or build novel weapons. The lack of clear consensus on how to secure AI systems will make it nearly impossible for insurers to link premiums to their customers’ AI protections and will mean, accordingly, that those premiums provide AI companies with little incentive to invest heavily in safety or security precautions or testing.
A third major challenge for cyber insurers has been the potential for catastrophic cyber risk to destabilize their industry. In general, when insurers sell coverage they try to diversify their customers so that it’s unlikely all of them will be affected by the same incident—by selling policies to companies in different geographic regions or sectors, for instance, so that a hurricane that hits one area won’t affect more than a limited number of insureds at once. But that’s hard to do with cybersecurity incidents, which can easily span geography and industry sectors, especially since companies in every country and sector rely on the same small number of operating systems and cloud computing providers. A vulnerability in the Windows operating system or Amazon Web Services could potentially impact a huge swath of an insurer’s customer base, causing massive losses.
To guard against that possibility, the insurance industry has at various points asked the government to provide a backstop or some sort of federal reinsurance program for catastrophic cyber risk, to ensure that such an incident would not bankrupt insurers. But this, too, has been complicated to figure out, as it’s not immediately clear what types of cyber risks the government views as catastrophic or what it might ask of the insurance industry in exchange for such support. In the absence of clear government support, many cyber insurers have taken to excluding certain types of catastrophic risks (such as state-sponsored cyberattacks and attacks on critical infrastructure) from their coverage and strictly limiting how much cyber coverage they sell.
Much has been made of the potentially catastrophic nature of AI risks, and many of those risks, such as the elimination of enormous numbers of jobs or the destruction of the planet, cannot be insured at all. But other, not-quite-so-catastrophic risks might pose problems similar to those faced by cyber insurers. AI is a similarly concentrated industry, with a small number of large tech companies providing the foundation models that the vast majority of companies rely on, so a vulnerability or malfunction in one widely used model could rapidly escalate to affect a significant proportion of an insurer’s policyholders. For these reasons, insurers will likely want to exclude some of these risks from their coverage, or cap how much AI coverage they are willing to sell, as is the case in cyber insurance.
What Cyber Risk Can Teach Us About AI
There are, of course, some notable differences between cybersecurity and AI. For one, AI harms may result from deliberate attacks but can also be caused by accidental malfunctions and mistakes, whereas cybersecurity incidents are almost always intentional. Although in theory this could make AI risks easier to predict and model using historical data, the rapid pace of AI development makes that unlikely outside of narrowly defined domains where AI has been deployed for some time. Narrow performance guarantee insurance, which a few insurers have offered for years, illustrates this limited potential. But such products do not address the types of AI safety risks that many commentators envision liability insurance covering through accurate, risk-based pricing designed to promote safer AI practices.
Additionally, while most cyber risk is now covered by stand-alone policies, AI risk is spread across many different insurance lines because it is so tightly intertwined with other domains. This may actually make it harder for insurers to get a handle on modeling and mitigating AI risks, since they will have fewer opportunities to develop their AI expertise within a single product line.
Taken together, these parallels with cyber insurance suggest that AI liability and the resulting insurance market are extremely unlikely to produce meaningful results when it comes to making AI models safer and more secure. Instead, we need to turn our attention to developing ex ante regulations for AI, recognizing that regulation, too, will be extremely challenging—for many of the same reasons—but that it is the only viable path to changing how companies safeguard their AI models. Regulators will not get everything right, but they should start experimenting early with regulations that require companies to report AI harms and to perform audits and tests, rather than waiting for the insurance industry to handle the problem for them.
The history of cybersecurity regulation offers a cautionary tale for what happens if you wait too long. In 2022, a full decade after the Department of Homeland Security roundtable on cyber insurance, and two decades after the passage of SB 1386, President Biden signed the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). After years of hoping that insurers would collect data on cybersecurity incidents and safeguards, the government finally decided it was time to start collecting cybersecurity incident data itself. It would be wise to begin that process for AI now, rather than wasting another decade hoping that insurance will save us.