Cybersecurity & Tech

The Problem of Liability Overexposure for Software

Micah Musser
Tuesday, September 2, 2025, 10:47 AM
Most software tort proposals in the U.S. focus on defining a standard of care for developers. But what happens after a finding of liability?


In recent years, many scholars and policymakers have advocated for the application of tort liability to software developers for insecure or faulty code. Proponents of a federal software liability regime have spent most of their energy trying to clearly define what sorts of developer conduct or product features should trigger liability (or, alternatively, what should qualify for a safe harbor and immunity from lawsuits). Many are concerned that any sort of liability regime that places too much faith in judges or juries to define software quality will result in unpredictable liability exposure. To protect innovation and ensure that companies know how to comply with the law, the argument goes, the U.S. needs clear standards and certifiable safe harbors.

But there is a related, but distinct, problem: that of unlimited liability exposure. Unless a statutory safe harbor is designed in such a way that all companies always comply with it, there will eventually be mistakes. The question then becomes: How much liability is too much? Software faces a number of unique challenges that make it likely that simple oversights could destroy even large companies. Even in cases in which a developer has cut corners and should be subject to some liability, a poorly designed regime will quickly bankrupt many valuable companies.

Here, I lay out a few of the challenges that make liability overexposure problems particularly acute in the context of software. I consider one potential strategy to address this problem in the form of liability caps. However, while liability caps make sense as a way to limit fines in a regulatory oversight system, they likely would not work—and may in fact exacerbate the problem—as part of a federally created civil tort regime. Designing a workable liability regime requires increased attention to the procedures through which tort claims will be adjudicated.

The Problem of Liability Overexposure

Three intertwined sets of problems make the risk of bankrupting liability particularly acute in the context of software.

First, the nature of software—in which every consumer receives an identical copy of the same code—means that a flaw affecting one consumer simultaneously affects all. When a company like CrowdStrike pushes out a buggy update, that update is received by millions of devices. If there is a flaw in the software, the related economic damages can easily exceed the company’s annual revenues within a few hours. The problem of a defective design manifesting in millions of goods is not unique to software; this is also true of all mass-produced physical products. But in the case of software, there is a much higher likelihood that all copies exhibit the flaw simultaneously—either as part of a bad update or because a malicious hacker launches a large-scale cyberattack, which has little analogue in the realm of tangible products. This is the problem of correlated risk.

Second, and relatedly, software failures typically cause economic harms—ransomware payments, business shutdowns, data breach remediation, intellectual property theft, and so forth—rather than physical injuries. Tort law is usually suspicious of duties to avoid negligently causing economic losses to strangers, in large part because economic harms can easily cascade and impact remote third parties. The exceptions to this default rule, like liability for professional malpractice, typically involve duties owed to a particular client, not an undefined class of third parties. But the same code may be sold or licensed to millions of users, and an even larger class of consumers may indirectly rely on it. (For example, imagine a payment processing company that sells its software to thousands of companies that interact with millions of consumers, all of whom would be indirectly affected by a data breach.) Even if CrowdStrike had predicted that a shoddy update would result in grounded flights, it is hard to predict the economic injuries that grounded flights might cause to unknown third parties. This is the problem of cascading risk.

Third, the market dynamics of software make the dangers of liability overexposure particularly serious. If an attorney commits malpractice and is driven out of practice by the resulting lawsuit, their other clients can easily find other attorneys. The market for professionals is typically relatively thick and competitive. But just four companies control over 50 percent of the market share for endpoint protection software (with CrowdStrike alone controlling 20 percent). Such market concentration is relatively typical for software products. Maybe that’s because software is a natural monopoly; maybe that type of concentration is harmful and could be eliminated by changes in antitrust law. Whatever the explanation, under the current state of things, forcing a company like CrowdStrike to pay for all of the economic harms of a bad update would require it to either raise its prices by an enormous margin or risk bankruptcy. Bankrupting CrowdStrike would, in turn, make consumers more vulnerable to cyberattacks by eliminating one of a small number of major antivirus companies. This is the problem of concentrated risk.

Courts are very suspicious of expanding tort duties when the problems of correlated risk, cascading risk, and concentrated risk are all present. For example, in the canonical case of Strauss v. Belle Realty Co., New York’s highest court faced a lawsuit against Con Edison brought by a tenant who fell down the stairs during a blackout. There were correlated risks: The blackout simultaneously affected thousands of buildings. There were cascading risks, because—as the trip in the dark shows—there was an unpredictable number of injuries caused indirectly by the blackout. And there were concentrated risks, because Con Edison was a local monopoly with a statutory duty to provide power to all New York City residents. Faced with such a situation, the Court of Appeals of the State of New York decided that it had an obligation “to limit the legal consequences of wrongs to a controllable degree” and dismissed the lawsuit.

Liability Caps and Aggregate Litigation

One solution to all of this is simply to cap the liability facing developers. Consider the Cyberspace Solarium Commission’s 2020 legislative proposal, which would have imposed liability on “final goods assemblers” for vulnerabilities in their software products. The proposal would have made economic damages greater than $75,000 recoverable but would have capped liability at 15 percent of the developer’s annual revenues. At first glance, this seems like a good way to reduce the risk of bankrupting liability and to ensure that companies face serious but not existential consequences for poor code.

There is, however, a critical ambiguity in the proposal: Does the 15 percent cap on liability apply on a per-plaintiff basis or a per-defendant basis? The first isn’t much of a cap at all. If a company’s software is used by 425 of the Fortune 500 companies, allowing each company to sue separately for its damages results in an aggregate liability cap of 6,375 percent of the developer’s annual revenues—hardly low enough to avoid the risk of bankrupting liability. But if the cap is essentially a promise that no company will pay out more than 15 percent of its revenues in aggregate, then it actively incentivizes lawsuits by creating a race-to-the-courthouse problem: Following any major hack, only the first few victims to reach a final judgment may get any recovery at all, which incentivizes everyone to sue the second they think their injuries might have been caused by a vulnerability.
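To make the arithmetic concrete, here is a minimal sketch of the two readings, with purely hypothetical figures (the revenue number is an illustrative assumption, not part of the Solarium proposal):

```python
# Hypothetical figures for illustration only.
ANNUAL_REVENUE = 1_000_000_000  # assume the developer earns $1 billion per year
CAP_RATE = 0.15                 # the 15 percent cap in the Solarium proposal
NUM_PLAINTIFFS = 425            # the Fortune 500 users from the example above

# Reading 1: the cap applies per plaintiff. Each of the 425 plaintiffs can
# recover up to 15 percent of revenue, so total exposure is 425 * 15% of revenue.
per_plaintiff_exposure = NUM_PLAINTIFFS * CAP_RATE * ANNUAL_REVENUE
print(f"Per-plaintiff cap: {per_plaintiff_exposure / ANNUAL_REVENUE:.2%} of revenue")

# Reading 2: the cap applies in the aggregate. Total exposure is fixed at
# 15 percent of revenue, creating a limited pot that rewards whichever
# plaintiffs reach a final judgment first.
aggregate_exposure = CAP_RATE * ANNUAL_REVENUE
print(f"Aggregate cap: {aggregate_exposure / ANNUAL_REVENUE:.2%} of revenue")
```

Under the first reading, the “cap” permits judgments totaling more than 60 times the company’s annual revenue; under the second, the fixed pot is precisely what creates the race to the courthouse.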

The idea of liability caps is likely borrowed from European laws like the General Data Protection Regulation (which caps penalties at 4 percent of a company’s global revenue) or the EU Artificial Intelligence Act (which caps penalties at 7 percent). These provisions make sense as caps on fines, where only a small number of regulators have the right to levy the penalty. Damages caps are also workable as part of a civil liability regime if they don’t effectively serve to create a limited pot of money over which many plaintiffs need to fight. In the case of medical malpractice, for example, where physicians harm one patient at a time, damages caps don’t create the same race-to-the-courthouse dynamic. But caps on aggregate damages are very likely to backfire when a single course of conduct by a developer harms many software users (and indirect bystanders), each of whom can potentially sue.

Procedural consolidation of claims could mitigate a race-to-the-courthouse dynamic. Consolidation of software tort claims seems particularly desirable, because liability hinges on complicated “upstream” factual questions about a developer’s software quality, and because the harms to many individual consumers following a cyber incident are too small to litigate over absent aggregation. The problem is that plaintiffs with large injuries are likely to avoid litigating as a class if doing so would subject them to a more restrictive damages cap. Even so, under a federal tort statute, defendants could remove all cases to federal court, where they could be consolidated as a multidistrict litigation (MDL). Perhaps an MDL judge could appoint a special master to mete out damages to individual claimants so as to keep overall liability below some threshold.

Unfortunately, there are both legal and pragmatic problems with relying on MDLs to resolve this issue. Legally, the Supreme Court has emphasized that MDL judges have jurisdiction only over pretrial matters, which would prohibit them from making a final determination of damages in any particular case. Additionally, the American Law Institute has observed that where statutes are ambiguous regarding the aggregation of statutorily prescribed damages, courts may often simply refuse to permit aggregation of claims if they believe that doing so would be inconsistent with the statutory purpose. Ultimately, this creates a question of statutory interpretation, where a poorly drafted regime could lead to conflicting decisions in different circuits.

Pragmatically, many cyber incidents cause major harms both to large corporate users of a software product and to individual consumers. (Recall the example of a payment processing product sold to thousands of companies and used to safeguard the personal data of millions of consumers.) In such a situation, only large corporations can be expected to sue as individual plaintiffs, and they can also be expected to resist further consolidation with individual consumers if consolidation might dilute their own recoveries. This illustrates a broader problem with relying on individual litigation and MDLs to resolve software tort claims: In the class action context, Rule 23 (which governs class actions in federal courts) embeds many protections to ensure that all class members are represented fairly, even where there are structural conflicts between the interests of different class members. Treating an MDL—which lacks such protections—as a quasi-class action subject to an aggregate damages cap would raise serious due process concerns about the representation of absent victims, as well as the representation of different categories of injured plaintiffs.

The upshot of this discussion is that existing software liability proposals have failed to address the procedural mechanisms by which claims would be resolved. Filling that gap requires attention to issues like the availability of class action litigation and MDLs, as well as the likelihood of structural conflicts between large corporate users of software and the individual consumers whose data may be jeopardized by a cyber incident. These structural conflicts make it particularly difficult to “cap” liability exposure: Even if a federal statute clearly applied a damages cap to class action recoveries, the wide variation in damages across affected parties may incentivize large corporate plaintiffs to opt out of a Rule 23(b)(3) damages class and litigate separately, thus evading the cap. And given the limitations inherent in an MDL court’s authority, alternative forms of involuntary consolidation cannot be relied on to address the problem.

Possible Ways Forward

In summary, the problems of correlated risk, cascading risk, and concentrated risk make it difficult to design a software liability regime that does not expose companies to excessive liability. These problems are exacerbated by the availability of class action litigation in the United States, which makes it likely that any company with a large user base could face massive liability judgments for flaws in its software, even when the harms to individual users are small. And attempts to cap maximum damages—even on a class action basis—may prove inadequate where, as in the software world, there is likely to be a small subset of victims with disproportionately large injuries who may opt out and litigate separately in order to avoid the damages cap.

There are some intuitive—but insufficient—solutions. One would be to start small: Rather than imposing liability for software flaws generally, a regime could make only some subset of harms trigger liability. For instance, a liability regime might identify a narrow set of particularly serious harms that aren’t likely to cause correlated and cascading failures. Perhaps it is worth starting with liability for software in medical devices or autonomous vehicles, where risks of physical injury are particularly acute but mass harms are less likely.

Another possible approach is to enact a federal liability regime that disallows class action litigation over software harms. As with liability caps, this would work to reduce a company’s risk exposure at the tail end of bad outcomes, but without creating similar race-to-the-courthouse problems. But it would also serve to turn software liability primarily into a matter of business-to-business disputes. Individuals suffering from identity theft or stolen data would almost never sue absent the possibility of class action aggregation, and software companies primarily offering services to everyday consumers (for instance, social media platforms) might essentially evade liability for data breaches or other software failures. And on the extreme tail end—something on the scale of the CrowdStrike bug or the SolarWinds hack, where many Fortune 500 companies individually lost millions of dollars—liability overexposure may still be a serious concern.

A more subtle adjustment would be to restrict the scope of who can recover following a major software failure. In Strauss v. Belle Realty, the New York Court of Appeals did not decide that no one could sue Con Edison for its gross negligence. Instead, it held that the utility company owed a duty of care only to parties with whom it had contracted to provide service. The landlord who paid for the utility could sue if the blackout injured him; his tenant who fell down a flight of stairs could not. Where a software company contracts with thousands or millions of users, this is a somewhat crude and only partially effective way of minimizing liability exposure, but it at least eliminates the concern about cascading risk. And there may be good reasons to adopt it in the software context; one added benefit of such a limitation is that it would help protect open-source developers who don’t sell their work for profit.

There are also unique regulatory regimes for other industries that could provide a model for software tort liability. Operators of nuclear reactors, for instance, are required by the Price-Anderson Act to purchase the maximum available insurance coverage for nuclear incidents and not to contest fault in any tort suit following an incident. In exchange, the federal government agrees to indemnify reactor operators for any court-awarded damages above the maximum insurance payout (plus a “retrospective premium” assessed equally against all nuclear licensees following an incident at any particular reactor). Like software, the nuclear industry presents correlated, cascading, and concentrated risks, and, also like software, insurance markets are unlikely to fully cover the risk of major incidents. But software is a far more heterogeneous industry than nuclear power. Assessing retrospective premiums across so heterogeneous an industry would be nearly impossible, while providing a federal backstop for all liabilities above the maximum insurance payout without a retrospective premium would be enormously expensive.
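For intuition, here is a minimal sketch of how a Price-Anderson-style layered allocation might work. The three-layer simplification and all dollar figures are assumptions for illustration; the act’s actual limits and procedures are more complicated:

```python
# Hypothetical sketch of layered coverage in a Price-Anderson-style regime.
# Figures and the three-layer simplification are illustrative assumptions.

def allocate_award(damages: float, primary_insurance: float,
                   retro_premium: float, num_licensees: int) -> tuple[float, float, float]:
    """Split a court award across the three layers described above:
    (1) the operator's mandatory insurance, (2) retrospective premiums
    assessed against every licensee, and (3) the federal indemnity backstop."""
    layer1 = min(damages, primary_insurance)
    retro_pool = retro_premium * num_licensees
    layer2 = min(max(damages - layer1, 0.0), retro_pool)
    layer3 = max(damages - layer1 - layer2, 0.0)
    return layer1, layer2, layer3

# Example: a $2B award against $450M in insurance, with 100 licensees
# each assessed a $15M retrospective premium.
print(allocate_award(2_000_000_000, 450_000_000, 15_000_000, 100))
# -> (450000000, 1500000000, 50000000): insurance pays first, the
#    industry-wide pool pays next, and the federal backstop covers the rest.
```

The design problem for software sits in the second layer: There is no small, homogeneous set of “licensees” across whom such a retrospective premium could sensibly be assessed.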

Finally, rather than seeking to eliminate the possibility of class actions, there are ways that a liability regime could harness them. Return to the intuitive idea that developers shouldn’t have to pay more than some set percentage of their annual revenues as a result of a single flaw in their code. Where there is a limited fund of money available to pay for a large amount of potential liability, plaintiffs can sometimes seek a “mandatory” class action under Rule 23(b)(1)(B). Certifying a class under this provision allows a court to resolve claims by determining how the limited fund will be distributed to everyone with a claim against the defendant. This contrasts with the Rule 23(b)(3) class—which to date has been the norm in data breach litigation—where there is no upper limit on how much a company can be forced to pay, and where impacted users who have particularly large injuries can always opt out and bring a separate lawsuit.

A federal liability regime could affirmatively channel software disputes into the 23(b)(1)(B) class. It could do this by allowing a defendant to convert any lawsuit against it based on a software flaw into a 23(b)(1)(B) class, where the “limited fund” to be distributed would be some set percentage of the defendant’s annual revenues. Defendants would not do this unless the aggregate harms they caused to others exceeded that cap; if the cap were sufficiently high, this would be a rarely invoked move, but one that would eliminate the possibility of a bankrupting judgment. And upon conversion of a lawsuit into a class action, courts acquire far more power to supervise the process and ensure that all impacted parties—whether large businesses that lost millions of dollars or everyday consumers reeling from an identity theft—are fairly represented.
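To illustrate the mechanics, here is a minimal sketch of a pro-rata limited-fund distribution of the sort a court might supervise under Rule 23(b)(1)(B). It is an assumption-laden simplification: The figures are invented, and an actual court could weight categories of claimants differently rather than paying strictly pro rata:

```python
# Hypothetical sketch: distributing a limited fund pro rata among all claimants.

def distribute_limited_fund(claims: dict[str, float], fund: float) -> dict[str, float]:
    """Pay every claim in full if the fund suffices; otherwise scale all
    claims down by the same factor, so that recovery does not depend on
    who reached the courthouse first."""
    total = sum(claims.values())
    if total <= fund:
        return dict(claims)
    factor = fund / total
    return {claimant: amount * factor for claimant, amount in claims.items()}

# Example: a $150M fund (15 percent of a hypothetical $1B in revenue)
# against $600M in total claims.
claims = {
    "Large retailer": 400_000_000,
    "Regional airline": 150_000_000,
    "Consumer class": 50_000_000,
}
print(distribute_limited_fund(claims, 150_000_000))
# Every claimant recovers 25 cents on the dollar, large or small.
```

The point of the sketch is the contrast with opt-out litigation: Because the fund is fixed and distribution is court-supervised, no plaintiff gains anything by racing to judgment or opting out.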

***

Errors are inevitable. In the software context, a badly designed liability regime could make them existential. The combination of hard-to-predict risk, class action aggregation, and concentrated markets makes potential judgment sizes astronomical. The primary solution offered by proponents of software liability so far has been to suggest that companies following secure practices should be entitled to a statutory safe harbor from liability. But unless every company always complies with the safe harbor all the time—a sure sign that the safe harbor requirements are too weak to meaningfully improve outcomes—a liability regime should also be concerned about what happens after a liability finding. Addressing this requires paying more attention to the procedural mechanisms that might be used to channel claims, limit liability, or distribute damages.


Micah Musser is a law student at the NYU School of Law. From 2020 to 2023, he was a research analyst at the Center for Security and Emerging Technology (CSET), where he worked on the CyberAI Project.