How Existing Liability Frameworks Can Handle Agentic AI Harms
Legal debates surrounding artificial intelligence (AI) agents—automated systems operating with minimal human oversight—are increasingly dominated by calls for sweeping reform, including proposals to grant legal personhood to AI. Liability law, in particular, is often portrayed as unable to keep pace with rapid technological change. Because AI systems can be unpredictable and opaque, critics argue that harms caused by AI challenge the foundations of liability law. These claims often reflect technological exceptionalism, the prevailing lens in AI policy debates: the belief that emerging technologies are so novel or disruptive that they demand entirely new legal frameworks. That belief fuels arguments that agentic AI requires an entirely novel legal regime.
This narrative—that existing liability regimes are inadequate—follows a familiar pattern. Often when a new technology emerges or society evolves, scholars and policymakers warn of exceptional risks, highlight allegedly unique technological properties, and call for bespoke legal regimes. Yet long-standing legal frameworks, which have adapted to profound social and technological change over decades or centuries with minimal overhaul, are often overlooked. As I argue in a recent article, today’s AI agents may resemble traditional products far more than is commonly assumed, and existing negligence and products liability doctrines can, with targeted adjustments, be well equipped to address AI-related harms rather than require sweeping reform.
Law and Economics Incentive Analysis
To develop this argument, it is useful to adopt a law-and-economics perspective, which offers a distinctive normative lens for analyzing the issue. Rather than focusing only on the formal conditions and limits of liability law, this approach asks to what extent liability rules incentivize certain behaviors from an economic standpoint. The core premise is that individuals and firms are, at least in part, motivated by profit. If a company considering the launch of an AI agent must decide whether to invest further in safety, it may forgo that investment if the expected cost of liability claims is lower than the cost of additional precautions. While companies and individuals often also weigh ethical concerns and reputational risks, the economic baseline provides a useful framework for steering the behavior of AI developers and users.
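To make this calculus concrete with purely hypothetical figures: Suppose an additional round of safety testing would cost a developer $2 million, and releasing without it creates a 1 percent chance of an incident causing $150 million in harm. If the developer expects to bear the full expected harm in liability, that expected cost is 0.01 × $150 million, or $1.5 million, and the economically rational choice is to skip the testing, since $1.5 million is less than $2 million. Double the probability of the incident and the expected liability rises to $3 million, making the safety investment worthwhile. Liability rules steer behavior precisely by determining how much of that expected harm developers and users actually internalize.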
Viewed through this normative lens, AI agents share notable similarities with traditional products. Although the precise obligations differ and AI may be unpredictable and opaque, developers still bear a responsibility, as with conventional products, to prevent foreseeable harm. They can be expected to design and train their AI systems with care, using adequate data and safeguards against known risks. They can similarly be required to ensure that their AI systems are as accurate as reasonably possible. Moreover, they can reduce the risk of harm by ensuring that their agents are explainable so that they can be subject to some level of oversight. Consider autonomous vehicles or medical diagnostic tools: As a society, we expect self-driving cars to minimize accidents and clinical tools to maximize accuracy while permitting effective human supervision.
As with any other product, the producer—the AI agent or system developer—is generally best positioned to address and mitigate the risks of that product. Through their choices about training data, the extent of training, model design, and safeguards, developers are uniquely able to prevent or correct safety issues. They are also well positioned to spread the costs of these measures through pricing, which aligns with the law-and-economics idea of the developer as the “least cost avoider.” This provides a key normative basis for holding developers ultimately accountable for harm caused by AI agents or systems. That normative conclusion is reinforced by AI’s opacity and complexity, which make it difficult for anyone other than developers to detect or remedy defects.
At the same time, a point too often underemphasized is that developers are not the only relevant actors. If a doctor knowingly uses an AI tool with full awareness of its error rate, it makes little sense to hold only the developer accountable. Similarly, if an individual chooses to replace their lawyer with ChatGPT despite explicit warnings that this is inadvisable, placing sole blame on the developer is misplaced. In such cases, it is relatively easy to incentivize users to act responsibly: When users know a product is dangerous, liability rules can deter them from using it in ways that predictably cause harm. This suggests that, normatively, developers should inform users about an AI system’s limitations and risks. Admittedly, users’ ability to understand an AI agent’s risks may be limited for inherently complex systems, such as autonomous vehicles, where shifting responsibility away from developers makes little sense. Crucially, user accountability should scale with users’ capacity to make informed deployment decisions.
Being able to hold users, rather than developers, accountable when users can understand the relevant risks is especially important from a normative standpoint, because it may be socially desirable to encourage developers to release AI agents and systems despite their limitations. A tool may be inherently imperfect and capable of causing harm, yet still offer significant benefits when used appropriately. We do not, and should not, blindly hold OpenAI responsible if a doctor treats a patient solely on the basis of GPT-5’s symptom analysis rather than applying medical expertise; developers cannot reasonably be expected to bear the entire burden of such misuse. Ensuring that users remain accountable prevents suboptimal incentives that might otherwise discourage developers from making valuable tools available.
A challenge does persist, however. Beyond complexity, AI’s rapid evolution means that its ongoing development and use may generate societal benefits extending beyond those enjoyed by developers or users. This phenomenon—often described as “benefit externalization”—captures the gap between future societal gains (such as fewer accidents from autonomous vehicles) and present user expectations, which are limited to immediate utility. Developers therefore face difficulty selling AI solutions in early stages, even though continuous improvement produces public value over time. Because that future value is hard to capture today, imposing liability too readily on developers and users risks discouraging innovation—potentially delaying or halting progress that could yield long-term societal benefits.
AI and Existing Liability Regimes
These normative arguments do not automatically translate into law, however: American liability regimes do not necessarily allocate risks in this way. Among existing American liability frameworks, negligence liability serves as the general default, while products liability, which focuses on defective products, functions as a specialized, stricter regime for holding manufacturers accountable. Under negligence liability, a victim can obtain compensation by showing that the harm they experienced was caused by another’s negligent behavior (here, that of the AI agent’s developer or user). Negligence, however, narrows the scope of liability: It requires proof that the developer or user failed to act as a reasonable person would in developing or using the agent. Liability is therefore more limited than under a “strict liability” regime, in which developers or users would be accountable for any harm caused, regardless of their conduct.
Moreover, the peculiar features of AI—autonomy, imperfection, unpredictability, and opacity—pose significant challenges for these negligence law assessments. The complexity and unpredictability of AI agents make it difficult for victims to prove negligence and causation. For instance, it may be unclear whether harm stemmed from substandard AI development or simply from the inherent imperfections of an otherwise high-performing system, as explored in more detail below.
While negligence law has doctrines to manage some of these issues, such as the Learned Hand test, which provides that liability arises only when the cost of prevention would have been lower than the expected cost of the harm, these principles strain under the distinctive features of AI. This is why products liability, discussed in the next section, offers an important complementary and, in many developer-related cases, more fitting framework for AI-related harms.
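In its conventional formulation, the Hand formula finds negligence when B < P × L, where B is the burden of the untaken precaution, P is the probability of the harm, and L is the magnitude of the loss. Applying that inequality to an AI agent requires estimating P and L for a system whose failure modes may be opaque even to its own developer, which is precisely where the test becomes difficult to administer.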
These difficulties are particularly acute when it comes to causation—a core requirement of negligence liability. In AI contexts, it is often unclear whether harm was caused by negligent development or use of an AI agent, or whether it would have occurred even with diligent practices. Existing legal regimes, however, offer useful guidance. Medical malpractice law, for instance, addresses situations where it is uncertain whether a wrongful act directly caused the harm. If a doctor fails to provide proper treatment and the patient suffers, it may be impossible to know whether the outcome would have been different had the doctor exercised greater care—since the treatment may succeed in only some cases. In such instances, courts treat the doctor as having caused the patient to lose a chance to avoid harm, awarding damages equal to the total harm incurred multiplied by the probability of causation. A similar “loss of chance” framework could be applied to AI.
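To illustrate with hypothetical numbers how that framework would translate: Suppose negligent development of a diagnostic AI tool deprived a patient of a 30 percent chance of avoiding an injury valued at $1 million. Under a loss-of-chance approach, the responsible party would owe 0.30 × $1 million, or $300,000, compensating the victim in proportion to the probability that careful development would have prevented the harm, rather than forcing an all-or-nothing finding on causation.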
Nevertheless, the complexity of most AI systems makes assessments of causation and negligence far from straightforward—particularly for lawyers and judges without technical expertise. Victims also face steep challenges in evaluating system performance because of the information and expertise asymmetry between developers or users and those harmed. Reflecting this concern, the European Union considered introducing a presumption of causality to aid victims of AI-related harm, though the proposal was ultimately not adopted. Under such a presumption, a causal link between fault and harm would be assumed if the victim could demonstrate both harm and fault.
Absent such presumptions or comparable tools, victims of AI-related harm are left to bear the burden themselves. This not only undermines compensation but also weakens incentives for developers and users to prevent harm.
AI Agents and Products Liability
Negligence liability, while the default framework, is not the only regime applicable to AI agents and systems. Much of the law-and-economics rationale outlined above—placing accountability primarily on the developer—also applies to products. This is why products liability provides a separate avenue for holding manufacturers accountable. Two of its components are particularly relevant here. The first, design defects, concerns how a product was designed and generally imposes liability if the product could reasonably have been made safer. The second, manufacturing defects, imposes strict liability—liability without fault—when harm arises from flaws introduced during the manufacturing process.
Manufacturing defects and the associated “strict” liability regime are especially interesting in the AI context. At first glance, they suggest a way to hold developers accountable regardless of negligence, consistent with the law-and-economics analysis above. The challenge, however, is that most erroneous AI outputs stem from the training process rather than from manufacturing flaws such as faulty sensors or components. As a result, harmful AI outputs are usually assessed as design defects, commonly under the risk-utility test, which limits the scope of strict liability and leaves negligence-based design defect rules to apply.
That said, products liability still offers valuable tools. Notably, it imposes a “duty to warn,” echoing the role identified in the law-and-economics perspective. Developers must provide warnings to enable users to anticipate and mitigate harm. Yet the effectiveness of this duty is constrained by the complexity of many AI systems, which makes it difficult to convey risks fully or with the necessary nuance.
Recalibrating Liability Law for AI Agents
Taken together, the analysis leads to several conclusions for present-day AI agents. While more fundamental reforms may become necessary if AI continues to advance rapidly, existing agents and systems should for now be governed under regimes that broadly mirror those for traditional products, subject to two caveats. First, as AI capabilities progress, it may become necessary to place greater emphasis on the behavior of the agent itself, rather than solely on the humans associated with it; in the meantime, modifications are needed to account for the complexity of certain systems, justifying stronger accountability for developers who may be unable to adequately inform users. Second, in some cases there is a strong argument that society should bear part of the risks of AI development and use, to incentivize innovation with long-term public benefits.
Existing tort law does not fully provide those incentives. The traditional distinction between design and manufacturing defects maps poorly onto AI agents, particularly when risks arise from flawed or biased training data, which are typically treated as design defects. Poor training data that lead to inadequate agent performance could instead be viewed as functionally equivalent to a manufacturing defect, warranting strict liability for developers. Absent such an adjustment, the regime, like negligence liability, fails to provide sufficient incentives for AI safety. Negligence can nonetheless play a key role in holding AI users accountable when they are well aware of an AI agent or system’s risks and nevertheless use it recklessly.
Both negligence and products liability already contain many of the elements needed to address AI-related harms: Negligence remains essential for governing user behavior when users are aware of an AI system’s risks, while products liability anchors developer accountability for design or training defects. What is required is not a wholesale overhaul but a careful recalibration, potentially extending certain targeted doctrines, such as the loss-of-chance framework from medical malpractice, to broader AI contexts. Combined with greater technological literacy among legal professionals, this approach offers the best path to ensuring that the burdens of AI agents fall on the right parties.
