Tarasoff Meets the AI Age
OpenAI recently disclosed that it had been aware of concerning behavior by one of its users, Jesse Van Roostelaar of British Columbia, and had suspended her ChatGPT account in June 2025. (While OpenAI and authorities have not shared the exact content of her interactions with the chatbot, a New York Times investigation into Van Roostelaar’s social media activity documented her posts about mental health issues, substance abuse, weapons, and online violence.) Following internal deliberations, OpenAI decided not to notify authorities about the “disturbing” nature of Van Roostelaar’s interactions with ChatGPT, stating that the content did not meet its threshold for reporting to law enforcement, which requires evidence of immediate risk of severe physical harm to others.
On Feb. 10, 18-year-old Van Roostelaar carried out a mass shooting in Tumbler Ridge, B.C., killing nine people—including herself.
The warning signs were clear. British Columbia Premier David Eby suggested that OpenAI may have had the opportunity to prevent the mass shooting. The critical question that emerges from this case is whether the company’s failure to act amounts to negligence.
When a therapist learns that their patient intends to harm someone, the law may require them to act. This principle, born from the landmark Tarasoff v. Regents of the University of California decision, raises an urgent and largely unresolved question in the age of generative artificial intelligence (AI): What happens when the entity with foreknowledge of harm is not a human clinician, but a chatbot? As OpenAI, Anthropic, Google, and other AI companies deploy increasingly powerful conversational systems, they may find themselves in possession of information suggesting that a user—or someone that user intends to target—is at serious risk.
Understanding Tarasoff’s foundational holding as well as the doctrinal questions its application to AI would raise, including how courts might navigate the tension between a duty to protect and the privacy interests of users, is essential to answering this question.
Tarasoff and the Duty to Warn/Protect
The seminal tort precedent addressing this type of duty is the aforementioned Tarasoff v. Regents of the University of California. The decision remains highly controversial and has prompted legal and medical scholars to write volumes both for and against it.
The facts of the case are as follows: Prosenjit Poddar, a psychiatric outpatient at a University of California hospital, informed his doctors of his intention to kill Tatiana Tarasoff. He subsequently murdered her by shooting and stabbing her. Although the doctors determined that Poddar needed to be confined and notified campus police orally and in writing, the police released him after finding him rational and securing his promise to stay away from Tarasoff. In the resulting wrongful death action, Tatiana’s parents alleged that both the clinicians and the police owed a duty either to warn Tarasoff and her family or to confine Poddar. The court accepted that the clinicians knew of the threats and should have predicted the danger.
On rehearing, the California Supreme Court articulated therapists’ obligation as a duty to protect potential victims from foreseeable danger. This broad formulation extended beyond merely warning of threats to proactively protecting the victim, an expansion that generated significant concern within the mental health community about the duty’s scope and implications.
The court found therapists have a duty to exercise reasonable care to protect third parties once they determine, or should determine, that a patient poses a serious danger of violence to reasonably identifiable victims. California codified this principle in 1985 through statute. In 2013, the statute was amended to change the duty from “warn and protect” to simply “protect,” which therapists satisfy through reasonable efforts to notify both the threatened victim and law enforcement.
To date, 29 of the 50 U.S. states have adopted a mandatory duty to warn or protect. An additional 17 states recognize a “permissive” duty to warn and/or protect, allowing therapists to disclose threats or consult with colleagues or attorneys in cases of uncertainty. Of the states that recognize the duty, 10 ground it in case law rather than statute. Only four states have yet to recognize such a duty in any form.
AI Companies’ Duty to Protect
The Tarasoff case and its later developments raise an interesting question in the current age of AI liability: Should a similar duty to protect (and warn) be imposed on AI companies if they determine that a user poses a serious danger of violence to others, or even to themselves?
Tarasoff offers a particularly rich source of insight into how tort law develops duties that protect classes of potential plaintiffs. The duty articulated, to protect third parties from serious dangers when one possesses special knowledge about risks posed by individuals for whom one bears a particular responsibility, is both morally compelling and adaptable. It can potentially extend to new relationships, including those emerging between users and AI companies that provide generative AI platforms. At the same time, such a duty must be constrained by the nature of the relationship and by the source and reliability of the information underlying that knowledge. The Tarasoff decision illustrates how tort law navigates this tension, offering lessons that may prove instructive in the modern AI era. Whether such duties arise will ultimately depend on the type and scope of knowledge these companies possess and the circumstances under which a duty to protect or warn might reasonably be imposed, questions that will likely be resolved on a case-by-case basis.
In the original case, the burdens that the duty to protect or warn imposes on therapists are not only substantial but also conflict with the therapist’s professional role. In Tarasoff, issuing a warning runs directly counter to the principle of patient confidentiality. Therapists owe patients a duty of confidentiality grounded in both ethical codes and evidentiary privilege, yet Tarasoff imposes a duty to breach that confidence, when necessary, to protect third parties. The decision thus requires therapists to balance these competing obligations without clear guidance on when protection trumps privacy. This tension must be weighed against the costs of remaining silent, namely, a potential increase in violence.
Applying this type of duty in the software context might be more straightforward, as norms of confidentiality are weaker. Although scholars have proposed framing technology companies as “information fiduciaries,” that concept has not been widely adopted yet, though new conversations are emerging. Even if such a duty emerges, general-purpose AI platforms would likely face less stringent confidentiality requirements than therapists. This reality provides a strong argument for recognizing a duty to protect or warn in the AI era. This is perhaps one of the rare circumstances in which the relatively lax privacy protections in the digital environment could actually support broader protective obligations. The very weakness of privacy protections in the digital sphere, often criticized as a failure of law to keep pace with technology, may here work in favor of public safety, lowering the threshold for disclosure and making it easier to justify imposing affirmative duties on AI platforms to act when harm is foreseeable.
Challenges of a Duty to Protect in the AI Age
Three critical issues arise when considering Tarasoff and its potential applicability to the relationships formed between users and AI providers. First, predicting violence is inherently difficult, to put it mildly. The duty to protect or warn depends heavily on the reliability of those who bear the duty and their exercise of professional judgment. Even for trained psychotherapists, it is difficult to demonstrate a professional capacity to accurately predict violent behavior. This raises an important question in the AI context: How can AI developers or moderators reviewing users’ interactions be expected to possess the expertise necessary to anticipate and respond to potential violence? This issue is central to the practical implementation of such a duty and may pose an obstacle to imposing a broad duty to protect or warn in the AI context.
Second, should those who bear the duty be required to take steps beyond merely issuing a warning? Tarasoff left this question somewhat unresolved, and different jurisdictions adopted a variety of standards. If the duty extends to more intrusive measures—such as restricting access, sharing the content of the account with the authorities, or otherwise intervening with a user suspected of posing a risk—it is important to weigh the costs of highly risk-averse actions that could significantly affect individual liberty. Given the vast number of AI users and the prevalence of violent rhetoric in online environments, the likelihood of false positives would likely be substantial.
What’s more, in many generative AI scenarios, it may be unclear to whom the duty is owed. In Tarasoff, the situation was distinctive because the potential victim was identified by name; in many AI-related contexts, no such identifiable target exists. General discussions of violence are not necessarily reliable indicators that harm will occur, nor do they clarify who the potential victim might be. Subsequent cases have taken differing approaches: Some have relied on a foreseeability test, while others have required a closer nexus between the risk and the potential victim before recognizing a duty. Although courts have sometimes rejected such claims on the facts, there are reasons to consider extending the duty to situations involving self-harm as well, potentially by requiring warnings to appropriate third parties, such as authorities or designated family members, depending on the user’s age and the surrounding circumstances.
A different type of legal challenge involves surveillance and privacy violations, which are critical concerns when crafting a duty requiring AI companies to protect or warn. To begin with, the difference in scale between duties imposed on therapists and duties imposed on AI companies is significant. Therapists can treat only a limited number of patients, making the duty more contained. AI companies, however, have access to private information from millions of users. This sensitive content may include relationship discussions, legal advice, and medical advice, including conversations that serve as therapy in a broader sense. Imposing a duty to protect or warn by disclosing information could compromise sensitive data. Moreover, an overly broad duty might incentivize AI companies to conduct surveillance on users, allowing governments to collect and potentially misuse sensitive information, possibly even violating Fourth Amendment protections in certain situations.
An initial approach to addressing this concern could emphasize privacy protection while carving out certain types of interactions that automated systems flag as potentially dangerous, a functionality that already exists in most generative AI platforms. The boundary of that carve-out is inevitably blurry and should remain relatively narrow to encourage compliance from both companies and users. This relatively modest threshold is further justified by how AI companies would likely respond if a duty to protect or warn were imposed: To protect user privacy and make their products more appealing to privacy-conscious users, they would probably make information inaccessible except when classifiers flag it.
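To make the carve-out concrete, the sketch below illustrates one way such a classifier-gated escalation ladder might be structured. It is purely illustrative: the threshold values, the flag fields, and the tiered actions are assumptions made for the sake of exposition, not any company’s actual policy or a proposed legal standard.

```python
# A minimal sketch of a narrow, classifier-gated escalation policy.
# All thresholds, field names, and tiers are hypothetical assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum


class Action(Enum):
    NONE = "no action; content stays inaccessible to humans"
    HUMAN_REVIEW = "route flagged excerpts to a trained human reviewer"
    RESTRICT_ACCOUNT = "restrict or suspend the account pending review"
    NOTIFY = "notify authorities or a designated emergency contact"


@dataclass
class Flag:
    timestamp: datetime
    severity: float            # classifier score in [0, 1]; hypothetical scale
    identifiable_victim: bool   # whether a reasonably identifiable target was named


@dataclass
class AccountState:
    flags: list[Flag] = field(default_factory=list)


# Hypothetical policy knobs; a real duty would calibrate these case by case.
SEVERE = 0.9           # single-message severity that triggers action on its own
ELEVATED = 0.7         # severity that counts toward a pattern of risk
PATTERN_COUNT = 5      # repeated elevated flags within the window
WINDOW = timedelta(weeks=5)


def escalate(state: AccountState, now: datetime) -> Action:
    """Decide the next step based only on classifier flags, never raw transcripts.

    Notification is reserved for severe flags naming an identifiable victim,
    echoing Tarasoff's "reasonably identifiable victim" requirement.
    """
    recent = [f for f in state.flags if now - f.timestamp <= WINDOW]
    if not recent:
        return Action.NONE

    severe = [f for f in recent if f.severity >= SEVERE]
    elevated = [f for f in recent if f.severity >= ELEVATED]

    # A severe flag naming an identifiable victim is the Tarasoff-like core case.
    if any(f.identifiable_victim for f in severe):
        return Action.NOTIFY
    # A severe flag, or a sustained pattern of elevated flags, warrants restriction.
    if severe or len(elevated) >= 2 * PATTERN_COUNT:
        return Action.RESTRICT_ACCOUNT
    # A smaller pattern of elevated flags goes to a human reviewer first.
    if len(elevated) >= PATTERN_COUNT:
        return Action.HUMAN_REVIEW
    return Action.NONE
```

The design choice embedded in the sketch mirrors the argument above: human reviewers, and a fortiori outside authorities, see nothing unless automated classifiers cross a deliberately narrow threshold, and notification is reserved for the most severe flags involving a reasonably identifiable victim.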
As a starting point, even establishing this type of relatively limited duty would be welcome in the AI age, where companies’ legal duties to users remain unclear; the applicable liability regime when AI causes harm has not yet been fleshed out by the courts and is highly disputed among legal scholars. Expanding the duty further would prove practically challenging given the scale of users and interactions. The aforementioned OpenAI case is relatively straightforward because classifiers flagged the content and humans reviewed it. However, establishing an overbroad duty might create a perverse incentive for companies to avoid monitoring entirely, since a lack of knowledge could release them from duty of care obligations. That said, courts would likely reject willful blindness to platform activities in light of clear evidence that serious harm can occur and is already occurring.
Furthermore, the decision and the broader discussion surrounding a duty to protect or warn provide a possible alternative, or complementary, approach to the complex question of whether AI should be treated as a “product” for the purposes of applying strict products liability, and whether such a product is defective under one of the traditionally recognized categories. In the generative AI context specifically, framing the issue as a duty to protect or warn places the analysis squarely within a negligence framework. Establishing such a standard depends on judicial evaluation and avoids, at least in part, the thorny doctrinal challenges associated with applying strict products liability to AI-based products and services, including the well-known “black box” problem.
This approach also shifts the focus away from technological defectiveness. Instead, it highlights the human dimension of AI systems: the role of human oversight and monitoring, which already exists to varying degrees. Tort law can incentivize companies that develop and deploy these systems to adopt clearer safeguards. These may include restricting access to certain chatbot functions or, in extreme circumstances, alerting authorities or designated emergency contacts. Through such mechanisms, the law may help reduce the risk of severe harms, including instances in which generative AI systems appear to encourage or facilitate violence or self-harm. An important caveat is the long-term sustainability of human-in-the-loop oversight, which has proved fragile in other technological domains. Nonetheless, the net impact should be positive: clearer guidelines on when to engage with flagged users, guidelines that will hopefully survive the test of time.
Gavalas v. Google: A Case Study
The urgency of conceptualizing a duty to protect or warn grows with each new case demonstrating the harmful consequences that can stem from generative AI interactions. A recent illustration is Gavalas v. Google. On March 4, a grieving father filed a wrongful death lawsuit against Google. The complaint describes what it characterizes as a romantic relationship between the plaintiff’s son, Jonathan Gavalas, and the Gemini chatbot, alleging that the chatbot encouraged Gavalas to take his own life so that the two could “be together.” The lawsuit asserts strict products liability claims based on alleged design defects and failures to warn, alongside negligence claims grounded in the same theories. For the purposes of this discussion, however, another aspect of the complaint is particularly notable. According to the filing, Gavalas’s account was flagged 38 times over five weeks for sensitive content. Despite these repeated alerts, the account was never restricted or suspended, even after Gavalas reportedly uploaded photos of knives and a video of himself crying and declaring his love for the chatbot.
Cases of this nature, tragic and unfortunately appearing with increasing frequency as AI systems become more embedded in everyday life, may present a compelling context for considering a breach of a duty to protect or warn by AI companies. As in Tarasoff, Google had ample warning signs that Gavalas’s behavior might be dangerous to himself and others. The fact that his account had been flagged so many times, combined with known prior instances of people harming themselves and others after interacting with AI, could be sufficient grounds for assigning a duty to protect users such as Gavalas, for example, by restricting his access to the account or alerting the authorities. It is true that in cases of self-inflicted harm, identifying a third party to notify, akin to Tatiana’s family, might be challenging. This might call for requiring users, even adults, to designate an emergency contact when interacting with generative AI. The contours of these protective steps would be developed by the courts, establishing a tailored, narrow duty applicable to AI companies that offer generative AI platforms to the public.
* * *
Rather than focusing exclusively on the difficult questions surrounding product defects or the technological “black box” problem, a duty to protect or warn offers a separate, appealing path. Such cases could invite a more traditional negligence inquiry: whether the company had a duty to intervene or to alert authorities or an emergency contact once sufficient warning signs emerged.
This line of analysis may be particularly important in a landscape where powerful AI companies profit from technologies that many users still struggle to fully understand, especially their potential risks and harms. Exploring the possibility of a duty to protect or warn could therefore provide a meaningful framework for addressing these emerging challenges. The path to establishing such a duty will be challenging given the obstacles detailed above, but a new course for holding AI companies liable is worth pursuing.
