
Does Product Liability Offer a Route Around Section 230?

Jonathan G. Cedarbaum
Thursday, March 26, 2026, 1:33 PM

Lawsuits against social media companies are addressing not only Section 230, but also product liability law and the First Amendment.

Los Angeles Superior Court. (Flickr, https://www.flickr.com/photos/justaslice/7637962342; CC BY-NC-ND 2.0, https://creativecommons.org/licenses/by-nc-nd/2.0/deed.en).

Yesterday’s verdict in the social media addiction trial in state court in Los Angeles marks an important step in the campaign to hold social media companies accountable. The plaintiffs have relied on a novel legal approach in an effort to get around the liability shield established by Section 230 of the Communications Decency Act. Rather than the negligence, contract, or statutory claims that have failed in the past, these plaintiffs assert product liability claims that rest on the notion that the harms of social media usage arise not from the third-party content available on platforms but, rather, from features the owners of the platforms build into their systems in order to shape users’ interactions with that content. The platforms, they argue, are more properly understood as products being sold to users, and these harmful features are defects in their products, just like faulty brakes or gas tanks might be in a car.

Section 230 of the Communications Decency Act, enacted in 1996 in order to promote the growth of internet companies, gives online platforms substantial protection against suits arising from content posted on their sites by third parties. Subsection (c)(1) of Section 230 in particular ensures that providers of “interactive computer service[s]” cannot be held liable as though they were “the publisher or speaker of any information provided by another information content provider.” That statutory language, interpreted broadly by many federal courts of appeals, has led to the dismissal of hundreds of lawsuits against social media companies over the years resting on a variety of tort, contract, and statutory claims. The Los Angeles trial in KGM is one of more than 200 lawsuits filed over the past seven years that have shifted legal gears by using product liability as the legal basis to demand compensation for a variety of harms plaintiffs claim arise from social media use.

KGM is the first to go to trial and the first to reach a jury verdict. But it only made it to trial because the judge overseeing the case refused near the outset to dismiss it based on the defense that Section 230 barred the plaintiffs’ claims. On March 25, the jury determined that Meta and Google were liable for negligence in designing features that led to the plaintiff’s mental health distress and awarded her $3 million in compensatory damages and $3 million in punitive damages.

KGM is a bellwether case, selected from among the more than 140 similar cases consolidated before this judge as a test case to gauge the parties’ evidence and arguments and a jury’s reaction to them. The result may prompt settlement negotiations in some or all of the other cases, which involve more than 1,600 individual plaintiffs, as both sides, particularly the defendants, weigh the risks of moving on to other cases in light of the total bill (rising into the billions) if juries in those cases were to impose similar damages awards.

The defendant companies will no doubt appeal the KGM verdict, and the legal viability of both its liability and damages determinations will be reviewed first by an intermediate appellate court in the California system and then by the California Supreme Court. As to the crucial federal law issues—Section 230 and the First Amendment—the appellate path will likely lead to the U.S. Supreme Court as well.

But whatever the ultimate outcome in KGM, the legal fight between plaintiffs relying on product liability law and companies seeking the protection of Section 230 will continue to play out in dozens of state and federal courts around the country. And these cases may well reshape not only the interpretation of Section 230 but also some of the basic tenets of product liability law.

This article first sketches that broader litigation landscape. Then it turns to consider three key clusters of legal issues that these cases call on the courts to address in the context of social media usage: the scope of Section 230, the nature of a product in product liability law, and the First Amendment rights of social media platforms.

The Litigation Landscape

The turn to product liability in social media litigation can be traced to the filing of Lemmon v. Snap in federal court in 2019. The Lemmon plaintiffs were the families of three teenage boys killed in a car crash when one of them was driving more than 100 miles per hour, apparently incentivized by Snapchat’s “speed filter,” a feature that superimposed users’ real-time speed on their photos and, the plaintiffs alleged, rewarded users who shared photos of themselves traveling at very high speeds. The district court judge dismissed the claims as barred by Section 230. The U.S. Court of Appeals for the Ninth Circuit partially reversed, concluding that the plaintiffs’ negligent design claim—“a common products liability tort”—sought to hold Snap liable for decisions in designing its platform product, ones concerning the speed filter and its incentive system, not for choices made as a publisher or an editor of third-party content.

Since Lemmon, more than 200 lawsuits, in both federal and state courts, have been filed against social media companies relying on product liability claims. Large groups of these cases have been consolidated in two actions in California, one in state court and another in federal court. The more than 140 cases in state court, captioned Social Media Cases JCCP (for “Judicial Council Coordination Proceeding”), involve claims by more than 1,600 plaintiffs. The case that has come to be known by many as KGM—the initials of one of the plaintiffs—is the bellwether case in this group. The consolidated action in federal court in the Northern District of California, known as In Re Social Media Adolescent Addiction/Personal Injury Product Liability Litigation, involves five operative master complaints: one stating claims by individuals, one involving claims by school districts, and three asserting claims by various state attorneys general. Beyond these two consolidated actions, at least 25 other lawsuits seeking to hold social media companies liable under product liability law are being litigated in state and federal courts around the country.

The plaintiffs have named various groups of social media companies as defendants, most frequently Meta, TikTok, Google (principally YouTube), and Snap. They have sought compensation for a wide variety of harms they claim have flowed from social media usage. Some of those harms are physical, such as death or injury, whether allegedly arising from use of a special feature, such as the Snap speed filter; or allegedly flowing from content posted by third parties that the platform failed to screen out, such as the “subway surfing challenge” that has led to a number of deaths of young people who tried to ride on the top of subway cars; or allegedly caused by the online radicalization of a social media consumer, who went on to shoot victims targeted in accord with the ideas the attacker consumed online, as in the murder of 10 shoppers at a supermarket in a predominantly Black neighborhood in Buffalo, New York. More often, plaintiffs have asserted mental and emotional harms, particularly among those below the age of 18, including depression, suicidal thoughts, decreased school performance, and, most generally, addiction to social media use itself.

In order to hold platform companies liable for these harms under product liability law, plaintiffs have to identify defects in the platforms’ designs as the alleged culprits. With product liability claims arising from many different circumstances, the alleged product defects identified have also been quite varied. But they fall into two broad categories—what might be called defects of commission and of omission. The former are features the defendant companies put in place and that the plaintiffs claim led to the harms the plaintiffs (or those they represent) suffered. The latter are guardrails the defendant companies failed to put in place but that the plaintiffs claim they should have built in to protect users from the dangers of certain kinds of social media activity.

Features in the first category range from platform-specific features such as the speed filter on Snap to, most frequently, a combination of features allegedly designed to addict users, particularly younger users, to social media use, such as “badges,” “streaks,” “trophies,” and “emojis” given to frequent users; autoplay; infinite scrolling; and push notifications. The second category typically includes inadequate parental controls or notification systems.

The Scope of Section 230

Should Section 230 be read to protect social media companies from this latest round of claims, asserted in product liability law? What actions or features constitute the actions of a “publisher or speaker of any information provided by another information content provider”? Courts around the country have come to different, and sometimes conflicting, conclusions. But many judges, even those who have disagreed about the ultimate result, have relied on three tests for gauging Section 230’s scope developed by the U.S. Court of Appeals for the Ninth Circuit. Though developed in cases that did not involve product liability claims, these approaches to Section 230 have proved influential guides as courts grapple with the new product liability claims.

The Ninth Circuit developed the first Section 230 test in its 2008 en banc decision in Fair Housing Council of the San Fernando Valley v. Roommates.com. The fair housing councils accused the online roommate matching service of violating antidiscrimination laws, including the federal Fair Housing Act, by enabling users to select tenants or roommates based on protected characteristics, such as sex and sexual orientation. The trial court ruled for the defendant, relying on Section 230’s protection.

The en banc Ninth Circuit reversed in part. The majority took the view that even though the allegedly discriminatory actions rested on information provided by third parties, the online company could still be held liable if it contributed “materially ... to posted [content’s] alleged unlawfulness.” Those contributions included posing questions subscribers had to answer—through drop-down menus with a limited range of options rather than open text boxes that allowed users to describe their preferences themselves—and thus inducing subscribers to act on illegal preferences.

A year later, the Ninth Circuit announced a second Section 230 framework in Barnes v. Yahoo!. Barnes sued Yahoo! because her ex-boyfriend had posted profiles of her, including nude pictures, and had posed as her in chatrooms, urging men to contact “her.” Barnes repeatedly wrote to Yahoo!, asking it to remove the materials, but the platform failed to do so. She asserted claims for negligent undertaking and promissory estoppel, but the district court dismissed them as prohibited by Section 230.

The Ninth Circuit affirmed the dismissal of the negligent undertaking claim, though it allowed the promissory estoppel claim to proceed. The court held that in determining whether Section 230 bars particular claims, courts must put aside legal labels and “ask whether the duty that the plaintiff alleges the defendant violated derives from the defendant’s status or conduct as a ‘publisher or speaker.’ If it does, section 230(c)(1) precludes liability.” The court defined publishing as involving “reviewing, editing, and deciding whether to publish or to withdraw from publication third-party content.” Thus, according to Barnes, “[s]ubsection (c)(1), by itself, shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties.” Under the Barnes test, courts have held that many content moderation choices made by social media platforms—including allowing the posting of dangerous content, such as sexual propositioning of minors and instructions concerning hazardous offline behavior—are shielded by Section 230.

In 2019, in Dyroff v. Ultimate Software Group, the Ninth Circuit considered claims against a social networking site called the Experience Project, which allowed users to post anonymously and identify other users with common interests. The site became a forum for drug sales. The mother of one user, who bought fentanyl-laced heroin through the site and died from taking the drugs, sued, asserting various tort claims. The district court dismissed her claims under Section 230, and the Ninth Circuit affirmed. Looking back to Barnes and Roommates.com, the court observed that the defendant company’s “functions, including recommendations and notifications, were content-neutral tools used to facilitate communications.” They thus did not implicate the company in the communications undertaken by third parties using the site.

Three brief examples help illustrate how courts have applied these Ninth Circuit tests in judging whether product liability claims should survive Section 230. In the California state-court social media addiction cases, the trial judge rejected the defendants’ Section 230 defense. “As in Lemmon,” she concluded, the plaintiffs’ claims “based on the interactive operational features of [the defendant companies’] platforms do not seek to require that the defendants publish or de-publish third-party content that is posted on those platforms. The features themselves allegedly operate to addict and harm minor users of the platforms regardless of the particular third-party content viewed by the minor user.”

New York’s intermediate appellate court came out the opposite way in addressing a pair of lawsuits arising from the Buffalo supermarket mass shooting. In June 2025, in Patterson v. Meta, the court held that the “plaintiffs’ strict products liability causes of action against the social media defendants fail because they are based on the nature of content posted by third parties on the social media platforms.” Relying on “[t]he immunity test established by Barnes,” the majority concluded that, despite the plaintiffs’ product liability theory, at bottom they sought to hold the defendant companies liable as “publisher[s] or speaker[s] of third-party content,” an approach prohibited by Section 230.

Two dissenting judges would have rejected the Section 230 bar, on two grounds. They reasoned that “use of an algorithm to push disparate content to individual end users constitutes the ‘creation or development of information[.]’” And, in any event, the plaintiffs’ allegations “do not seek to hold defendants liable for any third-party content ... rather, they seek to hold defendants liable for failing to provide basic safeguards to reasonably limit the addictive features of their social media platforms, particularly with respect to minor users.”

One of the most nuanced analyses of the new kinds of product liability claims being asserted against social media companies appeared in the district court opinion granting in part and denying in part a motion to dismiss the individual claims in the consolidated federal cases titled In Re Social Media Adolescent Addiction/Personal Injury Product Liability Litigation. In her decision, Judge Yvonne Gonzalez Rogers of the U.S. District Court for the Northern District of California carefully parsed the various features of the defendant platforms that the plaintiffs asserted could be attacked despite Section 230. Drawing on elements of each of the Ninth Circuit’s tests, Judge Rogers determined that the key to identifying which claims could proceed was whether the defendant companies could satisfy the duties the plaintiffs asserted they owed to users without changing any third-party content.

By that standard, she concluded, claims resting on some asserted defects should be allowed to proceed. These defects included not providing effective parental controls, including notification to parents that children are using the platforms; not providing options for users to restrict their time on a platform; offering appearance-altering filters; not clearly labeling filters; and timing and clustering notifications of content to increase addictive use. For some of these features (among others), the court determined that “there is a defect and a harm separate and apart from publication of any third-party content.” For others, the court ruled, “the notifications at issue concern content created by defendants, not third parties.” 

Claims resting on other defects, however, would have to be dismissed because, in the court’s judgment, they directly target defendants’ roles as publishers of third-party content. The defects in this group included failing to put in place “[d]efault protective limits to the length and frequency of sessions”; failing to institute “[b]locks to use during certain times of day (such as during school hours or late at night)”; publishing geolocating information for minors; recommending minor accounts to adult strangers; and, notably, “use of algorithms to promote addictive engagement.”

Rethinking Product Liability Law

Plaintiffs bringing product liability claims against social media companies must not only surmount the barrier presented by Section 230. They must also be able to plead and prove the elements of product liability against defendants whose offerings—social media platforms—don’t fit the traditional model of products under the law. And they must establish that the alleged defects they condemn caused the harms for which they seek compensation. Each of these aspects of their claims—the definition of a product and the standards for establishing causation—is pressing courts to consider whether and, if so, how product liability law should be revised to take account of the pervasiveness of online activity.

The plaintiffs suing the big social media companies assert two basic kinds of product liability claims: design defect and failure to warn. The rules of product liability are defined by state courts and state legislatures, and so the precise contours of these claims vary across state lines. But certain common core elements have particular relevance for the recent wave of claims against social media companies.

As to design defect claims, a majority of states follow what has been dubbed the risk-utility approach. Under that approach, a plaintiff can prevail if she “demonstrates that the product’s design proximately caused [her] injury and the defendant fails to establish, in light of the relevant factors, that, on balance, the benefits of the challenged design outweigh the risk of danger inherent in such design.”

A smaller number of states use the framework suggested in the 1990s in the American Law Institute’s third restatement of product liability law. Under that rubric, a manufacturer may be held liable for a design defect when “the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design” and “the omission of the reasonable alternative design renders the product not reasonably safe.”

Failure to warn claims subject manufacturers or sellers to liability for “injuries caused by a product due to inadequate warnings or instructions regarding non-obvious risks.” Plaintiffs must prove that “the product was unreasonably dangerous without a warning, which caused the harm.” 

Neither kind of claim can succeed if the commercial offering that the plaintiffs are challenging is a service rather than a product. And under the predominant view, reflected in the third restatement, a product is quintessentially “tangible personal property distributed commercially for use or consumption.” Few courts have considered the question of tangibility in the social media cases, and the defendant companies will likely contend that their platforms fail this tangibility requirement and should be considered services rather than products. But the restatement, in line with developing law in a number of states, recognizes that “[o]ther items, such as real property and electricity, are products when the context of their distribution and use is sufficiently analogous to the distribution and use of tangible personal property that it is appropriate to apply” product liability rules to them. Whether social media platforms should be counted as products on the basis of this sort of analogy—as some commentators have suggested they should be—will be one of the legal battlegrounds as the current wave of lawsuits unfolds.

Another essential element of product liability claims is causation. Plaintiffs must show that the alleged defects or the failure to warn caused the harm they suffered. One of the key impetuses for the recent proliferation of suits against social media companies has been the publication of a growing body of evidence suggesting that, at least for some kinds of uses and some kinds of users, particularly younger ones, social media platforms may well inflict harm on their users. Some of these studies have been undertaken by academic researchers. Others have been done by the social media companies themselves—notably Meta—and have been revealed by former employees. The complaints in the current round of lawsuits cite many of these studies, and they were the subject of conflicting expert testimony in the KGM case, as they will be in other cases that get to trial.

What About the First Amendment?

The scope of Section 230 and some of the fundamentals of product liability are not the only areas of the law that the latest round of social media litigation draws into question. These cases also implicate the emerging application of First Amendment principles to online activity.

In Moody v. NetChoice, the Supreme Court in 2024 confronted facial challenges to Texas and Florida laws that restricted the ability of social media platforms to moderate certain kinds of content. The Court avoided reaching the merits by holding that the lower courts had applied the wrong standards for facial challenges. But the Court observed that “some platforms, in at least some functions, are indeed engaged in expression” protected by the First Amendment’s free speech guarantee. “In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression,” “editorial choices” afforded constitutional protection from government restriction.

Courts addressing product liability claims against social media companies have only begun to consider the implications of Moody’s characterization of content moderation activities as speech undertaken by the platform companies themselves. But these implications are likely to figure more prominently in cases filed since Moody was decided.

In Anderson v. TikTok, the U.S. Court of Appeals for the Third Circuit in 2024 reversed the dismissal of product liability claims seeking to hold TikTok liable largely based on alleged defects in its recommendation algorithm, which fed hazardous viral challenges into the feeds of young children. The Andersons’ 10-year-old daughter had died after attempting a “blackout challenge” involving self-asphyxiation. “Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms,” the court concluded, “it follows that doing so amounts to first-party speech under § 230, too,” not mere third-party content. Thus, in the Third Circuit’s view, Moody suggested that Section 230’s protection should not extend to platforms’ algorithmic curation of third-party content.

The New York Appellate Division in Patterson v. Meta also took note of Moody’s possible implications. Though disagreeing with Anderson, the Patterson majority suggested that “the interplay between section 230 and the First Amendment” might give rise to “a ‘Heads I Win, Tails You Lose’ proposition in favor of the social media defendants. Either the social media defendants are immune from civil liability under section 230 on the theory that their content-recommendation algorithms do not deprive them of their status as publishers of third-party content, ... or they are protected by the First Amendment on the theory that the algorithms create first-party content, as per Anderson.”

Conclusion

The KGM verdict has made headlines, and rightly so. But it is only one of dozens of cases in which plaintiffs in recent years have made use of a novel legal theory—product liability—and newly developed evidence—about the addictive nature of social media use and the harms that may flow from it—in an attempt to impose costs on social media companies for their dangerous offerings. For these plaintiffs to succeed, courts will have to rethink not just the scope of Section 230 but also basic elements of product liability law and the proper way to apply the First Amendment to many online activities. The conflicting answers courts have given so far suggest that it will be several years before we know whether this strategy for holding social media companies to account will prove effective.


Jonathan G. Cedarbaum is a professor of practice at George Washington University Law School, affiliated with the program in national security, cybersecurity, and foreign relations law. During the first year of the Biden Administration he served as Deputy Counsel to the President and National Security Council Legal Advisor.
