
The Three-Body Problem: Platform Litigation and Absent Parties

Daphne Keller
Thursday, May 4, 2023, 9:00 AM
Platform liability disputes typically involve three competing interests. So why are only two parties represented in litigation?
The U.S. Supreme Court recognized the issues with censoring obscene material in its review of Smith v. California in 1959. (Tom Thai, https://flic.kr/p/8XLUCu; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


Disputes about platforms’ responsibilities for online speech almost always involve three competing interests. But in typical litigation, only two parties are represented. As a result, courts don’t hear from absent people or groups who may be seriously affected by a ruling. Put simply, the three parties are people who want to speak, people harmed by speech, and platforms. The missing party is usually one of the first two. 

This dynamic resembles the “three-body problem” in physics, which involves the overlapping gravitational pull of planets or other orbiting bodies. The problem is considered unsolvable, in the sense that no general rule can describe how the three bodies will influence each other in every scenario. Platform liability law’s three-body problem is not strictly solvable either. No simple legal or doctrinal change can account for the competing interests in every case about platforms, online speech, and harms. But for courts and policymakers, simply recognizing that all three interests exist, even in cases that only involve two parties, is an important first step. 
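For readers curious about the physics behind the metaphor, the problem has a precise statement. In Newtonian gravity, the positions of three bodies with masses m_1, m_2, m_3 evolve according to the coupled equations

m_i \ddot{\mathbf{r}}_i = \sum_{j \neq i} \frac{G\, m_i m_j\, (\mathbf{r}_j - \mathbf{r}_i)}{\lvert \mathbf{r}_j - \mathbf{r}_i \rvert^{3}}, \qquad i = 1, 2, 3,

for which, as Poincaré showed in the 1890s, no general closed-form solution exists; small changes in the starting configuration can send the bodies onto wildly different paths.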

Major cases pending before the Supreme Court, Gonzalez v. Google and Twitter v. Taamneh, illustrate one typical litigation pattern, which we might call “Person Harmed v. Platform.” The cases were brought by plaintiffs who tragically lost family members in Islamic State attacks. The legal question in Gonzalez is about platforms’ immunity under the law known as Section 230. Taamneh asks whether, in the absence of such immunity, platforms would actually be liable under U.S. anti-terrorism law, specifically the Anti-Terrorism Act (ATA) as modified by the Justice Against Sponsors of Terrorism Act (JASTA). The missing parties in Gonzalez and Taamneh are readers and speakers. That includes Americans who want to exercise their constitutional right to read Islamic State propaganda, including for research or news reporting, as well as anyone whose lawful online speech may disappear if the rulings cause platforms to adopt new, overly zealous enforcement practices. Risk-averse platforms seeking to eliminate speech relating to foreign terrorism are particularly likely to penalize people speaking languages like Arabic, Farsi, Chechen, or Indonesian; people talking about Islam; or people reporting on human rights abuses or airing grievances against the U.S. or Israel. 

A second increasingly common litigation pattern might be called “Speaker v. Platform.” Platform law experts often call these “must carry” claims, in which speakers assert a right to force platforms to host their speech. Over 62 such cases have been litigated to date. The Supreme Court will likely agree to hear a pair of such cases, NetChoice v. Paxton and Moody v. NetChoice, later this year. In the NetChoice cases, platform trade associations are challenging the constitutionality of Texas and Florida laws that effectively require platforms to carry “lawful but awful” material like hate speech or medical disinformation. On the other side of the cases are the attorneys general of Texas and Florida, who argue that their laws advance the First Amendment interests of internet users against platform censorship. In NetChoice, then, the people harmed by online speech will be the ones lacking representation, including people harmed by barely legal harassment, pro-suicide or pro-anorexia material, hate speech, and more. The people affected by the NetChoice rulings will also include victims of online speech that is actually illegal, like actionable harassment or fraud. The Texas and Florida laws would give platforms reason to err on the side of leaving such content online, particularly if its illegality is not clear, in order to avoid liability. 

In extremely high-profile cases like these, of course, the three-body problem is reduced, because the Supreme Court hears from interest groups of all kinds. Gonzalez alone generated dozens of amicus briefs, discussing both harms and free expression—including a brief I submitted with the American Civil Liberties Union. The NetChoice cases will likely prompt a similar outpouring. But in some key forums outside the U.S.—most notably, in the Court of Justice of the European Union (CJEU), the EU’s highest court—even highly relevant expert groups are often unable to intervene and offer another side to the story. And most cases about platform liability, in the U.S. and elsewhere, come and go with no such attention or intervention by third parties. Courts hear only what the platforms and one other party have to say. 

The third possible configuration—“Speaker v. Person Harmed” cases, in which the rules governing platforms’ liability are litigated without platforms as parties—arises far less frequently. But at least two Supreme Court cases about pre-internet intermediaries fall in this category, and each demonstrates very different litigation dynamics. In Bantam Books v. Sullivan, book publishers sued a state-appointed commission that sought to prevent harms from obscene literature, because of pressure the commission had put on commercial distributors to restrict certain books. The distributors were not parties. The Supreme Court recognized publishers’ standing, given their “palpable injury as a result of the [commission’s] acts,” and held that the commission’s actions were unconstitutional. Similarly, in Denver Area Educational Telecommunications Consortium, Inc. v. Federal Communications Commission, the Court upheld First Amendment challenges to federal cable legislation in a case brought by cable content creators, with no cable companies as parties. 

U.S. courts have spent relatively little time weighing the competing interests of platforms, speakers, and people harmed by speech. That’s because Congress already did it for them. The Digital Millennium Copyright Act (DMCA) and Section 230 provide statutory resolution to most legal questions about platforms, online speech, and harm. Those statutes encode legislative policy choices about how to balance the competing interests. If the U.S. moves away from such legislated regimes, though, courts will likely face these long-deferred questions—and do so in cases where only two of the three interested groups have legal representation. 

Future Interests 

Importantly, the interests affected by platform liability rulings are not just those of the particular speakers whose posts are at issue in the case, or the people who were harmed by them. Significant rulings can cause platforms to change their large-scale content moderation practices going forward, with real consequences for future speakers or victims of harm. If the Gonzalez plaintiffs succeed in limiting the scope of platforms’ Section 230 immunity, for example, platforms will be exposed to new claims in areas far afield from terrorism, like defamation. Reductions to platforms’ Section 230 protections will give claimants far more leverage in convincing platforms to take down content in the first place—either on a case-by-case basis or by changing platforms’ terms of service to prohibit more speech. Experience with laws like the DMCA suggests that platforms will err on the side of over-removal, affecting substantial swaths of lawful speech. 

The Supreme Court recognized risks of this kind in a 1959 obscenity prosecution against a bookseller, Smith v. California. The Court ruled that strict liability for the bookseller in that case would violate the First Amendment, because fear of liability would prompt him to remove legal books from his shelves: 

[T]hus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature…. The bookseller’s self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. 

Smith holds several lessons relevant to internet platform liability. First, while the parties in the case were the bookseller and a government prosecutor acting in the interests of people who might be harmed by obscene literature, the Supreme Court looked beyond those two entities in resolving the case. It ultimately ruled based on the First Amendment rights of authors and readers—who were not parties before the Court. Smith tells us that these third parties’ rights matter in cases alleging that intermediaries should be liable for third-party speech. But it doesn’t tell us how to ensure that courts actually consider those rights, or hear arguments about them. 

Second, while Smith concerned a particular book deemed obscene by the lower courts, the Supreme Court’s concern was not about the author or readers of that book. Rather, it was concerned with other books that might be suppressed under poorly crafted liability standards. The obscenity law in Smith would presumably affect bookstores’ or platforms’ tolerance for particular kinds of speech, including legal pornography and then-controversial novels like Ulysses or Lolita. The lower court ruling in Smith gave bookstores reason to avoid books featuring same-sex partners, in particular. The trial court judge, who read the book three times to be certain it was obscene, seemed particularly distressed by the protagonist’s bisexuality. 

Other kinds of laws are likely to affect other kinds of legal gray-area speech. Fear of copyright liability, for example, predictably leads platforms to remove parodies and home videos. Exposure to defamation liability might make them gun-shy when users post sexual assault allegations. The foreign terrorism laws at issue in Taamneh and Gonzalez, the ATA as modified by JASTA, would have their own unique footprint. Among other things, they would give platforms reason to err on the side of over-removal for any speech connected to foreign terrorism, but not domestic terrorism. Smith tells us that lawmakers must reckon with this foreseeable behavior of intermediaries in calibrating their liability for third-party speech. 

Whatever the right balance might be between incentivizing platforms to take down illegal content and avoiding over-removal of lawful user speech, courts are not well equipped to strike it. The three-body problem in two-party litigation is one big reason. Institutional competence is another. Courts cannot readily adjust some of the most important legal dials and knobs used to calibrate trade-offs between competing interests in platform liability law. Many of the best tools for doing so involve prescriptive and detailed process improvements to platforms’ “notice and takedown” operations—improvements that legislators may be equipped to establish, but courts generally are not. 

The DMCA, for example, was drafted to protect online speakers using a detailed, legislatively choreographed notice and takedown process. It grants platforms conditional immunity, which copyright holders may puncture only by submitting notices that meet detailed statutory requirements. Notifiers must, for example, specify the basis of the claim and the location of the allegedly infringing content, and they must support the claim with a sworn statement under penalty of perjury. The law also sets out a process for users to challenge removals they believe were erroneous, and penalties for bad-faith accusers who cause platforms to remove lawful speech. The DMCA is crafted in recognition of the three-body problem. Ultimately, it tries to resolve that problem by getting platforms out of the middle. When notifiers and affected users disagree about whether online speech is infringing, hosting platforms don’t have to decide who is right. They can host the disputed speech without liability unless the notifier takes its claim to court. Other parts of the DMCA, for providers of internet infrastructure such as caching services, put almost no decisions about the legality of speech in the hands of platforms. Instead, those intermediaries are generally immunized unless a court has already determined that particular content is infringing. The EU’s new Digital Services Act (DSA) builds on the DMCA’s model, adding far more robust provisions to protect online speech. These include extensive appeal possibilities, transparency about the role and accuracy of automated content moderation tools, and oversight by regulators charged with considering the interests of both speakers and victims of harm. 
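For readers who want to see that statutory choreography laid out mechanically, the sketch below models it in Python. It is a deliberately simplified illustration, not an implementation of the DMCA or of any platform’s workflow: the class names, the fields, and the single “notifier sued” flag standing in for the statute’s 10-to-14-business-day restoration window are all assumptions made for clarity.

from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified model of the DMCA section 512(c)/(g)
# notice-and-takedown sequence. Names and fields are illustrative only.

@dataclass
class TakedownNotice:
    work_identified: str        # the copyrighted work claimed to be infringed
    content_location: str       # URL of the allegedly infringing material
    good_faith_statement: bool  # notifier's good-faith belief the use is unauthorized
    sworn_statement: bool       # accuracy/authorization statement under penalty of perjury
    signature: str
    contact_info: str

def notice_is_valid(notice: TakedownNotice) -> bool:
    # A notice that omits the statutory elements does not trigger a duty to act.
    return all([notice.work_identified, notice.content_location,
                notice.good_faith_statement, notice.sworn_statement,
                notice.signature, notice.contact_info])

@dataclass
class CounterNotice:
    content_location: str
    sworn_mistake_statement: bool  # user's sworn good-faith belief of mistake or misidentification
    consents_to_jurisdiction: bool

def platform_action(notice: TakedownNotice,
                    counter: Optional[CounterNotice],
                    notifier_sued: bool) -> str:
    # Sketch of the sequence: take down on a valid notice, restore on a valid
    # counter-notice unless the notifier takes the dispute to court.
    if not notice_is_valid(notice):
        return "no action required; safe harbor intact"
    if counter is None:
        return "remove or disable access; safe harbor intact"
    if not (counter.sworn_mistake_statement and counter.consents_to_jurisdiction):
        return "keep material down"
    if notifier_sued:
        return "keep material down pending the court's decision"
    # The statute contemplates restoration within roughly 10 to 14 business days.
    return "restore the material; the platform never had to judge the merits"

The design point is in the final branches: once a user files a valid counter-notice, the platform keeps its protection by restoring the speech and leaving the merits to a court, which is the sense in which the statute gets platforms out of the middle.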

Proxies for Absent Interests

In principle, platforms might serve as proxies for absent speakers or victims, if their interests were sufficiently aligned. So a key question is: Whose side are the platforms on? 

The answer is complicated. But at the end of the day, platforms cannot be expected to side with anyone but themselves. When platforms face liability for user speech, they can best protect themselves from being sued by taking down users’ posts or terminating users’ accounts. Once platforms are sued, the economically rational choice may be to settle cases and accommodate plaintiffs’ content removal demands. So platforms’ interests are not particularly aligned with speakers in these early stages of disputes. That can change once a platform is actually defending itself in litigation about liability for user speech. At that point, platforms have reason to insist on an interpretation of the law that minimizes any obligation to remove user speech. In doing so, they may advance the same legal arguments that an advocate for users’ speech interests would choose. I think that major platforms like Twitter and Google have often served as decent proxies for users’ speech interests in this situation. (But my opinion could be biased—I was counsel to Google until 2015 and worked on quite a few of those cases.)

But even in this litigation context, there are plenty of reasons that platforms might, as has happened in Gonzalez and Taamneh, have little to say about users’ speech rights. One reason has to do with changes in platforms’ practices over time. In Gonzalez and Taamneh, the platforms need to defend their circa-2017 anti-terrorism policies without implicitly criticizing their very different current practices. In the intervening years, YouTube and Twitter, along with Facebook and Microsoft, participated in developing and promoting a new automated detection tool for potential terrorist content. That tool, which relies on duplicate detection for content from a shared hash database, has been widely criticized by civil liberties groups as likely to suppress news reporting, scholarship, parody, and other lawful speech. This criticism of platforms’ current practices makes it awkward for platforms to champion speech-related arguments in defense of their previous practices. 
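For those unfamiliar with hash matching, the short Python sketch below shows the basic mechanic of duplicate detection against a shared database, and why critics consider it blunt. The exact-match SHA-256 digest, the function names, and the placeholder database entries are illustrative assumptions; they do not describe the industry consortium’s actual hashing technology or any platform’s deployment.

import hashlib

# Hypothetical illustration of duplicate detection against a shared hash
# database. SHA-256 exact matching and these names are assumptions only.

# Digests contributed by participating platforms for content they have
# classified as terrorist material (placeholder values).
SHARED_HASH_DATABASE = {
    "9f2c0c0e...",  # truncated placeholder digests
    "a41b77d2...",
}

def fingerprint(content: bytes) -> str:
    # Reduce a file to a fixed-length digest; identical bytes yield an identical hash.
    return hashlib.sha256(content).hexdigest()

def flag_for_review(uploaded_file: bytes) -> bool:
    # Flag an upload whose digest matches the shared database. The match
    # carries no context: the same clip inside a news report, a research
    # archive, or a parody produces the same digest as the original upload.
    return fingerprint(uploaded_file) in SHARED_HASH_DATABASE

The limitation the civil liberties critique targets is visible in the last line: a match establishes only that the content is the same, not why someone is sharing it.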

Another reason platforms may be reluctant to raise arguments based on users’ speech rights in “Person Harmed v. Platform” cases like Gonzalez or Taamneh comes from the increasing prevalence of “must carry” claims. Any social media platform that enforces voluntary content policies (which is to say, any economically viable one) has reason to oppose such claims. That’s what the biggest platforms are doing in the NetChoice cases, arguing for their own First Amendment rights to set editorial policies and remove lawful speech by users. The Supreme Court is well aware of the “Speaker v. Platform” arguments in cases like NetChoice v. Paxton, given its review of an emergency petition in that case in 2022. If the platforms’ counsel in Gonzalez or Taamneh had leaned too heavily on users’ speech rights in oral arguments, they would have invited difficult and distracting questions, particularly from an already-skeptical Justice Clarence Thomas.

Avoiding the subject of platform users’ speech rights in oral arguments was probably a sound tactical choice. But it doesn’t mean that platforms would have been inconsistent if they had made arguments about users’ speech rights in Taamneh and Gonzalez while arguing against the speech claims in NetChoice. The cases are about entirely different questions: users’ rights against state action in Taamneh and Gonzalez, and users’ rights against private action in NetChoice. The threat to users’ speech in Taamneh and Gonzalez comes from federal legislation, as interpreted by courts. If the resulting state-created liability standard goes too far in incentivizing platforms to silence their users, it can violate those users’ First Amendment rights—just as the obscenity law for booksellers did in Smith. 

The threat to users’ speech interests in NetChoice is not from the government, but from privately owned platforms. First Amendment claims against private defendants are almost always rejected by courts. Laws overriding private companies’ own First Amendment rights and editorial discretion, like the ones in Texas and Florida, have historically been upheld only in exceptional circumstances. As I have discussed elsewhere, the foundational question for must carry cases is whether major platforms’ role in public discourse has created such a circumstance—and, if so, what kind of state intervention might be constitutionally permissible. Texas’s and Florida’s hastily drafted laws should not survive constitutional review under the Supreme Court’s precedent to date, though the current Court may change these standards. Regardless of the outcome, a ruling in NetChoice is unlikely to tell us much about the platform liability questions in Taamneh and Gonzalez. A users’ rights advocate in those cases might have chosen to spend time explaining all of this to the Court. The platforms’ lawyers, understandably, did not.

A final reason platforms may not adequately represent internet users’ speech rights is that platforms just have too many other interests. They may want to maintain good relationships with business partners, for example, or with foreign governments. Or they may be unwilling to make arguments that could be used against them in other contexts. A good example of this is Facebook’s advocacy before the CJEU in Glawischnig-Piesczek v. Facebook Ireland. Austrian courts determined that the user post in that case, which called a prominent politician a “corrupt oaf” and a “lousy traitor,” was defamatory. Before the CJEU, a key question was whether Austria could order Facebook to proactively find and delete similar speech in the future. Clear and consistent CJEU precedent said that orders of this sort burdened internet users’ privacy rights, as well as their free expression rights. Privacy was particularly relevant, since the Austrian court’s order potentially required running facial recognition scans on users’ photos. But Facebook didn’t raise any privacy arguments, presumably because the platform faces similar privacy claims about its own practices, particularly for ad targeting. The plaintiff didn’t mention that the order she sought might harm users’ privacy either. So the court never heard anything about the topic. The CJEU ultimately determined that Austria could require Facebook to proactively monitor users’ communications in search of similar posts and remove them everywhere in the world.

Conclusion

The public should not expect platforms to adequately represent the interests of absent parties. Ultimately, it should not matter which side platforms align themselves with. What matters is that courts should not leave speech interests by the wayside, simply because speakers are not parties to a particular case. By the same token, courts should not ignore the interests of absent victims of harm, in cases where their interests are not represented. 

Like the three-body problem in physics, this issue in platform liability cases is not strictly solvable. Lawmakers cannot simply insert a third advocate into every two-party dispute. Even identifying adequate representatives for the diffuse interests of future speakers or victims of harm is challenging. 

But knowing that all three interest groups exist is half the battle. Under current law, advocates can and should work to make courts aware that arguments from just two parties are unlikely to provide the full picture. For future law, and particularly any proposed amendments to Section 230, legislators should be aware of the same three competing interests. Legislators’ responsibility is heightened precisely because of courts’ limits—including courts’ inability to craft prescriptive rules, like the legislated rules in the DMCA or DSA, to balance competing interests of speakers and victims of harm. For better or for worse, legislators are in the best position to craft such rules and, therefore, should listen carefully to advocates for all three interest groups.


Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
