
Have the Justices Gotten Cold Feet About ‘Breaking the Internet’?

Quinta Jurecic, Alan Z. Rozenshtein, Benjamin Wittes
Friday, February 24, 2023, 10:57 AM

During oral arguments in Gonzalez v. Google and Twitter v. Taamneh, the justices seemed wary of upending the legal regime that has long governed the internet.

View of Congress from the steps of the Court (Anthony Quintano, https://www.flickr.com/photos/quintanomedia/51956417444/in/photolist-2naddLL-2odGBsT-o6HsvZ-2ohn1ud-2o2LfW; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


Partway through oral argument before the Supreme Court in Gonzalez v. Google, Justice Elena Kagan took a moment to acknowledge the limitations of her institution’s ability to weigh complex technical questions about internet governance. “We’re a court,” she said. “We really don’t know about these things. You know, these are not, like, the nine greatest experts on the internet.”

Kagan’s quip got a laugh, but her colleagues on the bench—to their collective credit—appeared to take her point seriously. Over the course of two days of marathon arguments in Gonzalez and its companion case, Twitter v. Taamneh, the justices appeared to genuinely grapple with complex questions about the responsibility of social media platforms for their users’ posts. When the Court first announced that it would hear Gonzalez and Taamneh, scholars of internet law and technology policy were anxious that these grants of certiorari suggested a willingness on the Court’s part to potentially upend the legal structure that has undergirded the internet for more than two decades. After this week’s arguments, though, the potential for a ruling that would throw the online world into chaos seems substantially diminished. 

The underlying facts of Gonzalez and Taamneh are, for legal purposes, indistinguishable. The plaintiffs, family members of victims killed in ISIS terrorist attacks, sued social media platforms on which ISIS had established a presence, seeking to hold them liable under the secondary-liability provisions that the Justice Against Sponsors of Terrorism Act (JASTA) added to the Anti-Terrorism Act (ATA). They argued that the platforms had algorithmically recommended ISIS content and had failed to adequately enforce their policies against terrorist abuse of their services, thereby aiding the group’s recruitment and operations. Notably, the plaintiffs did not allege that the platforms were used to carry out the specific attacks in which their loved ones were killed but, rather, that ISIS made use of the platforms generally and that the platforms thereby aided the group in conducting attacks.

In both cases, the platforms argued they were immune from liability under Section 230 of the Communications Decency Act of 1996, which—broadly speaking—shields websites from liability for third-party content posted on their services. But thanks to the procedural posture of Gonzalez and Taamneh in the lower courts, each case presented a different question before the Supreme Court. In Gonzalez, petitioners challenged Google’s invocation of the Section 230 liability shield for algorithmic recommendations generated by YouTube, whereas Taamneh concerned whether petitioners’ claims met the standard for liability under the ATA and JASTA. 

Because the cases are so alike, the Court’s decision in each will likely reverberate in the other. If Section 230 protects Google in Gonzalez, it would likely protect Twitter in Taamneh, too. And if the Taamneh petitioners have failed to state a claim under the ATA and JASTA, then so too have the petitioners in Gonzalez. This entanglement was on display during oral arguments in Gonzalez, when both the justices and counsel for the petitioners sometimes blurred the distinction between the legal issues under Section 230 and those under JASTA.

Section 230’s key provision, (c)(1), which holds that platforms cannot be “treated as the publisher or speaker of any information provided by another” user or entity, has been called the “twenty-six words that created the internet” because of its importance in allowing online businesses and platforms to host third-party content without fear of potentially ruinous litigation. (Section 230 has also been called the “Magna Carta of the Internet” for similar reasons.) 

Yet Section 230 has become increasingly controversial in recent years, reflecting exasperation across the American political spectrum with the role of large social media platforms in public discourse. Democrats have complained that sites like YouTube and Twitter aren’t removing enough potentially harmful content; Republicans have complained that these services are taking down too much content. Members of both parties have proposed altering the statute or even doing away with it altogether. Meanwhile, in 2020, Justice Clarence Thomas mused that the Court should “consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms.” 

Thomas seemingly got his wish when the Court decided to take up Gonzalez. The specific question presented asked to what extent algorithmic recommendations are protected by Section 230, but Gonzalez also offered the Court an opportunity to reconsider the statute more broadly. As a CNN headline put it, the case could reshape the “future of internet speech and social media”: with scaled-back protections under Section 230, platforms would face a radically different set of incentives when deciding whether to leave up or take down material online. Advocates for rethinking Section 230 cheered the Court’s decision to hear the case. Others worried that a hasty decision by the justices could produce a liability regime that pushed platforms to over-remove material or that destroyed the usefulness of products, like search engines, that rely on algorithmic ranking.

Taamneh, meanwhile, received less attention but could prove deadly in other ways for platforms. The platforms surely could have done more to control terrorist content on their sites. But to read the failure to be optimally energetic in doing so as creating treble-damages liability for aiding and abetting terrorist crimes would create an almost impossible business climate for any company that deals with tens or hundreds of millions of customers around the world on an impersonal, computerized basis.

For those worried about what Gonzalez and Taamneh could portend about the future of the internet, though, this week’s arguments suggest the cases might turn out less dramatic than initially expected. Rather than appearing eager to rethink Section 230, the justices seemed full of uncertainty and caution during oral arguments in Gonzalez—which wasn’t helped by an unfortunately clumsy argument by counsel for petitioners, Eric Schnapper. “I don’t know where you’re drawing the line” between immunized and unimmunized actions or content, Justice Samuel Alito told Schnapper at one point. “That’s a problem.” 

All the justices seemed to recognize that there are no easy options for the Court when it comes to interpreting Section 230. None of the nine seems satisfied with the status quo—which, following Zeran v. AOL, the 1997 Fourth Circuit case that first articulated what has become the dominant interpretation of Section 230, makes it almost impossible to hold platforms liable for anything having to do with third-party content. Thus, for example, Justice Sonia Sotomayor pressed Lisa Blatt, who argued for Google, about whether a platform with an expressly racially discriminatory algorithm—for example, a job site that excluded minority candidates from search results—should be immune from suit just because the content that its algorithm served up was created by third parties. Seemingly drawing on an amicus brief filed by the Cyber Civil Rights Initiative, Justice Ketanji Brown Jackson suggested courts should place greater weight on Section 230(c)(2)’s language describing platform actions “taken in good faith” to moderate content—an interpretation that would limit the statute’s protections for platforms refusing to behave as responsible stewards.

Yet the justices also seemed uncomfortable with the position put forward by the Gonzalez petitioners and the government, which also participated in oral arguments. Schnapper and Deputy Solicitor General Malcolm Stewart argued that it would be possible to preserve immunity for hosting third-party content while eliminating immunity for a platform’s recommendations of that content. But as multiple justices noted, platforms rely on some sort of organizing algorithm in virtually everything they do, because the staggering amounts of content they handle must be sorted one way or another. As Kagan put it, “every time anybody looks at anything on the Internet, there is an algorithm involved.”

In one striking concession, Schnapper appeared to say that platforms could even be held liable for how they chose to prioritize results in search engines, which necessarily rely on algorithms to select results responsive to a user’s query and rank them by relevance. Whatever their distaste for social media platforms encouraging users to watch ISIS videos, none of the justices seemed eager to open up Google to liability for trying to build the best search engine it can.

Beyond the merits of the case, the justices were also highly attuned to the potentially disruptive effects of a decision upsetting the status quo. Kagan raised the specter of the “world of lawsuits” that could result if the justices narrowed Section 230’s protections. Justice Brett Kavanaugh worried that “to pull back now from the interpretation that’s been in place would create a lot of economic dislocation, would really crash the digital economy.” In other words, what if, simply by arriving 27 years late to the party, the Supreme Court has allowed such weighty reliance interests to accumulate that it would now be difficult to reorient the judiciary’s approach to interpreting the statute?

Commenting on oral arguments, Blake Reid of Colorado Law School suggests that reliance on a broad interpretation of Section 230 has allowed courts to accrue years of “interpretive debt”—an absence of case law on key tort-law questions of platform liability, because Section 230 allowed courts to throw out litigation before reaching those questions. Upending Section 230 could cause chaos not only for platforms struggling to figure out what they are and aren’t potentially liable for but also for courts newly struggling to resolve these questions with very little in the way of a road map. Along these lines, Stewart seemed not to reassure the justices but instead to make them even more anxious when he suggested repeatedly that lower courts would be able to work through liability questions by evaluating on a case-by-case basis “the adequacy of the allegations under the underlying law.”

One option, which seemed to appeal to a number of justices, would be to hold back from tinkering with the statute’s interpretation so as to give Congress time to act: “Isn’t that something for Congress to do, not the Court?” Kagan asked after joking about the “nine greatest experts on the internet.” It’s a fair point, but the different factions in Congress critical of Section 230 have shown little ability in recent years to pull together and pass a unified reform. Congress is most likely to act on an issue when powerful interest groups use their political and lobbying capital to put something on the congressional agenda. But the status quo on Section 230 is favorable to big technology companies—so a decision by the Court not to touch the statute might make it easier, not harder, for major platforms to bat away congressional reform efforts. If the Court gutted the statute, though, perhaps tech companies would have more of an incentive to push Congress to make something happen.

The problem, of course, is that there’s no guarantee that Congress will improve on the statute—especially now that Section 230 has become so politicized. There’s also a very real risk that speech on the internet would suffer greatly during the period of chaos after a ruling by the Court along those lines and before Congress steps in. Chief Justice John Roberts seemed conscious of this danger. In response to a suggestion by Schnapper that Congress would have the option to expand Section 230’s protections following a ruling for the petitioners in Gonzalez, Roberts worried, “The amici suggest that if we wait for Congress to make that choice, the Internet will be sunk.”

The Taamneh argument was also a bit of a surprise. Unlike the proper scope of Section 230, the issue at stake in Taamneh shouldn’t be that hard a question. The statute, after all, provides for liability where there is an “injury arising from an act of international terrorism committed, planned, or authorized by” a foreign terrorist group, for “any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.” As a textual matter, it’s hard to see how Twitter could be found to have provided “substantial assistance” to the people “who committed such an act of international terrorism” that killed the plaintiffs’ family members merely by inadvertently providing services to ISIS members on the same terms it provides them to everyone else in the world. Yes, Twitter and YouTube and Facebook are all generically aware that terrorist groups use their platforms, and they all have a variety of policies designed to prevent this. One can criticize their failure to enforce these policies more aggressively, but without an allegation more specifically tying ISIS’s abuse of Twitter to the particular acts of terrorism in question, it’s hard to see how the statute could reasonably be read to reach Twitter’s conduct.

If Twitter is liable here, it’s hard to see why—for example—a bank that complied fully with all applicable laws but had some generic knowledge that ISIS members were using its services would not also be liable. Ditto phone companies and other communications providers. It simply isn’t reasonable, much less consistent with the text of the law, to construe such activity as providing “substantial assistance” to an entity in connection with its commission of an “act of international terrorism.”

Of particular concern is the potential implications for free speech of creating too strict a secondary liability regime. After all, social media companies are speech platforms. And if the liability regime holds them responsible for misconduct on their systems too readily, the result will be to clamp down on speech that may not be associated with the ill in question. Seth Waxman, arguing for Twitter, acknowledged that a request from law enforcement to take accounts down could make a platform culpable if the site refused to take action and a terrorist attack resulted. Waxman’s argument was that Twitter could be held liable only if a link could be drawn between a government warning and an act of violence—but his point demonstrates that a looser definition of “substantial assistance” would create an incentive for platforms to remove material whenever a government asks. 

That should be concerning for anyone who cares about free speech online, especially because major platforms receive plenty of requests from authoritarian or quasi-authoritarian governments to remove material. Waxman used the example of a request from police in Turkey, a country that has increasingly cracked down on the press and on free expression online. Strangely, though, Waxman didn’t point directly to the danger of quashing speech. The fact that the justices were considering the effect of JASTA on platforms whose business is enabling public discourse—rather than, as in a hypothetical that came up repeatedly, a bank—went oddly unmentioned. The omission was particularly striking given how concerned the justices had been the day before about the potential effects of narrowing Section 230 protections on the digital public square.  

Yet the justices seemed to treat Taamneh as a close case. Several of them probed both sides carefully about the scope of secondary liability for platforms under the ATA and JASTA. And while it still seems unlikely that there are five votes to read the provision to cover these facts, the eventual opinion may not be a slam dunk, either in the vote count or in its substance.

Despite the mess of oral argument and the lack of a clear path forward, the arguments in Gonzalez and Taamneh are cause for celebration in at least one respect. In an age of polarization, the Supreme Court has increasingly been perceived (fairly so) as yet another partisan institution, in which the justices are chosen for their prebaked policy preferences and ideological priors, and in which outcomes can be predicted months, if not years, in advance.

But in the arguments for Gonzalez and Taamneh, a very different Court was on display. All nine justices seemed to be grappling with difficult issues of law and policy, in good faith and with open minds. There was no grandstanding or ideological posturing, and it’s impossible to predict what coalitions will emerge when the decisions are written (if they even are). That’s bad if you’re in the prediction business—but good if, even if only on occasion, you like to think that the Supreme Court is a court rather than a partisan super-legislature.


Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
