
What the Supreme Court Says Platforms Do

Daphne Keller
Thursday, September 14, 2023, 1:43 PM
The Supreme Court’s Taamneh ruling makes platforms sound like the passive “common carriers” that Justice Thomas wants them to be.
The Supreme Court of the United States, January 2017. (Anthony Quintano, https://www.flickr.com/photos/quintanomedia/50067476316; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


Legal questions about internet platforms have, at long last, arrived at the Supreme Court. After taking virtually no cases on the topic since the 1990s, the Court accepted two in the 2022-2023 term: Twitter v. Taamneh and Gonzalez v. Google. It will hear more cases about platforms in the coming term, likely including constitutional challenges to so-called must-carry laws in Texas and Florida, which limit platforms’ ability to remove disinformation, hate speech, and other potentially harmful content. The Biden administration recently urged the Court to accept those cases.

In Taamneh, the Court unanimously held that platforms were not liable under federal anti-terrorism law for harms from ISIS attacks. Because of this decision, the Court ultimately did not rule in Gonzalez, which raised questions about platforms’ immunities under the liability shield known as Section 230. The Taamneh ruling, authored by Justice Clarence Thomas, is this Court’s first detailed utterance on an era-defining topic. Its legal analysis is overall very favorable for platforms, as others have noted.

But the ruling is also oddly emphatic about platforms’ supposed “passivity” toward users and content. That characterization is sure to be raised in future platform cases of all kinds, including by plaintiffs seeking to hold platforms liable for content posted by users. Its most immediate relevance could be in disputes about must-carry laws, which compel platforms to carry user content, including disinformation or hate speech, against their will. In a recent brief defending Texas’s must-carry law, lawyers for the state quote Taamneh eight times. They argue that platforms have no First Amendment interest in maintaining editorial control because, as Taamneh describes them, the platforms don’t really have much of an editorial role in the first place. 

According to Taamneh, platforms’ relationship with users and content is “arm’s length, passive, and largely indifferent[.]” Variations on the word “passive” appear throughout the ruling. It does not mention some of the most common ways that platforms interact with content, however, and it is silent about some of their most controversial practices. At no point, for example, does the opinion note that platforms remove posts or “deplatform” users who violate platforms’ rules against lawful but offensive or harmful speech, such as disinformation or hate speech.

Employees at Facebook or YouTube (or even Twitter) who read the opinion might have trouble recognizing the companies they work for. So might lawyers familiar with the numerous Taamneh and Gonzalez briefs describing platforms’ content moderation practices to the Court. A reader who relied solely on Taamneh to understand what platforms do might conclude that they behave like phone companies or internet access providers, taking almost no action to restrict users’ speech. Given the years of public attention to platforms’ actual moderation practices, and the extensive briefing the Court received, this is at least a little bit strange.

Thomas’s description has a lot of similarities to arguments Texas and Florida made in favor of their must-carry laws well before the Taamneh ruling. The states argued that lawmakers could impose “common carriage” obligations, and restrict platforms’ editorial discretion to moderate user content or exclude speakers, because platforms already held themselves out as open to all comers. Justices Samuel Alito, Neil Gorsuch, and Thomas have expressed sympathy for that argument, arguing that the Court should let Texas’s law come into effect while litigation was pending, based on the likelihood that states would prevail. The states’ laws might be acceptable, they said, if platforms already offer “neutral forums for the speech of others[.]” Thomas also endorsed this logic in a 2021 opinion, saying that “[i]n many ways, digital platforms that hold themselves out to the public resemble traditional common carriers.”

Taamneh’s descriptions of platforms as “passive” or “agnostic” about content seem aligned with this reasoning. Its description of platforms as “generally available to the internet-using public[,]” who mostly “use the platforms for interactions that once took place via mail, on the phone, or in public areas” frames them as functional substitutes for traditional common carriers, and tracks Texas’s argument that platforms are “twenty-first century descendants of telegraph and telephone companies.” Those passages don’t mean Taamneh is precedent that supports must-carry laws, of course. It was about an entirely different legal question. But lawyers for Texas clearly found its descriptions helpful in advancing their position.

Plaintiffs who argue that platforms should face liability for carrying users’ unlawful content are likely to invoke Taamneh’s descriptions of platforms as well. They may argue that the Taamneh defendants won because they were passive—and, therefore, that platforms assume liability when they moderate content more actively. As a matter of law, that seems like a stretch for a number of reasons. First, it is often unclear which factual assertions the Court considered legally significant. Its descriptions are often embedded in paragraphs of legal analysis but not directly paired with legal conclusions. Second, this case arose on a motion to dismiss, meaning that the only legally relevant “facts” were the ones alleged by the plaintiffs suing the platforms. (That’s not an excuse for all of Taamneh’s omissions, though. Even the plaintiffs sometimes described the platforms as more active than the Court does.) Finally, the platform “passivity” that mattered for the Court’s legal reasoning was passivity toward ISIS, not toward users and content generally. Still, many people share the intuition that passivity or content-neutrality across an entire service is what matters. The ruling describes platforms’ overall services this way, saying they are passive not just toward ISIS but also toward “everyone else” and “any other content[,]” and calling platforms’ relationship with ISIS the “same as their relationship with their billion-plus other users[.]”

Disputes about Taamneh’s significance as legal precedent will play out for years. This article won’t attempt to resolve them. But it will try to tease out which parts of Taamneh actually matter to the holding, and which of its descriptions are particularly noteworthy. The remainder of this article is descriptive. It reviews some of the most obvious discrepancies between Taamneh’s version of the facts about platform content moderation and the facts as described to the Court in briefs, or as generally understood by industry watchers.

A casual reader of Taamneh might be left with the impression that defendants YouTube, Facebook, and Twitter rarely or never engaged in some very standard content moderation practices, including:

  • Removing specific content, like tweets or YouTube videos.
  • Deplatforming users by terminating their accounts.
  • Enforcing discretionary policies against lawful content.
  • Proactively screening uploaded content to block prohibited material.
  • Algorithmically promoting or demoting posts based on their content.

The reality is far different. Facebook and YouTube have both described hiring tens of thousands of moderators to do precisely these things. Party and amicus briefs to the Court—including, in some cases, plaintiffs’ briefs—discuss content moderation operations as well. 

Removing Content

Twitter, Facebook, and YouTube all remove enormous numbers of posts by users. They do so in part to avoid legal claims much like the ones in Taamneh, which can arise around the world. A major U.S. law, the Digital Millennium Copyright Act (DMCA), requires such removals as a condition for immunity from copyright claims. U.S. platforms would also risk criminal liability if they did not remove worst-of-the-worst material like child sexual abuse images. 

All of the parties in Taamneh, including the plaintiffs, agreed that the platforms removed ISIS content. Briefs from every platform described doing so. Plaintiffs’ argument was not that platforms failed to remove content but, rather, that they should have proactively searched for and removed more of it. Twitter’s brief included data from the platform’s public transparency report, saying it had terminated 630,000 ISIS accounts (and thus removed the associated content) in the period before the case arose. Briefs from parties and amici like the Trust and Safety Foundation in Gonzalez provided still more detail about this aspect of content moderation, and Justice Elena Kagan in oral argument mentioned Twitter “having a policy against” and “trying to remove” ISIS posts.

Yet the ruling itself is oddly reticent about content removal. It appears to specifically mention the practice only twice. First, it says there is no reason to think the platforms “took any action at all” with respect to ISIS’s content, “except, perhaps, blocking some of it” (emphasis added). Second, in a footnote, it adds that “[p]laintiffs concede that defendants attempted to remove at least some ISIS-sponsored accounts and content” (emphasis added). Elsewhere, even these removals that platforms “perhaps” “attempted” are elided. “Once the platform and sorting-tool algorithms were up and running,” the Court says, “defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.”

Content removal is a strange thing to leave out, since it was recognized by plaintiffs and discussed at some length by defendants. The Court didn’t have to discuss it to resolve the case; as it framed the question, liability depended on whether platforms gave ISIS special assistance. So platforms’ active efforts to do the opposite technically didn’t matter. Still, given the central role of content removal in both parties’ legal arguments and extensive briefing from amici—and in high-profile public debates and litigation—the omission is striking. 

Terminating or Deplatforming Users’ Accounts

Major public platforms’ practice of deplatforming particular speakers by terminating their accounts is well known, thanks in part to disputes with figures like former President Trump. Platforms often terminate accounts of users who repeatedly violate the rules, or whose violations are particularly egregious. Platforms also must have account termination policies to qualify for business-critical immunities under the DMCA. As mentioned above, Twitter told the Court in Taamneh that it had terminated hundreds of thousands of ISIS accounts. It described its user base as “[p]eople who promise to follow Twitter’s rules and terms of use[.]”

In the Taamneh ruling, though, account termination is even more invisible than content removal. It is mentioned only once, in the footnote saying platforms may have “attempted” to terminate ISIS accounts. Elsewhere, the Court describes ISIS as “able to upload content to the platforms and connect with third parties, just like everyone else[,]” and chides the U.S. Court of Appeals for the Ninth Circuit for failing to consider that “defendants’ platforms and content-sorting algorithms were generally available to the internet-using public[.]”

These are, again, odd characterizations. Plaintiffs did not dispute that platforms blocked ISIS accounts. On the facts they alleged, platforms treated ISIS “just like everyone else” only in the very limited sense that ISIS members, like everyone else, could be kicked off the platform for promoting ISIS. (And in reality, an ISIS-operated account that posted only kitten pictures, or didn’t post any content at all, would likely still be terminated, both under platforms’ internal rules and as a step to avoid potential liability under the material support statute.) Downplaying platforms’ account termination practices ultimately doesn’t matter for liability in Taamneh, since the Court asks only whether platforms helped ISIS. But if platforms really did allow “everyone” to maintain accounts, as the Court suggests, it could matter for other legal purposes—including Justice Thomas’s arguments supporting common carriage in the Texas and Florida cases.

Enforcing Discretionary Policies Against Lawful Speech

Platforms remove user content for other reasons besides avoiding liability. They also do it to enforce their own rules against “lawful but awful” content like hate speech or electoral disinformation. As Justice Ketanji Brown Jackson noted in oral arguments for Gonzalez, rights to remove such “offensive” content were core to platforms’ legal arguments. Platforms’ decisions about what lawful user speech to deem offensive have, famously, prompted accusations of anti-conservative bias or what Justice Thomas has called “private censorship” that “stifled” speech.

The platforms’ briefs in Taamneh described rules against potentially lawful speech such as “gore” and “violence.” A trade association amicus went into more detail, saying that “[i]n addition to illegal speech, online services remove ‘lawful but awful’ speech” including “pornography, harassing or abusing other users, hate speech, and spam.”

This aspect of platforms’ content moderation is not mentioned in the ruling at all. It didn’t need to be, since Taamneh presented a question about liability for legally prohibited content. But, like the other omissions, it may leave readers with a rather lopsided understanding of what platforms actually do.

Proactively Screening Uploaded Content to Block Prohibited Material

The three Taamneh defendants all used duplicate-detection tools to proactively screen and block certain content from being uploaded. By the time of the case, all three had long had systems to check all user uploads against hashed “fingerprints” of known child sexual abuse material. YouTube and Facebook also screened all uploads for copyright infringement. (The three platforms also later became co-developers, along with Microsoft, of a now widely used system for screening all uploaded content against a database of known terrorist material. These filters—which were described to the Court in Gonzalez briefs—have since been adopted by at least 18 platforms.) The Taamneh plaintiffs’ argument was in part that platforms should be liable because, at the time of the attacks, they did not yet screen for terrorist content. They took “no steps of their own to detect terrorist material,” plaintiffs alleged, despite having the technical capacity to do so.
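
To make the mechanics concrete, here is a minimal sketch of hash-based upload screening: the platform computes a fingerprint of each upload and blocks it if the fingerprint appears in a database of known prohibited material. This is an illustration only, not any platform’s actual code; production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that still match after re-encoding or cropping, whereas this sketch uses an exact cryptographic hash, and the “known prohibited” database here is hypothetical.

```python
import hashlib

# Minimal sketch of hash-based upload screening; not any platform's actual
# system. Real deployments use perceptual hashes (e.g., PhotoDNA) so that
# re-encoded or cropped copies still match; an exact cryptographic hash is
# used here only to keep the illustration short.

def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint ("hash") of an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

# Hypothetical database of fingerprints of known prohibited files.
KNOWN_PROHIBITED = {fingerprint(b"example known prohibited file")}

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload may be published, or False if it matches a
    known prohibited fingerprint and should be blocked before publication."""
    return fingerprint(file_bytes) not in KNOWN_PROHIBITED

print(screen_upload(b"ordinary user upload"))           # True: allowed
print(screen_upload(b"example known prohibited file"))  # False: blocked
```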

The ruling paints a different picture. It says the platforms did “little to no front-end screening” before displaying users’ posts. There is, it wrote, “not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.” That may accurately describe plaintiffs’ allegations—which, again, were what mattered for this case. But anyone with a passing familiarity with platforms’ operations in 2017 would know that this characterization isn’t correct. Like many other parts of the ruling, it makes platforms sound much more passive and neutral as to user content than they were at the time, or are now.

Algorithmically Promoting or Demoting Posts Based on Their Content

Platforms’ algorithmic ranking of user content was an unavoidable part of Taamneh and Gonzalez, because plaintiffs argued that ranking created liability and took platforms outside the immunities of Section 230. Platforms argued that algorithmic ranking was an essential part of the editorial activity immunized by that statute (I argued the same in an amicus brief with the American Civil Liberties Union). The Court received at least three briefs from computer scientists explaining how ranking works. The briefs, as I read them, support the factual proposition that algorithms rank users’ posts based on their content. But the Taamneh ruling seems uncertain and inconsistent about this important question.

At points, the Taamneh ruling seems to recognize that algorithms take into account “information … about the content” of posts. At others, it calls algorithms “agnostic as to the nature of the content” and implies that they rely only on noncontent signals such as user behavior or engagement, simply matching “any content (including ISIS’ content) with any user who is more likely to view that content[.]” The ruling asserts that “[v]iewed properly, defendants’ ‘recommendation’ algorithms are … infrastructure”—a characterization that makes them sound more like dumb pipes than editorial tools.

In the Court’s defense, whether algorithms should be considered “content based” is partly a matter of semantics. Ranking algorithms often factor in machine-cognizable information about content, like whether machine learning models predict that an image includes nudity. YouTube, Facebook, and Twitter have all at times trumpeted their algorithmic demotion of so-called borderline content that comes close to violating the platforms’ speech rules. Since the speech rules themselves are content based, one would think these ranking choices are as well. Overall, the goal of ranking algorithms is to prioritize material according to content-based attributes like subject matter, relevance, or authoritativeness. Algorithms’ success, as judged by platforms’ human evaluators using frameworks like Google’s Search Quality Evaluator Guidelines, explicitly depends on what content they surface. That said, algorithms don’t “know” what message a post conveys in the way a human would. That’s why they make mistakes humans might not, like assuming any image with a swastika is pro-Nazi. In that narrower sense, one could perhaps argue that algorithms are not considering “content” but, rather, “data” or “signals” about the content.
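
To see why the semantics get slippery, consider a toy ranking function, sketched below with made-up signals and weights rather than anything drawn from a real platform. Even though no human reads the post, the score turns on content-derived predictions (topical relevance, proximity to a speech rule) alongside behavioral signals like engagement, and a high “borderline” prediction demotes the post in roughly the way platforms have publicly described.

```python
from dataclasses import dataclass

@dataclass
class Post:
    relevance: float   # content-derived: predicted topical match to this viewer
    borderline: float  # content-derived: model's estimate the post nears a speech rule
    engagement: float  # behavioral: clicks or likes per impression, not about meaning

def rank_score(post: Post) -> float:
    """Toy scoring function with made-up weights: content-derived signals both
    boost (relevance) and demote (borderline prediction) a post's ranking."""
    base = 0.6 * post.relevance + 0.4 * post.engagement
    return base * (1.0 - 0.5 * post.borderline)  # demote "borderline" content

# A highly engaging but borderline post can rank below a tamer, relevant one.
feed = [Post(relevance=0.9, borderline=0.0, engagement=0.2),
        Post(relevance=0.5, borderline=0.8, engagement=0.9)]
feed.sort(key=rank_score, reverse=True)
```

Whether a court calls those inputs “content” or merely “signals about content” is the definitional question the ruling leaves unsettled.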

On the Taamneh ruling’s face, algorithms’ relationship to content seems more legally consequential than some other content moderation topics. After calling algorithms “agnostic” as to content, for example, the Court says ranking “thus does not convert defendants’ passive assistance into active abetting” (emphasis added). That makes it sound like algorithms’ putative content-neutrality matters for liability purposes.

In practice, though, I’m not sure what a winning plaintiff’s claim based on ranking under Taamneh would look like. As Eric Goldman writes, lower courts are “likely to focus on the opinion’s broad holding, not the less-than-optimal wording details,” and its “clear and broad takeaway is that the services as currently configured are not aiding-and-abetting.” If the Twitter, Facebook, and YouTube ranking at issue in Taamneh was acceptable, most ordinary ranking tools should be, too. (An exception could be the often-hypothesized case of a platform that deliberately designed its algorithm to promote ISIS content. In that scenario, though, other key considerations like the platform’s knowledge and intent to help ISIS would also be very different from anything in Taamneh or typical content liability cases.) In must-carry cases like the ones in Texas and Florida, the idea that algorithms can be content agnostic might matter more, suggesting that regulators could (very, very hypothetically) identify when platform ranking has deviated from a “fair” or “neutral” baseline.

***

The divergence between the facts explained to the Court and the facts explained by the Court in Taamneh could become a footnote in the history of U.S. platform law. I hope that happens for the right reason: because future courts rely only on Taamneh’s legal reasoning and not its descriptions of platform behavior. But the discrepancies could also be forgotten for the wrong reasons: because future courts take the ruling at face value, interpreting its factual characterizations as reasons why platforms won the case, or using it as a reliable source of information about how platforms work generally. Last term’s Supreme Court clerks were aware of all the information that was detailed in briefs but left out in Taamneh. Current and future clerks, as well as clerks and judges in lower courts, will not be.

Taamneh’s descriptions of platforms as passive, content-agnostic forums may be invoked opportunistically by parties on all sides in future litigation. Plaintiffs suing over unlawful user content may argue that YouTube, Twitter, and Facebook prevailed in Taamneh by being passive—but that those same platforms’ more active content moderation efforts today give them more knowledge about user content and thus more legal responsibility. Must-carry proponents may argue, as Texas has now done, that platforms can’t assert First Amendment rights or object to common carriage obligations, because they never played any editorial role in the first place. Courts should be wary of such arguments. Taamneh’s factual descriptions should not lead to unwarranted legal consequences.


Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
