Published by The Lawfare Institute
Social media companies are not liable for ISIS attacks that victims’ families claimed resulted from algorithms promoting terrorist content on their platforms, the Supreme Court ruled Thursday.
The justices declined to narrow Section 230 of the Communications Decency Act, the decades-old statute that shields internet companies from liability for third parties’ content. In Twitter v. Taamneh, the justices unanimously declined to hold Twitter liable for aiding and abetting ISIS under the Anti-Terrorism Act (ATA) and the Justice Against Sponsors of Terrorism Act (JASTA). In light of Taamneh, they remanded Gonzalez v. Google to the Ninth Circuit for reconsideration of the plaintiffs’ complaint.
In Taamneh, the family of Nawras Alassaf, who was killed in a 2017 ISIS nightclub attack in Istanbul, claimed Twitter, Facebook, and YouTube were liable for the violence under Section 2333(d)(2) of the ATA, via JASTA. In their argument that Twitter had aided and abetted ISIS, the plaintiffs claimed Twitter permitted ISIS to upload content, took insufficient steps to remove that content, and allowed its algorithms to recommend that content to third parties. But the Court found these relationships did not lead to the type of “culpable association” or active participation needed to move the claims forward.
Writing for the Court, Justice Clarence Thomas said the relationship between the social media companies and the ISIS attack was too attenuated to characterize the companies as anything more than careless bystanders.
“At bottom, the allegations here rest less on affirmative misconduct and more on passive nonfeasance,” Justice Thomas wrote. “To impose aiding-and-abetting liability for passive nonfeasance, plaintiffs must make a strong showing of assistance and scienter. Plaintiffs fail to do so.”
The Court’s reasoning relied on the elements of “aiding and abetting” established by the D.C. Circuit’s 1983 ruling in Halberstam v. Welch. Although the plaintiffs satisfied Halberstam’s first two requirements by showing that ISIS committed a wrong and that the social media companies played a role in that wrong, they failed to prove that the companies provided “knowing and substantial assistance” to ISIS.
“Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally,” Justice Thomas wrote. “And defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered.”
Justice Ketanji Brown Jackson, in a two-paragraph concurrence to the Taamneh opinion, noted that the Court’s view of the social media platforms and algorithms at issue in both complaints was narrowed to the particular cases at hand—and could not necessarily be generalized to other future situations.
“Both cases came to this Court at the motion-to-dismiss stage, with no factual record,” Justice Jackson wrote. “Other cases presenting different allegations and different records may lead to different conclusions.”
In a unanimous per curiam opinion, the justices found that claims by the Gonzalez plaintiffs—the relatives of Nohemi Gonzalez, a 23-year-old U.S. citizen killed in a 2015 ISIS terrorist attack in Paris—were materially identical to those in Taamneh. As a result, the claims failed under Taamneh and the Ninth Circuit’s earlier ruling in the case, according to the opinion.
In 2022, the Ninth Circuit found that Section 230 barred most of the plaintiffs’ claims, with two potential exceptions: the direct and secondary liability claims that alleged Google and ISIS shared the proceeds of the terrorist group’s YouTube videos using the platform’s revenue sharing system.
The plaintiffs did not seek the Court’s review of the revenue-sharing claims but instead challenged the Ninth Circuit’s application of Section 230 to their arguments that Google, via YouTube, aided, abetted, and conspired with ISIS. The justices declined to reach even this threshold question.
“In light of those unchallenged holdings and our disposition of Twitter, on which we also granted certiorari and in which we today reverse the Ninth Circuit’s judgment,” the justices wrote, “it has become clear that plaintiffs’ complaint—independent of §230—states little if any claim for relief.”
The justices’ reticence echoed remarks Justice Elena Kagan made at oral argument in Gonzalez. “We really don’t know about these things,” she said of the Court. “These are not, like, the nine greatest experts on the internet.”
In the meantime, both sides of the aisle have signaled interest in reforming Section 230. Conservatives criticize the current version of Section 230 for giving platforms cover to “censor” conservative voices. Liberals, meanwhile, argue that Section 230 disincentivizes platforms from moderating harmful or abusive content.
Legislative reforms could adjust the scope of Section 230 to encompass the algorithmic recommendation systems underlying both cases. There are also pending questions about how Section 230 applies to novel types of content, such as that produced by generative artificial intelligence (AI). Generative models are owned and hosted by the platforms, yet they respond to user-generated prompts. If a model’s outputs are not human-generated, as copyright precedent has suggested, it remains unclear who may be liable for content that raises concerns.