
Did Congress Immunize Twitter Against Lawsuits for Supporting ISIS?

Benjamin Wittes, Zoe Bedell
Friday, January 22, 2016, 9:14 AM

Back in July, we wrote a lengthy piece about whether Apple could conceivably face civil liability for providing end-to-end encryption to criminals and terrorists. Last week, we wrote about a lawsuit against Twitter that is based on substantially the same legal theory we had outlined in the earlier post.

In response to both posts, we received pushback from analysts who thought we had missed a key—or perhaps, the key—defense: Section 230 of the Communications Decency Act (47 U.S.C. § 230).

The argument that this particular statute has immediate bearing on the case is not intuitive. CDA § 230, as the statute is commonly known, reads in relevant part that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law is a really important protection for internet companies in a whole variety of areas, but it didn’t initially seem all that relevant to the question of whether Apple or Twitter might be held liable for allegedly violating criminal laws against material support for terrorism. So we focused instead on the defenses the internet companies might use assuming they were not granted blanket immunity from liability under the law.

But the law turns out to have considerable bearing on the discussion, and we’d like to thank James Grimmelmann of the University of Maryland law school for pushing us to take a closer look at CDA § 230. That closer look reveals that both companies would, indeed, have powerful arguments for immunity under the law. For reasons we shall explain in this post, we think those arguments should probably not prevail under either the extensive extant § 230 case law or the plain text of the statute itself. That said, it’s a close question, and in the Twitter case filed last week, we suspect it’s likely to be the first question litigated.

CDA § 230 was originally passed in the wake of a court case holding an online service provider, Prodigy, legally responsible for a libelous message posted on one of its boards. Liability hinged on the fact that the company had removed some, but not all, of the offending content, so the practical effect of the ruling was to discourage websites from trying to moderate content at all. To avoid this outcome, Congress passed CDA § 230, explicitly eliminating carrier liability for content generated by third parties.

The language of § 230 is broad and has been interpreted more broadly still. The section first lays out Congress’s findings and policy goals before launching into the substantive provision, § 230(c). Section 230(c)(1) has done the most work legally, even though that provision does not explicitly mention liability, instead just stating that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker” of content created by a third party. (Section 230(c)(2) states that no provider or user of an interactive computer service will be subject to liability for having attempted to “restrict access” to inappropriate content.) Courts have nonetheless interpreted § 230(c)(1) not as a definitional provision but as a substantive one, granting providers and users of online services immunity from suits based on third-party-generated content, whether or not the website otherwise tried to remove or police that content.

To be more specific about who is protected: “interactive computer service” is defined in § 230(f)(2) as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.” The scope of this definition does not appear to have been extensively litigated, but it has so far been understood to provide robust immunity to websites and ISPs, as well as to internet companies and the managers of listservs.

How robust? Well, as an initial matter, suits alleging things like defamation simply can’t touch service providers that merely offered neutral tools through which offending content got delivered. Remember when Clinton White House aide Sidney Blumenthal sued Matt Drudge and AOL over material Drudge had posted to AOL? The CDA protected AOL. Moreover, this immunity applies even when the company profits off the information, when the content posted is itself unlawful (including the posting of advertisements related to sex trafficking), and when the website provider had been notified about the problematic content.

Courts have also rejected creative attempts to plead around § 230, declining to hold companies like MySpace liable for failing to protect users, or Google liable for receipt of tainted funds from fraudulent advertising. Courts have recognized these efforts as “merely another way of claiming that [the website] was liable for publishing the communications.”

There are, to be sure, limits. For example, in Fair Housing Council v. Roommates.com, the plaintiffs alleged that Roommates.com had violated federal and state housing discrimination laws. Roommates.com helped connect people renting out rooms with people looking for rooms, allowing users to specify and search by protected characteristics, such as age, gender, sexual orientation, and family status. So here the plaintiffs were alleging a bit more than just that the service provider was publishing offending material written by others.

The Ninth Circuit found that to the extent Roommates.com merely functioned as a publisher, offering a platform for communication and “passively display[ing] content that is created by third parties,” it was immune from suit for the content of those communications—even if the people it was connecting to one another were engaged in discriminatory conduct. But to the extent that the website created content itself—for example, by requiring users to input protected information—the website could be held liable for the content.

That said, the courts’ understanding of what it means to create the content is fairly limited. While immunity does not extend to designing forms that require the input of information in violation of discrimination law, it does extend to providing a questionnaire to solicit information or content when the answers aren’t required, to helping organize information on a website (such as creating a profile page with user-provided data on a social networking site), and to providing a search function to sort, analyze, and present the content. As the Fourth Circuit stated, it certainly appears that “Congress [has] made a policy choice . . . not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for other parties' potentially injurious messages.”

So far, this looks very promising for both Twitter and Apple. Both, after all, seem comfortably to meet the broad definition of “interactive computer service.” And in both cases, the companies could argue that liability—if imposed—would be flowing from content generated by third parties, not from anything the companies themselves produced. That appears to be exactly the liability that § 230 bars.

But here’s where things get complicated. The complaint against Twitter does not—and the hypothetical complaint against Apple would not—merely allege activity that might, but for § 230, give rise to civil liability. It alleges criminal activity on the part of Twitter (and Apple) itself. And notably, one of § 230’s limits is that it does not extend to the enforcement of federal criminal law or intellectual property law. Here’s § 230(e)(1): “Nothing in this section shall be construed to impair the enforcement of . . . any . . . Federal criminal statute.” Contrast this with § 230(e)(3), which preempts inconsistent state law, including state criminal laws that would treat service providers as “speakers” or “publishers”: “Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.”

Exactly what § 230(e)(1) means for a suit against Twitter or Apple is not entirely clear. On the one hand, the theory of liability in the complaint is predicated on a violation by the service provider of a federal criminal statute. On the other hand, it’s not at all clear that precluding civil recovery by individual plaintiffs can be said to “impair the enforcement” of the material support law. After all, the government remains free to prosecute Twitter if it thinks it can prove a case against the company. Preventing the widow of a terrorism victim from recovering arguably does not impair enforcement of the criminal law. It impairs, rather, the operation of the statute that gives victims the right to sue based on a violation of the criminal law.

The case law is not entirely clear on this point. As a result of § 230(e)(1), courts agree that “the ability of the government to prosecute internet service providers for alleged violations of [criminal law] is not disputed.” But some federal district courts in Texas and Mississippi have determined that this provision does not create an exception to civil immunity. Other courts, however, have not seemed so sure. The Seventh Circuit and a federal district court in Missouri both considered arguments that websites might lose their immunity if they aided and abetted a crime. Neither court decided the question, because the plaintiffs’ arguments fell short on other grounds. But neither rejected the idea that criminal culpability could abrogate civil immunity.

There is, in our view, a different reason why § 230 probably does not reach this situation. Note that all of the cases in which the courts to date have immunized service providers are cases predicated on offending content of some sort. That is, somebody posted something that was alleged to abridge someone else’s legal rights, and the question was whether or not the service provider bore some responsibility for the third party’s offending content. Construing § 230 broadly, the courts have held that holding the provider of “neutral tools” liable for such offending content makes the provider a “publisher” or “speaker.”

The material support laws, however, do not work this way. Liability under them does not depend on offending content—by the provider, by a third party, or by anyone. Consider 18 U.S.C. § 2339B, which provides that “[w]hoever knowingly provides material support or resources to a foreign terrorist organization, or attempts or conspires to do so, shall be fined under this title or imprisoned not more than 20 years, or both, and, if the death of any person results, shall be imprisoned for any term of years or for life.” There are many reasons to believe that Twitter has not violated this law by providing service to ISIS users (we spelled out some of Twitter’s defenses in our post last week), but note that if it has violated the law, the offense was complete the moment Twitter knowingly provided service to ISIS. The offense does not depend in any way on what ISIS may have tweeted, or even on whether ISIS used the service at all. If ISIS operatives tweeted cat videos, or tweeted nothing whatsoever, Twitter still would have violated the statute (assuming it did) the moment it knowingly provided “any property, tangible or intangible, or service, including . . . communications equipment” to operatives of a designated foreign terrorist organization.

In other words, one is not imposing liability under the material support laws based on any allegedly offending content. One is imposing liability based on the provision of service as an antecedent matter to a terrorist organization.

This seems to us fundamentally different from the cases in which the courts rejected creative pleadings. In those cases, the courts made clear, there was offending content, and the plaintiffs had merely found a way to plead their claims by pretending otherwise.

Based on this difference, we think it reasonably clear that § 230 would not protect a service provider like Twitter from civil liability under § 2333 for violating the material support law (assuming it did violate it), at least if a plaintiff managed to avoid relying on the contents of tweets as evidence. Such a plaintiff would have a clean argument that she is not asking the courts to treat Twitter as a “publisher” or a “speaker.” She is merely asking the courts to treat Twitter as a provider of material support to a designated foreign terrorist organization.

The trouble, of course, is that it’s pretty hard to imagine a plaintiff who could establish all of the elements required under § 2333 (particularly the causal relationship between the material support and the injury) without ever relying on the substance of third-party content. The plaintiff in the current case does a pretty good job of not relying on the substance of tweets, but there is still some reliance. And the complaint, already attenuated in the fashion we described last week, becomes far more so if you remove reference to the contents of those tweets. So Twitter could still argue that although the plaintiff does not need to rely on third-party content to establish a violation of the criminal law, she does need third-party content to establish the other elements of § 2333 liability.

This is uncharted territory, to our knowledge, but we think this argument on Twitter’s part would be implausible. To understand why, consider the following hypothetical. Imagine for a moment that Twitter ran a promotion in which it gave selected users the ability to run a certain number of “Promoted” tweets for free. Imagine that, knowing full well who he was, Twitter offered this promotion to Abu Bakr al-Baghdadi. Imagine further that al-Baghdadi then tweeted, “Kill Zoe Bedell and Benjamin Wittes now!” and that Twitter promoted that tweet to hundreds of thousands of people, one of whom read it, killed us both, and admitted that he did so in response to the tweet.

This scenario—unlike the facts in the current complaint—would involve a clear and unambiguous violation of the material support law. It would also involve clear and unambiguous causality, which the current complaint lacks. So the question would become whether our next of kin would be blocked from relying on the content of that tweet to establish § 2333 liability. Would it be treating Twitter as a “publisher” or a “speaker” to allow plaintiffs to cite the consequences of the company’s criminal conduct? Or would it merely be treating Twitter as a provider in violation of federal criminal law of material support to terrorists, who then used that material support to effectuate an act of terrorism?

To hold the former would really be to turn CDA immunity into an evidentiary privilege shielding the consequences of criminal behavior if those consequences happen to take place in an online forum.

The difference between our hypothetical and the facts pled in the complaint does not implicate in any way the structural relationship between CDA § 230 and liability for a material support violation under § 2333. That is, if you believe Twitter should not be immune in our al-Baghdadi hypothetical, it follows that § 230 immunity does not apply to the current case against Twitter either. Neither should it immunize Apple against the hypothetical lawsuit we imagined a few months ago. As we said in our earlier posts, Twitter will have, and Apple would have, strong defenses against these suits. And § 230 immunity is one argument that they will very likely wield. But properly understood in relationship to the material support statutes, it’s not an argument that should prevail.


Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
Zoe Bedell is an attorney in the Washington, D.C., office of the law firm Munger, Tolles & Olson LLP. Her practice focuses on complex commercial litigation, as well as privacy and technology issues. Before joining the firm, Zoe clerked for Justice Elena Kagan of the U.S. Supreme Court and for then-Judge Brett Kavanaugh of the U.S. Court of Appeals for the District of Columbia Circuit. Zoe received her J.D. from Harvard Law School, magna cum laude. Prior to law school, Zoe served as an officer in the U.S. Marine Corps, deploying twice to Afghanistan, and worked at an investment bank for two years.
