
Have Trouble Understanding Section 230? Don’t Worry. So Does the Supreme Court.

Jeff Kosseff
Thursday, March 7, 2024, 12:31 PM

Contrary to suggestions during the NetChoice oral arguments, Section 230 does not require platforms to be "neutral."

Columns of the Supreme Court of the United States (Ron Coleman, https://www.flickr.com/photos/roncoleman/; CC BY-NC 2.0 DEED, https://creativecommons.org/licenses/by-nc/2.0/)


Last year, the Supreme Court had a chance to interpret Section 230 of the Communications Decency Act for the first time. The Court was hearing Gonzalez v. Google, in which the lower court held that the 1996 law shielded Google from a lawsuit filed by the family of an Islamic State shooting victim. 

During oral arguments, the justices seemed to realize, in real time, that Section 230 was better left to Congress than the Court. "We really don't know about these things," Justice Elena Kagan said. "You know, these are not, like, the nine greatest experts on the internet." So it wasn't a surprise in May when the Court entirely punted on Section 230 and decided the case on narrower grounds.

Almost exactly a year later, on Feb. 26, the nine non-experts were back in the courtroom, hearing arguments in another vital technology case. NetChoice, a technology trade group, was challenging Florida and Texas laws that restrict large platforms’ ability to moderate user content. This time, Section 230 was not directly before the Court. Instead, the Court was examining whether the platforms have a First Amendment right to moderate. Still, the justices and lawyers mentioned Section 230 more than 70 times during the four hours of arguments. 

And some of the mentions were, well, doozies. I published a book about the statute’s history five years ago, just as commentators and politicians across the political spectrum increased their skepticism of the need for such sweeping immunity. As the Section 230 debate intensified, so did the myths about the law’s history, purpose, and operation. And those misinterpretations were on full display during the NetChoice oral arguments. 

In his questions to Solicitor General Elizabeth Prelogar, who argued on behalf of the U.S. government against the state laws, Justice Neil Gorsuch compared social media platforms to a category of businesses that includes phone companies. “Isn’t the whole premise … of Section 230 that they are common carriers, that—that they’re not going to be held liable in part because it isn’t their expression, they are a conduit for somebody else?” 

Likewise, Justice Clarence Thomas, who has previously criticized broad Section 230 protections, asked NetChoice’s lawyer, Paul Clement, whether Section 230 protection means that the platforms are mere “conduits” that carry others’ content. 

“[T]he argument under Section 230 has been that you’re merely a conduit ... that was the case back in the ’90s and perhaps the early 2000s,” Thomas said. “Now you’re saying that you are engaged in editorial discretion and expressive conduct.”

Common carriers such as telephone companies generally receive broad “conduit” liability protection for user content. In other words, Verizon isn’t liable for defamation that occurs over its phone lines. Common carriers also face non-discrimination requirements; for instance, Verizon can’t discontinue phone service because it doesn’t like a customer’s political views. 

Underlying Gorsuch’s and Thomas’s questions is the argument that Section 230 reflects Congress’s desire to convert internet platforms into common carriers, or neutral conduits that carry all user content equally. This is one of the most pervasive claims about Section 230 that has emerged in recent years. And it is absolutely incorrect.

A review of Section 230’s history reflects Congress’s desire to ensure that platforms have the breathing space to delete some user content while not facing liability for the material they leave up. Before Section 230, the common law recognized different types of companies that carry third-party content and attached different liability rules to each. 

First, publishers—such as a newspaper that prints letters to the editor—faced the same potential liability as the authors. So if a New York Times letter to the editor defamed someone, the Times could face a defamation lawsuit just as the author could. Second were "distributors" such as newsstands and bookstores, which were liable for the books and magazines they sold only if they knew or had reason to know of the harmful content. And third were common carriers such as telephone companies, which received the strongest liability protections.

Congress passed Section 230 because early courts struggled to apply these categories to new online services in the early 1990s. A New York federal judge in 1991 blocked a lawsuit against CompuServe, concluding that because it exercised no “editorial control,” it was a distributor. “CompuServe has no more editorial control over such a publication than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so,” the court ruled.

But a few years later, in Stratton Oakmont v. Prodigy, a New York state trial court concluded CompuServe’s competitor was a “publisher” that was just as liable for potential defamation as the author. The court reasoned that Prodigy exercised far more “editorial control,” with detailed content policies and moderators. “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability than CompuServe and other computer networks that make no such choice,” the court wrote. In other words, because Prodigy moderated, it was a publisher and not a distributor. 

Lawmakers—who at the time were concerned about the proliferation of online pornography on an internet that was increasingly available in schools and libraries—recognized that the two decisions created strong incentives for platforms either to take an entirely hands-off approach or to block everything that might possibly generate controversy. After all, more “editorial control” could lead to greater liability. So a rational platform might avoid moderating harmful content to receive the protections of CompuServe. But if a platform did engage in moderation, then it might see the need to block any user content that could potentially lead to a lawsuit, as it would then face the same liability standard as Prodigy. 

Congress addressed this perverse incentive throughout 1995 as it was overhauling the nation’s telecommunications laws for the first time in 60 years. Then-Reps. Chris Cox (R-Calif.) and Ron Wyden (D-Ore.) introduced an amendment that would become Section 230, which had two main provisions. Section (c)(1) states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

Section (c)(1) precludes the Prodigy outcome by ensuring that online services are not classified as “publishers” regardless of the “editorial control” that they exercise. When the amendment came up for debate on the House floor, Cox said it would “protect [the online services] from taking on liability such as occurred in the Prodigy case in New York that they should not face for … helping us [block offensive content].”

Section (c)(2) further demonstrates a desire to encourage moderation, stating that platforms cannot be liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

In other words, platforms are not liable for the user content they leave up, regardless of whether they exercised “editorial control” over other user content, and they are also generally not liable for efforts to remove content. Indeed, the conference report stated that Section 230’s purpose was to “overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material.”

Soon after Section 230's passage, the U.S. Court of Appeals for the Fourth Circuit interpreted the statute broadly, holding not only that online services cannot be classified as publishers of user content but also that they receive greater immunity than the middle-ground protection afforded to "distributors," which could be liable if they knew or had reason to know of harmful content. Under Section 230, then, platforms receive a liability shield roughly as broad as the one that common carriers enjoy.

But that does not mean that Section 230 requires that platforms satisfy the “neutrality” or non-discrimination obligations that the legal system imposes on common carriers. Section 230’s history evinces an intent for the exact opposite: to give the online services breathing space to set their own moderation policies and practices.

Nor does Section 230 require courts or regulators to treat online platforms as common carriers, making these tangents at oral argument red herrings. And even if Section 230 did attempt to confer a form of common carrier status on platforms, Congress could not revoke the platforms' First Amendment rights to editorial discretion. Imagine the chaos that would result if lawmakers could take away speakers' First Amendment rights just by granting them immunity from lawsuits.

At other times, the justices seemed to be using Section 230 to make a somewhat more nuanced—yet equally flawed—argument. If platforms claim that their moderation is First Amendment-protected expression, Gorsuch reasoned, that means that they played at least a partial role in the creation. “And if it’s now their communication in part, do they lose their 230 protections?” Gorsuch asked during the arguments. 

Section 230 applies only to content that was created entirely by a third party such as a user. If a platform created the content “in whole or in part,” it will not receive protection. For instance, if someone posts a defamatory claim on Facebook, the poster could be liable, but Section 230 would shield Facebook. But if Facebook had a section of its site with news articles written by Facebook employees, Facebook could face liability in defamation lawsuits arising from those articles.

Justice Gorsuch’s argument ignores the distinction between two separate actions—creating the content and moderating it. The platforms are not claiming that the Texas and Florida laws violate their First Amendment rights to create the user-generated content. They argue that their moderation decisions—deleting or leaving up content, banning users, prioritizing posts, and the like—are First Amendment-protected editorial choices. Those are the exact sorts of activities that Section 230 encourages platforms to do. For the purposes of Section 230, these editorial activities are in a completely separate category from content creation.

Prelogar correctly urged Gorsuch to consider the two different types of speech. “There are the individual user posts on these platforms, and that’s what 230 says that the platforms can’t be held liable for,” she said. “The kind of speech that we think is protected here under the First Amendment is not each individual post of the user but, instead, the way that the platform shapes that expression by compiling it, exercising this kind of filtering function.”

Many vital First Amendment issues are at stake in the NetChoice case, and we already face a great risk that the Supreme Court will erode quarter-century-old, robust First Amendment protections for online speech. At the very least, I hope that the nine non-greatest experts on the internet continue to defer to Congress on the future of Section 230.


Jeff Kosseff is a nonresident senior legal fellow at The Future of Free Speech Project. The views expressed in this piece are only his.
