
The Wall Street Journal Misreads Section 230 and the First Amendment

Berin Szóka, Ari Cohn
Wednesday, February 3, 2021, 3:43 PM

In a new Wall Street Journal op-ed, Philip Hamburger argues that “the government, in working through private companies, is abridging the freedom of speech.” This argument doesn’t hold water.



When private tech companies moderate speech online, is the government ultimately responsible for their choices? This appears to be the latest argument advanced by those criticizing Section 230 of the Telecommunications Act of 1996—sometimes known as Section 230 of the Communications Decency Act. But upon closer scrutiny, this argument breaks down completely.

In a new Wall Street Journal op-ed, Philip Hamburger argues that “the government, in working through private companies, is abridging the freedom of speech.” We’ve long respected Hamburger, a professor at Columbia Law School, as the staunchest critic of overreach by administrative agencies. Just last year, his organization (the New Civil Liberties Alliance) and ours (TechFreedom) filed a joint amicus brief to challenge such abuse. But the path proposed in Hamburger’s op-ed would lead to a regime for coercing private companies to carry speech that is hateful or even downright dangerous. The storming of the U.S. Capitol should make clear once and for all why all major tech services ban hate speech, misinformation and talk of violence: Words can have serious consequences—in this case, five deaths, in addition to two subsequent suicides by Capitol police officers.

Hamburger claims that there is “little if any federal appellate precedent upholding censorship by the big tech companies.” But multiple courts have applied the First Amendment and Section 230 to protect content moderation, including against claims of unfairness or political bias. Hamburger’s fundamental error is claiming that Section 230 gives websites a “license to censor with impunity.” Contrary to this popular misunderstanding, it is the First Amendment—not Section 230—that enables content moderation. Since 1997, the Supreme Court has repeatedly held that digital media enjoy the same First Amendment rights as newspapers. When a state tried to impose “fairness” mandates on newspapers in 1974, forcing them to carry third-party speech, no degree of alleged consolidation of “the power to inform the American people and shape public opinion” in the newspaper business could persuade the Supreme Court to uphold such mandates. The court has upheld “fairness” mandates only for one medium—broadcasting, in 1969—and only because the government licenses use of publicly owned airwaves, a form of “state action.”

Websites have the same constitutional right as newspapers to choose whether or not to carry, publish or withdraw the expression of others. Section 230 did not create or modify that right. The law merely ensures that courts will quickly dismiss lawsuits that would have been dismissed anyway on First Amendment grounds—but with far less hassle, stress and expense. At the scale of the billions of pieces of content posted by users every day, that liability shield is essential to ensure that website owners aren’t forced to abandon their right to moderate content by a tsunami of meritless but costly litigation.

Hamburger focuses on Section 230(c)(2)(A), which states: “No provider or user of an interactive computer service shall be held liable on account of ... any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” But nearly all lawsuits based on content moderation are resolved under Section 230(c)(1), which protects websites and users from being held liable as the “publisher” of information provided by others. In the 1997 Zeran decision, the U.S. Court of Appeals for the Fourth Circuit concluded that this provision barred “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content” (emphasis added).

The Trump administration argued that these courts all misread the statute because their interpretation of 230(c)(1) has rendered 230(c)(2)(A) superfluous. But the courts have explained exactly how these two provisions operate differently and complement each other: 230(c)(1) protects websites only if they are not responsible, even “in part,” for the “development” of the content at issue. If, for example, they edit that content in ways that contribute to its illegality (say, deleting “not” in “John is not a murderer”), they lose their 230(c)(1) protection from suit. Because Congress aimed to remove all potential disincentives to moderate content, it included 230(c)(2)(A) as a belt-and-suspenders protection that would apply even in this situation. Hamburger neglects all of this and never grapples with what it means for 230(c)(1) to protect websites from being “treated as the publisher” of information created by others.

Hamburger makes another crucial error: He claims Section 230 “has privatized censorship” because 230(c)(2)(A) “makes explicit that it is immunizing companies from liability for speech restrictions that would be unconstitutional if lawmakers themselves imposed them.” But in February 2020, the U.S. Court of Appeals for the Ninth Circuit ruled that YouTube was not a state actor and therefore could not possibly have violated the First Amendment rights of the conservative YouTube channel Prager University by flagging some of its videos for “restricted mode,” which parents, schools and libraries can turn on to limit children’s access to sensitive topics.

Hamburger insists otherwise, alluding to the Supreme Court’s 1946 decision in Marsh v. Alabama: “The First Amendment protects Americans even in privately owned public forums, such as company towns.” But in 2019, Justice Brett Kavanaugh, writing for all five conservative justices, noted that in order to be transformed into a state actor, a private entity must be performing a function that is traditionally and exclusively performed by the government: “[M]erely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.” In fact, Marsh has been read very narrowly by the Supreme Court, which has declined to extend its holding on multiple occasions and certainly has never applied it to any media company.

Hamburger also claims that Big Tech companies are “akin to common carriers.” He’s right that “the law ordinarily obliges common carriers to serve all customers on terms that are fair, reasonable and nondiscriminatory.” But simply being wildly popular does not transform something into a common carrier service. Common carriage regulation protects consumers by ensuring that services that hold themselves out as serving all comers equally don’t turn around and charge higher prices to certain users. Conservatives may claim that’s akin to social media services saying they’re politically neutral when pressed by lawmakers at hearings, but the analogy doesn’t work. Every social media service makes clear up front that access to the service is contingent on complying with community standards, and the website reserves the discretion to decide how to enforce those standards—as the U.S. Court of Appeals for the D.C. Circuit noted recently in upholding the dismissal of a lawsuit by far-right personality Laura Loomer over her Twitter ban. In other words, social media are inherently edited services.

Consider the Federal Communications Commission’s 2015 Open Internet Order, which classified broadband service as a common carrier service insofar as an internet service provider (ISP) promised connectivity to “substantially all Internet endpoints.” Kavanaugh, then an appellate judge, objected that this infringed the First Amendment rights of ISPs. Upholding the FCC’s net neutrality rules, the U.S. Court of Appeals for the D.C. Circuit explained that the FCC’s rules would not apply to “an ISP holding itself out as providing something other than a neutral, indiscriminate pathway—i.e., an ISP making sufficiently clear to potential customers that it provides a filtered service involving the ISP’s exercise of ‘editorial intervention.’” Social media services make that abundantly clear. And while consumers reasonably expect that their broadband service will connect them to all lawful content, they also know that social media sites won’t let them post everything they want.

Hamburger is on surer footing when commenting on federalism and constitutional originalism: “[W]hen a statute regulating speech rests on the power to regulate commerce, there are constitutional dangers, and ambiguities in the statute should be read narrowly.” But by now, his mistake should be obvious: Section 230 doesn’t “regulat[e] speech.” In fact, it does the opposite: It says the government won’t get involved in online speech and won’t provide a means to sue websites for their refusal to host content.

Hamburger doubles down by claiming that Section 230 allows the government to “set the censorship agenda.” But neither immunity provision imposes any “agenda” at all; both leave it entirely to websites to decide what content to remove. Section 230(c)(1) does this by protecting all decisions made in the capacity of a publisher. Section 230(c)(2)(A) does this by providing an illustrative list of categories (“obscene, lewd, lascivious, filthy, excessively violent, harassing”) and then adding the intentionally broad catchall: “or otherwise objectionable.” Both are coextensive with the First Amendment’s protection of editorial discretion.

Hamburger argues for a “narrow” reading of 230(c)(2)(A), under which the statute would not protect moderating content for any reason that falls outside those enumerated categories, or moderating content because of its viewpoint. He claims that this will allow state legislatures to “adopt civil-rights statutes protecting freedom of speech from the tech companies.” And he reminds readers about the dangers of the government co-opting private actors to suppress free speech: “Some Southern sheriffs, long ago, used to assure Klansmen that they would face no repercussions for suppressing the speech of civil-rights marchers.” This analogy fails for many reasons, not least because those sheriffs flouted laws requiring them to prosecute those Klansmen. That is markedly and obviously different from content moderation, which is protected by the First Amendment.

Ironically, Hamburger’s proposal would require the government to take the side of those spreading hate and falsehoods online. Under his “narrow” interpretation of Section 230, the law would not protect the removal of Holocaust denial, use of racial epithets or the vast expanse of speech that—while constitutionally protected—isn’t anything Hamburger, or any decent person, would allow in his own living room. Nor, for example, would it protect removal of hate speech about Christians or any other religious group. Websites would bear the expense and hassle of fighting lawsuits over moderating content that did not fit squarely into the categories mentioned in 230(c)(2)(A).

Perversely, the law would favor certain kinds of content moderation decisions over others, protecting websites from lawsuits over removing pornography or profanity, but not from litigation over moderating false claims about election results or vaccines or conspiracy theories about, say, Jewish space lasers or Satanist pedophile cannibal cults. But if Hamburger’s argument is that Section 230 unconstitutionally encourages private actors to do what the government could not, how does favoring moderation of some types of constitutionally protected speech over others address this complaint? This solution makes sense only if the real criticism isn’t of the idea of content moderation, or its constitutionality, but rather that social media platforms aren’t moderating content according to the critic’s preferences.

Hamburger is a constitutional originalist, and he invokes the Framers’ understandings of the First Amendment: “Originally, the Constitution’s broadest protection for free expression lay in Congress’s limited power.” But there’s nothing remotely originalist about his conclusion. His reading of Section 230 would turn “Congress shall make no law...” into a way for the government to pressure private media to carry the most odious speech imaginable.


Berin Szóka is President of TechFreedom, a think tank dedicated to technology law and policy. Before founding TechFreedom in 2010, Berin was a Senior Fellow and the Director of the Center for Internet Freedom at The Progress & Freedom Foundation. Previously, he practiced telecommunications and Internet law at Latham & Watkins LLP and Lawler Metzger Milkman & Keeney, LLC, and clerked for a federal district judge. He is a graduate of the University of Virginia School of Law.
Ari Cohn is a Chicago attorney who specializes in First Amendment, defamation, and other speech-related matters and is a Senior Adjunct Fellow at TechFreedom. Prior to establishing his own practice, he served as Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education and was an associate in the Chicago office of Mayer Brown LLP.
