
Texas, Florida, and the Magic Speech Sorting Hat in the NetChoice Cases

Daphne Keller
Wednesday, February 21, 2024, 12:26 PM

Lots of people want laws to keep good speech online and bad speech offline. That isn’t what the Texas and Florida laws would do.

United States Supreme Court (Domenico Convertini, https://www.flickr.com/photos/42477684@N08/51012498763/; CC BY-SA 2.0 DEED, https://creativecommons.org/licenses/by-sa/2.0/)


On Feb. 26, Texas and Florida will go before the Supreme Court in the NetChoice cases to defend the constitutionality of their “must-carry” laws, which impose a range of new limits on internet platforms’ content moderation. Judges, lawyers, and commentators arguing in favor of the two states’ laws often make or imply a remarkable claim: that the laws will require platforms to stop censoring important speech, while permitting them to continue removing speech that is bad, unimportant, dangerous, or offensive.

That claim falls apart quickly upon closer inspection, as I’ll explain here (and as I explore in far more detail in my ongoing FAQ post about the cases). But it’s an appealing enough idea that it keeps resurfacing. The U.S. Court of Appeals for the Fifth Circuit, for example, wrote in upholding the constitutionality of the Texas law that “what actually is at stake” in the case was “the suppression of domestic political, religious, and scientific dissent.” By contrast, the court dismissed the platforms’ “obsession” with content from “terrorists and Nazis” and other “vile expression” as mere speculation. Examples of platforms dealing with that kind of content, it said, were “borderline hypotheticals[.]” 

A hypothetical law that caused platforms to leave up important speech while taking down bad speech would solve a lot of problems. Or, it would if we as a society could agree about what speech belongs in which category. Such a law would in principle be straightforward: It would tell platforms to allocate speech to its proper category as “vile” material or important protected speech, and remove or retain it accordingly. Think of it as a system that would seamlessly resolve difficult questions of categorization, akin to the magic sorting hat in the “Harry Potter” books and movies. But that’s not what the Texas and Florida laws tell platforms to do. If the laws did say that, the states would have even more constitutional problems than they do now.

It’s unclear what platforms would actually do if they had to comply with the states’ laws, including the requirements for “neutral” or “consistent” content moderation. But their options will inevitably be shaped by the vast amount of lawful but harmful, offensive, or scammy content that users try to post. As I’ll detail below, platforms trying to maintain neutrality in the face of this deluge might choose to abandon their current efforts, opening the floodgates to offensive and harmful speech. Or they might go the other way and prohibit entire categories of material, even if that means taking down “political, religious, and scientific” speech. The result would be an internet speech environment very different from the one we experience today. But it wouldn’t be the perfectly sorted version of the internet, with important speech foregrounded and unimportant speech removed.   

A law that was actually designed to achieve the outcomes that the Fifth Circuit described would presumably say something like “platforms must carry important speech such as political dissent, but remove vile speech such as pro-Nazi posts.” Such a law would very clearly put the government in the business of making content-based rules, telling platforms which lawful speech they must carry and which equally lawful speech they may freely discard. It would be unprecedented and almost certainly unconstitutional. A more stripped-down version (like “platforms must carry important speech but remove vile speech”) would not be much better. The Supreme Court has long rejected the idea that states may more freely regulate speech that is “not very important[,]” noting that “[t]he history of the law of free expression is one of vindication in cases involving speech that many citizens may find shabby, offensive, or even ugly.” 

The Florida and Texas laws do not create neat, binary rules that will make platforms carry important speech, while giving them leeway to take down “vile” or offensive material. Any assumption that the laws do work this way defines away some of the most important issues in internet law and makes the cases sound far simpler than they are. 

The internet is full of both “important” and “vile” speech, and internet laws’ real consequences involve both.

The real-world impact of internet laws is shaped by an unavoidable reality: The web is awash in ugly content that many people want nothing to do with. Ordinary users, trolls, and bots generate lots and lots of speech that is “lawful but awful”—material that the First Amendment protects, but that many people consider dangerous, offensive, or morally abhorrent. That includes extreme pornography, beheading videos, pro-anorexia messages aimed at teenagers, Holocaust denial, endorsement of mass shootings, and much more. Amicus briefs in NetChoice and in the 2023 Gonzalez and Taamneh cases documented endless examples of such lawful but awful speech, including some troubling posts about the Supreme Court justices themselves.  

Very few users want to encounter all this material whenever they go online, even if they believe (as I generally do) that it is rightly protected under the Constitution. Platforms largely manage to weed out such material through moderation measures that remove almost incomprehensibly vast swaths of content every month. Meta reported 18 million removals for hate speech alone in the first quarter of 2023, for example. If platforms stopped moderating that content or tried to redefine their rules to avoid viewpoint discrimination, the changes would likely be very consequential—and mostly not in the way that Texas and Florida seem to want. The platforms would indeed start carrying some speech that Texas and Florida lawmakers consider important, but that speech would be drowned out by a tide of internet garbage. 

The point here is not that Texas and Florida were wrong to worry about “censorship” of important speech. Platforms do often remove speech that some people—or many people—consider very important. That can happen by mistake, because of explicit or implicit bias, or because platforms and users genuinely have differences of opinion about the speech at issue. But Texas and Florida did not draft laws to solve that problem with a magic sorting hat, requiring carriage only of political dissent and other speech that lawmakers consider important. Their mandates apply to “vile,” Nazi, or terrorist speech online, too. To ignore that is to set aside a major issue in the NetChoice cases.

The Texas and Florida laws are likely to make platforms leave up “vile” speech, take down “important” speech, or both.

Importantly, nothing in the text of the states' statutes even attempts to set a general rule for distinguishing important from unimportant or harmful speech. 

The laws’ carriage mandates generally fall into two categories. One involves specified state preferences for or against lawful speech based on its content. Florida, for example, requires special favorable treatment for election-related speech and “journalistic enterprises.” (That seems to mean that larger companies can post pro-Nazi messages if they want. Smaller local news outlets get no special privileges, because unless they have over 50,000 paid subscribers or 100,000 monthly active users, they don’t count as “journalistic enterprises.”) Texas’s law has special unfavorable treatment for some lawful but violent, threatening, or harassing speech. It gives platforms special dispensation to remove that content regardless of the viewpoint it expresses. 

The laws’ second major kind of constraint on content moderation comes from Texas’s viewpoint-neutrality mandate and Florida’s consistency mandate. Read literally, those provisions don’t specify state preferences for particular user speech based on its content. Instead, they change the rules about how platforms can define their own content-based preferences. Texas says that platforms can comply with its rule, for example, by blocking “categories of content, such as violence or pornography” (emphasis in original). A platform that wanted to remove white nationalist manifestoes while remaining viewpoint neutral could presumably block all discussion of race as a “category” of content. Similar categorical rules might ban—or permit—all viewpoints on questions like the reality of the Holocaust or climate change. 

No one knows quite what the new platform speech rules would look like under the neutrality and consistency mandates. If the laws came into effect, we would learn the answer as platforms adopted potential rules and courts rejected or refined them through litigation. At the end of the day, internet users would be subject to new, content-based speech rules created because of legislators’ mandates and judges’ interpretations. Those rules would distinguish permissible from impermissible speech based on content. But they would do so using fault lines between entire categories or topics of speech—not by distinguishing important speech on one hand from vile or unimportant speech on the other.

Would a law mandating the carriage of important speech while permitting removal of vile speech be constitutional?

If the states’ laws really did mandate carriage of important speech while permitting removal of bad speech, I think they would be unconstitutional. But rational minds can differ. I had a civil and productive disagreement about this with FCC Commissioner Brendan Carr back when the NetChoice cases were in the lower courts. He argued that “pro-speech guardrails” could be established to require platforms to carry “dissenting political, scientific, [or] religious speech,” while being “very explicit” about letting them take down terrorist content or even profanity. That’s not what the NetChoice laws do, of course. Lawmakers in Texas actually voted down an amendment that would have explicitly allowed removal of terrorist content. But it is worth unpacking the constitutional issues raised by such a hypothetical law, if only to dispel any impression that such a law could easily be defended against constitutional challenges.

It’s true that the kinds of laws that the FCC administers—including rules for broadcast and cable television—set guardrails, including against some lawful speech like profanity. Those companies can also be compelled to carry some content against their will. But the Supreme Court emphatically declined to extend similar state authority to the internet in the seminal First Amendment case Reno v. ACLU. Older media like broadcast were different, it explained, in part because they offered only a few scarce channels for privileged speakers to convey their messages. No such scarcity could be found on the internet, the Court said. Instead, the internet offered abundant channels by which “any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox.” As a result, lawmakers could not constitutionally impose special speech restrictions online.

In principle, the Court could abandon Reno’s logic and grant lawmakers more leeway to police online speech under a hypothetical magic-sorting-hat rule. It’s very unclear what that would mean in practice, though. How would platforms know which speech is too important to remove, and which speech is vile enough to take down? Would the FCC decide? The agency’s older rules, developed for broadcast or cable, could hardly be a fit for platforms that convey ordinary people’s daily communications. Staffing up the agency to define rules and adjudicate the innumerable disputes about online speech and platforms would be an enormous expansion, and it would raise due process concerns if it kept speakers from going to court. But if these issues were instead resolved through litigation, hashing out the parameters of new and more restrictive rules could clog courts for the foreseeable future—especially if the new rules were different in every state.

These questions raise a snarl of constitutional issues well beyond the ones in NetChoice. But in any case, they are separate issues, not the questions before the Court this time. The Texas and Florida laws will not magically lead platforms to carry important speech while allowing them to remove vile or offensive speech. No one should pretend that they do.


Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
