
The NetChoice Cases Aren't About Discrimination

Daphne Keller
Monday, January 29, 2024, 12:47 PM
Texas and Florida are telling the Supreme Court that their social media laws are like civil rights laws prohibiting discrimination against minority groups. They’re wrong.


Next month, the Supreme Court will hear oral arguments in the NetChoice cases, concerning state law restrictions on platforms’ ability to moderate content on their services. Texas and Florida have now filed their briefs before the Court—and their legal arguments have taken a strange turn. 

In 2021, Texas and Florida enacted laws “protect[ing] First Amendment rights” and preventing platforms from silencing conservatives or advancing a “leftist” “Silicon Valley ideology.” Florida’s law forces YouTube, X (formerly known as Twitter), and other platforms to carry certain content against their will and says all other moderation must be “consistent.” Under Texas’s law, platforms’ moderation must be viewpoint-neutral. If a platform allows users to post comments in support of racial equality and civil rights, for example, the platform seemingly must also allow comments promoting white supremacy or racial segregation.

Platforms sued the states, saying that the laws violate their First Amendment rights to set and enforce editorial policies. The case was thus framed as a “speech rights versus speech rights” dispute, pitting the editorial rights of platforms against the rights of users to share disruptive or controversial speech—or, more accurately, the rights of users to live under speech rules set by Texas and Florida instead of different rules set by private companies. (I examine these legal arguments in more detail in a series of blog posts here.) Yet now, in their briefs, Texas and Florida are also arguing their laws prohibit discrimination, just as civil rights laws do. On that logic, “must-carry” laws that may compel platforms to carry racist diatribes and hate speech are justified for the same reasons as laws that prohibit businesses from discriminating based on race or gender.  

Discriminating against someone based on her race and discriminating against her based on her tweets are not the same thing. The Texas and Florida briefs blur the distinction between the two by conflating different meanings of the word “discrimination.” The states’ laws were enacted to stop platforms from restricting speech based on the message it conveys. Doing that is “discrimination” in the most basic and literal sense: The platforms are making choices between different things, under rules that treat users differently based on what they say—much as the hosts of a lecture series might exclude speakers or audience members for disruptive or racist remarks. The states’ arguments equate this with the important and distinct issues addressed by civil rights laws. Those laws broadly prohibit discriminating against people based on who they are, like hotels or restaurants refusing to serve Black customers. 

Until recently, this argument based on civil rights precedent didn’t feature very prominently in the states’ NetChoice briefs. But it is unsurprising to see it emerge now. This blurring of speech-based and identity-based discrimination has been common for a few years now in anti-platform arguments from the political right—mostly appearing in discussion panels and public presentations, but sometimes in court as well. For example, in a suit over Twitter’s right to ban a user who espoused white nationalist views, the question arose whether the platform would also claim a First Amendment right to discriminate based on gender or disability. According to Texas’s brief, the company argued in court that it did have such a right. Similar questions arose, but were not resolved, in two cases alleging that YouTube discriminated against video creators based on race or LGBTQ+ status (filed by the same lawyers who brought a viewpoint discrimination suit against the platform for Dennis Prager).

The states’ new framing of the NetChoice dispute adds confusion and opens up two hot-button constitutional issues for the price of one. A ruling that further conflated these two kinds of discrimination could affect ordinary civil rights laws, in addition to the free expression legal issues raised in the case. 

The Florida law at issue in NetChoice never mentions discrimination, and Texas’s law mentions it only once, when it refers to “discriminat[ing] against expression[.]” The states’ Supreme Court briefs, by contrast, refer to discrimination dozens of times. Florida says its law’s goal is “preventing discrimination,” citing a Supreme Court case about discrimination against gay foster parents. Texas compares regulated platforms’ speech rules (which generally prohibit racist speech) to “excluding racial minorities.” It cites iconic civil rights cases against whites-only hotels and schools as support for its law. It even charges that platforms are behaving like segregated businesses in the Jim Crow South, and invoking the First Amendment “as the refuge of last resort for discrimination[.]” (This is, of course, exactly what the states’ detractors say Texas and Florida lawmakers have done by effectively forcing platforms to carry racist speech.)  

Florida, by contrast, seems to suggest that its restrictions on platforms’ speech rules should survive First Amendment scrutiny because they leave platforms free to discriminate against people in protected classes. It says its provisions are “less intrusive” than the public accommodations laws at issue in older cases, because Florida will let platforms “have discriminatory standards if they apply them consistently.” (That’s a good reminder not to expect platforms to be bastions of anti-racism or civil rights protection. The platform speech policies at issue in NetChoice may be broadly aligned with those values now, or at least more aligned than the policies Texas and Florida seek to impose. But the policies are still very far from being what many civil rights advocates want, and they have every possibility of getting worse.) 

There are some genuinely hard questions about the line between discriminating against people based on characteristics such as religion or gender, and discriminating based on the messages those people choose to express. The NetChoice cases don’t really raise those kinds of questions, but some earlier and highly relevant rulings did. These rulings focused on parties who specifically wanted to express—or avoid expressing—messages about minority rights and discrimination. 

One key case, for example, held that a parade organizer had a First Amendment right to exclude a gay rights organization because of the message the marchers’ presence would add to the parade. The Court contrasted the parade organizer’s arguments against carrying that message with the First Amendment claims raised unsuccessfully by a private club against accepting Black people as members. Requiring racial integration, the Court said, did not “trespass on” the club’s own message or require it to admit members whose “manifest views were at odds” with its own. In another case last year, the Court said a web designer had the First Amendment right to refuse service to gay customers who wanted websites for their weddings. (Focusing on gay rights illuminates a depressing through-line in these cases. Sometimes LGBTQ+ advocates are on the “platform” side of the NetChoice issues, and sometimes they aren’t. Either way, they lose.)

One recurring set of legal questions in these cases involves public accommodations laws, which prohibit many forms of discrimination by businesses like restaurants or hotels. An amicus brief filed in the NetChoice cases by the Lawyers’ Committee for Civil Rights Under Law explores the broader relevance of these laws to the platform moderation issue from a civil rights perspective; Adam Candeub’s and Adam MacLeod’s brief provides an alternate take favoring the states. My own take (also discussed at pages 112-113 here) is that speech laws and public accommodations laws were designed to advance different values. The two might overlap or compete in complicated ways, but they aren’t the same thing. As Eugene Volokh has documented, though, public accommodations statutes do sometimes bar businesses from discriminating based on speech-related attributes like “political affiliation”—though there is not much precedent to tell us what that means, or how it relates to the First Amendment.

The states’ justifications for their new roles as civil rights defenders are thin. Texas, which advances the most detailed arguments, does so mostly by insisting that “countless applications” of its must-carry law have “nothing to do with any on-platform expression.” (Texas must mean the on-platform expression of users, since literally every application of these rules affects the messages platforms express through their editorial choices.) While Texas’s appellate brief characterized this entire portion of the law as preventing platforms from “discriminating based on viewpoint[,]” it now says that the same provisions bar “two types of discrimination: one targeting off-platform characteristics, the other on-platform speech.” 

That’s not strictly wrong, but this new emphasis on “off-platform characteristics” is a weird gloss on Texas’s law. Its relevant rules prohibit platform “censorship” based on:

              (1) “the viewpoint of the user or another person”;

              (2) “the viewpoint represented in the user’s expression or another person’s expression”; or

              (3) “a user’s geographic location in this state or any part of this state.”

These restrictions, the statute says, apply “regardless of whether the viewpoint is expressed on a social media platform or through any other medium.”

Put together, that means that the law covers three “off-platform” things: the user’s off-platform expression of viewpoints, someone else’s off-platform expression of viewpoints, and the user’s geographic location. That’s a pretty sparse basis for arguing, as Texas does, that the provisions “complement preexisting Texas laws forbidding discrimination based on race, color, disability, religion, sex, national origin, or age” and have “nothing to do with” on-platform expression. If that had been the goal, Texas could have just passed a law about online discrimination, as several other states have done.

Texas does argue that geographic location and place of residence are “closely correlated with race[.]” That’s true, but Texas’s location-based rule would be a highly attenuated and imprecise way to address racial discrimination. It would seem to preclude a lot of useful platform features (like giving users information about restaurants in Odessa, Texas, instead of Odessa, Ukraine). And it appears to penalize platforms for doing what many of them may wish to do if the law is upheld: stop offering service in Texas. Texas also tries to link its rules to discrimination issues by suggesting that “censoring” users based on someone else’s expression or viewpoints penalizes association, including association based on religion. This focus on a user’s offline relationships seems like a stretch. I suppose the law’s references to “another person’s expression” can be read to cover the things that third parties say to platform users offline. But the rule mostly seems designed to cover Texans who wish to receive speech as well as post it.

In any case, there are good reasons for platforms to consider users’ offline activities when moderating content. Those reasons have everything to do with expression—both that of users and that of platforms. Information about a user’s offline activities can provide critical information about the meaning of that user’s posts. It can help platforms infer what message a user intends to express with a particular post, as well as what message other users are likely to perceive when they see it. To ignore that offline information would be to ignore the real content of the post and its relationship to platforms’ own editorial policies. 

I first encountered this in my days working as a lawyer at Google in the 2000s and 2010s, in relation to users who posted quotations from the Quran. As counterterrorism experts explained to me back then, passages that might be benign when posted by one user would likely be understood as exhortations to violence when posted by others, including extremist clerics. Other cases where offline information matters involve words that mean one thing in the language or regional slang of one part of the world and something else entirely in another. And some terms are, of course, used as terms of pride by some users (like gay rights activists using the word “queer”) but as slurs by others. 

The states’ 11th-hour reinvention as defenders of civil rights is unlikely to fool the Court, and it shouldn’t steal focus from the real issues in NetChoice. This really is a case about online expression rules—the editorial rules set by platforms, the ones users might prefer, and the ones states have chosen to impose. It should be decided on that basis. 


Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center. Her work, including academic, policy, and popular press writing, focuses on platform regulation and Internet users' rights in the U.S., EU, and around the world. She was previously Associate General Counsel for Google, where she had responsibility for the company’s web search products. She is a graduate of Yale Law School, Brown University, and Head Start.
