
The Government’s First Amendment Interest in Ensuring Free Expression on Private Platforms

Kyle Langvardt, Alan Z. Rozenshtein
Monday, September 11, 2023, 8:30 AM
In its brief in the NetChoice cases, the solicitor general shortchanges the government’s interest in ensuring broad access to social media platforms.
Elbert P. Tuttle Courthouse (U.S. Court of Appeals for the Eleventh Circuit, https://www.ca11.uscourts.gov/)


Last month, Solicitor General Elizabeth Prelogar filed a brief on the petitions for certiorari in two of what could be the most important Supreme Court cases ever to shape the internet. The cases, Moody v. NetChoice and NetChoice v. Paxton, arise out of Florida and Texas laws that limit the ability of social media platforms to remove or otherwise moderate user content. The Eleventh Circuit struck down the Florida law as a violation of the platforms’ First Amendment rights, while the Fifth Circuit upheld the Texas law.

The solicitor general properly urges the Supreme Court to grant certiorari in these cases, both because the legal issues are important and because the cases represent a circuit split on a fundamental question of First Amendment law. The government also—correctly, in our view—points out serious flaws in both the Florida and Texas laws, flaws that may well be big enough to justify striking both down. 

But the solicitor general’s reasoning is troubling. In agreeing with NetChoice, a lobbying group representing social media and other technology companies, that there is “no ‘substantial governmental interest in enabling users’ to ‘say whatever they want on privately owned platforms that would prefer to remove their posts,’” the solicitor general shortchanges the very real constitutional and public policy rationales for limiting the vast power that platforms have over their users. While these cases are far from a cert grant—not to mention briefing, argument, or a decision—it is important that the issue be framed properly from the outset. The platforms may well deserve to win these cases, but only because the specific state laws at issue are flawed, not because the state has no business encouraging online speech.

The State Laws

The Florida and Texas laws apply only to the largest platforms—those with more than 50 million monthly users in the Texas case, and with more than either 100 million monthly users or $100 million in annual revenues in the Florida case. The Texas law protects all users from having their content “censored” based either on “viewpoint” or “a user’s geographic location,” with “censor” defined broadly to include any form of “discriminat[ion] against” users or content, whether through a ban, a takedown, demonetization, etc. 

By contrast, the Florida law’s protections for general users are limited to requiring platforms to “apply censorship, deplatforming, and shadow banning standards in a consistent manner”—though what counts as “consistent” is left unclear. The Florida law also goes further than the Texas law in providing particularly strong protections for journalists and politicians: Platforms can’t “censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast,” even if the platform applies its content rules “consistently.” Nor can platforms ban a political candidate for any reason, or even limit the visibility of content about a political candidate. The Florida law also prohibits platforms from updating their terms of service, including their content rules, more than once every 30 days, and it requires platforms to let users see a raw chronological news feed on request.

Both the Florida law and the Texas law require platforms to provide general disclosures regarding their content moderation practices as well as individualized explanations to users every time they take adverse action against user profiles or content. And the Texas law goes further, requiring the platforms to provide users with a speedy appeal process.

The Government’s Argument

The solicitor general argues that the laws’ main restrictions—specifically, the limitations on platform moderation decisions and the requirement that platforms provide individualized explanations—interfere with a social media platform’s right to “select[], edit[], and arrange[] third-party speech for presentation to the public.” (At the same time, the solicitor general urges the Supreme Court not to address challenges based either on the laws’ requirements that platforms provide general disclosures regarding their content moderation practices or on the state governments’ allegedly political motivations in enacting the laws.)

The Supreme Court has recognized a broadly similar right of “editorial discretion” in cases involving newspaper publishers’ editorial pages, cable providers’ bundles of channels, parade organizers’ choices of floats, and more. The government argues that the Fifth Circuit was therefore wrong to describe platforms’ content moderation activities as conduct outside the First Amendment’s coverage rather than as constitutionally protected speech: “And especially because the covered platforms’ only products are displays of expressive content, a government requirement that they display different content—for example, by including content they wish to exclude or organizing content in a different way—plainly implicates the First Amendment.”

The government acknowledges that platforms’ “editorial” activities are not completely exempt from regulation. Laws that affect these expressive activities “incidentally”—the government cites public accommodations laws and antitrust laws as examples—would not violate the First Amendment. But the Florida and Texas laws are said to require closer scrutiny because they are “‘directed at the communicative nature’ of the major platforms’ editorial activities.” To survive First Amendment scrutiny, they must be “justified by [a] substantial showing of need.”

In the government’s view, the Texas and Florida laws don’t come anywhere close to making this showing. Quoting NetChoice, the government argues that “there is no ‘substantial governmental interest in enabling users’ to ‘say whatever they want on privately owned platforms that would prefer to remove their posts.’” Indeed, the solicitor general argues that the objective of opening speakers’ access to privately owned platforms is not only insubstantial but outright illegitimate: “exactly what the general rule of speaker’s autonomy forbids.” The “speaker” whose autonomy the solicitor general is concerned about here, to be clear, is not the user who posts on the platform but the company that owns the platform and “speaks” by moderating third-party content. In support of this view, the solicitor general quotes Buckley v. Valeo, the Supreme Court’s landmark 1976 decision invalidating certain campaign-finance caps that the Court saw as an illicit effort to “level the playing field” among competing speakers: Government may not “restrict the speech of some elements of our society in order to enhance the relative voice of others.”

Our Thoughts

We don’t think the government is wrong to say that certain provisions of these laws might interfere with “editorial” activity by the platforms. But we are perplexed by the solicitor general’s insistence that the government never has even a legitimate interest in ensuring broad access to private platforms. This view stacks the deck so heavily in favor of the platforms that virtually no direct regulation of platform moderation decisions could ever be constitutional. Even certain laws requiring transparency from platforms about their moderation decisions might be understood to threaten platforms’ “editorial” speech rights if, as the solicitor general argues, there is not even a legitimate interest in promoting user access. Even assuming the Florida and Texas laws have serious First Amendment problems, we think it is premature for the solicitor general to close the door on future interventions that may be better inspired and designed.

Dismissing the government’s interest here means losing sight of why free expression matters. Scholars typically cite some combination of three values and purposes behind the First Amendment: enhancing the autonomy and self-expression of speakers and listeners, promoting truth through the “marketplace of ideas,” and supporting democratic self-government. Any one of these purposes could support a public interest in ensuring access to private platforms. 

Platform users are speakers and listeners in the most basic sense. These users’ interest in self-expression and autonomy does not disappear just because the speech happens on a platform owned by a private company. And platform owners who mismanage the speech they govern can threaten the “marketplace of ideas” and the democratic process in ways that are similar to, and in some ways more pernicious than, governmental censorship. Private platforms aren’t, of course, subject to the First Amendment. But these are still free-speech concerns, and it is oddly self-defeating to claim that the First Amendment forever disqualifies the government from addressing them.

Of course, the government did not invent its argument against ensuring access to private platforms out of whole cloth. Many activities can be labeled as “editorial” speech at some level, and Buckley v. Valeo really did reject “restrict[ing] the speech of some elements of our society in order to enhance the relative voice of others.” But Buckley—the direct progenitor of Citizens United, which extended First Amendment protections over campaign speech to corporations—is hardly uncontroversial, and it is mystifying why a self-professedly progressive administration would want to further entrench these cases’ crabbed understanding of legitimate government interests and their indifference to corporate overreach.

What’s especially ironic here is that the editorial analogy, to the extent it applies, leaves a lot of wiggle room. Platforms’ resemblance to newspaper editors does not require the administration to settle for a Buckley-style internet in which platforms enjoy total control over their users’ expression. Platforms are, of course, analogous at some level to newspapers, cable companies, and parade organizers—all of which the Supreme Court has held enjoy First Amendment protection in selecting what “content” appears on their “platforms.” But any analogy that lumps together parties as disparate as cable companies and parade organizers is already quite a thin one. That is why, in practice, the Supreme Court has never treated all these “editors” the same.

The Court has said, for example, that cable service providers make “editorial” choices like newspaper editors when they put cable packages together. Yet it has upheld an extensive regulatory scheme for cable, including access rights for other speakers, that has no constitutional parallel in the newspaper industry. And the Court has not yet spoken at all on the specific First Amendment status of online platforms when they moderate user content. It’s far too early to assume that social media’s “editorial” prerogatives over content moderation must amount to the total regulatory shelter that the platforms’ techno-libertarian advocates have asserted.

The editorial analogy is just that: an analogy. It is at best the start, and certainly not the end, of a constitutional analysis. The question, in other words, is not whether the activities of social media companies are “editorial,” but whether, in any given case, the government’s program interferes in an unjustified way with some protected aspect of a social media company’s business. Any persuasive analysis will have to address, at a minimum, the following five questions, all of which are context-dependent and should not be decided at a high level of abstraction or generality:

  1. Which of the many activities that platforms engage in to moderate and shape their services qualify as editorial under the First Amendment?
  2. What government action actually burdens those protected activities?
  3. How strong, in First Amendment terms, is the government’s interest in furthering free expression?
  4. To what extent does the law at issue advance the asserted interest?
  5. Does the law, in light of First Amendment values, disproportionately burden those platform content moderation decisions that call for First Amendment protection?

And that’s not all. So far, we have only considered the First Amendment arguments that platforms might raise against laws like Florida’s and Texas’s. But platform users may also have some First Amendment grounds to challenge these laws. A law that claims to promote user speech rights may backfire and burden user speech if it is drafted poorly—for example, by creating incentives for platforms to remove discussion on all sides of controversial issues as a low-cost means to comply with a “viewpoint-neutrality” requirement; or by inviting strategic litigation that pressures or forces platforms to take down content the plaintiff doesn’t like; or by causing the platform user experience to deteriorate to the point where users drop out. And so on. Analyzing the effect on users’ rights will be just as complex as analyzing the platforms’ own rights, and if anything, the users’ rights questions may be the more important ones.

The editorial analogy simplifies all this complex analysis by zooming out so far from the issues that all policy detail disappears and courts lose track of what free expression is even for. At this altitude, the First Amendment isn’t much more than an overpowered property right. That’s what makes the editorial analogy such a popular deregulatory tool for private businesses that own huge sectors of the expressive infrastructure—and that’s why the Court should approach it with caution.

Working past the editorial analogy will be difficult and complex, and the full analysis has yet to be done in these cases. We do not expect the government to have an answer at the cert petition stage. But we do think it is important to frame the problem correctly at the outset. The government is right to urge the Supreme Court to grant cert, but it should be clear with the Court about what resolving the NetChoice cases will entail: not mechanically applying the “editorial” label, but developing an appropriately complex, nuanced set of tests for when government regulation does and does not further First Amendment values.


Kyle Langvardt is an Assistant Professor of Law at the University of Nebraska College of Law and a Faculty Fellow at the Nebraska Governance and Technology Center at the University of Nebraska.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
