
First Amendment Absolutism and Florida’s Social Media Law

Alan Z. Rozenshtein
Wednesday, June 1, 2022, 12:26 PM

The Eleventh Circuit’s opinion striking down most of Florida’s controversial social media law mostly gets the First Amendment right but also shortchanges the important government interests at stake.



Last week the Eleventh Circuit upheld an injunction against most of Florida’s controversial S.B. 7072, which restricts “censorship” and “deplatforming” by the biggest social media platforms. The law had previously been enjoined by a district court, and, although the Eleventh Circuit upheld most of the injunction, it permitted some of the law’s data-portability and disclosure requirements to go into effect.

But what’s most notable about the court’s opinion is not the holding itself but how the court applied the First Amendment to the issue of content moderation. Among the courts to have considered the issue, its approach was the most thoughtful yet and is a good example of how courts can avoid First Amendment absolutism. But the court did make some serious missteps, at some points adopting an unhelpfully all-or-nothing approach to digital expression.

When it comes to platform moderation, First Amendment absolutism comes in two flavors. The first kind claims that moderation decisions by platforms get no First Amendment protection at all, either as a general matter or as long as the government chooses to treat platforms as “common carriers.” Absent First Amendment protections, government regulation of content moderation would merely have to satisfy the extremely deferential “rational basis” test and so would almost always be upheld.

This has been Florida’s position throughout the litigation, and it’s one that the court easily, and correctly, disposed of, drawing on Supreme Court precedent and the common-sense observation that “social-media companies are in the business of delivering curated compilations of speech.” As the court noted, 

A reasonable person would likely infer ‘some sort of message’ from, say, Facebook removing hate speech or Twitter banning a politician. Indeed, unless posts and users are removed randomly, those sorts of actions necessarily convey some sort of message—most obviously, the platforms’ disagreement with or disapproval of certain content, viewpoints, or users.

The other form of First Amendment absolutism holds that content moderation is speech protected by the First Amendment and so should be immune from meaningful government regulation. The court also rejected certain (though not all) versions of this position. For example, it rejected the district court’s conclusion that, because many of the Florida law’s supporters were motivated by political objections to the supposedly “leftist” bias of big platforms, the entire law was impermissibly tainted by a “viewpoint-based motivation.” Although there are extreme cases in which it is appropriate to take into account the political motivations of legislatures, the routine use of this factor in First Amendment analysis would substantially hinder government regulation of controversial—and thus politically salient—economic and social issues.

Most importantly, the court recognized that strict scrutiny—the most demanding standard of constitutional review, which requires a “compelling government purpose” and government action that is “narrowly tailored” and uses the “least restrictive means”—was not the only appropriate test for the law’s many different provisions. Rather, it applied the more permissive intermediate scrutiny to the content-neutral provisions of the law, such as those that prohibit the deplatforming of political candidates or that require platforms to let users opt out of algorithmic ranking. Had the court applied strict scrutiny, which especially in First Amendment cases can prove “strict in theory, but fatal in fact” to government regulation, it would have sharply constricted the permissible scope of government regulation of content moderation.

Furthermore, in upholding the law’s requirements that platforms disclose their content-moderation policies and practices, the court applied the even more deferential Zauderer test, which permits commercial disclosure requirements if they are “reasonably related to the State’s interest in preventing deception of consumers,” are not “unjustified or unduly burdensome,” and do not “chill protected speech.”

But when it came to applying these standards, the appellate court, like the district court before it, mostly embraced a highly restrictive view of what counts as a legitimate government interest, especially when it came to regulating content-moderation practices: “[T]here's no legitimate—let alone substantial—governmental interest in leveling the expressive playing field.” This cramped view of the government’s interest is a direct consequence of the court’s even stingier view of what users can legitimately expect from their digital town squares: “Nor is there a substantial governmental interest in enabling users—who, remember, have no vested right to a social-media account—to say whatever they want on privately owned platforms that would prefer to remove their posts.”

In fairness to the court, it’s just taking cues from the Supreme Court, which for fifty years has argued that “the concept that government may restrict the speech of some elements of our society in order to enhance the relative voice of others is wholly foreign to the First Amendment.”

But how broadly to read Supreme Court precedents is always a choice, and it is illuminating to compare the Florida decision with one issued the day before by an Ohio state court judge in Ohio’s lawsuit to have Google declared a common carrier. The court, in rejecting Google’s motion to dismiss, concluded that “fostering competition in the marketplace” is an “important governmental interest” and that Ohio’s attempt to apply common carriage regulations to Google did not necessarily violate the First Amendment. The court also recognized the need for both parties to “develop an evidentiary record” as to the specific burdens that common carriage would impose on Google, especially whether it would cause “public confusion between the speaker’s message and [Google’s] message.”

Frustratingly, the Eleventh Circuit’s extreme holding that there is, as it notes numerous times, no “legitimate—let alone substantial—governmental interest” in regulating content moderation was unnecessary, since the overly broad and poorly drafted Florida law so obviously failed even the more deferential intermediate scrutiny, as the court itself recognized:

[T]he [Florida law] is so broad that it would prohibit a child-friendly platform like YouTube Kids from removing—or even adding an age gate to—soft-core pornography posted by PornHub, which qualifies as a “journalistic enterprise” because it posts more than 100 hours of video and has more than 100 million viewers per year. That seems to us the opposite of narrow tailoring.

The Florida law and similar efforts—e.g., the even-broader Texas law—are politically motivated and poorly thought out. But that doesn’t mean that the underlying policy question—to what extent should social media giants have unfettered control over the digital public square?—doesn’t implicate legitimate public and governmental issues. 

And as a practical matter, even opponents of government regulation of content moderation might want to avoid taking the extreme position that there is no legitimate government interest at issue. As Justice Alito noted on Tuesday, government regulation of content moderation is an issue “of great importance that will plainly merit this Court’s review.” Alito, joined by Justices Thomas and Gorsuch, dissented from the Supreme Court’s decision to vacate the Fifth Circuit’s stay of the district court’s injunction against the Texas law; the law will now remain enjoined pending a merits decision on its constitutionality.

Once the issue gets to the Supreme Court, it’s far from clear that it will be resolved in the technology companies’ favor. Although the conservative majority on the Supreme Court has generally favored First Amendment arguments used for deregulatory ends—indeed, when still a judge on the D.C. Circuit, Justice Kavanaugh argued that net neutrality regulations would violate the First Amendment rights of internet service providers—there are indications of a new skepticism towards corporate First Amendment claims, at least when it comes to digital technology.

For example, in his dissent in the Texas case, Justice Alito, while emphasizing that he has “not formed a definitive view on the novel legal questions,” noted that “it is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies.” Justice Thomas has similarly shown an openness to treating social media companies as common carriers, a position that would be incompatible with broad First Amendment protections for content-moderation decisions. It’s unclear to what extent this view is shared by the court’s other conservatives, but it may well be, given the increased ideological and political polarization of the judiciary and the fact that technology companies have become increasingly distrusted on the right.

For their part, the liberal justices have their own reasons to push back against expansive claims of First Amendment rights for technology companies, which would hamstring not only progressive policy goals like net neutrality, consumer protection, and data privacy, but government regulation more generally. So it’s notable that Justice Kagan voted with Justices Alito, Thomas, and Gorsuch to lift the injunction of the Texas law, though she did not join Justice Alito’s dissent. While it is possible that her vote reflects her discomfort with the so-called shadow docket—the Supreme Court’s increasing practice of issuing opinions outside the normal process of full briefing and argument—that may not be the full explanation. As Steve Vladeck, a leading expert on the shadow docket, notes, Justice Kagan has been willing to enjoin laws on an emergency basis; it may also be that she disagrees with the other liberal justices as to whether the Texas law is clearly unconstitutional.

In other words, the issue isn’t going anywhere, and no version of First Amendment absolutism should be comfortable about its chances for a full-throated judicial endorsement. Of course, rejecting absolutism doesn’t magically give you the right answer—there are infinite shades of gray between black and white. My own preference is for an approach that focuses on the rights of users to communicate, rather than the rights of companies to moderate, and that would permit light-touch, tightly scoped government regulation to encourage high-value speech.

But whatever the merits of my proposal, the bigger point is that First Amendment law finds itself in a rare window of doctrinal possibility, with laws that tee up authentically novel legal issues and a Supreme Court that might be open to new directions in the jurisprudence around digital free expression. The intellectual agenda for judges, lawyers, and scholars is clear and should not be ceded to all-or-nothing approaches.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and as a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
