
Why the Texas and Florida Social Media Cases Are Important for Research Transparency

Joshua A. Tucker, Jake Karr
Friday, February 23, 2024, 12:10 PM
The NetChoice cases may have far-reaching implications for the power of governments to mandate social media platform transparency and data access.
Supreme Court of the United States (John Brighenti, CC BY 2.0 DEED)

Published by The Lawfare Institute

Editor’s Note: Some of the text of this article draws from an amicus brief filed by New York University’s Center for Social Media and Politics (CSMaP), along with Darren Linvill and Patrick Warren of the Media Forensics Hub and Filippo Menczer of the Observatory on Social Media. One of the authors, Tucker, is the co-director of CSMaP. The other, Karr, served as counsel on the brief.

Next week, the Supreme Court will consider two blockbuster cases, Moody v. NetChoice and NetChoice v. Paxton. Both cases involve lawsuits brought by an internet trade association challenging Florida and Texas laws that attempt to regulate large social media platforms. The primary focus of the litigation is the state laws’ provisions seeking to control platforms’ content moderation policies and practices. But the NetChoice cases may also have far-reaching implications for the power of governments to mandate social media platform transparency and data access.

Since social media platforms launched two decades ago, scholars, civil society actors, and policymakers have raised many questions about how these platforms affect society. Does social media make political polarization worse? Can foreign actors use it to influence elections? How does it enable harassment and abuse? From the beginning, efforts to answer these questions—and help the public and policymakers understand the wide-ranging effects of social media—have been frustrated by the platforms’ refusal to share data with researchers. One of us (Tucker) has been wrestling with—and speaking and writing about—these challenges for over a decade now as a researcher in this area.

That’s why, as the justices weigh the Florida and Texas laws, they should leave ample room for common sense legislative and regulatory efforts to mandate transparency and access to data.

Content Moderation, Transparency, and the Florida and Texas Laws

Content moderation has been a contentious issue since the rise of Web 2.0. As soon as people started posting things online, others started reviewing those posts and deciding what could stay up and what couldn’t. Social media platforms dramatically increased the complexity of this issue by bringing together billions of people worldwide, across a multitude of legal and cultural contexts. Each platform has its own rules, and all of them prohibit clearly illegal content. But decisions are often not black and white. Every day, platforms try to balance the twin goals of allowing freedom of expression while maintaining their preferred version of a product they want to offer to users.

In recent years, particularly since 2020, content moderation has become a major issue in American politics. That year, platforms had to decide how to handle posts on a range of controversial topics, including unfounded claims of voter fraud and information about the coronavirus and vaccines. Since then, social media “censorship” has become a rallying cry for conservative politicians and commentators, who claim their posts are being taken down or flagged simply because platforms don’t like their political views.

Florida and Texas responded by passing the laws at issue in these cases. Each law is directed at larger social media platforms—the Florida law targets platforms with over 100 million monthly users and $100 million in annual revenue, whereas the Texas law targets those with at least 50 million active monthly users in the United States. And each law attempts to regulate these platforms in two main areas (though the line between them is arguably blurry): content moderation and transparency.

The splashier content moderation provisions have received the most attention from the parties, the courts, and the press. Florida’s law prohibits platforms from “censor[ing], deplatform[ing], or shadow ban[ning]” certain types of users like journalists or political candidates, and it generally requires them to “apply censorship, deplatforming, and shadow banning standards in a consistent manner.” Meanwhile, Texas’s law simply states that platforms may not “censor” on the basis of viewpoint. If the Supreme Court were to uphold these “must carry” provisions, it would result in a radical shift in our online information ecosystem, a world in which governments have a free hand in controlling public discourse by telling platforms what content they can and can’t moderate, and in which platforms may cease moderating content at all for fear of running afoul of conflicting state laws.

But the laws’ transparency measures are vitally important in their own, less explosive way. On an individual level, the laws require social media platforms to provide users with a notice and some explanation when they take adverse action against them, such as removing objectionable content. (Texas’s law additionally includes a requirement that platforms allow users to appeal these actions.) More generally, the laws also impose ongoing public disclosure obligations on the platforms, including by publishing regular transparency reports with data on their content moderation practices.

When the U.S. Courts of Appeals for the Fifth and Eleventh Circuits heard these cases, they ultimately agreed on very little. The Fifth Circuit upheld Texas’s law in its entirety, while the Eleventh Circuit struck down most of Florida’s law as a violation of the platforms’ First Amendment rights. But these lower courts agreed on one thing, finding each law’s general disclosure provisions to be constitutional. Perhaps in part because of that consensus, when the Supreme Court decided to take up these cases, it explicitly declined to consider the general disclosure provisions. Since the Court has asked the parties to brief only the laws’ content moderation and individual explanation provisions in advance of oral argument on Feb. 26, it’s unclear what the Court might end up saying, if anything, about the states’ ability to impose broader transparency obligations on platforms. 

The attorneys general of Florida and Texas have asserted that the states passed these laws because they have an interest in protecting consumers in the marketplace. The states have been narrowly focused on this consumer protection angle, but these cases are potentially about a lot more than that. When it comes to social media transparency writ large, governments have plenty of worthwhile interests in mandating disclosures, including the paramount interest in preserving citizens’ ability to engage in democratic self-governance. As Daphne Keller has persuasively argued, demanding greater transparency from the platforms—primary sites of public discourse and political debate—can further these consumer protection and democratic interests, by giving consumers, citizens, and their elected representatives the information they need to make decisions that affect us all. 

Why These Cases Matter for Researchers—and Why Data Access Is Important for an Informed Democracy

The Supreme Court has recognized that social media platforms are “integral to the fabric of our modern society and culture.” Indeed, these technologies have transformed how we consume and share information, how we interact with one another, and who has the power to influence behavior and attitudes in politics.

As New York University’s Center for Social Media and Politics (CSMaP) argued in an amicus brief filed in these cases, independent social science research has played a crucial role in helping the public and policymakers understand the wide-ranging effects of these platforms. Among many other issues, research has provided insights into the recommendations of algorithmic systems, the patterns of foreign influence campaigns, the relationship between social media and political behavior and beliefs, the prevalence of hate speech and harassment, and the efficacy of interventions.

But this research is only possible with access to data that is commensurate with the complexity and scale of social media platforms. For example, experts from CSMaP (which one of us [Tucker] co-directs) analyzed more than 1 billion tweets to provide systematic evidence that hate speech did not increase on Twitter over the course of the 2016 presidential election campaign and its immediate aftermath. In another CSMaP study, experts analyzed roughly 1.2 billion tweets from over 640,000 Twitter accounts and concluded that, counter to the received wisdom, most users do not inhabit strict ideological “echo chambers.” Independent researchers working in the field of computational social science are able to analyze these massive data sets with cutting-edge tools that allow for an understanding of the actual nature of the activity on social media platforms and its impacts.
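As a toy illustration, the basic kind of aggregation such prevalence studies rely on (classifying posts, then tracking a category's share of all posts over time) can be sketched in a few lines. The field names and the binary "flagged" label here are hypothetical simplifications, not CSMaP's actual classifiers or pipeline, which operate at vastly larger scale:

```python
from collections import defaultdict
from datetime import date

def weekly_prevalence(tweets):
    """Compute the weekly share of flagged posts.

    tweets: iterable of (date, is_flagged) pairs, where is_flagged is a
    stand-in for the output of some content classifier (hypothetical).
    Returns a dict mapping (iso_year, iso_week) -> flagged share.
    """
    counts = defaultdict(lambda: [0, 0])  # week -> [flagged, total]
    for day, flagged in tweets:
        year, week, _ = day.isocalendar()
        bucket = counts[(year, week)]
        bucket[0] += int(flagged)
        bucket[1] += 1
    return {week: flagged / total
            for week, (flagged, total) in counts.items()}

# Tiny made-up sample spanning two ISO weeks of October 2016.
sample = [
    (date(2016, 10, 3), False),
    (date(2016, 10, 4), True),
    (date(2016, 10, 10), False),
    (date(2016, 10, 11), False),
]
rates = weekly_prevalence(sample)
# One share per ISO week, e.g. week 40 of 2016 -> 0.5 in this sample.
```

Normalizing by total volume each week, rather than reporting raw counts, is what lets a study distinguish "more hate speech" from "more tweets overall," which is central to prevalence claims like the one described above.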

Unfortunately, researchers often lack the data they need to answer crucial questions at the intersection of social media and democracy. There are three reasons why.

First, social media platforms unilaterally control and limit access to their data. We got a peek into the kinds of research projects platforms undertake when former Facebook employee Frances Haugen leaked internal documents in 2021. The leaks demonstrated a significant consequence of the platforms’ opacity: Because many of the leaked findings directly contradicted Meta’s public statements, the public cannot trust what companies reveal about themselves. For example, the company previously said its content moderation policies apply equally to all users, but the documents showed it had exempted certain high-profile users from enforcement actions. Similarly, despite internal research suggesting Instagram use might lead to negative mental health outcomes for adolescent girls, Meta minimized the risk publicly.

Second, platforms voluntarily disclose insufficient information. Platforms often tout voluntary “transparency reports” as evidence that they already provide adequate information. But there are numerous deficiencies in these reporting practices. Platforms can frame information in ways that give misleading impressions, and they can pick and choose what type of information to report and how to categorize it. For example, Facebook’s 2020 fourth quarter transparency report promoted the success of its automated moderation tools in removing large quantities of hate speech. Documents from Haugen’s leaks, however, suggest Facebook removed only 3 to 5 percent of what it considered hate speech. Platform transparency reports fail to explain why information is presented as it is, why certain actions were taken, and a host of other questions necessary to understand the true meaning of the reported numbers.

Third, independent researcher access to data can be restrictive, incomplete, and subject to withdrawal at any time. Although several platforms have programs that provide data access, researchers must apply for it, which effectively gives platforms the power to screen out projects they might not like. The data available is also often inadequate. Only some platforms have historically provided researcher access to data at all, so existing studies have skewed toward research inquiries that could make use of the data available, rather than toward the most pressing questions of public importance. Even where platforms voluntarily make some data available, they can revoke that permission at any time, for any reason, and with little recourse. Twitter, for example, previously had the most open data sharing regime of any platform, until new owner Elon Musk shut it down. Facebook has also closed off research tools with no warning, rendering many projects obsolete overnight. (This is not to say that platforms never choose to make data—and sometimes very important data—accessible to researchers. But these decisions occur when the platforms decide they will occur. And what the platforms giveth, they can also take away.)

As a result of these restrictions, independent researchers are limited in their efforts to study the causes, character, and scope of the various phenomena attributed to the rise of social media. There is widespread alarm over perceived problems such as a rise in hate speech across platforms, algorithmic systems that push users into ideological echo chambers or extremist rabbit holes, and the spread of inaccurate information from low-credibility news sources. Some government actors in the United States have attempted to ban the video-hosting platform TikTok based on alleged national security concerns, while others seek to regulate a host of platforms out of concerns for adolescent mental health. And the rise of generative artificial intelligence is now raising new fears about the spread of dis- and misinformation on social media. 

Without researcher access to accurate, comprehensive, and timely platform data, the public and policymakers are forced to rely on guesswork when grappling with these important cultural, social, and political issues. Members of the public are unable to make informed decisions about social media use, both as consumers in the marketplace and, more fundamentally, as citizens in a democratic society. Policymakers, meanwhile, are unable to develop effective social media regulation: Without an evidence-based understanding of the nature of the risks posed by platforms, policymakers are hampered in their ability to design policies to mitigate those risks or to evaluate those policies once implemented.

How the Court’s Ruling Could Help (or Harm) Transparency

This untenable status quo points to the need for, and overriding public interest in, meaningful platform transparency mandates. Although the Court has not asked the parties to re-litigate the general disclosure provisions of the Florida and Texas laws, the Court’s resolution of the remaining provisions—in particular the laws’ individual explanation requirements—may very well govern the future of transparency regulation in the United States.

What matters here is less the conclusion that the Court ultimately comes to on those provisions than how it gets there. These are First Amendment cases, and the first and most important question the Court will need to decide is what standard or standards of constitutional scrutiny it’s going to apply. The parties and their amici have offered multiple paths for the Court to take. On the one hand, in challenging these laws the platforms have argued that when they engage in content moderation, they’re exercising editorial discretion that’s fully protected by the First Amendment, similar to newspapers deciding what to print. If the Court agrees with the platforms, it will likely analyze the must-carry provisions under what’s known as “strict scrutiny,” which places an incredibly, often impossibly high burden on the states to justify their laws. On the other hand, the states have countered that the platforms are more akin to “common carriers” like phone companies, with little to no First Amendment rights to control who gets to use their services and how. If the Court heads in this direction, the must-carry provisions may stand.

But what of the individual explanation provisions, which as discussed above compel limited disclosures around platforms’ content moderation decisions, but do not—at least directly—attempt to commandeer those decisions? In between the two extremes of strict scrutiny and no scrutiny at all, the Court has other options from which to choose. For example, there’s “exacting scrutiny,” which the Court has applied to compelled disclosures in other areas like campaign finance; “intermediate scrutiny,” for the ill-defined category of “commercial speech,” among other things; and the lower standard of scrutiny that we now call the “Zauderer test,” which courts have applied to a subset of compelled commercial speech that is “factual” and “uncontroversial.”

Under any of these standards, a state must defend its ability to regulate by demonstrating that it has a worthy interest in doing so. It’s clear to us, though, that no matter which standard the Court chooses to apply to social media regulation, the interests that governments generally have in mandating greater transparency—including consumer protection and democratic self-governance—should be enough to get over that first hurdle.

The real hurdle is what comes next, where these standards call for either a tighter or looser “fit” between the state’s interest and how the law on the books purports to serve that interest. If the Court decides to apply a heightened level of scrutiny to the individual explanation provisions—and if it does so in a broad, sweeping ruling that does not carefully distinguish them from general disclosure obligations—lower courts may be inclined to expand the Court’s reasoning in ways that would make even the most targeted, tailored platform transparency laws vulnerable to constitutional challenge. Perhaps the prime example of such a law is the Platform Transparency and Accountability Act, a federal bill introduced by a bipartisan group of senators. The bill would require social media platforms to share data with independent researchers, facilitated through a program run by the National Science Foundation. This is a responsible, thoughtful way to enable greater platform transparency that would allow researchers to get the data they need to help inform public opinion and policy making.

Lawmakers have begun to recognize that mandating greater platform transparency, including consistent and comprehensive access to data, is the only way to ensure that independent researchers will be able to overcome current barriers and enable evidence-based debate about the role and impact of social media platforms. Beyond Florida and Texas, legislative proposals abound. No matter how it rules, we hope the Court will uphold strong principles of transparency and ensure that these efforts survive constitutional scrutiny.

Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University, and Co-Director of NYU’s Center for Social Media and Politics (CSMaP).
Jake Karr is the Deputy Director of the Technology Law & Policy Clinic at New York University School of Law and a Knowing Machines Fellow at the Engelberg Center on Innovation Law & Policy.
