Meta’s Oversight Board and the Need for a New Theory of Online Speech

Paul M. Barrett
Thursday, November 9, 2023, 8:00 AM
Can a quasi-independent, quasi-judicial body made up of 22 experts on human rights and law improve Meta’s decision-making?
An aerial view of Meta's headquarters, January 2023. (InvadingInvader, https://tinyurl.com/yha37z9y; CC BY-SA 4.0 DEED, https://creativecommons.org/licenses/by-sa/4.0/deed.en)

In 2024, some 2 billion people in 50 countries and the European Union—including the United States, India, and Indonesia—will vote in a record-breaking number of elections. This surge of political activity is almost certain to bring with it an upswing in attempts to use social media to spread falsehoods, further inflame already-polarized populations, and influence election results. Rather than fortifying their defenses, however, major social media companies are retrenching on content moderation. This retreat has unfolded as part of broader industry-wide workforce reductions and in some cases—such as those of YouTube and X (formerly Twitter)—because top management has decided to make protecting elections less of a priority.

The U.S. Supreme Court, meanwhile, has agreed to hear a slate of First Amendment cases concerning the degree of influence governments may exert over content decisions by private social media companies. And in Europe, regulators for the first time are seeking to enforce the Digital Services Act, a pioneering law that took effect this year and imposes new transparency and risk-assessment obligations on large social media companies.

Against the backdrop of all of this consternation over content moderation, Meta’s Oversight Board is celebrating its third birthday. Empowered to overrule enforcement actions by Meta’s Facebook and Instagram platforms, as well as issue nonbinding policy recommendations, the board has involved itself in some of the company’s biggest controversies. In the midst of global chaos and ongoing disagreements over how to carry out content moderation, the question of whether the board can succeed in its work carries a particular urgency. Can a quasi-independent, quasi-judicial body made up of 22 experts on human rights and law improve decision-making by a global social media platform?

Over the past three years, the board has provided valuable, if limited, insight into Meta’s otherwise opaque inner workings. But a close look at the board’s record so far reveals two significant flaws: one is structural and beyond the body’s control; the other is a disappointing failure on the board’s part to build a theoretical foundation for its work. The first problem is simply that the board lacks the authority to force Meta to act on its recommendations, and in some major cases, the company has refused or dragged its heels.

The second is that the board has failed to grapple effectively with an overarching question it is uniquely positioned to address—namely, how free-speech principles designed to constrain government censorship ought to be applied to the private corporations that collectively make hundreds of millions of daily decisions about who gets to say what online. Lacking this kind of overarching theory, the board has not articulated clearly how Meta—and, by extension, other social media companies—ought to balance the competing goals of promoting freedom of expression without amplifying hatred, divisiveness, and falsehoods that potentially harm individuals and democratic institutions like elections. 

Assessing the Oversight Board’s Impact

If you look only at its bottom-line verdicts, which Meta accepts as binding, the board isn’t having much of an impact. It has published only 53 rulings, a microscopic 0.0018 percent of the nearly 3 million Meta moderation decisions that users have unsuccessfully appealed to the company and then brought to the board. 

But to its credit, the Oversight Board has consistently challenged Meta, overturning the company’s moderation decisions nearly 80 percent of the time. And it has done so in public opinions that often reveal inconsistent company policies and ad hoc enforcement practices. “We have pulled back the curtain on things that the company would have preferred we not reveal,” board member Paolo Carozza, a professor of constitutional law and political science at Notre Dame, told me in an interview.

The board achieves some of its modest victories on transparency as a result of nonbinding recommendations it makes to Meta, distinct from the binding verdicts on individual cases. It can take credit for Meta’s new practices of revealing when the company removes content at the request of government officials and informing users whose posts are taken down which rules they (allegedly) have violated.

The Oversight Board has discretion to choose cases it believes are important and relevant. In October, it announced that it would review Meta’s policies on manipulated media—an issue likely to arise during the 2024 elections in the U.S. and other countries. The specific case concerns the company’s refusal to remove a video from the 2022 midterm elections showing President Biden placing an “I Voted” sticker on his adult granddaughter’s chest, near her neckline, and kissing her on the cheek. The video was altered to make it appear that the president’s hand repeatedly touches the young woman’s chest, and a caption calls Biden “a sick pedophile.” The misleading clip has had only sparse viewership on Facebook, although it was widely circulated on Twitter, now known as X.

While the Biden video was fabricated without the use of sophisticated artificial intelligence, the board has said that it will use the case to scrutinize how Meta handles AI-generated deepfakes—video, still images, or audio that make a target appear to do or say something they never did or said. One of the major risks associated with the recent introduction of increasingly potent generative AI systems is that the technology will be used to turbocharge disinformation—both imagery and text—during elections.

“Almost like a Supreme Court”

Meta’s chairman and chief executive, Mark Zuckerberg, first floated the idea for “some sort of structure, almost like a Supreme Court” in 2018. At the time, skeptics dismissed the trial balloon as an attempt to deflect attention from the social media industry’s failure to stop Russian interference with the 2016 U.S. presidential election and Meta’s subsequent Cambridge Analytica user-privacy scandal. Even once Meta—then Facebook—began assembling the Oversight Board, that skepticism persisted. Some observers called it a “clever sham” designed to “cloak harmful decisions in a veil of legitimacy.” Others praised the board for the wisdom of its rulings, even as they expressed concern about whether the company would, in response, change its behavior in significant ways. 

Meta has irrevocably transferred $280 million to a stand-alone trust that funds the board’s operations. Yet despite that impressive-sounding sum, the board has never filled its initially advertised 40 seats. Current members have said they are satisfied with the existing reduced head count. The board and Meta have refused to disclose how much board members are compensated for their part-time work, but published reports have mentioned annual pay in the six figures. The board employs a support staff of more than 80 people in London, San Francisco, and Washington, D.C.

I recently spent an afternoon at Meta’s New York office, interviewing six of the several dozen employees responsible for implementing Oversight Board decisions and policy recommendations. The staff members I interviewed (on a background-only basis) proudly described a number of hard-won reforms, some of which encountered initial resistance from others at the company following the board’s rulings. In particular, they pointed to the “cross-check” program. 

Revealed by the Wall Street Journal in October 2021, cross-check for years provided special treatment to celebrities, government officials, professional athletes, and prominent journalists. When Meta’s automated moderation system targeted VIPs’ posts for removal for violating platform policies, a human reviewer would intervene to determine whether the algorithm had been too censorious. This favoritism sometimes resulted in harmful content remaining on Facebook.

Following the Journal’s reporting, Meta formally asked the Oversight Board for recommendations on how to reform the cross-check system. In December 2022, the Oversight Board publicly criticized not only the program’s detrimental effects but also the company’s disingenuousness in describing its purpose. “While Meta told the board that cross-check aims to advance Meta’s human rights commitments, we found that the program appears more directly structured to satisfy business concerns,” the board said. Responding to the board’s 32 specific recommendations, the company has made some changes but rejected others—a push-and-pull process that often follows the board’s nonbinding suggestions. In a March 2023 thread on what was then Twitter, the board took a victory lap for what it called a “landmark moment” but also complained that Meta hadn’t gone far enough, noting that the company rejected a recommendation that “deserving users be able to apply for the protections afforded by cross-check.”  

Today, cross-check still provides extra human review, according to Meta, but it emphasizes giving special consideration to people acting in the public interest, including civil society advocates and human rights defenders, whose posts may stir controversy but deserve circulation in the name of free speech. In an update in October, the Oversight Board noted that the company also has reduced backlogs of cross-check cases, which lessens “the risk of users being exposed to violating content while it is awaiting review.”

A Dictator’s Threats

Earlier this year, the Oversight Board’s value and its weaknesses became evident in the convoluted tale of the board’s response to a Facebook video posted by then-Cambodian Prime Minister Hun Sen.  In a speech in January 2023, which streamed live on his Facebook page, the strongman leader vowed to “beat up” foes, “send gangsters” to their homes, and potentially take them to court as “traitors.”

Multiple users reported the video to the company, saying that it violated Meta’s rules against “violence and incitement.” Two human company reviewers rejected these objections. Next, policy and subject matter experts reviewed the video and concluded that it did transgress Meta’s standards but qualified for an exception because its “newsworthiness” outweighed the risk of harm.

While the Oversight Board receives most of its cases from user appeals, Meta sometimes asks the board to weigh in. That’s what happened with the Hun Sen video. In March, two months after the video had been posted, the board agreed to review the case. By then, the video had been viewed some 600,000 times, and six Cambodian opposition party members had been violently attacked by men in dark clothes and motorcycle helmets.

In June, the board issued its ruling: It overturned the company’s decision to leave up the video and, in a nonbinding recommendation, suggested that Hun Sen’s Facebook page and Instagram account should be suspended for six months. Meta removed the video. But in August, nearly seven months after Hun Sen issued his threats, the company rejected the suspension recommendation—because, it said, the video had not appeared in the midst of what Meta had designated an official crisis under its internal protocols.

Human Rights Watch condemned Meta’s response, saying that the company’s actions allowed Hun Sen and other authoritarian leaders to “weaponize Facebook against their opponents and suffer barely a slap on the wrist.” Despite this criticism—and the slow pace of decision-making by both Meta and the board, each of which took months to respond to a dangerous and incendiary video—the Meta staff members I spoke with said the Oversight Board’s role had sparked constructive public debate and shed light on an internal process that otherwise would have unfolded in the dark. 

Trump and January 6

The Oversight Board’s best-known engagement concerning a head of state using social media to incite violence yielded a similarly ambiguous outcome. In the wake of the Jan. 6, 2021, riot at the U.S. Capitol, Meta indefinitely suspended then-President Trump’s access to his widely followed Facebook account. Asked by the company to offer its views, the board four months later upheld the suspension in light of Trump’s use of social media to praise the rioters. But the board said that an indefinite suspension wasn’t among the punishments specified by the company’s rules and that Meta should “determine and justify a proportionate response.” Meta settled on a two-year suspension, which ended in early 2023—at which point Trump’s account was reactivated.

But at the same time, according to Evelyn Douek, a leading expert on content moderation on the faculty at Stanford Law School, the company effectively stonewalled another board admonition. It did so, Douek has written, by refusing “to take up in any meaningful way one of the most consequential recommendations in the Trump Suspension case, to ‘review [Meta’s] potential role in the election fraud narrative that sparked violence in the United States on January 6, 2021, and report on its findings.’” Meta similarly did not follow a board recommendation to provide a specific transparency report about the company’s enforcement of its content standards during the coronavirus pandemic. In these and other instances, the board’s inability to force the company to account for its role in major national crises has been painfully obvious.

I asked board member Suzanne Nossel, the chief executive of the literary-advocacy group PEN America, about these episodes. “No one believes the board can achieve impact by adjudicating a few dozen pieces of content on a platform that traffics in the trillions,” she told me. “The board’s efficacy will ultimately be judged by Meta’s willingness to take seriously our most consequential recommendations, even when they pose commercial or reputational challenges. Thus far, the signals are mixed.” In Nossel’s view, the Oversight Board’s importance is less as the “Supreme Court” that Zuckerberg envisioned, issuing law-like rulings on discrete disputes, than as an advisory council, prodding the company to improve across a wide range of its activities.  

Other efforts to broaden the impact of Oversight Board rulings have run into technical and bureaucratic obstacles. In theory, Meta has a process for applying the board’s one-off rulings to “identical content in parallel contexts,” or ICPC. But automated content-matching sometimes isn’t reliable and requires laborious human double-checking. Exacerbating the problem, company-wide layoffs in 2022 ravaged the teams that were doing the manual matching, and much of the ICPC work ground to a halt. Meta implicitly expressed its priorities by choosing which workers were shown the door. 

Needed: A New Theory of Online Speech 

The board reviews cases from around the world and typically grounds its pronouncements in international human rights law (IHRL) on free expression, such as Article 19 of the International Covenant on Civil and Political Rights. But the body’s application of IHRL often seems rote, Douek of Stanford Law School has written—accurately, in my view. Its analysis consists of invoking the standard three-part IHRL test requiring that Meta’s rules reflect “legality,” meaning they are clear and accessible; promote a “legitimate aim”; and be “necessary and proportionate.”

But IHRL, like the First Amendment of the U.S. Constitution, is designed to limit official censorship carried out by governments, not the regulation of speech by private corporations like Meta. In its three years of operation, the board has conspicuously failed to get its arms around how legal principles created to restrain governments should apply to companies. Without a theory about the dimensions of free speech in the corporate digital sphere, content decisions by companies—and by quasi-independent bodies like the Oversight Board—will inevitably seem unmoored. 

Vigorous limitations on official regulation of speech are often anchored in the idea that combining the state’s power to punish with the authority to censor undermines democratic self-government. There may well be a convincing analogous argument that platforms should also be limited in their ability to silence users, because dominant social media companies have the ability to punish individuals by cutting them off from opportunities to connect with others, express their opinions, and even pursue commercial opportunities. But the Oversight Board so far has not worked through this challenge.  

Kenji Yoshino, a board member who teaches constitutional law at New York University, conceded in an interview that “there is a lot, lot, lot to be done” in terms of articulating affirmative principles for corporate governance of online expression. But he argues that the board is gradually putting down markers in decisions that approve of the removal of hate speech that, under the First Amendment, the U.S. government could not censor.  In 2021, for example, a divided Oversight Board upheld the company’s removal of a post showing white individuals in blackface, finding that the racial caricatures were “inextricably linked to negative and racist stereotypes.”

More work is needed on this difficult problem. In the absence of what might be thought of as a coherent jurisprudence of platform self-governance, irresponsible actors have more leeway to mischaracterize what social media companies are doing when they moderate content. Thus, it has become an article of faith among conservatives that Silicon Valley liberals routinely censor right-leaning viewpoints and speakers—even though there is no empirical evidence supporting this contention. A more systematic, consistent theory of free speech as governed by private platforms could help address concerns on the right over bias by demonstrating that platforms are making decisions on a principled basis, rather than—as so often seems to be the case—making things up as they go. 

This claim of anti-conservative bias inspired the statutes enacted by Republican lawmakers in Florida and Texas that are now at the center of the NetChoice cases on free speech and social media that the Supreme Court will review during its current term. It remains to be seen how the high court will grapple with the task of theorizing how free speech should be regulated on the internet. 

The problem also shadows the European Union’s recently enacted Digital Services Act (DSA). The DSA imposes a range of transparency and risk-mitigation requirements on major social media platforms, but it remains unclear how these requirements will be enforced across the EU’s 27 member countries. The EU’s executive arm, the European Commission, has announced an investigation of X, Facebook, and TikTok over what the commission alleges is a failure to mitigate disinformation, such as dubious videos related to the war between Hamas and Israel. But the probe could easily run aground because of the DSA’s lack of clarity on what exactly platforms are supposed to do. 

If it took a more theoretically systematic approach to its work, the Meta Oversight Board might be able to propose an intellectual model applicable across multiple bodies of law, including the DSA, international human rights conventions, and Meta’s in-house “law,” which is to say, its content moderation policies. Adapting familiar free speech principles to the new circumstances of a digital age is no easy task. Even if the board were inclined to undertake it, the difficulties it has encountered persuading Meta to fully embrace its recommendations—the mixed signals to which Nossel referred—would make it far more difficult for the appellate body to be taken seriously. In the end, Meta itself would have to act as if it sees the board as a consistently legitimate source of authority on corporate governance, not a mechanism for outsourcing responsibility for content moderation or an elaborate public relations tool. It’s not at all clear that the company is prepared to make that commitment.


Paul Barrett is the deputy director and senior research scholar of the Center for Business and Human Rights at New York University’s Stern School of Business and an adjunct professor at the NYU School of Law. He formerly worked for more than 30 years for the Wall Street Journal and Bloomberg Businessweek and is the author of four nonfiction books, including the New York Times bestseller “Glock: The Rise of America’s Gun.”