
Facebook’s New ‘Supreme Court’ Could Revolutionize Online Speech

Evelyn Douek
Monday, November 19, 2018, 3:09 PM

The Supreme Court of Facebook is about to become a reality.

When Facebook CEO Mark Zuckerberg first mentioned the idea of an independent oversight body to determine the boundaries of acceptable speech on the platform—“almost like a Supreme Court,” he said—in an April 2018 interview with Vox, it sounded like an offhand musing. But on Nov. 15, responding to a New York Times article documenting how Facebook’s executives have dealt with the company’s scandal-ridden last few years, Zuckerberg published a blog post announcing that Facebook will “create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding.” Supreme Court of Facebook-like bodies will be piloted early next year in regions around the world, and the “court” proper is to be established by the end of 2019, he wrote.

It is difficult to overstate the potential this has to transform understandings of online speech governance, international communication and even the very definition of “free speech.” Zuckerberg’s blog post literally asks more questions about the anticipated tribunal than it answers. (He writes, “Starting today, we're beginning a consultation period to address the hardest questions, such as: how are members of the body selected? How do we ensure their independence from Facebook, but also their commitment to the principles they must uphold? How do people petition this body? How does the body pick which cases to hear from potentially millions of requests?”) But it’s worth unpacking the underlying ideas behind the proposal and the most difficult challenges that will need to be resolved in how it’s set up.

Why is Facebook setting this up?

On its face, Zuckerberg’s proposal looks like a renunciation of power by Facebook. If the Supreme Court of Facebook is a truly independent body that Facebook will accept as binding authority on its content moderation decisions, Facebook would be giving up the power to unilaterally decide what should and shouldn’t be on its platform. Why would it do this? Surely one of the benefits of being “CEO, bitch”—as Zuckerberg’s business cards reportedly read early in the company’s history—is that you can operate without checks and balances. But his proposal for an independent check on his power would recreate the separation of powers: a kind of judicial body overseeing the executive action of the thousands of content reviewers who implement the legislation that is Facebook’s Community Standards. This is a highly unusual governance structure for a private company. It is, of course, less unusual for a nation-state, which might be closer to what Zuckerberg is getting at when he refers to Facebook as a “community.”

The most generous interpretation is that Zuckerberg has heard the consistent calls for Facebook to offer more transparency and due process in its content moderation decisions. Just this week, 88 human rights groups sent Zuckerberg an open letter asking Facebook to set up a structure that gives users reasons why their content has been restricted and a chance to appeal content moderation decisions. Earlier this year, the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression called on social media companies to “open themselves up to public accountability” and noted that “third-party non-governmental approaches ... could provide mechanisms for appeal and remedy.” So, despite the criticism Facebook has faced for looking to outsource the hard work of moderation to third parties, the model is actually one that has been countenanced and called for by many experts. It could be a significant step forward for the protection of users’ due process rights and the entire online free speech ecosystem.

More cynically, it’s also in Facebook’s interest. During the company’s last two years of reckoning—what has been dubbed the “techlash”—even Zuckerberg has accepted that regulation of Facebook is probably “inevitable.” The question now is, “What kind?” Research shows that even relatively modest voluntary efforts by private firms to restrain their own behavior can stave off much more stringent public regulations. A good example of this is ad transparency on social media platforms. Facebook, Google, and Twitter have all recently unveiled ad transparency measures; meanwhile the proposed Honest Ads Act, which would compel these sorts of disclosures, has made little progress in Congress. This is despite extensive reporting about weaknesses in Facebook’s ad transparency tools.

There is also the user-relations dimension. When Zuckerberg first floated the idea in April, I analogized Zuckerberg’s desire for restraint to the voluntarily segregated hotels and restaurants that counterintuitively supported anti-discrimination legislation during the Civil Rights Movement. They did so because they would earn more profits if they provided their services to everyone, but would pay a social cost in their white communities if they decided to voluntarily stop discriminating. Content moderation decisions on Facebook are hard, and any call is likely to upset some portion of Facebook’s users. By outsourcing both the decision and the blame, Facebook can try to wash its hands of controversial rulings.

How will the independent body make decisions?

Whether the body can meet these goals of providing due process, staving off more stringent regulation and deflecting controversy will depend on how it is structured. On that issue, Facebook has still released barely any details. Zuckerberg’s post raises some of the most obvious questions—how will the body be staffed, and how will it manage its workload? Rather than speculate wildly on the basis of such sparse information, I want to focus on perhaps the most pressing issue: what will be the body’s guiding code? What standards, past decisions and values will it consider when evaluating, for example, whether a particular post is “hate speech”?

This is not an easy question. Indeed, the difficulty of answering it seems to be one of the reasons Zuckerberg wanted an independent body in the first place. In March 2018, Zuckerberg told Recode, “I feel fundamentally uncomfortable sitting here in California at an office, making content policy decisions for people around the world. … [T]hings like where is the line on hate speech? I mean, who chose me to be the person that [decides]?” No doubt his unease was only deepened when he sparked controversy by suggesting in a later interview that he didn’t think Holocaust deniers should be removed from Facebook—a perfect example of the difficulty Facebook faces. The U.S. has a famously expansive interpretation of free speech, and the court ruling that the First Amendment protected the right of Nazis to march in Skokie is remembered as one of the “truly great victories” in American legal history. By contrast, Holocaust denial is a crime in Germany. Putting aside the wisdom of either position, how should Facebook—a global platform connecting over two billion monthly users—respect conflicting standards of free speech, of which Holocaust denial is only one example?

Unfortunately, Zuckerberg’s Nov. 15 Facebook post suggests he hasn’t given this issue enough attention. The post itself suggests several, sometimes contradictory, options. When he writes of the forthcoming independent body, he asks, “How do we ensure their independence from Facebook, but also their commitment to the principles they must uphold?”—implying that the values in question are Facebook’s. These are embodied in the company’s Community Standards—which, along with its internal guidelines, are the rules that determine what content is allowed on the platform and which the 30,000 content reviewers use to make individual calls. Given that these are the rules the first-instance decision-makers will be applying, it makes sense that the tribunal should also be guided by them. This is consistent with Facebook’s goal that the standards “apply around the world to all types of content.”

But in his post, Zuckerberg also notes that “services must respect local content laws.” So will the Supreme Court of Facebook be charged with interpreting local law? In deciding whether a post was justifiably taken down, will it interpret Thailand’s lèse-majesté laws prohibiting criticism of the Thai monarchy? Will it try to interpret the sometimes differing decisions of German regional courts on what counts as hate speech under German law?

Zuckerberg also suggests that “it's important for society to agree on how to reduce [harmful content] to a minimum—and where the lines should be drawn between free expression and safety.” If it’s society that decides the lines for free expression, how will the independent body determine what society’s views are? Will it take polls? If so, will those polls be national, regional or global? Will Facebook take into consideration national voting ages? Furthermore, doesn’t leaving the decisions to “society” risk undermining protection of minorities?

These options by no means exhaust the possibilities raised by Zuckerberg’s proposal. One potential route, not mentioned by Zuckerberg, is that international human rights law could be the guiding code—as called for by the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression. Facebook has previously said it looks to international human rights documents for guidance, and I’ve written before about the strengths and challenges of this approach. For now, what is relevant is that it’s an entirely different (and, commentators have observed, less protective) body of law than, for example, U.S. free speech jurisprudence or Facebook’s “values.” Will the independent body be staffed by international law experts? How will they view, for example, the European Court of Human Rights’ recent controversial decision holding that Austria’s blasphemy laws did not violate the right to freedom of expression?

None of this is clear. The independent body may end up being staffed by a handful of American lawyers, acculturated in First Amendment norms and scouring Facebook’s Community Standards for meaning by reference to founder Mark Zuckerberg’s professed original intention to “bring people closer together.” Or it may be a range of experts with the broad understanding of different languages and cultures necessary to make the highly context-dependent determinations of whether a particular post constitutes hate speech. Or it may be something else entirely.

Even once the substantive code of reference is determined, how the body’s decisions will be implemented in Facebook’s day-to-day operations is another question. Will the human content moderators have the time and training to translate the body’s rulings into concrete decisions in analogous cases, which they may have to do thousands of times a day?

Façade only, or a substantive institution?

Though Zuckerberg appears to be seriously pursuing the idea, his current conception of the independent body is more soundbite than substance. When he says that the Supreme Court of Facebook will “ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world,” he sets an impossible goal. There is no homogenous global community whose norms can be reflected in the decisions of a single body deciding contentious issues. But that doesn’t mean the proposed body cannot be an important development in online governance, creating a venue for appeal and redress, transparency and dialogue, and through which the idea of free speech in the online global community develops a greater substantive meaning than simply “whatever the platform says it is.”

How the independent body is set up will determine whether it furthers or hinders rights to freedom of expression and due process. There is a rich literature in comparative law showing that choices of institutional design can have significant effects not only on outcomes but also on the stability and legitimacy of an entire governance structure. These choices give substance to the idea that the body is “independent.” The question of how Facebook defines the body’s jurisdiction is particularly important. Presumably it will cover any take-down decision, but what about decisions to demote content and limit its distribution and engagement, a tool Facebook has said it is using to deal with more and more problematic content? These decisions are particularly opaque and have already generated controversy. If the independent body cannot review them as well, Facebook will retain a large degree of control over which claims get ventilated and reviewed, and will be able to determine the ambit of the body’s promise of due process.

Emerging from the Philadelphia Convention, Benjamin Franklin famously told onlookers that the United States would be “a republic, if you can keep it.” Zuckerberg’s announcement of the independent body in a frazzled press call doesn’t have the majesty of Franklin’s response. But given the volume of speech that Zuckerberg’s decisions affect, the announcement deserves serious thought and attention.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
