
The Supreme Court of Facebook: Mark Zuckerberg Floats a Governance Structure for Online Speech

Evelyn Douek
Thursday, April 5, 2018, 9:00 AM

Mark Zuckerberg is on a charm offensive. Fresh off a round of interviews with various media outlets following the revelations about Cambridge Analytica’s use of Facebook data in the 2016 election, the Facebook CEO sat down this week with Vox’s Ezra Klein for an interview about future governance structures for his platform. The conversation was hardly the Constitutional Convention—but it did shed light on Zuckerberg’s thinking about the potential for a more robust framework of accountability and dispute resolution, complete with separation-of-powers mechanisms, for its two billion monthly users. Among the more interesting and revealing proposals he floated was the creation of an independent tribunal to adjudicate the bounds of acceptable speech in the “community” of the Facebook platform: a “Supreme Court” of Facebook, as he put it. The idea lacked specifics and is still embryonic, but given the powerful role Facebook plays in social and political communication, it’s worth engaging with Zuckerberg’s thought-bubbles—because they could have profound consequences for modern discourse.

In the diverse set of controversies that surround Facebook at the moment—among them issues of transparency, privacy and social fragmentation—it was the difficulty of deciding how to draw lines for content moderation that appeared to be at the front of Zuckerberg’s mind. When Klein asked Zuckerberg to expand on the idea that Facebook is now more like a government than a traditional company, Zuckerberg replied that this was a “fair question” and said that his goal was to create a “more democratic or community-oriented process” for deciding Facebook’s community standards. He singled out two principles as guiding this effort to ensure greater accountability: first, transparency, and, second, the development of an independent appeal process.

Journalists, academics, legislators and public interest groups have long been calling on Facebook and other social media companies to provide more information about what is occurring on their platforms. But Zuckerberg’s idea of an independent appeal process is new. To understand the reasoning behind this suggestion, it’s necessary to examine just what problem Zuckerberg is trying to solve with this proposal and why it’s the issue that is at the front of his mind.

The difficulty of hate speech

Amid the growing “techlash”—that is, the turning tide of public sentiment against the major technology companies—calls have grown for stricter regulation of companies that have typically been given a wide berth in which to innovate. The wide range of problems created by the unprecedented power of social media requires diverse responses. Concerns about lack of transparency in online political advertising have prompted the Honest Ads Act, a bill that would largely bring the treatment of online political advertising into line with that of other political ads. Meanwhile, revelations about the use of 50 million Facebook users’ data by the political consulting firm Cambridge Analytica have generated demands for rigorous and comprehensive privacy laws. As Danielle Citron and Quinta Jurecic discussed last week on Lawfare, Congress recently passed an unprecedented statutory carve-out to one of the foundational immunities for online platforms—Section 230 of the Communications Decency Act—in an effort to crack down on sex trafficking. The appetite for regulation is growing.

While it’s important not to lose sight of the big picture, each of these problems presents its own unique difficulties that must be addressed independently. For massive global platforms like Facebook, hate speech poses a particularly difficult problem requiring a tailored response.

First, every jurisdiction regulates speech differently—and not just in small, technical ways. The United States is famously an outlier among democratic nations in its tolerance for all manner of views as constitutionally protected free expression. Virtually every other democratic jurisdiction imposes some limits on extreme or hateful speech, and the regulation of speech in non-democratic countries raises an entirely different set of issues. However, as Kate Klonick has shown, the platforms’ content moderation policies were developed by American lawyers accustomed to American free speech norms. Those lawyers faced a steep learning curve as they discovered that those laws and norms were often ill-suited to other contexts. As Zuckerberg acknowledged in his interview with Klein:

I think it’s actually one of the most interesting philosophical questions that we face. With a community of more than 2 billion people all around the world, in every different country, where there are wildly different social and cultural norms, it’s just not clear to me that us sitting in an office here in California are best placed to always determine what the policies should be for people all around the world.

Even if it were possible to draw clear jurisdictional lines and create robust rules for what constitutes hate speech in countries across the globe, this is only the beginning of the problem: within each jurisdiction, hate speech is deeply context-dependent. For example, outcry arose from the LGBTQ community when Facebook started blocking the posts of many LGBTQ users who described themselves using phrases like “dyke” and “fag”—words that can be highly offensive when used with hateful intent but are also often “reclaimed” as a means of self-expression by LGBTQ people. Another elusive line delineates where hate speech ends and satire begins: Twitter sparked controversy earlier this year when, under a new German law targeting online hate speech, it blocked the account of the German satirical magazine Titanic, which had parodied anti-Muslim comments.

This context dependence presents a practically insuperable problem for a platform with over 2 billion users uploading vast amounts of material every second. As Zuckerberg acknowledged in an interview with Wired, hate speech can’t be moderated by machine learning alone but requires more intensive human moderation. And while Zuckerberg said artificial intelligence is more successful at proactively identifying other categories of problematic content such as nudity or terrorist content, it’s worth noting that AI has caused problems in those areas as well—such as when Facebook censored the Pulitzer Prize-winning photo of “napalm girl” as child pornography, or when YouTube proved unable to differentiate between terrorist content and evidence of war crimes. Regardless, even with the expanded task force of content moderators Facebook has announced, the company will have only one human moderator for every 100,000 user accounts. This hardly facilitates meaningful engagement with difficult moderation decisions.

Myanmar brings home the magnitude of the problem

These are not abstract academic questions. As the current situation in Myanmar highlights, this is an urgent problem with potentially horrific consequences. The United Nations’ high commissioner for human rights, Zeid Ra’ad al-Hussein, has called the ongoing violent military campaign against Rohingya Muslims in the country a “textbook example of ethnic cleansing.” In March, U.N. investigators specifically singled out Facebook for playing a leading role in the violence by spreading hate speech. Because of Facebook’s “Free Basics” program, in which the company partnered with local telecommunications companies to provide access to a limited suite of internet services including Facebook for free, Facebook has become a primary conduit for information in many developing countries, including Myanmar. As a result, U.N. Myanmar investigator Yanghee Lee said, “Everything is done through Facebook in Myanmar … It was used to convey public messages but we know that the ultra-nationalist Buddhists have their own Facebooks and are really inciting a lot of violence.” She added, “I’m afraid that Facebook has now turned into a beast, and not what it originally intended.”

When asked about the violence in Myanmar by Slate’s April Glaser and Will Oremus shortly after these comments by U.N. representatives, Adam Mosseri, vice president of product management at Facebook, said that the situation was “deeply concerning” and that “[w]e lose some sleep over this.” Facebook’s usual practice is to partner with third-party fact-checkers to counter misinformation on its platform, he added, which allows it to leverage knowledge of the local context while avoiding becoming the “arbiter of truth” itself. But the company had been unable to do this in Myanmar because no such organizations were available.

In an earlier interview with Recode, Zuckerberg himself talked about his unease with being forced to make decisions about what speech was acceptable in a society far removed from his own. “I feel fundamentally uncomfortable sitting here in California at an office, making content policy decisions for people around the world,” he said. “… [T]hings like where is the line on hate speech? I mean, who chose me to be the person that [decides]?” It’s unclear whether Zuckerberg appreciates the true costs of the situation. His reassurance to Klein that “this is certainly something that we’re paying a lot of attention to” is small comfort.

Zuckerberg’s proposal

The tension between general principles and policies that are transparent and fair, and the need for a contextual approach to acceptable speech that takes into account the specific circumstances, is one that legal systems are accustomed to grappling with. The ideal of the rule of law encompasses the notion that laws should be clear and accessible in advance, so people know what standard their conduct will be measured against, while their application should be fair and just in the circumstances of each individual case. More ink has been spilled on this than there is space to review here, but the point is that Zuckerberg does not have to reinvent the wheel. Incorporating this aspect of legal systems may be the instinctive motivation behind the model that Zuckerberg proposes for an independent appeals process:

Right now, if you post something on Facebook and someone reports it and our community operations and review team looks at it and decides that it needs to get taken down, there’s not really a way to appeal that. … [O]ver the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

But this proposal raises more questions than it answers. Structural and institutional issues first: who would sit on the new “Supreme Court” and, perhaps more importantly, who would decide who should sit? What would those judges’ qualifications be, and what constitutes “independence” from Facebook? On the more substantive questions: If the key constraint for Facebook in Myanmar currently is the lack of objective umpires with local knowledge, how is this solved by a new tribunal sitting in Menlo Park? Will the human content moderators have the time and training to translate the “court’s” rulings into concrete decisions, which they may have to do thousands of times a day? How binding will the court’s decisions be? Will Facebook observe its rulings even if they involve an individual or company who is particularly powerful or popular with a large segment of its users?

Voluntary restraints

In a recent article, Cass Sunstein describes how, during the civil rights movement, many segregated restaurants and hotels counterintuitively supported anti-discrimination legislation. They stood to earn more profit if they provided their services to everyone, but social norms meant they would incur a high cost in their communities if they voluntarily stopped discriminating. They therefore wanted to be compelled by law to desegregate.

This motivation seems relevant to Facebook’s current situation. To again cite Klonick, one of the key drivers of platforms’ content moderation decisions is that their economic viability depends on meeting users’ speech and community norms. In a diverse global community, however, these norms are often at odds—and voluntarily choosing between them could put Facebook in the position of taking a political stance on divisive issues, which the company doesn’t want to do. But when a government imposes constraints through regulation, social media companies can become the underdog and the hero of the story, even if the company may ultimately have decided on its own that remaining neutral and failing to remove objectionable content was becoming too costly.

As Danielle Citron has recently written in an important article raising alarm at increasingly ambitious attempts by governments to impose their own standards on the internet as a whole, “[u]ltimately, Silicon Valley may be our best protection against censorship creep.” Many in academia and civil society have come out in defense of Facebook in jurisdictions like Germany, which has passed laws imposing liability on social media platforms that don’t remove unlawful content within 24 hours. But there will also be jurisdictions on the other end of the spectrum where there is no governmental regulation, whether because constitutional constraints (such as the First Amendment) make it unlikely or because, as in Myanmar with respect to anti-Rohingya speech, the government is unwilling to confront the problem. In these markets, Facebook has to take responsibility for its choices. For all the problems a Facebook Supreme Court might create, perhaps it could mitigate that one.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
