
Facebook Releases an Update on Its Oversight Board: Many Questions, Few Answers

Evelyn Douek, Kate Klonick
Thursday, June 27, 2019, 3:41 PM

It’s been roughly six months since Facebook started collecting global feedback on its proposal to create an oversight board for content moderation decisions. This morning, the platform released the findings of that process in an epic report—almost 250 pages of summary, surveys, public comment, workshop feedback and expert consultations. The good news is that the report does give some much needed transparency on what users and key stakeholders want from this new board, which will make Facebook more accountable should the company choose to set counter-majoritarian policy unilaterally. The bad news is that (as one might expect from an attempt to achieve global consensus) it is not at all clear what exactly users want.

If you were hoping for decisions on what the next steps will be—or even what they should be—this report isn’t that. That seems likely to come when the board’s charter is released in early August. Today’s document, despite its length, continues the trend of Facebook statements that raise more questions than they answer. The overwhelming impression from reading the extensive public input about the proposal, as well as the summary of that feedback by Facebook representatives, is that no one really knows what the body will be or how it will work.

As the report notes, the vision for Facebook’s oversight board is to provide “additional transparency and fairness” to the platform’s content moderation system. The establishment of the board comes at a moment in which there is greater public awareness and dissatisfaction with the way that large tech platforms make decisions concerning what they do and don’t allow on their sites, and Facebook’s proposal is the most ambitious—a proactive experiment with a fundamentally different model of governance. This report makes at least two things incredibly clear: first, how necessary it is to have a forum to publicly work through fundamentally irresolvable disputes about the right way to manage online speech; and second, how monumentally difficult the task of creating such a body truly is.

What’s in the Report?

The report begins by outlining the (relatively short) history of the idea of an oversight board: the process began with Harvard Law Professor Noah Feldman pitching the idea to Facebook in early 2018, followed by CEO Mark Zuckerberg first publicly floating the idea of a Facebook “Supreme Court” on a podcast in April 2018. Zuckerberg announced that the board would become a reality in a blog post in November, and Facebook released a draft charter for the body in January 2019.

Facebook then began a global consultation process following the draft charter’s release. The process included two-day “workshops” in six cities and 22 roundtables around the world, which included more than 650 people from 88 countries. Facebook also spoke with more than 250 people in one-on-one meetings. (Disclosure: both of the authors attended some of these workshops and meetings.) The company supplemented these efforts through an online questionnaire, to which 1,206 people responded; 138 people also submitted additional essays through this portal, and 52 of those people consented to have their essays shared publicly. These 52 essays are included in an appendix to the report, and the range resembles that on any Facebook comment thread: Some writers earnestly respond to Facebook’s questions, others take the opportunity to troll, and a certain number believe they have the answers and can’t work out why everyone else finds this all so difficult.

The report then summarizes research commissioned by Facebook into the range of oversight models that exist globally. This research report, also appended, surveys six “families” of oversight design (investigative institutions, supervisory institutions, arbitral adjudication processes, administrative adjudication bodies, national judicial systems and international judicial systems) and concludes that “there is no perfect system—all systems involve trade-offs and require prioritization based on overall goals.” The “core finding” that “key design issues can drive longer-run legitimacy” is common sense but only thinly linked to the actual substance of the report. The methodology of how these particular families of oversight were selected or grouped is unclear, and the link between this survey and the determination of “design features of particular importance” is not strongly established. For example, although the report repeatedly claims that India’s panchayat system of local-council dispute resolution was among the institutions studied, that system is described in a single sentence and only a handful of sources are referenced. This example reflects the overall problem with the attempted scale of this research: It is laudable in its ambition for inclusion, but it sets an overly ambitious goal of a global survey of oversight institutions, one that cannot be adequately accomplished in such a short report, or perhaps in any report. Sweeping so broadly also leaves the report little room to dive into the rich literatures on different varieties of institutional design that could have provided valuable insight into the very issues Facebook is grappling with.

Perhaps the most interesting part of the report is the feedback Facebook received during its public consultation process, which the report groups around the topics of membership, decisions and governance. This quasi-democratic survey is the most substantial part of the report, and the summary by the report’s authors is “intended to guide Facebook as it continues to answer open questions about the design of the Board.” But few answers and little consensus seem to have emerged from these discussions. If anything, in summarizing the feedback so that every single issue has an “on the one hand” and an “on the other hand” set of contributions, the authors of the report seem inadvertently to be proving a very different point: These are really hard questions, and there are no right answers.

Yet with the board due to be in operation by the end of the year, it is time for Facebook to take the lead and articulate a clear vision for it. A careful read of the summary does reveal a few important developments, however.

Narrow “Subject Matter Jurisdiction”

There is perhaps only one thing about which the report is unequivocal: Both the foreword (written by Nick Clegg, Facebook’s vice president of global affairs and communications) and the last pages of the report note that the oversight board is intended only to focus on “content decisions.” The report states that Facebook has “been relatively clear” that this includes only individual content decisions under the community standards. Clegg specifically singles out News Feed ranking and political advertising as “important issues” beyond the remit of the board—two areas that one of us (Douek) has argued need to be included in the board’s jurisdiction if the body is to make a meaningful contribution to the legitimacy of Facebook’s content moderation ecosystem.

The report asserts that the remit of the body, which is intended to oversee content decisions, will obviously be limited to disputes about individual pieces of content under the community standards. But it is disingenuous to suggest this is so clear-cut. Mark Zuckerberg’s original post announcing the board discusses at length the ranking decisions the company makes in relation to “borderline content”—content that approaches the line drawn by the community standards regarding what will be prohibited. This itself reveals one of the central problems in defining the scope of the board: The company will have to address situations in which a decision on any one piece of content has implications for both new policy recommendations and controversial changes to Facebook’s core product. In particular, Zuckerberg’s reference to borderline content was an acknowledgment of how ranking decisions are bound up in the overall ecosystem of content moderation. Fundamentally, if the board cannot review any ranking decisions, then Facebook is creating a loophole for itself: It can simply keep decisions about certain posts beyond the board’s reach by severely limiting the circulation of those posts rather than outright banning them (a practice colloquially known as “shadowbanning”). Under this regime, private content moderation, which many have struggled mightily to move out of a mysterious black box, just moves into a different type of black box.

This threat is just as real for the site’s exclusion of political ads from the board’s purview. Facebook has extensive rules around political ads, including who can run them and what must be disclosed. An excellent recent paper shows that while Facebook claims to resist being an arbiter of political discourse, it actively vets paid political content on its platform “in often opaque ways, according to policies that are not transparent, and without clear justifications to campaigns or the public as to how they are applied or enforced. This limits options for political practitioners to contest regulation decisions.” This is exactly the kind of problem that the board is ostensibly being set up to solve, and the mere fact that content is paid rather than “organic” (as Facebook refers to unpaid posts) should not be cause for different treatment. Paid political advertising is of substantial importance to public discourse, and so its moderation should be as transparent and principled as the rest of the platform’s content.

Of course, it is not surprising that Facebook does not intend to give the board any say in decisions about News Feed rankings or advertising, which go far more directly to the company’s business model. But by carving out these areas, Facebook undermines the legitimacy it is working so hard to build for the board and for its content moderation more generally.

Broader Remedial Powers

The report notes that “[a] strong consensus emerged that the Board’s decisions should influence Facebook’s policy development. Without some policy influence, the Board would not be seen as valuable or legitimate.” This position was less clear in the draft charter, which noted only that “Facebook can incorporate the board’s decisions in the policy development process [and] may request policy guidance from the board” (emphasis added). But 95 percent of respondents to the public consultation favored the board’s being able to recommend changes to Facebook rules, and there is little point to a consultation process that ignores such an overwhelming response. It is still unclear what form this dialogue between the board and Facebook over policy will take, but it is promising that Facebook seems to have heard this feedback and adjusted its plans for the board accordingly. The question now is what standard Facebook’s policies should be measured against.

Statement of Values

One of the most interesting parts of the report is the discussion of the board’s need for a foundation for its decision-making: some sort of “constitution” or “values commitment,” which both of us have called for in previous work. One of us (Douek) has noted that the list of human rights buzzwords included as a statement of values in the draft charter avoids exactly the difficult questions that such a statement should resolve, because “the very nature of hard freedom of speech cases is that they involve trade-offs between these values.” While the report acknowledges this critique, it does not give any further insight into how Facebook might resolve these trade-offs.

Yet another question is whether Facebook will set these values for the board, or whether it should be left to the board to better define these values for Facebook. The report quotes one of us (Douek) as arguing that Facebook itself, not the board, “must make the difficult choices about which values it wishes to prioritize.” Platforms do have a prerogative to define their own missions and models. Given that they already make these choices implicitly in the way they operate their sites, it is better for them to do so openly—but even making these choices upfront will only go so far. Government constitutions abound with vague phrasing like “free speech,” “dignity,” and “right to a fair trial.” In framing those constitutional values, their drafters make clear which values have been chosen to be prioritized over others. After that, it is the business of legislatures and courts to continually define what this means in practice and what the precise balance should be.

Tellingly, the original vision for the board included exactly this sort of objective. In his papers setting out his conception of a Supreme Court for Facebook, made public for the first time in this report, Professor Noah Feldman describes the board as an answer to “the most pressing threat” to freedom of expression, which comes from the asymmetrical “pressure that the platforms face to limit expression in order to satisfy engaged, committed advocacy groups.” While “[f]aced with such demands, a legislature would almost always say yes. The courts, however, can say no. And their ‘no’ is informed by their commitment to the greater principle of free expression. As keepers of the basic right, the courts have become an effective counterweight to censorship.” Later descriptions of the board retain the core vision of a separation of powers, introducing a “judicial”-style body as a check against Facebook’s “legislative” decisions in writing its community standards, but they have not so clearly framed the board’s purpose as a guard against censorship or over-moderation.

Rather than a simple list of values that sets voice alongside a number of other interests, Feldman’s original proposed statement of values reads: “Facebook commits itself to the values of free expression and free association, consistent with respect for public safety, equal dignity, community, and law.” This is a clear prioritization of voice over other interests. But sometime between this proposal and the release of the draft charter, Facebook stepped back from this stance and included voice as only one value among many, saying that the board should make its decisions based on a “set of values, which will include concepts like voice, safety, equity, dignity, equality and privacy.” There is nothing inherently wrong with choosing to give a greater priority to safety and dignity rather than voice, for example, but trying to serve all values equally is doomed to fail and does not provide adequate guidance or constraint to future board members.

Despite noting these disputes, the report does not give any further answers on how they might be resolved. Nor does it weigh in on the live issue, raised by many in civil society, of whether Facebook should formally adopt international human rights norms as the basis for its decision-making—only reciting (as it does on every issue) the arguments for and against.

Importance of Diversity

The only matter on which there is almost complete unanimity is the importance of diversity in the ultimate composition of the board, given the diversity of users who will be affected by the board’s decisions. This came through clearly in the appended public comments as well—there is a clear hope that the board will be able to bring a greater degree of diversity in cultural and linguistic knowledge to content moderation at Facebook, which has historically been highly influenced by American legal culture and norms. But this, too, suffers from a lack of definition. Sure, the board should be diverse—but what does Facebook mean by diversity? Is the board meant to represent diversity like a legislature, which represents its constituents in a somewhat proportional sense? Or does Facebook have a more amorphous idea that it would just like “to see” a range of diversity, as in the U.S. Supreme Court—a body that, while slowly becoming more diverse in gender and race, has become less diverse in terms of nonlawyers serving as justices and is now monolithically populated by graduates of a small handful of elite law schools? Should regional diversity trump race? Should race relate to religion? Should economic diversity be at play? Educational background? Somewhat amusingly, so important is diversity that feedback was even “split” over the level of familiarity board members should have with technology and social media—a qualification that might have seemed uncontroversial.

A Board in Want of a Vision

As the report notes, the obvious next task for Facebook is to affirmatively state its goals for the board—a much needed act of “[e]xpectation management” given the many different understandings of what the body might look like that are expressed in the public feedback. A selection of writers clearly associate the word “board” with corporate boards. The report notes that some commenters pointed to the fact that “most boards are part-time positions” in the section on board members’ terms of service. But clearly the Facebook oversight board will be a very different body from corporate boards, including Facebook’s own. Other commentators seem to envision a role more familiar in many human rights contexts, like U.N. missions, suggesting a “fact-finding function” for the board. Yet others seem to expect a more audit-style function, where board members not only reactively review policies in the cases of disputes but also “proactively monitor the policies and their enforcement.”

Facebook should not be faulted for waiting to articulate this vision until now, and until after public consultation. As the report also notes, a project like the board “is the work of years.” But with only a few months until the board is supposed to hear its first case, this is the moment for Facebook to “resolve tensions between minimalist and maximalist visions of the Board. Above all, it will have to demonstrate that the Oversight Board—as an enterprise worth doing—adds value, is relevant, and represents a step forward from content governance as it stands today.”

Ironically, the very lack of coherent agreement in the public responses makes the best case for the value that the board can add. These are hard questions, many with no right answer. In such cases, where there is likely to be substantial disagreement over any decision, an authority’s exercise of power is legitimized through transparent public reasoning that relies on arguments reasonable people can be expected to respect. The board is exactly the kind of forum where this kind of reasoning can occur, and the body can bring coherence and legitimacy to Facebook’s substantial exercise of power over public discourse.

Facebook should be commended for making the results of its consultation public, including the high level of angst and frustration, expressed in many of the comments, at the mistakes Facebook has made to date. Now the company must start answering the questions that have been raised—here’s hoping the board’s charter, planned for release in early August, will do exactly that.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
Kate Klonick is an Assistant Professor of Law at St. John's University Law School, an Affiliate Fellow at the Information Society Project at Yale Law School, and a Future Tense Fellow at New America. Her research and writing looks at networked technologies' effect on the areas of social norm enforcement, freedom of expression, and private online governance. Her work on these topics has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The New Yorker, The Atlantic, Slate, The Guardian and numerous other publications.
