Trade-Offs in the Design of Facebook’s Oversight Board: No Silver Bullet

Radha Iyengar Plumb
Wednesday, November 6, 2019, 8:00 AM

An overview of the research and analysis used by Facebook in developing the company’s new oversight mechanism.

A hand holding a smartphone displaying the Facebook app. (Flickr/stockcatalog, CC BY 2.0)

Published by The Lawfare Institute

Technology companies worldwide are grappling with the complex question of how to provide a platform where every voice can be heard, while also ensuring that this online space is safe and bound by agreed-upon standards of behavior. We at Facebook are no exception. With more than 2 billion users, Facebook has been working to balance these core values of free speech and safety through our Community Standards—which aim to create a place for expression and give people voice with authenticity, safety, privacy and dignity.

As we build processes and policies to support those goals, my role at Facebook is to ensure the company uses the best available research and analysis, allowing the community of people who use Facebook to share their ideas and opinions without jeopardizing the safety and sense of community of others. Here, I lay out how that research and analysis served both to identify ways in which other institutions have addressed a number of key challenges and to pinpoint where important trade-offs in priorities had to be made. This work complemented the broader engagement process and helped inform the Oversight Board's design.

We wanted to make sure our community had a say in developing our Community Standards. So in November 2018, in a public note, Mark Zuckerberg announced our intention to create a mechanism for external review and input by building an “independent body, whose decisions would be transparent and binding.” Over the course of a six-month co-design process, teams across Facebook met with more than 650 people from 88 countries in workshops, roundtables and town halls to receive feedback and suggestions, and with more than 250 people in one-on-one meetings. Altogether, these participants included experts from multiple disciplines in both the private and public sectors (including freedom of expression, technology and democracy, the rule of law, journalism, child safety, civil rights, human rights protection and others). We also conducted a public consultation via a questionnaire with both open-ended and closed-ended questions so that additional people could provide their feedback and ideas.

These workshops highlighted a number of questions and concerns. For instance, we often heard that our current systems seem opaque and inconsistent. And as legal experts Kate Klonick and Evelyn Douek noted, “[T]he same terms [came up] over and over again: ‘due process,’ ‘transparency,’ ‘independence,’ ‘diversity.’… [B]ut it seems like no one [could] agree on what those terms mean when actualized.” Commentators highlighted the value of different models for generating oversight, such as imposing fiduciary obligations as a means of better aligning safety and privacy incentives. They worried about how to incorporate core principles like representation and conceptions of procedural justice. Underlying these issues was a fundamental question: What structures and factors are necessary to grant such an institution legitimacy?

Many critiques recognized that these debates were not new: A variety of organizations in the public and private sectors around the world have wrestled with similar problems. To learn from these experiences, my team, in partnership with Paul Gowder, professor and O.K. Patton Fellow in Law, conducted a detailed analysis of existing oversight models. We built an analytic framework with associated assessment criteria, and then evaluated how the design and execution of various models fared against those criteria. We reviewed existing research from legal and academic sources on a broad range of oversight models, from the Swedish Parliament to U.S. corporate board audit committees to the Indian Panchayat system. We also reviewed a range of judicial and quasi-judicial models (such as the U.S. court system, the French Court of Cassation and a range of international courts). The analysis helped ground the subsequent design and definitional work in empirical evidence while also highlighting the ways in which the Facebook Oversight Board would necessarily differ from existing models.

This combination of inputs informed the board’s recently released charter, which discusses some key design decisions including governance structure, case referral process and membership selection process. While this charter is only one of many documents that will govern the Oversight Board, it does lay out some core principles and decisions from Facebook, many of which were informed by research and analysis inputs—from discussions in workshops to quantitative and comparative analysis of models.

Before discussing some of the key insights from this analysis, it's worth noting that each part of the research could have been an individual project in its own right. Critiques of the work rightly note that, for practical and substantive reasons, our research is a relatively high-level review; it should be read as highlighting where institutions might offer relevant experience to inform the Oversight Board's design, rather than as a comprehensive analysis. With that purpose in mind, we learned a lot from this analysis and have highlighted a few key points here.


1. The judicial model is only one relevant form of oversight. Analysis of a broader range of models is critical to understanding the trade-offs in different design decisions.

Our comparative analysis highlighted how important it was to consider a full range of oversight models—not just the common law judicial model. Our research involved defining a few buckets of oversight models: investigative institutions, supervisory institutions, arbitral adjudication processes, administrative adjudication bodies, national judicial systems (including both European continental-style appellate courts and American appeals courts) and international judicial systems.

The comparison of these different models highlighted certain operational structures and barriers that can impact the effectiveness and, ultimately, the legitimacy of the oversight board. For instance, a notable difference in these institutions was the downstream effects of the decisions they make. On one end of the spectrum is the U.S. Supreme Court, which establishes a strong and formally binding body of precedent that makes new rules applying to all actors. On the other end are, for example, one-off arbitrations, which do not exercise influence beyond the specific dispute presented to them. The capacity to set policy through precedent can ensure that the adjudicative process effectively incorporates abstract principles into organizational decisions and policies. But this binding structure, which requires decisions to shape policy, can come at the cost of reduced organizational autonomy. That is, it may limit the overseen organization's ability to set its own rules, an ability that can be valuable both for fostering policy innovation and for ensuring operational feasibility. It also risks promoting what U.S. critics decry as “judicial activism” and inviting efforts by interested parties to politicize the board by exercising undue influence over the selection of board members.

Thus, institutions must balance competing priorities: For instance, they may opt to preserve autonomy at the expense of incorporating internal, salient information that could increase operational feasibility. This trade-off is one example of how, despite a common goal of providing oversight, different oversight models strike different balances among priorities, which ultimately leads their processes to serve different functions.


2. In defining independence, legitimacy and autonomy, a focus on inappropriate dependencies and influences may be more practical and meaningful.

In some ways, the most practically applicable insight we gained from the research derived from asking what constituted oversight itself, agnostic of the specific model in question. We began by exploring the extensive research on the concept of legitimacy, which has been used both in a moral sense—for example, a state may be morally right in passing a certain law—and in a sociological sense, denoting the empirical acceptance of an actor or institution by some relevant group. These two versions of legitimacy are distinct but are arguably related, and ultimately the Oversight Board sought to achieve “legitimacy” in both senses.

This highlighted the need for an appropriately independent structure from the organizational policy process that could serve this oversight function. Drawing in part from Lawrence Lessig’s definition of institutional corruption, we concluded that it was helpful to understand independence in the sense of autonomy from improper dependence. Thus, an oversight board is autonomous when its decisions are grounded in relevant factors and not impacted by inappropriate influences.

We sought to put this into practice by establishing a separate body, called a trust. The trust will receive funding from Facebook, and Facebook-appointed trustees maintain and approve the board’s operating budget, including member compensation, administration and other needs. The trustees will also formally appoint members and can, if necessary, remove them for breaches of the board’s code of conduct. Thus, while the trust will be directly connected to Facebook in terms of both funding and appointment, it will retain the ability to make decisions on members and operations autonomously from Facebook. Trust documents establishing the formal relationship between the board, the trust and Facebook will be publicly released to increase transparency.


3. Legitimacy requires transparency and executability of findings—fair and accurate proceedings are not enough.

Our review of the detailed literature on oversight helped highlight other key elements to focus on when evaluating design features and possible trade-offs: validity (decisions consider facts and accurate information), salience (decision-makers consider applicable institutional details and operational constraints) and procedural fairness (people have genuine access to a fair decision-making process). While these attributes are critical to ensuring fair and accurate decisions, they alone cannot confer legitimacy on an institution.

A key aspect of legitimate oversight is how the bureaucracy can itself be reviewed and held accountable—that is, the transparency of its processes and findings. Both theory and practice suggest that if people understand the reasons behind a decision, this will help establish fairness, ensure consistency of decisions and raise public willingness to accept decisions in the face of any remaining disagreements after the deliberations. In fact, an extensive body of research suggests that people are more likely to accept decisions arrived at by fair procedures and are more satisfied with authorities and institutions using procedures considered fair, even when controlling for their preferred outcomes.

But there is often an inherent trade-off between the capacity to provide full and unbiased access to dispute-resolution procedures and the capacity to provide consistent, well-articulated and autonomous decisions that reflect the full reasoning of the board. This is because the material needed to inform decisions is often personal, private or, in private-sector settings, proprietary. To balance these trade-offs, each decision will be made publicly available and archived in a database of case decisions on the board's website, subject to data and privacy restrictions.


4. Design trade-offs are just the first step. All oversight bodies must grapple with questions of control, quality and scope in order to be operational.

Given the number of potential cases that boards might consider, time-constrained board members in any institution must make decisions about the degree of control they retain and the quality and scope of the cases they will review. This means setting limits on the number of matters before the board, in light of the total time available to the board and the time necessary to decide a case. Different institutions have arrived at different options to manage case flow. Some, like the U.S. Supreme Court, maintain a fixed, small docket by exercising broad discretion over which appeals to hear. Some international courts, meanwhile, empower staff to make more decisions, sacrificing some board authority. Still others divide the board into subpanels, as the French Court of Cassation and the U.S. courts of appeals do.

The workload management in these different examples highlights the benefits and risks of different design decisions. Staff functions within oversight bodies vary from receiving paperwork to rendering preliminary resolutions of disputes, which may be adopted wholesale as the final decision if the oversight body agrees. Not surprisingly, the extent of the role that staff play in the decision-making process can raise concerns about accuracy and may be seen as giving staff undue influence. For instance, if staff review matters and recommend which should be considered, this can affect both board authority and case selection. At the same time, empowered staff can help manage workload and help ensure bureaucratic expertise and consistency in large, rotating oversight bodies like the proposed Facebook board.

Implementation decisions will interact significantly with design decisions in shaping both operational feasibility and, in many cases, overall legitimacy. So, while both Facebook and its users will eventually be able to refer cases to the board for review, the board will begin its operations by hearing Facebook-initiated cases; user-initiated appeals will become available in 2020, and the board and staff may grow accordingly.


5. Success is ultimately an empirical question. With this in mind, design features can and should be revisited and revised in the future.

All oversight bodies are in some sense an effort to solve a complex principal-agent problem—that is, a situation in which some group (the principal) must delegate its authority or decision-making power to another (the agent) that is motivated by a different, and sometimes contrary, set of incentives. In these situations, the principal tries to align the agent's incentives with its own; whether that succeeds depends on the relative benefits to the agent of complying with the principal's goals versus pursuing its own.

In the oversight context, there are a number of nested principals and agents. For instance, Facebook could serve as the principal relative to the board, which plays the role of the agent: Facebook wants the board to make fair and accurate decisions that comport with operational feasibility requirements. Thus, Facebook wants to encourage the board to make decisions that are feasible for Facebook to implement, not simply preferred but impossible-to-implement resolutions.

In another nested principal-agent model, the community (for example, people who use Facebook) represents the principal and delegates decision-making authority to the agent (Facebook), which must decide what content to remove in order to balance norms of safety and freedom of expression. In this context, the board serves as another agent to publicly “double-check” Facebook, inducing Facebook’s decisions to align more closely with its community’s preferences. Separately, the board itself faces a principal-agent problem with regard to its staff: The board wants staff functions to align with the board’s priorities.
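These nested relationships can be summarized in a stylized way. In the simplest textbook version of the principal-agent model (the notation below is illustrative and not drawn from Facebook's analysis), the agent complies with the principal's goals only when its payoff from compliance, plus whatever benefit the principal attaches to compliance, at least matches its payoff from deviating:

```latex
% Stylized incentive-compatibility condition (all symbols are assumptions):
%   U_A(a) -- the agent's payoff from action a
%   t      -- any benefit (funding, reputation, continued delegation)
%             that the principal ties to compliance
\[
  U_A(\text{comply}) + t \;\ge\; U_A(\text{deviate})
\]
```

On this reading, a public oversight body works by lowering the payoff to deviation: raising the reputational cost of decisions that diverge from the community's preferences is one way to understand the board's "double-check" role described above.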

The empirical effectiveness and intended impact of design decisions like member selection and case selection, along with the downstream effects of the board's decisions, are uncertain—especially given the complexity of these overlapping incentives, the global nature of the board's oversight and the evolving context in which it must operate. Like any oversight institution, the board will need to have its design revisited to address issues as they arise. Facebook has developed the board's charter and bylaws specifically to allow for certain adjustments as the board evolves over time.


Ultimately, our findings serve as a starting point to ground and structure Facebook’s thinking on how to execute meaningful oversight in the context of social media and internet-mediated communication and services. There is no silver bullet for institutional design that will address all issues for all constituencies in all conditions. As Klonick and Douek note, “There are hard trade-offs involved, and likely no perfect answers, made all the more difficult because the world has never seen an institution quite like this before.”

When building new governance models in social media—a largely new and rapidly evolving space—it is worth considering the underlying priorities and values of external oversight in assessing and resolving trade-offs. As this oversight process is put into practice, we at Facebook will continue engaging with outside experts and conducting research to inform and improve our policies, processes and governance practices.

Radha Iyengar Plumb is the Global Head of Policy Analysis at Facebook. She has previously served in senior staff positions at the Department of Defense, Department of Energy, and National Security Council and as faculty at the London School of Economics. She has a PhD in Economics from Princeton University.