
How Much Power Did Facebook Give Its Oversight Board?

Evelyn Douek
Wednesday, September 25, 2019, 8:47 AM


An aerial view of the Menlo Park offices of Facebook, Inc (Source: Wikimedia/Fabio Isidoro)


Last week, Facebook announced the final charter for its independent Oversight Board, which will have the power to hear cases and overrule Facebook’s decisions about what can and cannot remain on Facebook’s platforms. This is a potentially pivotal moment in the history of online speech and an unprecedented innovation in private platform governance. While a number of details are still to be worked out—the operational procedures for the board, for example, will be outlined in a set of yet-to-be-released bylaws—this is a good opportunity to take stock of how much power the Oversight Board will actually have.

The answer is “it depends”—even now that the charter is final. There are a number of promising signs that the board will have substantial influence on Facebook’s decisions and policies, but a lot depends on how the experiment plays out in practice and the amount of good faith that actors bring to their roles.

Operating Procedures

It is trite to observe that procedural matters can significantly impact substance: For example, the process for petitioning the board will determine whether marginalized voices can access review, and what materials the board can consult in making decisions will determine whether the body will consider a sufficient diversity of viewpoints. Many of these matters are still unsettled for the key reason that they concern decisions that will (and should) be left to the board itself, which does not have any members yet (the first will be appointed by the end of the year). If Facebook could control procedure, it could control substance. Therefore, a great deal hinges on a set of bylaws, which the charter says will be “adopted by board members,” suggesting that the board will control the content of those bylaws (even if Facebook is initially “crafting” them).

Importantly, the charter suggests that the board will be able to control its own docket: It will have “the discretion to choose which requests it will review and decide upon” (article 2, section 1). It will also “establish its own set of procedures that its staff will use to select a pool of cases from which the board can choose” (art. 2, s. 1). This is critical: Facebook makes millions of content moderation decisions every day, so case selection will be a significant part of determining the board’s work.

However, the practical design is important. While Facebook has said that “there will be a portal on the board’s website for verified Facebook users to submit cases for consideration,” it is not yet clear what this will look like, and much will turn on how accessible and easy to use the portal is. In Germany, for example, Facebook was fined for under-reporting complaints under the country’s new NetzDG law because the reporting tool was “too hidden.”

For any review, the board is empowered to “gather additional information, including through subject matter experts, research requests or translation services, that may be required to provide additional context for the content under review” (art. 3, s. 3). This, too, is a promising sign: If the board were constrained to consider only the information that Facebook presented to it, it would not be substantively independent. Furthermore, many decisions about speech and its meaning are highly context dependent—indeed, one of the major criticisms of Facebook’s content moderation decisions to date is that it has not paid enough attention to local context in drawing lines. This additional inquisitorial power for the board is an important measure in ameliorating this deficit by allowing the board to gather all the information it needs to understand critical context.

But whether the board will be able to gather sufficient information in practice will depend on whether it has sufficient budget and time to do so in each case. So far, all that’s public on this matter is that Facebook is establishing an independent trust that will be responsible for reviewing and approving the board’s budget. Facebook has committed to supporting the board “consistent with a reasonable allocation of Facebook’s resources” (art. 5, s. 3)—a standard with some room to move.

Even less promising, the board is also expected, in all cases except for special “expedited cases,” to issue decisions “within two weeks.” As I have argued previously:

The two-week deadline seems somewhat arbitrary: A functional board would have to balance speed in decision-making in order to assure the possibility of substantive remedy, with the need to review cases carefully. It is unclear why a strict standard of two weeks strikes a good balance—a fortnight is an age in terms of the internet zeitgeist (justice delayed is virality denied) but perhaps not long enough for a multi-member board to gather and consider all the materials it needs.

The board is also empowered to receive what are essentially amicus briefs in cases from “individuals and groups, immediately depicted or impacted by the content in question” (art. 3, s. 3). Who counts as “immediately impacted” could be a matter of some contention. Might Facebook start objecting to the board’s interpretation of this power if the body consults too broadly in certain cases?

Of all the things that are still unspecified, one is the most concerning: how board members can be removed before the expiry of their terms (which last for three years and are renewable for a maximum of three terms [art. 1, s. 3]). The charter merely provides that “[t]he trustees may remove a member before the expiration of their term for violations of the code of conduct, but they may not remove a member due to content decisions they have made” (art. 1, s. 8). The relevant code of conduct is not yet public. Security against removal is one of the most important guarantees of independence, and so it is crucial that the code of conduct defines clearly and narrowly the grounds on which board members might be discharged before their term ends.

Subject Matter Jurisdiction

I have long argued that if the board’s “subject matter jurisdiction” (that is, the range of Facebook decisions that it is empowered to review) is too narrow, its legitimacy will be undermined. For example, if the board can review only cases where Facebook decides to take a post down and has no power to review algorithmic ranking decisions, Facebook can keep troubling posts away from the board simply by downranking them so that no one sees them, without ever formally triggering the board’s jurisdiction. Ad policies, and especially political ad decisions, are critical decisions about political discourse—yet they can be opaque and inconsistent. There is no reason these decisions should not also be the subject of independent review.

In a development since previous documents, there are some signs that Facebook has moved toward giving the board jurisdiction in such cases. The charter defines the board’s authority as including the power to “[i]nterpret Facebook’s Community Standards and other relevant policies (collectively referred to as “content policies”)” (art. 1, s. 4), which could conceivably include ad policies and any other content decisions. There is also a tantalizing suggestion buried deep in an attachment to the announcements from last week that “[o]ver time, the board may look to decide upon other actions (e.g. downranking or applying interstitial warnings). However, these options may not be available when the board first begins its operations.”

It is good that the charter has not ruled out this more expansive jurisdiction. But Facebook should be more transparent about when and how the board will be given the full jurisdiction it needs to serve as a true check on Facebook’s content decisions.

Weak-Form Review Is Not Necessarily Weakness

When the draft charter was released, Facebook indicated that the board would hear cases about “taking down or leaving up content” and that Facebook might request policy guidance. But in the six-month global consultation process that Facebook conducted, there was essentially only one issue on which near consensus emerged: “[T]he Board’s decisions should influence Facebook’s policy development. Without some policy influence, the Board would not be seen as valuable or legitimate.” The final charter somewhat reluctantly adopts this feedback and says that in any final decision the board can “include a policy advisory statement, which will be taken into consideration by Facebook to guide its future policy development” (art. 3, s. 4). Separately, Facebook “may request policy guidance from the board”—but all such guidance will be “advisory” (art. 3, s. 7.3). Finally, while the decision in any individual case will be binding on Facebook with respect to the content directly involved, “[i]n instances where Facebook identifies that identical content with parallel context — which the board has already decided upon — remains on Facebook, it will take action by analyzing whether it is technically and operationally feasible to apply the board’s decision to that content as well” (art. 4). While Facebook needs to “transparently communicate” actions taken as a result of policy guidance, there is no such requirement when it comes to whether decisions are implemented in these “identical” cases.

There is a great deal to parse here. But one key point is that the situations in which the board’s decisions are actually binding on Facebook are very narrow—namely, just the cases involving pieces of content directly before the board. Additionally, Facebook has a wide degree of discretion as to whether and how to implement the board’s decisions and policy guidance more broadly.

To American lawyers, this may seem to be a significant weakness. The board’s power to bind Facebook directly is limited, and even in that narrow realm Facebook could overturn any decisions of the board it did not agree with by simply changing its Community Standards or underlying “values.” The U.S. Supreme Court, by contrast, is the paradigm example of an apex court that exercises strong-form judicial review: Its interpretations of the Constitution override the other branches of government, and the Constitution itself is very difficult to amend, so the Supreme Court’s decisions are hard to displace. But few constitutional democracies have courts that exercise this level of finality in their decisions, and there is a rich comparative constitutional law literature on the value of these more “weak(ened)” varieties of judicial review (where, for example, the legislature can pass laws “notwithstanding” any judicial interpretation of the constitution, or the doctrine of parliamentary supremacy gives legislatures the final word). In constitutional democracies, these arguments rest in part on the ability of weak-form review to address the countermajoritarian difficulty created by unelected judges overruling more democratically accountable branches of government.

Of course, in Facebook’s case no actors are democratically accountable. Still, there are benefits to not merely “exchanging one set of tyrants for another,” even in this context. As the Electronic Frontier Foundation commented, “We've been worried that because such a board might have no more legitimacy to govern speech than a company, it should not be given the power to dictate new rules under the guise of independence. So we think an advisory role is more appropriate, particularly given that the Board is supposed to adhere to international human rights principles.” I have explored this at length elsewhere, but there are three points worth noting here.

First, consider the qualification that Facebook will only apply the board’s decisions to “identical content with parallel context” in cases where it is “technically and operationally feasible.” Applying the board’s rulings in these “identical” cases seems like the most obvious way for the board to have impact: After all, with millions of content moderation decisions made every day, if the board’s authority is confined to the few dozen cases per year that actually appear before it, its influence will be negligible.

What is “technically and operationally feasible” is an opaque and malleable standard, which gives Facebook considerable discretion to apply board decisions very narrowly. But as Daphne Keller has argued persuasively, because speech is so contextual, automatically taking down identically worded posts across the platform risks censoring a large amount of valuable speech. Indeed, this is Facebook’s position before the Court of Justice of the European Union (CJEU) in a pending case, which may end with the CJEU ordering Facebook to take down posts “identical” and “equivalent” to content already found unlawful, in certain circumstances. It would be ironic for Facebook to resist this order from the CJEU but grant its own “court” greater power.

Second, the online speech environment is young, dynamic and under-researched. The best rules are likely to evolve constantly as the ecosystem changes and we learn more about it. Locking Facebook into rules it could never revisit would significantly hamper its ability to respond to new developments or understandings. Because of this, it is better for Facebook to have acknowledged the reality that the board’s decisions may not always be generally or enduringly applicable than to commit itself to the unrealistic stance that it will never depart from the board’s rulings.

Third, experience suggests that the practical strength of “weak-form” review is often much greater than it appears in theory. Facebook is establishing the board at least in part because it wants to garner legitimacy for its content moderation processes and outsource controversial decisions—Facebook would undermine these goals, and its own experiment, if it consistently restricted the impact of the board’s decisions. This is especially true given the direction in the charter that the board should select cases “that have the greatest potential to guide future decisions and policies” (art. 2, s. 1). For Facebook to direct the board to select such cases and then continually disregard that future influence would be an unforced error.

Even if this weak-form review is necessary or beneficial, a lot still hinges on how it is implemented in practice—including the good faith with which Facebook interprets what is “technically and operationally” feasible and how transparently it communicates this reasoning. The main benefit of the board is its potential to bring transparent public reasoning to difficult and perhaps unresolvable disputes about where to draw free speech lines, and that may ultimately be better facilitated by a dialogic process between Facebook and its new board than by strict finality. There is a legitimate focus right now on restraining Facebook’s almost unbounded and unaccountable discretion. But hopefully, the Oversight Board will become an enduring institution that represents more than merely a correction of this current imbalance.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
