
Facebook's Oversight Board Bylaws: For Once, Moving Slowly

Evelyn Douek
Tuesday, January 28, 2020, 4:15 PM

The new bylaws include a number of promising signs about Facebook’s commitment to the Oversight Board experiment. But the board’s original ambit of operations will be fairly limited.

Facebook CEO Mark Zuckerberg presents a keynote address (Anthony Quintano, https://tinyurl.com/vy5tsrl; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

Published by The Lawfare Institute in Cooperation With Brookings

As America’s longest-serving member of Congress once said, “I’ll let you write the substance … and you let me write the procedure, and I’ll screw you every time.”

This wisecrack explains why the newly released bylaws for Facebook’s Oversight Board are important. Facebook announced in 2018 that it would be setting up the board as an independent institution to review the company’s decisions about what is or is not allowed on its services.

Last year, Facebook released its global consultation report and final charter for the board. As I wrote at the time, those documents were high level and vague, and the board’s power would depend on practical and operational matters. (I was invited to a consultation on an earlier version of the bylaws and have participated in several workshops on the board, all unpaid and in my academic capacity.) Now, little by little, Facebook is filling in these details. The bylaws are a substantial step forward in drawing a picture of how the board, which is due to start hearing cases in the first half of this year, may work in practice. Although seemingly technical, they will therefore have a large impact on the substantive work of the board.

The new bylaws include a number of promising signs about Facebook’s commitment to the Oversight Board experiment—not the least of which is the company’s reaffirmed pledge to fund the trust that supports the board for at least six years. But the bylaws also betray an uncharacteristically incremental approach from Facebook: The board’s original ambit of operations will be fairly limited, and although the bylaws promise to ramp up the board’s power “in the future,” there are no firm timelines.

At 46 pages, the bylaws are full of procedural rules that will no doubt become important once the board starts working. Here, I examine some of the key new details.

Jurisdiction

Since Facebook first announced the idea of the board, I have written about how the board’s influence will be significantly affected by its “jurisdiction”—that is, the range of cases it will be allowed to hear. So it’s good news that the bylaws set out quite an expansive jurisdiction for the board. The bylaws state that “in the future” people will be able to appeal to the board in matters relating to a broad range of content types, including groups, pages and—perhaps most significantly, given the ongoing controversy—advertisements. The bylaws also state that the board will review content rated “false” by third-party fact-checkers—although it is not clear whether this is a role the board is well placed to perform, given that it involves a purely factual, rather than doctrinal, inquiry. But overall, the board will be empowered to review a large share of the kinds of decisions Facebook makes about what is allowed on its platform.

But there’s a catch. The bylaws leave unstated when exactly the board will gain this broad jurisdiction, other than “in the future.” And the timeline will be “subject to Facebook’s technical and procedural improvements.” Facebook has set itself no obligations to report on its progress in making these improvements.

Instead, when the board begins operations, it will be able to review only decisions to remove individual pieces of content. Take-downs might appear to be the kind of decision that most threatens free expression. But a number of the most controversial content moderation decisions Facebook has made in recent years were decisions to leave content up, not take it down: Think of the Nancy Pelosi cheapfake video, in which footage of the speaker of the House was misleadingly edited so that she appeared intoxicated; hate speech in Myanmar; or the years that Facebook hosted content from Infowars chief Alex Jones before finally deciding to follow Apple’s lead and remove Jones’s material.

Limiting the board’s jurisdiction to take-down decisions stacks the deck somewhat. It is like introducing video review to tennis to make calls more accurate but allowing players to challenge only balls called “out,” never balls called “in,” no matter how erroneous the call seems. For those in favor of longer rallies—which probably includes the broadcasters and advertisers—this is a win, because only rallies cut short can be appealed. For those in favor of more accurate calls generally, not so much. Indeed, on a press call, Facebook made this bias toward leaving things up explicit: The limited ambit of operations at the start is “due to the way our existing content moderation system works, and in line with Facebook’s commitment to free expression” (emphasis added). Maybe so, but it is a disappointing limitation and represents an uncharacteristically incremental approach from a company famous for “moving fast.” It is important to hold Facebook to its commitment that this will change in the near future.

The bylaws also omit any discussion of whether the board might review how Facebook’s algorithm ranks content to display to users. I previously noted a tantalizing suggestion—buried in an attachment to the final charter—that the board might ultimately review decisions to downrank content. This is not mentioned in the bylaws—and the absence of this authority would be a significant constraint on the board’s power. Most worryingly, this provides Facebook with a loophole through which to avoid board oversight: The platform can simply downrank hard cases rather than taking them down completely.

Finally, the bylaws state that the board cannot review cases in which Facebook has taken down content under what the company determines to be a legal obligation. This makes sense: Facebook cannot give the board more power than Facebook itself has under the law. But the line will not always be clear cut—as demonstrated by the controversy over Instagram’s recent decision to take down posts expressing support for assassinated Iranian general Qassem Soleimani, ostensibly because of U.S. sanctions laws. On a press call about the bylaws, however, Facebook seemed to backtrack and said that posts removed for “praise and support” would “absolutely” be reviewable.

Case Selection and Docket Control

The other big determinant of the board’s influence will be its control over the cases it hears. Facebook makes literally millions of content decisions every day, so the board could be swamped with low-impact issues if it did not have discretion to choose the cases it reviews. Here, the bylaws confirm the good news the charter implied: The board has sole discretion over its docket, including the power to accept or reject referrals from Facebook. (This is necessary; otherwise, Facebook could keep the board busy with only the cases it wanted the board to hear and crowd out cases submitted by users.) The only exception to this discretion is an expedited review process—to be used in “exceptional circumstances” with “urgent real-world consequences”—which allows Facebook to send cases to the board for an automatic review to be completed within 30 days. It is not hard to imagine this process being used to address decisions concerning political content in the days before an election, for example.

The bylaws state that cases will be selected by a case selection committee made up of board members, which will set the criteria for its decisions and make them publicly available. Notably, members of the case selection committee will serve short terms of only three months, after which the seats will rotate to other members of the board. Given that the criteria for case selection will determine the strategic direction of the board and its influence, the turnover on the committee has the potential to be very disruptive.

Timelines for Decisions

The bylaws set a number of hard deadlines for board operations. The entire process of case decision and implementation must take place within 90 days of Facebook’s last decision on the content in question, including seven days for Facebook to implement a decision once the board has issued it. The 90 days must also incorporate time for potential panel dissolution and re-review: When a majority of the board does not agree with the decision of a panel on a case, it can send the decision to a new panel for a fresh decision. (There is no apparent limit on the number of times this could occur.) As mentioned, for an “expedited review,” the board must complete its work within 30 days.

The deadlines for non-expedited cases seem arbitrary. It is unclear why these time limits should be set in stone, and, indeed, this could be a matter on which the board itself might amend the bylaws in the future. (Though it can do so only subject to Facebook’s agreement, as I examine in more depth below.) As I have written before:

A functional board would have to balance speed in decision-making in order to assure the possibility of substantive remedy, with the need to review cases carefully. It is unclear why a strict standard ... strikes a good balance—[90 days] is an age in terms of the internet zeitgeist (justice delayed is virality denied) but perhaps not long enough for a multi-member board to gather and consider all the materials it needs.

Timelines should be more flexible. Some cases will obviously be urgent; others will be of a kind that has vexed constitutional courts around the world for centuries and will require ample deliberation to produce a quality decision. Quality reasoning is perhaps the most important variable in shoring up the board’s legitimacy. Rushed decisions in hard cases serve no one.

Information to Which the Board Will Have Access

Encouragingly, the bylaws grant the board complete discretion to request and receive information from a global pool of subject matter experts before coming to any decision, as well as the power to request issue briefs from public interest organizations. This will help ensure that the board makes informed, well-reasoned decisions that inspire confidence in its rulings. These initial bylaws, however, do not provide for third parties to file amicus briefs without a request from the board itself. The board may—and should—amend this. Amicus briefing would be a valuable way of ensuring that stakeholders can be heard, regardless of whether the board knows to ask them.

The board can request a range of additional information from Facebook—including the level of engagement and reach of the content, and information about additional pieces of content similar to the case in question—if it views this as “reasonably required” to make its decision. This kind of information will be critical for the board’s understanding of the impact of any particular decision: A ruling on a piece of content with a large reach and many similar posts will be much more far-reaching than a ruling on a unique post on a page with a small audience. Facebook can decline to provide this information, however, if it determines that the information is “not reasonably required for decision-making in accordance with the intent of the charter” or for a range of other technical and legal reasons. This is an ambiguous standard, which might become the site of conflict if Facebook uses it to avoid releasing relevant engagement data about the content on its platform.

Removal and Code of Conduct

The board will be tasked with making controversial decisions on hard questions. Its members must be ready to be unpopular. Protection against removal is therefore essential to preserving the board’s independence. Courts around the world deal with this problem in various ways, including life tenure and fixed terms. But, as Mark Tushnet summarizes:

The difficulty lies in designing mechanisms of discipline and removal ... whose operators will find it difficult to use some pretext for discipline or removal actually predicated on disagreement with rulings on the merits. There are two generic solutions. The first is to specify relatively precise grounds for discipline or removal; the second is to place the decision within the control of the judiciary itself.

The current bylaws are a halfway house. Removal of a member requires a two-thirds vote of the board, is subject to approval by the trustees managing the trust that stands between Facebook and the board, and may be considered only for a violation of a code of conduct annexed to the bylaws. The trustees are charged with determining whether a member has violated the code of conduct, and they can act on removal requests from the board, the board’s administrative director or the public—or of their own accord. It is unclear how the public can make these requests, when the trustees will act of their own accord, and how the trustees will deliberate about whether a member has breached the code of conduct.

These omissions would be less concerning if the code of conduct were narrowly and clearly drawn. However, it is expansive and includes ambiguous terms. The terms of the code are explicitly stated to be “not exhaustive” and include the possibility of disqualification on grounds of “morality,” for example, which are clearly open to interpretation. The board’s legitimacy will be severely undermined if there is any appearance of pressure or discretion around when and how members are removed. Although the bylaws state that “[m]embers will not be removed due to content decisions they have made,” less discretion in the removal process would have increased confidence that this will hold true.

Potential Constitutional Crises

Finally, the bylaws raise a vexing issue to do with disputes about amendments. The bylaws include technical specifications about who can amend which sections and which other parties must agree to amendments. A number of amendment rules are subject to the amendment in question not “contradict[ing] the board’s charter.” But the bylaws and charter are silent on who is the authoritative interpreter of the charter or who decides if a party is withholding agreement to amendments in bad faith. I look forward to the Facebook Oversight Board’s own Marbury v. Madison, should a standoff occur.

This is just an encapsulation of a larger truth that hangs over the entire Oversight Board experiment: For the board to work, all participants will need to act in good faith. The bylaws are a big step forward in showing more detail about how the board will operate. But as the various crises around the world right now suggest, written rules get you only so far—and unwritten norms are crucial too. The coming months will give actors plenty of opportunity to show their bona fides as the board ramps up into operation in 2020.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
