
The Facebook Oversight Board’s First Decisions: Ambitious, and Perhaps Impractical

Evelyn Douek
Thursday, January 28, 2021, 11:23 AM

In its first five decisions, four of which overturn Facebook content moderation decisions, the board set an ambitious agenda for itself and Facebook.

A phone displays the Facebook login screen. (Kanhaiya Raut, https://pixahive.com/photo/using-social-media/; CC0, https://creativecommons.org/share-your-work/public-domain/cc0/)


After seemingly endless announcements heralding the arrival of the Facebook Oversight Board (FOB) over the past 18 months, the board has, at last, released its first batch of decisions.

There are five cases, each running a little over 10 pages, and each telling Facebook what to do with a single piece of content. All but one are unanimous. Four overturn Facebook’s original decisions to remove posts, and only one agrees with Facebook.

In the time it took you to read that sentence, Facebook probably made thousands of content moderation decisions to take down or leave up pieces of content. So it would be natural to question what possible difference the FOB’s five decisions could make in that ocean of content moderation. But reading the decisions, the FOB’s greater ambitions are obvious. These decisions strike at matters fundamental to the way Facebook designs its content moderation system and clearly signal that the FOB does not intend to serve as a mere occasional pit stop on Facebook’s journey to connect the world. The question now—as it has always been with the FOB experiment—is whether Facebook will seriously engage with the FOB’s recommendations.

Facebook’s referral of its decision to suspend President Trump put the FOB in the spotlight recently. But these five more quotidian cases could, in the long term, have a far greater impact than the Trump case on Facebook’s rule writing and enforcement and on people’s freedom of expression around the world. Jacob Schulz has a good summary of the docket here, and Lawfare will have summaries of the outcomes in coming days. Here, though, I offer a few overarching observations from the first set of decisions, and what to watch next.

The FOB’s Ambitions

The 80 percent reversal rate is not the only sign that the FOB does not intend to extend Facebook much deference. These decisions show the FOB’s intention to interpret its remit expansively in an attempt to force Facebook to clean up its content moderation act.

In its decision about COVID-19 misinformation, for example, the FOB complains about how difficult it is to track Facebook’s policy changes over the course of the pandemic. Many updates to policies have been announced through the company’s Newsroom (essentially its corporate blog) without being reflected in the Community Standards (the platform’s actual rule book)—and, as the FOB notes, the announcements sometimes even seem to contradict the standards. Scholars of content moderation are used to having to scour and synthesize Facebook Newsroom announcements, blog posts from Mark Zuckerberg and even tweets from Facebook executives in an attempt to discern what the platform’s policies are at any given time. The FOB’s recommendation that Facebook should consolidate and clarify its rules is welcome.

But the FOB goes further still. Drawing on a public comment submitted by the digital rights non-profit Access Now, the FOB also recommends Facebook publish a specific transparency report on its enforcement of its Community Standards during the coronavirus pandemic. This is, again, ambitious and important. The pandemic provided a natural experiment in content moderation, as Facebook rolled out more expansive misinformation policies and relied more heavily on artificial intelligence tools as content moderators were sent home. As yet, however, there has been no meaningful or detailed public accounting of how this experiment played out in practice. The call for such a report from the FOB is an aggressive demand that, if Facebook complies, could provide useful insight into the company’s systems.

The FOB’s decision to accept a case about female nipples has prompted the predictable chuckling from the peanut gallery. But the fact that the FOB selected the case should not be surprising: Facebook’s Adult Nudity policy has long been one of its most controversial. The case is unusual for another reason, though. After the FOB accepted the case, in which Facebook removed an Instagram post, published during breast cancer awareness month in Brazil, showing symptoms of breast cancer, Facebook admitted its decision to remove the post was a mistake. Its artificial intelligence tools had accidentally flagged the post, Facebook said, but given that the Adult Nudity policy has a clear exception for breast cancer awareness, the company restored the picture. Case closed, right?

Wrong, said the FOB. In a strong assertion of its power to decide its own jurisdiction, the FOB said Facebook could not moot a case after the FOB had accepted it simply by deciding to reverse its decision. The FOB’s Charter states that the FOB can review cases where users disagree with Facebook’s decision and have exhausted internal appeals, and the FOB argued here that the relevant moment of disagreement is when the user exhausts Facebook’s internal processes, not after. If it were otherwise, the FOB says, Facebook could “exclude cases from review” (the FOB’s own emphasis) simply by mooting any case it didn’t want the FOB to pronounce on.

The FOB went on to confirm that Facebook’s original decision was wrong, but along the way the FOB made two important observations that have little to do with nipples. First, the FOB noted that the relationship between Facebook’s Community Standards and Instagram’s much shorter Community Guidelines is unclear. The latter has an unexplained hyperlink to the former, but that’s it. The FOB recommended that Facebook make it clear that in the case of any inconsistency between the two sets of rules, Facebook’s Community Standards should take precedence. If accepted, this is pretty significant: Facebook’s and Instagram’s rules would, for all intents and purposes, be explicitly harmonized.

The second recommendation from the FOB is even more far-reaching. Noting that the mistake in this case stemmed from over-reliance on automated moderation without a human in the loop to correct the error, the FOB calls for some fairly fundamental changes in how Facebook uses such tools. Facebook had urged the FOB to avoid this line of inquiry in its submissions: “Facebook would like the Board to focus on the outcome of enforcement, not the method,” the FOB’s decision notes. But the FOB refused to narrow its scope in this way. Instead, the FOB accepts that automated technology may be essential to detecting potentially violating content, but says Facebook needs to inform users when automation has been used and ensure they can appeal such decisions to human review. It also suggested Facebook implement an internal audit procedure to analyze the accuracy of its automated systems. These are all sweeping, systemic recommendations, and they potentially set the stage for the FOB to request updates on their implementation in future cases. Facebook has already replied that this recommendation suggests “major operational and product changes” and that it may “take longer than 30 days to fully analyze and scope” what it requires.

This decision fires a shot across Facebook’s bow. The board is establishing that it will not limit its review to the outcomes of the cases before it, but will interrogate the systems that produced them.

Due Process for Users

A constant theme in these cases is the lack of adequate notice and reasoning for users who have been found to violate Facebook’s rules. The FOB is concerned that such users often simply cannot know what they did wrong, whether because Facebook’s policies are unclear, lack detail or are scattered across different websites, or because users are not given an adequate explanation of which rule has been applied in their specific case. Many of the FOB’s recommendations call for more transparency and due process for users, to help them understand the platform’s rules.

The Importance of Context

The board also focused on the need to take context into account when applying rules to specific facts. Whether it be the significance of breast cancer awareness month in Brazil, ongoing armed conflict in Azerbaijan, the rise of what the FOB describes as “neo-Nazi” ideology around the world, or the pervasive hate speech against Muslims in Myanmar (and Facebook’s historical role in helping turbocharge it), the FOB repeatedly acknowledges that the specific context matters. This is not particularly surprising: It’s impossible to understand speech without taking it in context. But it does present a challenge. Platforms generally have one set of global rules, not least because that’s easier to enforce. It was always naive to hope that connecting the world would flatten cultural and societal differences, and in that sense the FOB’s decisions are only making explicit what has long been obvious. But Facebook still has not devoted the resources necessary to moderating with this kind of context-sensitivity, so expect this issue to come up again and again in the FOB’s jurisprudence.

What “Law” Does the FOB Use?

One of the key questions for FOB-watchers has been what source of “law” the FOB will apply. The FOB is, self-evidently, not an actual court, and it has no legal mandate. The FOB’s Charter states that it will “review content enforcement decisions and determine whether they were consistent with Facebook’s content policies and values” and “pay particular attention to the impact of removing content in light of human rights norms protecting free expression.” This is a somewhat awkward mix of authorities. It is not clear how the FOB should reconcile applying Facebook’s private set of rules and values with paying attention to the public law body of rules known as international human rights law.

These cases do not answer the thorniest parts of this question. Instead, in every case, the FOB first assessed Facebook’s decisions against Facebook’s own standards and then separately against international human rights law. But in all of them, the FOB came to the same conclusion under each set of norms, and in no case did the FOB confront the question of what happens if Facebook’s rules conflict with international human rights law. This interesting “jurisprudential” question of the FOB’s ultimate source of authority has been kicked down the road for now.

The FOB as an Information-Forcing Mechanism

One of the main promises of the FOB, as distinct from external watchdogs or reporting, is its power to force Facebook to provide information that is not otherwise available. This is evident throughout the decisions, which reference internal rule books and designations that aren’t available to the public. Lack of transparency has been one of the most consistent criticisms of Facebook’s content moderation, and the hope is that the FOB can prove a useful mechanism for changing this.

Take the decision about a Joseph Goebbels quote, which concerns a post of a picture of Goebbels with a quote stating that truth does not matter, which the user said was intended as a comparison to the presidency of Donald Trump. In its opinion, the FOB noted that Facebook’s decision rationale “clarified certain aspects of the Dangerous Individuals and Organizations policy that are not outlined in the Community Standards.” These included that the Nazi party is a designated hate organization (unsurprising, but still good to confirm); that Goebbels is a designated dangerous individual under Facebook’s internal rules; and that all quotes from such individuals are considered “praise or support” for that individual unless there’s additional context explicitly saying otherwise. The FOB lamented the “information gap between the publicly available text” of the policy and Facebook’s internal rules. Indeed, the Dangerous Individuals and Organizations policy is one of Facebook’s most opaque. The board recommends that Facebook clarify terms like “praise,” “support” and “representation” in the policy, and give more visibility into which organizations and individuals are designated as falling within it. This could have ramifications for everything from Facebook’s actions against QAnon to its decision in the Trump case.

Will the FOB Turn the Tide in Facebook’s Content Moderation?

There has been a general trend, especially in the last year, towards more heavy-handed content moderation. Republican Sen. Josh Hawley once called the FOB a “special censorship committee,” but many have speculated that the FOB will play exactly the opposite role, reversing the general trend and standing up for free expression as public pressure demands Facebook take ever more content down.

The first five cases could be seen as vindication for this view, with the FOB reversing four out of five takedown decisions. They could also be seen as a rebuke to Facebook—a 20 percent win rate is not something to brag about. Furthermore, a lot of the FOB’s recommendations (a new transparency report, greater oversight of automated moderation) would be expensive. A more cynical take would be that Facebook will be delighted with the FOB’s vindication of its “paramount” value: “Voice.” Content is product for platforms. As Noah Feldman, the law professor who first proposed the idea of a board-like institution to Facebook, once quipped to Facebook CEO Mark Zuckerberg in an interview, “No voice, no Facebook, right?”

Both of these takes likely have some truth to them. At this stage, however, beware of any simplistic takes that say the bottom line is that the FOB will be inherently more speech-protective than Facebook. That may well be the case over time, but this first batch of decisions is a poor sample from which to make predictions, given the skewed nature of the FOB’s current jurisdiction, which allows the FOB to review appeals only when content has been taken down, not when it has been left up. (It’s in my contract that I have to make this point at least once per piece until Facebook remedies this design flaw.)

That said, there are clear signs of the importance that the FOB places on freedom of expression. The onus in these cases is clearly on Facebook to explain and justify limitations, and where it can’t, a takedown will not stand. In the coronavirus misinformation case, for example, the FOB writes that “Facebook had not demonstrated that the post would rise to the level of imminent harm.” This requirement that Facebook must explain restrictions is a common theme across the decisions. Facebook has already pushed back against this decision, noting that while its policies could be clearer, “our current approach in removing misinformation is based on extensive consultation with leading scientists, including from the CDC and WHO. During a global pandemic this approach will not change.”

Furthermore, the decisions reflect a general skepticism towards takedowns as a necessary or effective solution to many problems. For coronavirus misinformation, the FOB was not convinced that taking down posts was the “least intrusive means of protecting public health” and recommended Facebook consider other tools, like labels and friction. In the Myanmar case, the FOB states that “removing this content is unlikely to reduce tensions or protect persons from discrimination. There are more effective ways to encourage understanding between different groups.” The FOB does not, however, suggest other concrete steps to take to answer the timeless question of how to bridge these divides.

Are These Decisions Operationalizable?

The FOB has some tough medicine for Facebook in these decisions, and much of it well-deserved. Still, I would not want to be on the policy team tasked with operationalizing the FOB’s recommendations at scale now.

Lawfare recently published an analysis of how long it takes legal systems to come to decisions in hate speech cases: In many places months pass, if not years. The FOB’s decisions, brief and extra-legal as they may be, resemble this decision-making model far more closely than they do the split-second calls of a front-line content moderator or the choices confronting the designer of a platform’s content moderation systems. The Myanmar hate speech case, for example, turned at least in part on a difference between the English translation provided by Facebook and the one provided by the FOB’s translators; the FOB found that while the first part of the post might have been derogatory, the post needed to be considered as a whole. The FOB noted that Facebook’s sensitivity to anti-Muslim hate speech was “understandable” (the UN has accused the company of playing a “determining role” in the genocide there), but that its decision to take the post down in this case was not justified. The case was clearly borderline, and involved a degree of hair-splitting that seems impossible to scale.

The FOB’s attitude seems to be that enforcement is not its problem. (Indeed, on a press call this morning, when the FOB was asked whether it had considered the difficulty of implementing its recommendations, the response was “In a word, ‘No.’”) And perhaps that’s quite right. But if its decisions can’t be enforced, then they won’t be—which could undermine the board’s authority in the long term.

On the other hand, I have long argued that the role of the FOB should be seen as more dialogic than prescriptive. In giving recommendations that Facebook responds to publicly, the FOB opens up a dialogue, to which the public will be privy, about the possibilities, constraints and incentives in content moderation. Whether this conversation is productive will depend a lot on whether Facebook engages in it in good faith.

What Happens Now?

Facebook now has seven days to restore the four pieces of content the FOB decided it had improperly removed. In three cases, it has already done so. Decisions are not retrospective, and Facebook has not committed to restoring other, similar content it has removed. In future cases where the FOB orders the takedown of content Facebook has left up, Facebook will also review whether there is “identical content with parallel context” that it can remove.

The more interesting response will come within 30 days, when Facebook has committed to providing a public reply to any policy recommendations and follow-on actions in the FOB’s decisions. This includes, for example, the recommendations about a pandemic transparency report, its over-reliance on automated tools, and the need for clarification of what “praise,” “support” and “representation” mean in its Dangerous Individuals and Organizations policy (just in time, perhaps, for the FOB’s decision in the Trump case).

Some critics have voiced skepticism as to whether a body that takes perhaps a few dozen cases a year could possibly have an impact on a content moderation system that makes millions of decisions every day. With this batch of decisions, the FOB has shown it intends to do just that. These rulings take aim at some fundamental principles in the way Facebook does content moderation and set out the beginnings of an aggressive agenda for reform.

Your move, Facebook.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
