Cybersecurity & Tech

What to Make of the Facebook Oversight Board’s Inaugural Docket

Jacob Schulz
Tuesday, January 12, 2021, 10:20 AM

It’s useful to take a close look at what cases the board has agreed to take on so far, and to try to tease out what it might be trying to accomplish. 

A floor mat with Facebook's name written on it (Ian Kennedy/https://flic.kr/p/5HRy3w/CC BY-NC 2.0/https://creativecommons.org/licenses/by-nc/2.0/)


You will be forgiven if you missed Facebook’s big milestone from early last month. Lost in the haze of pardons, frivolous election suits and vaccine news, the Facebook Oversight Board released its inaugural docket on Dec. 3, 2020.

Evelyn Douek argued yesterday on Lawfare that the board ought to add an additional case to its docket: the platform’s suspension of the President of the United States. And during a week where Facebook thrust itself into the spotlight, it’s useful to take a close look at what cases the board has agreed to take on so far, and to try to tease out what it might be trying to accomplish.

To anyone with a passing understanding of the types of controversies that hound Facebook, the inaugural cases for the board—the so-called Supreme Court of Facebook—will feel familiar. As Douek wrote on Twitter, “The first six cases basically read like a greatest hits of Facebook content moderation controversies: hate speech, hate speech, hate speech, female nipples, Nazis and COVID health misinfo.”

The board’s choices haven’t pleased everybody. Some critics bemoan the absence of U.S. political issues from the slate. There’s no Steve Bannon. There’s nothing related to the U.S. election. Other detractors argue the picks aren’t “impactful enough” or despair that there isn’t a “high-profile controversy” among the bunch. As Emily Bell suggested, perhaps the board wanted to “avoid clickbait cases.”

The concerns about the first batch of cases follow a long debate about what the board will actually do and what it can plausibly accomplish. Since Mark Zuckerberg first publicly floated the idea of a Facebook “Supreme Court” in April 2018, the board has attracted a lot of skepticism. Will the board be able to intervene in real time to stop dangerous viral misinformation? Will it just be “a giant P.R. Potemkin village to assuage critics”?

The inaugural docket can reveal only so much. But taking a closer look at the cases helps clarify how the board intends to operate. How will it deal with high-profile political controversies? Will it wade into them at all? What won’t it do? What might its value be?

All the cases situate the board at a comfortable distance from lightning-rod cultural questions. And even in the cases tied to juicier controversies, layers of complicated—and often highly contextual—speech questions insulate the board from weighing in on, say, France’s relationship with Islam.

The board was never going to be a culture war ombudsman. And it’s not a rapid response unit either. But the initial cases do help illustrate how the board hopes to produce transparent thinking about murky speech problems, applicable to tons of different content moderation situations and helpful in shaping the bounds of the platform’s community standards. The first batch of cases, in other words, shows a board interested in building a common law jurisprudence to help clear up the margins of the platform’s rules. No, this approach won’t have the same impact as, say, taking a sledgehammer to the “groups” feature, overhauling the community standards, or adding more “friction” to a platform that thrives on virality. But getting answers from the board on tough margin calls may allow Facebook to bolster its rules with the transparency and legitimacy of public rulings—offering the public a window into how the famously opaque platform balances important equities.

So what made it onto the docket?

The picks reflected some geographic diversity, although nothing to write home about. At least four different languages were represented among the initial six cases: English, French, Portuguese and Burmese. One case came from Instagram, the rest came from Facebook proper. Five of the cases made their way onto the board’s radar through user appeals, and two came through referrals from the platform (one case on the initial docket was mooted and replaced with a referral from Facebook).

The first case deals with a piece of content related to the dust-up over France and the perceived hostility of laïcité—its brand of state secularism—to Islam. Here’s the description of the case that ended up being mooted after a user voluntarily took down the post to which the content in question was attached:

A user posted a screenshot of two tweets by former Malaysian Prime Minister, Dr Mahathir Mohamad, in which the former Prime Minister stated that “Muslims have a right to be angry and kill millions of French people for the massacres of the past” and “[b]ut by and large the Muslims have not applied the ‘eye for an eye’ law. Muslims don’t. The French shouldn’t. Instead the French should teach their people to respect other people’s feelings.” The user did not add a caption alongside the screenshots. Facebook removed the post for violating its policy on Hate Speech. The user indicated in their appeal to the Oversight Board that they wanted to raise awareness of the former Prime Minister’s “horrible words”.

The press release announcing the docket change-up clarifies that the mooted case “concerned a comment on a post, with the user who made the comment appealing Facebook’s decision to remove it.” (A clear area of growth for the board: figuring out how to more clearly describe the content up for review while still respecting user privacy.)

That the board was interested in hearing this particular case before it got mooted demonstrates a desire to take on sensitive issues without saddling itself with already-contentious pieces of content. The Mahathir post in the user’s screenshot was a big deal. Mahathir posted his polemic on both Twitter and Facebook on the same day that a terrorist hacked three people to death in a Nice church. Both platforms removed his posts. France’s minister for digital communications wanted Twitter to go further, posting, “I just spoke with the MD of @TwitterFrance. The account of @chedetofficial must be immediately suspended. If not, @twitter would be an accomplice to a formal call for murder.” Mahathir, meanwhile, complained, as is the wont of aggrieved users, that the platforms misunderstood his post: “I am indeed disgusted with attempts to misrepresent and take out of context what I wrote on my blog.” This pick would have allowed the board to get at the Mahathir controversy in an oblique way.

The selection showed the limited way in which the board will deal with cultural flare-ups. This particular case would have set up the board to engage with the controversy in a highly technocratic way. It wouldn’t have entailed relitigating the merits of the high-profile takedown of Mahathir’s post. And it wouldn’t have required drawing any bright lines about what is and isn’t okay to say about French President Emmanuel Macron or France. The questions at hand are more subtle: How can you parse a user’s intent when she posts a screenshot with no caption? What clues might you use to determine a commenter’s intent when he replies to a post that is itself opaque in its meaning? Hardly the culturally juicy issues that people fight about in the New York Times op-ed section. But they are important questions and ones that help to begin the process of creating a piece-by-piece jurisprudence on Facebook’s hate speech rules.

The case that replaced the screenshot post situates the board slightly closer to the center of the laïcité controversy. The press release announcing the swap describes the new case as follows:

A user posted a photo in a Facebook group, depicting a man in leather armor holding a sheathed sword in his right hand. The photo has a text overlay in Hindi that discusses drawing a sword from its scabbard in response to “infidels” criticizing the prophet. The photo includes a logo with the words “Indian Muslims” in English. The accompanying text, also in English, includes hashtags calling President Emmanuel Macron of France “the devil” and calling for the boycott of French products.

Facebook removed the content for violating its policy on Violence and Incitement. In its referral, Facebook stated that it considered this case to be significant, because the content could convey a “veiled threat” with a specific reference to an individual, President Macron. Facebook referred to heightened tensions in France at the time the user posted the content.

Facebook further indicated that although its policies allow it to determine a potential threat of real-world violence and to balance that determination against the user’s ability to express their religious beliefs, it was difficult to draw the line in this case.

Here, the board again reveals itself unafraid to weigh in on high-profile polemics. Facebook hints that it referred this post to the board partially because of the broader controversy. The description above implies that the backdrop of the arguments about Macron and Islam added context to the post that make it a “difficult” content moderation question worthy of the board’s time.

But this case again helps illustrate how the board might sidestep weighing in on cultural questions directly. The case’s peculiarities mean the board won’t really be deciding anything as straightforwardly controversial as, say, whether a plain text post can call Emmanuel Macron the devil. The post likely had lots of company on Facebook in calling for boycotts of France, and it doesn’t seem like Facebook has removed those posts (here is, for example, an entire Facebook page called “Boycott France”). And image posts remain up on the platform with captions that refer to, for example, “French President Macron Devil [sic]” and include vague insinuations of revenge. The board will have to reflect: Does the combination of the boycott rallying cry, the appellation of Macron as the “devil” and the ambiguous “man in leather armor holding a sheathed sword” put the post beyond the pale? Does the image—is it a cartoon? a meme? a historical picture?—make the post so threatening that it creates a “genuine risk of physical harm”? These are difficult questions. But the complexity of the speech problems also does some work in muting the controversy over whatever the board’s final decision is. I suspect future cases that the board takes will entail a similar dynamic; the most immediate questions the cases present will not be nakedly political ones but technical speech problems aimed at carving a common law out of Facebook’s community standards.

And one conspicuous absence from the docket shows the limits of the board as a real-time harm prevention mechanism. Perhaps counterintuitively, the docket does not include the only Facebook post connected to the blow-up over Charlie Hebdo cartoons that led to a tangible loss of life: a post from a father of a middle school student calling out a teacher, Samuel Paty, for using the cartoons during a lesson. The video post calls for action against the teacher for showing the cartoons. The video asserts, “[T]his thug shouldn’t remain in the national school system, shouldn’t be allowed to teach children, he has to go and educate himself.” It exploded in popularity on the platform, and the killer spoke with the poster of the video multiple times before the attack. I can’t spot any part of Facebook’s community standards that the video would have violated. And the video remained on the platform. That means that it’s out of reach for the oversight board, which has the authority (for now) to review only takedown decisions, not leave-up decisions. And either way, Paty is already dead.

The next case on the docket also touches on the France-and-Islam polemic. 2020-002-FB-UA is a Facebook post that was taken down for violating the platform’s hate speech policy. It includes “two well-known photos of a deceased child lying fully clothed on a beach at the water’s edge. The accompanying text (in Burmese) asks why there is no retaliation against China for its treatment of Uyghur Muslims, in contrast to the recent killings in France relating to cartoons.” The docket anonymizes each post and provides only a very general description, so it’s not easy to make complete sense of what’s going on with the post.

Either way, this case, like the others, includes a number of contextual speech questions that insulate the board from stepping directly into the controversy. Facebook took down the post for violating its hate speech rules, but the user behind the post protested that “the post was meant to disagree with people who think the killer is right and to emphasize that human lives matter more than religious ideologies.” Here, the board again sets itself up to tackle the question of user intent. What clues can help determine whether the post “was meant” to make a point about human rights, rather than a polemic about retaliation against the Chinese state? How much does what a user “meant” even matter when parsing a potential hate speech violation? Does the fact that the photos are “well-known” change anything? These are all valuable questions to have public and precedential answers on, given that these issues will come up again and again, even if they have different contextual window-dressing.

The next case completes the hate speech hat-trick for the inaugural docket. It relates to a post that includes “alleged historical photos showing churches in Baku, Azerbaijan, with accompanying text stating that Baku was built by Armenians and asking where the churches have gone.” The user’s post also “stated that Armenians are restoring mosques on their land because it is part of their history. The user said that the ‘т.а.з.и.к.и’ are destroying churches and have no history. The user stated they are against ‘Azerbaijani aggression’ and ‘vandalism’.” The user appealed. The post ought not to have run afoul of Facebook’s hate speech regs, the user argued, because the intent “was to demonstrate the destruction of cultural and religious monuments.”

Here, too, the board shows itself unafraid to take on posts related to sensitive topics—Armenia and Azerbaijan famously don’t see eye-to-eye, and the announcement of the docket came mere weeks after the conclusion of a full-blown war between the two countries. But the case sticks mostly to narrow questions. The board will probe, for example, whether the particular epithet the poster used for Azerbaijanis—“т.а.з.и.к.и”—takes the post over the hate speech line. Other potentially dispositive questions are similarly contextual. Does saying that a particular group of people “have no history” undermine a user’s claim that their post has a good-faith interest in raising awareness about the plight of religious monuments? And the board’s use of “alleged” to modify the “historical photos” is curious, too. How might the board determine whether a photo is actually of “historical” import? Or does the “alleged” imply that the board will query the authenticity of the pictures? If something qualifies as “historical,” how does that weigh on the ultimate hate speech determination? The board’s body of public thought on the hate speech regs will grow here, too, in different ways.

The next case sets the board up to take on another “greatest hit” of Facebook and Instagram’s content moderation woes. The case concerns an Instagram post. Set to a pink background, the post contains “eight photographs within the picture [that] showed breast cancer symptoms with corresponding explanations of the symptoms underneath.” The post had originally gotten flagged for violating the platform’s “Adult Nudity and Sexual Activity” rules, likely because “five of the photographs included visible and uncovered female nipples.” A caption to the post “indicat[ed] that it was to raise awareness of signs of breast cancer,” and the user’s appeal noted that the post played a part in Brazil’s “national ‘Pink October’ campaign for the prevention of breast cancer.”

This case is a bit different from the others. After the board announced the slate of cases up for review, Facebook itself reversed course on the initial takedown decision. It determined through “further review” that it “removed this content in error” and reinstated the post on the platform. The press release announcing the shift emphasizes that the platform “welcome[s] the board’s review of this case—any decision they make on the content will be binding, and we welcome any policy guidance related to it.” The platform’s “Adult Nudity and Sexual Activity” rules carve out exemptions for “a variety of reasons, including as a form of protest, to raise awareness about a cause, or for educational or medical reasons.” The description of the post suggests that it would satisfy both the “rais[ing] awareness about a cause” exception and the “educational or medical reasons” exception. Per Facebook, “[w]here such intent is clear, we make allowances for the content.” Do the caption and accompanying pink background make the “intent” of the post clear? I would think so. The nudity policy notes some explicit exceptions to the ban on female nipples—“those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring.” This post doesn’t fall into one of those buckets, but the board’s ruling here will help to clarify the boundaries of the exemptions—and, more important, will likely begin to create a piecemeal jurisprudence about what contextual clues can inform moderation decisions on posts with nudity and how the platform ought to weigh the competing interests at play.

Next, the board makes a foray into Holocaust content. The platform has taken a lot of heat for its handling of posts related to the genocide. In 2018, Zuckerberg pushed back on insinuations that Facebook ought to take a harsher stance on Holocaust denial—“It’s hard to impugn intent and to understand the intent.” He had an about-face in October 2020, announcing that Facebook was “expanding our policy to prohibit any content that denies or distorts the Holocaust.” The board’s intervention here will be much more modest. It’ll review a post that is “an alleged quote from Joseph Goebbels, the Reich Minister of Propaganda in Nazi Germany, on the need to appeal to emotions and instincts, instead of intellect and on the unimportance of truth.” The post—notably the only text-only entry on the docket—got dinged for “expressing support or praise for groups, leaders, or individuals.” The user pushed back on Facebook’s decision to remove the post: “[T]he quote is important as the user considers the current US presidency to be following a fascist model.”

Here the challenge is the lack of any clues about the user’s intent. Zuckerberg’s tone-deaf defense of Holocaust denial did contain a grain of truth: It is often hard “to understand the intent” of user posts. The board’s decision here will provide some clues as to how besieged moderators might do that. How will they determine whether the post was a clumsy meditation on U.S. democratic backsliding or a veneration of the Nazi master propagandist? Do they look to a poster’s history? Put a blanket kibosh on any stand-alone Nazi quotes?

With its last pick, the board steps into the coronavirus misinformation maelstrom. The case concerns a video post in a COVID-19 Facebook group. Per the description in the announcement of the docket:

In the video and text, there is a description of an alleged scandal about the Agence Nationale de Sécurité du Médicament (the French agency responsible for regulating health products) purportedly refusing authorization for use of hydroxychloroquine and azithromycin against COVID-19, but authorizing promotional mail for remdesivir. The user criticizes the lack of a health strategy in France and states that “[Didier] Raoult’s cure” is being used elsewhere to save lives.

The video exploded: It was “viewed approximately 50,000 times and shared under 1,000 times,” before the platform took it down because it presented “a genuine risk of physical harm or direct threats to public safety.” (Here, too, the board’s commitment to respecting user privacy comes with consequences for transparency—the public will never know what exactly was in the video.)

Hydroxychloroquine has presented big problems for Facebook. The platform in July 2020, for example, took down a Breitbart video featuring a doctor making baseless claims that hydroxychloroquine is a “cure for Covid.” That video dwarfed the French video in circulation; it got 17 million views before Facebook pulled it. And the doctor referenced in the post on the docket, Didier Raoult, a hydroxychloroquine evangelist, is himself a prolific user of social media. But the board won’t be reviewing the viral Breitbart video, nor will it look to the doctor’s feed to adjudicate the propriety of posts baselessly lauding the efficacy of “Raoult’s cure.” And however the board comes down, it won’t fix the disinformation dumpster fire that is Facebook groups. Instead, the board will likely take on more granular questions: What about this particular groundless post might have brought it over the line? Can users criticize a sound public health strategy so long as they refrain from making erroneous claims that a particular drug saves lives? Here, though, the board—replete with lawyers and speech scholars, not scientists or doctors—runs into a bit of a problem. How might they parse this case without having to render their own judgment on the scientific merits at play? Are they even qualified to take on scientific questions at all? Either way, the ruling won’t radically change Facebook’s rules regarding coronavirus misinformation, but it will help clarify what exactly crosses the line, maybe just in time for widespread vaccine rollout. Future cases the board takes on—whether about the coronavirus or about Tamiflu—will add to this baseline to slowly (probably very slowly) build out a common law on dangerous health misinformation.

The board has 90 days from the announcement of the docket to make its rulings. It is possible that the board will use the docket to take big swings at pinch points in the community standards. And maybe the Oversight Board will reverse course in the future. Perhaps Facebook will take Douek up on her challenge and refer the Trump decision to the board for review. Or maybe the board will stray further away from polemical issues. Or, as Douek has been pushing it to do, the board might even begin to review leave-up decisions. (It says it will at the beginning of 2021, but who knows.)

But the docket nonetheless offers reasons for optimism about the value of the board. Facebook official Brent Harris has stressed that “[t]he purpose of the board” is to deal with “complex challenging content issues that have wide-ranging impact.” These cases do present difficult content questions. The picks will force the board to release public reasoning on how it balances the different equities at stake and what it uses to draw lines for the platform’s often vague community standards. And despite the idiosyncrasies of the choices, the decisions will have a wide-ranging impact. There are likely, for example, hundreds of millions of posts on Facebook aggressively criticizing heads of state, like the image of the armored man with the anti-Macron caption. It’s valuable to have the board weigh in on where exactly the “threatening speech” line sits. How should Facebook balance the importance of offering a platform for political critique against the need to deter posts that could lead to violence? How might that line differ for speech directed at a major public figure as opposed to a “minor public figure”? How does news context impact that determination?

Maybe the board will take on the Trump decision at some point. But for now, there are much better uses of its time than to deal with Steve Bannon.


Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
