New U.N. Report on Online Hate Speech

Evelyn Douek
Friday, October 25, 2019, 1:28 PM

Protest against Islamophobia and hate speech (Source: Flickr/Fibonacci Blue)

David Kaye, the United Nations special rapporteur on the promotion and protection of the freedom of opinion and expression, recommended in June 2018 that social media companies adopt international human rights law as the authoritative standard for their content moderation. Before Kaye’s report, the idea was fairly out of the mainstream. But the ground has shifted. Since the release of the report, Twitter CEO Jack Dorsey has responded to Kaye, agreeing that Twitter’s rules need to be rooted in human rights law, and Facebook has officially stated that its decisions, and those of its soon-to-be-established oversight board, will be informed by international human rights law as well.

Now Kaye has a new report, released Oct. 9—a timely evaluation of one of the biggest challenges in the regulation of online speech. Despite some tech companies expressing openness to Kaye’s approach, in general these companies continue to manage “hate speech” on their platforms, as Kaye notes, “almost entirely without reference to the human rights implications of their products.” And it remains unclear how these standards, developed for nation-states, can be put into practice in the very different context of private companies operating at mind-boggling scale and across a wide variety of contexts. These questions are the central concern of Kaye’s latest report, which evaluates the human rights law that applies to regulation of online “hate speech.”

Why “Hate Speech”?

Hate speech is one of the most controversial areas of content moderation, given the difficulty of defining the category and the importance of context. It is also the area in which international human rights law is perceived as being furthest from U.S. First Amendment law. In advocating for the adoption of international human rights law, Kaye is upfront about these critiques. Indeed, the report starts by acknowledging that:

“Hate speech”, a short-hand phrase that conventional international law does not define, has a double-edged ambiguity. Its vagueness, and the lack of consensus around its meaning, can be abused to enable infringements on a wide range of lawful expression. … Yet the phrase’s weakness (“it’s just speech”) also seems to inhibit governments and companies from addressing genuine harms such as the kind that incites violence or discrimination against the vulnerable or the silencing of the marginalized.

Kaye emphasizes, however, that international human rights law in fact has very robust protections for freedom of expression. Though Article 20(2) of the International Covenant on Civil and Political Rights (ICCPR) obligates states to prohibit certain types of incitement, for example, it does not require states to criminalize inciting speech. Nor does it permit prohibition of advocacy of minority or even offensive views that do not amount to incitement. As Kaye says, in a comment seemingly aimed at those concerned that international human rights law will be insufficiently protective of free speech, “[T]here is no ‘heckler’s veto’ in international human rights law.” And as Nadine Strossen, a notable skeptic of using censorship to deal with hate speech, recently noted, “[I]t is just not nearly well enough known as it should be that the international human rights standards of free speech … have been authoritatively interpreted … [as] incredibly close to core U.S. free speech principles.” Strossen also advocates for international human rights law as the basis for platforms’ rules, because this body of law is significantly more speech-protective than the rules most companies currently apply.

It is in this context that Kaye looks at some areas that have proved especially controversial in content moderation in recent years, noting that international human rights law would not condone the criminalization of either blasphemy or Holocaust denial. In regulating incitement, Kaye calls on states to consider the six factors outlined in the Rabat Plan of Action: the context of the speech, the status of the speaker, their intent, the content and form of the speech, its reach, and the likelihood and imminence of it causing harm. Critically, he repeatedly emphasizes that international law requires states to take the least restrictive measure available to confront hate speech problems—and this is rarely criminalization.

At one point, Kaye contemplates a “hypothetical” government considering legislation to hold online intermediaries liable for failure to take action against hate speech. But, of course, this is not a hypothetical. Regulators all around the world are thinking about what to do about hate speech on social media. Kaye reiterates that these states must observe the requirements of legality (specified in a precise, public and transparent law), legitimacy (justified to protect the rights or reputations of others, national security, public order, or public health or morals), and necessity and proportionality (the least restrictive means to achieve that aim). As in previous reports, he expresses concerns about laws that delegate to private companies the responsibility for defining hate speech and making censorship decisions, rather than using public legal processes that comply with the rule of law.

What Should Companies Do?

Given the novelty of private companies having control over the speech of so many around the world, international law is still developing the tools companies need to apply human rights standards to their content moderation. Kaye makes concrete recommendations to companies seeking to implement those standards, including the following:

  • Conducting periodic human rights due diligence assessments and reviews: there is still very little public information about platforms’ actual effects on human rights, and the assessments that are undertaken are currently too little, too late.
  • Aligning hate speech policies to meet the requirement of legality, which means a lot more definitional clarity than most companies currently provide (more on this below).
  • Improving processes for remediation in cases where people’s rights are infringed, especially creating a transparent and accessible mechanism for appealing platform decisions. Kaye also urges companies to consider more graduated responses according to the severity of the violations of their hate speech policy, including remedial policies of education, counter-speech, reporting and training those involved to be aware of the relevant standards.

In particular, Kaye notes that adopting such standards can give companies a framework for making rights-compliant decisions, along with a globally understood vocabulary for articulating their enforcement decisions to governments and individuals.

International law permits the limitation of freedom of expression only where necessary to protect the rights or reputations of others, national security, public order, or public health or morals (Art. 19(3) ICCPR). How companies are to assess these interests remains an open question. Importantly, Kaye clarifies that “companies are not in the position of governments to assess threats to national security and public order, and hate speech restrictions on those grounds should be based not on company assessment but legal orders from a State.” This essentially means that companies should restrict hate speech only where that speech interferes with the rights of others, such as incitement to violence that “threatens life, infringes on others’ freedom of expression and access to information, interferes with privacy or the right to vote.” To give users greater notice and transparency about how these interests are accounted for and company rules are applied in practice, Kaye again urges companies to develop a kind of “case law” of examples of how their policies are enforced.

International Law Doesn’t Mean Universal Rules

One of the greatest misconceptions about companies adopting international human rights standards on their platforms is that it will result in universal, one-size-fits-all rules. In fact, as Kaye notes throughout his report, determining whether speech constitutes hate speech and what the appropriate response is if it does requires close attention to the context of the speech. Kaye accepts that “[c]ompanies may find the kind of detailed contextual analysis to be difficult and resource-intensive,” especially because it cannot be done by artificial intelligence tools and instead requires human evaluation, but he maintains that this is the only solution “if companies are serious about protecting human rights on their platforms.” Kaye makes the interesting proposal that, given the expense of such an approach, “the largest companies should bear the burden of these resources and share their knowledge and tools widely, as open source, to ensure that smaller companies, and smaller markets, have access to such technology.” This would prevent human rights compliance from becoming a barrier to entry or a privilege reserved to the most lucrative markets.

The standards need not be one-size-fits-all, and neither are the tools to enforce them. One upside of private platforms moderating so much speech is that the range of tools available to companies is often far broader than that enjoyed by governments. This allows the platforms to adopt much more tailored responses to problematic content. Indeed, if deployed well, these tools (such as downranking, demonetizing, friction and warnings, geoblocking and countermessaging) could result in a much more nuanced approach to dealing with hate speech online. But this appears to be a long way away.

Kaye also weighs in on the ongoing controversy about whether politicians and other public figures should be exempt from the normal rules. Kaye says generally not, but he accepts the importance of context:

In the context of hate speech policies, by default public figures should abide by the same rules as all users. Evaluation of context may lead to a decision of exception in some instances, where the content must be protected as, for instance, political speech. But incitement is almost certainly more harmful when uttered by leaders than by other users, and that factor should be part of the evaluation of platform content.

Conclusion

Mark Zuckerberg, Facebook’s CEO, recently gave a highly promoted speech about the importance of protecting freedom of expression online—which was immediately criticized as taking too binary a view of the issues and presenting a false choice between free expression and Chinese censorship. By contrast, Kaye’s report is a very real reckoning with the trade-offs involved in protecting free speech while dealing with the real harm caused by some forms of expression, and is an attempt to find guiding, consistent standards. There is still a lot of work to be done, not least by the companies themselves, to make this a reality. But this latest report will be a useful and influential guide in that process.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
