U.N. Special Rapporteur’s Latest Report on Online Content Regulation Calls for 'Human Rights by Default'

Evelyn Douek
Wednesday, June 6, 2018, 8:00 AM

David Kaye, the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression, released his latest report to the U.N. Human Rights Council last week. The report calls for states and companies to apply international human rights law at all stages of online content regulation: from creating rules about what content should be taken down, to conducting due diligence about how changes to platforms affect human rights, to providing remedies for people harmed by moderation decisions.

This is the first U.N. report to examine the regulation of user-generated online content, and it could hardly be more timely. The ongoing reckoning over social media has been one of the most pervasive stories of 2018, as nation-states and companies grapple with the platforms’ effects on communities and democratic institutions both online and offline.

So far, however, the changes prompted by this reckoning have been piecemeal. Individual social media companies are implementing voluntary initiatives to quell concerns and fend off further regulation. In the past few months alone, Facebook has created transparency measures for political advertising, publicized the internal guidelines it uses to decide under its community standards what content stays up and what comes down, announced a new appeals process and, for the first time, published figures on the enforcement of those community standards. Twitter has likewise announced plans to regulate political advertising and efforts to foster better “conversational health” on its platform. These are merely prominent examples of a general trend.

However well-intentioned, these voluntary measures are also attempts by the companies to get ahead of further state regulation. So far, they have seen varying degrees of success in persuading regulators that new laws are unnecessary. As I wrote last month, the European Union continues to threaten social media platforms with heavier regulation. A U.K. parliamentary inquiry into “fake news” rumbles on and continues to be aggressive in charging platforms with lax practices.

These are just two examples against the background of a global increase in government-imposed obligations to monitor and remove user-generated content. Kaye’s report cites recent Chinese, German, EU and Kenyan examples in two short paragraphs alone. There is little coordination or collaboration in the rollout of these public and private measures, as companies and regulators alike focus on their individual concerns rather than on the global online ecosystem. Kaye’s report therefore offers a rare international lens on the intractable problem of content regulation. This is much-needed insight in the nascent, interconnected world of online public spheres. The framework the report proposes, based on international human rights law, offers a way of ensuring greater consistency and transparency across the different environments in which social media companies operate.

For states, respecting human rights law means their laws should not unduly restrict freedom of expression, either online or offline. Many of Kaye’s previous reports have focused on these obligations. As his latest report argues, this includes refraining from imposing disproportionate liability on social media companies, because such liability creates an incentive for companies to over-censor content, chilling freedom of expression. Such laws also delegate responsibility for censorship decisions to private companies rather than to public legal processes that comply with the rule of law.

For most companies, the report’s recommendation that they adopt international human rights law as the authoritative standard for their content moderation would be a significant change in their operating model.

International Norms: 'Human Rights by Default'

As Kaye’s report explains, most companies do not explicitly base their content standards on any single body of law. They retain large amounts of discretion and generally regulate content according to their own terms of service, subject to complying with local laws in the jurisdictions where they operate. This commitment to local legal compliance can cause problems where the local laws are vague, where they are themselves inconsistent with human rights law or where they are not sufficient to protect human rights. In these circumstances, company decisions are often driven by commercial considerations and the extent to which local governments can apply effective pressure. Where a state is strong, or a market is valuable, this can make users vulnerable to violations of their rights when a country insists on censorship that does not accord with human rights law.

Kaye identifies Germany’s new “NetzDG” law, which imposes extremely high potential penalties and requires tight time frames for removal, as raising this concern. Other violations may occur where a strong state seeks content removals outside legal processes or through arrangements with limited transparency. Kaye points to an instance in which Pakistan compelled Google to offer a local version of YouTube that removed content the government found offensive, and cites an agreement between Facebook and Israel to remove content the government flagged as “incitement.”

On the other hand, in places where a state is weak or business in a given country is not particularly valuable to the company, there may be insufficient pressure or commercial incentive for companies to monitor their platforms and preserve healthy speech environments. The most notorious example of this is Myanmar, where hate speech has fueled ethnic violence and genocide while Facebook devoted fairly limited resources to content moderation. In Sri Lanka, as the New York Times reported, the government struggled to get Facebook’s attention regarding viral hate speech in the context of escalating ethnic violence—until the government blocked most social media in the country.

The result is that these transnational companies apply an opaque and variable kind of regulation that Kaye calls “platform law,” which allows platforms a great deal of discretion with little accountability. Human rights law, he argues, offers a solution:

Private norms, which vary according to each company’s business model and vague assertions of community interests, have created unstable, unpredictable and unsafe environments for users and intensified government scrutiny. National laws are inappropriate for companies that seek common norms for their geographically and culturally diverse user base. But human rights standards, if implemented transparently and consistently with meaningful user and civil society input, provide a framework for holding both States and companies accountable to users across national borders.

Kaye calls for companies to move from individual “platform law” to “human rights by default.”

As I have written before, there is little to compel transnational companies to observe international law. As non-state actors, they are not parties to international human rights treaties. The U.N. Guiding Principles on Business and Human Rights were developed to provide a framework for holding multinational corporations accountable for their impact on human rights, but they are not binding. Kaye argues that the companies’ “overwhelming role in public life globally” is a strong reason for adopting the guiding principles. But whatever the normative appeal of these guidelines, they do not have legal force.

The report offers another reason, however, for companies to adopt these norms. Abiding by international human rights norms “enables forceful normative responses against undue State restrictions.” This means that if countries pressure companies to enforce content standards in a way that is idiosyncratic or infringes users’ freedom of expression, companies are on stronger ground when they can appeal to international norms as their reason for resisting those demands.

Facebook’s founder and CEO, Mark Zuckerberg, has seemingly expressed a desire for just such a higher norm. He has voiced discomfort with being put in the position of deciding where the line on hate speech lies in different contexts and has even suggested a kind of Facebook Supreme Court to decide hard cases. Kaye’s report is a reminder that companies need not create a new legal framework from whole cloth—one already exists in the form of international human rights law.

Kaye’s report explains other benefits of adopting global standards beyond the substantive protection of human rights. For users, companies being guided by the same set of norms regarding freedom of expression in every country would offer greater consistency and predictability across markets and platforms. Furthermore, human rights law not only protects freedom of expression itself but also provides standards for transparency, due diligence about companies’ human rights impacts and due process for remediation of harms.

Objections to Human Rights Law

But international human rights norms are not a panacea for the intractable problems of online speech regulation. It is something of a misnomer to speak of international human rights law as if it were a single, self-contained and cohesive body of rules. Instead, these norms are found in a variety of international and regional treaties that are subject to differing interpretations both by the states party to them and by the international tribunals that apply them.

In a recent comprehensive survey of international laws regulating speech, for example, Amal Clooney and Philippa Webb showed that different sources provide conflicting guidance on speech rights and have been interpreted inconsistently by different tribunals. These norms may not supply as much certainty and uniformity as promised or expected. And even absent problems of vagueness or conflicting authorities, international law on freedom of expression is not universally venerated: Clooney and Webb argue that the right to insult is insufficiently protected and that while technology companies “should be able to draw inspiration from international law,” that law needs reform to play a more positive role.

More generally, the particular U.S. objections to international norms on freedom of expression are well known. The absolutism of the First Amendment is an outlier even in the democratic world, and the United States has entered reservations to international treaties containing speech rights—such as the International Covenant on Civil and Political Rights—to make clear that it will not be bound by international laws more restrictive of speech than the First Amendment. As Kate Klonick’s research has shown, content moderation policies at the dominant U.S. social media platforms were developed by lawyers acculturated in this First Amendment ethos.

But these platforms already depart from free-speech absolutism, removing content, such as harassment and nudity, that the government could not proscribe under the First Amendment. Klonick also shows that, in the years since their founding, the platforms have gradually developed more nuanced approaches to regulation in countries with speech standards different from those in the U.S. Applying universal human rights norms would not eliminate the need to respect local context in determining how best to regulate speech. As Kaye writes, human rights principles can offer a predictable and consistent baseline but are not “so inflexible or dogmatic” as to deprive companies of the ability to consider relevant context. Importantly, though, they would provide a “common vocabulary” for explaining decisions, allowing for greater dialogue and the development of a kind of precedent to help move toward better and more uniform decision-making.

Thinking Long-Term and Big Picture

As Kaye’s report and ongoing media coverage make clear, regulation of social media is increasing globally. States should adhere to their international legal obligations to respect individuals’ right to freedom of expression, regardless of frontiers. Where they do not, however, companies will need tools that allow their platforms to enable the exercise of human rights rather than to facilitate governments’ violations of users’ rights. A transparent commitment to human rights as the guide for content moderation would certainly be a voluntary constraint on companies’ discretion—but it would also provide normative backing for their decisions.

No legal system is perfect. International human rights law, in particular, is dynamic and still developing. But just as the vaguely worded First Amendment has crystallized into more concrete rules, so too can international law. Compared with the First Amendment, international law on freedom of expression is young, having emerged only after World War II, and active engagement with these norms, along with transparency around decisions, could facilitate its ongoing development. Kaye suggests that companies can develop a kind of case law by explaining their decisions, enabling users, civil society and states to better understand their standards. He further advocates an industry-wide social media council, modeled on press councils, to serve as a credible and independent accountability mechanism and to help develop transparency and norms.

“Content regulation” is an umbrella term that encompasses a wide range of complex issues, which play out on different platforms run by a diverse range of companies in countries around the world. Each of these issues requires sustained individual attention, but it’s also important not to miss the forest for the trees. Kaye’s report offers a framework based on process and transparency that takes as its starting point this bigger picture—and, within it, the intrinsic value of a greater harmonization of norms across companies and countries.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
