Cybersecurity & Tech

How Facebook Can Use International Law in Content Moderation

Hilary Hurd
Wednesday, October 30, 2019, 8:00 AM


Mark Zuckerberg, CEO of Facebook (Source: Flickr/(CC) Brian Solis, www.briansolis.com and bub.blicio.us.)


Speaking at Georgetown University on Oct. 17, Mark Zuckerberg said what many did not want to hear: Facebook would not be doing more to restrict “bad” speech. In response to repeated calls that Facebook more actively police fake news and hate speech on the world’s largest social media platform, Zuckerberg resisted: “You can’t impose tolerance top-down.” Eulogizing the history of America’s free speech laws, Zuckerberg warned that America’s values would not necessarily endure as new, less liberal Chinese tech platforms compete for global users—suggesting it is up to Facebook to preserve free speech online.

Like most publicly traded companies, Facebook has a duty to maximize shareholder profits and follow the local laws of countries where it operates. Yet, unlike most publicly traded companies, Facebook has formally stated a moral mission: to “bring the world closer together” and build community online through free expression. The central question is whether it can do both. After all, to comply with local laws or ad hoc government orders in illiberal states, Facebook will inevitably feel pressure to take down user content that may be part of an important political discussion. Conversely, should it defy local laws or government orders in the name of free speech, Facebook risks endangering its profit margins—or even its employees, who may be arrested for Facebook’s failure to comply.

But there is a potential middle course between the diverging paths of principle and profit. Through the establishment of its new Oversight Board, Facebook could bolster its commitment to free expression globally by requiring governments to justify their take-down requests in keeping with the International Covenant on Civil and Political Rights (ICCPR). Article 19 specifically lays out three conditions under which governments can restrict speech. By insisting that governments frame their take-down requests in keeping with Article 19’s requirements before it removes any content, Facebook would honor its stated goal of promoting free expression globally while shifting the burden to governments to justify their actions. The Oversight Board could in turn make this commitment credible by promising to restore any content removed because of a government take-down request, unless the government adhered to Article 19’s formal steps.

This is not to say that Facebook should decide whether any government’s derogations satisfy Article 19; international law experts and national courts are better positioned to do that. Rather, by mandating that states make these derogations and by making the derogations public, Facebook would empower citizens to judge their own governments’ actions and, in turn, pressure governments to be more selective about which voices they seek to silence. Nor do I intend to suggest that Facebook must model its own content standards on the ICCPR, as some have proposed. While Facebook may have taken some vague inspiration from the ICCPR in developing its own content rules, my chief concern is with the rules Facebook applies to states when, for a range of motives, they pressure Facebook to remove user content.

The ICCPR is one of the most important treaties in the realm of human rights and for global free expression more broadly. Article 19 of the ICCPR provides:

  1. Everyone shall have the right to hold opinions without interference.
  2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.
  3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary: (a) For respect of the rights or reputations of others; (b) For the protection of national security or of public order (ordre public), or of public health or morals.

In addition to guaranteeing individuals the right to speak and hear ideas, Article 19 guarantees that the freedom of expression shall apply across “frontiers,” irrespective of the “media” through which ideas are communicated. In short, an ICCPR signatory cannot restrict free speech just because someone lives in another country. Nor can it restrict freedom of speech just because a person decides to broadcast his or her ideas over the radio rather than on television. A 2011 series of reports by the special rapporteur on the promotion and protection of the right to freedom of opinion and expression emphasized that Article 19 applies equally to the internet.

Governments facing a national security emergency or public policy crisis can derogate from Article 19’s protections, but three key conditions must be met: (a) The derogation must be provided for by law (i.e., not an ad hoc statement), (b) it must be necessary (i.e., narrowly tailored), and (c) it must serve a public interest. A public interest can be justified with reference to other articles in the ICCPR. Article 20, for example, requires states to prohibit “any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.” Nonetheless, General Comment 34 on Article 19 establishes that it is “not compatible with the Covenant for a restriction to be enshrined in traditional, religious, or other such customary law.” Thus, restricting Islamophobic hate speech might be justified under Article 20. Restricting poems about gay marriage would not.

Most international law experts agree that Article 19, like the rest of the ICCPR, applies only to states. Nonetheless, there have been renewed calls to apply Article 19 to technology companies. Most prominently, the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, released a report last year in which he specifically called on technology companies to apply international human rights law “at all stages of online content regulation” rather than rely on myriad local laws and ever-shifting community standards.

While Kaye did not argue that the ICCPR should be legally binding on private actors, he emphasized that it might provide the appropriate framework for making such decisions. In a more recent report released this October, he specifically discussed how companies might align their hate speech policies with international treaties and conventions. Both reports come against a larger backdrop of calls for multinational companies to prioritize international law. In 2011, the U.N. Human Rights Council endorsed a voluntary business code, the Guiding Principles on Business and Human Rights (UNGP), which specifies that companies should avoid adversely impacting human rights, as defined and understood in the ICCPR, even where local laws would require them to do so. Principle 23 of the UNGP states that “in all contexts” companies should not only “comply with all applicable laws and respect internationally recognized human rights, wherever they operate,” but also “[s]eek ways to honor the principles of internationally recognized human rights when faced with conflicting requirements.” Moreover, multi-stakeholder collaborations like the Global Network Initiative (GNI) have emerged to provide guidance to companies such as Google, Facebook and Microsoft on how their policies might be brought into line with international human rights law and voluntary guidelines, including the UNGP.

Facebook has repeatedly made reference to the ICCPR as a guidepost for the development of its community standards. In September of this year, Facebook’s vice president for global policy management, Monika Bickert, said that Facebook “look[s] to international human rights standards” when applying the company’s content rules. However, Facebook has not clearly relied on the ICCPR to challenge government take-down requests or government regulations that themselves run afoul of the ICCPR.

First, consider Facebook’s take-down request figures. In its annual Transparency Report, Facebook provides figures for how many content restrictions it makes based on local laws—a category distinct from content taken down because it violates Facebook’s community standards. Facebook states that when it removes content for violating local laws, it does so “only in the country or region where it is alleged to be illegal.” Where content is restricted in multiple countries, Facebook counts that take-down multiple times. From 2013 to 2018, the volume of Facebook’s content restrictions more than doubled—increasing from 7,000 to 15,000. While there are likely some cases in which a country had a security rationale to restrict speech in keeping with Article 19’s three-pronged derogation test, it is doubtful such a rationale applied in 15,000 cases.

Second, consider the case studies Facebook publishes to show how it has responded to specific government requests to take down content otherwise compatible with its community standards. While some of these government requests appear compatible with the ICCPR, others involve stark violations. In Taiwan, for example, Facebook apparently restricted access to a page “promoting transnational marriages” on the basis that it violated Article 58 of Taiwan’s Immigration Act, which provides that “[n]o person shall disseminate, broadcast or publish advertisements of transnational marriage agencies through advertising, publication, broadcast, television, electronic signaling, internet or other means that can make the advertisements publicly known.” In India, Facebook made a photo of the Prophet Mohammad unavailable because the country prohibits the prophet’s depiction; it also made a photo of a boy urinating on the Indian national flag unavailable on the basis that it violated laws requiring respect for the national flag.

Facebook’s formal policy states that, “in cases where reports are not legally valid, are overly broad, or are inconsistent with international norms, we will request clarification or take no action.” It goes on to say that “we consider the impact our decisions will have on the availability of other speech.” While the statement acknowledges international norms as a possible constraint, that constraint does not appear to be a very significant one.

Rather than make passing reference to international law, Facebook should require that governments submit formal explanations of why, and how, their take-down requests are compatible with their international commitments under Article 19. Where content is taken down and governments do not provide those explanations, the new Oversight Board should reinstate the content until an explanation is formally provided.

By requiring states to justify their take-down requests in terms of Article 19, Facebook would nudge states to contemplate their international obligations—obligations that are ultimately more speech-protective than local laws—without compromising any obligations of its own. For example, Facebook might require states making take-down requests on the basis of their local laws to justify those requests under Article 19’s three-pronged test. Returning to an earlier example, Taiwan would need to explain how its local law, Article 58 of the Immigration Act, is both “necessary” and in service of a public interest. While Taiwan may fail to provide a satisfactory explanation, publishing whatever justification the government offers would give citizens useful information. It might, for example, allow citizens to compare different political administrations by how speech-restrictive they are. By simply disclosing these derogations, Facebook need not violate any local laws. It need only ensure that states provide a formal explanation for their take-down requests.

Where states make take-down requests without any reference to local laws, the ICCPR would give Facebook greater leverage to simply say no. After all, a central insight of international relations theory is that states—while they do not share the same interests—do care about their reputations vis-à-vis other states. While Facebook is not a state, its global presence endows it with significant potential to “name and shame” states that might otherwise feel inclined to bully Facebook into suppressing speech by threatening to deny it market access. Eventually this might even deter governments from requesting take-downs in the first place.

A cynic might argue that every government could presumably find some way to justify its take-down request under Article 19’s three derogation conditions, thus limiting whatever constraint “reputational costs” would otherwise impose on states. Yet even if the cynics are right, mandating formal derogations would still benefit Facebook. Global reputations do not just matter for countries. They also matter for companies, especially big companies like Facebook that face increasing domestic political pressure. By requiring states to formally explain their take-down requests in terms of Article 19, Facebook would shift the optics of caprice off the company and onto states themselves.

By reviewing how Facebook processes take-down requests, the Oversight Board can ensure greater consistency in Facebook’s approach—and ensure that Facebook does not make exceptions for select government requests without independent review. After all, an inconsistent reliance on the ICCPR would arguably discredit whatever moral legitimacy the treaty confers and lead some governments to treat Article 19’s derogation requirements as a perfunctory box-checking exercise.

In 2018, Facebook announced it would establish an independent oversight board to govern its online content decisions. This September, Facebook published a preliminary charter for the new board. Though detailed bylaws have not yet been released, the board is already, in the words of Evelyn Douek, an “unprecedented innovation in the realm of private platform governance.” If it sets the right priorities, it could have a significant impact on how Facebook interfaces with governments around the world.

As Douek explained on Lawfare, the Oversight Board has the authority to hear cases and overturn Facebook’s content decisions. Most importantly, it also has the ability to control its own docket and set its own priorities about which kinds of content it should review. Should the board decide to focus on content removed solely pursuant to a government take-down request—and not because it violated Facebook’s own rules—it would be well positioned to police Facebook’s application of Article 19. For example, imagine a scenario in which a state asks Facebook to remove content without citing a conflict with local law, as Article 19 requires. Should Facebook acquiesce, as in some cases it would be tempted to do, the Oversight Board could reinstate the content. In this way, the Oversight Board might ensure that governments seeking to pressure Facebook into removing content do not circumvent Article 19 and that Facebook, mindful of its profit margins, does not abandon the principled commitment it has made to free expression.

As the world’s largest social media site with more than 2 billion active users, Facebook is well positioned to protect global speech and deepen compliance with international law—most notably ICCPR Articles 19 and 20. What better way to “bring the world closer together” than to commit to the free speech rules that the vast majority of countries have, at least formally, embraced?


Hilary Hurd holds a J.D. from Harvard Law School. She previously worked for Transparency International as their U.S.-defense lead and global advocacy manager. She has an M.Phil. in International Relations from Cambridge University, an M.A. in Conflict, Security, and Development from King’s College London, and a B.A. in Politics and Russian Studies from the University of Virginia. She was a 2013 Marshall Scholar.
