
Why Were Members of Congress Asking Mark Zuckerberg About Myanmar? A Primer.

Evelyn Douek
Thursday, April 26, 2018, 1:00 AM

A U.N. camp for displaced persons in Rakhine state. Credit: UK International Development Agency/Wikimedia

During Mark Zuckerberg’s 10-plus hours of testimony before Congress on April 10 and 11, the Facebook CEO was asked at least six times about his company’s alleged censorship of conservative internet personalities Diamond and Silk—to the point where New York Times technology reporter Kevin Roose commented, “I think if you were an alien who dropped down to earth to observe the testimony … you would think there was no one on earth who was more important than Diamond and Silk.” In contrast, Zuckerberg fielded exactly three questions about his company’s role in the ongoing violence in Myanmar—one of the most pressing human rights atrocities of our time.

This was a missed opportunity. At this point, it is indisputable that Facebook has played a role in the violence in Myanmar. The ongoing crisis raises many legal questions and exposes a lacuna in the law that political and public pressure is currently trying to fill. But one thing is clear: This crisis should be an important part of the conversation about the role of big technology companies in societies around the world. If developed countries are concerned about the impacts of social media on civil discourse, they should not forget that the same dynamics can have more drastic effects in developing countries—and will continue to do so as millions more people come online.

Ethnic Cleansing in Myanmar

The U.N. High Commissioner for Human Rights, Zeid Ra’ad Al Hussein, has called the latest wave of violence against Rohingya Muslims in Myanmar a “textbook example of ethnic cleansing.” The crackdown by the country’s security forces, which began in August 2017, has precipitated the fastest-growing refugee crisis in the world: Over 800,000 Rohingya refugees in total have now fled to Bangladesh. Myanmar has blocked most international access to Rakhine state, where the violence is occurring, making reliable estimates of the death toll nearly impossible. Doctors Without Borders said late last year that its conservative estimate was that at least 6,700 people had been killed, including at least 730 children under the age of five.

This is just the latest flare-up of long-simmering tensions between the country’s Rohingya Muslim minority and its Buddhist majority, who view the Rohingya as illegal immigrants from Bangladesh. Zeid has drawn attention to the progressive stripping of the political and civil rights of the Rohingya population by “successive Myanmar governments … since 1962.” But the conflict has come to the fore again since the country began its transition to democracy in 2010, as deep ethnic tensions have erupted into periodic waves of violence and fractured a society no longer suppressed by military rule.

Facebook’s Role in Myanmar

Facebook founded the controversial “Free Basics” initiative in 2013 under the name Internet.org, with the goal of bringing greater internet connectivity to rural and low-income populations around the world. The Free Basics platform allows users to access supported services for free by giving those services a “zero-rating,” meaning they do not count towards the user’s data cap. Free Basics launched in Myanmar in 2016, and its influence has been profound. Its arrival coincided with the arrival of the internet more generally: Internet penetration in Myanmar has gone from only 2 percent in 2013 to 25.1 percent in 2017, and over 90 percent of the country’s population now has access to a phone with internet service.

This means that to many in Myanmar, “Facebook is the internet.” Facebook entered a society that had low internet literacy to begin with, bringing with it the characteristics of its platform that have caused controversy elsewhere: the promotion of echo chambers that fragment and polarize the public sphere, algorithms that optimize for engagement and prioritize extremist content, and the capacity for information to go “viral” and reach audiences at unprecedented speed and scale.

As a result, Facebook has become a primary conduit for the spread of anti-Rohingya propaganda and hate speech. U.N. investigators have pointed the finger directly at Facebook as playing a “determining role” in the violence, saying that “Facebook has now turned into a beast, and not what it originally intended.” Reporting has shown that hate speech spreads through “affinity groups” in Facebook or WhatsApp group messages that include friends or relatives who are “trusted sources.” Because people are often members of many such groups, hate speech and misinformation jump from group to group, each time appearing to come from a trusted source.

Facebook has also given key figures a megaphone. The leader of a prominent group of radical Buddhist monks, Ashin Wirathu, built up a large following on the platform before Facebook permanently disabled his account in January of this year. A New York Times reporter in Myanmar, Paul Mozur, said Wirathu used to print paper flyers to spread his messages, but Facebook allowed him to get 100 times the reach. And it is a singular feature of the internet that Wirathu could obtain that reach far more quickly and cheaply online.

Flat-footed Facebook

In 2013, almost five years ago, Zeynep Tufekci, a leading researcher on the social implications of digital connectivity, was already warning on Twitter about the spread of anti-Muslim hate speech on Facebook in Myanmar.

That is to say: The recent events were not only foreseeable, but they were actually foreseen. Reporting at the time documented the proliferation of hate speech on the platform. A 2014 New York Times article on violent riots that left two dead described how the riots were “set off” by unconfirmed rumors of a rape that spread on Facebook, and noted the influence of Wirathu’s radical teachings.

And yet Facebook has largely been an “absentee landlord” in Myanmar, as Phil Robertson, deputy director of Human Rights Watch in Asia, has put it. Facebook has no office in Myanmar, which makes it much more difficult for the company to evaluate the complicated factors inherent in content moderation decisions that require deep understanding of context. Facebook does not disclose how many content moderators it has with local language expertise—those it does have appear to work in Dublin, where it apparently has some difficulty finding qualified candidates.

During the congressional hearings earlier this month, Zuckerberg was prepared to answer questions about his company’s role in Myanmar with a three-step plan: hiring dozens more Burmese-language content reviewers, working with civil society in Myanmar to take down accounts of “specific hate figures,” and creating a product team to implement (unspecified) product changes in Myanmar and other countries that may have similar issues in the future.

But Zuckerberg announced these steps only after a wave of recent press coverage of the situation. As recently as March 15 of this year, Facebook didn’t seem to have a plan. In an interview with Slate, Facebook’s head of News Feed, Adam Mosseri, said that while people at the company “lose some sleep over this,” the situation was particularly challenging for a number of reasons, including that there were no local third-party fact-checkers for Facebook to partner with. Civil society organizations in Myanmar have capitalized on the public attention, publishing an open letter specifying the problems with Facebook’s content moderation in the country, in particular:

  • an over-reliance on third parties to flag content (if they came across it in time) rather than monitoring for such content itself;
  • a lack of a proper emergency escalation mechanism (it had taken days for Facebook to step in after the organizations tried to raise concerns about messages inciting violence, which went viral in the meantime);
  • a lack of engagement with local stakeholders (requests to talk to Facebook’s engineering and data teams about systemic solutions had gone unanswered); and
  • a lack of transparency (seven months after an incident Zuckerberg cited as a success story because Facebook had blocked a series of messages inciting specific violence, the organizations still did not know the details of what had happened).

The overall picture these facts paint is that Facebook’s role as an instrument of ethnic violence in Myanmar is a recurrent and systemic problem, and one of which the company has been (or should have been) aware for years.

A Lacuna in the Law

Despite this background, Facebook has few binding legal obligations and the relevant legal frameworks are weak.

First, and most obviously, Facebook must comply with the local laws of the jurisdictions in which it operates. However, the government of Myanmar has shown little appetite for placing political or legal pressure on Facebook to improve its operations, instead downplaying the level of violence altogether. The government has consistently denied claims that genocide or ethnic cleansing is occurring, claiming that the Rohingya are burning their own villages. Aung San Suu Kyi, the country’s de facto leader—and the recipient of a Nobel Peace Prize that many have called for her to be stripped of, given her inadequate response to the crisis—has even called reports of the crimes “fake news.” Her office has itself used Facebook to disseminate inflammatory content contradicted by reports from local journalists.

International law in this context also lacks teeth. Myanmar is one of the 26 states that have not ratified the International Covenant on Civil and Political Rights, and it has therefore not committed itself to the obligation to ensure that corporations respect the human rights of people within its territory. It is also not a state party to the Rome Statute, which makes it hard (absent a U.N. Security Council referral) for International Criminal Court prosecutors to bring charges for crimes against humanity. (However, a filing published earlier this month argues that the ICC has jurisdiction because the crime of deportation is occurring on the territory of Bangladesh, a state party.)

Of course, Facebook is also not a party to these conventions: It is not a state, despite growing comparisons to that effect. As such, these international human rights instruments do not bind it directly. In some ways, then, the Myanmar crisis is just another incarnation of the difficulty international law has had in holding multinational corporations accountable for the impacts of their operations on rights. The U.N. Guiding Principles on Business and Human Rights were developed in order to help bridge this gap. But the principles are non-binding, and Facebook is not legally liable if it fails to adopt them.

The Guiding Principles do, at least, create a framework for thinking about Facebook’s responsibilities, including its responsibility not to contribute to human rights abuses, to conduct due diligence on its human rights impacts, and to provide transparency into those impacts. These obligations are reflected in the complaints of the Burmese civil society organizations in their open letter, and Zuckerberg’s commitments to Congress would go some way towards meeting them. “Dozens” more Burmese-language reviewers will help; by comparison, though, Facebook said it would hire an additional 3,000 moderators by the end of 2018 and open a second content moderation center in Germany when that country passed a law threatening large fines if hate speech wasn’t removed within 24 hours. This is a stark differential, and one that seems unlikely to be wholly justified by the estimated number of users in each country alone (14 million in Myanmar, compared with 39 million in Germany in 2017). Furthermore, these newfound commitments suggest that Facebook’s systems have been inadequate for the intervening years, and they underline the necessity of monitoring the new measures’ effective implementation.

Moreover, there are some features of Facebook’s business that make it a more difficult case than the typical multinational corporation. It operates in many countries without any physical presence at all—meaning that in states where the government does want to regulate Facebook, its options and leverage are much more limited. In Sri Lanka, for example, officials found Facebook unresponsive to official concerns about hate speech and misinformation on its platform until the government blocked access entirely. Sri Lanka’s head of public information told the New York Times that big companies like Facebook “look at us only as markets … We’re a society, we’re not just a market.” This is the second way Facebook is distinguishable from other multinational corporations—its dominance in these markets means it becomes the de facto public sphere. It is not just like any other commercial product: It is integral to the flow of information.

Finally, Facebook’s global reach means that these individual markets are insignificant to the company as a whole. Facebook has become integral to Myanmar, but Myanmar matters very little to Facebook.

A Broader Conversation

It’s important to resist the urge to oversimplify this issue. Myanmar’s example raises difficult factual, legal, and philosophical questions.

The exact nature and extent of Facebook’s contribution to the spread of violence in Myanmar is unknown and perhaps unknowable. But the connection between hate speech and mass atrocities has a long and bloody history in the 20th century alone. The Nuremberg Tribunal found that Nazi publisher Julius Streicher’s anti-Semitic articles in Der Stürmer had “infected the German mind with the virus of anti-Semitism and incited the German people to active persecution.” “Hate radio” played a key role in the Rwandan genocide, and villages with access to radio broadcasts saw increased participation in the killings. As Samantha Power has written, “[k]illers often carried a machete in one hand and a transistor radio in the other.”

So there is precedent for U.N. investigators’ declaration that the spread of hate speech through Facebook has played a “determining role” in the violence in Myanmar. Academic studies are shedding light on the correlation, as is journalism. In a recent article about outbreaks of violence in Sri Lanka, Amanda Taub and Max Fisher of the New York Times “found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, [interviewees said], ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.” But correlation is not causation, factually or legally. As Susan Benesch has written, “[t]he idea that inflammatory speech is a catalyst for genocide is widely believed, likely correct and of no small importance … But the impact of speech on groups is complex, and difficult to measure or prove.”

Ethnic violence existed in Myanmar long before Facebook arrived, and Facebook is hardly the most culpable actor in the spread of misinformation and hate speech. Facebook’s mission to “bring more people online and help improve their lives” is not a fig leaf but a noble goal, in many cases realized. People in Myanmar rely on Facebook for much of their communication, as do news outlets and journalists trying to get important information out. The platform is also becoming increasingly important for local businesses. Even if it were possible, it is by no means clear that Myanmar would be better off if Facebook exited the country tomorrow.

These complexities should be borne in mind when thinking about how a company like Facebook could be better regulated internationally.

There are questions to ask about Facebook’s flat-footedness in the face of clear evidence of its role in a mass atrocity. Myanmar is only the most vivid example of dynamics playing out in many developing countries—and as developed countries with political and economic power brainstorm ideas for the regulation of Big Tech, this should be part of the calculation. Legal frameworks have the power to provide standards against which Facebook’s conduct could be measured, and a common vocabulary with which to have the conversation. They could also establish procedural mechanisms that offer transparency into the resources Facebook is dedicating to minimizing its contribution to harm in these situations, and visibility into specific issues, such as the steps it is taking to prevent moderators from again removing content that could be important evidence of human rights abuses.

International law will need to account for the role of new communications technologies in crimes against humanity, and the unprecedented power a few corporations have over the public spheres of foreign countries. But this reckoning will take time, and meanwhile, domestic legal systems are already contemplating action. As they do so, lawmakers in countries like the United States should consider what tools are available to shed light not only on the effects of the activities they are thinking of regulating domestically, but also on the power of these platforms in societies with weaker institutions.

To ask Facebook to remove all hate speech or misinformation from its platform is unrealistic and unhelpful. But to ask it to take reasonable steps to build systems that stem the spread of the worst content in a timely fashion could save more than a few lives.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
