
COVID-19 and Social Media Content Moderation

Evelyn Douek
Wednesday, March 25, 2020, 1:10 PM

The pandemic is shaping up to be a formative moment for tech platforms.

Mark Zuckerberg delivers a 2018 Keynote address. (By: Anthony Quintano, https://tinyurl.com/wsdnb8b; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/deed.en)

Published by The Lawfare Institute in Cooperation With Brookings

The coronavirus pandemic has forced people around the world to reexamine many things that are usually taken for granted. On that list is social media content moderation—the practice of social media platforms making and enforcing the rules about what content is or is not allowed on their services.

The pandemic is shaping up to be a formative moment for tech platforms. Major tech companies have begun sending home human workers who review social media content and relying more heavily on artificial intelligence (AI) tools to do the job instead. Meanwhile, platforms are also taking an unusually aggressive approach to removing misinformation and other exploitative content and boosting trusted content like information from the World Health Organization (WHO). These actions have resulted in some rare good news stories for platforms, with one commentator even speculating that “coronavirus killed the techlash” that has plagued these companies for the past few years. Whether or not the changes in platform practice or public sentiment last, it is worth taking stock of how platforms are responding and what those responses say about content moderation, not only in times of crisis but every day.

Content Moderation During a Pandemic

Tech platforms have been unusually proactive about transparency in the past few weeks, with many making public announcements about the steps they are taking in relation to the pandemic.

Facebook has a page with running updates on the steps the company is taking, including a new Information Center at the top of people’s News Feeds providing real-time updates from national health authorities and global organizations such as the WHO; banning ads that seek to capitalize on the crisis and exploit panic; and removing false content or conspiracy theories about the pandemic “as an extension of [the platform’s] existing policies to remove content that could cause physical harm.”

Twitter has similarly announced that it is “broadening [its] definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information.” The indicative list of types of content Twitter is now removing is long, and includes everything from descriptions of treatments that are not harmful but are ineffective, to specific and unverified claims that incite people to action and cause widespread panic, to claims that certain groups are more or less susceptible than others to COVID-19, the respiratory disease caused by the coronavirus.

And Google has released a statement in the same vein, if less detailed. The company is also boosting authoritative information across its services: YouTube’s homepage now hosts videos from the Centers for Disease Control and Prevention (CDC) or other relevant public health agencies. Google’s post includes a general statement that it is “removing COVID-19 misinformation,” including taking down “thousands of videos related to dangerous or misleading coronavirus information” on YouTube and removing “videos that promote medically unproven methods to prevent coronavirus in place of seeking medical treatment.” It is also removing advertisements seeking to capitalize on the crisis and has announced a temporary ban on ads for medical masks and respirators.

These examples reflect a broader trend toward more aggressive content moderation across social media platforms during this crisis. Pinterest is limiting all search results about the coronavirus to results from “internationally-recognized health organisations.” Facebook-owned WhatsApp, which has experienced a “flood” of misinformation and conspiracy theories, is partnering with the WHO to provide messaging hotlines to get users accurate information and resources. Reddit has “quarantined” two subreddits that were boosting misinformation—an unfortunate term used by Reddit to describe measures requiring users to opt in to see certain subreddits, which prevent people from viewing content accidentally. Medium took down a widely shared post, debunked by many experts, claiming that the pandemic was overblown, and has announced new content policies specific to COVID-19.

Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter and YouTube also released a brief “joint industry statement” saying that they are working closely together on their response efforts and “jointly combating fraud and misinformation about the virus, elevating authoritative content on our platforms, and sharing critical updates in coordination with government healthcare agencies around the world.” They invited other companies to join them.

Misinformation will find a way, though. Even as platforms crack down, a great deal of false information remains online—and there has been a surge in false claims traveling by text message and even by old-school telephone. But after years of disavowing their role as “arbiters of truth,” platforms are embracing it—in this limited realm, at least.

Next, they must face the practical issue of how to enforce these new rules in the context of a pandemic. Just as tech companies were implementing these more aggressive content moderation measures, they suddenly faced the same problem as businesses around the world: Their staff had to work from home. Transitioning to working from home was not possible for many of these workers for various reasons, including basic logistical issues like access to necessary hardware and software, privacy issues with setting up review feeds outside of controlled environments, and concerns about contractors reviewing disturbing content without proper mental health support and in homes where others, including children, may be exposed to this material.

So the major companies had to accept that human content moderator capacity would be significantly reduced. They announced they would be relying on automated tools more than normal as a result—and warned that this would result in more errors in moderation.

Facebook stated, “[W]ith a reduced and remote workforce, we will now rely more on our automated systems to detect and remove violating content and disable accounts. As a result, we expect to make more mistakes, and reviews will take longer than normal, but we will continue to monitor how our systems are performing and make adjustments.” Likewise, Twitter said it would be “increasing our use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content. We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes.”

YouTube’s announcement struck the same note: “[W]e will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place. As we do this, users and creators may see increased video removals.”
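As a rough way to picture the shift the platforms are describing, the sketch below is a purely hypothetical illustration (the function, scores and cutoffs are invented, not any platform’s actual pipeline) of borderline content being routed to human review when reviewers are available and defaulting to removal when they are not—the behavior behind the warning that “users and creators may see increased video removals.”

```python
# Hypothetical sketch of the fallback behavior described above; the function
# name, scores and cutoffs are invented for illustration only.

def triage(score: float, reviewers_available: bool) -> str:
    """Decide what to do with a piece of flagged content.

    score: an automated classifier's confidence that the content violates policy.
    """
    if score >= 0.95:
        return "remove"       # clear violations are removed automatically
    if score <= 0.30:
        return "keep"         # clearly benign content stays up
    # Borderline content normally goes to a human reviewer.
    if reviewers_available:
        return "human_review"
    # With reviewers sent home, the system errs on the side of removal,
    # accepting more false positives in exchange for faster takedowns.
    return "remove"

# With reviewers available, a borderline post is escalated; without them, it is removed.
print(triage(0.6, reviewers_available=True))   # -> human_review
print(triage(0.6, reviewers_available=False))  # -> remove
```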

The Pandemic and Proportionality

It’s notable how little dissent there has been from the proposition that platforms need to take action. This may not be surprising to anyone who has been paying attention to the content moderation debate of the past few years, but it is in fact a marked shift from the libertarian, “content must flow” ethos of the early internet. Indeed, the more common question in recent weeks has been why platforms struggle so much to take down political misinformation if they are so capable of moderating content during a pandemic. That is, the desire is for more content moderation, not less.

In a way, the question seems intuitive: If the problem is “false information,” why should platforms treat that information differently across subject matter? But the topic of the false information does matter. In the context of COVID-19, the potential harm of false information is much more direct and—in some cases—literally life or death, and moderation of health-related misinformation raises fewer concerns about platforms interfering with the ordinary mechanisms of political accountability than does moderation of quintessentially political speech in a normal time. When asked on a press call why Facebook was treating COVID-19 misinformation differently, Mark Zuckerberg invoked the First Amendment trope that “you can’t yell fire in a crowded theater.”

But some of the platforms’ rules about COVID-19 sweep more broadly than this narrow test would allow. As Eugene Volokh speculated, “[A]ny attempt to punish even lies about (for instance) how coronavirus is generally transmitted would likely be unconstitutional; the remedy for such lies is public argument.” And as Rebecca Tushnet tweeted—in an example made topical by clogged pipes around the country caused by people flushing disinfecting wipes in the midst of a pandemic-induced toilet paper shortage—“A court a few years back held that it violated the First Amendment to tell manufacturers not to label their wipes ‘flushable,’ because flushability was ‘controversial.’”

Furthermore, the platforms have acknowledged that these rules will be enforced by AI that will generate more “false positives,” meaning that content that should remain up will be removed. This is a concession that enforcement will not be “narrowly tailored,” as First Amendment doctrine would require. Of course, platforms are not the government and are not bound by the First Amendment. They can take down more content than the government could make illegal. But doing so makes reliance on First Amendment law and lore misplaced.

Outside of the United States, the globally dominant form of rights adjudication is a “proportionality” standard, which allows the infringement on speech to be balanced against its benefits. This standard more readily accommodates different treatment of content in different contexts. Platform actions around COVID-19, where the potential harm is great and the category of content being removed relatively narrow, seem to easily satisfy this standard. This rise of “proportionality” as the dominant ethos guiding platform actions matches what Jonathan Zittrain has called the rise of the “Public Health Era” of internet governance, which looks at the costs and benefits of different rules as opposed to just focusing on the rights of the speaker. A Public Health Era is easy to understand in the context of a pandemic, but this moment is just the apotheosis of the general trend.

U.S. legal doctrine and culture are famously resistant to such balancing; Justice Antonin Scalia derided rights balancing as “like judging whether a particular line is longer than a particular rock is heavy.” I will not wrestle here with the merits of proportionality as a form of rights adjudication, a topic on which there is a vast literature. But if platforms are going to balance different interests, it’s important that they do so transparently and consistently. Platforms’ aggressive handling of pandemic content is a stark illustration that companies are embracing more interest balancing in their rules—and of the benefits of that approach. But it also highlights the need to find processes to make such balancing legitimate and contestable in the future, and not only when platforms decide to publicly announce it or deign to answer questions about it on press calls in obvious cases like a pandemic.

The Pandemic and Error Choices

The other notable thing about the platforms’ announcements is their uncharacteristic humility about the capacity of their AI tools: In announcing the transition away from human moderation, they warned that “mistakes” would inevitably result. More commonly, platforms point to AI as a kind of panacea in seeking to convince lawmakers that they are taking responsibility for the content on their services. But with many human content moderators suddenly out of commission, platforms have been forced to acknowledge the very real limits of their technology.

There are three lessons to be learned from this. First, platforms should be commended for being upfront about the likely effects of this change in approach. But they should also be collecting data about this natural experiment and preparing to be equally transparent about the actual effects of the change. Opening up their processes for audit and allowing academics to study the results of these changes could be a useful source of information for future policymaking.

Second, platforms and lawmakers should remember these announcements in the future. As Hannah Bloch-Wehba has noted, “As lawmakers in Europe and around the world closely scrutinize platforms’ ‘content moderation’ practices, automation and artificial intelligence appear increasingly attractive options for ridding the Internet of many kinds of harmful online content.” The candid announcements from platforms in recent weeks about the costs of relying on AI tools should be nailed to the door of every legislative body considering such an approach.

Finally, regulators and academics need to recognize that these announcements are really just an extreme version of the choices that platforms are making every day. Content moderation at scale is impossible to perform perfectly—platforms have to make millions of decisions a day and cannot get it right in every instance. Because error is inevitable, designing a content moderation system requires choosing which kinds of errors the system will tend to make. In the context of the pandemic, when the WHO has declared an “infodemic” and human content moderators simply cannot go to work, platforms have chosen to err on the side of false positives, removing more content. As Sarah Roberts writes, “The alternative is also no alternative at all, for if the AI tools were turned off altogether, the result would be an unusable social media platform flooded with unbearable garbage, spam and irrelevant or disturbing content.”
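To make that error choice concrete, here is a minimal, hypothetical sketch (the posts, classifier scores and thresholds are invented for illustration and do not reflect any real platform’s systems) of how a single confidence threshold on an automated classifier determines which way a moderation system errs: lowering the threshold removes more harmful content but also takes down benign posts (false positives), while raising it leaves more harmful content up (false negatives).

```python
# Hypothetical illustration of the error trade-off described above.
# The posts, classifier scores and thresholds are invented for this example.

posts = [
    {"text": "Gargling bleach cures the virus", "harmful": True,  "score": 0.92},
    {"text": "Wash your hands for 20 seconds",  "harmful": False, "score": 0.35},
    {"text": "5G towers spread the disease",    "harmful": True,  "score": 0.55},
    {"text": "Local clinic extends its hours",  "harmful": False, "score": 0.52},
]

def moderate(posts, threshold):
    """Remove every post whose model score meets or exceeds the threshold."""
    removed = [p for p in posts if p["score"] >= threshold]
    false_positives = sum(not p["harmful"] for p in removed)                # benign posts removed
    false_negatives = sum(p["harmful"] for p in posts if p not in removed)  # harmful posts kept
    return len(removed), false_positives, false_negatives

# A cautious threshold leaves more harmful content up (false negatives);
# an aggressive one takes down more benign content (false positives).
for threshold in (0.9, 0.5):
    removed, fp, fn = moderate(posts, threshold)
    print(f"threshold={threshold}: removed={removed}, "
          f"false_positives={fp}, false_negatives={fn}")
```

Neither setting is “correct”; the choice of threshold is itself a policy decision about which errors are more tolerable, which is the decision the platforms have announced they are making differently during the pandemic.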

Platforms announced similar choices in the wake of the Christchurch massacre, to ensure quicker take-downs of copies of the shooter’s livestream. The reality is, though, that the trade-offs between accuracy, comprehensive enforcement and speed are inherent in every platform rule and not just in these exceptional moments. As I noted in the context of Facebook’s recently released white paper on regulatory options for social media, a more mature conversation about content moderation requires much more transparency and realism about these choices.

Conclusion

Content moderation during this pandemic is an exaggerated version of content moderation all the time: Platforms are balancing various interests when they write their rules, and they are making consequential choices about error preference when they enforce them. Platforms’ uncharacteristic (if still too limited) transparency around these choices in the context of the pandemic should be welcomed—but it needs to be expanded on in the future. These kinds of choices should not be made in the shadows. Most importantly, platforms should be forced to earn the kudos they are getting for their handling of the pandemic by preserving data about what they are doing and opening it up for research rather than relying on selective disclosure.

One thing is certain: With enormous numbers of people locked inside, spending more time online and hungry for information, the actions taken by platforms will have significant consequences. They may well emerge from this more powerful than ever. Right now the public is asking tech platforms to step up, but we also need to keep thinking about how to rein them in.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
