Platform Justice: Content Moderation at an Inflection Point

Danielle Citron, Quinta Jurecic
Friday, September 7, 2018, 1:56 PM
In the wake of Russian interference in the 2016 election campaign, technology companies are facing unprecedented scrutiny from the media and within government.

On Thursday, Sept. 6, Twitter permanently banned the right-wing provocateur Alex Jones and his conspiracy theorist website Infowars from its platform. This was something of a final blow to Jones's online presence: Facebook, Apple, and YouTube, among others, blocked Jones from using their services in early August. Cut off from Twitter as well, he is now severely limited in his ability to spread his conspiracy theories to a mainstream audience.

Jones has been misbehaving online for a long time. Following the Sandy Hook mass shooting in 2012, he spread theories that the attack had been staged by the government, ginning up harassment against the parents of the murdered children to the extent that one couple has been tormented, threatened, and forced to move seven times. So why was he banned from these platforms only now?

In the wake of Russian interference in the 2016 election campaign, technology companies are facing unprecedented scrutiny from the media and from within government. Companies like Facebook and Twitter, which previously took a largely hands-off approach to content moderation, have shifted—though reluctantly—toward greater involvement in policing the content that appears on their platforms. Even three years ago, it would have been unthinkable that Jones could have been blocked from almost every major platform across the internet. But by the late summer and early fall of 2018, the bulk of the public and media outrage over Jones's banning was not that technology companies were silencing his voice and limiting speech—it was that Jones had not been banned earlier.

Content moderation is at an inflection point. In a new paper in the Hoover Aegis series, we take stock of the changing regulatory environment around the dominant technology platforms and examine both the possibilities and the dangers of legislative and technological solutions to the problems of content moderation. With the passage of the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) in March 2018, Congress has already taken the significant step of amending the safe harbor provided to third-party platforms by Section 230 of the Communications Decency Act—but FOSTA is deeply flawed legislation. In an environment in which the question is no longer whether the safe harbor will be altered, but to what extent, we propose a range of possible modifications to Section 230 that could conceivably maintain a robust culture of free speech online without extending the safe harbor to platforms that do not respond to illegality in a reasonable manner.

Beyond statutory changes, there is also the question of moderation that takes place in the shadow of the law—that is, the technological solutions platforms employ to minimize potential liability under the threat of stricter regulation. Our paper advocates caution regarding the turn toward greater automation in content moderation, though we acknowledge the potential benefits of automation in certain circumstances, such as the use of hash technology to combat nonconsensual pornography. But broad over-moderation in response to shifting political moods risks censorship creep.

Shortly before he was barred from Twitter, Jones appeared in the hallways of Capitol Hill to complain about being blocked from social media sites. Facebook and Google, he said, are “blocking conservatives involved in their own First Amendment political speech.” Jones's argument is facile: he was banned for harassment, not for voicing his political beliefs, and under current law he has no First Amendment right to express himself on a privately owned platform. But whether intentional or not, his conflation of a private company with a forum run by the government is an expression in miniature of the increasingly important role of internet platforms in democratic discourse—and of the difficulty of conceptualizing how those platforms ought to regulate speech within a legal tradition focused on state, rather than private, action.

Danielle Citron is a Professor of Law at Boston University School of Law and a 2019 MacArthur Fellow. She is the author of "Hate Crimes in Cyberspace" (Harvard University Press 2014).
Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
