Cybersecurity & Tech

Rushing to Judgment: Examining Government-Mandated Content Moderation

Jacob Mchangama
Tuesday, January 26, 2021, 8:00 AM

Given various governments’ strict timelines for legally mandated content moderation, platforms may be incentivized to err on the side of removal rather than shielding the speech of their users against censorious governments.

Twitter suspends former President Trump's account (Marco Verch Professional Photographer; CC BY 2.0)

Published by The Lawfare Institute

On March 15, 2019, Brenton Tarrant logged on to 8chan and posted a message on a far-right thread to spread the word that he would be livestreaming an attack on “invaders.” Around 20 minutes later, Tarrant entered a mosque in Christchurch, New Zealand, armed with semi-automatic weapons and a GoPro camera. Tarrant livestreamed on Facebook as he embarked on a killing spree resulting in the murder of 51 people. Facebook removed the livestream 17 minutes later, after it had been viewed by more than 4,000 people. In the next 24 hours, Facebook removed 1.5 million copies of the video, of which 1.2 million were blocked at upload. Tarrant’s preparation and announcement made it clear that the attack’s horrific shock value was tailor-made for social media.

In May 2019, heads of government from New Zealand, France, Germany, the United Kingdom and several other nations released the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online. Large online platforms such as Facebook, Twitter and YouTube, which supported the call, committed to taking specific measures for the “immediate and permanent removal” of violent extremist content. The call was in line with legally binding measures aimed at hate speech and terrorism introduced around the world in the past few years, such as Germany’s Network Enforcement Act (NetzDG), France’s Avia law and, most recently, the EU’s proposal on preventing the dissemination of terrorist content online.

Twitter and Facebook responded to the Jan. 6 attack on the U.S. Capitol by purging President Trump’s social media accounts and QAnon conspiracy content based on their own terms of service (Facebook’s suspension of Trump will be reviewed by its Oversight Board). But the use of social media to spread dangerous disinformation and incite American citizens to attack the very seat of their own democracy has also led to calls for further legislation to ensure the swift removal of harmful content in democracies around the world.

Given the very real harms facilitated by online extremism, the urge to clamp down on social media through laws—rather than relying on the voluntary, inconsistent and opaque terms of service and content moderation policies of private platforms—is understandable. However, when democracies respond to threats and emergencies, there is a real risk of overreach that jeopardizes basic freedoms—not least freedom of expression. For instance, Germany’s NetzDG has been “cloned” by a cabal of authoritarian states including Turkey, Russia and Venezuela. These states cynically abuse Germany’s good-faith effort at countering hate speech and use it to legitimize crackdowns on political dissent. Russian dissident Alexey Navalny criticized Twitter’s suspension of Trump as having the potential to “be exploited by the enemies of freedom of speech around the world.” But government-mandated notice and takedown regimes with very short deadlines may also result in detrimental outcomes for free speech within democracies.

Determining the lawfulness of content is a complex exercise that rests on careful, context-specific analysis. Under Article 19 of the U.N.’s International Covenant on Civil and Political Rights (ICCPR), restrictions on freedom of expression must comply with strict requirements of legality, proportionality, necessity and legitimacy. These requirements make the individual assessment of content difficult to reconcile with legally sanctioned obligations to process complaints in a matter of hours or days.

In June 2020, France’s Constitutional Council addressed similar concerns when it declared unconstitutional several provisions of the Avia law that required the removal of unlawful content (including terrorism and hate speech) within one to 24 hours. Among other things, the council held that the platforms’ obligation to remove unlawful content “is not subject to prior judicial intervention, nor is it subject to any other condition. It is therefore up to the operator to examine all the content reported to it, however much content there may be, in order to avoid the risk of incurring penalties under criminal law.”

The council also stressed that it was:

up to the operator to examine the reported content in the light of all these offences, even though the constituent elements of some of them may present a legal technicality or, in the case of press offences, in particular, may require assessment in the light of the context in which the content at issue was formulated or disseminated.

In relation to the 24-hour takedown limit, the council concluded that “given the difficulties involved in establishing that the reported content is manifestly unlawful in nature … and the risk of numerous notifications that may turn out to be unfounded, such a time limit is extremely short.” In sum, the council found that the Avia law restricted the exercise of freedom of expression in a manner that was not necessary, appropriate and proportionate.

Likewise, in his comment on the NetzDG, then-U.N. Special Rapporteur on freedom of opinion and expression David Kaye warned:

The short deadlines, coupled with the afore-mentioned severe penalties, could lead social networks to over-regulate expression—in particular, to delete legitimate expression, not susceptible to restriction under human rights law, as a precaution to avoid penalties. Such precautionary censorship would interfere with the right to seek, receive and impart information of all kinds on the internet.

The tension between the legitimate political aim of removing unlawful content and the protection of online freedom of expression is a thorny issue with no clear viable equilibrium. To shed further light on how to reconcile the competing interests, Justitia’s Future of Free Speech Project has made a preliminary attempt to assess the duration of national legal proceedings in hate speech cases in selected Council of Europe member states. The length of domestic criminal proceedings is then compared with the time limits for government-mandated removals of illegal hate speech under laws such as the NetzDG. Using freedom of information requests, the report studies the length of criminal hate speech proceedings in five member states of the Council of Europe: Austria, Denmark, France, Germany and the United Kingdom. The project is based on these jurisdictions as they have passed or are considering stringent intermediary liability legislation to tackle hate speech and other unlawful content. Due to the paucity of data from the selected countries, the project also studied all hate speech cases from the European Court of Human Rights (ECHR) and extracted the relevant dates and time periods from the beginning of the proceeding until its resolution at first instance.

The nature of the available data used in the report does not allow direct and exact comparisons between the different jurisdictions studied. Allowing for this shortcoming, the survey found that domestic legal authorities took significantly longer than the time mandated for social media platforms to answer the question of the content’s lawfulness:

  • Austrian authorities took 1,273.5 days on average to reach their decision, starting from the day of the alleged offense.
  • Danish authorities took 601 days from the date of complaint until the conclusion of the trial at first instance (as per data released by national authorities for cases between 2016 and 2019) and 1,341 days on average (as per data extracted from the two ECHR judgments from other periods).
  • French authorities took 420.91 days on average.
  • German authorities took 678.8 days on average.
  • United Kingdom authorities took 35.01 days from the date of first hearing in court to the conclusion of the trial at first instance (according to data released by national authorities for cases between 2016 and 2019) and 393 days from the date of the alleged offense (according to data extracted from the sole hate speech case from the United Kingdom that was decided by the ECHR).
  • Overall, data extracted from all ECHR hate speech cases reveals that domestic legal authorities took 778.47 days on average from the date of the alleged offending speech until the conclusion of the trial at first instance.

There are crucial differences between criminal proceedings and private content moderation. The former involves the threat of criminal sanctions, including the risk of prison; the latter merely results in the removal of content or, at worst, the deletion of user accounts. Moreover, when restricting freedom of expression, states must follow time-consuming criminal procedure and respect legally binding human rights standards. At the same time, private platforms are generally free to adopt their own terms of service and content moderation practices that are less protective of freedom of expression and due process than what follows under international human rights law.

However, when governments impose intermediary liability on private platforms through laws prescribing punishments for nonremoval, platforms are essentially required to assess the legality of user content as if they were national authorities. When private platforms are obliged to remove illegal user content, the resulting content moderation ceases to reflect voluntarily adopted terms of service, and the relevant private platforms become de facto enforcers of national criminal law—but without being bound by the human rights standards that would normally protect users’ freedom of expression.

While recognizing the differences between national criminal law and procedure and private content moderation, it is relevant to assess how the time limits prescribed for private platforms by national governments compare to the length of domestic criminal proceedings in hate speech cases. Large discrepancies may suggest that very short notice and takedown time limits for private platforms result in systemic “collateral damage” to online freedom of expression, as determined by the French Constitutional Council and noted by David Kaye. Platforms may be incentivized to err on the side of removal rather than shielding the speech of their users against censorious governments. Platforms may respond by developing less speech-protective terms of service and more aggressive content moderation enforcement mechanisms that are geared toward limiting the risk of liability rather than giving voice to users. Indeed, since the adoption of the NetzDG, platforms such as Facebook have expanded the definition of hate speech and dramatically increased the quantity of deleted content.

Justitia’s findings demonstrate that the expectation that tens of thousands of complex hate speech complaints will be processed within hours or days—while trying to uphold due process and freedom of expression—may be unrealistic at best. At worst, this could entail systemic “collateral damage” to the online ecosystem of information and opinion. These findings support the conclusion of the French Constitutional Council in the Avia case, to the effect that government-mandated notice and takedown regimes prescribing very short time limits are incompatible with the meaningful exercise and protection of freedom of expression.

Jacob Mchangama is the executive director of Justitia and the Future of Free Speech project. He is the author of the forthcoming book “Free Speech: A History from Socrates to Social Media” (2022).
