
What’s Going on With France’s Online Hate Speech Law?

Jacob Schulz
Tuesday, June 23, 2020, 10:20 AM

France’s constitutional court struck down the main components of a new online hate speech law. What was in the original bill, and what’s left after the ruling?

The facade of the Assemblée Nationale, France's lower house of parliament, Paris, 2007 (Graham Chandler/https://flic.kr/p/L3DQ4/CC BY-NC-ND 2.0/https://creativecommons.org/licenses/by-nc-nd/2.0/)


In mid-May, France adopted a new law aimed at overhauling the country’s online speech landscape. The French constitutional court hollowed out the law last week, rejecting the overwhelming majority of it as unconstitutional.

On May 13, the French parliament passed into law the “Loi Avia,” named after its main sponsor, Laetitia Avia, a member of President Emmanuel Macron’s centrist party “La République en Marche.” The bill aimed to combat various forms of online hate speech, terrorist speech and child pornography. Originally set to go into effect on July 1, it would have mandated that platforms take down certain types of “manifestly” illegal content within 24 hours of a user flagging it. It sought to hold platforms’ feet to the fire: Companies that failed to comply would have left themselves at the mercy of steep criminal and administrative fines.

Sound familiar? The law bore a strong resemblance to the NetzDG, a June 2017 German law that puts platforms on the hook for significant fines if they don’t remove “manifestly illicit” content within 24 hours of a user reporting it.

The Avia bill quickly attracted fierce detractors from across the political spectrum. Some objected to its timing. On May 2, the French government extended a pandemic-induced “state of health emergency” until July 24, and some critics noted that there’s something inherently fraught about passing a delicate speech-restrictive law during an emergency period. More importantly, many critics also objected to the substance of the law. The center-right party “Les Républicains” seized on the legislation as an “attack on the freedom of speech.” For her part, far-right torchbearer Marine Le Pen decried the measure as “oppressive” (“une loi liberticide”). Alexis Corbière, a parliamentarian from the far-left “La France Insoumise” party, referred to the law using the same derisive moniker. Various industry groups, free speech advocates and human rights organizations expressed similar objections.

And then on May 18, a group of 60 senators from “Les Républicains” submitted a pre-promulgation challenge to the law before France’s Constitutional Council, the court that reviews legislation for compliance with France’s constitution. French legal experts I talked to seemed unsure what the court would do. The Constitutional Council had various judicial scalpels at its disposal: The court can strike down particular paragraphs of laws, and it can also add interpretive language directly to a bill using so-called “réserves d’interprétation.”

But the court didn’t use any of its scalpels. It opted for a sledgehammer instead—or maybe a guillotine. As Bruno Retailleau, a leader of “Les Républicains,” declared: The court “totally decapitated the law.” On June 18, it struck down the entirety of six of the bill’s provisions and nixed parts of five others. A law that once threatened to reorient the calculus around content moderation in France now amounts to only a couple of very modest reforms.

What Was in the Original Law?

The original law imposed a number of onerous new obligations on platforms.

Most of the reforms came in the form of changes or additions to Article 6 of France’s 2004 Digital Economy Law. That law mirrors much of the 2000 European Union E-commerce directive (2000/31/EC) and cemented the French legal regime governing digital activity. Notably, it established liability protections for “any person carrying out the activity of transmission of content on a telecommunications network or provision of access to a telecommunications network.” In other words, the 2004 law was, among other things, France’s version of Section 230 of the Communications Decency Act—the 1996 U.S. law that established analogous liability protections in the United States. The 2004 French law also conferred additional regulatory authority over audiovisual communications to the “Conseil supérieur de l'audiovisuel” (CSA), the body vaguely analogous to the U.S.’s Federal Communications Commission (FCC) that was charged with managing enforcement of the Avia bill.

A 2015 decree already mandated that internet service providers (ISPs) and social media platforms remove, within 24 hours of receiving a government order, content that violated two French speech laws: Article 421-2-5, which criminalizes content promoting terrorist acts or engaging in glorification of or justification for those acts, and Article 227-23, which criminalizes child pornography. The new law (Article 1, 2) introduced more stringent rules for platforms and search engines, but not ISPs. If the Constitutional Council had not intervened, platforms and search engines would have had only one hour after receiving a government report of terrorist or child pornography content to take down the offending post and to notify government authorities about it. This particular provision was contentious from the start; the National Assembly added it through an amendment proposed by the government during the bill’s second reading.

An additional controversy stemmed from a new requirement (Article 1, 9) mandating that certain platforms operating in French territory take down, within 24 hours of a user’s flagging it, “manifestly” illegal speech that falls under an umbrella of criminal conduct much broader than that covered by the terrorism and child pornography statutes. Had the law gone into effect, platforms would have had 24 hours from the moment a user submitted a report to remove content that “manifestly” violated a host of speech laws, most notably France’s hate speech provisions. The hate speech law criminalizes injurious speech that targets a person or group based on their “origin, their belonging or not belonging to an ethnic group, nationality, race or religion” or for reason of their “sex, sexual orientation, gender identity, or handicap.” The same 24-hour obligation would have applied to content reported for violation of a law that criminalizes speech that promotes, glorifies, or engages in justification of sexual violence, war crimes, crimes against humanity, enslavement, or collaboration with the enemy; a law that criminalizes sexual harassment; and a law that bans pornography where it could be seen by a minor—among others. The law did not carve out any exceptions; the 24-hour rule would have applied even in the case of technical difficulties or a temporary surge in notifications.

France already had some rules governing the takedown of criminal content. The 2004 law (Article 6, I, 3) conditions platforms’ criminal liability “shield” upon their taking down “manifestly” illegal content—including criminal speech, like slander, not covered by the Avia bill—after a user reports it. The Constitutional Council later added language to the law such that platforms had to “promptly” take down violating content once they became aware of it. How prompt is promptly? Courts have never precisely clarified. In the copyright context, a 2012 court ruling intimated that five days surpassed the “promptly” threshold. But the standard remains fairly vague, particularly in the hate speech context. Per the 2004 law, if a judge rules that a company ought to forfeit its criminal liability shield, courts can hold the platform itself criminally liable for illegal third-party content on its site. But, in reality, courts have not often enforced this conditionality. The new rules sought to put more pressure on the platforms—the idea was that for many types of illegal content, platforms would have a short concrete window after a user report to take down the offending post or risk both criminal and administrative fines for their inaction.

But the court did not take kindly to this idea. The proposed system, the court cautioned, amounted to “an attack on the exercise of the freedom of expression and communication that is not necessary, appropriate, and proportional.” The court expressed particular concerns about the challenge presented to platforms by having to evaluate in such a short time period whether or not a given piece of content passes the “manifestly illegal” threshold.

And which sites would have had to follow the new rules? The law stipulated that only sites surpassing a certain threshold of monthly traffic within France (the exact figure would have been set later, but likely somewhere between 2 and 5 million unique French users per month) would have been subject to the law (Article 1, 9). The law would have covered social media platforms (Facebook, Twitter, and the like), sites such as Wikipedia or a French version of Craigslist, and search engines (websites that depend on “sorting or search engine optimization, using digital algorithms, of content offered or put online by a third party”) (Article 1, 10). Notably, the new rules never would have applied to ISPs.

What would have happened under the now-rejected rules if a user reported hate speech content and a platform didn’t comply with the takedown requirements? Noncompliance could have triggered two types of penalties: a criminal fine of 250,000 euros for individuals and up to 1,250,000 euros for corporations, and an administrative fine of up to 20 million euros or 4 percent of a firm’s annual worldwide revenue, imposed by the CSA. The heftier administrative fines would have applied if a platform received formal notice from the CSA but didn’t comply with the notice’s terms (Article 7, 9). The law also would have explicitly allowed the CSA to publicize formal notices and fines (Article 7, 10). The Senate (the upper house of parliament) had voted to remove the language about those fines, but the National Assembly ultimately overturned that change.

Once platforms removed “manifestly illicit” content, they would have had to put up a message indicating that the content had been removed (Article 1, 14). Platforms would also have had to temporarily save the removed content so that it could be used for research, investigative or prosecutorial ends (Article 1, 15).

If not for the court’s intervention, what might the actual reporting mechanism have looked like? The law stipulated that platforms must put in place, for users in French territory, “a mechanism for notification” that is uniform, accessible and easy to use (Article 2).

The law sought to mandate that platforms communicate to the users who reported content the outcome of their report and the factors that led to the platform’s decision (Article 4, 6).

The law also would have required that platforms make publicly available detailed information about the new policies. The explainers were to include details about the criminal penalties associated with posting “manifestly illicit” content, the resources available to victims, and information about the penalties for users who use the new reporting mechanism for malicious ends (Article 5). And the law required platforms to update their terms of service to reflect the new rules (Article 5, 10). Thanks to the court’s ruling, these provisions won’t make it into the final law—because the 24-hour reporting system didn’t pass judicial muster, the court stripped all of its adjunct provisions from the law as well.

In addition to putting platforms on the hook for user-generated hate speech, the law envisioned cooperation between platforms and law enforcement in targeting those who post offending speech. Platforms would have had to inform government authorities of all content violating the speech laws covered by the 24-hour takedown (Article 5, 7). This provision came on the heels of other successes by the Macron government in securing cooperation between Facebook and law enforcement authorities. In June 2019, France managed to coax Facebook into sharing with French authorities the “identification data of French users suspected of hate speech”—the first time ever that the platform handed over such information to government authorities. Ditto here: With the 24-hour rule gone, this provision is out as well.

The law also spelled out the modified relationship between the platforms and the CSA. Effectively, the law tasked the CSA with ensuring that the platforms followed the new rules (Article 7, 4). The bill detailed that the CSA would produce an annual report about how platforms applied the new rules and assess their effectiveness (Article 7, 5). The court struck down these provisions as well.

What’s Left After the Court’s Ruling?

Not much. The provisions that remain amount to incremental reforms. As Marc Rees of the French technology blog Next INpact writes, all that’s left are “some dregs.” The two most notable provisions still standing, though not nothing-burgers, don’t do anything to upend the existing online speech equilibrium.

The court okayed a provision that makes certain reforms to the criminal process for hate speech crimes. Article 10 of the bill calls for the creation of a national prosecutor’s office to oversee the criminal process for defendants suspected of engaging in hate speech, regardless of whether the offending remarks appear online or in print. This move echoes a March 2019 law that established an analogous national prosecutor’s office (the so-called PNAT, or “parquet national antiterroriste”) to deal with terror-related offenses and crimes against humanity and war crimes. Before the PNAT, France had only one national prosecutor’s office: a financial crimes office created by a December 2013 law.

Legislators also included a provision that calls for the creation of a research watchdog (“un observatoire”) that studies online hate and tracks the evolution of the types of content covered in the text of the law (Article 16). The court saw no problem with this idea.

What Next?

The Avia bill represented a much more aggressive approach by the Macron government to online speech problems than its earlier efforts. In 2018, Macron announced a collaborative project in which a group of civil servants would embed at Facebook for six months to give the government visibility into how the platform deals with hate speech and makes its content moderation decisions. And a May 2019 government-commissioned report directly criticized the Avia bill’s inspiration, the German NetzDG law. As Evelyn Douek wrote at the time, the report “proposes a model it calls ‘accountability by design,’ which seeks to capitalize on the self-regulatory approach already being used by platforms ‘by expanding and legitimizing’ that approach.” That report, in other words, sought to avoid the type of government involvement in content moderation that the new law would have mandated.

But on an international level, the French parliament’s passage of the law seemed to signal the entrenchment of a new paradigm of government regulation of online speech. The framework first found its legs in Germany and has now migrated across the Rhine to find a second home in Paris.

What now for this framework? After the Constitutional Council took an eraser to the Avia bill, will other countries refrain from attempts to emulate the NetzDG? Or, does the decision merely reflect the jurisprudential idiosyncrasies of the French court? Only time will tell, but the decision certainly pumps the brakes on whatever momentum the 24-hour report-and-takedown model had gained since the NetzDG went into force.

The author thanks Lucien Castex of Renaissance Numérique and Pierre Ciric for their helpful feedback and insights.


Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
