
European Commission Communication on Disinformation Eschews Regulation. For Now.

Evelyn Douek
Wednesday, May 2, 2018, 7:00 AM




Major technology platforms are having a rollicking 2018. In the past week, Facebook alone has been grilled by a U.K. parliamentary committee investigating fake news; accused of censoring conservative political views at a U.S. House Judiciary Committee hearing featuring internet personalities Diamond and Silk; and made the focus of a widely shared New York Times article accusing it of amplifying ethnic violence in Sri Lanka.

Compared with these fireworks, the European Commission’s release on April 26 of its communication on “Tackling online disinformation: a European approach” might appear somewhat dull. The document presents a vague and uncontroversial plan of action, emphasizing societal resilience and multi-stakeholder cooperation to combat disinformation. Nevertheless, it is likely to be far more consequential in shaping the ecosystem of online speech than circus-like congressional hearings. The commission largely places responsibility on social media platforms to deal with the supply of disinformation, and on member states to support media pluralism and media literacy. But by threatening future EU regulation if it deems the platforms’ response inadequate, the commission creates dynamics that are likely to be harmful to freedom of speech.

To regulate, or not to regulate?

The commission’s communication follows a June 2017 resolution of the European Parliament calling on the commission to “analyse in depth the current situation and legal framework with regard to fake news, and to verify the possibility of legislative intervention to limit the dissemination and spreading of fake content.” Despite speculation that the commission might recommend legislative intervention, the communication eschews such proposals for now, instead setting down a self-regulation model that calls on platforms to “decisively step up their efforts” to tackle online disinformation. How they should do this will be detailed in an “ambitious Code of Practice” to guide platforms’ behavior, which is to be published by July 2018 by a multi-stakeholder forum convened by the commission and is expected to produce “measurable effects by October 2018.” This timeline continues the aggressive pace of the commission’s work on disinformation—no doubt driven by the impending 2019 European Parliament elections.

The self-regulatory model embraces the multi-dimensional approach recommended by the independent group of 39 experts convened by the commission earlier in the year to put forward strategies to counter disinformation. In their separate report, delivered to the commission on March 12, the expert group recommended a “soft power” approach and cautioned against simplistic responses such as regulation, which they wrote can be a “blunt and risky instrument.”

The commission itself, however, does not rule out regulation altogether. The communication expressly leaves this possibility open if the results of the Code of Practice “prove unsatisfactory.” The threat is very thinly veiled. The commission’s website about the communication reads:

What will happen to online platforms and social networks that will not follow the suggested Code of Practice?

The Commission calls upon platforms to decisively step up their efforts to tackle online disinformation.... Should the results prove unsatisfactory, the Commission may propose further actions, including actions of a regulatory nature.

Why does the Commission think that self-regulation for online platforms is the right approach to tackle the issue?

[S]elf-regulation is considered the most appropriate way for online platforms to implement swift action to tackle this problem, in comparison to a regulatory approach that would take a long time to be prepared and implemented and might not cover all actors. … Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms. Such actions should in any case strictly respect freedom of expression.

These threats create the dynamic that Danielle Citron has aptly termed “censorship creep.” By threatening to impose regulation (and its attendant costs) on technology companies, the EU can pressure companies to remove more content “voluntarily.” But this form of coercion means that the changes are made without the transparency and accountability that formal regulation would require.

Companies will likely be more responsive to these threats from European Union regulators than to similar threats from the U.S. Congress, because the EU has shown itself willing and able to impose regulation, whereas the First Amendment and a relatively absolutist American free-speech culture make restrictions on speech especially unlikely. As it did previously with hateful and extremist material, the EU is now applying extralegal pressure to push platforms to adopt policies that conform to the speech norms it sees as desirable.

This effect could be even more pernicious in the context of “disinformation,” which can be especially difficult and politically contentious to define and will include content that is often not itself illegal. According to the communication, disinformation is “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm.” It does not include reporting errors, satire and parody, or clearly identified partisan news and commentary. The communication offers no guidance on how to distinguish between misleading information and partisan commentary. Does the carve-out for “clearly identified” partisan commentary mean politicians can make misleading claims, but citizens cannot? The commission acknowledges the need to “strictly respect freedom of expression,” but this only raises the question of how to strike the appropriate balance.

The EU has already had experience with the fraught nature of these judgments through its East StratCom Task Force, an under-resourced team set up in 2015 to counter Russian propaganda. The task force’s classifications of certain articles as disinformation have caused controversy: In its brief three years of existence, it has been sued by three Dutch news outlets whose articles it cited as promoting propaganda, forced to back down from classifying certain articles as attempted Kremlin influence, censured by the Dutch Parliament, and made the subject of complaints to the EU Ombudsman. (It is worth noting that expanded collaboration with the task force is another measure set out in the communication for dealing with disinformation going forward.)

In the face of definitional uncertainty, platforms will likely err on the side of caution and over-censor content to try to stave off the threat of regulation. A clear and comprehensive Code of Practice may mitigate these concerns. Whether it will is yet to be seen, but we do not have to wait long: the commission has given the multi-stakeholder group less than three months to develop a document that resolves these tensions. The code will also commit the platforms to a wide range of goals set out by the commission, including greater scrutiny of advertising placements, increased transparency about sponsored content, more effective removal of fake accounts, development of indicators of the trustworthiness of content sources, clear labeling of bots, and greater access to data for fact-checkers and academics.

It will be especially interesting to watch how platforms respond to the commission’s effort to commit them to providing “detailed information on the behaviour of algorithms that prioritise the display of content.” Regulators have long been seeking this information, which the companies keep as closely held trade secrets. The commission also wants platforms to offer users “exposure to different political views,” though it is unclear how it will evaluate success.

A comprehensive approach

For all these criticisms, one of the communication’s strengths is its adoption of a whole-of-society, multidimensional approach to tackling disinformation. The communication acknowledges that “[t]he impact of disinformation differs from one society to another, depending on education levels, democratic culture, trust in institutions, the inclusiveness of electoral systems, the role of money in political processes, and social and economic inequalities.” It therefore sets out its support for a range of steps to foster societal resilience and mitigate risk factors, including:

  • The creation of an independent European network of fact-checkers and a secure online platform on disinformation to support research and collaboration;
  • The creation of voluntary systems to facilitate verification of news content;
  • Enhancing media literacy and educational initiatives;
  • General promotion of quality journalism; and
  • Development of strategic communications responses.

These measures reflect the communication’s underlying premise that only a comprehensive strategy can deal with the complex problem of disinformation. However, the details of how they will be achieved, and in particular the level of funding commitment, are left unspecified. The communication’s generally broad-brush tone is exemplified by its concluding call “on all relevant players to significantly step up their efforts to address the problem adequately.”

Global patchwork

The communication emphasizes that the “cross-border dimension of online disinformation makes a European approach necessary,” but confines to a footnote its discussion of the measures member states are taking or considering independently. Germany has enacted a new law requiring platforms to quickly take down illegal content or face steep fines; France is considering legislation empowering authorities to remove fake news during election campaigns; Italy has proposed various guidelines and fake news reporting initiatives.

The communication is, therefore, unlikely to do much to stem the potential patchwork of regulatory approaches to disinformation that is developing. It’s unclear why the commission did not more explicitly comment on individual countries’ approaches. It might be an effort to give states a margin of appreciation in dealing with disinformation within their borders, keeping in mind the commission’s acknowledgment that dynamics vary greatly between societies. However, there is also a risk that more speech-restrictive jurisdictions end up exporting their standards throughout the EU (and perhaps beyond) as the platforms themselves seek a uniformity of approach in different markets. Much hinges on the Code of Practice, how it approaches difficult freedom of expression issues, and whether the “voluntary” initiatives it requires provide transparency and reassurance to regulators with their fingers on the trigger.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
