UK's New ‘Internet Safety Strategy’ Cracks Down on Online Danger

Evelyn Douek
Wednesday, October 18, 2017, 9:00 AM

The U.K. government released a new “Internet Safety Strategy” Green Paper last week, making clear its intention to follow through on bold campaign rhetoric promising aggressive internet regulation.

Published by The Lawfare Institute in Cooperation With Brookings

During the national election campaign earlier this year, the governing Conservative Party’s 2017 manifesto called for a “Digital Charter” that would make Britain “the safest place in the world to be online.” Shortly thereafter, following the June London Bridge and Borough Market terror attack, Prime Minister Theresa May declared “enough is enough”; she promised to work with “allied democratic governments to reach international agreements that regulate cyberspace” and to “do everything we can at home” to prevent the spread of extremism online.

Last week’s Green Paper is the first part of the Digital Charter. It is unapologetically ambitious in its proposals to address the problems created by new technologies. It repeats promises made in the manifesto and calls for a social media “code of practice,” a reporting mechanism for abuse, a “social media levy” and an annual internet safety report. However, the paper is short on specifics about how the government will implement these initiatives. And the effectiveness of the strategy will be undermined if internet companies (most of which are based in the U.S.) refuse to cooperate with these extensive but voluntary schemes.

Forcing Social Media Companies to Take Responsibility

The paper states that one of the underpinning principles of the government’s approach to internet safety is that “[t]echnology companies have a responsibility to their users.” This responsibility is the rationale for promulgating a social media code of practice, which the government is required to create under Britain’s recent Digital Economy Act 2017. Internet companies will also be asked to shoulder this responsibility in a more tangible way, through the payment of the voluntary social media levy. The government will use the proceeds to address issues such as online abuse and offensive material.

Exactly what the code of practice, reporting obligations or levy payments entail remains unclear. The paper makes bold proposals but is short on detail. Most of its 62 pages describe the broad array and large scale of potential harms created by the internet. While much of the paper—and associated media coverage—focuses on issues such as child welfare, including exposure to internet pornography, cyberbullying and harassment, it also hints that the government has a wider definition of “online harms” in its sights. Sections on fake news, “online misogyny” and “trolling” suggest an expansive view of the content that the government will ask platforms to monitor.

Platforms or Publishers?

Culture Secretary Karen Bradley also implied at the paper’s announcement that the government might reclassify social media companies as publishers instead of platforms, signaling the potential for even greater content regulation. “Legally they are mere conduits but we are looking at their role and their responsibilities and we are looking at what their status should be. They are not legally publishers at this stage but we are looking at these issues,” she told BBC Radio. That statement followed comments earlier in the week from Patricia Hodgson, chairwoman of the media regulator Ofcom, in which she expressed her belief that these companies are publishers.

Nevertheless, the paper emphasizes consultation and collaboration with companies to enhance online safety. For example, the paper insists that the proposed levy is not a “tax,” but merely a way of helping “businesses grow in a sustainable way while serving the wider public good.”

Social media companies are unlikely to appreciate the help. Experts are reportedly skeptical about whether these mainly U.S.-based companies will cooperate with onerous, voluntary U.K. requirements. If they don’t, Bradley has not ruled out making the requirements mandatory through legislation. That may put the government in a corner: While Bradley insisted that the government adopted the voluntary approach as a quicker and more effective way of getting results, the unexpected political reality of the Conservative Party’s loss of a parliamentary majority at the last election might have limited its options.

Part of a General Trend

The U.K. proposals may reflect a general shift in public and regulatory opinion against the internet giants. This month, a German law took effect that allows large fines against social media companies that do not remove offensive content quickly. Demands for greater regulation in the United States are also gathering steam as social media firms continue to be called to appear before congressional panels investigating Russian interference in the 2016 U.S. election.

In the U.K., the Green Paper calls for a consultation process that runs until December 7, 2017. The government aims to publish its code of practice next year. Internet users everywhere should watch developments closely—regulators around the world definitely will.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.