The EU’s Proposal on CSAM Is a Dangerous Misfire

Susan Landau
Thursday, June 23, 2022, 9:01 AM

An EU proposal on combating child sexual abuse material online relies on technology not yet invented and, even worse, would create significant national security risks.

In his recent post, Robert Gorwa lays out how the Directorate-General for Migration and Home Affairs, a seemingly odd part of the European Commission to be leading such an effort, came to put forward a proposal on combating child sexual abuse material (CSAM). His trenchant analysis, including a description of U.S. actors in this effort, is well worth reading. Here, I pick up where he left off, focusing on the technical issues implicated in the proposal.

The EU proposal would undo 20 years of progress in securing communications while relying on a set of technologies unlikely to achieve its stated goals. Even worse, the solutions it proposes for handling CSAM would create national security risks: they would weaken end-to-end encryption, the best tool available for securing communications, and they define a mission without the technology to accomplish it.

The Council of Europe reports that one in five children is a victim of sexual abuse, while the U.S. Centers for Disease Control and Prevention states that one in four girls and one in 13 boys experience such abuse. This horrific crime causes long-term damage to many. With the amount of online CSAM increasing rapidly and current investigation methods costly, governments have felt compelled to do something. Several European nations had been drafting national rules to combat CSAM, and to prevent fragmenting the European digital market, the EU proposed a law providing a unified approach. Concerns about the proposal are running quite high.

The proposal recommends the creation of a European center to prevent and counter CSAM. This center would detect and remove three types of content: photos of children being sexually abused; real-time videos of such abuse; and “grooming” activity, in which children are solicited into providing sexual images or participating in sexual activities whose images are then shared. The proposal sets out that providers of hosting and communications services will be required to detect this material in their systems, remove it, and inform authorities. The center “will create, maintain and operate databases of indicators” of CSAM. More critically, the center would be responsible for ensuring that providers carry out their risk assessment and mitigation obligations.

The proposal presents five possible options for how such a European center might work:

  • Option A would have a European center that provides practical measures to combat CSAM; industry adherence would be voluntary.
  • Option B involves an “explicit legal basis for voluntary detection of online child sexual abuse, followed by mandatory reporting and removal,” setting up the center as a decentralized EU agency.
  • Option C would require providers to recognize known CSAM on their systems and remove it.
  • Option D would require providers to detect and remove not only known CSAM but also previously unseen CSAM.
  • Option E would require providers not only to detect and remove known and new forms of CSAM but also to detect grooming.

All options would rely on the center for the databases of known CSAM. The center would also act in a law enforcement role by aiding national authorities—police—to ensure compliance with the regulations. The organization would be housed alongside Europol, which would be its closest partner. Although in theory the center would be independent of Europol, in practice the proposed center’s functioning would be highly dependent on the police organization.

That’s the formal structure. Let’s take a look at the technology that would actually be employed.

The proposal says that Option A “would consist of non-legislative, practical measures to enhance prevention, detection and reporting of online child sexual abuse, and assistance to victims,” while Option B would include “an explicit legal basis for voluntary detection of online child sexual abuse, followed by mandatory reporting and removal.” But having listed these two options, the proposal summarily rejects them both, saying that voluntary actions have been insufficient for combating CSAM. The proposal instead presses for legal obligations on providers for detecting CSAM on their systems (Options C, D, and E).

Option C requires providers to recognize and remove any images in their systems matching known CSAM images. The current technique for recognizing photos of known CSAM is called “perceptual hashing,” a method that can recognize an image even after small changes, such as resizing or conversion to grayscale. Microsoft’s PhotoDNA, Facebook’s PDQ, and Apple’s NeuralHash are all examples of perceptual hashes.
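
To make the idea concrete, here is a minimal sketch of one very simple perceptual hash, an “average hash,” written in Python with the Pillow imaging library. It is far cruder than PhotoDNA, PDQ, or NeuralHash, and the function names and matching threshold below are illustrative rather than drawn from any deployed system, but it shows the two essential properties: small edits barely move the hash, and matching is done by counting differing bits rather than requiring exact equality.

```python
# A toy "average hash" (aHash): far cruder than PhotoDNA, PDQ, or NeuralHash,
# but it shows why small edits such as resizing or graying barely change the
# result, and why matching tolerates a few differing bits.
from PIL import Image

def average_hash(img: Image.Image, hash_size: int = 8) -> int:
    # Shrink to a tiny grayscale thumbnail; fine detail, color, and exact
    # dimensions are discarded, which is what makes the hash "perceptual."
    small = img.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: is this region brighter than the image average?
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

def is_match(h1: int, h2: int, threshold: int = 5) -> bool:
    # Perceptual hashes are compared by how many bits differ, not by equality.
    # The threshold here is an arbitrary illustrative choice.
    return hamming_distance(h1, h2) <= threshold

# Example: a resized copy should still match the original.
# original = Image.open("photo.jpg")
# shrunk = original.resize((original.width // 2, original.height // 2))
# assert is_match(average_hash(original), average_hash(shrunk))
```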

While this may work with static images, even a short video contains far more data than a photo. Directly computing a perceptual hash of every frame and matching it against every frame of known CSAM videos would simply take too long. But it turns out that video frames typically change very little from one to the next, so sampling, say, every 30th frame is one way to cut down on the amount of work. An additional speed-up involves splitting the video into many small segments and computing the perceptual hashes of the pieces. These techniques provide a fast way to compare a new video with one known to contain CSAM, and they are exactly the techniques used, for example, for the immediate removal of terrorists’ real-time filming of their attacks. Facebook’s TMK+PDQF uses such a technique to recognize matches with known offensive content such as terrorist attacks or CSAM.
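
The sketch below, which reuses the toy average_hash and is_match helpers above and assumes the OpenCV library for decoding video, shows the frame-sampling idea in its simplest form. Production systems such as TMK+PDQF compute richer temporal features, but the basic shape is the same: hash a sparse sample of frames, then compare the resulting sequences.

```python
# A sketch of frame sampling for video matching, reusing the toy average_hash
# and is_match helpers above. Assumes OpenCV (cv2) for decoding; TMK+PDQF and
# similar production systems use richer temporal features than this.
import cv2
from PIL import Image

def sample_video_hashes(path: str, every_n: int = 30) -> list[int]:
    """Hash roughly one frame per second of 30 fps video."""
    hashes = []
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n == 0:
            # OpenCV decodes frames as BGR arrays; convert for Pillow.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(average_hash(Image.fromarray(rgb)))
        frame_idx += 1
    cap.release()
    return hashes

def video_similarity(hashes_a: list[int], hashes_b: list[int]) -> float:
    """Fraction of sampled frames of video A that match some sampled frame of B."""
    if not hashes_a:
        return 0.0
    matched = sum(1 for ha in hashes_a if any(is_match(ha, hb) for hb in hashes_b))
    return matched / len(hashes_a)
```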

Perceptual hashing would appear to be a solution for Option C. But there’s a problem: Perceptual hashes can be fooled. The first issue is false negatives, that is, the system passing a photo or video when, in fact, there is CSAM in it. This can easily occur. It is not hard to manipulate a CSAM image so that its perceptual hash differs markedly from the hash of the known image while the offensive content remains. Furthermore, a 2019 study reported that 84 percent of CSAM images and 91 percent of CSAM videos are reported only once. We don’t know if this reporting occurs early in their appearance or late; either way, a high percentage of CSAM images and videos cannot be recognized by perceptual hashing techniques because there is no a priori known image to match against. In other words, perceptual hashing cannot be a complete solution to the CSAM problem.
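
To see how little effort evasion can take, consider the toy hash above. The sketch below is purely illustrative: deployed hashes are designed to resist some of these particular transforms, and published evasion attacks on real systems instead use small optimized perturbations, but the effect the attacker wants (a hash far from the flagged one, hence a false negative) is the same.

```python
# Illustrative only: content-preserving edits that push the toy average_hash
# far from the original's hash, i.e., manufacture a false negative. Deployed
# hashes resist some of these specific transforms; published attacks on real
# systems instead use small optimized perturbations, but the attacker's goal
# (a large Hamming distance from the flagged hash) is the same.
# Reuses average_hash and hamming_distance from the sketch above.
from PIL import Image, ImageOps

def evasion_demo(path: str) -> None:
    original = Image.open(path)
    variants = {
        "mirrored": ImageOps.mirror(original),        # horizontal flip
        "rotated": original.rotate(10, expand=True),   # slight rotation
        "bordered": ImageOps.expand(original, border=original.width // 10),
    }
    h0 = average_hash(original)
    for name, img in variants.items():
        bits = hamming_distance(h0, average_hash(img))
        print(f"{name}: {bits} of 64 bits differ")
```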

Perceptual hashing has even bigger problems than that. The International Telecommunication Union claims that PhotoDNA has a likely false-positive rate of 1 in 10 billion, a low enough rate that it seems plausible to involve human operators to check flagged data. But as my colleagues and I observed, perceptual hashing is subject to “adversarial attacks,” deliberate malicious efforts to fool the algorithm. It took just two weeks for researchers to reverse engineer Apple’s NeuralHash algorithm, which was already present in iOS 14, and collisions (innocuous images whose hashes match flagged ones) followed quickly. Such false positives are costly to administer. Because the false positives are relatively easy to produce, scale is very likely to overwhelm any industry system relying on perceptual hashing. Thus Option C currently does not have a working solution.
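
The converse attack, manufacturing an innocuous-looking input whose hash matches a flagged one, is exactly what floods human reviewers with false positives. Against the toy hash above it is trivial, as the purely illustrative sketch below shows; against a real hash like NeuralHash, producing a collision requires optimization rather than direct construction, but the principle is the same.

```python
# Illustrative only: a trivial "second preimage" against the toy average_hash
# above. Given any mixed-bit 64-bit target hash, this builds an image of
# meaningless bright and dark blocks that hashes to exactly that value. Real
# collision attacks, such as those demonstrated against NeuralHash, hide the
# perturbation inside a natural-looking image via optimization, but the end
# result is the same: an innocuous input that the system flags.
from PIL import Image

def forge_image_with_hash(target_hash: int, hash_size: int = 8) -> Image.Image:
    values = []
    for i in range(hash_size * hash_size - 1, -1, -1):  # most significant bit first
        bit = (target_hash >> i) & 1
        values.append(200 if bit else 50)  # bright pixel -> bit 1, dark pixel -> bit 0
    forged = Image.new("L", (hash_size, hash_size))
    forged.putdata(values)
    return forged

# Example:
# target = average_hash(Image.open("flagged.jpg"))
# assert average_hash(forge_image_with_hash(target)) == target
```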

That makes the existence of Option D, requiring providers to recognize not just known CSAM but new CSAM, somewhat confusing. Given the problems of recognizing known CSAM, exactly what did the EU proposers have in mind for recognizing new instances of CSAM? It’s not even clear what previously unknown CSAM means from a legal or technical viewpoint. Is a photo of two naked toddlers splashing each other in the bath CSAM or a photo of someone’s adorable kids? 

Industry money is currently betting on machine learning (ML) methods for recognizing many kinds of content in photos, videos, and other communications, including CSAM. ML has an advantage over perceptual hashing: by “training” on many examples of CSAM, an ML system can learn to spot new instances without having “seen” the same example earlier. The problem, though, is that it is remarkably easy to fool an ML system. False positives and false negatives are real problems for machine learning. Thus, Option D also appears to be a proposal without a technological solution, at least if one is seeking currently working systems.
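
The textbook illustration of this fragility is the “fast gradient sign method,” in which an attacker uses the classifier’s own gradients to compute a perturbation too small for a person to notice but large enough to change the model’s answer. The sketch below, in PyTorch, is generic and assumes some differentiable image classifier called model; it is not a description of any provider’s CSAM detector.

```python
# A minimal sketch of the fast gradient sign method (FGSM), the textbook way
# to fool an image classifier with a perturbation too small for a person to
# notice. Assumes PyTorch and some differentiable classifier `model`; purely
# illustrative, not any provider's actual CSAM detector.
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module,
                 images: torch.Tensor,   # batch of images, values in [0, 1]
                 labels: torch.Tensor,   # labels the attacker wants the model to drop
                 epsilon: float = 0.01) -> torch.Tensor:
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge every pixel slightly in the direction that most increases the loss,
    # making the model less likely to assign the original label.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```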

Recognizing grooming, which is the additional requirement of Option E, is even more difficult. Experts describe grooming as a process that moves through a number of known stages: first finding a victim, then establishing trust, then moving on to sexual contact. But the words and behaviors that grooming incorporates vary across cases, so there is no known grooming photo or video with which to match. While there is research on how to recognize grooming interactions, there are no working tools for doing so. No one knows how to handle this problem at scale.

This lack of current technology leads to the fundamental flaw of the EU proposal. The proposal sets up a bureaucracy requiring industry solutions to a problem that no one (not service or hosting providers, not governments) has the technology to solve or knows how to operate at this scale.

A 2019 Carnegie Endowment for International Peace report on encryption policy in which I participated observed that “[p]roposals should be tested multiple times—including at strategic levels (for example, do they establish high-level principles and requirements to weed out incomplete or unfeasible proposals?) and at technical levels (for example, what are the technical risks of the specific implementation?).” There’s a reason for this recommendation. Proposals that look good on paper run into unexpected problems, both policy and technical.

Positing that a technical solution will follow the creation of the European center makes little sense. It is roughly akin to sending a mission to colonize Jupiter and send back precious minerals without the ability to survive on the planet or a spacecraft able to escape Jupiter’s gravity. “We’ll figure it out while the spaceship is in flight to Jupiter” is not a solution.

Ignoring the Carnegie principle on testing means the EU proposal is operating on hope, not reality. That’s foolish at best and dangerous at worst. If enacted, the EU proposal would create security holes. The proposal acknowledges the importance of end-to-end encryption to security, stating that it is an important tool to guarantee the security and confidentiality of the communications of users, including those of children, yet it assumes that providers can meet the detection obligations all the same.

This is nonsense. Options C and D require providers to detect, respectively, known and new CSAM. No one knows how to do this efficaciously on encrypted data; indeed, I have just described how hard it is to do so at scale on unencrypted data. The legal obligations put forth in Options C-E are effectively impossible to satisfy if the providers are working with encrypted data. In other words, the only way to satisfy Options C-E is to do away with end-to-end encryption.

The arguments for the importance of end-to-end encryption in securing society have been made repeatedly by many former senior members of the law enforcement and intelligence communities and on Lawfare. Consider, for example, former FBI General Counsel Jim Baker’s October 2019 post, in which he wrote:

All public safety officials should think of protecting the cybersecurity of the United States as an essential part of their core mission to protect the American people and uphold the Constitution. ... In light of the serious nature of this profound and overarching threat, and in order to execute fully their responsibility to protect the nation from catastrophic attack and ensure the continuing operation of basic societal institutions, public safety officials should embrace encryption.

A single word describes how much more dangerous the world has become since Baker wrote that article: Ukraine. If we ever left it, we have now returned to a polarized world with grave dangers from two major world powers, and arguments for the importance of embracing end-to-end encryption are far stronger now.

There are more problems with the EU proposal than the failure to describe what technology might be used. There is no discussion of how effective these solutions can actually be. This can be the Achilles’ heel of ML solutions, at least of ML solutions under attack, which these likely would be. In a recent white paper, “Bugs in Our Pockets,” my co-authors and I describe some of the known failures of ML under adversarial attacks that fool the systems into ignoring CSAM (false negatives) or labeling innocuous material as CSAM (false positives). If the EU proposal were to go forward, nation-states intent on harming service providers and hosts would find vulnerabilities in these scanning systems and exploit them to do serious damage to communication systems.

Many surveillance systems founder on efficacy. In the 1990s, FBI Director Louis Freeh argued against end-to-end encryption because of its potential to interfere with the use of wiretapping during kidnapping investigations. But at the time wiretaps were used in an average of two to three kidnapping cases a year—out of a total of 500 such cases annually. More recently, the bulk signals intelligence collection that Edward Snowden disclosed in 2013 ended because it, too, was not effective—and hadn’t been for some time.

The same problems plague the EU proposal: a lack of technical methods to accomplish effectively what the proposal claims to do, combined with a strong likelihood that the technologies at hand would fail to produce the desired effect. A government program with the potential to do serious harm to security, as this one would through its effect on end-to-end encryption, must at least be efficacious. The EU proposal fails that test.

EU human rights law requires that surveillance solutions be proportionate. The proposal argues that the recommendations are indeed proportionate, but it does so on the basis of technology that does not yet exist (“EU Centre should facilitate access to reliable detection technologies to providers”). A surveillance measure that is unlikely to accomplish its goals and that impedes fundamental privacy protections cannot be proportionate. Until and unless there is a technology with low false positives and false negatives that successfully operates at scale on encrypted communications, the claim that this proposal is proportionate is not a meaningful one.

Bureaucracies rarely die. Once such a European center is created, it will seek to fulfill its role even if it’s doomed to fail. And while the liberal democracies of the EU may believe that setting up a European center will enable resolution of the CSAM problem, the far more likely outcome is an enabling of authoritarianism in some nations of the union and a failure to effectively move forward on the issue for the rest. Options C-E would fundamentally damage communications and device security and privacy, while failing to accomplish the goals of preventing CSAM. Options A and B, while having less of a negative impact, would be an investment whose only apparent purpose is to lay the groundwork for Options C-E. Better that the center be created once proportionate technological solutions are at hand.

CSAM and its closely related issue, child sexual abuse, are appalling problems very much in need of solutions. But the EU proposal is not the answer. “Do something” works only if there is value in the “something” and not unreasonably high costs in the “doing.” The EU proposal fails on both counts. It should be abandoned.


Susan Landau is Bridge Professor in The Fletcher School and Tufts School of Engineering, Department of Computer Science, Tufts University, and is founding director of Tufts’ MS program in Cybersecurity and Public Policy. Landau has testified before Congress and briefed U.S. and European policymakers on encryption, surveillance, and cybersecurity issues.
