The Apple Client-Side Scanning System

Paul Rosenzweig
Tuesday, August 24, 2021, 8:01 AM

Apple’s efforts, though commendable, raise as many questions as they answer.


Washington, D.C.’s cyber policy summer was disrupted earlier in August by an announcement from Apple. In an effort to stem the tide of child sexual abuse material (CSAM) flooding across the internet (and it really is a flood), Apple announced a new client-side scanning (CSS) system that would scan the pictures that iPhone users upload to the cloud for CSAM and, ultimately, make reports about those uploads available to law enforcement for action. The new policy may also have been a partial response to criticism of Apple’s device encryption policies that have frustrated law enforcement.

The objective is, of course, laudable. No good actor wants to see CSAM proliferate. But Apple’s chosen method—giving itself the ability to scan the uploaded content of a user’s iPhone without the user’s consent—raises significant legal and policy questions. And these questions will not be unique to Apple—they would attend any effort to enable a CSS system in any information technology (IT) communications protocol, whether it is pictures on an iPhone or messages on Signal.

Last year, I wrote an extended analysis of these legal and policy questions, and if you want more detail than this post provides, you might go back and read that piece. My assessment then (and now) is that many of the potential technical implications of a CSS system raise difficult legal and policy questions and that many of the answers to those questions are highly dependent on technical implementation choices made. In other words, the CSS law and policy domain is a complex one where law, policy, and technological choices come together in interesting and problematic ways. My conclusion last year was that the legal and policy questions were so indeterminate that CSS was “not ready for prime time.”

Clearly, the leadership at Apple disagrees, as it has now gone forward with a CSS system. Its effort provides a useful real-world case study of CSS implementation and its challenges. My goal in this post is simple: first, to describe as clearly as I can exactly what Apple will be doing; and second, to map that implementation to the legal and policy challenges I identified to see how well or poorly Apple has addressed them.

Apple’s efforts, though commendable, raise as many questions as they answer. Those who choose to continue to use iPhones will, essentially, be taking a leap of faith on the implementation of the program. Whether or not one wishes to do so is, of course, a risk evaluation each individual user will have to make.

Apple’s New Program

Apple announced its new program through a series of public comments, including a summary on its website. The comments explicitly tied the new technology to child safety, linking its efforts exclusively to the proliferation of CSAM. At the outset, it is important to note that Apple’s newly unveiled efforts are really three distinct new technologies. Two of them (providing new tools in the Messages app to allow greater parental control, and allowing Siri to intervene and warn when CSAM material may be accessed) have no direct bearing on CSS, and I will leave them aside.

It is the third effort that raises the issues of concern. Here is how Apple describes it:

[A] new technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.

Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.

Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.

Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.

What does that mean in plain English? Apple has provided an FAQ as well as a technical summary that are intended to clarify the program.

It’s a lot to parse, but here’s a distilled version. A new program, called NeuralHash, will be released as part of iOS 15 and macOS Monterey, both of which are due out in a few months. That program (which will not be optional) will convert the photographs uploaded from a user’s iPhone or Mac to a unique hash.

A “hash” is a way of converting one set of data, like a picture, into a different unique representation, such as a string of numbers. In the past, each unique picture has created a unique hash, and so slight changes in a picture, like cropping an image, have changed the hash value. Notably, NeuralHash is reported to have the capability of “fuzzy matching,” so that small edits or cropping of an image do not change the image’s hash value—that sort of editing has, historically, been an easy way around hash-matching programs.
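To make the distinction concrete, here is a minimal sketch of a simple perceptual hash (an “average hash”) in Python. It is not Apple’s NeuralHash, which reportedly uses a neural network; it only illustrates how an image hash can be designed so that small edits barely change it, whereas a cryptographic hash changes completely. The file names in the usage comment are hypothetical.

```python
# A minimal sketch of a simple perceptual ("average") hash; this is an
# illustration of fuzzy image matching in general, not Apple's NeuralHash.
from PIL import Image  # requires the Pillow library

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image to hash_size x hash_size grayscale pixels, then set one
    bit per pixel according to whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")

# Hypothetical usage: a cropped or lightly edited copy of a photo should land
# within a few bits of the original, whereas a cryptographic hash of the two
# files (say, SHA-256) would differ completely.
# print(hamming_distance(average_hash("photo.jpg"), average_hash("photo_cropped.jpg")))
```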

Before a user uploads photos to iCloud, those image hashes will be matched on the device against a database of known CSAM hashes that is provided to Apple by organizations like the National Center for Missing & Exploited Children (NCMEC). In other words, the NCMEC hashes will also be on the user’s device—again, with no option to turn this off. The matching function will use a technique called private set intersection, which does not reveal the content of the image to Apple or alert the iPhone’s owner to a match.
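For orientation, here is a deliberately simplified sketch of that on-device flow. The private set intersection step is replaced with a plain set lookup and the voucher “encryption” with a readable placeholder, so, unlike the real protocol, this toy version exposes the match result; every name in it is hypothetical rather than Apple’s actual interface.

```python
# A highly simplified sketch of the on-device matching flow described above.
# The real system hides the match result with cryptography (private set
# intersection plus encrypted safety vouchers); this toy version only shows
# the data flow, and all names are hypothetical.
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    image_id: str
    payload: bytes  # stands in for the encrypted match data Apple describes

def perceptual_hash(image_bytes: bytes) -> int:
    """Toy stand-in for a NeuralHash-style perceptual hash."""
    return hash(image_bytes)

def make_voucher(image_id: str, image_bytes: bytes, known_hashes: set) -> SafetyVoucher:
    """A voucher accompanies every upload, match or not, so the act of
    uploading a voucher reveals nothing about any individual photo."""
    matched = perceptual_hash(image_bytes) in known_hashes
    payload = b"match" if matched else b"no-match"  # really: encrypted and secret-shared
    return SafetyVoucher(image_id, payload)

# The blinded hash database ships with the operating system; here it is a toy set.
known_hashes = {perceptual_hash(b"known-flagged-image")}
uploads = [("a.jpg", b"known-flagged-image"), ("b.jpg", b"vacation photo")]
vouchers = [make_voucher(name, data, known_hashes) for name, data in uploads]
```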

Instead of alerting the device’s owner, the alert for any positive match will be sent to Apple without initially identifying the user who is its source. Apple has promised that it will not unmask the alert (that is, deanonymize and identify who the user is) unless and until a threshold of CSAM is crossed. In a public defense of the system, Apple suggested that, as a matter of policy, that threshold would be approximately 30 images on a given phone matching known CSAM before an alert would be generated.
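The promise to unmask only past a threshold rests on the “threshold secret sharing” mentioned in Apple’s description. As a rough illustration of the underlying idea (a toy Shamir-style scheme, not Apple’s construction), the sketch below splits a key so that any 30 shares reconstruct it while 29 or fewer reveal essentially nothing; one can think of each matching image as contributing one share. The numbers and names are illustrative assumptions.

```python
# A toy Shamir secret-sharing scheme illustrating threshold unlocking; this is
# the general technique, not Apple's implementation.
import random

PRIME = 2**127 - 1  # a large prime field for the toy example

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` so that any `threshold` of `count` shares recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

key = 123456789  # stands in for the key protecting the vouchers' contents
shares = make_shares(key, threshold=30, count=100)  # e.g., one share per matching image
print(recover(shares[:30]) == key)  # True: 30 shares unlock the key
print(recover(shares[:29]) == key)  # False (with overwhelming probability): 29 do not
```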

If the number of matches passes the specified threshold, Apple will then decrypt the images and have a human manually review them. If the manual review confirms that the material is CSAM, Apple can then take steps such as disabling an account and reporting the imagery to NCMEC, which in turn passes it on to law enforcement. In its public announcement, Apple says there is less than a one in one trillion chance of a false positive, but the company is nonetheless providing an appeals process for users who think their material has been incorrectly characterized.
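Apple has not published the per-image error rate behind the one-in-one-trillion figure, but a back-of-envelope binomial calculation shows how a high threshold compounds a small per-image error into an astronomically small account-level one. The per-image false-match probability and photo count below are my assumptions, used only for illustration.

```python
# Back-of-envelope arithmetic: how a 30-match threshold compounds a small
# per-image false-match rate into a vanishingly small account-level rate.
# The inputs are assumed for illustration; Apple has not published them.
from math import comb

p = 1e-6        # assumed chance that one innocent photo falsely matches a known hash
n = 10_000      # assumed number of photos the account uploads in a year
threshold = 30  # matches required before Apple can read the vouchers

# Probability of at least `threshold` false matches among `n` independent photos.
# Terms beyond the first few are negligible, so the sum is truncated.
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
           for k in range(threshold, threshold + 20))
print(f"Chance of falsely crossing the threshold: about {prob:.1e}")
```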

The Pros and Cons of Apple’s Approach

The reaction to Apple’s announcement was swift … and wildly divided. Privacy advocates immediately raised concerns about the implementation. While supportive of the overall goal, many saw significant privacy risks. The executive director of NCMEC characterized these objections as the “screeching voices of the minority.” Meanwhile, some security experts suggested that the limited nature of the scanning contemplated would pose few real privacy risks.

My own assessment is more tentative. Some aspects of what Apple proposes to do are highly commendable. Others are rather more problematic. And in the end, the proof will be in the pudding—much depends on how the system is implemented on the ground next year.

One way to think about the pros and the cons is to borrow from the framework I introduced last year, breaking down the analysis into questions of implementation and of policy implications. Using that earlier framework as a guide, here is a rough-cut analysis of the Apple program:

Implementation Issues

Some implementation choices necessarily have legal and policy implications.

Mandatory or voluntary? The first question I raised last year was whether the system developed would be mandatory or voluntary. Clearly, voluntary programs are more user-protective but also less effective. Conversely, a government mandate would be far more intrusive than a mandate from a commercial enterprise, whose products (after all) can be discarded.

On this issue, Apple has taken a fairly aggressive stance—the NeuralHash matching program will not be optional. If you are an Apple user and you upgrade to the new iOS, you will perforce have the new hash-matching system on your device and you will not have the option of opting out. The only way to avoid the program is either to refuse the update (a poor security choice for other reasons) or to change from using Apple to using an Android/Linux/Microsoft operating system and, thus, abandon your iPhone. To the extent that there is a significant transaction cost inherent in those transitions that makes changing devices unlikely (something I believe to be true), it can fairly be said that all Apple users will be compelled to adopt the hash-matching program for their uploaded photos, whether they want to or not.

At the same time, Apple has adopted this system voluntarily and not as the result of a government mandate. That context will insulate Apple’s actions from most constitutional legal problems and, of course, means that Apple is free to modify or terminate its CSS program whenever it wishes. That flexibility is a partial amelioration of the mandatory nature of the program for users—at least only Apple is forcing it on them, not multiple smartphone software providers, and not the governments of the world. So far.

Source and transparency of the CSAM database? A second question is the source and transparency of the CSAM database. Apple’s proposed source for the authoritative database will be the National Center for Missing & Exploited Children—a private, nonprofit 501(c)(3) corporation. It is not clear (at least not to me) whether NCMEC’s hash database listings will be supplemented with hashes from the private holdings of major for-profit tech providers, like Facebook, which have their own independent collections of hashed CSAM they have encountered on their platforms.

Notably, since some aspects of reporting to NCMEC are mandatory (by law certain providers must provide notice when they encounter CSAM), the use of the NCMEC database may again raise the question of whether NCMEC is truly private or might, in the legal context, be viewed as a “state actor.” More importantly, for reasons of security, NCMEC provides little external transparency as to the content of its database, raising the possibility of either error or misuse. It’s clear from experience that NCMEC’s database has sometimes erroneously added images (for example, family pictures of a young child bathing) that are not CSAM—and there is no general public way in which the NCMEC database can be readily audited or corrected.

Notice provided: When and to whom? How will notice of offending content be provided, and to whom? Here, Apple has done some good work. Before notice is provided to NCMEC, Apple has set a high numerical threshold (30 CSAM images) and also made the process one that is curated by humans, rather than automated. This high threshold and human review should significantly mitigate the possibility of false positives. By providing the notice to NCMEC, which will in turn provide the notice to law enforcement, Apple is taking advantage of an existing reporting mechanism that has proved relatively stable.

To be sure, the apparent decision not to provide contemporaneous notice to the user raises some concern, but delayed notification is common in the law when there is a risk that evidence will be destroyed. Add to this the promise of eventual notification and an appeals process within Apple, and, on this score, Apple deserves relatively high marks for its conceptual design.

Accuracy of matching algorithm? A further problem is, naturally, the question of whether or not the new NeuralHash matching system works as advertised. As Matthew Green and Alex Stamos put it: “Another worry is that the new technology has not been sufficiently tested. The tool relies on a new algorithm designed to recognize known child sexual abuse images, even if they have been slightly altered. Apple says this algorithm is extremely unlikely to accidentally flag legitimate content .... But Apple has allowed few if any independent computer scientists to test its algorithm.” Indeed, as I understand it, Apple thinks that any independent evaluator who attempts to test the system without its consent is violating its intellectual property rights (a stance it has taken in many other contexts as well).

To be sure, the added safeguard of having an Apple employee review images before forwarding them to NCMEC may limit the possibility of error, but it is at least somewhat troubling that there is no independent verification of the accuracy of the new program, and that Apple is resisting greater transparency.

Efficacy? A final implementation question is the integration of the whole system. A recent study by the New York Times demonstrated that the systematic linkage between the NCMEC database and screening systems in search engines was incomplete and yielded many false negatives. Again, the new Apple system has yet to be thoroughly tested, so it’s impossible to say with certainty that the integration of the NCMEC database into NeuralHash will be successful. It could be a poor system that sacrifices privacy without providing any gains in effectively interdicting CSAM. At this point, only time and real-world testing will establish whether or not Apple’s new system works.

Policy Implications

Irrespective of the details of Apple’s architecture, the company’s implementation choices raise some fundamental policy questions that are also worth considering.

Hash control? The hash database of CSAM will, by necessity, be widely distributed. All databases are subject to degradation, disruption, denial or destruction. While Apple has a good general track record of securing its systems against malicious intrusion, it does not have a record of perfect security—indeed, no one could. As far as I can discern, Apple will be taking no special precautions to control the security of the CSAM hash database—rather, it will rely on its general efforts. Though the risk is small, there is no perfectly secure data storage and distribution system. It requires a heroic assumption to be completely confident of Apple’s hash control. And that means that the hash database may be subject to manipulation—raising the possibility of all sorts of malicious actions from false flag operations to deep fake creation of fictitious CSAM.

Basic cybersecurity? Likewise, I have concerns about the overall security of the system. Any CSS program will necessarily have significant administrative privileges. Again, Apple’s security is quite robust, but the deployment of NeuralHash as part of the operating system will, by definition, expand the potential attack surface of Apple devices—again, with unknown effect.

Scalability and mutability? The NCMEC database contains more than 4 million distinct hashes. Apple has not said (as far as I am aware) what portion of that database will be pushed down to devices to conduct the on-device hash matching. It seems likely that a smaller, curated list will be distributed to end users to avoid scalability issues. But, again, this is a question that, apparently, has yet to be fully tested; will the curated smaller list suffice for effectiveness, or will a larger list (with concomitant increases in “bloat” inside the device) be necessary?
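For a rough sense of scale, here is a back-of-envelope estimate of what pushing the full database to every device might cost in storage. The per-entry size is an assumption; Apple has not published the format of the blinded on-device database.

```python
# Rough on-device footprint if the entire hash database were distributed.
# The hash count comes from the text above; the bytes-per-entry figure is assumed.
hash_count = 4_000_000   # distinct hashes in the NCMEC database
bytes_per_entry = 32     # assumed size of one stored (blinded) hash entry
size_mb = hash_count * bytes_per_entry / 1_000_000
print(f"Full database: roughly {size_mb:.0f} MB on every device")  # about 128 MB
```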

Commercial impact? The new CSS system will necessarily increase device processor usage. It will also likely result in an increase in network usage for the database downloads. While other improvements may mask any degradation in device performance, the costs will be small, but real. They will also not be borne uniformly, as the entire CSS application will be imposed only on higher-end devices capable of running iOS 15 (and thus the entire discussion here doesn’t apply in parts of the country and the world where relatively few people use newer iPhones—or use iPhones at all). Finally, it seems likely that Apple will need to lock down the CSS application within the iOS to prevent tampering, exacerbating the trend away from device owner control.

Form of content and loss of privacy and control? To its credit, Apple has decided to, in effect, let users be the masters of their own control and privacy. By limiting the CSAM scanning to images that are uploaded to iCloud, Apple allows end users to secure their privacy from scrutiny simply by refraining from using the iCloud function. In that sense, users hold in their hands the keys to their own privacy preferences.

But all is not, of course, that simple, and the credit Apple gains for its choice is not that great. First, and most obviously, the iCloud storage function is one of the very best features of Apple products. It allows users to access their photos (or other data) across multiple devices. Conditioning privacy on a user’s decision to forgo one of the most attractive aspects of a product is, at a minimum, an implementation that challenges the notion of consent.

Second, as I understand it, the choice to scan only uploaded images is not a technological requirement. To the contrary, unlike current scanning programs run by competitors such as Google or Facebook, the Apple CSS system will reside on the individual user’s device and have the capability of scanning on-device content. That it does not do so is a policy choice that is implemented in the iOS 15 version of the CSS system. That choice can, of course, be changed later—and while Apple promises not to do so, reliance on the company’s assurances will, no doubt, heighten anxiety among privacy-sensitive users.

In addition, Apple’s decision to limit scanning to content that is in image form (and not, say, to scan text in Messages) is also a policy choice made by the company. Apple has already started down the road of message scanning with its decision to use machine-learning tools to help protect children from sexually explicit images (a different part of its new child-protective policies). But in doing so, it has made clear that it can, if it wishes, review the content of Messages material—and there is no technological reason text could not be reviewed for “key words.” As such, much of the user’s privacy is currently assured only by the grace of Apple’s policy decision-making. Whether and how much one should trust in that continued grace is a matter of great dispute (and likely varies from user to user).

Subject matter and scope? Finally, there is the question of subject matter. Initially, scanning will be limited to CSAM, but there is a potential for expansion to other topics (for example, to counterterrorism videos or to copyright protection). Nothing technologically prevents the NeuralHash system from being provided with a different set of hashes for comparison purposes.

This is particularly salient, of course, in authoritarian countries. In China today, Apple is subject to significant governmental control as a condition of continuing to sell in that country. According to the New York Times, in China, state employees already manage Apple’s principal servers; the company has abandoned encryption technology; and its digital keys to the servers are under the control of Chinese authorities. Given China’s history of scouring the network for offending images and even requiring visitors to download text-scanning software at the border, one may reasonably wonder whether Apple’s promise to resist future expansion in China can be relied on. It is reasonable to wonder if at some point a government could compel Apple (or any other company that develops similar CSS products) to use this technology to detect things beyond CSAM on people’s computers or phones.

Legal Issues

Finally, most of the legal issues I mulled over a year ago are rendered moot by the structure of Apple’s CSS program. Since the company has adopted the CSAM scanning capability voluntarily, it will not be viewed as a state actor and there is sufficient distance between Apple and the government (and NCMEC) to make any contrary argument untenable. As a result, almost all of the constitutional issues that might arise from a compulsory government-managed CSS program fall by the legal wayside.

The Apple CSS program is likely to affect other legal obligations (for example, contractual obligations between Apple and its vendors or customers), and it may even implicate other generally applicable law. I will be curious, for example, to see how the CSS program fares when analyzed against the requirements of the EU’s General Data Protection Regulation. At least as I understand EU law, the near-mandatory nature of the system within iOS will make questions of consumer consent problematic for Apple. But that area of law is well outside my expertise (as is, for example, an analysis under California’s state-level privacy law). I simply note the problem here for the sake of completeness.

Conclusion

So, what’s next? Some observers might reasonably fear that Apple’s step is but the first of many and that other IT communications providers will be pressured to follow suit. So far, though, there seems to be resistance. WhatsApp, for example, has announced that it will not follow Apple’s lead. It remains to be seen whether WhatsApp, and others, can maintain that position in light of the political winds that will surely blow.

For myself, I hope they do. Because as far as I can see, Apple’s new technology has yet to come to grips with some of the hardest questions of policy and implementation. Much of the validation of the system is highly dependent on the degree to which one trusts that Apple will implement the system as advertised and resist mission-creep and other political pressures. The extent to which one has trust in Apple is highly context dependent and variable. As they say, your mileage may vary.

One gets the sense (perhaps unfairly) that Apple felt itself compelled to act as it did by external factors relating to the law enforcement pressure it had faced from the U.S. government. And it is far too early to tell whether the decision to do so will prove wise, from a business perspective. But at least at a first cut, it certainly seems that Apple’s promise—“What happens on your iPhone stays on your iPhone”—now rings hollow.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
