
Breaking the Encryption Stalemate: New Research on Secure Third-Party Access

Alan Z. Rozenshtein
Thursday, March 29, 2018, 8:00 AM


Published by The Lawfare Institute in Cooperation With Brookings

Last month, the National Academies released their report on potential solutions to the problem of law enforcement access to encrypted data. The reaction was polite but unenthusiastic. The response of Access Now’s Amie Stepanovich was typical: “This report contains a great compendium of information, but doesn’t fundamentally alter the conversation.”

The “conversation” is the longstanding and bitter debate between law enforcement and the information-security community on whether it’s possible to design an encrypted system that is both secure and gives individualized access to third parties (i.e., the government) subject to court orders. But buried in the report is an important development that may end up marking a turning point in that debate: High-level experts in the information-security community itself are trying to build secure third-party-access systems. As the New York Times reported on Saturday, non-government researchers’ sudden willingness to work on the problem has given the FBI and the Justice Department new momentum in their push for legislative mandates for third-party access to encrypted data.

To understand why this is a big deal, it’s important to appreciate just why the current debate has stalemated. The overwhelming consensus among the academic cryptographers, security researchers, and industry technologists who make up the information-security community is that encrypted systems that allow third-party access are always insecure. But this consensus is vulnerable to two criticisms.

The first is that the consensus argument against secure third-party access depends on a very specific meaning of “secure.” It is undoubtedly the case that a system with third-party access is less secure than the same system without it, no matter how such access is designed. One reason is that third-party access adds complexity, and more complex systems are, all else being equal, less secure, for the simple reason that they are harder to design, analyze, and manage. Hence the information-security maxim: “Complexity is the enemy of security.”

But in the real world, security is never an all-or-nothing proposition. Security always comes at a cost; for example, it takes more time and money to design more secure systems, and security often requires trading off user features like password or data recovery. The real question is whether a particular system is “secure enough.”

This in turn requires that we answer two sub-questions. First, what is the use case? “Secure enough” for a smartphone that can only be hacked if it’s in an adversary’s physical possession is a less demanding standard than it is for the internet-connected systems running the electricity grid. Second, in addition to security’s individual and social benefits, what are its individual and social costs? This question includes considerations that fall outside the expertise of the information-security community—namely, the costs to public safety in the form of relevant data that is unavailable to law enforcement. Even more importantly, this question implicates policy tradeoffs and value judgments that neither technology companies nor the information-security community has the democratic legitimacy to make on its own. It’s not up to Apple, the Electronic Frontier Foundation, or, for that matter, the FBI to unilaterally decide how much information security is worth sacrificing to save a life or stop a crime; that’s a decision for the public, acting through its elected government, to make for itself. (Hence the ultimate need for a legislative solution to settle this debate one way or another.)

This line of argument—that “secure” is neither all-or-nothing nor exclusive of broader social costs—has been the standard critique of the position that “everyone knows that third-party access is impossible.” (I discuss it at length in this article.) But the strength of the critique has been blunted by the fact that those making it (myself included) could never demonstrate that secure (or, more precisely, “secure enough”) third-party-access systems were feasible. The lack of a proof of concept (or even a proof of feasibility) has rendered appeals for secure-enough systems moot. Whenever anyone would raise the prospect of third-party access, the result would be something like this tweet from a Stanford mathematician responding to the Times story:


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
