Published by The Lawfare Institute
In Part I of this series, I noted the tendency of commentators—without any actual knowledge—to assert that NSA could simply break into a given locked iPhone. The emergence of a new tool that purportedly defeats iPhone security, even one apparently created by someone other than NSA, makes it all the more likely that in the future, critics will simply assume away the challenges of Going Dark by claiming that, surely, NSA could solve the problem if it were so inclined—and if it were asked.
But as I suggested in the first post, the problem is that there are plenty of things that NSA cannot do. The agency is legally and technologically constrained. Both now and as security systems become more complex, it is not reasonable to simply assume that NSA will be able to get around everything.
The NSA does cryptography, not magic.
In this post, let’s consider the world that many readers will regard as counterfactual—one in which NSA actually can’t unlock a phone to which the FBI requires access. In this world, in other words, the kind of baseless speculation we are currently seeing regarding NSA’s capabilities turns out to be not merely baseless but also wrong. Note that despite the emergence of the new tools, this is the world consistent with the sworn representations of the highest officers of our federal government and, indeed, the factual representations Apple has made to the government. It’s a world in which some Israeli company may have a capability that NSA lacks.
If this is the case, then the entire premise that we can—as Richard Clarke said to NPR—just call NSA and have them fix the problem is a dangerous fantasy. It uses a falsehood to distract from and avoid the very real tradeoffs at play in the Going Dark debate. In reality, we have to confront choices that incorporate some degree of risk no matter what we do. There are real risks to device security and cybersecurity inherent in mechanisms facilitating lawful access or obligations to provide technical assistance. And there are real risks inherent in accepting that there is certain information law enforcement cannot see, no matter how many court orders it holds or how important the information may be.
But the “NSA can do it” line of argument asserts that there’s an escape valve. People offering this option are offering Trump-like logic: “Don’t worry, America, you can have it all. You can have uncrackable devices in every circumstance except the ones you really, really care about where some magical extra powers will show up and deliver. It’s just a matter of will.”
But just because a solution emerged here does not mean a solution will always emerge, nor does it mean that NSA will be able to generate that solution. It would be a serious error to presume that the inevitable existence of vulnerabilities will mean that such vulnerabilities will inevitably be available to and exploited by the NSA—or anyone else for that matter—in the timeframe law enforcement needs. This assumption is superhero logic, only the superhero is an intelligence agency we collectively purport to fear.
If, now or in the future, NSA cannot unlock a phone that domestic law enforcement needs access to, then our superhero logic is merely a way for citizens and legislators to spare themselves the labor of reality-based risk analysis. And the risk that this is all make-believe only grows as security systems evolve to become more complex and difficult to penetrate over time. We don’t make policy based on superpowers because they don’t exist. NSA can’t break every system. We shouldn’t make policy on the assumption that it can.
To be fair, Apple never asserted that NSA has superpowers. Rather, it argued—picking up on a line of reasoning first introduced by Magistrate Judge James Orenstein—that the Department of Justice has an obligation to seek out the capacity of the intelligence community before certifying that the assistance requested is “necessary.” In other words, in Apple’s view, we don’t assume the NSA can crack any particular phone, but the government can’t ask Apple for help until it verifies that it can’t.
Here’s the thing: If a judge agrees with this proposition, a company challenging an All Writs Act order would suddenly be entitled to know affirmatively whether or not the intelligence community has the capacity to access its device or beat its encryption. The government would, in effect, be required to provide a list, yes or no, of capabilities in each and every instance, at least if it wanted a company’s help. And the government would, in theory, also need to provide this information to a defense attorney in any resulting prosecution, since the defense is entitled to examine and challenge the sufficiency of the original order. The intelligence community might have reasonable objections to providing such a road map to its capabilities—and to evading lawful surveillance.
Even if we collectively decided to live with the serious compromises to intelligence methods this would entail, we would be left with significant questions as to how such a system might operate in practice. Who in the intelligence community would make such an attestation on behalf of the whole of government? Knowledge of intelligence capabilities, after all, is held on a need-to-know basis, often within Special Access Programs. Part of how we protect classified information is by preventing anyone from knowing everything and preventing people from knowing what they don’t need to know in order to accomplish their jobs. But if there is some statutory obligation to certify the non-existence of a capability when asked, does this mean the Director of the NSA—the person who, by the nature of his position, is the only one capable of speaking for the entire agency—must personally answer each order? Does the DNI have to speak on behalf of the entire intelligence community? What about the capabilities of the US military; are those fair game too?
Note that the expectation of NSA’s involvement would create all of these problems even if there were absolutely no practical benefit to its engagement. That is, these harms would materialize even if the only outcome were that we learned that NSA—or one of its 16 sister intelligence agencies—cannot, in fact, unlock the phone.
Still, that outcome is not nearly as bad as the alternative. Paradoxically, things would be even worse in the universe that Richard Clarke believes we live in: the one in which NSA does, in fact, have superpowers. I turn to that world in the final post in this series.