Published by The Lawfare Institute
In an October 7 Washington Post story that I just saw, Andrea Peterson writes about the inability of government authorities and privacy advocates to agree on the meaning of “strong encryption.”
I commend this story for those interested in understanding the rhetoric that often surrounds public policy debates involving technical content. Each side uses words to describe its own position that seek to claim the moral high ground and other words to denigrate the positions of its opponents.
In the 1990s version of the crypto wars, the term “strong encryption” generally referred to encryption using keys long enough to resist brute-force decryption for very long periods of time. And when people discussed “escrowed encryption,” they at least acknowledged explicitly the idea that encryption could be both strong and allow for exceptional access (that is, access to encrypted data or communications that might be obtained in exceptional cases).
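The arithmetic behind “long enough to resist brute force” is simple: the expected cost of exhaustive search is half the keyspace divided by the attacker’s guessing rate. The sketch below illustrates this; the guess rate of 10^12 keys per second is a hypothetical assumption for illustration, not a claim about any real attacker’s capability.

```python
# Rough arithmetic behind key-length "strength": expected brute-force time
# is (keyspace / 2) / guess_rate. The guess rate here is an assumed,
# illustrative figure, not a measured attacker capability.

GUESSES_PER_SECOND = 1e12          # hypothetical attacker speed (assumption)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def expected_years_to_break(key_bits: int,
                            rate: float = GUESSES_PER_SECOND) -> float:
    """Expected years to find a key of key_bits bits by exhaustive search,
    assuming on average half the keyspace must be tried."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / rate / SECONDS_PER_YEAR

for bits in (40, 56, 128):
    print(f"{bits}-bit key: ~{expected_years_to_break(bits):.3g} years")
```

Under this (assumed) guess rate, a 40-bit key falls in under a second while a 128-bit key requires an astronomically long search, which is why key length dominated the 1990s debate.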
Today, we have both the government and privacy advocates using the term “strong encryption” to refer to encryption that unauthorized parties cannot break. At some level, we can all agree on that point. But who should get to decide who’s authorized? That’s the real question, and it can’t be answered through a technical analysis alone.
Privacy advocates say that the user of the encryption should determine who is authorized.
The government says that proper judicial and other legal processes should determine who is authorized, and so Department of Justice lawyer Kiran Raj (quoted in Peterson’s story) uses the term “warrant-proof” encryption to refer to the definition of strong encryption used by privacy advocates. Raj could just as easily have used the term “intrusion-proof” encryption. But he does not, and that choice reveals his intent: he wishes to underscore the point that such encryption frustrates legitimately obtained search warrants.
Privacy advocates use similar rhetorical techniques. They prefer the term “back door” for features that allow exceptional access, which they oppose. The phrase “back door” conjures images of secret, surreptitious access to encrypted data, which is indeed often the intent of law enforcement access, at least as far as the user is concerned. But with proper and functioning judicial and other legal processes in place, warrants are made known to all who are legally required to know, and so warrants to use back-door mechanisms would not be secret and surreptitious, at least not as far as the law was concerned. A casual perusal of news stories reveals that “back door” has won the day rhetorically.
My bottom line in this note is that a genuinely neutral technical analysis, whether on encryption, cybersecurity, or anything else, should seek to avoid loaded language. That may not always be possible, but from my perspective, it is certainly worth the effort. At the very least, readers should keep the motives of writers and speakers in mind.