
New Strategies for Securing Our Private Lives

Jonathan Zittrain
Monday, October 24, 2016, 9:41 AM

I recently wrote an essay reflecting on the reality that nearly anyone with a life online is today subject to being hacked and having anything private become public.

If the media is understandably going to publish newsworthy stuff, regardless of its provenance, and people are going to continue to use email and other communications that leave a record, what should be done?

There are some tactical, short-term ways to mitigate risk.

The prosaic basics

First, citizens can up their security game a little. For example, they can turn on “two-factor authentication,” which is available through all major email providers. That means that logins to an account from unfamiliar machines will require something more than an easily compromised password. For example, a text message might have to be sent to a preregistered mobile phone, showing that the user not only knows the right password but also is physically holding his or her phone. And password managers can be used to generate unique passwords for each site—a common weak spot, as a password compromised in one place is frequently reused elsewhere.
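
For the curious, here is a minimal sketch of how an authenticator app's six-digit codes are derived under the standard TOTP scheme (RFC 6238), a common alternative to text-message codes; the shared secret below is the usual demonstration value, not anyone's real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical secret shared between the provider and the user's phone.
print(totp("JBSWY3DPEHPK3PXP"))
```

Whether the second factor arrives by text message or is computed on the phone, the point is the same: the login proves possession of a separate device, not just knowledge of a password.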

These sorts of steps will mitigate the risk of casual hacks, but they can’t readily stop someone with the determination and resources to break in. For example, someone can go to a mobile phone store and pretend to be the victim, switching phone service to a new unit—and thereby gain the means for two-factor authentication or outright password recovery. Or the victim can be enticed to visit a site that looks like Gmail—complete with a prompt for a mobile phone’s confirmation text after a password is entered—while the site is, in real time, serving as a “man in the middle,” relaying the user’s answers to the real email site just in time to take over the user’s session and view and download all the email.

These shortcomings suggest that it is unrealistic to expect the standard tools for the paranoid to be fully effective in reducing the risk of being hacked. Therefore, a further short-term effort is needed to guard against a problem beyond the leak of real private email: the problem of fake email interspersed by an adversary among the real.

Adopting digital signatures

Extensions in mail services can enable digital signing of every email that goes out. These are derived from the technologies that enable the solemnization of contracts at a distance, where both parties can be confident that the person on the other side of the screen is really who she says she is. By having each email signed at the time of sending—effectively guaranteeing that it is real by the use of an additional password—it becomes harder for a malicious actor to salt a real trove with false stuff. The drawback, if drawback it is, is that people then can’t readily repudiate later what they’ve written and signed—after all, they went to great trouble to credibly fingerprint it as their own. For most people this would be a good thing rather than a bad one, but it does eliminate plausible deniability as a defense to leaked private information.
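
As a rough illustration of the underlying primitive, here is a minimal sketch using the Python cryptography library's Ed25519 keys; real mail-signing standards such as S/MIME and OpenPGP wrap this kind of operation in certificates and key management, and the message and address below are invented.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender generates a key pair once; the private half never leaves the sender.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the outgoing message at the time of sending.
message = b"From: alice@example.com\nSubject: Q3 planning\n\nSee attached draft."
signature = private_key.sign(message)

# Anyone holding the public key can later confirm the message is genuine and
# unaltered; a forged or edited email fails verification.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid: message was forged or altered")
```

Because verification requires only the public key, recipients or later readers can confirm that a message in a leaked trove is genuine without any access to the sender's account.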

Digital signatures don’t solve the problem of having all one’s email compromised by a determined hacker, though. For that, longer-term strategic solutions are vitally needed.

Going cold: not every request for our data should be handled with the same alacrity

One promising avenue is what I call “going cold.” Services like Amazon Web Services offer commodity hosting for someone else’s Web server or bulk data. Prices are staggered depending on how quickly the data needs to be called up. In 2012, Amazon unveiled “Glacier” storage, where your data can be stored in the cloud more cheaply than usual—so long as you are content to live with a three- to five-hour delay in retrieving it.

Now imagine that delay as a feature rather than a nuisance for email services like Gmail that enable and encourage users to store everything indefinitely. It wouldn’t be used most of the time. Searches and requests to view or download one email or another—the usual behavior as you graze in your inbox and look up old correspondence—would be handled normally. But requests to download everything, or say a year’s worth of correspondence, would require a cooling-off period, with notices of the requests sent to the user’s various devices. This would allow time for someone who has been hacked to realize what was going on—such as a transfer of phone service off of the usual cell phone—and remedy it before a wholesale leak of data. Going cold is a way for average users to “air gap” their records that minimally interferes with day-to-day rhythms, while treating special, sweeping requests as the sensitive events that they are.
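
Here is a minimal sketch of what such a policy might look like on the provider's side; the threshold, delay, and notification hook are all hypothetical, chosen only to show the shape of the rule: routine reads pass through, sweeping exports wait behind a notice and a clock.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

COOLING_OFF = timedelta(hours=24)    # assumed delay before a bulk export proceeds
BULK_THRESHOLD = 500                 # assumed cutoff for a "sweeping" request

@dataclass
class ExportRequest:
    user_id: str
    message_count: int               # how many messages the request covers
    requested_at: datetime

def handle_request(req: ExportRequest, notify_devices, deferred_queue):
    """Serve routine reads immediately; put bulk exports 'on ice' with notice."""
    if req.message_count < BULK_THRESHOLD:
        return "serve_now"           # grazing the inbox works as it always has
    # A sweeping request: alert every registered device and start the clock,
    # giving a hacked user time to notice and cancel before anything leaves.
    notify_devices(req.user_id, f"Bulk export of {req.message_count} messages requested")
    deferred_queue.append((req, req.requested_at + COOLING_OFF))
    return "deferred"

# Hypothetical usage: a 20,000-message export triggers the cooling-off path.
queue = []
request = ExportRequest("user-123", 20000, datetime.now())
print(handle_request(request, lambda uid, msg: print(f"notify {uid}: {msg}"), queue))
```

The particular numbers are policy choices; what matters is that wholesale retrieval becomes an event the account owner can see coming in time to stop it.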

Strong dark archives: people or institutions to stand guard over our lives’ data

Air-gapped data can not only be encrypted, but encrypted in a way that requires more than one party to unlock it.

An initial case is for, say, a law school library, where a Supreme Court Justice might want to donate his or her papers, not to be released until a certain amount of time has passed or other conditions have been met. The state of the art today is for a library to simply promise to keep the material under lock and key until the designated release—an obligation that libraries take very seriously.

But a determined institutional commitment to lock information away is, perhaps increasingly, insufficient protection, as the contents of “papers” become much larger than before—a whole hard drive or email archive rather than a manila folder—and as the boundaries of litigation, investigation, and hacking creep outward. The Belfast Project is one notable example. There, journalist Ed Moloney directed an oral history project for Boston College that collected confidential interviews of people who had been involved in the Troubles in Northern Ireland, with each person’s interviews to be released only after his or her death. When participant Brendan Hughes died, Moloney, abiding by that promise, was able to publish a book based on the interviews, in which for the first time Hughes acknowledged membership in the IRA. The British government, investigating open cases of assassinations from that era, then sought to retrieve the rest of the tapes through its mutual legal assistance treaty with the United States. Years of litigation followed, with Boston College on the losing side. The library’s commitment was trumped.

Imagine if, instead of simply storing digital documents in a readily recoverable form, sensitive data at rest were encrypted with a key that is then broken into fragments, with each “horcrux” held by a fiduciary in a different jurisdiction. Rules would spell out under what circumstances—including an unforeseen emergency or some legal process—the fiduciaries are to part with their respective fragments. Early access, whether by the data’s original owner or by third parties acting under legal process, would not be made impossible. But the cost of accessing documents earlier than intended would become appropriately higher than that of a single subpoena to a single entity, as happened with the Belfast Project.

With this sharded encryption system of “strong dark archives,” humans ultimately decide—but the human decision-makers are positioned within institutions, perhaps across multiple jurisdictions, that by design have some loyalty to the person or entity entrusting the data to the repository in the first place.

Such a system, proven and honed for institutional storage of sensitive documents, could then become routine for the storage of corporate and individual data. People could choose which trust or trusts would hold key fragments for their historical data, and, like visiting a safe deposit box, they’d work with those trusts to regain access when needed. Or they could designate family or friends to hold key fragments. No one trustee—nor the donor—could jump the gun and get to the data, but the right group of them could. This would also, in some cases, help mitigate the duress that a particular person could be put under to turn over a key, since each holder possesses only a fragment.
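
One standard technique for splitting a key this way is Shamir's secret sharing, sketched minimally below; the five-fiduciary, three-to-recover arrangement is an illustrative assumption, not a prescription.

```python
import secrets

PRIME = 2**521 - 1   # a Mersenne prime large enough to hold a 256-bit key

def split_secret(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    # Random polynomial of degree threshold - 1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, power, PRIME) for power, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Recombine shares by Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj == xi:
                continue
            num = (num * -xj) % PRIME
            den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Hypothetical use: an archive key split among five fiduciaries, any three of
# whom can jointly reconstruct it; fewer than three learn nothing about it.
archive_key = secrets.randbits(256)
shares = split_secret(archive_key, n_shares=5, threshold=3)
assert recover_secret(shares[:3]) == archive_key
assert recover_secret(shares[2:]) == archive_key
```

Each fragment could then be handed to a different fiduciary in a different jurisdiction, with the conditions of release written into each one's instructions.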

The challenges posed by how hackable we have all become are profound. Before resigning ourselves to a new reality in which either no privacy can be expected or sensitive communications must be limited or conducted by analog means to avoid compromise, we should look for ways to employ technology—and networks of trusted friends and institutions—to facilitate protections for ourselves and those with whom we communicate.


Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School and the Harvard Kennedy School of Government, Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, Director of the Harvard Law School Library, and Co-Founder of the Berkman Klein Center for Internet & Society. His research interests include the ethics and governance of artificial intelligence; battles for control of digital property; the regulation of cryptography; new privacy frameworks for loyalty to users of online services; the roles of intermediaries within Internet architecture; and the useful and unobtrusive deployment of technology in education. He is currently focused on the ethics and governance of artificial intelligence and teaches a course on the topic. His book, "The Future of the Internet -- And How to Stop It", predicted the end of general-purpose client computing and the corresponding rise of new gatekeepers.

His writing here and elsewhere represents his individual, independent views.
