Surveillance & Privacy

Facing Privacy Tradeoffs to Restore Trust and the Rule of Law

Daniel J. Weitzner
Friday, May 6, 2016, 7:30 AM


Published by The Lawfare Institute

One of the great technological advances of our time is the ability to put vast amounts of information to use for expanding scientific knowledge, making enterprises more efficient, increasing consumer convenience, and even protecting public safety. At the same time, the public is deeply mistrustful of how both government and the private sector use personal information. According to one Pew poll, less than 20 percent of the public believes that the government can be trusted to do what is right most or all of the time. And a large majority of Internet users believe that social media companies and advertisers are using information about them without their knowledge or consent.

When privacy trust gaps have emerged in the past, the traditional response has been to extend the rule of law to restrain the actions of either government or commercial organizations. Earlier in the history of information technology, before the advent of the Internet, smartphones, and large-scale data analytics, law was seen as an effective mechanism to protect privacy. But in those simpler times, the effectiveness of the rule of law as privacy protector also depended on a particular relationship between law and technology that no longer applies.

Before digital telecommunications, email, and the Internet, the surveillance power available to government was limited not only by statute and constitutional law, but also by inherent limitations in technology. Surveillance and data analytic technologies were relatively weak and expensive to use. Operating a voice wiretap on 1970s-era telephone technology required three shifts of several law enforcement agents each to sit, often in a van parked near the target, painstakingly recording conversations and conducting real-time minimization of any information not covered by the scope of the warrant. Then the tapes had to be transcribed manually onto paper that was not especially easy to search, sort, or mine for patterns. So too with technology in the commercial sector. Few companies could afford the computing resources necessary to be truly effective privacy invaders, and much of the personal data that today forms the basis of profiling and analytics was either not available at all or, in paper form, resistant to intrusive analysis.

The degree of privacy protection enjoyed by citizens until recently was actually determined not merely by the rules enshrined in law but also by the relative costs and limitations inherent in earlier technologies. The values of privacy protection were expressed in our laws, but a good part of realizing that protection came as a result of particular technological barriers. There was a limit to how much surveillance the government could do, and how much personal profiling was commercially possible, simply because of technological circumstances.

The traditional balance of strong law and weak surveillance technology has been disrupted on both fronts since the arrival of cheap computing power and the global Internet. Today, the rule of law in the area of privacy is relatively weak, while technology shows surprising strength. First, weaknesses and gaps in both privacy and surveillance law have grown. Since the emergence of the Internet and smartphones, the US Congress has not managed to update consumer privacy law at all, and even changes in sectoral laws governing health and finance have been limited. Our most important surveillance laws, Title III and ECPA, remain frozen in the pre-Internet 1980s, and arguably even took a step backward with the Patriot Act.

The weakness of US privacy law was underscored by the Snowden disclosures. Average citizens were surprised at the breathtaking reach of government surveillance power. Of even more concern, the independent court whose job it is to provide ultimate oversight of national security surveillance declared itself unable to tell whether the agency it supervises is actually following the court’s orders. According to then-presiding FISA Court Judge Reggie Walton, “the court lacks the tools to independently verify how often the government’s surveillance breaks the court’s rules that aim to protect Americans’ privacy.” And in the larger realm of commercial data usage, Congress has almost completely failed to extend legal protections to consumers in new areas such as personal health data and sensor information from the emerging Internet of Things. The FTC and the White House have worked hard to fill this gap, but legal protections remain thin and users know it. The rule of law is weak when it comes to the new world of globally-connected digital information services.

Against this background, users have sought privacy strength in technology. Encryption is offered as an essential privacy protection tool for everyone from human rights activists seeking protection from despotic regimes, to builders of alternative, anonymous payment systems, to online consumers looking to hide their browsing behavior from commercial marketers. Yet while encryption is essential to protect the security of personal data against illegal attacks, it is not effective at protecting users against misuse of their data where the goal is to allow some uses but block others. Beyond encryption, technical approaches such as ad-blocking tools have also been offered as an alternative to the rule of law. Absent consumer privacy law, anti-tracking techniques can help users avoid finding themselves represented in increasingly sophisticated online profiling systems.

Yet this technological self-help approach has not quelled online users’ privacy worries. As users we are bombarded with the awareness of being tracked and profiled, but have little sense that government will protect us. Nor is there a clear path to effective self-help. In fact, we are all left in a state of privacy uncertainty, as the Pew survey data cited above shows. From a broad social policy perspective, this is the worst of both worlds. Worry about privacy exposure chills freedom of association in commercial, political, and perhaps even personal arenas, and creates barriers to innovation for companies uncertain about what their privacy practices should be.

Today we don’t have the luxury of relying on the friction of old information technology to serve as a brake on either government or commercial overreach. New information technology mostly tips its hand toward the possibility of great privacy intrusion. So our only option to restore the proud place of the rule of law is to be very explicit about what privacy protections we want with respect to both government and private action. First, we need to be very specific about the privacy-utility balance we expect for specific uses of personal data. With the passage of CISA, for example, there will be an increased flow of cybersecurity threat information to the government. Some of this will be personal data, and some may be potential evidence of crimes. As DHS makes rules about how the personally identifiable information component of threat data is handled, it is important that there be very clear rules limiting onward usage and storage of threat data. Those rules have to be clear enough to set expectations with citizens about what will and will not be done with personal data.

Second, rules must be designed so that they can be enforced in large-scale data flows. CISA is but one context in which the volume of data flow requires that rules be susceptible to machine-assisted accountability. As Susan Hennessey argued yesterday, it is not reasonable to expect manual human review of every transaction, so rules must be sufficiently concrete that computers can automatically identify those transactions that are clearly compliant with the rules, and those that are clearly not. Human judgment can come into play for any transactions that fall into a gray area.
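To make this concrete, here is a minimal sketch of the kind of machine-assisted triage described above. It is purely illustrative: the rule names, record fields, and the 180-day retention limit are invented for the example and do not reflect any agency's actual rules or systems. Each rule classifies a record as clearly compliant, clearly noncompliant, or gray (needing human judgment), and the triage step routes only the gray cases to human review.

```python
# Hypothetical sketch of machine-assisted compliance triage.
# All rule names, field names, and thresholds are invented for illustration.

COMPLIANT, NONCOMPLIANT, GRAY = "compliant", "noncompliant", "gray"

def check_retention(record):
    """Assumed rule: threat data may be retained at most 180 days."""
    return COMPLIANT if record["age_days"] <= 180 else NONCOMPLIANT

def check_purpose(record):
    """Assumed rule: onward use must be for a cybersecurity purpose.
    A missing purpose cannot be machine-classified, so it is flagged
    as gray for human review rather than auto-decided."""
    purpose = record.get("purpose")
    if purpose == "cybersecurity":
        return COMPLIANT
    if purpose is None:
        return GRAY
    return NONCOMPLIANT

RULES = [check_retention, check_purpose]

def triage(record):
    """A record is noncompliant if any rule clearly fails, gray if any
    rule requires human judgment, and compliant only when every rule
    clearly passes."""
    results = [rule(record) for rule in RULES]
    if NONCOMPLIANT in results:
        return NONCOMPLIANT
    if GRAY in results:
        return GRAY
    return COMPLIANT

records = [
    {"age_days": 30, "purpose": "cybersecurity"},   # clearly compliant
    {"age_days": 400, "purpose": "cybersecurity"},  # retention exceeded
    {"age_days": 30},                               # purpose unknown
]
print([triage(r) for r in records])
# ['compliant', 'noncompliant', 'gray']
```

The point of the sketch is that only the third record consumes human attention; the first two are decided automatically because the rules are concrete enough to evaluate by machine.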

Recognizing that the balance between privacy law and technology has shifted firmly away from privacy protection will enable policymakers around the world to provide citizens with the trust they deserve and create an environment in which new technologies can flourish.

Daniel J. Weitzner is Director of the MIT Internet Policy Research Initiative and Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab. From 2011 to 2012, Weitzner was United States Deputy Chief Technology Officer for Internet Policy in the White House. Weitzner’s computer science research has pioneered the development of Accountable Systems architecture to enable computational treatment of legal rules and automated compliance auditing. He teaches Internet public policy in MIT’s Electrical Engineering and Computer Science Department. Before joining MIT in 1998, Weitzner was founder and Deputy Director of the Center for Democracy and Technology, and Deputy Policy Director of the Electronic Frontier Foundation. Weitzner has a law degree from Buffalo Law School and a B.A. in Philosophy from Swarthmore College.
