
How Cambridge Analytica, Facebook and Other Privacy Abuses Could Have Been Prevented

Daniel J. Weitzner
Wednesday, April 4, 2018, 7:00 AM

The shocking misuse of personal data by Cambridge Analytica, actively facilitated by Facebook, was a preventable harm. Hundreds of thousands of individuals who thought they were participating in an academic research project were used as seed corn for a large-scale, unethical profiling scheme. Tens of millions more then had their personal data swept into broad, profit-making political experimentation that gave unscrupulous advertisers the ability to target messages based on highly sensitive personality properties.


Published by The Lawfare Institute


So what’s worse than the dishonesty of Cambridge Analytica and the negligence of Facebook? That this situation was preventable, twice over. First, Congress could have headed off this and many other privacy violations had it enacted the Consumer Privacy Bill of Rights when it was first proposed more than five years ago; instead, Congress declined to act, and the proposal never became law. Under that legislation, the conduct of both Facebook and Cambridge Analytica would have been illegal, and the Federal Trade Commission could have stopped or deterred it with clear prohibitions and fines. Second, the European Union had privacy laws in place during the entire period of illicit data collection and misuse, laws that could have been used to investigate and punish both companies. Yet none of the EU data protection agencies appears to have used its authority to stop this and similar conduct. Thanks to a whistleblower, the U.K. data protection regulator, the Information Commissioner’s Office, has launched an investigation, and the U.S. Federal Trade Commission is examining whether Facebook violated a consent decree it entered into when settling a 2011 investigation of earlier privacy violations.

Calls are mounting for privacy regulation. While such regulation is necessary, merely enacting tough rules, as the EU arguably has already done, will not be enough. Understanding how best to proceed starts with looking at what substantive law is required to protect citizens in today’s interconnected world, and at how to make those rules enforceable at the scale of the global digital economy and society.

When it was first proposed in 2012, the Consumer Privacy Bill of Rights contained two substantive provisions designed to address the challenges of privacy in this increasingly interconnected age. First, it provided a right of individual control: “Consumers have a right to exercise control over what personal data companies collect from them and how they use it.” This right is necessary but not sufficient; the Cambridge Analytica-Facebook situation shows that placing the burden on individuals to protect themselves is unreasonable given the proliferation of increasingly complex data-collection and data-sharing arrangements. Second, the bill proposed a new right, “respect for context,” which establishes an enforceable expectation that companies will collect, use and disclose personal data in ways consistent with the context in which consumers first provided that data. Simply put, respect for context is the right not to be surprised by how one’s personal data is used. Applied to the Cambridge Analytica abuses, this right would have prohibited the firm from unilaterally repurposing research data for political ends: because the original context was academic research, individuals would have had to consent anew before their data could be used for political purposes. It also would have precluded the wholesale harvesting of personal data from the friends of consenting research subjects, people who had nothing to do with either the research or the subsequent political profiling. Most important, the legislation would have placed the burden of protecting individual rights on both Cambridge Analytica and Facebook.

Respect for context does not mean shutting down all uses of personal data or shutting off innovative uses of the “social graph” (the data that represents users’ relationships on online platforms). Consider the difference between Cambridge Analytica’s use of Facebook data and how information was used by the Obama 2012 reelection campaign app. Cambridge Analytica took social-graph data collected for research purposes and, without consent or even effective notice, repurposed Facebook users’ information for political profiling. By contrast, during the 2012 presidential campaign, Barack Obama supporters were invited to install a Facebook app that was given access to each user’s friend list. Users who installed the app could then send personalized messages to any or all of their friends, inviting them to events or sharing campaign literature. In other words, the individual, not the campaign, communicated with the user’s friends. And the campaign itself never got a copy of the whole social graph; that information remained with users (and Facebook). Although the same data-access permission was in place in both cases (each app could read its users’ friend lists), the uses were vastly different. Enabling an individual user to communicate directly with his or her friends was respectful of the context in which the data was collected: users expected to hear from their friends. They understandably did not expect that their data would be gathered and assessed merely because they were friends with someone who agreed to join a research study, nor that all of their personal data would wind up with an unknown commercial entity to which the researcher sold their information.
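The respect-for-context principle can be stated mechanically: a use of personal data is permissible only if it matches the purpose for which the data was collected, or if the data subject has separately consented to the new purpose. The sketch below is purely illustrative; the `Record` class and the purpose strings are invented for this example and describe no real platform’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    subject: str
    context: str                      # purpose the data was collected for
    consents: set = field(default_factory=set)  # additional consented purposes

def use_allowed(record: Record, purpose: str) -> bool:
    """A use is permitted only if it matches the original collection
    context, or the subject has separately consented to the new purpose."""
    return purpose == record.context or purpose in record.consents

r = Record(subject="alice", context="academic-research")
print(use_allowed(r, "academic-research"))    # True: the original context
print(use_allowed(r, "political-profiling"))  # False: repurposing needs fresh consent
r.consents.add("political-profiling")
print(use_allowed(r, "political-profiling"))  # True: only after new consent
```

Under such a rule, the burden of running the check falls on the data holder, not the individual, which is exactly where the Consumer Privacy Bill of Rights would have placed it.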

Ensuring respect for privacy context is vital not just for individual rights but also because it is the bedrock of human community. If people can’t control how they relate to others, there can be no genuine community, only blinding transparency from which people will seek to hide. In response to the recent outcry, Facebook founder Mark Zuckerberg has acknowledged his failure to protect users and restated his mission to build a global online community. A key question is whether billions of Facebook users signed up to live in Zuckerberg’s community or, rather, to create and enhance their own communities. Facebook is an extraordinary innovation that has created a platform for diverse communities large and small. And the platform has done an admirable engineering job giving users control over certain aspects of their personal data. But Facebook failed spectacularly here, in part because of the complexity of its data-collection mechanisms. The burden of dealing with this complexity should lie 100 percent with Facebook, not users or third parties. Under current U.S. law, users continue to find their privacy interests harmed, especially in the ever more complex technical environments in which personal data is shared.

Privacy protection faces challenges of substance and of enforcement. As recent reports show, privacy enforcers in Facebook’s largest markets, the United States and Europe, failed to detect, deter and punish these egregious violations. It is not sufficient simply to enact privacy principles in law, as Europe has done, nor even to have great enforcement technique, which the Federal Trade Commission does. Aggressive fines would help (Europe’s incoming General Data Protection Regulation allows penalties of up to 4 percent of a firm’s annual global revenue), but more is needed. Centralized regulators, including the FTC and the data protection authorities of the EU member states, are struggling to keep up with the use of personal data across the commercial world. In the United States, FTC privacy enforcement historically has been limited to advertising and profiling for marketing purposes; now the commission is called upon to address data used in home internet-of-things networks, face recognition and autonomous vehicles, to name a few emerging fields. The FTC’s enforcement arsenal should be supplemented with private rights of action for citizens whose privacy is infringed, which would also engage the investigation and litigation energy of the plaintiffs’ bar.

As a means of dealing with the scale of personal data in use, the Consumer Privacy Bill of Rights would have encouraged the development of enforceable industry codes of conduct: industry-developed rules that implement the legislation’s requirements for a given sector. After a code is developed and presented to the FTC, the commission determines whether it complies with the consumer rights in the law. If the FTC approves the code, it constitutes a safe harbor against FTC enforcement for members of the industry that comply with it. For companies that accept the code but fail to comply, its commitments serve as enforceable promises whose violation triggers FTC enforcement and penalties. This co-regulatory mechanism has to be implemented carefully, lest industry be allowed to water down the rules in the statute, a legitimate worry some privacy advocates have expressed.

The failure of existing privacy mechanisms to catch abuses such as those that unfolded at Facebook illustrates that legislators must put in place strong, substantive privacy protections and ensure that they can be enforced as the use of personal data grows.

Whenever Congress has considered privacy legislation (most recently in 2000, 2010 and 2015), some in industry have warned about stifling innovation with overly burdensome rules. This is a real concern. The Consumer Privacy Bill of Rights provided flexible rulemaking and enforcement mechanisms that learn from the unique success of the internet economy. Many, but not all, tech companies supported the proposal; some privacy advocates worried it was too flexible. Continued innovation must be encouraged, but not at the expense of leaving citizens in fear of being swept into out-of-control commercial or political experiments.

As social media platforms and the internet economy as a whole become important for billions of individuals, their operation is clearly subject to public-interest considerations. Governments have an obligation to provide clear rights for their citizens. Protecting individuals and their ability to freely associate online is Congress’s responsibility. The world created by Silicon Valley needs Congress to act, immediately, to protect citizens online.

Daniel J. Weitzner is Director of the MIT Internet Policy Research Initiative and Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab. From 2011 to 2012, Weitzner was United States Deputy Chief Technology Officer for Internet Policy in the White House. Weitzner’s computer science research has pioneered the development of Accountable Systems architecture to enable computational treatment of legal rules and automated compliance auditing. He teaches internet public policy in MIT’s Electrical Engineering and Computer Science Department. Before joining MIT in 1998, Weitzner was founder and Deputy Director of the Center for Democracy and Technology, and Deputy Policy Director of the Electronic Frontier Foundation. Weitzner has a law degree from Buffalo Law School and a B.A. in philosophy from Swarthmore College.
