Published by The Lawfare Institute
Finally, Mark Zuckerberg has spoken. The short version of his response? “We have a responsibility to protect your data, and if we can't then we don't deserve to serve you.” But Zuckerberg is wrong. The Cambridge Analytica scandal is not about a failure to secure users’ data; it is about a failure to protect the privacy of users’ data. And Facebook is in no position to change that, for the sharing of user data lies at the heart of Facebook’s business.
Let’s start with unpacking the Cambridge Analytica scandal. It begins with personality quizzes. Is your favorite hobby running or cooking? You’re probably conscientious. Prefer Nightwish to the rapper Waka Flocka Flame? That suggests a more introverted personality. Tests such as these can predict who you are and, more importantly in the context of the scandal, how you are likely to vote.
Cambridge University’s Aleksandr Kogan—also known as Aleksandr Spectre—studies such issues. Kogan developed a personality-quiz Facebook app called thisisyourdigitallife. He needed data, massive amounts of it, so he used his private company, Global Science Research, to run surveys and recruit test takers. The company paid people to take the personality test—or rather, Kogan contracted with Cambridge Analytica, which provided funds to pay for people to take the test. A total of 270,000 did so.
The test takers agreed to let Kogan harvest data from their Facebook accounts. Because Facebook’s vision was that every app should be “socially enabled” to simplify engagement and connection, Kogan was also able to collect data about the quiz takers’ Facebook friends—all 50 million of them. Only people who had explicitly opted out of this Facebook feature were protected from Kogan’s data collection, and opting out was not easy. Facebook has since locked down this feature—I’d call it a bug. Kogan said he was collecting the information for academic research, but his company gave the data to Cambridge Analytica.
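A back-of-the-envelope calculation, using only the figures above, shows the scale of that fan-out: each consenting quiz taker exposed not one profile but, on average, the profiles of well over a hundred friends.

```python
# Back-of-the-envelope sketch of the friends-data fan-out,
# using the figures reported in the text (early 50 million estimate).
quiz_takers = 270_000          # users who opted in to thisisyourdigitallife
profiles_exposed = 50_000_000  # accounts Kogan was able to collect data on

# Average number of profiles exposed per consenting user
avg_per_taker = profiles_exposed / quiz_takers
print(f"{avg_per_taker:.0f} profiles exposed per quiz taker")
```

Roughly 185 profiles per opt-in: each individual consent multiplied the collection by more than two orders of magnitude, which is why the friends-of-users default mattered so much more than the quiz itself.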
This occurred in 2014. According to Facebook vice president Andrew Bosworth, in 2015 Facebook became aware of the data “sharing,” which violated the terms of service under which Kogan had obtained the information of Facebook users. The company demanded that Cambridge Analytica delete the Facebook data in its files. Cambridge Analytica said it had complied, and for Facebook, that was the end of the story.
But it wasn't the end of the story. Cambridge Analytica had not deleted the data; instead, it used the information to conduct micro-targeting of U.S. voters during the 2016 presidential campaign and quite likely changed the course of events.
Facebook failed in multiple ways. The core failure was, and is, excessive information-sharing. The decision that enabled Kogan to scrape the data from 50 million accounts through the acquiescence of the original 270,000 personality quiz takers was not a security breach—it was a conscious Facebook choice to encourage massive information-sharing. That choice comes at the expense of privacy. Kogan’s obtaining personal data from those 50 million accounts was a major privacy breach—one enabled by Facebook’s core vision.
Moreover, Facebook failed to conduct due diligence when it learned in 2015 that Kogan had provided Cambridge Analytica with the information he had gleaned on the 50 million Facebook users. That data dump was a violation of the terms of service under which Kogan had obtained the data. Asking Cambridge Analytica to certify that it had deleted the data—which is all Facebook required from the company—was completely inadequate. One would hope that Facebook lawyers and policy folks understood this. But privacy runs counter to Facebook’s raison d’etre. Proper actions and attention to privacy were sorely lacking.
Finally, Facebook had a moral responsibility to let all 50 million of those users know that their data had been shared with Cambridge Analytica. Mark Zuckerberg has said the company will do so now. That’s good. But it is four years—and one presidential election—too late.
The tools available to the [Federal Trade Commission] and state attorneys general are limited. The commission can assess civil penalties for violations of certain privacy statutes and regulations, and it can issue fines. But the former are narrow in scope and the latter low, because the fines must reflect calculable losses suffered by consumers. State prosecutors are similarly hampered by having to prove that breaches led to actual harm. These legal constraints limit prosecution in cases where there are real losses but not of high monetary value.
There is a huge mismatch here. The use of user data is how online social networks work. In 2012, Tyler Moore and I studied identity management (the way users are authenticated) and data sharing. We observed that identity providers that shared lots of data with other providers—think of these as apps—were far more successful as “identity providers”:
Facebook provides a centralized system where its users can log in to third-party Web sites using Facebook credentials. To attract wary Service Providers, Facebook shares social network information in addition to demographic information ... [From] OpenID, the Service Provider only learns the e-mail address, while with Facebook the Service Provider learns the name, gender, list of friends, and all public information stored by Facebook. All profile information, including birthday, education and work history is also shared ... [Thus] it should be no surprise that [Facebook] has succeeded in attracting the participation of many more Service Providers than OpenID has.
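The asymmetry the passage describes can be sketched as two login payloads—a hypothetical illustration of the contrast, not the actual OpenID or Facebook API responses:

```python
# Hypothetical payloads illustrating the asymmetry described above --
# NOT the real OpenID or Facebook API responses.

# An OpenID-style identity provider asserts little beyond an address.
openid_style_response = {
    "email": "alice@example.com",
}

# A Facebook-style provider hands the service provider a rich profile,
# including the social graph itself.
facebook_style_response = {
    "email": "alice@example.com",
    "name": "Alice Example",
    "gender": "female",
    "birthday": "1990-01-01",
    "friends": ["bob", "carol", "dave"],
    "education": ["Example University"],
    "work": ["Example Corp"],
    "public_profile": {"likes": 312, "posts_visible": True},
}

# The service provider's incentive: far more attributes per login.
extra_fields = set(facebook_style_response) - set(openid_style_response)
print(sorted(extra_fields))
```

Every extra field is a reason for a service provider to prefer the data-rich login—which is exactly the competitive dynamic Moore and I observed.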
As long as social networks rely on advertising based on user data, rather than a subscription model, to drive their business, there will be excessive use of personal data. That's why Zuckerberg's response—“Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform”—is notable for its emphasis on security and its lack of attention to privacy. As Facebook Chief Security Officer Alex Stamos observed, Cambridge Analytica was not a data breach. Cambridge Analytica’s use of the personal information of 50 million Facebook users was a privacy breach. And so far, Facebook is making precious few alterations to fix that problem.
I'll end on a personal note: In 2012 I wanted to teach a freshman seminar at Harvard on Privacy and Online Social Networks. But because I care about privacy, the only social network I was on was LinkedIn. A fellow faculty member told me I’d have no credibility teaching the course. I taught a course called “Is Privacy Dead?” instead (our conclusion was no, but that it was in great danger). Several years later, I received a LinkedIn connection request, but at an email address I had not supplied to the company. I wrote the person who appeared to have initiated the request and asked if he had done so; if not, I was quitting LinkedIn. I wanted the company to use the data I supplied it, not to vacuum up other information about me. My contact said no, he had not, and could I please tell him how to leave LinkedIn. That did it; within minutes I left LinkedIn. I imagine he did as well.
Sure, not being on social networks is costly. I don’t see photos of a friend’s trip to Tuscany, and I’m not able to advertise my new book nearly as well. But as the recent events have shown, my privacy and security are better protected when I eschew these platforms. So I think my Harvard colleague was wrong. I think I had much more credibility to teach Privacy and Online Social Networks than if I had been on Facebook. It seems many others are coming to the same conclusion. Until the online social networks change how they operate, getting off of these websites is a necessary step for protecting whatever privacy any of us have left.
Editor's Note: Since the publishing of this piece, the New York Times has reported that Cambridge Analytica may have harvested data from as many as 87 million users. This post relies on an earlier estimate of 50 million.