
Your Voter Records Are Compromised. Can You Sue? Theories of Harm in Data-Breach Litigation

Merritt Baer, Chinmayi Sharma
Monday, August 7, 2017, 11:03 AM

Last year, the Republican National Committee hired a firm called Deep Root Analytics to collect voter information. The firm accidentally exposed approximately 198 million personal voter records. This was 1.1 terabytes of personal information that the company left on a cloud server without password protection for two weeks.

On June 21 of this year, victims filed a class action in Florida court against Deep Root Analytics for harm resulting from a data breach.

Published by The Lawfare Institute


Donald Trump has denounced such breaches as “gross negligence.” The Deep Root lawsuit took him at his word, using that quote as evidence to make a claim on the legal theory of negligence. The complaint demands more than $5 million in damages.

Defendants in data-breach cases (here, Deep Root Analytics) often challenge a claim on the grounds that the pleading does not allege an injury that is (1) “concrete, particularized and actual or imminent,” (2) caused by the defendant, and (3) redressable by a court of law.

To survive a standing challenge, a plaintiff can show injury through actual harm, through a statutory violation that amounts to injury in itself, or through imminent future harm.

How Do We Show Harm?

A showing of actual harm is the most straightforward to defend in court—this works if you have evidence of actual dollars lost. But exposure of data does not necessarily equate to damages. How many people, once their data are breached, will have their identities stolen? And even the cost of identity theft fails to account for longer-term expenses resulting from dips in credit score or the cost of time spent trying to prevent loss. What about data such as health records, educational transcripts and, yes, voter information?

(Not for nothing, when a hacker publicly demonstrated at DEFCON last week that voting machines could be hacked, he discovered that many machines he’d purchased on eBay still had hundreds of thousands of stored voter records. Meanwhile, 10 million voter records were listed last month on eBay for $4, potentially a political statement about societal carelessness. Should carelessness decrease the value of your personal data? Cost of damage to the victim as a result of breaches does not necessarily track to price of the data to buyers on the dark web.)

Occasionally, courts have allowed plaintiffs to proceed based on the claim that the defendant’s violation of a statute constitutes injury in itself. This year, the U.S. Court of Appeals for the 3rd Circuit concluded that a violation of the Fair Credit Reporting Act gave a private right of action to a plaintiff and amounted to an injury even without an additional showing of harm, because the very purpose of the law was to prevent the wrongful disclosure of personal information. The court even implied that data-breach cases always carry imminent risk of future harm and automatically confer standing. But this inference was particular to the court and has not been affirmed. (In general, beyond the data-breach context, there is a judicial split over whether a statutory violation confers de facto standing or whether a more particularized showing of injury is needed. The 3rd Circuit is the first and only circuit so far to confer standing in a data-breach case because of a statutory violation, without a particularized showing of concrete injury.)

Finally, there is an avenue for plaintiffs who have not yet suffered harm to bring suit: the theory of “future harm.” Under this theory of recovery, the defendant wrongfully exposed the plaintiff to the risk of harm. This theory is clearly the most apt for data-breach victims, but it is in tension with the requirements of standing that the harm be particularized and concrete—or, simply, quantifiable. Many courts, including the Supreme Court in Clapper v. Amnesty Int’l and Spokeo, Inc. v. Robins, have dismissed cases involving potential injury from a data breach or disclosure because imminent, particularized harm is a difficult standard to meet.

Assigning a dollar amount to damages from data breaches presents a persistent challenge. This year, the 2nd Circuit joined the 1st, 3rd and 4th Circuits in requiring a heightened pleading to establish injury through a theory of future harm in data-breach cases. In Whalen v. Michaels Stores, the court found that an increased risk of future identity fraud that forced the plaintiff to invest in preventive measures was too attenuated to amount to real injury.

Similarly, in Beck v. McDonald, the 4th Circuit determined that the threat of future harm was not sufficiently imminent because (1) two years had passed since the breach had occurred, which made the risk of future harm more unlikely with the passage of time; (2) the standing question arose at summary judgment and so both parties had conducted extensive discovery, during which the plaintiffs were wholly unable to uncover a single case of identity fraud; and (3) the court refused to make the same inference that the 6th and 7th Circuits made: that offering free credit-monitoring reports was evidence of a future threat of harm.

How Do We Apply This to the Current Attack Landscape?

The difficulty and inconsistency with which courts have applied these standards are telling. They are tough to reconcile—a justice-based desire to rectify or prevent damages, and a legal requirement that damages be articulable in terms that data breaches defy. But this year, some cyberattacks have taken on new characteristics that add even more texture to this landscape.

Lately, we have seen nation-state-motivated acts of aggression that lack overt or definable financial motivation. The recent NotPetya attack, which first appeared to be ransomware, seems in fact to have been not a source of revenue but a nation-state-level cyberweapon designed to be a “wiper”—in NotPetya’s case, one that deleted a victim’s master boot record. Even the infamous attack on the Office of Personnel Management in 2015 appears to have been motivated by nation-state-level data compromise, not dollars.

Theories of recovery can be awkward to apply to data breaches that don’t look like crimes we have seen before. We try to protect consumers from companies that fail to safeguard their data and from thieves that use information to steal from victims. But rarely in another context would we see an individual citizen-consumer evaluating when and how to bring suit for an invasion of data that was conducted on a vast and likely nation-state scale. In this new landscape—where companies are the arbiters of our private information in areas as varied as health care and banking—we may turn to the judicial system for recourse when a company’s data protections fail. And it’s difficult for courts to decide what those damages are in this context.

The ability to sue for damages serves a number of purposes, some particular to the case at hand and some extendable to future actions. A settlement in favor of a plaintiff may compensate her for her losses but may also exact a cost on the defendant. On the large scale, we may want to allow more damages against companies whose negligence leads to data breaches, to make clear to other companies the costly but avoidable consequences of data loss. But flaws in the current system were made clear in a recent case: In late June, Anthem agreed to pay $115 million in a data-breach lawsuit, but by this week 18,500 more records were shown to have been compromised in the breach. When we see breaches with costs that are difficult to calculate, we may have to realign our motives for allowing redress in court.

The views expressed herein are the personal views of the authors and do not necessarily represent the views of the FCC or the U.S. government, for whom one of the authors works.

Merritt is an expert in emerging technology and cybersecurity. Merritt has experience in all three branches of government, most recently in the Department of Homeland Security where she worked in the Office of Cybersecurity & Communications, often termed the United States’ “cyber firehouse.” Before joining the government, Merritt started a business advisory / legal practice, working with emerging tech companies at early stages of growth. Merritt speaks regularly on emerging areas including the future of the Internet, artificial intelligence and robots, current cybersecurity issues in 5G, cloud, mobile, IoT and ICS, corporate interactions with government cyber, women in tech, entrepreneurism and innovation. Her insights on business strategy and tech have been published in Forbes, The Baltimore Sun, The Daily Beast, Talking Points Memo and ThinkProgress. She continues to publish academic articles on Internet questions, and her work has appeared in the journals of Temple, Georgetown, Santa Clara, and the University of Virginia. Merritt is a graduate of Harvard Law School and Harvard College. She is admitted to the Bars of New York, the United States Court of Appeals for the Armed Forces, and the United States Supreme Court. Based in Washington, DC, she is a Fellow at the East-West Institute, founder of women’s tech advocacy group Tech & Roses, Adjunct Professor of Cybersecurity at the University of Maryland, and an amateur boxer.
Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
