Brokered Violence: Safety for Sale in the Free Marketplace of Data
In a world where data brokers enable violence by selling our information, safety requires a data-deletion right that people can reliably enforce.

Published by The Lawfare Institute
In the back seat of Vance Boelter’s SUV, police found a semiautomatic rifle and a handwritten list of 11 “people-search” websites run by data brokers. Boelter had meticulously noted which brokers provided the most detailed reports at the lowest price. For less than the cost of a tank of gas, he amassed murder-ready dossiers containing home addresses, family names, and daily routines for 45 Minnesota Democratic lawmakers—information police believe he used in June to murder Minnesota House Speaker Melissa Hortman and her husband, and to injure state Sen. John Hoffman and his spouse.
Boelter’s actions are chilling yet disturbingly familiar. The cheap, convenient dossiers that empowered his violence have enabled targeted attacks for decades, from the stalker who murdered his former high school classmate in 1999 to the gunman who killed Judge Esther Salas’s son in her New Jersey home in 2020. Despite these high-profile incidents, data brokers have done little to change their practices and protect the public. The result is a multibillion-dollar marketplace built on surveillance and intimidation, one that profits by transforming public records and privately scraped data into literal hit lists for sale.
In recent years, a flurry of legislative efforts has aimed to blunt this threat, including California’s DELETE Act, New Jersey’s Daniel’s Law, and the federal Daniel Anderl Judicial Security and Privacy Act. But these measures remain inadequate. Even their best provisions still force the targets to stay one step ahead of the brokers or wait weeks for promised protection. So far, laws regulating data brokers tend to be riddled with holes that risk legitimizing and entrenching some of the industry’s most dangerous practices.
Behind the legislative carve-outs and concessions lies a constitutional sleight of hand: Brokers claim a First Amendment right to package data from quasi-public sources—from property deeds to voter rolls to social media—and then sell it to any willing buyer. This dangerous business practice, cloaked in contentious free-speech arguments, elevates commercial profit above human life. Brokers and lawmakers also seem to assume answers to delicate constitutional questions that are far from settled, particularly in light of technological shifts that could, and perhaps should, disrupt long-standing legal doctrines.
The Minnesota assassinations, the latest in a growing list of tragedies, should prompt a long-overdue reckoning with the notion that every commercial compilation of “public” data deserves constitutional sanctuary. It’s time to test whether the First Amendment really places the frictionless, profit-driven dissemination of personal data beyond the reach of regulation.
How Online Data Brokers Facilitate Offline Violence
Data brokers operate by harvesting government records, social media profiles, and private shadow data—and then employing sophisticated machine-learning techniques to identify patterns, predict routines, connect individuals through associations, and compile remarkably detailed personal profiles for sale online. For people seeking to perpetrate violence, these inexpensive and accessible dossiers reduce the complexity and effort required to plan and inflict their attacks. Tracking someone down now requires only an internet connection and a credit card, rather than extensive internet searches and documents requested from local public officials.
The Minnesota attacks tragically illustrate how brokered data can turn a grievance into a kill list. The warning signs have been there for years. In 2020, Roy Den Hollander, a misogynistic “men’s rights” activist, built a dossier of information available online to hunt Judge Esther Salas, who presided over his constitutional challenge to the military’s male-only draft. He arrived at her home disguised as a deliveryman, shot her son, Daniel, dead, and severely wounded her husband.
You needn’t be famous to be at risk. In 2023, investigative reporters revealed that data brokers were selling detailed, real-time location data tracking U.S. military personnel’s movements at sensitive European bases. Similarly, countless private individuals—people seeking reproductive care, immigrants targeted for deportation, and domestic violence survivors whose new addresses can be purchased by their abusers—have been placed at risk by brokered data. Brokers endanger private citizens every day.
Violence represents the primary harm facilitated by data brokers, but the secondary harms are profound, lasting, and often overlooked. Victims endure forced relocations, job loss, ongoing financial burdens, and debilitating hypervigilance—not to mention the re-traumatization of persistently confronting the threats they face. Slowly but surely, these burdens can lead vulnerable individuals to recede into the background, withdrawing from society for their safety while navigating the many-headed Hydra that is the current broker ecosystem. Even though some brokers claim people may opt out of data collection, their onerous, document-intensive processes present victims with a Sisyphean task: Find the data that could be used against you, file hundreds of requests to remove the information, wait weeks or even months for takedowns while you remain exposed, and then remain alert to see if the information crops up somewhere later. Rinse and repeat. This whack-a-mole dynamic forces victims to essentially assume a new full-time job to achieve a modicum of safety.
In the wake of the Minnesota killings, Sen. Ron Wyden (D-Ore.) captured the stakes vividly: “Congress doesn’t need any more proof that selling data to anyone with a credit card is deadly. Every American’s safety is at risk until Congress cracks down on data brokers.”
The Inadequacy and Complicity of Recent Legislative Efforts
Whether through empathy or self-interest, lawmakers are finally considering how to tackle brokered violence. Indeed, within days of Boelter’s killing spree, members of Congress held a press conference calling for a bill that would make it easier for lawmakers to scrub their personal information from the internet. These efforts add to a recent lineage of measures to rein in data brokers. But even the most ambitious legislative proposals leave much to be desired.
New Jersey’s Daniel’s Law was one of the first to address the threat directly, offering privacy protections for judges, prosecutors, and police by prohibiting the posting of their home addresses online. But the law places the burden on individuals to proactively request data removal from government agencies and commercial entities alike. Worse still, enforcement mechanisms were initially weak and the scope woefully limited. It took until 2023, after legal challenges and mounting public pressure, for the law to be amended to include expanded protections and meaningful penalties that had any prayer of curbing brokers’ violations. Other states—such as Wisconsin—have begun to follow suit.
At the federal level, the Daniel Anderl Judicial Security and Privacy Act offers similar protections, but only for federal judges. It mandates that data brokers remove information about judges upon request and restricts the posting of personal information. However, it provides broad carve-outs for media and information considered in the “public interest,” while also failing to cover other vulnerable officials at risk of targeting. Even rulemaking has proved unsuccessful. The Consumer Financial Protection Bureau proposed a promising rule at the tail end of the Biden administration to protect consumers from violence and stalking. The rule would have required data brokers to comply with the Fair Credit Reporting Act and to receive explicit authorization from consumers to obtain or share data. But six months after publishing the rule in the Federal Register, new leadership withdrew it, leaving consumers at risk.
One of the most egregious shortcomings of these state and federal laws is that, by focusing solely on government officials, they do nothing to alleviate the unjust burden placed on other vulnerable groups. In addition, neither law establishes a centralized system for data removal, instead forcing people to navigate disparate takedown requests across hundreds of brokers. This feeds directly into the fractured landscape that produces so many of the secondary harms these laws were intended to prevent. It is questionable how much safety these provisions really provide.
California’s DELETE Act and its federal counterpart, which has yet to pass, attempt to address that fundamental failure by shifting the burden off the shoulders of those least able to bear it. These regulatory regimes aim to streamline the opt-out process by establishing a centralized mechanism that allows people to request deletion from all registered data brokers. In theory, this promises relief from the impossible task of submitting individual takedown requests into perpetuity. But in practice, both efforts are riddled with ambiguities and limitations that undercut their effectiveness. For example, both laws grant brokers unfathomably long windows to comply with takedown requirements, leaving victims endangered for up to 45 days. They also contain ambiguous exemptions that threaten to swallow the protections, and, perhaps most crucially, they are startlingly vague on the implementation details that will determine whether the laws have any effect at all.
A Better Way
In “Brokering Safety,” our forthcoming article in the California Law Review, we propose a solution crafted to rectify some of the gaps in existing and proposed legal interventions. The centerpiece of our proposal is a two-step registry: First, data brokers must register with a federal or state agency; and second, people seeking to invoke protection must register in a centralized victim registry. Unlike the current model, which relies on victims spotting threats and filing takedown requests, our system offers people a single, free, privacy-preserving ability to request protection, demands swift compliance from brokers, and imposes requirements that prevent dangerous disclosures before they happen.
Our proposal, however, limits the scope of its coverage to victims of stalking and domestic abuse. That focus was not driven by a belief that only these individuals deserve protection but, rather, by the desire to avoid the political and legal constraints that have been hostile to the protection of personal data when it butts up against commercial interests. In today’s environment, where legislators are increasingly aware of the dangers posed by brokered data, it is time to consider whether the protections we propose should extend to everyone who wishes to shield themselves from the possibility of brokered abuse.
All data brokers would be legally obligated to check the registry at least once per day (ideally in real time via an API) and purge covered data within 24 hours. This eliminates the dangerous lag that allows abusers to strike during a 45-day grace period. A key aspect of our proposal involves shifting the burden of identifying and deleting sensitive information onto data brokers—and away from registrants. Brokers would be responsible for purging not only direct identifiers (names, addresses, phone numbers) but also linked identifiers (spouse names, workplaces, school pickup locations) and derived or predictive information (income brackets, voting precincts, behavioral analytics). This extends measures such as Daniel’s Law to reach the full spectrum of machine-linked identifiers that can be used to locate and target someone.
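To make the mechanics concrete, the daily registry sync described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not a statutory specification: the class and field names (`Record`, `BrokerStore`, `sync_with_registry`) are hypothetical, and a real registry check would run against an agency API rather than an in-memory set of registrant IDs.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a broker-side compliance job: pull the victim
# registry (modeled here as a set of registrant IDs) and purge every
# covered record within the 24-hour window the proposal envisions.

@dataclass
class Record:
    subject_id: str                                # person the record is about
    direct: dict = field(default_factory=dict)     # name, address, phone
    linked: dict = field(default_factory=dict)     # spouse, workplace, etc.
    derived: dict = field(default_factory=dict)    # income bracket, analytics

class BrokerStore:
    def __init__(self, records):
        self.records = {r.subject_id: r for r in records}

    def sync_with_registry(self, registry_ids):
        """Purge all covered data for registered individuals.

        Deletes registrants' dossiers outright (direct, linked, and
        derived identifiers together), then scrubs linked identifiers
        in OTHER people's records that point at a registrant, mirroring
        the proposal's full-spectrum purge of machine-linked data.
        """
        purged = []
        for sid in list(self.records):
            if sid in registry_ids:
                del self.records[sid]              # drop the whole dossier
                purged.append(sid)
        for rec in self.records.values():
            rec.linked = {k: v for k, v in rec.linked.items()
                          if v not in registry_ids}
        return purged
```

The key design point the sketch captures is that the broker, not the victim, does the matching: once someone appears in the registry, every record that names or links to them is the broker’s responsibility to find and delete.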
To ensure compliance, we propose a strict ban on reselling and republishing covered data for registered individuals. Brokers would have to pass blocklists downstream and audit their resellers. Failure to comply should expose both the seller and the purchaser to legal liability. The government agency tasked with overseeing this system would be responsible for auditing, enforcing, and reporting violations—ideally including statistics about repeat offenders and suspicious behavior, such as buyers repeatedly searching for the same person after they register for protection. This oversight could be done nationally by an agency like the Federal Trade Commission or at the state level by generalist consumer-protection authorities or more specialized bodies (such as the California Privacy Protection Agency).
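The auditing idea above, flagging buyers who repeatedly search for the same person after that person registers for protection, reduces to a simple pattern count over a purchase log. The sketch below is purely illustrative: the function name, log shape, and threshold are assumptions, not anything drawn from existing law or an actual enforcement system.

```python
from collections import Counter

# Hypothetical sketch of the oversight agency's repeat-query audit:
# given a log of purchase queries, flag any buyer who looked up the
# same registered individual a suspicious number of times.

def flag_suspicious_buyers(query_log, registry_ids, threshold=3):
    """Return buyer IDs who queried one registered subject >= threshold times.

    query_log:    iterable of (buyer_id, subject_id) purchase attempts
    registry_ids: set of subject IDs enrolled in the victim registry
    """
    counts = Counter(
        (buyer, subject)
        for buyer, subject in query_log
        if subject in registry_ids      # only registered individuals are covered
    )
    return sorted({buyer for (buyer, _), n in counts.items() if n >= threshold})
```

A report built from this kind of count is what would let regulators spot the pattern Boelter’s spreadsheet exemplified: one buyer, many lookups, one target.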
This framework flips the investigative burden to the least-cost avoider: the broker. It collapses the period of insecurity from weeks to a single day. It sheds further light on the dangers posed by this industry. And it protects not only victims of past violence but anyone who anticipates a threat and takes action to prevent it.
Innovative lawmakers could even pull complementary policy levers to strengthen this framework. For example, brokers might be required to implement know-your-customer rules for any sale of unblocked data, verifying a buyer’s identity and purpose. A private right of action could provide remedies for individuals harmed by noncompliant sales—especially when the injury stems from secondary harm like forced relocation or, worse still, the primary harm of violence. Brokers could be subject to escalating penalties, including disgorgement of profits or license revocation for repeat offenders. And lawmakers could mandate tracking and flagging of buyers who demonstrate patterns of abusive or obsessive data purchases.
None of these proposed reforms are radical. They are reasonable safeguards needed in a world where data is currency and lives are collateral. Brokers currently profit by endangering individuals with impunity. The only remaining question is whether lawmakers will act before more lives are lost.
Calling the First Amendment Question
The greatest legal bulwark for the broker industry is the First Amendment—a shield brokers wield to argue that compiling and selling personal dossiers is constitutionally protected speech. They frame their data sales simply as lawful disseminations of truthful information, but this argument assumes away deeply contested legal territory. A brokered dossier—machine-generated, monetized, and designed for hyper-targeting—is far closer to a dangerous commercial product than political commentary or investigative journalism. It is not clear that such activity should be considered “speech” at all. And if it is, it might well belong in a category of lower-value commercial expression, akin to spam marketing or the unauthorized disclosure of medical records.
Courts have not yet squarely resolved whether the First Amendment extends to data brokerage in its current form. Some lean toward strict scrutiny. Others, such as the district judge who upheld New Jersey’s Daniel’s Law by applying the Supreme Court’s three-factor balancing test in Florida Star v. B.J.F., point to a middle path. That balancing approach gives weight to safety and privacy without eroding legitimate journalism or public-interest research, which can and should be preserved through carefully crafted exemptions. What’s indefensible is the reflexive privileging of corporate speech interests over individual safety and expression. Lawmakers and judges must openly confront that trade-off—and decide if it’s one they’re willing to make.