
Using ‘Sensitive Locations Lists’ to Address Data Broker Harm

Justin Sherman
Thursday, March 14, 2024, 10:12 AM
Stopping the sale of location data on sensitive locations works well for the FTC—but not for Congress.
Federal Trade Commission Building (Kurt Kaiser, https://commons.wikimedia.org/wiki/File:Federal_Trade_Commission_Building_2.jpg; Public Domain)


Editor’s note: This is the second article in a two-part analysis. The first article can be found here.

The Federal Trade Commission (FTC) recently reached a settlement with data broker X-Mode Social and its successor company, Outlogic. It is a first-of-its-kind order that requires X-Mode to delete some of its already-collected data, implement a privacy program, put data use requirements in its contracts, and much more. The first part of this analysis unpacked the order and its significant requirements, such as a “supplier assessment program” to ensure that companies supplying geolocation data to X-Mode have received consumers’ affirmative, express consent.

These and other components of the order are significant for both Americans’ privacy and other companies selling location data. Particularly, the development of a list of sensitive locations draws fairly specific lines (in a sense, literally) around facilities where the collection and sale of geolocation data is especially high risk. It is an innovative and important use of the FTC’s authorities to address the harms of a company selling location data. As larger privacy debates beyond the FTC continue, sensitive locations lists could be an interesting way of thinking about the regulation of location data brokerage.

All of which raises an important question: Can legislators, privacy regulators, and even industry leverage sensitive locations lists as a kind of best practice to minimize harm?

The idea certainly has merit in the short term. It allows the FTC to target, under its current authorities, some of the most egregious kinds of harms emanating from the collection of location data surrounding medical centers, domestic violence shelters, places of worship, and the like. But if it were to become the only focus of congressional legislation or broader public policy efforts, a list-based approach raises complicated scoping questions and risks shifting the conversation toward the use of location data once compiled—skipping past the surreptitious, mass collection of it in the first place. Legislatively, the best approaches for privacy, safety, civil liberties, and national security will seek across-the-board protections for all Americans from the sale of location data.

Sensitive Locations Lists and Tackling Harm

Clearly, there is much to unpack in the FTC’s unprecedented settlement. Of particular note is the FTC defining a list of sensitive locations. In building this list, the FTC is giving X-Mode a clear set of locations about which it cannot sell data—and acting within its authorities to investigate “unfair or deceptive acts or practices” (15 U.S.C. § 45). The prohibition stems from the FTC’s argument that the broker is committing an unfair practice when it sells persistently identified data about people visiting said locations.

But this raises a broader policy question, beyond the FTC’s immediate goals: Is the idea of “sensitive locations” a useful way to think about data brokerage, location data, and the corresponding risks and harms to individuals? The use of such a list could have many benefits in a broader policy approach, like giving companies clear compliance checklists and prioritizing protections for the highest-risk places and groups.

Simultaneously, while it may work well for the FTC at this juncture, legislators and other policymakers should consider how sensitive locations lists may shift the policy conversation more toward the sale of data once collected—rather than toward data brokers’ widespread aggregation of location data on virtually all locations and Americans in the first place.

If other policymakers wanted to push for protections based on sensitive locations lists (such as in federal law), they would have to think about how to define sensitivity and how to scope locations. The FTC scopes the idea of sensitive locations data in three parts: defining “location data,” defining “sensitive locations,” and then defining “sensitive location data.” First, it defines “location data” as:

any data that may reveal a mobile device’s or a consumer’s precise location, including but not limited to Global Positioning System (GPS) coordinates, cell tower information, or precise location information inferred from basic service set identifiers (BSSIDs), WiFi Service Set Identifiers (SSID) information, or Bluetooth receiver information, and any unique persistent identifier combined with any such data, such as a mobile advertising identifier (MAID) or identifier for advertisers (IDFA). Data that reveals only a mobile device or consumer’s coarse location (e.g., zip code or census block location with a radius of at least 1,850 feet) or that is collected outside the United States and used for (a) Security Purposes or (b) National Security purposes conducted by federal agencies or other federal entities is not Location Data.

This is a lengthy definition, but the reference to what makes a location “coarse” (that is, less precise) is similar to language in state comprehensive privacy laws, which generally define location data as sensitive personal information when it’s precise within approximately 1,750-1,850 feet. For example, California’s consumer privacy regime defines “precise geolocation” as:

any data that is derived from a device and that is used or intended to be used to locate a consumer within a geographic area that is equal to or less than the area of a circle with a radius of 1,850 feet, except as prescribed by regulations.

Here, data about a location at the census block level, with a radius of at least 1,850 feet, is not considered location data. The FTC also draws on other actions, such as its ongoing lawsuit against location data broker Kochava, by including language about MAIDs, GPS coordinates, and other ways that location data is gathered and linked to people’s identities. Next, the FTC defines “sensitive locations,” described in the first article of this two-part analysis (including medical centers, religious organizations, and labor union offices). Then, with “location data” and “sensitive locations” defined, the FTC defines “sensitive location data” as “any consumer Location Data associated with a Sensitive Location.” This last part of the scoping is quite straightforward. If policymakers were to pursue this kind of list in a broader privacy approach, they could reference these definitions and consider whether the 1,850-foot threshold provides sufficient protection for individuals.
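To make the three-part scoping concrete, here is a minimal sketch of how the definitions compose, assuming a simple category-based lookup. The threshold constant tracks the order’s 1,850-foot language, but the data structures, category names, and function names are invented for illustration (and the sketch omits the order’s carve-outs for security-purpose data collected abroad):

```python
# Illustrative sketch of the FTC's three-part scoping; category and function
# names here are hypothetical, not taken from the order itself.

COARSE_RADIUS_FEET = 1_850  # below this radius, data is "precise" under the order

SENSITIVE_CATEGORIES = {
    "medical_facility", "religious_organization", "labor_union_office",
    "domestic_violence_shelter", "school",
}

def is_location_data(radius_feet: float, collected_in_us: bool) -> bool:
    """Step 1: precise (sub-1,850-foot) data collected in the U.S. is 'location data'."""
    return collected_in_us and radius_feet < COARSE_RADIUS_FEET

def is_sensitive_location_data(radius_feet: float, collected_in_us: bool,
                               place_category: str) -> bool:
    """Steps 2-3: 'sensitive location data' is location data tied to a sensitive location."""
    return (is_location_data(radius_feet, collected_in_us)
            and place_category in SENSITIVE_CATEGORIES)
```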

Further, to the extent that policymakers want to prioritize the biggest harms and greatest risks, sensitive locations lists could help articulate which places pose that greater risk to individuals. Mental health and reproductive health care clinics, domestic violence shelters, children’s schools, and religious places of worship, among others on the FTC’s list, are certainly places where the sale of location data can reveal highly personal information about people that is prone to abuse.

Insurance companies, for instance, could learn about people’s medical conditions by purchasing data on geolocation pings at mental health facilities, reproductive health clinics, or treatment centers specializing in HIV or cancer. And advertising firms could sidestep the Children’s Online Privacy Protection Act—which covers data collected from children online, not location data purchased about them—to buy location data about children, including their families and the schools they attend. Individuals can even buy location data, filter down within larger data sets, identify specific queer people, and expose their sexual orientation.

Policymakers weighing the use of sensitive locations lists could consider adding other places that pose greater risks to individuals or society. The FTC mentioned in its May 2023 complaint, for example, that X-Mode was advertising data for sale about military bases. Given the number of U.S. data brokers that collect and sell data about U.S. military service members and national security personnel, legislators and policymakers could build a sensitive locations list that also includes certain U.S. military bases and government facilities.

Sensitive locations lists, practically speaking, may also help companies implement more concrete privacy protections. For instance, if a federal law prohibited data brokers from selling data about a defined list of sensitive locations, brokers could implement internal protocols to flag when collected location data may relate to a place of worship, the site of a First Amendment-protected public demonstration, or an addiction treatment facility. The brokers could use a concrete list of location types to identify actual locations in each bucket (such as, for “mental health facilities,” developing lists of all mental health facility addresses not to collect on)—and could give those specific lists to salespeople. Then, when a prospective buyer approaches a broker for location data on a specific mental health facility at a certain address, the salesperson could check against the prohibited list and block the transaction, as sketched below. Others involved in a data broker’s transaction process, such as those building contracts and monitoring client account activity, could be trained to use the list to enforce prohibitions on sensitive location sales as well.
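A minimal sketch of that screening step, assuming the broker has geocoded its prohibited facilities in advance; the facility list, buffer distance, and function names are all hypothetical:

```python
import math

# Hypothetical screening check: block sales of location data that fall within
# a buffer around any facility on the broker's sensitive locations list.

SENSITIVE_FACILITIES = [
    # (label, latitude, longitude) -- a pre-geocoded list of prohibited sites
    ("example mental health facility", 38.9072, -77.0369),
]

BUFFER_FEET = 1_850  # assumed buffer, echoing the order's precision threshold

def feet_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in feet, via the haversine formula."""
    earth_radius_feet = 20_902_231
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_feet * math.asin(math.sqrt(a))

def request_is_blocked(lat: float, lon: float) -> bool:
    """True if a buyer's requested point lands too close to a listed facility."""
    return any(feet_between(lat, lon, f_lat, f_lon) <= BUFFER_FEET
               for _, f_lat, f_lon in SENSITIVE_FACILITIES)
```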

Companies’ implementation of sensitive locations lists, compliance programs, and know-your-customer controls for data sales would be valuable because it would improve on the status quo. Academic research I have led finds that the data broker industry lacks a set of best practices to screen potential customers and ensure compliance with contracts once executed. Brady Kruse of Duke University argued recently that Congress should require data brokers to implement know-your-customer controls—based on the risk level of the data—and use know-your-customer controls in the financial sector as an analogous model. Sensitive locations lists could be built into such programs and used to ensure that buyers were not seeking data about sensitive locations. Data brokers could then use sensitive locations lists in compliance programs to ensure that once a client buys data about a nonsensitive location, the client does not modify or manipulate the purchased data to produce data about a sensitive one.

That said, there are many possible drawbacks to legislators and other policymakers relying too heavily on sensitive locations data lists. Corporate controls should be a supplement to federal regulation, not a replacement for regulation entirely. In certain areas of data brokerage—such as narrow cases where data is useful to prevent fraud and identity theft—it may be that one policy response is to regulate covered companies and permit them to still sell data, with required controls around how and when. But in some areas, as with location data, it’s a different picture. The risks to privacy, civil rights and freedoms, and national security are grave—and the supposed “benefits” to consumers are most often those that line the pockets of retailers (such as more granular profiling of consumers and targeting of real-time ads). Not to mention, some brokers have had internal controls for other types of data that salespeople and internal managers then violated to close a sale.

Hence, with data as invasive and risky as location data, the broader policy response should be less about corporate controls and more about prohibiting the sale of location data entirely. Such a measure would implement regulations across the entire data brokerage ecosystem and ensure protections were in place for all Americans. The FTC cannot institute that kind of ban—its list-focused prohibition with X-Mode is based on existing legal authorities—but legislators can address the broader harms flowing from the collection and sale of data.

Centering privacy policy discourse on lists, where some people or places are protected and others are not, also creates many scoping challenges. Take the military as an analogous example. In a recent study I led at Duke University, we bought nonpublic, sensitive, and individually identified data from U.S. data brokers about U.S. military service members’ health conditions, finances, religions, and even children. We did the same through a .asia domain and had the data transferred to servers in Singapore. In both cases, with virtually no vetting from the sellers, and for as little as $0.12 per service member, we were able to purchase and transfer service members’ personal data. This kind of individually identified, sensitive, and nonpublic information about active-duty members of the military could be used by foreign adversaries and malign actors to target service members, profile them, blackmail them, and more.

A member of Congress who wanted to write legislation to tackle this risk could focus on banning the sale of nonpublic, sensitive, identified data about people serving in the U.S. military. But doing so would not cover those people’s spouses—so military spouses would have to be included on the list, too, because foreign intelligence services could target them to get to a particular service member. Yet this would not fully protect the households of service members with partners to whom they are not married—so the list would have to be broadened again beyond just spouses. And the list would not include the children of service members, so they would have to be added—and the cycle continues. The point is to illustrate that simply identifying a group in need of protection and then designing a “do-not-sell” list around them does not solve the problem.

With location data, there are many facets to the same issue: If this hypothetical, military-focused bill were about location data, the list of do-not-sell locations might focus on military academies (such as the Naval Academy) and military bases (such as Fort Liberty, formerly Fort Bragg). But it’s not so simple. Perhaps foreign intelligence services looking to blackmail cleared service members are less interested in a person’s on-base movements and more interested in where that person goes off-base. A person might be in financial trouble and visiting lenders to seek loans; another might be secretly visiting an addiction treatment center; yet another might be visiting an unlisted government site far more sensitive than a publicly known military installation. Each of these scenarios could present counterintelligence risks that a legislator might want to mitigate, yet each also would require building a sensitive locations list that becomes much larger in scope.

Moreover, quick and easy lists become even more unmanageable with location data, because the absence of a location data ping at a particular time is itself a data point. If location data pings are available on a person in most places outside of 9 a.m. to 5 p.m. each weekday, and the pings then suddenly become unavailable as the person approaches a certain part of Langley, Virginia (the headquarters of the Central Intelligence Agency), it would not be difficult for a foreign actor to connect the dots.
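A minimal sketch of that inference, assuming an analyst holds a device’s timestamped pings; the gap threshold and function names are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical illustration: a recurring gap in an otherwise steady ping trail
# is itself a signal, and the pings bracketing the gap bound where the device
# went dark.

GAP_THRESHOLD = timedelta(minutes=30)  # assumed threshold for a "dark" window

def dark_windows(pings: list[datetime]) -> list[tuple[datetime, datetime]]:
    """Return (last-seen, next-seen) pairs where a normally chatty device vanished."""
    pings = sorted(pings)
    return [(a, b) for a, b in zip(pings, pings[1:]) if b - a > GAP_THRESHOLD]

# If a device goes dark every weekday from roughly 8:45 a.m. to 5:15 p.m., and
# its last ping each morning sits near the same highway exit, the gap itself
# points to where the device spends the workday.
```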

Other hypothetical examples abound where blocking the sale of location data on a few places would not be sufficient to address privacy or national security risks to individuals and society. Under the FTC’s current regulatory powers, a list-based approach is an effective way to identify and prioritize high-risk locations and groups. But for legislators and policymakers debating broader privacy changes, relying on sensitive locations lists presents more complexities and falls short of the broader privacy protections needed for all Americans, protections that legislators themselves could create.

One higher-level problem with adopting this approach in broader policy—beyond the FTC, which is rightfully using its existing authorities—is that focusing on sensitive locations lists shifts the policy conversation away from data brokers vacuuming up individuals’ geolocation data in the first place. Rather than talking about how 24/7 location tracking is already invasive, or how consumers don’t actually read privacy policies and terms of service agreements (and thus don’t really “consent” to their data being sold), the conversation becomes about the sale of data once gathered by data brokers. Companies then escape greater congressional and other accountability for data collection and its risks and harms to people, as the discourse focuses predominantly on internal compliance programs. A data broker could also maintain a list of sensitive locations about which it does not sell data, but that does not necessarily mean it won’t gather the data in the first place. If regulations do not target collection of data itself, data corresponding to sensitive locations could be sitting on internal company systems, and internal corporate decisions may be the only thing stopping it from being sold to third parties. When collection is not prohibited, for example, a broker could generate metadata from the raw, underlying geolocation signals to attempt to circumvent the technical language of a bill—not selling data about a location per se but still conducting analytics on people visiting that sensitive place.
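To illustrate that circumvention concretely, here is a hypothetical sketch in which a broker sells only a derived audience flag, never the underlying coordinates; the function names and segment label are invented:

```python
# Hypothetical sketch: the raw pings never leave the broker's systems, but an
# inference computed from them does -- no "location data" is sold per se.

from typing import Callable

def derive_audience_segment(
    device_pings: dict[str, list[tuple[float, float]]],
    in_sensitive_geofence: Callable[[tuple[float, float]], bool],
) -> dict[str, bool]:
    """Collapse raw coordinates into a sellable flag per device identifier."""
    return {
        device_id: any(in_sensitive_geofence(ping) for ping in pings)
        for device_id, pings in device_pings.items()
    }

# e.g., {"MAID-1234": True} could be marketed as a "visits health clinics"
# audience segment without transferring a single coordinate.
```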

Again, this point is not a criticism of the FTC, which continues to punch above its weight given its authorities and resources. Location data brokers quietly tracking and selling consumers’ movements without their fully informed, freely given consent—and with little substantial benefit, serious risks, and no alternatives for individuals—is certainly an unfair trade practice. This kind of list-focused approach makes sense for the FTC and prioritizes protections for some of the most at-risk populations. Simultaneously, it is worth playing out this idea as part of broader public policy and law and considering how it would fall short of robust protections across the board, for all people and locations. Even if legislators wanted to develop comprehensive protections just for a certain group (which they should not), the military example underscores how listing a few locations is not viable when faced with the scope of location data collection and companies’ and foreign actors’ persistence in hoarding and analyzing data.

***

The FTC’s settlement with location data broker X-Mode is unprecedented and expansive. It draws squarely on the FTC’s authority to investigate unfair trade practices and requires X-Mode to delete some of its existing data, implement a supplier assessment program, prevent the sale of data going forward about certain sensitive locations, and create ways for consumers to learn what data is sold about them or have it deleted entirely, among other requirements. Other location data brokers—many of which purchase or acquire their location data from mobile apps—may find themselves subject to this kind of agreement in the future if their practices fit the FTC’s unfairness criteria. Deletion of already-collected data, supplier assessment programs, internal privacy trainings, and other measures could all be on the table.

For a regulator like the FTC, which already has a notable case output and needs far more congressionally allocated resources, a sensitive locations list prioritizes protections for some of the most at-risk people and locations. In the X-Mode order, for instance, the sensitive locations list includes mental health facilities, domestic violence shelters, places of worship, and children’s schools. Prohibiting this large location data broker from selling data about those locations is an important measure from the FTC lawyers who led and worked on the case.

Yet legislators and policymakers considering this kind of list in a broader privacy approach should focus instead on restricting location data sales altogether. The risks to privacy, personal safety, civil liberties, and national security—such as location data brokers assembling lists titled “Cancer” or “Special Needs Kids,” domestic law enforcement buying location data on protesters, or foreign adversaries potentially accessing location data about security-cleared personnel—are too great to identify just a few locations that require safeguards. Ultimately, those with the authority to institute widespread protections against the sale of location data, such as members of Congress, should draw on the FTC’s findings and implement widespread protections for all places and all Americans.


Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
