
The Data Broker Caught Running Anti-Abortion Ads—to People Sitting in Clinics

Justin Sherman
Monday, September 19, 2022, 8:31 AM

In 2015, a data broker helped anti-abortion groups target women in clinic waiting rooms. The Massachusetts attorney general decided to act.

A reproductive health clinic. (Source: Carrie Mumah, https://tinyurl.com/ynjum8wk)


In July, the House Oversight Committee sent letters to data brokers SafeGraph, Digital Envoy, Placer.ai, Gravy Analytics, and Babel Street, as well as to five personal health apps, interrogating their collection and sale of people’s reproductive health information. Before that, Sen. Elizabeth Warren, D-Mass., wrote letters to SafeGraph and Placer.ai about their sales of location data pertaining to abortion clinics—after which both companies pledged to stop making that information available for sale.

Amid intensifying conversations about the post-Dobbs v. Jackson Women’s Health Organization privacy environment in the United States, particularly for those with the capacity to become pregnant, these congressional letters are hardly the first time data brokers have been accused of exploiting the data of pregnant people. Recently, the Federal Trade Commission (FTC) announced a lawsuit alleging that data broker Kochava sold location information linked to specific devices that could trace individuals’ movements to reproductive health clinics and other sensitive locations—information that also could “be used to identify medical professionals who perform, or assist in the performance of, abortion services.” 

But data brokers have been under scrutiny for similar conduct since long before Dobbs. In 2017, the Massachusetts attorney general reached a settlement with the data broker Copley Advertising—which surveilled women and other people visiting abortion clinics, geofenced advertising around those clinics, and then enabled anti-abortion organizations to run anti-abortion ads to people sitting in clinic waiting rooms. The settlement ensured that the company would not use geofencing technologies near Massachusetts health care facilities again. For policymakers, legal scholars, and citizens trying to evaluate data privacy and data brokerage risks following the overturning of Roe v. Wade, this harmful collection and monetization of health information underscores how data privacy laws focused on these harms are sorely needed—and how state attorneys general may be able to punish or even preempt these abuses.

 

Data Broker Running Anti-Abortion Ads to People in Clinic Waiting Rooms

Copley Advertising, LLC, according to the settlement agreement, was a company that provided geofencing technology and advertising services to its clients. Specifically, its technology

generally encompasses the process of identifying whether an internet-enabled device, such as a smartphone, enters, exits, or is present within a geographic area through the use of any information stored, transmitted, or received by the device, including but not limited to latitude, longitude, GPS (Global Positioning System) information, IP (Internet Protocol) address, wireless Internet access information, so-called Bluetooth technology, Near-Field Communication (‘NFC’) information, or device identification information.

Copley Advertising would “tag” smartphones or other devices entering or leaving an area and would then serve advertisements—which could run for up to 30 days—within certain applications on those devices, based on that location information.
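To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of circular geofence-and-tag check the settlement describes. The coordinates, device IDs, and function names are hypothetical, the fence is simplified to a circle around a single point, and the settlement does not disclose how Copley’s system was actually built; a real system would combine the GPS, IP, wireless internet, and Bluetooth signals listed in the definition above.

from dataclasses import dataclass
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two latitude/longitude points.
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

@dataclass
class Geofence:
    center_lat: float
    center_lon: float
    radius_m: float

def tag_device(device_id, lat, lon, fence, tagged):
    # If a device ping falls inside the fence, "tag" the device for a
    # 30-day ad-serving window, per the settlement's description.
    if haversine_m(lat, lon, fence.center_lat, fence.center_lon) <= fence.radius_m:
        tagged[device_id] = datetime.utcnow() + timedelta(days=30)
    return tagged

# Hypothetical usage: a 100-meter fence around an arbitrary point.
fence = Geofence(center_lat=40.7128, center_lon=-74.0060, radius_m=100)
tags = tag_device("device-abc123", 40.7129, -74.0061, fence, {})

A device tagged this way could then be matched against ad requests carrying the same device identifier until the window expires, which is how an ad could follow someone for up to 30 days after a single clinic visit.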

In 2015, Copley Advertising and its sole owner and employee, John Flynn, provided these capabilities to Bethany Christian Services, an anti-abortion, Michigan-based, evangelical Christian organization that provides adoption services—though until 2021, not to LGBTQ+ parents—and whose website features articles about women deciding not to get an abortion. According to the settlement with the Massachusetts attorney general, Copley Advertising geofenced medical facilities for Bethany Christian Services, including reproductive health clinics, in New York City; Columbus, Ohio; Richmond, Virginia; St. Louis, Missouri; and Pittsburgh, Pennsylvania. It then enabled Bethany Christian Services to run ads to devices within a geofenced area—including abortion clinic waiting rooms.

The ads were titled “Pregnancy Help,” “You Have Choices,” and “You’re Not Alone,” among others. People who clicked on an ad were “taken straight to a landing page or webpage complete with pregnancy options information and access to a live mobile chat with a Bethany pregnancy support specialist”—in other words, an individual who could try to talk, or possibly manipulate, the person out of receiving an abortion. Copley Advertising provided similar services to RealOptions, an anti-abortion, so-called crisis pregnancy center network in California, though the settlement did not detail the services Copley provided to RealOptions. In both cases, the purpose was to enable the anti-abortion organizations to target “abortion-minded women” who “were either close to or entered the waiting rooms of women’s reproductive health clinics.”

Flynn stated to the Massachusetts attorney general that it would be possible for him to “tag all the smartphones entering and leaving the nearly 700 Planned Parenthood clinics in the U.S.”

 

The Massachusetts Attorney General’s Preemptive Action

In 2017, the Massachusetts attorney general entered into a settlement agreement with Copley Advertising and Flynn. While Copley Advertising, according to the settlement, had not provided this kind of geofencing service in Massachusetts, the attorney general believed it would be unlawful for the company to do so under Massachusetts General Laws Title XV, Chapter 93A, § 2, which prohibits “unfair methods of competition and unfair or deceptive acts or practices in the conduct of any trade or commerce.” This is because, according to the settlement, this kind of geofencing “intrudes upon a consumer’s private health or medical affairs or status and/or results in the gathering or dissemination of private health or medical facts about the consumer without his or her knowledge or consent.”

Copley Advertising and Flynn denied breaking any law and went a step further to “deny that they engaged in any wrongdoing.” Despite that claim, Copley Advertising and Flynn entered into an agreement with the state that they would not geofence, “either directly or indirectly through others, the Vicinity of any Medical Center located in Massachusetts to infer the health status, medical condition, or medical treatment of any person.” Vicinity was defined in the settlement text as “a distance of 250 feet from the Perimeter of a Medical Center.” A medical center was defined as “any facility that provides mental or physical health care, treatment, counseling, or therapy by or under the authority or supervision of licensed health care professionals,” including hospitals, urgent care facilities, health clinics, and family planning clinics. Interestingly, the category of “Retail Store Pharmacy” was excluded from this definition, “even if such Retail Store Pharmacy administers vaccinations, performs blood pressure screening, provides drug prescription counseling, or engages in other such health care activities.”
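The settlement’s “Vicinity” term reduces to a geometric test: is a point within 250 feet of a medical center’s perimeter? Below is a minimal, hypothetical sketch of that test in Python, using a flat-plane approximation that is reasonable at city-block scale. The polygon coordinates, helper names, and the planar shortcut are illustrative assumptions, not anything specified in the settlement, and a complete check would also treat points inside the perimeter itself as within the vicinity.

from math import cos, hypot, radians

VICINITY_M = 250 / 3.28084  # the settlement's 250-foot buffer, about 76.2 meters

def to_local_xy(lat, lon, lat0, lon0):
    # Project latitude/longitude to meters on a local tangent plane
    # (equirectangular approximation; adequate over a few hundred meters).
    x = radians(lon - lon0) * 6_371_000 * cos(radians(lat0))
    y = radians(lat - lat0) * 6_371_000
    return x, y

def point_segment_dist(px, py, ax, ay, bx, by):
    # Minimum distance in meters from point P to the segment A-B.
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom)) if denom else 0.0
    return hypot(px - (ax + t * abx), py - (ay + t * aby))

def within_vicinity(point, perimeter):
    # True if `point` (lat, lon) lies within 250 feet of any edge of
    # `perimeter`, a list of (lat, lon) vertices outlining a medical center.
    lat0, lon0 = perimeter[0]
    px, py = to_local_xy(point[0], point[1], lat0, lon0)
    xy = [to_local_xy(lat, lon, lat0, lon0) for lat, lon in perimeter]
    return any(point_segment_dist(px, py, ax, ay, bx, by) <= VICINITY_M
               for (ax, ay), (bx, by) in zip(xy, xy[1:] + xy[:1]))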

Zooming out for a moment, just five U.S. states have comprehensive consumer privacy laws on the books, as summarized by the International Association of Privacy Professionals.

What the Copley Advertising case demonstrates, however, is that state attorneys general do not necessarily need a state consumer data privacy law on the books to act against exploitative data collection and use. Certainly, strong privacy laws are needed in the United States—ideally, a comprehensive federal privacy regime for all residents—and added resources and authorities for punishing privacy violations and data abuses would go a long way toward preventing and mitigating harm to individuals. In the meantime, state attorneys general can look to existing legislation on consumer protection and unfair or deceptive acts or practices to approach companies with specific settlement agreements that stop their harmful behaviors. Indeed, the Massachusetts attorney general did exactly that—even though the phone tracking and ad targeting at issue did not occur within Massachusetts itself.

 

Policy Implications

Much of the policy conversation about data brokers and abortion- and pregnancy-related information has focused on direct sales of that information. This is certainly an area for concern. The United States’ federal health privacy law, the Health Insurance Portability and Accountability Act (HIPAA), applies only to a few kinds of “covered entities,” such as hospitals, and does not apply to a range of companies that might also collect individuals’ health information—including data brokers, mobile apps, internet service providers, and social media platforms. These noncovered entities are therefore free to collect, sell, license, or share this information as they see fit. Further, the sale of health- and pregnancy-related data is of great concern in a policing context, because law enforcement does not need warrants to purchase information on Americans—ranging from location data to specific information on people’s medical conditions and activities. Law enforcement organizations enforcing laws that criminalize abortion, particularly at the state level, could exploit this vector of data gathering as well, in addition to surveillance mechanisms like following individuals, tailing vehicles, monitoring state border crossings, using facial recognition technology, and deploying license plate readers.

This case study underscores that data brokers’ exploitation of health data is also concerning in a post-Dobbs environment. If data brokers, advertisers, and many other companies can legally collect, analyze, and monetize individuals’ health information with few or no restrictions, they can enable anti-abortion actors to target people with advertisements. Those advertisements could include misinformation. Journalistic reporting and academic scholarship have underscored the ways that anti-abortion “crisis pregnancy centers” spread misinformation about abortion and pregnancy that endangers pregnant people’s health. Data brokers could also enable anti-abortion groups to run outright coercive advertisements to people who recently became pregnant or are visiting care facilities—using language, content, or even direct communication (like through a chat box) that intimidates individuals and interferes with their ability to make decisions freely and safely about their own bodies. Running an ad to someone sitting in an abortion clinic waiting room can itself signal to that person that an unknown third party, or a known anti-abortion actor, is aware of their location, which is itself fear-inducing.

Fundamentally, this kind of surveillance is also invasive. Consumers cannot reasonably be expected to know about the third-party companies that quietly surveil their locations and then monetize the data on the open market. The argument made by the Massachusetts attorney general underscores this point. And even if consumers were aware this was happening, that does not mean they understand how companies and other actors are using their data—and it does not change the fact that those companies and actors can use the data to harm people.

American policymakers and privacy scholars alike must push for stronger, short-term controls on data brokerage practices that invade individuals’ privacy and place their health, bodily autonomy, and physical safety at risk. This could include, for example, outright bans on non-HIPAA-covered entities’ surveillance and sale of Americans’ health conditions, whether related to pregnancy, mental health, HIV/AIDS status, surgical history, or current drug prescriptions. Waiting for a comprehensive privacy law only continues to leave individuals at risk. The danger to pregnant people is in and of itself reason to act, yet failure to legislate around data brokers also continues to pose risks to elderly Americans, those with Alzheimer’s, survivors of domestic and intimate partner violence, and other vulnerable communities. In the interim, states can pass their own data broker regulations, and state attorneys general should take note of this example from Massachusetts—where the government did not have to wait for harm to occur in its own state to act.

Understanding the full scope of data brokerage practices around health data, from surveillance to data sales to enabling targeted advertising, will better inform regulatory responses and help pinpoint the most urgent targets for immediate reform.


Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
