The Lawfare Podcast: Data Privacy and Consumer Protection with the FTC’s Ben Wiseman
Published by The Lawfare Institute in Cooperation With Brookings
The Federal Trade Commission’s data, privacy, and AI cases have been all over the news recently, from its proposed settlement with Avast Antivirus to its lawsuit against data broker Kochava.
Lawfare Contributor Justin Sherman sat down with Ben Wiseman, the Associate Director of the Division of Privacy and Identity Protection at the FTC, who oversees a team of attorneys and technologists working on technology and consumer protection. They discussed the FTC’s recent focus on health, location, and kids’ privacy; its ongoing data privacy and security rulemaking; and how the FTC looks beyond financial penalties for companies to prevent and mitigate harm to consumers.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Introduction]
Ben Wiseman: So in crafting
remedies, we want to establish substantive protections for consumers’ data. And
here's what I mean: in the health privacy cases in GoodRx and BetterHelp and
Premom, our orders didn't merely allow sharing of sensitive health data so long as
consumers consented to it. The orders banned it altogether. In X-Mode and
InMarket, the orders didn't merely allow those companies to sell or disclose precise
location information so long as they obtained consumers' consent. It was banned.
And then in another case, our recent Rite Aid matter, a case in which we alleged
that a company had recklessly deployed facial recognition technology. The order
we obtained included a five-year ban on the use of any facial recognition or
surveillance system. So, when we're crafting remedies, we look to injunctive
relief to stop unlawful conduct, but we're also looking at injunctive relief to
address some of the broader problems we're seeing in the marketplace and some
of the incentives that are driving the harm.
Justin Sherman: I'm
Justin Sherman, a contributor at Lawfare, and this is the Lawfare
Podcast, April 2, 2024. The Federal Trade Commission's data, privacy, and
AI cases have been all over the news recently, from its proposed settlement with
Avast antivirus to its lawsuit against data broker Kochava. I sat down with Ben
Wiseman, the Associate Director of the Division of Privacy and Identity
Protection at the FTC, who oversees a team of attorneys and technologists
working on technology and consumer protection. We discussed the FTC's recent
focus on health, location, and kids’ privacy, its ongoing data privacy and
security rulemaking, and how the FTC looks beyond financial penalties for
companies to prevent and mitigate harm to consumers.
It's the Lawfare Podcast, April 2: Data Privacy and
Consumer Protection with the FTC's Ben Wiseman.
[Main Podcast]
Ben, why don't you start off by telling us more about yourself.
How did your career in privacy and cyber security and consumer protection
begin?
Ben Wiseman: Well thanks,
Justin. And first off, just thanks for having me. I'm very excited to get to be
on here and talk about all the exciting work we're doing at the FTC. Let me
start with the standard disclaimer that I give as a government employee, which
is that my views are my own. They don't necessarily reflect the views of the
commission or any particular commissioner. So, my career in privacy and
consumer protection began at the D.C. Office of the Attorney General. The
former Attorney General Karl Racine persuaded me to join what was
a brand-new Office of Consumer Protection at the D.C. AG's office. This was the
first time there was a standalone Office of Consumer Protection focused on
privacy and consumer protection issues. He promised me that it would be a
startup-like environment, and it really was.
We were a very small team, and over the course of several
years, we really built up a consumer protection and privacy program from the
ground up. Within a few years, we were one of the lead states on the Equifax
multi-state investigation. We were the first state regulator to sue Facebook
regarding the Cambridge Analytica incident. We led a large Google multi-state
investigation concerning their location collection practices, which resulted in
what I believe still is the largest monetary payment in a multi-state privacy
enforcement case.
So it was just an incredibly exciting time to, one, get
involved in privacy and consumer protection issues. And just to be working at
the state AG level, just have tremendous respect for the folks in the D.C. AG
office, my former colleagues, as well as AG's offices across the country that
are working on all these issues. In 2022, I had the opportunity to join the
FTC. So, I packed up my things and literally walked two blocks down 7th Street
in D.C. to FTC headquarters. I first worked for the Bureau Director for six
months, and then I found my home in DPIP.
Justin Sherman: So,
some listeners hear DPIP, and they're well familiar with that organization.
Undoubtedly, some others may be less familiar. And so can you just talk to us
about what exactly does the FTC's Division of Privacy and Identity Protection, or
DPIP, do and how has that role evolved over the years?
Ben Wiseman: So DPIP
is the Division of Privacy and Identity Protection. Sorry, I know that not
everyone knows that offhand like we do. It was formed in 2006, when the
internet was really blossoming, when we were starting to see the surge of the
online digital economy, and it was formed in response to really what was a
growing number of privacy and security issues that the Commission was facing at
that time. The Bureau Director at the time in announcing DPIP's creation noted
that in DPIP, it's going to be all privacy and security all the time. And in
the nearly 20 years since then, that has absolutely remained true.
So simply put, DPIP is the chief privacy and data security
enforcer in the United States. Someone recently told me we should start calling
ourselves the U.S. DPA, given our role in privacy enforcement in the U.S. We
rely on Section 5 of the FTC Act to protect consumers from unfair and deceptive
practices relating to their privacy and the security of their data. We also
have sector specific statutory authority that includes enforcement of statutes
like the Fair Credit Reporting Act, the Children's Online Privacy Protection
Act, or COPPA, and the COPPA rule, the Gramm-Leach-Bliley Act, which involves
financial data, and the Health Breach Notification Rule, which involves health
data. We are a small and mighty group in DPIP. It's under 50 people. That's
lawyers, we have technologists, investigators, and other staff also on the
team. I know you have advocated in your writing for more resources for the FTC.
Thank you. When you look at our resources compared to our European counterparts,
we are quite small.
How has DPIP's role changed over time? I think it's obvious
that over the last 20 years, we've seen incredible technological developments
and changes, and DPIP's role has really evolved with those changes. Our Bureau
Director, Sam Levine, has spoken about the institutional advantages that we
have at the FTC to address emerging threats. And one of those is the FTC Act
itself, which is this really broad and flexible statute. So, in enacting the
FTC Act, Congress gave the FTC this broad authority to combat unfair and
deceptive trade practices. And that broad authority has really allowed the FTC,
over time, to adapt and change as technology has changed, and that's
what we've done in DPIP.
So, whether it was the initial growth and start of the
internet, which led to DPIP's creation, whether it's been the sort of increase
in data breaches we've seen and the sophistication in data breaches over time,
the sort of incredible transformation brought upon by mobile phones and
applications, which really led to a rise in commercial surveillance as the
default online business model, or as we're seeing today, the increasing use of
AI and automated decision-making in our lives, DPIP and the FTC have been on the
front lines, and we're constantly adopting new strategies and tools to address
emerging threats that we see in the marketplace.
Justin Sherman:
That's perfect, 'cause let's walk through each of those kinds of things. You
mentioned the piece about authority. So one of the FTC's core authorities, as
you said, is the power to enforce against unfair and deceptive business
practices. What does that mean in practice when it comes to cybersecurity and
privacy? And how should listeners understand that distinction between the idea
of unfairness versus the idea of deceptiveness?
Ben Wiseman: Section 5
of the FTC Act, which is the primary statute we enforce, broadly prohibits
deceptive, unfair, and anti-competitive trade practices. I will not get into anti-competitive
behavior. I am not an antitrust lawyer. I know enough to be dangerous. It's a
whole different side of the agency. And I'm sure I'll get something wrong.
Let's start with deceptive trade practices. It's what it sounds
like. It's not providing truthful information. It's statements that are
misleading. It's making misrepresentations of material facts. For example, when
a photo storage company online says, "We'll delete your photos and videos when
you deactivate your account," but then they don't delete those photos and
videos, and in fact hold them indefinitely, that would be deceptive conduct.
And that's actually an example from one of the commission's cases, the Everalbum
case in 2021.
Unfairness is distinct. Unfairness is defined in the statute as
a practice that causes substantial injury that consumers cannot avoid and where
the benefits don't outweigh the harms. Whereas deceptive conduct is largely
focused on statements that are made by companies concerning their goods and
services or what's disclosed or what's not disclosed, unfairness is really
focused on the harm to consumers.
I'll give another example. In our lawsuit against Epic Games, the
maker of Fortnite, which came out last year, we alleged that the deployment of
a default privacy setting, which allowed minors to communicate freely with
adults and led to issues such as stalking and harassment of kids and teens, we
alleged that the deployment of the default setting was unfair. So our
allegations didn't turn on whether or not the company disclosed the setting,
but rather it was the substance of the practice itself that was alleged to be
harmful. So when we bring unfairness claims, we are really calling out specific
harms to consumers from certain business practices. And the remedies we seek in
those cases are designed to stop those harms and address the incentives that
lead to those harms in the first place.
In DPIP, we have been bringing unfairness cases for a while,
particularly in data security. I do think since I joined the Commission in
2022, I have seen unfairness being used more and more to address privacy harms
that consumers can't reasonably avoid. So you have a number of cases, including
the Epic Games case I just mentioned, some health privacy cases, and a recent
lawsuit that the Commission filed in 2022 against a data broker named
Kochava, all involving allegations of unfair trade practices. And I think it's
notable that in those cases, the support for those unfairness claims was
bipartisan. There were four votes from Commissioners to advance the cases that
I just mentioned.
Justin Sherman: The
FTC has had several cases recently, you just mentioned one of them, involving
data brokers and the sale of consumers' location data in particular, some of
which are unfairness cases, some of which, as you said, also involve claims of
deception. Talk to us more about this ongoing lawsuit against location data
broker Kochava, as well as the two other settlements you reached recently with
location data brokers.
Ben Wiseman: As you
mentioned, Kochava is an ongoing lawsuit. The other two matters that you're
referring to are recent proposed settlements against X-Mode and InMarket. Those
are proposed settlements. They're working their way through the
Commission's administrative process before they're finalized.
Let me backtrack and start with a broader lens on the issue of
data brokers and location data in general. This is an area you've written a lot
about or know a lot about. Location data is so sensitive because it reveals so
much about us, where we live, where we pray, where we seek medical treatment,
the groups we spend time with, where our kids go to school. And what has
happened, I think, over the past decade or so is that as the commercial
surveillance business model has become further entrenched as the norm, or as
the default, location data has become even more valuable. And then, as a result,
there's been a rise in firms that, as their primary business model, collect,
process, and sell this information.
So there's this whole ecosystem of companies out there that
consumers have never heard of, that collect and sell consumers’ geolocation
information, often to within meters of where folks are present, along with
dozens of other data points about individuals that are put together into these
curated profiles.
So the X-Mode case: X-Mode is a company that sits within this
data broker ecosystem. We allege in the complaint that they collect precise
geolocation information in a number of ways: through their own apps, through
SDKs that are incorporated into other apps, as well as through other data
brokers. They have deals with other data brokers where they collect
information. And these collection methods have allowed them to amass billions
of location data points from all over the world. And they sell their data in
two ways.
The first is a raw data feed, which has device identifiers
for individuals' mobile devices, along with longitude and latitude coordinates
for where those devices are located. So, little pings that let you follow an
individual device as it moves throughout the day. And the second category of
data they sell is what are referred to as audience segments. These are when
device identifiers are matched with certain locations or categories of
locations to create these profiles. One profile of an audience segment could be
veterans. You get a list of veterans. Or you get a list of people who visit
certain churches. And it can get pretty granular.
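To make those two data products a bit more concrete, here is a minimal, purely illustrative Python sketch of what a raw location feed record and a derived audience segment could look like. The field names, coordinates, and matching logic are hypothetical, invented for illustration, and are not drawn from X-Mode's actual data schema or the FTC's filings.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical illustration only: invented field names, coordinates, and
# matching logic, not X-Mode's actual schema or the FTC's filings.

@dataclass
class LocationPing:
    device_id: str   # persistent mobile advertising identifier
    lat: float       # latitude where the device was observed
    lon: float       # longitude where the device was observed
    timestamp: str   # when the device was observed at this point

# A raw data feed is essentially a long list of pings like these, which lets
# a buyer follow one device as it moves throughout the day.
raw_feed = [
    LocationPing("ad-id-123", 39.9612, -82.9988, "2023-06-01T08:14:00Z"),
    LocationPing("ad-id-123", 39.9580, -83.0001, "2023-06-01T08:41:00Z"),
]

# An audience segment matches device identifiers against places of interest
# (for example, a category of medical practice) to build a named list of devices.
places_of_interest = {("medical_specialist", 39.9612, -82.9988)}

def build_segments(feed, places, radius_deg=0.0005):
    segments = defaultdict(set)
    for ping in feed:
        for category, plat, plon in places:
            if abs(ping.lat - plat) < radius_deg and abs(ping.lon - plon) < radius_deg:
                segments[category].add(ping.device_id)
    return segments

print(dict(build_segments(raw_feed, places_of_interest)))
# {'medical_specialist': {'ad-id-123'}}
```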
So for instance, in X-Mode, one of the audience segments we
alleged in our complaint was that they put together an audience segment of
individuals that went to certain specialist medical practices in the Columbus,
Ohio area. We alleged that the sale of location data that included visits to
sensitive locations, like medical facilities, places of religious worship,
domestic abuse shelters, we alleged that was an unfair trade practice. And we
also allege that categorizing consumers into audience segments based on
sensitive characteristics, based on sensitive locations, that too was unfair.
We also had an allegation in the complaint that the company was
collecting data for certain purposes, namely to sell to government contractors
for national security purposes without obtaining consent for such purposes. The
proposed order, and this, again, is a proposed settlement, but the
proposed order in the case, I think, is notable for a few reasons. One, it
includes a ban on the sale or disclosure of sensitive location information.
That's a first of its kind type of ban. And it also requires the company to
develop what we're calling a supplier assessment program, which ensures that
they, the company confirms that consumers have provided consent to the
collection and specific uses of their location data. So that's the X-Mode case.
Kochava, as you mentioned, it's in litigation. I can't speak in
too great detail about that case because of the ongoing nature of the
litigation. But let me give you like a brief procedural history of where things
stand there. So this was a case that the Commission filed in 2022. The allegations
there are similar to the allegations that are made in X-Mode. The company's sale
of precise geolocation information could reveal visits to sensitive locations.
And we alleged that was unfair. We alleged that the sale harmed consumers in
two ways. We said that one, it exposed consumers to potential harms like
stalking, physical violence, and harassment. And two, we alleged that it was
also an invasion of consumers' privacy, and that was a stand-alone cognizable
harm. The court required us to amend our complaint, which we did last year, and
very recently, we got a decision from the court denying the company's efforts
to dismiss the lawsuit. And so our case is moving forward in discovery.
I do want to highlight sort of one key aspect from the court's
finding. In the court's denial of the motion to dismiss, for the first time, a
court recognized that an invasion of privacy alone can be a substantial injury
under the unfairness prong of the FTC Act. So that even if that privacy
invasion didn't lead to further harms or these secondary harms like stigma or
harassment, or physical violence, that the invasion of privacy alone can
constitute substantial injury. And that's significant, particularly as we are
looking at some of the other harms that we're seeing in the data broker
ecosystem.
Justin Sherman: Let's
zoom in for a second on something you mentioned, which I think is really
interesting. This idea of sensitive locations and, in the proposed settlement
with X-Mode, the scoping of some of the provisions around this idea of
sensitivity when it comes to location data. How does the FTC think about that
and how do you draw those lines, I guess in some cases literally, around what
is sensitive and what is not sensitive?
Ben Wiseman: Let me
start by explaining just generally, so each case is very different. The facts
of each case are different. The business models of the companies that
we are investigating or pursuing are very different. And the relief we seek will
depend on the facts of the case, as well as the specific harmful conduct that
we investigated.
If you look at the definition of sensitive locations in the X-Mode
case, many of the locations reveal information about an individual that the FTC
has said in the past constitutes sensitive information. So whether it's health
conditions, information about children, religious affiliation, those are areas
where the FTC has spoken in the past about being highly sensitive information
about consumers.
As to location information more generally, the Commission has
made clear that precise geolocation, whether affiliated with sensitive
locations or not, is sensitive. So if you look back at the 2012 Privacy Report,
the FTC specifically called out, among other types of data, precise
geolocation data as sensitive data, because it can be used to re-identify
people, it can be used to follow them around, and it can also be used to build
extensive profiles about them. And we really made this point in the InMarket
case, which is the other proposed settlement of a data broker that the
Commission has recently come out with.
And I'll talk about that very briefly. Basically, InMarket is a
data aggregator that collects precise geolocation and uses it to create
audience segments to serve ads. Some of the audience segments were things like
Christian churchgoers. In that case we brought claims similar to the claims in
X-Mode, and the order we obtained included a ban on the sale of any precise
geolocation information. It wasn't limited to sensitive location data. So what
should be clear from InMarket, as well as X-Mode, Kochava, and other FTC
actions in this space is that we take the collection, use, and disclosure of
precise geolocation information very seriously because of its sensitivity, and
that we'll continue to hold players that act unlawfully in this ecosystem
accountable.
Justin Sherman:
Another significant area of work for the FTC, especially recently, has been
health data. And let's get into some of the specific cases in a minute. But
first: the U.S. has HIPAA, which many often refer to as the federal health privacy
law. How and why is the FTC involved in health privacy protection?
Ben Wiseman: There
certainly is a myth that HIPAA covers the field when it comes to health
privacy. The reality is very different. HIPAA covers only certain entities. And
that leaves a whole host of companies that collect and use and disclose health
information from consumers not covered by HIPAA. I think this really came to
light in recent years where we've seen an explosion of health apps, of fitness
trackers, telehealth companies, and many of those are not HIPAA covered
entities. Those are entities that fall outside the scope of HIPAA. To be clear,
even if you are a HIPAA covered entity, the FTC Act still applies. So, we
talked earlier about unfair and deceptive conduct under the FTC Act. Those
prohibitions apply even to those entities that are covered by HIPAA.
Now, if an entity isn't covered by HIPAA, that doesn't mean
there aren't any other privacy protections. You know there's the FTC Act. And
in addition to the FTC Act, there's also the Health Breach Notification Rule,
which is a rule that we enforce in DPIP. The Health Breach Notification Rule,
we call it HBNR, it's a rule that Congress directed the FTC to promulgate in
the HITECH Act in 2009. It requires certain entities to notify individuals,
the FTC, and in some cases, the media, if there is a breach of unsecured health
data. A breach of unsecured health data, importantly, doesn't just mean a
cybersecurity incident, it can also mean when a company discloses health
information without authorization from consumers. And when companies violate
the rule, when they fail to provide that notification, they can face significant
penalties.
Just in May of last year, we proposed amendments to HBNR
through a notice of proposed rulemaking to, among other things, just make clear
that the rule applies to health apps and other similar technologies that aren't
covered by HIPAA. So HBNR is just another tool that we have and use to protect
consumers' health data for those entities that aren't covered under HIPAA.
Justin Sherman: What
are some of the most recent FTC cases and actions involving Americans’ health
information? What kinds of claims is the FTC bringing in those specific
instances?
Ben Wiseman: The past
year has really been, I think, one of the most significant in terms of
advancing the commission's health privacy work. We've had a number of cases in
the health privacy space. We have a team of fantastic attorneys that are
experts in this area and have been working on these cases. I'll talk about a
few of the cases, but just to run through them quickly, they include GoodRx,
which was the first time the Commission alleged a violation of HBNR; BetterHelp,
the first time the Commission returned funds to consumers for health privacy
violations; and Premom, the second time the Commission alleged a violation of HBNR.
And then there were two significant cases involving genetic data: Vitagene, and
the other one is CRI Genetics, which was a case that was brought by our Seattle
office along with the California Attorney General.
There are a lot of takeaways from these cases. There is this
fantastic blog post that an attorney in DPIP put together called “A Baker's
Dozen of Takeaways from Health Privacy Cases.” You can find it on the FTC
website. Quick tangent: just incredible kudos to our Division of Consumer
and Business Education, which puts out a ton of resources on the FTC's website
that are digestible for businesses and consumers to really understand the work
we're doing. An incredible amount of work goes into those resources, and it's
just an incredible part of the FTC having these resources and guidance
available online for the American public.
So back to the health cases. I'll focus on a trio of cases
which are the BetterHelp, GoodRx, and Premom cases. So each of these cases
involved sharing sensitive health data. In the case of BetterHelp, it was mental
health information. In the case of GoodRx, it was medication data. In the case
of Premom, it was reproductive health information, it's a fertility app. And
they were sharing the sensitive health information through tracking pixels with
advertising platforms like Google and Facebook. And this sharing was happening
without consumers' authorization and contrary to the privacy representations
that these companies were making. All three of those cases included allegations
of unfair trade practices. So we had unfair counts as well as deception counts.
In particular, with respect to the unfair counts, we alleged that the companies'
sharing of health data without authorization was an unfair trade practice. And
we also alleged that the failure to institute policies to protect consumers'
health information was also an unfair trade practice and that it caused
substantial injury to consumers.
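As a purely illustrative aside on the mechanism at issue, the sketch below shows, in Python, how a third-party tracking pixel request can carry health-related detail to an ad platform. The endpoint, parameter names, and values are hypothetical assumptions for illustration only and are not the actual pixel API of Google, Meta, or any company named in these cases.

```python
from urllib.parse import urlencode

# Hypothetical illustration only: the endpoint and parameter names are invented
# and do not reflect the actual pixel API of any ad platform or any company in
# these cases. The point is that page context and event parameters travel to
# the ad platform alongside the user's cookie or device identifiers.

def pixel_request_url(page_url: str, event: str, custom_data: dict) -> str:
    params = {
        "page": page_url,   # the page the user is on, e.g., a medication page
        "event": event,     # e.g., "AddToCart" or "Purchase"
        **{f"cd[{key}]": value for key, value in custom_data.items()},
    }
    return "https://ads.example.com/track?" + urlencode(params)

# If the page URL or the event parameters encode the product or condition, the
# ad platform receives health-related detail even though no field says "health".
print(pixel_request_url(
    "https://pharmacy.example.com/rx/metformin",
    "Purchase",
    {"item": "metformin-500mg"},
))
```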
A few takeaways from the cases. First, health information
is quite broad. It's not just a medical diagnosis. It can also be, as we've set
forth in these cases, information from which you can infer health information.
And health information goes beyond just diagnosis or treatment. It really is
quite broad. And then the second takeaway is, the orders we obtained in these
cases were intended to provide real protections for Americans' health data. We
notably didn't just require that consumers consent before the companies could
share sensitive health data for advertising purposes, but rather we banned the
practice entirely. So, all three cases include a prohibition on sharing health
data for advertising purposes.
Justin Sherman: And
of course there's overlap in these two areas, but the use of children's data has
been another important point of focus for the FTC recently, just based on all
of the public matters. What has the FTC been doing with respect to the use of
children's data and are there any similar takeaways that listeners should have
for what it means about FTC action and what it also means about the U.S.
regulatory landscape?
Ben Wiseman: So I
know I said it was a significant year for health privacy. I also think the last
18 months or so have been incredibly significant with respect to kids' privacy
and teen privacy. So, protecting the privacy of kids has long been a top
priority of the Commission. It continues to be to this day. In the past 18
months, some of the most significant actions have been the Epic Games matter,
which I referred to earlier. Epic's the creator of the Fortnite game. This was
a COPPA case in which the commission obtained the largest civil penalty ever in
a COPPA case, $275 million. The Commission also brought enforcement action
against Amazon concerning its Alexa product, making clear that under COPPA,
companies cannot retain children's information indefinitely. A case against
Microsoft for the Xbox platform, another COPPA case, finding that the company
didn't properly obtain notice and consent as required under the COPPA rule. And
another significant case was the Edmodo case. Edmodo is an edtech provider. And
that case made clear that COPPA applies in the edtech context, and also that
edtech providers can't outsource their COPPA obligations onto schools and
teachers. At the end of the day, it's the edtech companies that are responsible
for their COPPA obligations and will be held accountable.
And so we've had several very significant COPPA
cases over the past 18 months. And this was capped off in December when the
commission announced a notice of proposed rulemaking on the COPPA rule. The
COPPA rule was last updated in 2013. As you can imagine, there have been
several advances in technology, particularly technology focused on children
over the past 10 plus years. And so the amendments put forth by the Commission
really look to strengthen protections for children in light of those changing
circumstances and developments in technology. Children's privacy has long been
a priority and will continue to be.
Justin Sherman: Stepping
back now a little bit and thinking more about, as you touched on at the outset,
the role of the FTC generally in the space. One of the frequent criticisms, of
course, of a fine- or penalty-based approach to privacy or cybersecurity issues,
and now with AI, is that companies will pay a sum of money and then, in some
cases, it appears, continue business as usual. And so are there measures that
the FTC has at its disposal if a company is engaged in an unfair or deceptive
practice with consumers' data, and do those measures include anything beyond
fines?
Ben Wiseman: For
starters, the Commission's ability to obtain monetary relief in cases was
severely curtailed by the Supreme Court's decision in AMG in 2021. That
decision took away what was the most powerful tool that the Commission had to
return money for consumers. There are certain circumstances, particularly where
we have violations of rules like COPPA, where we can obtain civil penalties
and/or redress for consumers, but the AMG decision was a very significant
decision in taking away what was the most effective tool for the commission to
return money for consumers.
The most significant tool that we often rely on to stop ongoing
unlawful conduct is our ability to obtain injunctive relief or conduct relief.
And when we think about remedies and we think about injunctive relief in
particular, we're thinking carefully about designing remedies that don't just
effectively address the harms to consumers, but also address some of the
structural incentives that enable the unlawful conduct and enable those harms.
For example, in a number of our data security cases, as well as
privacy cases, we have secured
requirements that companies minimize the data they collect and retain it no
longer than is reasonably necessary. That is a relief provision, an
injunctive provision, that goes directly to some of the incentives that we think
are driving some of the harms, namely the massive overcollection of data. The less
data that is collected and retained, the less data that can be compromised or
misused.
Another thing we look to do when we're crafting remedies is to
create substantive protections for consumers. So I spoke a little bit about
this earlier with respect to our health cases. You've heard several of the
commissioners speak at length about some of the real limitations of the current
notice and consent regime. Notice is really a fiction if it means reading
thousands of pages of privacy policies. I think research has shown that is just
impossible. It's impossible for a consumer to read all the privacy policies of
all the companies they interact with on a daily basis. And consent is really a
fiction in these circumstances, particularly where it's just checking a box.
So in crafting remedies, we want to establish substantive
protections for consumers' data. And here's what I mean. In the health privacy
cases in GoodRx and BetterHelp and Premom, our orders didn't merely allow sharing of
sensitive health data so long as consumers consented to it; the orders banned
it altogether. In X-Mode and InMarket, the orders didn't merely allow those companies
to sell or disclose precise location information so long as they obtained
consumers' consent; it was banned. And then in another case, our recent Rite Aid
matter, a case in which we alleged that a company had recklessly deployed
facial recognition technology, the order we obtained included a five-year ban
on the use of any facial recognition or surveillance system.
When we're crafting remedies, we look to injunctive relief to
stop unlawful conduct, but we're also looking at injunctive relief to address
some of the broader problems we're seeing in the marketplace and some of the
incentives that are driving the harmful conduct.
Justin Sherman: Can
you talk to us a little bit more about, you mentioned Rite Aid, there have
been, of course, FTC cases recently with AI. Some of those have required
deletion of AI models. And so why has the FTC pursued this kind of remedy? And,
in practice, what does that actually mean for a company?
Ben Wiseman: That's
right. There are now eight cases in which the Commission has required companies
not only to delete data that was unlawfully obtained or unlawfully used, but
also to delete any models or algorithms developed with that data. Everalbum,
which I mentioned earlier, is one of those cases in which that type of
order relief provision was included. How that process works is going to depend
on the facts of the case. Again, each case is different. I think the main
takeaway is that even if there isn't a monetary payment or a civil penalty
involved in a case, violating the law can still be an expensive proposition for
a company.
This is also an area where having technologists at the agency
has been tremendously helpful. In 2023, the Commission, under Chair Khan's
leadership, created the Office of Technology. They have 12 technologists now on
staff who come from an array of backgrounds with different expertise. These are
some of the best technologists, not just in government, but in the entire
country. And having and working with those technologists day in and day out on
things like crafting appropriate remedies is just tremendously helpful. It's
another one of the tools we have in our toolbox to make sure that we're doing
all we can to protect consumers’ privacy.
Justin Sherman: So
related to that then, and this has come up with Rite Aid, the FTC had an order with
the company to prevent the use of facial recognition going forward. So, can you
talk to us more about what happened in the case, or what was Rite Aid
doing that prompted that? And then what were the terms that the FTC was able to
secure?
Ben Wiseman: The Rite
Aid case has drawn a lot of attention and I think for good reason. There have
been lots of conversations. I'm sure you've had them. I've been at conferences,
on panels, hearing about some of the theoretical harms from AI and from facial
recognition technologies. But this is a case that really grounded those harms
in reality. I'll give a brief overview of the case.
Around 2012, we allege that Rite Aid introduced facial
recognition in stores to identify folks that it had previously identified as
shoplifters or as having engaged in some other wrongful activity. Essentially, what would
happen is Rite Aid employees would take photos of individuals in stores who were
engaged in shoplifting, or they'd use camera footage from those stores to
capture photos, and they'd enter those photos into a database. And then, when consumers
entered the store, their faces would be captured through the facial recognition
software and run against this database for a match. When Rite Aid
employees were informed that there was a match against someone in the database
when a consumer was in the store, they were given sort of a series of options
to choose from, depending on sort of the score that was provided by the facial recognition
technology system, and these different options included things like approaching
the individual or following them around the store, asking the individual to
leave the store, and in some cases, calling the police.
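To make the match-and-respond flow just described more concrete, here is a minimal, hypothetical Python sketch. The similarity function, score thresholds, and staff guidance are invented assumptions for illustration and do not reflect Rite Aid's actual system or the specific findings alleged in the FTC's complaint.

```python
# Hypothetical illustration only: the similarity function, score thresholds,
# and staff guidance below are invented and do not reflect Rite Aid's actual
# system or the specific findings alleged in the FTC's complaint.

def match_score(captured_embedding, enrolled_embedding) -> float:
    """Toy similarity score between two face embeddings, clipped to 0.0-1.0."""
    dot = sum(a * b for a, b in zip(captured_embedding, enrolled_embedding))
    return max(0.0, min(1.0, dot))

def recommended_action(score: float) -> str:
    # Higher scores trigger more intrusive responses; low-quality enrollment
    # photos make every one of these thresholds unreliable in practice.
    if score >= 0.9:
        return "notify manager / consider calling police"
    if score >= 0.7:
        return "approach the individual or ask them to leave"
    if score >= 0.5:
        return "observe / follow through the store"
    return "no action"

captured = [0.1, 0.5, 0.8]   # embedding from the in-store camera
enrolled = [0.1, 0.6, 0.7]   # embedding from the enrollment database
score = match_score(captured, enrolled)
print(round(score, 2), "->", recommended_action(score))
# 0.87 -> approach the individual or ask them to leave
```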
We allege that the deployment of this facial recognition technology
surveillance system wasn't just unreasonable, that it was really reckless. One,
it was deployed in stores that are in largely plurality nonwhite locations,
even though most Rite Aid stores are in plurality white locations. The company
also tried to hide it from the public. We allege that it didn't tell consumers
about facial recognition and in fact, instructed employees not to tell
consumers or the media about it. And we also allege that it just failed to take
even the most basic measures to ensure that the technology was accurate, and
that it wouldn't harm consumers. So it regularly used low-quality images, we
allege that it didn't test the technology pre-deployment or during its use, and
we allege that it didn't appropriately train employees. And so guess what
happened? The system didn't work. There were thousands of false positives; consumers
in stores were falsely flagged as wrongdoers. And this is where you really
see the severity of when these technologies can go wrong. The cases are
extremely telling. Rite Aid employees stopped and searched an 11-year-old girl.
Her mother was so distraught that she had to take off work, understandably so. Rite
Aid employees called the police on a black woman who was falsely flagged by the
system. And when employees went to look back at the picture in the database
that caused the flag, it was actually a white woman with blonde hair.
So you really see the severe consequences when companies really
fail to take proper measures before deploying these systems. The complaint in
that case, we allege that their failure to take reasonable measures to address
these risks and these harms to consumers was unfair. It was an unfairness case
again. Under the order we obtained, Rite Aid is banned, as I mentioned, from using any
facial recognition surveillance or security system for five years. And
significantly in the future, if they decide to implement any type of biometric
security or surveillance system, they have to maintain a very robust monitoring
program.
It requires testing and training and significantly, if the
testing identifies risks the company can't mitigate, it needs to shut the
system down. So, Rite Aid is a very significant case. It's the first time that
the commission has alleged that a company's use of AI and use of facial
recognition technology was unlawful.
Justin Sherman:
Looking now, both ahead and then at processes that are ongoing, the FTC is in
the middle of a lot of rulemaking right now. That's on commercial surveillance
and data security, on the health breach notification rule, which you spoke a
lot about, and most recently on COPPA, the Children's Online Privacy Protection
Act. So, first of all, what does it mean for the FTC to be engaged in
rulemaking? And then what compelled the FTC to initiate these processes? What
changes might they bring to our current landscape?
Ben Wiseman: What is
rulemaking? At the FTC, it's a process, and that's what it
means to be engaged in rulemaking. The FTC Act empowers the Commission to
prescribe rules that define acts or practices as unfair or deceptive. And so
that's one method of rulemaking that the Commission can engage in. Congress has
also granted the Commission rulemaking authority in a number of other statutes,
COPPA being one of them, with the COPPA rule being the implementing regulation. And
so to be engaged in rulemaking means that there is a process that we follow
before we can promulgate a final rule or final trade
regulation.
For rules under the FTC Act, it's a little different than rules
that we are promulgating under other statutory authority. The first step under
the FTC Act is we have to issue what's called an advance notice of proposed
rulemaking. Essentially, these are questions that we ask the public in advance
of any rulemaking. The public then provides comment. Following that comment
period, if the Commission elects to proceed with a rule, it will issue a
proposed rule in a notice of proposed rulemaking. And then
after that step, the public will have a further opportunity to comment. And
then the final step in the rulemaking process is the issuance of a final rule.
I think you're right to note that there's been an increased amount of
rulemaking under Chair Khan. And I think it reflects two things. First, as I
mentioned, the Supreme Court's AMG decision was incredibly significant
in taking away the most effective tool to return money for consumers. Essentially
what it's meant is that absent a rule violation, it's much more challenging to
obtain monetary relief in cases. Where there are rule violations, however, the Commission
is empowered to obtain redress as well as civil penalties.
And so by enacting rules that set out unfair and deceptive
trade practices, one example would be the recent rulemaking that was finalized
on government impersonator scams. The FTC now is able to obtain penalties and
redress for that unlawful conduct. The type of impersonation
scams that were addressed by that rule have always violated the FTC Act, and
the FTC has brought a number of cases over the years addressing those
violations of the FTC Act, but now that there's a rule in place, the FTC can
seek consumer redress, and it can seek civil penalties in those cases. So that,
I think, is one piece that has driven rulemaking.
I think a second piece is that there is a recognition that case-by-case
enforcement alone sometimes might not be enough to address broader
problems that we see in the marketplace. So along with rulemaking, what you're
seeing at the Commission is using all the tools in our toolkit to protect
American consumers. So it's rulemakings, the use of notices of penalty offenses,
conducting industry studies with our 6(b) authority. These are all
other methods in which the Commission is making sure that it's using all the
tools that Congress has granted to us to protect the American
public.
Justin Sherman:
Looking then at the future of FTC cases and regulatory matters, recent focus
areas, as we've said, have clearly included health information, genetic
information, location data, and data about children and teenagers. Do you expect that to
continue? And what do you see as the FTC's, and also DPIP's, biggest priority
areas when it comes to privacy and cybersecurity?
Ben Wiseman: That's a
big question. We've talked about a number of themes today that are really focus
areas in DPIP and at the FTC. Recognizing the limitations of notice and consent
and moving towards substantive protections for people's data. Stopping the
sharing of sensitive data like precise location data, health data. Protecting
data from abuses, requiring firms to minimize the data that they collect and
they retain. Making sure firms aren't training models on illegally obtained
data. Those are all some of the themes of the enforcement cases that we're
bringing and some of the focus areas when we're bringing enforcement cases. You
raise some substantive areas that have been and continue to be priorities in
our privacy work, health data, location data, those priorities continue. Areas
of sensitive data, we are looking to provide substantive protections to limit
some of the harmful and unlawful commercial surveillance practices that we see
with the collection, really the overcollection, of sensitive data. Kids and
teens, our robust COPPA enforcement seeking to address some of the unique
privacy harms that kids and teens face remains a priority.
AI is a space where, in order to recognize the benefits of the
technology, we really think it's important to understand and take a close look
at some of the emerging threats we're seeing, particularly as to privacy. I
think what you're seeing in the AI space, and I've previously spoken about this,
is that some of the same incentives that have driven harms in the commercial
surveillance context, the overcollection of consumers' data, are some of the
same incentives that are driving AI business models. Many of these models rely
on collecting as much data as possible. So that's another area of focus.
Data security has been a pillar of DPIP's work for many years
and continues to be. Looking for ways to limit the collection, use, and
retention of data, to reduce harms of data breaches when they do happen. And
another area that I spoke about recently is worker surveillance. Gig work is on
the rise. It's increasingly becoming a greater proportion of employment in the
United States. AI is on the rise, as we all know. And surveillance is now
impacting consumers more and more, not just in their homes, but also when
they're on the job. And that's an area that we are focusing on.
We have quite broad jurisdiction at the FTC. Our mandate is
quite broad. And it means that we have our hands full. It means that there's a
lot of incoming when it comes to privacy and security. And all these are focus
areas that we are committed to and think it is necessary to remain committed to
as we look to do our best to protect the American public.
Justin Sherman:
Currently, the U.S. legal and regulatory model for at least the consumer
privacy component of what we're talking about is largely set up to have case by
case enforcement or, in some areas, a focus just on certain practices within
certain industry sectors. In the longer term, are there any legal or regulatory
changes the FTC is supporting, or that you think would help improve on the
privacy status quo?
Ben Wiseman: Yes,
there's a big one, which would be passing a comprehensive federal privacy law. The
Commission has for many years spoken about the importance of passing
comprehensive privacy legislation that would provide baseline protections for
all consumers across America.
Justin Sherman: The
FTC very recently announced a proposed settlement with antivirus company Avast that
I think cuts across and touches on a lot of the things we've been talking about
on this episode. Can you tell us more about this case and what happened and
what action the FTC pursued?
Ben Wiseman: Yeah, so
Avast is a proposed settlement working its way through the
administrative process. This is a company where we allege that it was providing
antivirus software and browser extensions to really protect privacy. That was
the purpose of this software, and that's why people downloaded the software and
used the software: to protect their privacy. As part of that, the software
was collecting consumers' individual browsing
information, the websites they were visiting, and, contrary to promises that it
made, it then sold this browsing information on, where ultimately it was used
for advertising purposes.
So we have a proposed settlement with the company. It has some
significant features in it. One is similar to the prohibitions I was discussing
in some of the health cases and location cases. Avast is going to be prohibited
from selling or licensing browsing data for advertising purposes. Similar to
the model deletion provisions we were discussing earlier, Avast is going to
have to delete the web browsing information that was transferred on in that
case, and it's also going to have to delete any algorithms that were derived
from that data. And then the third piece is there's going to be notification to
consumers. So Avast is going to have to inform consumers whose browsing
information was sold to third parties without their consent. So it's a
significant case. Again, it's a proposed settlement in the
administrative process that occurs at the FTC before cases
are finalized.
Justin Sherman: Is there anything else you'd like to
add?
Ben Wiseman: The one thing I'd like to add is we want to
hear from folks. Please tell us about your privacy and data security problems.
You can go to ReportFraud.ftc.gov, and we want to hear from you.
Justin Sherman: Thanks for coming on.
Ben Wiseman: Thanks for having me.
Justin Sherman: The Lawfare
Podcast is produced in cooperation with the Brookings Institution. You can
get ad free versions of this and other Lawfare podcasts by becoming a Lawfare
material supporter at lawfaremedia.org/support. You'll also get access to
special events and other content available only to Lawfare supporters.
Please rate and review the Lawfare Podcast wherever you
get your podcasts. Look out for Lawfare's other podcasts, including Rational
Security, Chatter, Allies, and the Aftermath. Go to
lawfaremedia.org to see Lawfare's written work. This podcast is edited
by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat
Rodeo. The music is performed by Sophia Yan. As always, thank you for
listening.