
Unjust Enrichment by Algorithm

Ayelet Gordon-Tapiero, Yotam Kaplan
Tuesday, July 25, 2023, 8:25 AM
The only effective way to ensure that platforms stop promoting harmful content is to make sure doing so is not profitable for them.


Content promoted by social media platforms has been linked to a series of catastrophic outcomes. During the coronavirus pandemic, disinformation that spread through social media platforms contributed to dangerous vaccine hesitancy. The problem was so severe that President Biden stated that disinformation spread on social media platforms was literally “killing people.” Even as the pandemic claimed millions of lives worldwide, Facebook’s personalization algorithm continued to recommend anti-vaccine content to its users. The Jan. 6 attack on the U.S. Capitol can also be traced to false and divisive content recommended to users by social media platforms. Finally, viral challenges among children and teenagers have resulted in physical injuries and, tragically, even fatalities. Many observers are now rightfully concerned with the deepening platform crisis, fearful of what comes next.

Why do platforms consistently recommend such dangerous and harmful content to their users? After all, platforms are not in the business of increasing human suffering, creating political upheaval, or destabilizing democracies. Indeed, the motivating force behind these disasters is far more mundane. Platforms recommend divisive, hateful, and extreme content simply because it proves to be profitable for them. People feel compelled to respond to rage-bait, and false information can be designed to draw more attention than the truth. Such content entices users to engage with the platform for more time, enabling the collection of more user data and the presentation of more advertisements. Socially harmful content is highly profitable for platforms.  

This problem could be addressed using the legal principle of “unjust enrichment,” an option we explore in more depth in our recent paper, which will be published in the George Washington Law Review next year. The concept of unjust enrichment revolves around wrongful gains and rests on the fundamental principle that misconduct must not be profitable. The rationale is straightforward: As long as misconduct is profitable, it will persist. This is all too apparent in the current platform crisis. Thus, the only effective way to ensure that platforms stop promoting harmful content is to make sure doing so is not profitable for them. The law of unjust enrichment can facilitate this outcome through the doctrine of disgorgement of profits, which allows a court to strip wrongdoers of their ill-gotten gains.

The use of unjust enrichment law also makes sense as a legal response to the platform crisis because its operation does not depend on a precise evaluation of harms. The harms of vaccine hesitancy, or of events like the Jan. 6 insurrection, are immense. At the same time, they are extremely difficult to measure accurately and attribute to specific actors. For this reason, many legal instruments that focus on harms, such as tort law, prove largely ineffective in addressing the platform crisis. The law of unjust enrichment focuses on unjust gains rather than on harms, thereby avoiding this hurdle. 

The principle of unjust enrichment can be applied to platforms’ conduct in three main categories. The first category is discriminatory presentation of job, housing, and credit ads. In the United States, it is illegal to discriminate based on protected attributes in these settings. Despite this restriction, carefully constructed experiments have found that Facebook illegally discriminates in the presentation of ads. Facebook’s ability to generate income from illegal patterns of advertising should be viewed as unjust, and any profits from this type of advertising should be disgorged from the platform. Once illegal advertising is no longer profitable for the platform, we can expect this pattern of behavior to cease. 

The second category of unjust platform enrichment arises when personalization allows platforms to manipulate vulnerable groups. In particular, our paper looks at the promotion of harmful content to children and teens, who are among the most vulnerable groups of users of social media. They are at an age when they are “less privy to marketing techniques and so more susceptible to the tactics of online marketers and their deceptive trade practices.” They are easily “deceived by an image or a message that likely would not deceive an adult.” Content that may not be especially troubling or harmful when presented to an adult may well be harmful when that very same content is shown to a child. One of the ways that social media companies unjustly enrich themselves at the expense of children is by promoting viral “challenges” among younger crowds. This genre of content has spread on platforms such as TikTok, Instagram, and YouTube and often involves children and teens participating in dangerous activities and self-harm. Notorious examples include the Tide Pod challenge, the blackout challenge, the cinnamon challenge, and the blue whale challenge, among many others. TikTok’s algorithm promotes videos with trending hashtags, which enables posters to get more views and engagement as these videos are pushed higher up in users’ feeds.

The virality of such challenges increases user engagement and is therefore highly profitable for platforms, which explains their support and promotion of such content. Given the clear harmfulness of these activities, however, the resulting enrichment should be considered unjust and stripped away from platforms. Anything less will keep such practices profitable and allow these harms to recur.

Platforms will not be surprised to find that this type of content is promoted to youth, nor will they be surprised to learn about the extent of the damage it causes. When a particular trend causes what is seen as “too much” damage, or generates “too much” negative publicity for the platform, TikTok has been known to remove the viral hashtag used to promote the challenge, substantially lowering its spread, or to accompany videos of dangerous challenges with a warning. A legal doctrine that prevents platforms from wrongfully becoming enriched at the expense of vulnerable groups is thus necessary to bring about real change in the way platforms design their algorithms.

Finally, the third category in which the doctrine of unjust enrichment can be applied to strip wrongful gains concerns cases in which platform behavior results in socially harmful acts. When should socially harmful personalization be considered unjust? The precise answer is complex and will depend greatly on the facts of each case. In our paper, we propose that the doctrine should evolve gradually from case to case as courts are faced with more instances of unjust enrichment driven by socially harmful platform practices. We use the case of Facebook and the Jan. 6 insurrection to illustrate the type of analysis we expect courts will perform. This discussion helps elucidate the factors courts would consider when judging socially harmful personalization in the context of an unjust enrichment claim.

Within such a framework, the nine weeks between the Nov. 3, 2020, presidential election and the Capitol insurrection on Jan. 6 are of particular interest. During that time, Facebook refrained from taking certain steps that its executives knew could have limited the spread of extreme, inciteful, and violent content posted on the platform. The insurrection had dire results. As Facebook’s own Oversight Board reported, “Five people died,” “lawmakers were put at serious risk of harm and a key democratic process was disrupted.” The full effects of the event on American democracy are still not known.

Facebook came out of the 2016 U.S. presidential election battered. It had not been able to prevent widespread attempts by foreign powers to use the platform to influence the outcome of the election. As part of Facebook’s efforts to do better in the period leading up to the 2020 election, it established a task force to police the platform, and its groups in particular. An investigation conducted by ProPublica and the Washington Post found that this task force was responsible for the removal of hundreds of groups containing violent and hateful content. Evidently, the company’s detailed preparations for the 2020 election paid off. The measures adopted by Facebook were believed to be largely successful in preventing a repeat of the harms encountered in the previous election. Having withstood the test of the election, Facebook breathed a sigh of relief and started going back to “normal,” rolling back many of the safeguards it had implemented before the election. Among these rolled-back measures was the task force established to ensure the integrity of the election: On Dec. 2, 2020, the members of the Civic Integrity Team were dispersed among other parts of the company.

During the nine weeks between the election and the insurrection, hundreds of thousands of posts appeared on Facebook attacking the legitimacy of the election and of Biden’s victory. These posts turned the groups in which they were posted into echo chambers, reflecting the calls of Trump supporters to use force to “prevent the nation from falling into the hands of traitors.” Posts actively called for violence, confrontation with government officials, and even for executions.

During this period, Trump used his profile page to perpetuate allegations that the election had been stolen and called on his supporters to join the planned rally on Jan. 6. That day, even as his supporters were violently storming the Capitol, Facebook allowed Trump to use the platform to perpetuate his false claims and express his understanding and support for the rioters, telling them they were “loved” and “very special.” By the time Facebook decided to act, during the week of Jan. 6, it was too late. The seeds had been sown and the ground was fertile for the violence that unfolded before a horrified nation. The harm had already materialized. The ProPublica and Post analysis found that the weeks between the election and the January insurrection were marked by a substantial drop in content removal by Facebook’s moderators.

The Post reported that “Facebook moved too quickly after the election to lift measures that had helped suppress some election-related misinformation.” One Facebook worker who had been part of the election task force summed up the company’s response by saying that “Facebook took its eye off the ball in the interim time between Election Day and Jan. 6. There was a lot of violating content that did appear on the platform that wouldn’t otherwise have.”

Following the events of Jan. 6, Facebook’s Oversight Board submitted 46 questions to the company. Facebook refused to answer some of them, including questions “about how Facebook’s news feed and other features impacted the visibility of Mr. Trump’s content; whether Facebook has researched, or plans to research, those design decisions in relation to the events of January 6, 2021.” These are precisely the types of questions courts can be expected to ask when implementing the doctrine of unjust enrichment in the context of platform enrichment stemming from socially harmful behavior. The conclusions of the board’s inquiry leave little room for optimism. Facebook only partially agreed to implement the board’s recommendations, citing steps that it intended to take in the future but stopping short of publicly agreeing to conduct an analysis of its role in the Jan. 6 insurrection. There can be no doubt that Facebook was aware of the harmful, extreme, violent, and provocative content being promoted by its recommendation algorithm.

This case clearly illustrates the broad societal harm caused by problematic platform practices, such as the use of harmful optimization metrics coupled with the dismantling of crucial, proven safety measures. Spreading disinformation to individuals who are susceptible to believing it fosters distrust and erodes users’ ability to differentiate between what is true and what is false. Pushing users to polarizing extremes is harmful because it creates an ever-increasing divide between users with differing initial viewpoints. It pushes people to adopt extreme positions and even ludicrous conspiracy theories. Platforms’ choices about what content to present to users lock them into filter bubbles, which preclude the meaningful discourse that is central to a functioning and flourishing democracy. Users locked into an echo chamber, surrounded by people reinforcing their positions and even pushing them to further extremes, often can no longer differentiate between reality and conspiracy and may be pushed to take extreme, violent actions offline.

The Facebook case illustrates a culmination of factors supporting a conclusion of unjust enrichment and demonstrates the types of factors that courts may consider in judging the injustice of platform enrichment. Importantly, this case also illustrates the factors that matter less to the analysis of an unjust enrichment claim. Most notably, causation, which is a key element of any tort claim, is not a central element of a claim based on unjust enrichment. To hold Facebook liable for its harmful behavior, we need to show that its enrichment was unjust. Three central factors contribute to the unjustness of Facebook’s enrichment: (a) Facebook’s intentional choice of its recommendation algorithm, despite the knowledge that it actively promoted false and dangerous content; (b) the removal of existing safeguards; and (c) the great potential risks involved, coupled with the harmful outcome. These factors, together with clear monetary gains, are a grave concern and justify a legal response, even if it cannot be positively proved that Facebook’s conduct “caused” the insurrection.

This proposal does not seek to hold platforms accountable for harmful content itself. Indeed, Section 230 of the Communications Decency Act precludes such responsibility. The harms we focus on stem instead from the promotion of certain content to particularly vulnerable audiences. This past term, the Supreme Court was called upon to decide whether the immunity provided by Section 230 extends to platforms’ targeted recommendations. The Court resolved the case without answering this question, but it likely has not heard the last of Section 230. In the meantime, the doctrine of unjust enrichment, as applied to platforms’ deceptive design practices, may provide an alternative way to hold platforms partially responsible for certain types of harmful practices.

Calculating the precise extent of the enrichment involved in each case will not be a simple feat. Luckily, however, courts conduct these types of calculations on a daily basis, for example, when they calculate lost earnings in tort claims or enrichment in contractual relationships. The method of calculation will depend on the case at hand. For example, revenue from ads presented alongside harmful content recommended to teens, or played before such videos, may be considered unjust enrichment. Some cases may be even simpler, such as profits generated from ads presented in a discriminatory fashion. Over the years, courts have developed tools and guidelines to help calculate damages or enrichment in very complicated circumstances and can be expected to do the same in cases of unjust platform enrichment.
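To make the arithmetic concrete, the following is a minimal, purely hypothetical sketch of the kind of calculation a court-appointed expert might perform, assuming access to impression-level ad logs and a set of content items a court has already deemed wrongfully promoted. The data structures, field names, and figures here are illustrative assumptions, not a description of any platform’s actual systems or of a settled methodology.

```python
# Hypothetical illustration only: summing the ad revenue a platform earned
# alongside content adjudicated as wrongfully promoted. All names and
# numbers are assumed for the sake of the example.
from dataclasses import dataclass


@dataclass
class AdImpression:
    ad_id: str
    adjacent_content_id: str  # the recommended item the ad ran alongside
    revenue_usd: float        # revenue attributed to this single impression


def disgorgeable_revenue(impressions: list[AdImpression],
                         wrongful_content_ids: set[str]) -> float:
    """Total revenue earned next to content deemed wrongfully promoted."""
    return sum(imp.revenue_usd
               for imp in impressions
               if imp.adjacent_content_id in wrongful_content_ids)


# Toy example: two of three impressions ran next to a flagged challenge video.
log = [
    AdImpression("ad-1", "challenge-video-9", 0.012),
    AdImpression("ad-2", "news-clip-3", 0.008),
    AdImpression("ad-3", "challenge-video-9", 0.015),
]
flagged = {"challenge-video-9"}
print(f"Disgorgeable revenue: ${disgorgeable_revenue(log, flagged):.3f}")
```

The real dispute in litigation would be over the inputs (which content counts as wrongfully promoted, and how revenue is attributed to impressions) rather than the summation itself, which is why courts’ existing experience apportioning gains and losses matters more than the mechanics of the calculation.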

This proposal would be a game changer for the legal system’s ability to contend with the current crisis, and it enjoys several significant advantages. First, the law of unjust enrichment can ensure effective deterrence by removing the profits that platforms obtain through their wrongful activities. Second, the harms of platform practices are often difficult to identify and measure, so harm-based remedies are often unavailable or impossible to implement; platform profits, by contrast, are all too real and much easier to measure. Third, the doctrinal tests embodied in the law of unjust enrichment offer the flexibility required to regulate the ever-changing landscape of platform activity. Fourth, the law of unjust enrichment draws on the comparative advantages of diverse actors, including private plaintiffs, courts, regulators, and experts, and can therefore generate effective and informed legal action.

The platform crisis has pushed democracies toward the edge of the precipice. As long as harmful personalization practices generate profits, platforms will keep pushing. Something must be done soon if we are to pull ourselves back and survive this crisis. A fundamental change to platforms’ incentive structure is required. Our proposal to change that incentive structure by disgorging wrongfully gained profits is not only necessary as a matter of policy but also follows naturally from existing doctrines of the law of unjust enrichment.


Ayelet Gordon-Tapiero is a Postdoctoral Fritz Fellow at Georgetown University's Initiative on Tech & Society. Her research focuses on the intersection of Law and Technology.
Yotam Kaplan is an Associate Professor at Bar Ilan University Law School (SJD Harvard Law). He recently received a five-year ERC grant to study the use of the doctrine of unjust enrichment in the regulation of broad societal issues.
