Combatting Deepfakes through the Right of Publicity

Jesse Lempel
Friday, March 30, 2018, 8:00 AM

Fake news is bad enough already, but something much nastier is just around the corner: As Evelyn Douek explained, the “next frontier” of fake news will feature machine-learning software that can cheaply produce convincing audio or video of almost anyone saying or doing just about anything. These may be “digital avatars” built from generative adversarial networks (GANs), or they may rely on simpler face-swapping technology to create “deepfakes.” The effect is the same: fake videos that look frighteningly real.

Bobby Chesney and Danielle Citron recently sounded the alarm on Lawfare about the threat to democracy from “deepfakes,” lamenting “the limits of technological and legal solutions.” They argue that existing law has a limited ability to force online platforms to police such content because “Section 230 of the Communications Decency Act immunizes from (most) liability the entities best situated to minimize damage efficiently: the platforms.”

But in fact, a loophole built into Section 230 immunity—the intellectual property exception—could be helpful in combating deepfakes and other next-generation fake news. Victims of deepfakes may successfully bring “right of publicity” claims against online platforms, thereby forcing the platforms to systematically police such content. At a minimum, such right-of-publicity claims are likely to generate crucial litigation.

The worst instances of victims being targeted by fake news fit comfortably within classic tort claims, such as defamation. But while a defamation claim could be brought against the individual purveyor of fake news, suing every internet troll is wildly impractical. In most cases, the only effective target of a defamation suit for fake news would be the online platform, like Facebook or Twitter. The catch is that Congress categorically immunized platforms from such liability in 1996 with Section 230(c)(1) of the Communications Decency Act, stating: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, when a Facebook poster shares defamatory fake news, the poster (the “information content provider”), and not Facebook, will be treated as the publisher who is liable for the tort.

But Section 230 has a few notable exceptions, including preserving liability for a violation of any “Federal criminal statute,” §230(e)(1), and the caveat that “[n]othing in this section shall be construed to limit or expand any law pertaining to intellectual property,” §230(e)(2). The intellectual property exception creates an opening—even if only a crack—for holding online platforms liable for certain kinds of egregious deepfakes posted by third parties on their sites.

Copyright claims fall within this exception to Section 230 immunity, but several factors limit copyright’s usefulness in fighting deepfakes and other digital frauds. It may be difficult to identify the owner, or prove ownership, of the underlying content from which the fakes are synthesized. Or the owners of the images, who are often not the person featured in the video, may have little interest in filing a complaint. Finally, in some circumstances, a fake video may be deemed “transformative” and therefore shielded from copyright liability under the fair use doctrine.

But there’s another form of intellectual property that doesn’t turn on ownership of a particular image or work: the “right of publicity,” which, as one court explained, “is an intellectual property right of recent origin which has been defined as the inherent right of every human being to control the commercial use of his or her identity.” The right of publicity is a state law claim recognized in most states, whether by statute or common law (with slight variation among states), and is frequently invoked by celebrities seeking to prevent a business from unauthorized use of their images or identity in an advertisement. (For greater detail, see Jennifer Rothman’s Roadmap to the Right of Publicity.)

Could a victim of a deepfake posted on Facebook or Twitter bring a successful right-of-publicity claim against the platform for misappropriating “the commercial use of his or her identity”? This is a tough question that has not yet been tested in the courts. Such a claim would need to clear three basic hurdles: (1) fitting the right of publicity into the Section 230 intellectual property exception; (2) counting deepfakes as “commercial use” of identity for right-of-publicity claims; and (3) overcoming First Amendment protections for free speech.

First, the biggest obstacle to fitting the right of publicity within Section 230’s intellectual property exception is the Ninth Circuit. The court has construed “intellectual property” in Section 230 to mean only “federal intellectual property.” So a right-of-publicity claim, which is grounded in state law, cannot pierce Section 230 immunity in the Ninth Circuit—which covers a large chunk of the country. But other courts, including the First Circuit (in dicta) and the Southern District of New York, have disagreed with the Ninth Circuit’s view and applied the intellectual property exception to both federal and state claims.

Yet even if a state law intellectual property claim can overcome Section 230, one might argue that the right of publicity is a privacy issue, not an intellectual property right at all. But the Supreme Court, in its 1977 decision in Zacchini v. Scripps-Howard Broadcasting Co., described the right of publicity as “closely analogous to the goals of patent and copyright law.” Several federal courts have since indicated or expressly held that the right of publicity is an intellectual property right within the meaning of the Section 230 exception. Outside the Ninth Circuit, then, many right-of-publicity claims would likely be able to pierce Section 230 immunity.

Second, the right of publicity protects only against the commercial use of one’s identity, most commonly in advertisements. Fake news, especially of a political nature, is hardly commercial. But it’s difficult to deny that online platforms like Facebook and Twitter are making commercial use of such posts. After all, their business model depends on clicks and views.

In a recent case in which a business owner sued Facebook for placing ads next to an unauthorized page critical of his business, a California state judge ruled that the right-of-publicity claim was viable under Section 230’s intellectual property exception because “Facebook’s financial portfolio is based on its user base.” Commentators reacted to this decision with alarm, and the appellate court eventually reversed on the grounds that Facebook merely “displayed unrelated ads from Facebook advertisers adjacent to” images of the plaintiff “posted by third parties,” so the platform did not actually “use[] his name or likeness in any way.”

Yet Facebook did use the person’s likeness by allowing it to be posted on its site, over which it exercises editorial control like newspapers and magazines—and that editorial control is the very reason Section 230 was needed to immunize the platforms from ordinary tort claims. The fact that the likeness was “posted by third parties” matters only for Section 230 protection, which expressly excludes intellectual property claims. While in this particular case Facebook would likely be shielded from liability under the broadly construed “newsworthy” and “public interest” exceptions to the right of publicity (discussed just below), it’s easy to imagine a scenario in which a platform could be held liable under this theory. For example, imagine a video convincingly depicting Donald Trump or his next political opponent performing some heinously illegal or humiliating act. Such a video could be viewed billions of times. The online platforms hosting it would be massively cashing in on the victim’s misappropriated identity, making a right-of-publicity claim likely appropriate.

But ultimately, the most meaningful constraint on the right of publicity is likely to be not Section 230 but the First Amendment. Exceptions to the right of publicity were developed by the New York state courts in the early 1900s to avoid “an unconstitutional interference with the freedom of the press,” as William Prosser recounted in a passage approvingly quoted by the U.S. Supreme Court. Likewise, the California Supreme Court affirmed in Comedy III Productions, Inc. v. Gary Saderup, Inc. that “the right of publicity cannot, consistent with the First Amendment, be a right to control the celebrity’s image by censoring disagreeable portrayals.” (For analysis of the right of publicity and the First Amendment, see these articles by Eugene Volokh and Eric Johnson.)

For this reason, New York state courts have held that the state’s statutory publicity protections “do not apply to reports of newsworthy events or matters of public interest.” Other jurisdictions have followed suit, either through the courts or, as in California, with explicit statutory language. Several courts point to the Restatement (Third) of Unfair Competition §47, which explains that “use of a person’s identity primarily for the purpose of communicating information or expressing ideas is not generally actionable as a violation of the person’s right of publicity.”

But these exceptions are not unlimited. The New York Court of Appeals has explained, most recently in Messenger v. Gruner + Jahr, that a book, article, or movie about a person “may be so infected with fiction, dramatization or embellishment that it cannot be said to fulfill the purpose of the newsworthiness exception.” This aptly describes the most sinister deepfakes and similar next-generation fake news specimens, which will be “so infected with fiction” that they lie beyond the newsworthiness exception to the right of publicity.

Nor will they be protected directly by the First Amendment under New York Times v. Sullivan and Hustler v. Falwell, even when targeting politicians or other public figures: Such disinformation is generally published with “actual malice,” that is, with knowledge of falsity or with “reckless disregard of whether it was false or not.” If we assume that the online platforms will have the technological capability to distinguish between genuine videos and fakes, then failing to remove the fakes within a reasonable time period would seemingly rise to reckless disregard.

But if there is no effective authentication technology, then the potential liability of online platforms would shrink to almost nothing. That’s because the constitutional “breathing space” of free speech, as articulated by the Supreme Court in New York Times and related cases, would likely preclude forcing online platforms to impose blanket censorship just because the fakes are too well disguised. These precedents teach that any liability rule leading to a substantial “chilling effect” on free speech would be unconstitutional. Dissenting in the Zacchini case, Justice Powell warned of the “disturbing implications” of “media self-censorship” in the shadow of excessive right-of-publicity liability. That dystopia would become reality if the platforms faced sweeping liability for violations of publicity rights without an effective tool for identifying fake videos.

Admittedly, the right of publicity is a strange and clumsy device with which to disarm next-generation fake news. It was designed to ward off a wholly distinct evil, and it’s unclear whether it could actually adapt to this new purpose. An arguably better approach would be for Congress to directly amend Section 230 by narrowly repealing online platforms’ immunity from tort claims in deepfake cases where the platforms fail to use the best available authentication technology, or to pass other legislation dealing with the issue. But unless and until that happens, the right of publicity may be the best recourse for victims of the next wave of fake news.

The more troubling issue with the right-of-publicity approach is whether we really want to turn Facebook and Twitter into, as Evelyn Douek put it, “the arbiters of truth.” That thought will turn stomachs. But thankfully, the right of publicity would not push that far. The online platforms would be liable for—and therefore would preemptively police—only commercial misappropriation of identity, which does not cover the vast ecosystem of garden-variety lies that presently thrive online. For instance, the right of publicity does not reach “fleeting and incidental” uses of one’s image, and it cannot be enforced in a way that suffocates the First Amendment’s generous “breathing space.” The courts will have to figure out how to make that work.

The best approach would be for courts to acknowledge very broad newsworthy and public interest exceptions for online content, allowing the platforms to host ordinary unverified information without fear of liability—but enforcing the right of publicity against technologically deceptive impersonations, like deepfakes, that generate revenue for the platforms. If this rule were adopted, the online platforms would not be required to determine whether any posts state true or false claims, but only whether the content is technologically genuine or falsified—that is, whether the video or audio has been artificially manipulated in some significant way that is fundamentally deceptive.

Of course, this approach will work only if online platforms develop the capability to efficiently flag fakes—either with digital forensic tools that can reliably spot the fakes or, as Herb Lin has argued, by relying on a system of “digital signatures.” It’s impossible to know whether effective authentication technology will emerge. But if and when that technology is available, the law should compel online platforms to use it to the best of their ability. Ultimately, any viable legal solution for next-generation fake news through the right of publicity will hinge both on the capacity of the online platforms to efficiently flag the fakes and on the courts’ ability to enforce the right of publicity in a way that protects free speech.
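To make the “digital signatures” idea more concrete, here is a minimal sketch of how such a scheme could work in principle: a capture device signs the video bytes it records with a private key, and a platform later verifies that signature, so any subsequent manipulation of the file is detectable. This is an illustration, not Lin’s actual proposal; the function names and key-handling details are hypothetical, and the example uses the Python cryptography library.

```python
# Toy illustration of signature-based video authentication.
# Requires the third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_at_capture(device_key: Ed25519PrivateKey, video_bytes: bytes) -> bytes:
    """Run inside the capture device: sign the video as recorded."""
    return device_key.sign(video_bytes)


def verify_on_upload(
    public_key: Ed25519PublicKey, video_bytes: bytes, signature: bytes
) -> bool:
    """Run by the platform: True only if the bytes are untampered."""
    try:
        public_key.verify(signature, video_bytes)
        return True
    except InvalidSignature:
        # Any alteration of the video (e.g., a face swap) breaks the signature.
        return False


# Demo with stand-in bytes for a video file.
device_key = Ed25519PrivateKey.generate()  # would live in camera hardware
original = b"raw video stream bytes"
signature = sign_at_capture(device_key, original)

public_key = device_key.public_key()  # distributed via some trusted registry
print(verify_on_upload(public_key, original, signature))              # True
print(verify_on_upload(public_key, original + b"edit", signature))    # False
```

A real system would have to solve the hard problems this sketch ignores, such as distributing trusted device keys and handling the re-encoding that legitimate videos routinely undergo.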

The author thanks Professors Rebecca Tushnet and Eric Goldman for helpful conversations about the ideas in this post.


Jesse Lempel is a student at Harvard Law School and an editor of the Harvard Law Review. His writing has appeared in the Harvard International Law Journal Online, Haaretz, Tablet Magazine, and other publications.
