
Grammarly Lawsuit Shows Existing Laws Can Combat Deepfakes

Jennifer E. Rothman
Thursday, April 9, 2026, 9:53 AM

Calls for new deepfake laws overlook the strength—and breadth—of existing legal protections.

Artistic representation of deepfakes. (ApolitikNow/Flickr, https://www.flickr.com/photos/92457334@N04/50009305853; CC BY-NC-SA 2.0, https://creativecommons.org/licenses/by-nc-sa/2.0/).

Debates about synthetic media have been dominated by concerns about deepfakes—audio and video fabrications that appear to be authentic recordings when they are not. These deepfakes threaten to erode trust in everything from elections to court proceedings to intimate relationships. They also threaten people’s livelihoods. With the recent dramatic improvement in the accessibility and quality of generative artificial intelligence (AI), the locus of concern has expanded to virtually every context. The most recent flashpoint is not a forged video of a world leader or a sex tape, but something much more benign: a writing assistant.

In early March, Wired reported that Grammarly, the AI-powered software that promises to help guide and generate users’ writing, offered users the ability to edit text “in the style” of identifiable journalists and scholars without their consent, allegedly singling out specific people by name and thereby signaling their participation in or endorsement of the service. What might once have seemed like a parlor trick has now become the basis for litigation, raising foundational questions about identity, attribution, and control in an age of generative-AI authorship. One of the key tools to combat such overreaching impersonations is the right of publicity—a legal doctrine that gives individuals control over the use of their name, likeness, voice, and other recognizable aspects of identity when used without authorization by others. The right is governed primarily by state law.

In a recent New York Times opinion essay, Julia Angwin, a reporter and opinion contributor to the newspaper, discusses her role as lead plaintiff in the suit against Grammarly. She argues that Grammarly’s alleged conduct exposes a significant gap in existing law. While acknowledging that right of publicity doctrines have long protected against unauthorized uses of a person’s name or likeness—and even that she has such a claim in this very case—Angwin suggests that the existing legal frameworks are ill-suited to the challenges posed by generative AI. In her view, these generative-AI tools underscore the need for new, comprehensive federal legislation—such as the proposed NO FAKES Act—to safeguard individuals against unauthorized digital replication. Her argument situates the Grammarly controversy within a broader policy moment: a perceived mismatch between legacy legal regimes and rapidly evolving technological capabilities.

This call for new legislation is somewhat mystifying in the context of Angwin’s allegations and lawsuit against Grammarly. As she notes in her essay and in the lawsuit itself, if her allegations are accurate, Grammarly’s actions would be a clear violation of her right of publicity. Angwin accurately, and importantly, notes that these protective right of publicity laws have a deep history, dating back more than a century.

In fact, the Grammarly dispute illustrates not the inadequacy of existing law but its underappreciated breadth. Contrary to Angwin’s suggestion that only a subset of states recognize publicity rights, virtually every state provides some form of protection—whether through statute, common law, or both. These doctrines are not limited to celebrities or to narrow commercial contexts; most state laws extend to ordinary individuals and a wide range of unauthorized uses. Some of the confusion over the existence and scope of these laws stems from misunderstandings about this body of law. Many of these laws are governed by common law, rather than statutes. And while not every state has a statutory right of publicity, most have a common law right against misappropriation of a person’s identity, often considered part of those states’ privacy laws. In the handful of states that have not yet addressed the issue, none has rejected such a right. The rare states that rejected a common law right, such as New York, subsequently adopted a statutory right; many states, such as California, have both common law and statutory protections. The complexity of this legal landscape may obscure its reach, but it does not negate it.

More significantly, both the Grammarly complaint and Angwin’s op-ed overlook a host of additional, potentially powerful claims, including federal ones. For plaintiffs like Angwin, federal causes of action under the Lanham Act, including trademark infringement and false endorsement, appear especially promising if consumers could be misled into believing that she or others participated in or approved a product or service. Depending on how the underlying technology operates, unresolved questions in copyright law may also come into play, particularly regarding the permissibility of training AI systems on protected works. At the state level, the menu of possible claims is broader still, including defamation, false light, emotional distress, fraud, impersonation torts, state trademark and unfair competition violations, among other claims.

Underlying my main point—that a host of existing claims are already directly available to Angwin and others—is a more fundamental concern: Before concluding that new legislation is necessary, courts, politicians, and commentators should take stock of the considerable tools already available, tools that may offer plaintiffs a strong likelihood of success. Given all the existing law, there should be a high bar for advocating something new or different. Most importantly, legislators, advocates, and those affected—which is everyone—should not want to put in place a federal law that makes things worse.

In considering the many existing federal and state avenues for legal solutions to deepfakes, I have developed some important distinctions and considerations that can help guide our evaluation of existing and proposed laws as tools to combat deepfakes. In particular, I highlight the need to keep our eye on two primary considerations: (a) whether the laws ensure that uses are meaningfully and knowingly authorized by the person depicted and (b) whether the laws protect the public from being deceived as to the deepfakes’ authenticity. Some of the new legislation proposed to combat AI-generated deepfakes fails at addressing these two considerations, which risks making things worse for those depicted in deepfakes while also failing to protect the public from being deceived.

In her piece, Angwin advocates for the passage at the federal level of the proposed NO FAKES Act to assist in lawsuits like hers against Grammarly. The NO FAKES Act would create a federal digital replica right—which might not apply to Angwin’s claim, as it is limited to uses of a person’s “voice or visual likeness,” not just their name. Regardless, the current draft of the bill could actually work against the very concerns Angwin raises about unauthorized uses of her identity. This is the case because the bill (as currently drafted) allows others to authorize and control a person’s voice and likeness without adequate safeguards to ensure that the person depicted gave knowing and specific consent for the uses. The bill also has a stated objective of protecting the public from deception but has no provision against deceiving the public and seems instead to provide legal cover for doing so. I’ve testified in Congress and written about some of these concerns with the current version of the bill and related laws. Others have also pointed to the bill’s burdening of smaller platforms and to serious free speech concerns it raises. The risk, then, is not just redundancy but regression. A new federal regime could undermine or displace more protective state laws. And if it leaves them in place, it could further complicate the array of existing and conflicting rights that constitute an “identity thicket” that already challenges courts and litigants.

Federal laws could make things better in a variety of ways, and a revised NO FAKES Act could be a vehicle to do so. But what is currently on the table works at cross purposes with the crucial goals of protecting individuals from losing control of their identities and protecting the public from deception. 

I do appreciate Angwin’s instinct to look beyond traditional publicity rights and to consider copyright as a broader frame for thinking about how to control uses of one’s identity. The idea of using copyright as a vehicle for protecting one’s name and likeness has deep historical roots. But here, there are important current doctrinal limits: A person’s name or identity is not considered a work of authorship, and unless a person’s image or voice is captured in a fixed work—such as a recording or film—it falls outside copyright’s domain. Copyright nevertheless may be a tool to protect a person’s identity in various ways, particularly if digital replicas are considered copyrightable, but such protections will work only if the underlying person retains control of the copyright, something the current system is not designed to do. Otherwise, copyright too could be a tool to undermine rather than support those who, like Angwin, have had their identity used without specific and knowing permission. Protection against unauthorized uses of a person’s identity therefore cannot simply be modeled on or rooted in copyright law without significant adjustments.

In sum, Angwin likely has a strong case against Grammarly under current laws—one that could be strengthened further by adding a number of existing claims. But the strength of that legal case undermines her call for legal reform. Angwin’s understandable shock and dismay at having her identity used improperly by Grammarly may have led her to overlook the breadth of options available to her, the pitfalls of the NO FAKES Act as currently drafted, and, more generally, the dangers of rushing to legislate around fast-changing technology that existing law may already be able to address. It is also dangerous to collapse all uses of stylistic imitation into actionable harm. The First Amendment and fair use should continue to allow room for writing (or singing or acting) in another’s style where there is no confusion about source or endorsement. Here, though, it seems that Grammarly went much further—marketing and creating a product by invoking a real person’s identity, implying their participation, approval, and perhaps even authorship without consent. Such an overstepping violates numerous existing laws and does not provide a good case study for why something new is required.


Jennifer E. Rothman is the Nicholas F. Gallicchio Professor of Law at the University of Pennsylvania and holds a secondary appointment at the Annenberg School for Communication. She is globally recognized for her scholarship in the field of intellectual property and privacy law, and is the leading expert on the right of publicity and personality rights. Professor Rothman is the Reporter for the Uniform Law Commission Study of the Protection of Name, Image, and Likeness Rights, an elected member of the American Law Institute, and an adviser on the Restatement of the Law (Third) of Torts: Defamation and Privacy.

Rothman’s book, The Right of Publicity: Privacy Reimagined for a Public World, was published by Harvard University Press and has been described as the “definitive biography of the right of publicity.” Rothman is the author of numerous essays and articles, including most recently Postmortem Privacy, published in the Michigan Law Review, and Navigating the Identity Thicket: Trademark’s Lost Theory of Personality, the Right of Publicity, and Preemption, published in the Harvard Law Review. Her 2024 Donald C. Brace Lecture, Copyrighting People, appears in the 2025 Journal of the Copyright Society, and her recent lecture given at Columbia, Reframing Deepfakes, is forthcoming in its Law & Arts Journal.

Rothman has testified in Congress multiple times, most recently on intellectual property, personality rights, and artificial intelligence. Rothman is also the creator of Rothman’s Roadmap to the Right of Publicity.

