
Copyright Should Not Protect Artists From Artificial Intelligence

Simon Goldstein, Peter N. Salib
Thursday, October 23, 2025, 10:10 AM
The purpose of intellectual property law is to incentivize the production of new ideas, not to function as a welfare scheme for artists.
(Image: Deepak Pal/Flickr, https://www.flickr.com/photos/158301585@N08/46085930481, CC BY 2.0)


On Sept. 25, Anthropic agreed to pay $1.5 billion to settle a class-action lawsuit from book publishers and authors. The lawsuit alleged that Anthropic violated intellectual property rights by training its artificial intelligence (AI) models on millions of books downloaded without permission from the internet.

Some have complained that $1.5 billion is too little, while others have said it is far too much. To complicate matters, formal copyright law is still murky here. The judicial order that precipitated Anthropic’s settlement allows some kinds of AI training on copyrighted materials (for example, purchased books), but not others (for example, pirated books). And it is silent on a major question: Is training on publicly available websites an infringement? Since the law remains indeterminate, first-principles thinking about the purpose of intellectual property can shed light on what courts should do going forward.

The Constitution explicitly outlines the purpose of intellectual property. Article I endows Congress with the power to create copyrights and patents, “To promote the Progress of Science and useful Arts.” Intellectual property, including copyright, is about the creation of new ideas. As described below, this theory of copyright is also supported by the economic analysis of law.

In practice, however, popular discussions of AI and copyright often implicitly assume another purpose for intellectual property: protecting artists and content creators facing the threat of AI-led automation. But copyright isn’t, and shouldn’t be, a welfare program for artists.

This is not because we oppose welfare programs for humans facing AI automation. On the contrary, we favor them. But intellectual property is a poor mechanism for such welfare, because it is highly selective in both its target and its funding source. Moreover, compared with other potential welfare mechanisms, like universal basic income, intellectual property creates unusual amounts of social loss.

Another common theme in discussions of AI and copyright is fairness to content creators. Content creators helped AI labs develop their models, and so they deserve a fair share of the profits. But, again, this is not the purpose of intellectual property. Copyright law routinely allows for exactly this kind of unfairness, again because the point of intellectual property is to incentivize the production of new ideas.

For these reasons, and as described in more detail below, copyright protection against AI training is a bad idea, because it would likely suppress, rather than promote, the net production of new ideas. 

Intellectual Property Rights Incentivize Ideas

New ideas have enormous societal value. An idea can trigger a political revolution or lead to the eradication of a horrific disease.

But the production of new ideas also suffers from a structural economic problem: Ideas are non-rival. After someone has worked to discover an idea, everyone else can use it at trivial cost. Developing a new medicine requires billions in investment, but once the medicine is developed, the cost of manufacturing each pill is low. The same applies to a book: It costs an author a lot of time to write a book, but once they’ve finished it, each copy costs nearly nothing to print.

In other words, good ideas are expensive to produce, but cheap to use.

One natural response might be, “Great!” Perhaps everyone should be able to take any medicine and read any novel for just the cost of the pill or the printing. The problem with that plan is that it ensures that no new medicines are discovered and no new novels are written. Absent some way to recoup the cost of producing the idea, scientists will have no reason to invent and authors will have no reason to write.

Intellectual property rights are one legal solution for this problem. They grant idea-creators the right to restrict how everyone else may use their ideas. Those who wish to use the idea in an otherwise-infringing way—that is, those who would benefit from using the invention, reading the novel, and so on—must then pay the author for the use. As a result, the more people value the protected idea, the more the idea-creator is paid.
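To make this logic concrete, here is a stylized sketch of the standard recoupment argument. The symbols (F for the fixed cost of creating a work, c for the marginal cost of a copy, p for the price, and N for the number of copies sold) are illustrative assumptions of ours, not anything drawn from copyright doctrine:

```latex
% A stylized sketch of the recoupment logic (symbols are illustrative assumptions):
%   F = fixed cost of creating the work (years of writing, R&D, etc.)
%   c = marginal cost of each additional copy
%   p = price the rights-holder can charge per copy
%   N = number of copies sold
%
% Creation is privately worthwhile only if markup revenue covers the fixed cost:
\[
  N\,(p - c) \;\geq\; F
\]
% Without intellectual property, free copying drives the price down to marginal
% cost, so the markup vanishes and the fixed cost can never be recouped:
\[
  p \to c \quad \Longrightarrow \quad N\,(p - c) \to 0 \;<\; F
\]
```

On this stylized view, exclusive rights temporarily hold the price above marginal cost so that the first inequality can be satisfied, at the cost of pricier access in the interim. That trade-off is exactly what the limits described next are designed to manage.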

Crucially, however, intellectual property rights are not absolute. On the contrary, they are carefully limited in a variety of ways in order to strike a balance. The creators of today’s ideas must be paid. But society must be able to eventually use these non-rival goods freely, to our great collective benefit. Even more important, today’s ideas must, in the end, be freely available to tomorrow’s idea-creators as the foundation for new ideas. Thus, patents expire; and the copying of books and films is allowed for many “fair uses,” such as criticism and education.

The question of whether authors should be paid when an AI company trains on their works therefore depends on whether such payments would, on balance, produce more or fewer ideas in the long run.

The End of Intellectual Property Rights?

AI complicates the standard economic story about idea production. Intellectual property rights were designed for a world where new ideas are expensive to produce, and cheap to distribute. With AI, this basic structure may change. Frontier models such as Anthropic’s Claude can produce new content (say, a poem) with the click of a button. The marginal cost of the new AI-generated poem is roughly one cent.

When the fundamental economic logic of idea generation changes, intellectual property law too must adapt. If AI allows for incredibly cheap production of ideas, intellectual property laws are no longer needed to shield the idea-generator from competition in order to recoup their costs. 

In the first instance, this suggests that new ideas produced by Claude do not themselves need intellectual property protection. The law is already headed there. The U.S. Copyright Office has stated that the outputs of generative AI systems are not on their own copyrightable. Nor, according to the U.S. Patent and Trademark Office, can an AI-created invention be patented.

Human content creators now must compete against Claude and friends in the production of ideas. Insofar as AI can produce new ideas as well as or better than humans, but at far lower cost, the social justification for granting humans a monopoly on their ideas falls away.

The question, then, is whether AI has now advanced sufficiently that intellectual property monopolies are no longer needed to incentivize humans’ inventiveness. In some areas, AI has clearly surpassed human abilities. Consider DeepMind’s AlphaFold, which can read the sequence of amino acids comprising a protein and, from it, produce a 3D model of the protein. Before AlphaFold, the only way to learn a protein’s 3D shape was for humans to synthesize it and then laboriously image it using techniques such as X-ray crystallography—a process that took months or even years. AlphaFold completes this task in just a few minutes.

Today, there is little economic justification for granting any human scientist monopoly rights over the 3D structure of any newly modeled protein. The 3D structure is new information, and it’s valuable for society. But it can be obtained quickly and cheaply from AI.

The situation with Claude, books, and authors, however, is murkier. So far, frontier models’ writing is arguably far worse than that of the best humans, especially with long outputs such as novels. Thus, at least for now, there still seems to be clear social value in protecting human authors’ copyrights in at least many written works. Without such protections, we humans would have no great new novels to read and enjoy. We’d be stuck consuming AI slop. 

But even this conclusion does not resolve the central question in the Anthropic suit. The question there is not whether Anthropic—or you or we—may read and enjoy the authors’ works without paying. To consume a work, one must pay for the copy. The question in the litigation is whether one may copy the works to train an AI.

Here again, the right answer to the question will be whatever creates the best incentives. There are two possibilities. Either Claude is bad at (say) poetry, or Claude is good at it. As long as Claude is bad at poetry, poets don’t need intellectual property protection against Claude’s training. People will continue paying to consume copies of human-produced writing, instead of (or in addition to) AI slop. But once Claude is good at it, the state has no interest in protecting human poets. For this reason, we see no strong economic rationale for granting intellectual property protection against training.

In principle, there could be more complex scenarios that would support intellectual property protection against training. Imagine that AI systems could outperform human poets in any given style, but that AI systems couldn’t themselves invent a style. Here, the only way to advance the art of poetry would be for humans to write poems in a new style. But without intellectual property protection against training, there would be no incentive for humans to do so, and the art of poetry would stagnate.

Alternatively, even if AI were just as innovative as humans (or more so), AI innovation could end up being expensive. This could favor granting intellectual property protections for AI-generated inventions, for human-generated ones, or both, depending on the relative cost. Frontier LLM-based AI systems can be expensive to run, especially when they are set to solving very difficult problems. For example, the best performance of OpenAI’s o3 on the ARC-AGI benchmark required thousands of dollars’ worth of compute per question. Humans could do it for a few bucks.
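To see why relative cost matters here, consider a rough back-of-the-envelope version of that comparison. The specific dollar figures below are assumed for illustration; the reported facts are only “thousands of dollars” versus “a few bucks”:

```latex
% Illustrative arithmetic only; the exact figures are assumptions, not reported facts.
%   Assume an AI system costs about $3,000 of compute per benchmark question,
%   while a human expert costs about $5 per question:
\[
  \frac{\text{cost}_{\text{AI}}}{\text{cost}_{\text{human}}}
  \;\approx\; \frac{\$3{,}000}{\$5} \;=\; 600
\]
% While this ratio stays large, humans remain the cheaper source of such ideas,
% and the recoupment rationale for protecting human work persists.
```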

In practice, however, such scenarios are far from guaranteed. As AI catches up to, and surpasses, humans at the frontier of science and useful arts, AI systems will likely innovate, just as humans do. And the cost to run frontier AI models is already falling rapidly. OpenAI’s cost per million tokens, for example, fell 99 percent between 2023 and 2025.

Notice also that, even if AI systems struggle to innovate, that shortfall would not support copyright protections against AI training for all human-produced works. Only new, innovative works need to be incentivized. A copyright regime narrowed substantially to resemble patent law—under which only provably “novel” works get protection—could accomplish that.

But the logic can go further. Once frontier AI companies become the primary consumers of novel human ideas, intellectual property law can fall away entirely. Even in a world where humans have no copyright incentive to produce novel works for use in AI training, they could easily be given a contract incentive to do so. That is, AI companies in need of novel, high-quality human training data can simply pay the relevant humans to produce it. This is already happening. AI companies have paid millions of humans to label or create new training data for their frontier models. Some such work is low-skill and low-pay. But already, demand and compensation are growing for human-produced data requiring expertise, creativity, or other scarce inputs. The wages that AI companies would pay true innovators for the production of genuinely novel ideas would, then, likely be quite high.

One more note on the power of contract law in the AI economy: Like humans, AI companies need incentives to produce AI systems that will, in turn, produce novel poetry, visual art, music, and more. But the incentive here need not necessarily come from intellectual property. Poetry is non-rival in that, once written, a single poem may be enjoyed by everyone at no cost. But there is no law that says everyone must or even will wish to read the original poem.

In a world where new works of great poetry are cheap and abundant, contract law can do the work that copyright does today. Rather than one Whitman laboring a lifetime over one “Leaves of Grass” in hopes of compensation via millions of readers, one Claude will write millions of works on par with “Leaves of Grass,” each personalized for one or two readers, with the labor compensated—and then some—by a $20/month subscription fee.

Intellectual Property Is Not Welfare 

Intellectual property should incentivize the production of ideas—not provide welfare for artists.

Narrowing, eroding, or even ending intellectual property rights for human content creators would have a cost—even in a world where AI produced far more high-quality content than humans ever could. It would mean that human content creators could not use intellectual property as a shield from the threats of AI automation. Authors, in this scenario, would be forced to compete directly on quality with the outputs of Claude. If Claude, in this scenario, eventually produces equally good outputs for much lower cost, human content creators will be out-competed and out of a job.

This would be a tragic outcome. And there should be a path forward to help content creators. But intellectual property rights are not the right solution, for several reasons. First, they are discriminatory. Content creators are just one group among many facing risks to their livelihoods from AI automation. For example, customer service workers are being replaced by chatbots, but these workers can’t use intellectual property as a remedy. The state should not be choosing arbitrarily among these groups for relief.

Second, this use of intellectual property rights creates distortions. If the costs of AI automation are borne by AI labs, then AI development will slow down. Granted, this has some important benefits, for example, potentially making AI development safer. But by and large, this kind of slowdown will tend to harm users of AI products, or in other words, everyone. AI slowdowns mean we will all have to wait longer for the cure for cancer. They also mean higher costs for any goods produced with automatable labor. The costs of AI automation should therefore not be borne by AI labs in particular; they should be borne by the citizenry writ large, through government. 

Third, intellectual property rights always and everywhere create social loss. Ideas are non-rivalrous and can therefore be used by anyone at no additional cost. When intellectual property protects content creators from AI outputs, it makes it more difficult for anyone anywhere to access the incredible ideas that could be produced by AI (or by humans). This kind of social loss requires strong justification. Historically, this justification has come from the incentive to produce new ideas. Without such a justification, it is unacceptable.

The alternative to intellectual property-as-welfare is actual welfare. Here, the best policy instrument is universal basic income, or something like it. Universal basic income avoids the problems of discrimination, distortion, and social loss. It could be given nonarbitrarily to all workers affected by AI automation. And it could be funded by general tax revenue. This means that the costs of universal basic income would not slow AI development relative to other technologies. The transition to cheap, abundant, AI-led innovation would then allow everyone to costlessly access the immense value of innumerable non-rivalrous ideas.

Intellectual Property Does Not Ensure Fairness

Intellectual property is also not a system for ensuring fairness.

Many readers will likely be struck by the following: Content creators have helped AI labs massively in training their models. Without content creators, models could never have gotten so good. But it is totally unfair for content creators to get nothing for all the help they provided.

This is all factually true. And yet, such fairness considerations reveal little about the right answer for intellectual property law, or law more generally, for three reasons.

First, life isn’t fair. But, for good reason, law usually supplies no remedy for unfairness of this kind. Moreover, intellectual property law specifically allows a great deal of exactly this sort of unfairness. And it does so for reasons that are central to the purpose of intellectual property.

Consider that there are all sorts of people who have been essential to the AI labs’ success, but who have not been, and will not be, compensated in any special way. First, the janitors at Anthropic were necessary for training the models; without them, all the machine learning engineers would have likely drowned in a sea of garbage. But janitors at Anthropic are likely paid about the same as the janitors at the office down the street. Second, the accountants at TSMC—the company that makes computer chips for the entire AI industry—were necessary for Anthropic to train its models. But these accountants again most likely make about market rate for their work.

What’s more, consider the parents of the machine learning engineers working at the frontier AI labs. These parents invested vast resources to train the people who train AI models—people whose talent is truly scarce and hard to replace. Without those parents, there wouldn’t be a Claude. And yet there are no class actions in the offing from the parents of all machine learning engineers, looking for a $1.5 billion settlement. The same goes for the parents of the scientists who created the coronavirus vaccine, the creators of the first quantum computer, and so on.

In all of these cases, law in general does not give someone a claim to compensation just because their work was involved in, or even essential to, someone else getting rich. This is a good thing. Even if one favors egalitarianism, it is hard to imagine a worse system for achieving it than one in which courts, via litigation, tinker with every single contract, employment arrangement, or business model.

Maybe content creators’ fairness claims are special. After all, they are not free-floating, but based on the law of copyright. And as we noted above, black letter copyright law is genuinely ambiguous here. The content creators’ claims are thus colorable in a way that the parents’ claims are not.

But, second, the invocation of copyright law—or intellectual property more generally—does not necessarily strengthen calls for fairness. It might be exactly the opposite. Intellectual property law is very often unfair in exactly the manner the content creators are objecting to. Very often, intellectual property law allows one person’s hard-won ideas to be used freely, by others, to great profit.

Consider: The Velvet Underground’s innovations in musical style influenced artists from David Bowie to Nirvana. Yet the Velvets’ own albums were commercial failures in their time, and the band earned no royalties from “Ziggy Stardust” or “Nevermind.” Copyright forbids the copying of a song, but anyone may rip off a band’s style for free.

Or consider Google’s plight. It invented the transformer architecture—a fundamental breakthrough underlying all modern language models—and patented it in 2019. Yet today, OpenAI, Anthropic, and many others create transformer-based AI systems freely, and pay Google nothing. This is because OpenAI and Anthropic have cleverly designed around Google’s patent. Their models rely on many of Google’s insights, but they use a decoder-only approach, rather than Google’s patented encoder-decoder architecture.

In examples like these, intellectual property law is working as intended. Again, intellectual property law is about ideas. And ideas, unlike other economic products, are non-rivalrous. Thus, the goal of intellectual property law is not to ensure that innovators capture all or even most of the surplus of their ideas. It is instead to supply just enough incentive that the ideas be created, and then to put the ideas into the commons for free use by everyone.

And third, one kind of unfairness for which law often does supply a remedy is illegal activity. If someone makes a contract and then reneges on it, that is unfair—and illegal. It’s also unfair in part because it is illegal. The settled legal rules of contract formation give the parties a normatively justifiable right to rely on one another’s performance, such that the fairness claim and the legal claim travel together.

But AI training is not like that. In American copyright law’s 235-year history, it has never had to answer the question of whether using the entire corpus of human text to train a 175-billion-parameter language model is fair use. The content creators had no normatively justifiable right to rely on Anthropic not doing that. Nor did Anthropic have a normatively justifiable right to rely on being able to do that. At least not one based on law.

As a result, the best way to adjudicate these claims—both the legal claims and any fairness claims downstream of them—is by reflecting on the first principles of intellectual property law. And, as we argued above, such an analysis suggests that creators’ legal rights are not violated by Anthropic’s training.


Simon Goldstein is an Associate Professor at the University of Hong Kong. His research focuses on AI safety, epistemology, and philosophy of language. Before moving to Hong Kong University, he worked at the Center for AI Safety, the Dianoia Institute of Philosophy, and at Lingnan University in Hong Kong. He received his BA from Yale, and his PhD from Rutgers, where he wrote a dissertation about dynamic semantics.
Peter N. Salib is an Assistant Professor of Law at the University of Houston Law Center and Affiliated Faculty at the Hobby School of Public Affairs. He thinks and writes about constitutional law, economics, and artificial intelligence. His scholarship has been published in, among others, the University of Chicago Law Review, the Northwestern University Law Review, and the Texas Law Review.
