Cybersecurity & Tech

Anthropic’s Settlement Shows the U.S. Can’t Afford AI Copyright Lawsuits

Stewart Baker
Monday, September 8, 2025, 8:01 AM
Copyright plaintiffs are squeezing enormous sums from AI companies. That’s bad for the U.S. and great for China. It’s time for President Trump to invoke the Defense Production Act and resolve the crisis.
Anthropic CEO Dario Amodei at TechCrunch Disrupt 2023 (TechCrunch, https://flic.kr/p/2p4hwab; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/deed.en)


Anthropic just paid $1.5 billion to settle a copyright case that it largely won in district court. Future litigants are likely to hold out for much more. A uniquely punitive provision of copyright law will allow plaintiffs who may not have suffered any damage to seek awards in the trillions. (Indeed, observers estimated that Anthropic dodged $1 trillion in liability by settling.) The avalanche of litigation, already 40 lawsuits and counting, doesn’t just put the artificial intelligence (AI) industry at risk of spending its investors’ money on settlements instead of advances in AI. It raises the prospect that the full bill won’t be known for a decade, as different juries and different courts reach varying conclusions.

A decade of massive awards and deep uncertainty poses a major threat to the U.S. industry. The Trump administration saw the risk even before the Anthropic settlement, but its AI action plan offered no solution. That’s a mistake; the litigation could easily keep the U.S. from winning its race with China to achieve truly transformational AI.

The litigation stems from AI’s insatiable hunger for training data. To meet that need, AI companies ingested digital copies of practically every published work on the planet, without getting the permission of the copyright holders. That was probably the only practical option they had. There was no way to track down and negotiate licenses with millions of publishers and authors. And the AI companies had a reasonable but untested argument that making copies for AI training was a “fair use” of the works. Publishers and authors disagreed; they began filing lawsuits, many of them class actions, against AI companies.

The American public will likely have little sympathy for a well-endowed AI industry facing the prospect of hiring more lawyers, or even paying something for the works it copied. The problem is that peculiarities of U.S. law—a combination of statutory damages and class action rules—allow the plaintiffs to demand trillions of dollars in damages, a sum that far exceeds the value of the copied works (and indeed the market value of the companies). That’s a liability no company, no matter how rich and no matter how confident in its legal theory, can ignore. The plaintiffs’ lawyers pursuing these cases will use their leverage to extract enormous settlements, with a decade-long effect on AI progress, at least in the United States. China isn’t likely to tolerate such claims in its courts.

This is a major national security concern. The U.S. military is already building AI into its planning, and the emerging agentic capabilities of advanced AI hold out the prospect that future wars will become contests between armies deploying coordinated masses of autonomous weapons. Even more startling improvements in AI could come in the next five years, with transformative consequences for militaries that capitalize on them as well as those that don’t. Not surprisingly, China is also pursuing military applications of AI. Given the U.S. stake in making sure its companies do not fall behind China’s, anything that reduces productive investment in AI development has national security implications. As Tim Hwang and Joshua Levine laid out in an earlier Lawfare article, this means the U.S. can’t afford to let the threat of enormous copyright liability hang over the AI industry for the decade or more it could take the courts to reach a final ruling.

The Trump administration should cut this Gordian knot by invoking the Defense Production Act (DPA) and essentially ordering copyright holders to grant training licenses to AI companies on reasonable terms to be determined by a single Article I forum. This is the only expeditious way out of the current mess. It is consistent with the purpose and with past uses of the DPA. And it offers a practical solution, compulsory licensing, that copyright law has long employed in similar contexts.

To understand how we got here, it’s necessary to know four things about copyright law.

First, there’s no guarantee that the rights to a copyrighted work are held by a single person. In fact, copyright is often divided into many subsidiary rights, each held by a different party. This is why there was never any hope that the AI companies could get advance consent for their training. In many cases, they would have needed to find and negotiate with every publisher and author who had rights to a work in the training database. That, of course, is not practicable, certainly not when time is of the essence.

Second, not all copying violates copyright. Broadly, copying is likely to be permitted as “fair use” if it results in a completely different product and doesn’t depress the market for the original. Unable to get legal certainty by buying licenses, most AI companies fell back on fair use, arguing that their models are nothing like the books on which they trained and indeed cannot be used to produce exact, competing copies of those books. In this view, the training is fair use—analogous to students reading a book and using what they’ve learned to create a completely different work.

Third, the rules of copyright have been revolutionized over the past 50 years, driven to a remarkable degree by a single company, Walt Disney. When Walt Disney released Steamboat Willie in 1928, copyright protection lasted only 56 years. To keep its main character, an early version of Mickey Mouse, out of the public domain, Disney lobbied hard to extend copyright terms to 95 years. In addition, the penalties for infringement were increased dramatically. Under current rules, copyright plaintiffs don’t need to prove damages in terms of what an act of infringement cost them; they can instead elect statutory damages that range up to $150,000 for each work copied.

Fourth, allowing plaintiffs to collect large statutory damages is a godsend for class action lawyers; it means they no longer have to worry that their class will be rejected because each plaintiff’s damages must be measured individually. Statutory damages allow many claims to be assembled into a single lawsuit threatening such massive liability that the pressure to settle even dubious cases is heavy. Coercing class action settlements has paid handsomely in privacy lawsuits, which often allow statutory damages. Meta, for example, settled a facial recognition class action for $650 million in Illinois—and for $1.4 billion under a similar Texas law.

Copyright plaintiffs hadn’t found such deep pockets, at least until now, but statutory damages have already inspired similar settlements. The coercive power of statutory copyright damages has been demonstrated by the recording industry, which sued a single mother of two for sharing 24 songs and obtained a judgment of $222,000, a result the judge in the case condemned as “wholly disproportionate to the damages suffered.” The recording and movie industries eventually sued hundreds of thousands of individuals for file-sharing violations and extracted settlements of a few thousand dollars each—essentially all that the defendants could afford. The authors and publishers suing AI companies will have far more leverage (statutory damages in privacy cases max out at $7,500, a far cry from copyright’s $150,000). As Anthropic’s deal shows, for AI companies, “all they can afford” will be a sum well into the billions, more than enough to dampen an all-out effort to achieve AI breakthroughs.

The Trump administration understands that this litigation poses a threat to U.S. AI innovation. In rolling out his AI action plan, President Trump called for a “commonsense application” of copyright law. “You can’t be expected to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” Trump said. “It’s not doable.” White House AI czar David Sacks posted much the same: “There must be a fair use concept for training data or models would be crippled. China is going to train on all the data regardless, so without fair use, the U.S. would lose the AI race.”

And yet the action plan itself offers no way out of the morass for what I suspect are three reasons.

First, proposing to “bail out” companies like Meta, Google, and OpenAI is not great politics. On the left, they’re viewed as amoral oligopolists who are way too cozy with Trump. And the right has neither forgotten nor forgiven the shocking attack on conservative speech by platforms like Meta and Google. They’re doing less of that now, but many believe it’s not so much a change of heart as a change in administrations. If you’re a conservative blogger whose ad revenue was permanently nuked by Google and Facebook, you’re unlikely to muster much enthusiasm for protecting Google’s revenue from a hit at the hands of authors and publishers.

I share the sense of betrayal that this speech suppression triggered. But that’s not a reason to support the other side in the copyright battle. Among the most prominent plaintiffs are the New York Times and Disney—not exactly friends of conservative speech. Giving them a windfall has no appeal in MAGA world. What’s more, transferring billions from AI companies to publishers and movie companies provides no public benefit at all. Leaving the revenue with the AI companies could at least yield a crucial national security payoff for the entire country.

The second reason the administration may have pulled its punches on copyright is that the earliest judicial decisions in these cases were covered in the press as victories for the AI companies. In fact, those decisions did not lift the threat of massive liability. At best, they promise a lengthy, complex struggle over fair use.

The two main cases were both technical victories for AI, but with potentially devastating judicial caveats. The most favorable decision came in the case that Anthropic just settled. In Bartz v. Anthropic PBC, Judge William Alsup of the Northern District of California ruled that making copies for AI training is always fair use, which was clearly welcome news for Anthropic and fair use proponents. But that didn’t end the case. It turned out that Anthropic took another shortcut. Instead of laboriously purchasing one copy of every book it needed and then digitizing it for AI consumption, Anthropic downloaded an already digitized trove of pirated books. Downloading pirated copies instead of paying for them, the court held, was not fair use; at a minimum, it deprived the publisher and author of the profit they would have earned by selling one more book. Of course, if it’s not fair use, the damages for copying a single book won’t be limited to a few dollars; statutory damages can run to $150,000 per work, enough to produce class action claims in the tens of billions. Indeed, Anthropic is paying $3,000 for every book it downloaded from the pirate site.

The other case was a victory for the AI companies in name only. In Kadrey v. Meta Platforms, Inc., Judge Vince Chhabria, also of the Northern District of California, held, in essence, that the copyright holders would have won if they’d had better lawyers. For future cases, he suggested, fair use was off the table.

In dicta, he rejected Meta’s fair use claim on the ground that AI training will cause competitive harm to publishers and authors, not by making copies of their work available but by creating different works that will drive the value of the original books to near zero. Who will pay for a run-of-the-mill biography or biology textbook if AI can generate a “good enough” version? Judge Chhabria was unequivocal:

The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.

But in so ruling, he held that the plaintiffs simply hadn’t articulated the arguments he found persuasive and had instead put forward arguments he considered “clear losers.” As a result, it’s hard to say that Meta won, given the court’s caveat:

[T]his ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.

No plaintiff will ever make that mistake again. Any AI company that sees this as a victory is like the man who jumped off a ten-story building and said, as he passed the fifth floor, “Well, so far, so good.”

There’s also been a third, atypical decision that only adds to the chaos. In Thomson Reuters v. Ross Intelligence Inc., on sui generis facts, Judge Stephanos Bibas rejected a fair use defense; his analysis parallels Judge Chhabria’s, concluding that fair use depends in part on how directly the AI model being trained will compete with the copyright holders’ work.

With three of 40 cases decided, then, we are no closer to a consensus. There is already enough disagreement to spawn dozens of other opinions and enough appeals to last into the 2030s.

Finally, there’s a third reason the administration may have avoided the copyright mess in its action plan. Changing copyright law requires congressional action, and asking Congress to act would simply kick off an immense lobbying battle culminating in stalemate. Proposing bills that won’t pass is a losing strategy for the Trump administration, which is why it prefers executive orders whenever possible.

In this case, however, the administration uncharacteristically overlooked a way to resolve the crisis by executive order. The DPA was first passed during the Korean War, when officials recognized that much of the U.S. military’s supply chain depended on commercial production. As the Act’s preamble notes, “the security of the United States is dependent on the ability of the domestic industrial base to supply materials and services for the national defense.” But Congress also knew that commercial actors will not always give priority to national security needs. To ensure that they do, the DPA gives the president “a broad set of authorities to influence domestic industry in the interest of national defense.”

As Congress declared in section 2 of the DPA, “the national defense preparedness effort of the United States Government requires … the diversion of certain materials and facilities from ordinary use to national defense purposes, when national defense needs cannot otherwise be satisfied in a timely fashion.” Thus, if the government needs particular medical gear to combat a pandemic, it can identify companies able to produce the gear and “require [their] acceptance and performance of” a contract to meet that need, giving priority to that transaction over other commercial deals.

The president is expressly authorized by section 101(a) of the DPA “to allocate materials, services, and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense.” Most importantly, under section 3003(b), the president may do so “without regard to the limitations of existing law” (other than a requirement not to spend money Congress has not appropriated). The president has used this authority many times, most recently in prioritizing government contracts for procurement of medical equipment needed to combat the coronavirus pandemic.

How could the DPA’s broad language be applied to the copyright crisis? By suing for infringement, the copyright holders have already acknowledged that they are, in effect, suppliers of an essential input (training licenses) to a critical defense industry. The president could probably just order copyright owners to deliver those licenses on a priority basis, and to do so “upon such conditions and to such extent as he shall deem necessary or appropriate to promote the national defense.”

When the president designates items in the defense supply chain as scarce, he can also invoke the Act’s provisions in section 102 against hoarding and price-gouging, which can be enforced by criminal prosecution, an authority widely used by the Justice Department during the coronavirus crisis. Taken together, these provisions allow the president to take reasonable steps to ensure that training licenses are available quickly and at a reasonable cost.

In my view, these authorities allow the president to order copyright holders to license their works to any AI companies working with the Defense Department. He could probably also set the price based on his interpretation of fair use, relying on his authority to set “such conditions … to such extent as he shall deem necessary or appropriate to promote the national defense.” By the same token, his authority to act “without regard to the limitations of existing law” would likely allow him to set aside copyright remedies that interfere with quickly addressing the AI copyright mess.

No doubt the New York Times and Disney would be deeply aggrieved to find their copyright lawsuits extinguished by President Trump. While that might please most of the president’s followers, extinction of the claims may not be necessary, or even appropriate. What the AI companies need most is not so much a free ride as a path to a single answer from a neutral decisionmaker. Thus, the president, instead of acting unilaterally, could create an executive branch forum to set an appropriate royalty for training licenses. The outcome of a single Article I process would be faster and more consistent than a decade of litigation in dozens of federal courts; that is crucial to the national security calculus.

Such a solution would hardly be an unprecedented measure—especially for copyright. In other contexts, the proliferation of licenses and rights holders has made it impracticable to negotiate copyright licenses with all of them. Congress has responded by creating a “compulsory license” that “allow[s] anyone to license a work without permission of the copyright owner for a predetermined royalty rate, set periodically by a regulatory body known as the Copyright Royalty Board.” The creation of a single license with a royalty set by a government tribunal has enabled wider use of copyrighted materials with fair compensation to the copyright holders.

Given a similar neutral forum, the AI companies stand a good chance of winning the fair use debate, as Judge Alsup’s opinion in Bartz v. Anthropic PBC demonstrates. But perhaps not across the board. Some of the training shortcuts these companies took, particularly the use of pirated databases, could easily fall outside fair use. Perhaps they should pay for the actual harm caused when they downloaded a pirated copy instead of buying one lawfully. (It’s worth noting that one AI company was prepared to spend $100 million to acquire licenses before recognizing that the plan was impracticable.)

Here, the DPA’s protections against price-gouging for national security items should be relevant. At a rough guess, the actual harm to an author from a single pirated download of a book is a couple of bucks—the approximate royalty the author won’t get. Unlike $150,000, that amount is a price the AI companies could probably pay, and it reflects the real market value of an author’s training license (the calculation would be similar for publishers). An executive order that simply ruled out the price-gouging inherent in charging the statutory $150,000, or even $3,000, for a single book might set affordable bounds on how copyright affects national security.
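To see how far apart those price tags really are, here is a minimal back-of-the-envelope sketch. The $150,000 and $3,000 figures come from the discussion above; the 500,000-work class size is an assumption derived by dividing the $1.5 billion settlement by the $3,000-per-book rate, and the $3 figure is an assumed stand-in for the “couple of bucks” of actual harm:

```python
# Back-of-the-envelope comparison of class-wide copyright exposure.
# Assumptions: a 500,000-work class (the $1.5B settlement divided by
# the $3,000-per-book rate) and $3 per work as a rough proxy for the
# actual harm of one pirated download (the lost royalty).

WORKS_IN_CLASS = 1_500_000_000 // 3_000  # 500,000 works (derived)

per_work_damages = {
    "statutory maximum ($150,000/work)": 150_000,
    "Anthropic settlement rate ($3,000/work)": 3_000,
    "approximate actual harm ($3/work)": 3,
}

# Print the total exposure for the same class under each measure.
for label, dollars in per_work_damages.items():
    print(f"{label}: ${WORKS_IN_CLASS * dollars:,}")

# Output:
# statutory maximum ($150,000/work): $75,000,000,000
# Anthropic settlement rate ($3,000/work): $1,500,000,000
# approximate actual harm ($3/work): $1,500,000
```

On those assumptions, the very same class of works is worth $75 billion at the statutory maximum, $1.5 billion at the settlement rate, and about $1.5 million measured by actual harm—a gap of nearly five orders of magnitude between statutory damages and the market value of what was taken.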

Such an order would of course also be on brand for this president. I can’t help suspecting that he has a small office of lawyers in the basement of the White House who do nothing but dream up new ways to achieve his policy goals without new legislation.

If so, this should be their next project.


Stewart A. Baker has a law and consulting practice in Washington. His government service includes three and a half years at the Department of Homeland Security as its first Assistant Secretary for Policy, as well as tours of duty as General Counsel of the National Security Agency and as General Counsel of the commission that investigated U.S. intelligence failures in the run-up to the invasion of Iraq.
