Cybersecurity & Tech

What History Can Teach Us About Copyright, AI, and ‘Market Floods’

Derek Slater, Aram Sinnreich
Wednesday, August 27, 2025, 9:55 AM

Although some fear that AI will flood the market, harming existing copyrighted works, historical examples seem to tell a different story. 

Artificial Intelligence & Machine Learning (mikemacmarketing, https://commons.wikimedia.org/wiki/File:Artificial_Intelligence_%26_AI_%26_Machine_Learning.jpg, CC BY 2.0)

Published by The Lawfare Institute
in Cooperation With
Brookings

On June 25, federal district court Judge Vince Chhabria ruled against a group of authors suing Meta for using their works without permission to train artificial intelligence (AI) tools. But Chhabria also gave voice to a concern that could prove critical in other copyright cases: that a “flood” of AI-enabled content may harm the market for existing creative works.

While the logic of this argument may seem intuitively appealing, history suggests that predicting this kind of market harm is easier said than done—and it would be a mistake for courts to presume such a result. New forms of media may compete with existing works, but they can also help grow the market and even increase the value of such works. As we explain below, there is reason to believe that such a positive outcome could take place as a consequence of AI.

Chhabria’s Theory of Market Floods and Dilution

While Judge Chhabria conceded that the plaintiff authors in Kadrey v. Meta “barely g[a]ve this issue lip service,” his decision argued that in future cases authors would likely prevail by demonstrating “market dilution” from an AI-driven “flood” of content. Using authors’ books for training, he wrote, generative AI tools will produce “similar outputs, such as books on the same topics or in the same genres, [that] can still compete for sales with the books in the training data.”

Though Chhabria conceded that situations may differ, he asserted that “in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission” because it has a unique “potential to flood the market.” For instance, Chhabria predicted that “the market for the typical human-created romance or spy novel could be diminished substantially by the proliferation of similar AI-created works,” drawing a sharp distinction between “human” works created “the old-fashioned way” and “AI” works.

The relevance of such market dilution under U.S. copyright law’s fair use analysis is likely to be a central point of contention across the 40-plus copyright and AI training cases now before federal judges. Courts evaluate fair use claims on a case-by-case, fact-specific basis according to four factors, including “the effect of the use upon the potential market for or value of the copyrighted work.” Notably, not all effects are cognizable harms; after all, new creativity often competes with or impacts the existing market, but that doesn’t necessarily undermine a fair use defense if it’s aligned with copyright’s constitutional function—promoting creativity for the public’s benefit. For instance, courts have ruled that transforming an existing work to create something new—such as a parody or new tools for analyzing and studying books—can be fair use. 

In that vein, in a different ruling related to book authors who sued AI developer Anthropic, Judge William Alsup rejected the “dilution” theory of harm in copyright: “The [Copyright] Act seeks to advance original works of authorship, not to protect authors against competition.” Rather than counterposing “human” works against “AI” works, Alsup’s analysis underscores that human authors may use AI to create new works; those works may be successful in the marketplace, but such an outcome would be entirely in line with copyright’s purpose. At a minimum, analyzing “market dilution” in fair use is “uncharted territory,” as the U.S. Copyright Office itself acknowledged earlier this year in a long-awaited report on the legal consequences of generative AI. 

But beyond the relationship between dilution and copyright, Chhabria’s opinion also raises a more fundamental, empirical question: Do market floods necessarily harm the market for existing works and authors? In fact, Chhabria presumes an outcome that doesn’t square with history.

Market Floods and Rising Tides

History is replete with examples of creators and copyright holders claiming that new technology and media would destroy creativity as we know it. The early 20th-century American composer John Philip Sousa's warnings that the player piano would end live music and shrink our vocal cords parallel Chhabria's fear that AI will "dramatically undermine the incentive for human beings to create things" (a fallaciously market-centric vision of artists' motivations). So too does former Motion Picture Association of America CEO Jack Valenti's fearmongering about analog videotapes enabling movie piracy in the 1980s: "[W]e are facing a very new and a very troubling assault on our fiscal security, on our very economic life and we are facing it from a thing called the video cassette recorder and its necessary companion called the blank tape. And it is like a great tidal wave just off the shore." Neither prediction, of course, came to pass—the music and film industries thrived in the wake of these innovations.

But unlike Valenti, Chhabria's argument isn't about piracy; that is, he isn't claiming that the outputs of AI systems will necessarily infringe existing works. And while he is concerned—like Sousa and other musicians of that era—that human creativity will diminish and be replaced with "robots," Chhabria isn't claiming that books will disappear. To the contrary, his point is that AI will be used to create so many books that it will hurt existing works and their authors. While his concerns seem grounded in how future authors and authorship might be affected, he ultimately concedes that market dilution must be analyzed in terms of its impact on the actual books and authors before the court in a given suit.

The history of book publishing suggests a more complicated story. If we look back to the early years of the printing press, existing institutions viewed this technology and its effects with deep suspicion. Filippo de Strata, a 15th-century Dominican friar in Venice, wrote a famous polemic against printing: "They [scribes] basely flood the market with anything suggestive of sexuality, and they print the stuff at such a low price that anyone and everyone procures it for himself in abundance …. The Italian writer lives like a beast in a stall. The superior art of authors who have never known any other work than producing well-written books is banished." Yet the printing press would go on not only to grow markets for new writers but also to grow the market for those classic, "superior" writers about whom de Strata worried. Five hundred and fifty years later, people have more access to older, classic works than ever before.

Similar worries frequently recurred as markets for textual works evolved. Alongside congressional hearings and moral panic about comic books, many publishers and writers fretted about competition from paperbacks as a low-cost alternative to hardcovers. They believed that paperbacks would drive esteemed, high-quality, hardcover authors out of the market because they would not be able or willing to sell at lower paperback price points. Certainly, some of this “flood” competed with existing works, but much of it also complemented these works, serving different demands not otherwise met in the market. As paperback sales exploded, they also created new markets for existing writers to sell their works; in fact, successful authors were able to reap additional revenue from their existing works by selling paperback rights.

Likewise, the “flood” of videotapes in the 1980s led not to market dilution or loss of sales but, rather, to a new, lucrative home rental and sales market that both new creators and existing creators benefited from. By revealing the demand for new methods of access, videotapes also cleared a path for licensing works to cable networks and, later, streaming. Along with creating new distribution channels, videotapes and handheld cameras also lowered costs of production, widening the range of creators who could participate in the marketplace.

Or consider music. Over time, fears about recorded music gave way to new concerns from the American Federation of Musicians (AFM) and others about multitrack tape recorders, synthesizers, drum machines, digital audio tapes, and other tools for music production. Rather than merely providing musicians with another way to disseminate their works, the union feared that these tools would allow human laborers and their instruments to be replaced with machines. But, just as Chhabria overlooks the potential for humans using AI to create works, the AFM failed to predict musicians’ use of synthesizers and other tools to lower production costs and open up new creative opportunities. As artists deployed these instruments, the music industry continued to grow.

Today, in spite of a "flood" of digital music, many existing works are even growing in value. Will Page, the former chief economist of Spotify, estimates that more music is being released in any given day in 2025 than was released in the entire calendar year of 1989, due in no small part to low-cost production tools and distribution opportunities. By Chhabria's analysis, one might expect that flood to massively depress the value of older works. Yet many music publishing catalogs' valuations are going up, with owners increasingly able to sell billions of dollars in bonds backed by vast music libraries featuring famous artists like Red Hot Chili Peppers and 50 Cent. As Bloomberg recently noted, "Music royalty financing has become popular with private credit managers, with portfolios achieving eye-popping valuations in the past few years, driven by the success of streaming platforms." Streaming created a relatively predictable revenue stream for many existing, popular works; even though more and more music comes onto platforms like Spotify every day, works that have held their popularity over time continue to get streamed. Owners of music publishing catalogs can use data on music consumption to predict future performance and revenue and, in turn, create new financial instruments that get more value from existing works.

To be fair, Chhabria did note that perhaps some existing famous artists will fare well despite a generative AI flood, while others won’t. But that concession simply underscores that the precise impact of a “market flood” on a given work or artist can be highly uncertain. 

Understanding Emergent Effects of “Floods” and AI

The basic logic that an increase in supply drives down prices is of course fair enough under standard economic theory. That’s not the end of the story, though. As the brief examples above highlight, there can be other emergent effects of new technologies, new media, and radically enhanced supply. These historical examples suggest three key points. 

First, new technology can help create new markets, including for existing works. Rather than being a zero-sum contest, new media effectively helped grow the pie. They uncovered unmet demand for different types of works and different price points, for instance, and led to a range of lucrative new revenue streams, for old and new works alike. Consumer attention and expenditure are of course limited, so the pie can’t grow infinitely, but it is safe to say that a flood of new works doesn’t spell doom for old ones.

Second, new technology can also lower the costs of authorship. Even before the generative AI boom, there was already a flood of new “content” such as books, music, and video, unlocked by easily accessible tools for writing, recording, editing, special effects, distribution, and much more. Twenty years after “Web 2.0,” it is clear that this flood did not eviscerate the markets for preexisting works and legacy creators.

Finally, the surplus of media doesn’t eradicate value; instead, it shifts it. In a world of abundant streaming music, that which remains scarce or unique—such as vinyl records, or live performances—can grow in value. And legacy musicians’ works have only grown more valuable as financialized assets. Perhaps in the context of AI, there will be greater valorization of uniquely human works; some research already suggests that “[a]rtists who emphasize that their work is human made, with no AI involvement, could actually command higher prices.” These sensibilities may evolve over time, distinguishing uses of AI in the same way that people half a century ago distinguished the creative uses of synthesizers from someone who just played an automated loop.

The shift in value to what is unique presents an inherent tension in Chhabria’s account of human creativity and market impact. If AI is only used to create an abundance of content truly devoid of human creativity—what some call “slop”—then old-fashioned human authorship will become comparatively rare and thus more valuable, not less. If, in contrast, artists and authors use AI to generate creative works that add new cultural ideas, then that would seem to be very much in line with copyright’s core purpose; these new works may legitimately compete with existing ones, but that does not necessarily justify protectionism for old-fashioned techniques of authorship.

How all of this plays out with AI may vary across contexts. History suggests that the cultural consequences of new technologies are complex and difficult to predict. A flood of content will not necessarily sink the market for existing works; in fact, it could lead to a rising tide for creators of many kinds.


Derek Slater is a Founding Partner at Proteus Strategies, a boutique tech policy consulting firm. Previously, he helped build Google’s public policy team from 2007-2022, serving as the Global Director of Information Policy during the last three years. He led a global team of subject matter experts on access to information, content regulation, and online safety, and testified before legislators in the US, UK, and elsewhere around the globe. Before his time at Google, Derek was the Activism Coordinator for the Electronic Frontier Foundation and the first student fellow at Harvard’s Berkman Center for Internet and Society.
Dr. Aram Sinnreich is a professor of Communication Studies at American University in Washington, DC, as well as an author and musician. His writing has appeared in outlets including The New York Times, Rolling Stone, and Time Magazine. His most recent book, published by MIT Press in 2024, is entitled The Secret Life of Data: Navigating Hype and Uncertainty in the Age of Algorithmic Surveillance. As a musician, he plays bass and composes in a range of styles including jazz, reggae, and Malian music. He co-produced the recent album Out of Our Cells: Music by Incarcerated Composers in Washington DC.