
Don’t Count on Courts to Rein In Unregulated AI

Daniel Wilf-Townsend
Friday, May 15, 2026, 2:01 PM
The courts are too slow for AI’s pace—and delay is already shaping the outcomes.
Close-up of a smartphone with AI chat interface. (Tim Witzdam, https://tinyurl.com/2sz4c3ab)

Over the past few months, the headlines might give the impression that the U.S. court system is deftly navigating the world of tech regulation. A federal judge quickly enjoined the Department of Defense’s overbroad and retaliatory attempt to label artificial intelligence (AI) developer Anthropic a supply chain risk. Juries in California and New Mexico issued major verdicts against Meta and Google for harms arising from their social media products, in cases that could influence thousands of other similar claims. And the U.S. Supreme Court let stand a lower court’s decision over the copyrightability of AI-generated art, in one of the first actions by the Court in the area of intellectual property (IP) and artificial intelligence. Based on this flurry of activity, one might think that when it comes to the current greatest challenge for technology policy—artificial intelligence—the courts can be trusted to man the regulatory helm.

But that trust would be misplaced. We are years into the explosive growth of the generative AI industry, and its integration into commerce and daily life has raised many important legal questions. Yet the courts have largely failed to resolve those questions quickly enough to inform the development of this major sector. The state of AI litigation over the past few years raises concerns about the adequacy of courts and their procedures in this fast-changing area, and it suggests that policymakers should think twice before relying on courts as cornerstones of new regulatory regimes.

The “ChatGPT moment” of late 2022 is now more than three years behind us. Even then, a number of important legal questions were fairly obvious. Among the most significant: Is it a violation of copyright protections to train a model on copyrighted data? Are the outputs of AI tools copyrightable? If a user causes a tool to generate outputs that violate IP protections, who is liable—the user? The developer? Both? Neither? Many of these major early questions focused on intellectual property, and some of them continue to have implications for the viability of large parts of the industry. If it is not fair use to train large language models on copyrighted data, for instance, it’s not clear that the current paradigm of training on massive text corpora would remain feasible.

So there were lawsuits. A lot of them. By one recent count, 105 lawsuits have been filed against AI companies in the area of copyright alone. But these lawsuits have gotten bogged down the way only litigation can. Some have been tied up in preliminary matters—fights over issues such as venue or personal jurisdiction that address which court has power to hear the case. Others have affirmatively postponed ruling on important legal questions until the parties can bring further facts to light in discovery. And still others have simply multiplied in procedural complexity, adding parties and issues like kudzu. That major lawsuit by the New York Times against OpenAI and Microsoft, filed in 2023? There have now been over 1,300 docket entries in the litigation, encompassing years of pleadings, partial motions to dismiss, consolidation with other cases, protective order disputes, and intensive discovery battles over everything from training data to the records of millions of users’ chatbot conversations. And the case still appears to be nowhere near resolution in the trial court, let alone the appeal that would likely follow. If you enjoy those little disclaimers about the lawsuit in the New York Times’s reporting on OpenAI, you’re in luck—we’ve probably got a few years of those left.

There are plenty of reasons for judges to want to take their time in these cases. These are complex and novel issues. Who wouldn’t want more time to learn about them and decide what the best outcome is? Who wouldn’t want to hear from a wide range of experts, gather all the conceivably relevant data, and make the most informed decision possible?

But there is a problem with the timelines that courts have been operating on: The world keeps marching on, changing the environment in which these cases will be decided in ways that can influence the outcomes of the cases themselves.

In particular, AI tools have expanded in capabilities, becoming integrated into businesses across the country and attracting astronomical investment. The concrete financial stakes at this point are vast: If the combined 2026 planned AI capital expenditures of Alphabet, Amazon, Meta, and Microsoft were a country’s gross domestic product (GDP), that country would have the 25th-largest economy in the world, between Sweden and Argentina. And the economy around AI is growing rapidly. Anthropic’s annualized revenue more than doubled from $4 billion to $9 billion in the second half of 2025; it then more than doubled again to nearly $20 billion in the first three months of 2026. The New York Times v. OpenAI case, in contrast, spent the first months of 2026 going from docket entry #1,082 (a letter motion to require Microsoft to produce certain Copilot output logs) to docket entry #1,266 (a letter arguing about a protective order regarding the deposition of an expert).

It is very difficult for a judge to dispassionately assess legal issues when one side has such significant reliance interests. “Let justice be done though the heavens may fall” (or the economy may tank) has never been a particularly appealing philosophy for most judges. Most of the important legal questions around AI are not slam dunks for one side or the other; like many hard questions, they leave room for judges to write plausible decisions coming out either way. And when the GDP of Sweden rests on your ruling about the interpretation of the fair use doctrine, it’s hard to use the significant discretion given to you to burn it all down.

The result is that the years it has taken to decide these AI cases have loaded the dice toward a particular resolution. It will be very difficult for a judge to hold that training a large language model on material freely available on the internet constitutes a copyright violation—a holding that would undermine existing approaches and threaten to seriously impair the industry. The only copyright ruling to result in major liability for an AI company so far is Judge William Alsup’s ruling in Bartz v. Anthropic, which produced a $1.5 billion class-action settlement. But even there, Judge Alsup imposed liability on a much narrower theory of piracy, avoiding the conclusion that training itself is a copyright violation.

These effects of delay are also why onlookers would be wrong to read the Supreme Court’s recent denial of certiorari in the AI copyright case as evidence that the Court is sympathetic to writers, artists, and other creatives. The Supreme Court is probably not going to wade into these cases unless it is forced to, such as by a lower court judge actually ruling in a way that puts the financial feasibility of the AI industry in jeopardy, or by a meaningful split between the federal circuits on a question of real importance to one of the major industries involved in these disputes. The application of intellectual property law to AI tools is not an area where the justices are likely to have strong ideological precommitments. In such a context, the small-c conservatism often on display at the Court will likely keep it from weighing in on novel, rapidly changing issues with major economic ramifications. We should expect the Court to avoid AI cases as long as it can.

This is no way to run a legal system. Novel legal issues that affect major industries should be resolved before the development of massive reliance interests makes it nearly impossible to fairly resolve those issues on their merits.

If you, like me, tend to hope for the “training is fair use” side to prevail, this may not bother you much. After all, the status quo is one in which AI developers appear to be proceeding with training their models on copyrighted material, without any obvious negative effects from the uncertainty resulting from the long delays in these cases.

But the cost and delay of obtaining legal clarity in the current system will affect the development and deployment of AI from many angles. The earliest wave of cases focused on intellectual property, but more recently a new wave of tort cases has emerged surrounding AI tools, mental health, and self-harm. As the use of these tools widens and deepens, there will be cases on all sorts of issues: employment, health care, civil rights, and more. In some of these cases, delay that entrenches the status quo will tend to be permissive toward the use of AI, as with the IP cases; in others, entrenching the status quo via delay will be restrictive. Companies considering deploying AI tools in health care, for instance, face enormous uncertainty about liability exposure. That uncertainty may make avoiding AI adoption altogether the rational move in some contexts until the governing rules are clearer.

Or consider the recent lawsuit by Anthropic challenging the Department of Defense’s designation of the company as a supply chain risk. Anthropic was granted a preliminary injunction, a fast form of relief. But if that relief had been denied and the case had proceeded at the creeping pace of the past three years of federal AI cases, the designation would have inflicted a sustained wound. There is no real way to tell, in advance, who benefits from the slow pace of courts.

For the nation’s institutions to rise to the moment when it comes to AI, they will need to figure out ways to embrace the benefits of the new technological capacities while mitigating the downside risks. And that, in turn, is going to require clarity about how laws apply in new situations. This is going to be a repeated need in the years ahead. And the way the courts have been handling it so far does not inspire confidence. 

Unfortunately, all of this is happening at a time when the nation’s governing institutions are not doing very well. Congress, which could aid things by updating or revising existing laws, is not at its zenith of responsiveness. The executive branch is not at its peak of reasoned decision-making, as suggested by the Anthropic debacle among other examples. The Supreme Court, meanwhile, appears poised to erode our collective capacity to experiment with independent agencies and other institutional alternatives to courts. So we are left largely with state agencies and legislatures, existing federal agencies, and the courts. 

The courts aren’t going away. But policymakers working on these issues should be clear-eyed about what courts can and cannot deliver on a reasonable timeline. Enforcement mechanisms that depend on private litigation to establish baseline legal rules may forgo clarity for years. Where possible, lawmakers should consider alternatives, or at least complements: administrative rulemaking and enforcement, bright lines for liability or clear safe harbors, independent verification organizations or other systems built around competent third-party auditors, or changed procedural rules that accelerate litigation timelines.

And judges themselves can help, too. The long timelines of these AI cases are not inherently necessary, even in complex and novel cases. Judges have tools to speed things up, and they should reconsider the relative value of further factual and legal development versus speedy resolution in areas that are developing as quickly as AI. Courts that insist on deciding AI cases at the pace of ordinary litigation may find they’ve undercut their ability to decide them at all. 


Daniel Wilf-Townsend is an Associate Professor at Georgetown University Law Center. His research focuses on consumer protection, civil procedure, and artificial intelligence.
