How Not to Embarrass the Future

The timing of society’s legal response to artificial intelligence (AI) matters, but not in the way one might think. Time is not of the essence. The best policy emerges from learned experience. Yet, when it comes to new tools such as AI companions, the urge to regulate needlessly cuts that learning period short.
Policymakers are far more likely to err by acting rashly than by delaying legal reform for too long. AI’s advances are outpacing legislative cycles, tempting lawmakers to try to “future-proof” the law with sweeping rules that carry unpredictable consequences. But the history and theory of technology governance teach just the opposite: When policymakers act quickly, they’re likely to get the details wrong. Those errors are not costless. Regulatory mistakes made today will harden tomorrow into obstacles to innovation that last long after the targeted technologies have changed.
Legislators should resist the urge to pass hasty and overconfident laws that burden future innovation. Regulation should be specially crafted to limit its duration and guard against inadvertent creep. And—because the legislature will never be filled with enlightened philosopher kings capable of predicting the future—doing nothing may be the best option of all.
The Regulatory Reflex—And Why It Backfires
New technologies provoke a familiar cycle: marvel, then anxiety, then legislation. It’s a story Elisha Graves Otis, inventor of the safety elevator, could recite. His invention induced awe, but it also incited panic. Despite numerous successful demonstrations of Otis lifts and other elevators, a mountain of case law and regulations developed to assuage public fears.
Lawmakers’ desire to “do something” about emerging tech is understandable, but often wrongheaded. In periods of rapid innovation, lawmakers confront a stark pacing problem: Technological advances outstrip policymakers’ capacity to craft responsive regulations. That mismatch pushes governments toward a choice between bad options—either premature regulation or ongoing study. The former is tempting but more damaging, because legal mistakes, once made, are not easily corrected. Rules (mis)drafted for yesterday’s technologies will come to govern tomorrow’s infrastructure, distorting markets and channeling private investment toward compliance rather than experimentation.
Regulations grounded in preset computational-power (or “compute”) thresholds already illustrate the point. Some legislators have treated raw computational power, a key driver of frontier AI models, as a proxy for risk. Accordingly, bills such as SB 1047 in California and the RAISE Act in New York—both of which attempt to impose restrictions on the development and deployment of advanced AI models—make compliance with their more detailed regulatory requirements contingent on the amount of compute used in training. The drafters’ hope was to confine those provisions to the biggest of the big models. Yet the chosen thresholds have already been, or soon will be, crossed by many more models than intended: Rules written for today’s edge cases will someday govern the norm, not the exception. What’s more, compute may not be a reliable proxy for the harms motivating the legislation. Still, bills such as the RAISE Act are forging ahead in state capitals.
The reflex to legislate rapidly has been especially strong for AI. Just last session, Congress entertained hundreds of proposed bills to regulate AI, and statehouses are likewise sprinting to fill perceived legislative gaps. Though many of those bills never made it through their respective chambers, the sheer number of proposals indicates a pervasive sense among regulators that AI presents problems in need of legal solutions, and fast.
But speed and volume are not virtues. Early, tech-specific laws tend to be brittle, keyed to today’s (soon to be yesterday’s) taxonomy of models and use cases, not tomorrow’s. Even laws intended to be technologically neutral, such as the 1976 Copyright Act, may inadvertently codify contemporary assumptions about the nature of technology, rendering purportedly “future-proof” provisions less helpful than planned. Under Section 102(a) of that act, for example, copyright protection may be afforded to “original works of authorship fixed in any tangible medium of expression, now known or later developed[.]” Yet courts have struggled to apply that language in the age of AI, which lets individuals create novel works in new formats through new methods. Brad Greenberg, then a visiting fellow at the Information Society Project at Yale Law School, anticipated as much in his 2016 article, warning that “[n]eutrality was a blunt tool, but it appeared to guard copyright law against obsolescence, even if over time it became apparent that the law was often too general to be adequately tailored to new technologies.”
The Worst Path: Sticky Statutes
Statutory law tends to be sticky. Once on the books, a new statute sets expectations, reorients business models, and spawns compliance regimes that develop their own constituencies. Even if the law is designed poorly, inertia will keep it in place. In the AI space, where capabilities and risks are evolving rapidly, durable, tech-specific statutes are the regulatory equivalent of pouring wet cement during an earthquake.
Sticky statutes also travel. A single large market (say, California or the European Union) can export its enactment nationwide or even worldwide, converting a local experiment into a de facto multijurisdictional mandate. This process has played out before: California’s infamous furniture-fabric flammability rule effectively became a national standard, with significant and only later-understood health costs, as potentially toxic flame-retardant chemicals became ubiquitous. That dynamic isn’t a one-off; it’s a warning about extraterritorial spillovers when states “don’t stay in their lanes.” AI’s inherently interstate development magnifies the risk that a handful of jurisdictions will set the rules for everyone else.
Finally, rigid laws are especially harmful when the government lacks the capacity to enforce them. Requiring audits, disclosures, or technical assessments without also funding the expertise to evaluate them leads only to the appearance of oversight, not its reality. In that scenario, paperwork piles up but actual understanding does not. The result is wasted resources, weakened public trust, and less attention paid to the safety-critical issues that most need it. Environmental impact statements (EISs) required by the National Environmental Policy Act (NEPA) and submitted to the Environmental Protection Agency are a case study in this phenomenon. As the Government Accountability Office has reported, “Little information exists on the costs and benefits of completing NEPA analyses,” yet they persist. That persistence might not be problematic if EISs were easy to produce. The opposite is true: The average EIS can cost between $250,000 and $2 million to prepare.
What Adaptive Statutes Look Like (If We Must Legislate)
If, despite the warnings, legislators do legislate—and politics suggests they will—then statutes must incorporate adaptive mechanisms. They can be less sticky and more agile if they are designed with three simple tools.
Sunset clauses. A sunset clause flips the burden of proof. Instead of persisting by default, a rule expires by default and must earn renewal with evidence. In fields where “yesterday’s breakthrough becomes tomorrow’s baseline,” expiration dates are not a sign of weakness; they are a guardrail against ossification and capture. In AI, sunsets would force lawmakers to test whether mandates (say, documentation or audit requirements) actually reduce risk rather than paper over it or create it elsewhere.
Sunrise clauses. A sunrise clause delays the effective date of a statute until specified conditions are met—often institutional capacity, standardized metrics, or independent testing readiness. For example, if an agency lacks the expertise to evaluate the model card released by a lab deploying a new AI system, flipping a statutory switch yields theater, not safety. Enforcement should therefore be tied to a demonstrable institutional capacity to oversee implementation accurately and comprehensively. (Here, sunrise and sunset are complements: Start only when we can enforce wisely; continue only if results justify it.)
Retrospective review. The idea here is to require, by statute, a look-back at that same statute’s efficacy. Did the rule reduce the targeted harm? What were the unintended effects? Who reviews, which metrics matter, and when findings trigger revision or repeal must all be specified up front. Designed well, retrospective review is an antidote to vibes-based regulation; designed poorly, it becomes mere bureaucratic pageantry.
Beyond these three, AI regulation also calls for jurisdictional humility and realistic expectations about capacity. Lawmakers should avoid one-size-fits-all structures that sound elegant but misalign incentives across wildly different models and uses. Instead they must develop genuine expertise before layering on duties. Lawmakers must also respect constitutional limits on extraterritorial effects so that the laboratories of democracy can do their work and the biggest states don’t end up regulating the entire country through market power. Finally, and above all, lawmakers must resist premature regulation, for rules enacted before we understand a problem have little hope of solving it.
Unfortunately, the U.S. probably won’t see adaptive statutes. They require legislators to be hands-on: to fund evaluators, commission studies, revisit choices, and explain revisions that might look like backtracking. Electoral incentives reward visible action over disciplined experimentation. The political pressure to control AI accelerates this bias: Restraint can be caricatured as negligence, while speed reads as courage. Yet it is “regulatory patience” that requires true fortitude. That fortitude lawmakers lack, and so we get instead premature, sticky laws that quickly outlive their rationale.
If politics won’t deliver adaptive statutes, we need another path that preserves room to innovate while addressing real harms.
The Common-Law Alternative
Despite lawmakers’ fears, the common law already supplies a versatile toolkit for AI’s problems, and it can address them without foreclosing future innovation. Consider defamation by large language models (LLMs). Because liability for defamation has (wisely) never turned on whether a human sat down to author the words in question, but only on whether a defendant culpably released a false, reputation-harming statement into the wild, courts can readily extend defamation-law principles to AI outputs. When a company designs an LLM defectively or continues publishing known falsehoods, liability should follow. The facts are novel; the legal inquiry is not.
The same is true for physical harms. When an autonomous system causes an accident, established doctrines ask familiar questions: Was there a design defect? Were warnings adequate? Did a human operator act negligently? Manufacturers bear responsibility for defects; users for negligence; courts and juries for evaluating reasonableness under the circumstances. Where proof problems arise—say, due to system complexity—doctrines like strict liability for manufacturing defects or res ipsa loquitur can fill the gap, and insurance can smooth losses without statutory reinvention. The common law’s menu of options is not hypothetical; it’s the product of centuries of doctrinal refinement. Why prefer this path?
Adaptability. Standards such as “reasonableness,” “good faith,” and “due care” let decision-makers apply enduring principles to new facts without cabining tomorrow’s technology into today’s definitions. In a fast-moving domain, that’s a feature, not a limitation, of the common law. The common law’s incremental corrections avoid the pacing problem that plagues legislatures. We do not need to decide, today, how to govern every use case that might arrive in five years.
Information generation. Litigation surfaces facts about failures and trade-offs that no predeployment statute could possibly anticipate. The surprising successes and failures of new technology launches show that even the most knowledgeable experts cannot predict how a technology will be received in the marketplace.
Political insulation. Courts apportion responsibility among manufacturers, deployers, and users based on culpability, not identity or hype. And private litigants assert claims not to satisfy the whims of the president or this or that bureaucratic official, but to recover real losses for real plaintiffs (and of course to make real money doing it).
This is not to romanticize courts. They are imperfect, slow, and sometimes inconsistent. But compared to sticky, tech-specific statutes, the common law is brilliantly conservative in the best sense. It preserves flexibility by refusing to articulate the law more fully than the facts require.
Answering the Obvious Objections
Some observers may claim that AI is different. Yes—and no. Some AI risks (e.g., scale, speed, opaqueness) challenge existing doctrines. Yet the common law has long handled complex, inscrutable systems—industrial machinery, pharmaceuticals, aviation—by placing responsibility where human decisions cause harm and by adjusting evidentiary burdens when proof is uniquely hard to obtain. Where truly novel, systemic risks emerge, the common law can still serve as the baseline while targeted, temporary statutes—structured with sunset, sunrise, and review—are crafted to address specific gaps.
Others may suggest that courts act too slowly. Sometimes. But slowness can be a virtue when the alternative is locking in mistakes. And courts decide live controversies year-round, iterating as facts accumulate. In a field where the “known unknowns” are substantial, a rulebook that learns is superior to one that presumes omniscience.
Still others might point to the difficulty of national coordination. Where interstate conduct raises cross-border concerns, federal law may be warranted—but it should be narrow, evidence-backed, and temporary. The worst path is a patchwork of 50 sticky statutes or a sprawling federal code drafted in 2025 to govern models we can’t yet even imagine. In a market where one state’s rules often become national standards, jurisdictional humility and federal restraint are not luxuries, but necessities.
Conclusion: Keep the Future Open
AI’s risks are real. So are its possibilities. Sticky, tech-specific statutes enacted in a rush are the worst of both worlds. They are ill-suited to address current problems and ill-equipped for future ones. Adaptive statutes—sunset, sunrise, review—are better, but politics rarely rewards the patience they require. That leaves the common law, our oldest and most versatile technology policy—a system built to refine, to balance, and to apportion responsibility without freezing innovation in place. We should default to its baseline; legislate modestly, temporarily, and only where truly necessary; and preserve space for ingenuity to flourish.
The task is not to protect the law from whatever AI brings. It is to protect the future from our own impulse to legislatively force the unknown into the comfortable mold of the present. That requires humility, discipline, and confidence in the law we already have. The promise of AI is too important to handcuff with statutes written in fear of possibilities we do not yet understand. Keep the future open.