1,000 AI Bills: Time for Congress to Get Serious About Preemption
If this growing patchwork of parochial regulatory policies takes root, it could undermine U.S. AI innovation.

The United States is approaching a troubling technology policy milestone: One thousand artificial intelligence (AI) bills have already been introduced just over four months into 2025. That works out to almost eight new AI-related bills every day this year, an unprecedented level of policymaker interest in a technology that most Americans have not yet even used—at least not in a substantive fashion. Though state legislators may have the best of intentions in pushing their respective bills, they risk disrupting the development of a transformative technology. For example, the RAISE Act (Responsible AI Safety and Education Act) under consideration by the New York State Legislature would saddle every qualifying lab with an annual third-party inspection for compliance with the act’s extensive and ambiguous development standards. Even if only a handful of states passed equivalents of the RAISE Act, labs would soon find themselves searching for different auditors in different states to evaluate them against different metrics. And, again, the RAISE Act is just one of 1,000 bills.
If this growing patchwork of parochial regulatory policies takes root, it could undermine the nation’s efforts to stay at the cutting edge of AI innovation at a critical moment when competition with China for global AI supremacy is intensifying—a reality that Dean Ball and Alan Rozenshtein observed last year (before the DeepSeek moment!). As House Energy and Commerce Committee Chair Brett Guthrie argued recently, the United States must “make sure that we win the battle against China” and the key to that is to ensure America does not “regulate like Europe or California regulates,” because “that puts us in a position where we’re not competitive.” Similarly, the Trump administration has stated America must “secure its position as the unrivaled world leader in critical and emerging technologies” beginning with AI.
These important goals of fostering a robustly innovative national AI marketplace and maintaining a strong strategic base from which to compete with China will be undermined unless Congress preempts the development of a patchwork of conflicting and costly state and local regulatory policies.
Unprecedented Legislative Activity
The vast majority of the 1,000 AI-related bills currently pending are state bills, and many would impose various new regulatory obligations on algorithmic systems. While not all of these bills will pass, some important measures have already been enacted and others are advancing rapidly.
The wisest state bills propose studying how AI is currently being used by government bodies, or whether existing governance capacity is sufficient for responding to AI developments. Other smart bills look to deploy AI within government to improve efficiency or advance state development. Two states—Montana and New Hampshire—have even proposed “Right to Compute” initiatives that would protect the public’s ability to access and use computational resources; the Montana bill was signed into law on April 21. One final meritorious approach is already under way in Utah, where the Office of Artificial Intelligence Policy offers “regulatory mitigation” to companies deploying AI. This unique arrangement allows companies to partner with the state to develop safeguards responsive to specific use cases.
Unfortunately, most state AI bills look to preemptively regulate in some fashion, often based on hypothetical concerns. Some of these bills contain differing definitions of the term “artificial intelligence.” Two major types of state AI regulatory measures are particularly problematic.
The first are “model-level” regulatory proposals that would impose various constraints on large “frontier” AI models on safety grounds. For example, in 2024 the California legislature passed SB 1047, which would have regulated frontier AI systems developed in California—effectively allowing California to extraterritorially dictate AI development across the nation, given that many of the leading AI models are developed in the Golden State. While California Gov. Gavin Newsom eventually vetoed the measure last September, it represented a threat to innovation beyond the state’s borders that would have been constitutionally problematic and ripe for federal preemption. New York, Illinois, and Massachusetts introduced similar measures this year.
Another major category of state AI bills looks to preemptively eliminate the possibility of AI harms or “algorithmic discrimination” in various contexts. Dozens of states have floated such measures, and the leading bills have been coordinated by the Multistate AI Policymaker Working Group (MAP-WG), a coalition of lawmakers from more than 45 states attempting to create a uniform AI discrimination law. Colorado already passed one of these bills (SB24-205) last May.
These measures take their cues from the European Union’s new AI Act and other European tech regulations, proposing prescriptive rules enforced by new technology bureaucracies or state attorneys general. Instead of relying on the many legal remedies that already exist to address potential AI-related harms, these measures would essentially subject certain types of new AI innovation to a guilty-until-proven-innocent standard and impose cumbersome mandates before innovators could even get products to market. That said, it is clear that AI has in many cases amplified and accelerated the sorts of harms addressed by existing law. Ongoing litigation against AI products that allegedly contributed to a young user’s suicide, for example, has rightfully elicited popular concern about how to protect children in the age of AI. Similarly, the widespread unease over the creation and dissemination of compelling deepfakes warrants public attention.
Another problem is these measures’ reliance on a litany of open-ended regulatory terms like “consequential decisions,” “substantial factors,” “reasonable care,” and “high-risk” applications, as well as their conflicting definitions of new regulatory classifications like “developers,” “deployers,” and “distributors.” This vague language, multiplied across several states, will inevitably introduce the sort of regulatory uncertainty that has long stymied innovation. What constitutes “reasonable care” in the context of model development is far from a settled question, and it is not one that should fall to trial courts to puzzle through.
Protecting Interstate Algorithmic Commerce and Speech
The state lawmakers pushing such proposals cite “the probability of congressional inaction” on AI issues as justification for their parochial regulations. If every state goes its own way on AI policy, however, it will interfere with the free flow of algorithmic commerce and speech and diminish America’s domestic and international competitiveness. Rep. Jay Obernolte (R-Calif.), who co-chaired the House’s Bipartisan Task Force on Artificial Intelligence in the previous Congress, argues that “AI is very clearly an interstate commerce issue, and I think that, predominantly, regulation of AI needs to be done at the federal level, if you allow 50 different state regulations to exist … [that] is an enormous barrier to entry for innovation.” While the Supreme Court long ago recognized that states play a key role in regulating subjects “drawn from local knowledge and experience,” broad frontier AI regulation is surely not one of those subjects.
State legislators’ insistence on making their mark on AI governance while Congress studies the issue may have irreversible consequences. Larger, better-resourced companies will find it easier than smaller startups to comply with compounding state AI regulations, as even regulatory proponents acknowledge. When Colorado Gov. Jared Polis (D) signed his state’s major new AI regulatory measure into law last May, he noted that a patchwork of state AI regulations would create “a complex compliance regime for all developers and deployers of AI” that will “tamper innovation and deter competition.” A leading venture capital firm warns that such a compliance patchwork will “cripple Little Tech and hinder American efforts to compete with AI development in other countries.” This is especially true for open-source AI innovators, who typically lack the resources needed to deal with these regulatory complexities. Many municipalities (led by New York City) are now also crafting their own AI regulatory schemes, further exacerbating the problem.
This is why Gov. Polis urged Congress to create “a needed cohesive federal approach … to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies for consumers.” Sam Altman, CEO of OpenAI, recently echoed that point. During a Senate Commerce Committee hearing this week on “Winning the AI Race,” Sen. Cynthia Lummis (R-Wyo.) asked representatives of leading AI developers about “a patchwork regulatory framework and how that could impact our competitiveness” if “significantly burdensome” state AI mandates proliferate.
“I think it would be quite bad,” responded Altman. “It’s very difficult to imagine us figuring out how to comply with 50 different sets of regulation,” he said. “That will slow us down at a time when I don’t think it’s in anyone’s interest for us to slow down. One federal framework that is light-touch that we can understand and that lets us move with the speed that this moment calls for seems important,” he argued, but “the sort of every state takes a different approach here I think would be quite burdensome and significantly impair our ability to do what we need to do” to invest and compete.
Preemption or Moratorium?
Congress can address the problem Polis and Altman identified through either express preemption of state AI regulations or a federal “learning period moratorium” that would pause new AI regulatory enactments for a period of time.
Under express preemption, as previously detailed here by Ball and Rozenshtein, Congress would create a uniform national standard for AI, making explicit that federal law preempts contradictory state and local AI rules. Federal preemption legislation should make clear that this is being done both to protect the free flow of interstate algorithmic commerce and speech and to ensure that other important national policy objectives are protected. This is in line with the national framework Congress and the Clinton administration crafted for the internet, digital commerce, and online speech in the 1990s. Notably, that framework specified that the internet “should be governed by consistent principles across state, national, and international borders that lead to predictable results regardless of the jurisdiction in which a particular buyer or seller resides.” Likewise, it called for governments “to establish a predictable and simple legal environment.”
Congress should align the language of AI preemption legislation with previous laws and judicial precedents that federal courts have recognized as clear expressions of lawmakers’ desire to protect interstate activity from confusing, conflicting state and local policies, while also safeguarding the very real interests of states in adopting narrow policies responsive to the norms and values of their political communities. How to draw the line between safeguarding a competitive and thriving national AI ecosystem and leaving states room to advance community values is a topic that merits further inquiry.
A few key principles should inform that effort: First, Congress should limit carve-outs, which could undermine the effort to establish clear and consistent national AI standards. Possible exceptions would include local policing and education policy decisions involving state government use of algorithmic systems (which fit the description of subjects “drawn from local knowledge and experience”). Second, Congress should pay particular attention to preempting state regulations that have commercial implications. While state- and local-level bans on commercial products, such as pricing algorithms, may appear to have limited geographic reach, the nature of AI tools and our interconnected economy may result in local decisions having statewide and even nationwide ramifications. What’s more, given that commercial tools are likely to elicit the greatest interest from investors, it’s particularly important to avoid the creation of a policy patchwork in that domain that could diminish the overall rate of AI innovation and investment.
Congress could look to the Copyright Act of 1976 and the Telecommunications Act of 1996 for examples of how problematic state policies were preempted as part of previous information technology policy reform efforts. For example, in the Telecom Act, Congress specified that “[n]o State or local statute or regulation, or other State or local legal requirement, may prohibit or have the effect of prohibiting the ability of any entity to provide any interstate or intrastate telecommunications service.” The law included other specific preemptions as well as a provision instructing federal and state regulators to forbear from regulating in certain instances to enhance competition. Congress could use similar language to ensure the development of a robust national AI marketplace with clear and consistent policies for all innovators and investors.
If express preemption proves too challenging in the short term, federal lawmakers could instead consider a “learning period moratorium” on new AI regulations, an idea first outlined in an R Street Institute report last May. A time-limited moratorium on certain new AI-related regulations would create breathing space for new AI innovations while giving policymakers and other experts the chance to study which issues deserve greater attention. A recent R Street filing to the House Commerce Committee outlines how an AI moratorium would work and clarifies that it would not limit the enforcement of preexisting laws and regulations. Many existing consumer protection laws, targeted sectoral rules, and other legal remedies would still cover any harms arising from algorithmic systems.
Congress has used moratoria in the past to encourage the growth of new markets while studying optimal policy for emerging technologies. The Internet Tax Freedom Act of 1998 (made permanent in 2016) prevented state and local governments from imposing “multiple and discriminatory taxes” on electronic commerce and internet access. The Commercial Space Launch Amendments Act of 2004 likewise included a moratorium that ensures federal regulations do not hamstring the nascent market for commercial human spaceflight; that moratorium expires in 2028.
With either preemption or a moratorium, Congress could still eventually impose certain new “light-touch” rules, such as minimum transparency standards for some AI frontier systems, or adherence to some basic best practices in exchange for liability protections. These details are secondary to immediately establishing a federal framework that removes such regulatory powers from state and local governments.
It’s worth recalling that the framers adopted the Commerce Clause (and that courts have subsequently recognized the dormant Commerce Clause) precisely so that questions of interstate commerce would be answered at the national level—even if that meant delaying substantive action. “The very purpose of the Commerce Clause,” as expressed by the Supreme Court, “was to create an area of free trade among the several states.” The powers afforded to Congress under the clause include not only the promotion of commerce throughout the nation but also the creation of “an area of trade free from interference by the States.”
***
Congress and the Trump administration share a desire to boost AI opportunity and expand the nation’s computational capabilities so that America stays ahead of China and other adversaries in the race for global AI supremacy. Fifty different AI regulatory regimes would undermine these goals. Comprehensive AI preemption legislation, or a learning period moratorium on new AI mandates, is the first step toward achieving them.