
AI Federalism: The Right Way to Do Preemption

Charlie Bullock
Monday, November 24, 2025, 1:00 PM

The allocation of regulatory authority over AI between states and the federal government is a complex problem that can’t be resolved in a single stroke.



Last week, congressional Republicans launched a last-minute attempt to insert an artificial intelligence (AI) preemption provision into the must-pass National Defense Authorization Act (NDAA). As of this writing, the text of the proposed addition has not been made public. However, the fact that the provision is being introduced into a must-pass bill at the eleventh hour may indicate that it will resemble the preemption provision that was added to, and ultimately stripped out of, the most recent reconciliation bill. The U.S. House of Representatives passed an early version of that “moratorium” on state AI regulation in May. While the exact scope of the House version of the moratorium has been the subject of some debate, it would essentially have prohibited states and municipalities from enforcing virtually any law or rule regulating “artificial intelligence,” broadly defined. There followed a hectic and exciting back-and-forth political struggle over whether and in what form the moratorium would be enacted. Over the course of the dispute, the moratorium was rebranded as a “temporary pause,” amended to include various exceptions (notably including a carve-out for “generally applicable” laws), reduced from 10 years’ duration to five, and made conditional on states’ acceptance of new Broadband Equity, Access, and Deployment (BEAD) Program funding. Ultimately, however, the “temporary pause” was defeated, with the Senate voting 99-1 for an amendment stripping it from the reconciliation bill.

The preemption provision that failed in June would have virtually eliminated targeted state AI regulation and replaced it with nothing. Since then, an increasing number of politicians have rejected this approach. But, as the ongoing attempt to add preemption into the NDAA demonstrates, this does not mean that federal preemption of state AI regulations is gone for good. In fact, many Republicans and even one or two influential Democrats in Congress continue to argue that AI preemption is a federal legislative priority. What it does mean is that any moratorium introduced in the near future will likely have to be packaged with some kind of substantive federal AI policy in order to have any realistic chance of succeeding.

For those who have been hoping for years that the federal government would one day implement some meaningful AI policy, this presents an opportunity. If Republicans hope to pass a new moratorium through the normal legislative process, rather than as part of the next reconciliation bill, they will need to offer a deal that can win the approval of a number of Democratic senators (seven, currently, although that number may grow or shrink following the 2026 midterm elections) to overcome a filibuster. The most likely outcome is that nothing will come of this opportunity. An increasingly polarized political climate means that passing legislation is harder than ever, and hammering out a deal that would be broadly acceptable to industry and the various other interest groups supporting and opposing preemption and AI regulation may not be feasible. Still, there’s a chance.

Efforts to include a moratorium in the NDAA seem unlikely to succeed. Even if this particular effort fails, however, preemption of state AI laws will likely continue to be a hot topic in AI governance for the foreseeable future. This means that arguably the most pressing AI policy question of the moment is: How should federal preemption of state AI laws and regulations work? In other words, what state laws should be preempted, and what kind of federal framework should they be replaced with?

I argue that the answer to that question is as follows: Regulatory authority over AI should be allocated between states and the federal government by means of an iterative process that takes place over the course of years and involves reactive preemption of fairly narrow categories of state law.

The evidence I’ll offer in support of this claim is primarily historical. As I argue below, this iterative back-and-forth process is the only way in which the allocation of regulatory authority over an important emerging technology has ever been determined in the United States. That’s not a historical accident; it’s a consequence of the fact that the approach described above is the only sensible approach that exists. The world is complicated, and predicting the future course of a technology’s development is notoriously difficult. So is predicting the kinds of governance measures that a given technology and its applications will require. Trying to determine how regulatory authority over a new technology should be allocated ex ante is like trying to decide how each room of an office building should be furnished before the blueprints have even been drawn up—it can be done, but the results will inevitably be disappointing.       

The Reconciliation Moratorium Was Unprecedented

The reconciliation moratorium, if it had passed, would have been unprecedented with respect to its substance and its scope. The lack of substance—that is, the lack of any affirmative federal AI policy accompanying the preemption of state regulations—has been widely discussed elsewhere. It’s worth clarifying, however, that deregulatory preemption is not in and of itself an unprecedented or inherently bad idea. The Airline Deregulation Act of 1978, notably, preempted state laws relating to airlines’ “rates, routes, or services” and also significantly reduced federal regulation in the same areas. Congress determined that “maximum reliance on competitive market forces” would lead to increased efficiency and benefit consumers and, therefore, implemented federal deregulation while also prohibiting states from stepping in to fill the gap.

What distinguished the moratorium from the Airline Deregulation Act was its scope. The moratorium would have prohibited states from enforcing “any law or regulation … regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce” (with a few exceptions, including for “generally applicable” laws). But preemption of “any state law or regulation … regulating airplanes entered into interstate commerce” would have been totally out of the question in 1978. In fact, the vast majority of airplane-related state laws and regulations were unaffected by the Airline Deregulation Act. By the late 1970s, airplanes were a relatively well understood technology and air travel had been extensively regulated, both by the states and by the federal government, for decades. Many states devoted long sections of their statutory codes exclusively to aeronautics. The Airline Deregulation Act’s prohibition on state regulation of airline “rates, routes, or services” had no effect on existing state laws governing airlines’ liability for damage to luggage, airport zoning regulations, the privileges and duties of airport security personnel, state licensing requirements for pilots and for aircraft, or the legality of maneuvering an airplane on a public highway.

In short, the AI moratorium was completely unprecedented because it would have preempted an extremely broad category of state law and replaced it with nothing. In all the discussions I’ve had with die-hard AI preemption proponents (and there have been many), the only preemption measures I’ve encountered that come anywhere near the breadth of the reconciliation moratorium were packaged with an extensive and sophisticated scheme of federal regulation. The Federal Food, Drug, and Cosmetic Act, for example, prohibits states from establishing “any requirement [for medical devices, broadly defined] ... which is different from … a requirement applicable under this chapter to the device.” But the breadth of that provision is proportional to the legendary intricacy of the federal regulatory regime of which it forms a part. The idea of a Food and Drug Administration-style licensing regime for frontier AI systems has been proposed before, but it’s probably a bad idea for the reasons discussed in Daniel Carpenter’s excellent article on the subject. Regardless, proponents of preemption would presumably oppose such a heavy-handed regulatory regime no matter how broad its preemption provisions were.

Premature and Overbroad Preemption Is a Bad Idea

Some might argue that the unprecedented nature of the moratorium was a warranted response to unprecedented circumstances. The difficulty of getting bills through a highly polarized Congress means that piecemeal preemption may be harder to pull off today than it was in the 20th century. Moreover, some observers believe that AI is an unprecedented technology (although there is disagreement on this point), while others argue that the level of state interest in regulating AI is unprecedented and therefore requires an unprecedentedly swift and broad federal response. That latter claim is, in my opinion, overstated: While a number of state bills that are in some sense about “AI” have been proposed, most of these will not become law, and the vast majority of those that do will not impose any meaningful burden on AI developers. That said, preemption proponents have legitimate concerns about state overregulation harming innovation. These concerns (much like concerns about existential risk or other hypothetical harms from powerful future AI systems) are currently speculative, because the state AI laws now in effect do not place significant burdens on developers or deployers of AI systems. But premature regulation of an emerging technology can lead to regulatory lock-in and harmful path dependence, which bolsters the case for proactive and early preemption.

Because of these reasonable arguments for departing from the traditional iterative and narrow approach to preemption, establishing that the moratorium was unprecedented is less important than understanding why the moratorium’s approach has never been tried before. In my opinion, the reason is that any important new technology will require some amount of state regulation and some amount of federal regulation, and it’s impossible to determine the appropriate limits of state and federal authority ex ante.

There’s no simple formula for determining whether a given regulatory task should be undertaken by the states, the federal government, both, or neither. As a basic rule of thumb, though, the states’ case is strongest when the issue is purely local and relates to a state’s “police power”—that is, when it implicates a state’s duty to protect the health, safety, and welfare of its citizens. The federal government’s case, meanwhile, is typically strongest when the issue is purely one of interstate commerce or other federal concerns such as national security.

In the case of the Airline Deregulation Act, discussed above, Congress appropriately determined in 1978 that the regulation of airline rates and routes—an interstate commerce issue if ever there was one—should be undertaken by the federal government, and that the federal government’s approach should be deregulatory. But this was only one part of a back-and-forth exchange that took place over the course of decades in response to technological and societal developments. Regulation of airport noise levels, for example, implicates both interstate commerce (because airlines are typically used for interstate travel) and the police power (because “the area of noise regulation has traditionally been one of local concern”). It would not have been possible to provide a good answer to the question of who should regulate airport noise levels a few years after the invention of the airplane, because at that point modern airports—which facilitate the takeoff and landing of more than 44,000 U.S. flights every day—simply didn’t exist. Instead, a reasonable solution to the complicated problem was eventually worked out through a combination of court decisions, local and federal legislation, and federal agency guidance. All of these responded to technological and societal developments (the jet engine; supersonic flight; increases in the number, size, and economic importance of airports) rather than trying to anticipate them.

Consider another example: electricity. Electricity was first used to power homes in the U.S. in the 1880s, achieved about 50 percent adoption by 1925, was up to 85 percent by 1945, and was used in nearly all homes by 1960. During its early days, electricity was delivered via direct current and had to be generated no more than a few miles from where it was consumed. Technological advances, most notably the widespread adoption of alternating current, eventually allowed electricity to be delivered to consumers from power plants much farther away, allowing for cheaper power due to economies of scale. Initially, the electric power industry was regulated primarily at the municipal level, but beginning in 1907 states began to assume primary regulatory authority. In 1935, in response to court decisions striking down state regulations governing the interstate sale of electricity as unconstitutional, Congress passed the Federal Power Act (FPA), which “authorized the [predecessor of the Federal Energy Regulatory Commission (FERC)] to regulate the interstate transportation and wholesale sale (i.e. sale for resale) of electric energy, while leaving jurisdiction over intrastate transportation and retail sales (i.e. sale to the ultimate consumer) in the hands of the states.” Courts later held that the FPA impliedly preempted most state regulations governing interstate wholesale sales of electricity.

If your eyes began to glaze over at some point toward the end of that last paragraph, good! You now understand that the process by which regulatory authority over the electric power industry was apportioned between the states and the federal government was extremely complicated. But the FPA only dealt with a small fraction of all the regulations affecting electricity. There are also state and local laws and regulations governing the licensing of electricians, the depth at which power lines must be buried, and the criminal penalties associated with electricity theft, to name a few examples. By the same token, there are federal laws and rules concerning tax credits for wind turbine blade manufacturing, the legality of purchasing substation transformers from countries that are “foreign adversaries,” lightning protection for commercial space launch sites, use of electrocution for federal executions, and so on and so forth. I’m not arguing for more regulation here—it’s possible that the U.S. has too many laws, and that some of the regulations governing electricity are unnecessary or harmful. But even if extensive deregulation occurred, eliminating 90 percent of state, local, and federal rules relating to electricity, a great number of necessary or salutary rules would remain at both the federal and state levels. Obviously, the benefits of electricity have far exceeded the costs imposed by its risks. At the same time, no one denies that electricity and its applications do create some real dangers, and few sensible people dispute the fact that it’s beneficial to society for the government to address some of these dangers with common-sense regulations designed to keep people safe.

Again, the reconciliation moratorium would have applied, essentially, to any laws “limiting, restricting, or otherwise regulating” AI models or AI systems, unless they were “generally applicable” (in other words, unless they applied to AI systems only incidentally, in the same way that they applied to other technologies, and did not single out AI for special treatment). Imagine if such a restriction had been imposed on state regulation of electricity, at a similar early point in the development of that technology. The federal government would have been stuck licensing electricians, responding to blackouts, and deciding which municipalities should have buried as opposed to overhead power lines. If this sounds like a good idea to you, keep in mind that, regardless of your politics, the federal government has not always taken an approach to regulation that you would agree with. Allowing state and local control over purely local issues allows more people to have what they want than would a one-size-fits-all approach determined in Washington, D.C.

But the issue with the reconciliation moratorium wasn’t just that it did a bad job of allocating authority between states and the federal government. Any attempt to make a final determination of how that authority should be allocated for the next 10 years, no matter how smart its designers were, would have met with failure. Think about how difficult it would have been for someone living a mere five or 10 years after electric power first came into commercial use to determine, ex ante, how regulatory authority over the new technology should be allocated between states and the federal government. It would, of course, have been impossible to do even a passable job. The knowledge that governing interstate commerce is traditionally the core role of the federal government, while addressing local problems that affect the health and safety of state residents is traditionally considered to be the core of a state’s police power, takes you only so far. Unless you can predict all the different risks and problems that the new technology and its applications will create as it matures, it’s simply not possible to do a good job of determining which of them should be addressed by the federal government and which should be left to the states.

Airplanes and electricity are far from the only technologies that can be used to prove this point. The other technologies commonly cited in historical case studies on AI regulation—railroads, nuclear power, telecommunications, and the internet—followed the same pattern. Regulatory authority over each of these technologies was allocated between states and the federal government via an iterative back-and-forth process that responded to technological and societal developments rather than trying to anticipate them. Preemption of well-defined categories of state law was typically an important part of that process, but preemption invariably occurred after the federal government had determined how it wanted to regulate the technology in question. The Carnegie Endowment’s excellent recent piece on the history of emerging technology preemption reaches similar conclusions and correctly observes that “[l]egislators do not need to work out the final division between federal and state governments all in one go.”

The Right Way to Do Preemption

Because frontier AI development is to a great extent an interstate commerce issue, it would in an ideal world be regulated primarily by the federal government rather than the states (although the fact that we don’t live in an ideal world complicates things somewhat). While the premature and overbroad attempts at preemption that have been introduced so far would almost certainly end up doing more harm than good, it should be possible (in theory, at least) to address legitimate concerns about state overregulation through an iterative process like the one described above. In other words, there is a right way to do preemption—although it remains to be seen whether any worthwhile preemption measure will ever actually be introduced. Below are four suggestions for how preemption of state AI laws ought to take place.

1. The scope of any preemption measure should correspond to the scope of the federal policies implemented.

The White House AI Action Plan laid out a vision for AI governance that emphasized the importance of innovation while also highlighting some important federal policy priorities for ensuring that the development and deployment of powerful future AI systems happens securely. Building a world-leading testing and evaluations ecosystem, implementing federal government evaluations of frontier models for national security risks, bolstering physical and cybersecurity at frontier labs, increasing standard-setting activity by the Center for AI Standards and Innovation (CAISI), investing in vital interpretability and control research, ramping up export control enforcement, and improving the federal government’s AI incident response capacity are all crucial priorities. Additional light-touch frontier AI security measures that Congress might consider include (to name a few) codifying and funding CAISI, requiring mandatory incident reporting for frontier AI incidents, establishing federal AI whistleblower protections, and authorizing mandatory transparency requirements and reporting requirements for frontier model development. None of these policies would impose any significant burden on innovation, and they might well provide significant public safety and national security benefits.

But regardless of which policies Congress ultimately chooses to adopt, the scope of preemption should correspond to the scope of the federal policies implemented. This correspondence could be close to 1:1. For instance, a federal bill that included AI whistleblower protections and mandatory transparency requirements for frontier model developers could be packaged with a provision preempting only state AI whistleblower laws (such as § 4 of California’s SB 53) and state frontier model transparency laws (such as § 2 of SB 53).

However, a more comprehensive federal framework might justify broader preemption. Under the legal doctrine of “field preemption,” federal regulatory regimes so pervasive that they occupy an entire field of regulation are interpreted by courts to impliedly preempt any state regulation in that field. It should be noted, however, that the “field” in question is rarely if ever so broadly defined that all state regulations relating to an important emerging technology are preempted. Thus, while courts interpreted the Atomic Energy Act to preempt state laws governing the “construction and operation” of nuclear power plants and laws “motivated by radiological concerns,” many state laws regulating nuclear power plants were left undisturbed. In the AI context, it might make sense to preempt state laws intended to encourage the safe development of frontier AI systems as part of a package including federal frontier AI safety policies. It would make less sense to implement the same federal frontier AI safety policies and preempt state laws governing self-driving cars, because this would expand the scope of preemption far beyond the scope of the newly introduced federal policy.

As the Airline Deregulation Act and the Internet Tax Freedom Act demonstrate, deregulatory preemption can also be a wise policy choice. Critically, however, each of those measures (a) preempted narrow and well-understood categories of state regulation and (b) reflected a specific congressional determination that neither the states nor the federal government should regulate in a certain well-defined area.

2. Preemption should focus on relatively narrow and well-understood categories of state regulation.

“Narrow” is relative, of course. It’s possible for a preemption measure to be too narrow. A federal bill that included preemption of state laws governing the use of AI in restaurants would probably not be improved if its scope was limited so that it applied only to Italian restaurants. Dean Ball’s thoughtful recent proposal provides a good starting point for discussion. Ball’s proposal would create a mandatory federal transparency regime, with slightly stronger requirements than existing state transparency legislation, and in exchange would preempt four categories of state law—state laws governing algorithmic pricing, algorithmic discrimination, disclosure mandates, and “mental health.”

Offering an opinion on whether this trade would be a good thing from a policy perspective, or whether it would be politically viable, is beyond the scope of this piece. But it does, at least, do a much better job than other publicly available proposals of specifically identifying and defining the categories of state law that are to be preempted. I do think that the “mental health” category is significantly overbroad; my sense is that Ball intended to address a specific class of state law regulating the use of AI systems to provide therapy or mental health treatment. His proposal would, in my opinion, be improved by identifying and targeting that category of law more specifically. As written, his proposed definition would sweep in a wide variety of potential future state laws that would be both (a) harmless or salutary and (b) concerned primarily with addressing purely local issues. Nevertheless, Ball’s proposal strikes approximately the correct balance between legitimate concerns regarding state overregulation and equally legitimate concerns regarding the unintended consequences of premature and overbroad preemption.

3. Deregulatory preemption should reflect a specific congressional determination against regulating in a well-defined area.

An under-discussed aspect of the reconciliation moratorium debate was that supporters of the moratorium, at least for the most part, did not claim that they were eliminating state regulations and replacing them with nothing as part of a deregulatory effort. Instead, they claimed that they were preempting state laws now and would get around to enacting a federal regulatory framework at some later date.

This was not and is not the correct approach. Eliminating states’ ability to regulate in an area also decreases Congress’s political incentive to reach a preemption-for-policy trade in that area, which in turn decreases the odds that Congress will take meaningful action in the near future. And setting aside the political considerations, that kind of preemption would make it impossible for the normal back-and-forth process through which regulatory authority is usually allocated to take place. If states are banned from regulating, there’s no opportunity for Congress, federal agencies, courts, and the public to learn from experience which categories of state regulation are beneficial and which place unnecessary burdens on interstate commerce. Deregulatory preemption can be a legitimate policy choice, but when it occurs it should be the result of an actual congressional policy judgment favoring deregulation. And, of course, this congressional judgment should focus on specific, well-understood, and relatively narrow categories of state law. As a general rule of thumb, express preemption should take place only once Congress has a decent idea of what exactly is being preempted.

4. Preemption should facilitate, rather than prevent, an iterative process for allocating regulatory authority between states and the federal government.

As the case studies discussed above demonstrate, the main problem with premature and overbroad preemption is that it would make it impossible to follow the normal process for determining the appropriate boundaries of state and federal regulatory jurisdiction. Instead, preemption should take place after the federal government has formed some idea of how it wants to regulate AI and what specific categories of state law are inconsistent with its preferred regulatory scheme.

Ball’s proposal is instructive here as well, in that it provides for a time-limited preemption window of three years. Given the pace at which AI capabilities research is progressing, a 10- or even five-year moratorium on state regulation in a given area is far more problematic than a shorter period of preemption. This is, at least in part, because shorter preemption periods are less likely to prevent the kind of iterative back-and-forth process described above from occurring. Even three years may be too long in the AI governance context, however; three years prior to this writing, ChatGPT had not yet been publicly released. A two-year preemption period for narrowly defined categories of state law, by contrast, might be short enough to facilitate the kind of iterative process described above rather than preventing a productive back-and-forth from occurring.

***

Figuring out who should regulate an emerging technology and its applications is a complicated and difficult task that should be handled on an issue-by-issue basis. Preempting counterproductive or obnoxious state laws should be part of the process, but preempting broad categories of state law before we even understand what it is that we’re preempting is a recipe for disaster. It is true that there are costs associated with this approach; it may eventually allow some state laws that are misguided or harmful to innovation to go into effect. To the extent that such laws are passed, however, they will strengthen the case for preemption. Colorado’s AI Act, for example, has been criticized for being burdensome and difficult to comply with and has also generated considerable political support for broad federal preemption, despite the fact that it has yet to go into effect. By the same token, completely removing states’ ability to regulate, even as AI capabilities improve rapidly and real risks begin to manifest, may create considerable political pressure for heavy-handed regulation and ultimately result in far greater costs than industry would otherwise have faced. Ignoring the lessons of history and blindly implementing premature and overbroad preemption of state AI laws is a recipe for a disaster that would harm both the AI industry and the general public.


Charlie Bullock is a Senior Research Fellow at the Institute for Law & AI. Charlie’s research focuses on the intersection of AI governance and U.S. law and policy, with particular emphasis on U.S. administrative law. His current research includes projects on whistleblower protections, preemption, information-gathering authorities, emergency powers, and regulatory updating. Charlie received his J.D. from Yale Law School in 2020, where he was an editor of the Yale Journal on Regulation.
