
On AI Policy, Congress Shouldn’t Cut States Off at the Knees

Katie Fry Hester, Gary Marcus
Monday, May 19, 2025, 4:13 PM
A sweeping preemption provision tucked into a federal budget bill would be a major step backwards in AI policy.
Maryland State House, workplace of Sen. Katie Fry Hester. (S.L., https://www.flickr.com/photos/ochinko/6267813103, CC BY-NC-SA-2.0, https://creativecommons.org/licenses/by-nc-sa/2.0/deed.en)


Editor's Note: This article is adapted from an open letter initially published on the Substack Marcus on AI and co-signed by state legislators Delegate Michelle Maldonado (D-Va.), Sen. James Maroney (D-Conn.), Sen. Robert Rodriguez (D-Colo.), Rep. Kristin Bahner (D-Minn.), Rep. Steve Elkins (D-Minn.), Sen. Kristen Gonzalez (D-N.Y.), and Rep. Monique Priestley (D-Vt.).

Artificial intelligence holds immense promise—from accelerating disease detection to streamlining services—but it also presents serious risks, including deepfake deception, misinformation, job displacement, exploitation of vulnerable workers and consumers, and threats to critical infrastructure. As AI rapidly transforms our economy, workplaces, and civic life, the American public is calling for meaningful oversight. According to the Artificial Intelligence Policy Institute, 82 percent of voters support the creation of a federal agency to regulate AI. A Pew Research Center survey found that 52 percent of Americans are more concerned than excited about AI’s potential, and 67 percent doubt that government oversight will be sufficient or timely.

Public skepticism crosses party lines and reflects real anxiety: voters worry about data misuse, algorithmic bias, surveillance, impersonation, and even catastrophic risks. Pope Leo XIV has named AI as one of the defining challenges of our time, warning of its ethical consequences and impacts on ordinary people and calling for urgent action.

Yet instead of answering this call with guardrails and public protections, Congress, which has done almost nothing to address these concerns, is considering a major step backwards: a sweeping preemption provision tucked into a federal budget bill that would ban all state regulation of AI for the next decade. It is a tool designed to prevent states from taking matters into their own hands.

The provision, which is likely at odds with the 10th Amendment, provides that “no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.” In other words, the measure would prohibit any state from regulating AI in any way for the next 10 years—even in the absence of any federal standards.

This would be deeply problematic under any circumstance, but it’s especially dangerous in the context of a rapidly evolving technology already reshaping healthcare, education, civil rights, and employment. If enacted, the statute would bar states from acting even when AI systems cause measurable harm, such as through discriminatory lending, unsafe autonomous vehicles, or invasive workplace surveillance. For example, 20 states have passed laws regulating the use of deepfakes in election campaigns, and Colorado passed a law to ensure transparency and accountability when AI is used in crucial decisions affecting consumers and employees. The proposed federal law would automatically block the application of those state laws without offering any alternative. It would also preempt laws holding AI companies liable for catastrophic damages they contribute to, as the California Assembly tried to do.

The federal government should not get to control literally every aspect of how states regulate AI—particularly when it has itself fallen down on the job—and the Constitution makes pretty clear that the bill as written is far, far too broad. The 10th Amendment states, quite directly, that “The powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people.” A bill that steps so thoroughly on states’ rights is difficult to square with this 234-year-old bedrock principle of the United States. (Defenders of this overbroad bill will claim that AI is part of interstate commerce; years of lawsuits will ensue.)

And as Sen. Ed Markey (D-Mass.) put it, “[a] 10-year moratorium on state AI regulation won’t lead to an AI Golden Age. It will lead to a Dark Age for the environment, our children, and marginalized communities.”

Well aware of the challenges AI poses, state leaders have already been acting. An open letter from the International Association of Privacy Professionals, signed by 62 legislators from 32 states, underscores the importance of state-level AI legislation—especially in the absence of comprehensive federal rules. Since 2022, dozens of states have introduced or passed AI laws. In 2024 alone, 31 states, Puerto Rico, and the Virgin Islands enacted AI-related legislation or resolutions, and at least 27 states passed deepfake laws. These include advisory councils, impact assessments, grant programs, and comprehensive legislation like Colorado’s, which mandates transparency and anti-discrimination protections in high-risk AI systems. The moratorium would also undo literally every bit of state privacy legislation, despite the fact that no federal privacy bill has passed after many years of discussion.

It’s specifically because of this state momentum that Big Tech is trying to shut the states down. According to a recent report in Politico, “As California and other states move to regulate AI, companies like OpenAI, Meta, Google and IBM are all urging Washington to pass national AI rules that would rein in state laws they don’t like. So is Andreessen Horowitz, a Silicon Valley-based venture capitalist firm closely tied to President Donald Trump.” The tech industry and venture capitalists have worked on this federal provision largely behind closed doors. Why? Because it would sideline the states, barring them from enacting any safeguards. With no regulatory pressure, tech companies would have little incentive to prioritize safety, transparency, or ethical design; any costs to society would be borne by society.

But the reality is that self-regulation has repeatedly failed the public, and the absence of oversight would only invite more industry lobbying to maintain weak accountability. 

At a time when voters are demanding protection—and global leaders are sounding the alarm—Congress should not tie the hands of the only actors currently positioned to lead. A decade of deregulation isn’t a path forward. It’s an abdication of responsibility.


Katie Fry Hester is a Democratic member of the Maryland Senate from the 9th district.
Gary Marcus is Emeritus Professor of Psychology and Neural Science at New York University and the author of five books.