
A Framework to Govern AI Innovation

Kevin Frazier
Tuesday, September 23, 2025, 8:00 AM
There’s broad support for evidence-based AI policy. What that means in practice is a hard question. This policy framework is a path forward.


How best to allocate the responsibility for artificial intelligence (AI) governance between the states and the federal government remains an unanswered question. If there’s any agreement among state legislators, members of Congress, AI labs, and civil society organizations, it’s that whatever policy is adopted should be evidence based, technically sound, and public interest oriented. The Joint California Policy Working Group on AI Frontier Models, for example, called for “[e]vidence-based policymaking [that] incorporates ... analysis grounded in technical methods and historical experience, leveraging case comparisons, modeling, simulations, and adversarial testing.” Sen. Ted Cruz (R-Texas) has advocated for a regulatory sandbox that would lower barriers to deploying AI tools while imposing heightened information-sharing requirements regarding risks and benefits. Utah has already implemented a flexible regulatory regime that permits labs to quickly deploy tools subject to specific safeguards. In short, there’s broad recognition that determining how best to govern AI will require experimentation.

The framework below details how best to facilitate that sort of policy experimentation in a manner that aligns with our federal system—which places constraints on both the states and the federal government—and recognizes the stakes of getting AI policy right or, at a minimum, not wrong. Stakeholders on both sides of the political aisle, from Cruz to Gov. Jared Polis of Colorado, have expressed serious concern about the U.S. falling behind China and other adversaries in developing and deploying AI.

Risks of Experimentation

Before diving into that framework, it’s important to further detail the risks of experimenting with AI regulation. Experimentation in the context of general-purpose technologies like AI comes with potential downsides. Adopting misguided policies today may have long-term and irreversible impacts on the direction of AI development. Well-intentioned regulations imposed early in the evolution of an emerging technology can suffer from three major flaws. First, they may entrench the market dominance of existing players that are best able to comply with new laws, delaying or even preventing upstarts from entering the market.

A failure to encourage a competitive ecosystem contributes to the second flaw—forgone innovation. Large players have less of an incentive to pursue disruptive technologies—after all, such technologies may disturb their own business models. When new firms do not have a clear lane to pursue frontier technology, the public misses out on new tools, cures, and fixes. Innovation is a cumulative process—new ideas build on and combine what’s come before. A slower rate of innovation today will necessarily slow progress tomorrow.

The third flaw is that policy “experiments” may not be conducted as experiments at all but, rather, as permanent interventions into a highly sensitive and significant policy space. An experiment involves a clear problem statement, a testable hypothesis, an intervention, data collection, analysis of the results, and, ideally, some form of replication. Laws, like zombies, are hard to kill. Absent sunset clauses that specify an expiration date for certain provisions, laws tend to stay on the books. State AI bills such as SB 53 in California that mandate regular assessment of definitions are a step in the right direction toward keeping antiquated laws from remaining in effect. SB 53, however, is an outlier. Without such checks, so-called experiments are not really experiments at all.

To stretch the analogy slightly further, it’s also important to mitigate the likelihood of poorly run policy experiments leaking out of the proverbial lab. One state’s experiment should not be imposed on residents of other states. That’s good science, and it’s good federalism. Here again, however, certain AI bills at the state level will necessarily alter the AI used by residents of other states. AI training is not a modular process. Labs do not conduct state-specific training runs of their leading models. Each training run costs millions, if not billions, of dollars, draws on data from around the country, and runs on computing infrastructure distributed across the nation. When states alter the training phase of AI development, they risk changing the end product for the entire country. This sort of interference clashes with the principle that states cannot project their legislation into other jurisdictions.

While some regulatory spillover has been tolerated by the Supreme Court, its most recent decision interpreting the dormant Commerce Clause—National Pork Producers Council v. Ross—turned on facts not applicable to AI. That case involved a California law dictating the conditions under which sows must be raised before pork products derived from them may be sold in the Golden State. Given the size of the California market, the law effectively caused pig farmers around the country to change their practices. Yet, in theory, they did not have to alter how they raised all pigs—they could segment the pigs intended for sale in California from those destined for products sold elsewhere. That sort of segmentation is not possible for AI training. The result is that one state’s training requirements will be imposed on the rest of the country—assuming labs opt to comply (as they presumably would with any law passed in California, given the high number of users there). While such an outcome may not necessarily be “bad” for maintaining the ability of the U.S. to lead in AI, the uncertain nature of AI development means that such experimentation needs to be closely observed and carefully crafted. State laws that inhibit AI innovation would have grave national consequences.

A failure by the U.S. to maintain its leadership on AI may imperil its ability to protect Americans from manifold threats. Bad actors can easily access AI tools that have lowered the knowledge barriers to developing bioweapons. Several adversaries, such as China and Russia, have integrated AI into weapons systems, allowing for more rapid and deadly attacks. AI-based cyberattacks launched by the nation’s enemies place our critical infrastructure in a perilous position. Second-rate AI will impede America’s ability to respond to and counter these threats. By way of example, from my conversations with AI experts at leading labs, I’ve learned that the best way to detect deepfakes that Russia may deploy to disrupt U.S. elections is to ensure U.S. labs are leaders in creating and, by extension, identifying deepfakes. Relatedly, AI advances will play a key role in the defense sector’s ability to monitor cyberattacks. Consider that Proof Labs, based in New Mexico, has made steady progress in leveraging AI to determine whether U.S. satellite infrastructure is being hacked. These examples illustrate that continued development and diffusion of AI technology is a national priority.

But not all AI policy issues have the same level of national significance. The application and use of AI by deployers and end users is, in many cases, a matter of local concern. For example, a school district policy governing when teachers may use AI is very much a community affair. That policy pertains only to the school community. What’s more, it does not aim to—nor carry any chance of—altering how AI labs develop and deploy their models. Similar policies in sensitive fields that have long been the domain of state authority, such as health care and law enforcement, would likewise ensure state officials can tailor AI use to local demands without impacting the underlying technology or changing the availability and nature of AI in other states.

State AI laws cabined to governing AI use in a specific geography, then, are ripe for robust experimentation unimpeded by national oversight. These are instances in which each state can trial policies while staying in its policy and jurisdictional lane. Such experimentation is the kind Justice Louis Brandeis envisioned when he described the states as laboratories of democracy, welcoming policy experiments conducted “without risk to the rest of the country.” Realization of this sort of experimentation hinges on states being legislatively disciplined. When it comes to any regulation of a general-purpose technology that pervades society and relies on a range of inputs distributed across the country, it is easy for lazily designed experiments to unnecessarily impact nonresidents.

Recognizing these risks underscores the need for a carefully calibrated framework to guide AI experimentation within our federal system. The challenge is not simply to avoid poorly designed state initiatives, but to channel the inevitable impulse to legislate into structured experiments that generate reliable evidence without distorting national markets or jeopardizing U.S. leadership. In other words, rather than treating state and federal roles as points of conflict, policymakers should view them as complementary tools for disciplined trial and error. It is to that allocation of responsibility—and to the design of mechanisms that make experimentation both safe and informative—that this framework now turns.

A coherent framework for AI governance, which I propose below, must begin, first, with a clear division of labor between the states and the federal government. At its core, the federal government should retain exclusive authority over aspects of AI that implicate national security, interstate commerce, and the baseline integrity of model development and deployment. These domains—where AI models cannot be segmented by geography and where spillover is inevitable—are quintessentially federal in nature. By contrast, the states should exercise authority over the use of AI within their borders, especially in contexts like education, policing, and health care, where local preferences and community values have long shaped policy. This allocation respects the practical realities of AI development while aligning with the constitutional logic of federalism.

Second, experimentation must be disciplined by structure and process. For state-led initiatives to yield useful lessons, they must be designed with the rigor of actual experiments. That means clear objectives, measurable outcomes, transparent data collection, and built-in sunset provisions that force reassessment before any policy becomes entrenched. The federal government can reinforce this structure by conditioning funding or technical assistance on the inclusion of these safeguards. In doing so, Washington would not dictate substantive outcomes but would ensure that state efforts generate evidence capable of informing national policy. This model mirrors the way the Food and Drug Administration often oversees clinical trials: States would remain free to innovate, but only within a disciplined framework that prevents sloppiness from calcifying into law.

Finally, the framework should institutionalize mechanisms for learning and iteration across jurisdictions. A national clearinghouse for AI policy experiments—led by the National Institute of Standards and Technology (NIST)—could serve as a repository of state-level innovations, complete with comparative analyses and recommendations for best practices. Regular convenings between federal agencies, state regulators, industry leaders, and civil society groups would further ensure that insights flow in both directions. This infrastructure would allow promising state approaches to scale upward, cautionary tales to be avoided elsewhere, and federal standards to be continually updated in light of on-the-ground experience. The result would be a governance ecosystem that is adaptive, evidence based, and responsive to both national imperatives and local realities. 

The framework below is a draft effort to achieve those goals. It’s meant to spark creativity and conversation among stakeholders in this ongoing debate. Feedback is encouraged.

 

Uniform National Innovation & Technology Enablement for AI Act (UNITE-AI Act)

Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

Section 1. Short Title

This Act may be cited as the “Uniform National Innovation & Technology Enablement for AI Act of 2025.”

Section 2. Findings and Purpose

(a) Findings: Congress finds the following:

  1. National Significance of AI Innovation: Artificial intelligence (AI) innovation—defined as the development and diffusion of highly capable AI systems—is a matter of national significance affecting interstate commerce, national security, economic growth, and the general welfare of the United States. The transformative impact of advanced AI transcends state boundaries, influencing markets and communities across the nation.
  2. Lessons of History: A patchwork of conflicting state regulations on matters of national importance can fragment markets and undermine progress. The Founders, reacting to the failures of the Articles of Confederation, empowered the federal government to “foster and protect” interstate commerce through uniform national rules, deliberately removing certain powers from the states to avoid the fragmentation and chaos that had resulted from conflicting state laws. Consistent with this constitutional design, matters like postal services and interstate highways have historically been governed federally to ensure a unified national system. AI innovation, which inherently operates across state and even national lines, likewise requires coherent federal oversight.
  3. AI Development and Diffusion: AI innovation comprises both development of AI systems and their diffusion into society. Development includes the research, design, pre-training, fine-tuning, and other processes involved in creating advanced AI models. It often demands significant resources—data, computational power, energy, and talent—and frequent large-scale training runs at the frontier of technological capability. Diffusion refers to the spread and adoption of AI technologies across the economy and society. Effective diffusion requires that AI systems be affordable, reliable, and effective, and that their use earns public trust. Congress recognizes that both robust development and responsible diffusion of AI are essential to national competitiveness and public welfare.
  4. Federal Responsibility: Matters of national significance—including national markets, defense, and general welfare—fall under the governance of the federal government. Congress has the authority under Article I of the Constitution (including the Commerce Clause and other powers) to regulate AI innovation that affects interstate commerce and national security. Exercising this authority ensures that the direction and pace of AI advancement can be managed for the country’s overall benefit.
  5. State Police Powers: State and local governments possess police powers, allowing them to enact laws to protect public health, safety, and welfare within their jurisdictions. These powers are vital for addressing local needs and concerns, and states have begun responding to AI’s local impacts (e.g., addressing AI use in hiring, education, law enforcement, and consumer protection). However, when state regulations extend beyond state lines or substantially burden interstate activities, they risk violating constitutional limits on state power. Uncoordinated state actions on AI could produce a patchwork of laws that impedes innovation, hinders interstate commerce, and potentially undermines national security.
  6. Use vs. Innovation: There is a critical distinction between regulating the use of AI (applications of AI within a state’s traditional domains, such as policing, education, or commerce) and regulating AI innovation itself (the fundamental development and open distribution of AI models). States can often best address AI uses to reflect local values and needs—serving as “laboratories of democracy” for issues like AI in job recruiting, health care, or criminal justice. In contrast, the core development of AI models and their widespread diffusion function as general-purpose technology infrastructure, analogous to electricity or the internet, which benefit from unified national standards. The nature of AI model training and deployment is such that it cannot be feasibly confined within a single state’s borders. A state law that forces changes to how frontier AI models are trained or distributed will inevitably have national (and global) consequences, effectively governing out-of-state actors who have no political voice in that state. Therefore, a federal framework is needed to preempt harmful inconsistencies while still enabling states to exercise their rightful authority over local AI uses.

(b) Purpose: In light of the above findings, the purpose of this Act is to establish a clear federal framework for AI innovation governance. This framework will: (1) assert federal primacy over the regulation of AI development and diffusion (AI innovation) as it affects interstate commerce and national interests; (2) temporarily preempt state laws in this domain while encouraging a unified national strategy; (3) allow states to obtain waivers to implement certain innovative policies regarding AI development, but only under strict criteria that safeguard against extraterritorial impact or undue burdens on other states; (4) affirm states’ continued authority to regulate AI use within their jurisdictions to protect local interests (subject to constitutional limits, such as the Dormant Commerce Clause and other federal laws); and (5) create mechanisms for oversight, judicial review, and sunset evaluation to ensure this framework remains accountable, adaptive, and respectful of both federal and state roles.

Section 3. Definitions

For purposes of this Act:

(a) “Artificial Intelligence System”: The term “artificial intelligence system” or “AI system” means a machine-based system that, for a given set of human-defined objectives, can make predictions, recommendations, or decisions influencing real or virtual environments. This definition encompasses systems that use automated processes (including machine learning, statistical methods, or algorithmic techniques) to perceive environments, abstract those perceptions into analytical models, and generate outputs (such as predictions or decisions) that inform actions. It includes generative AI models and automated decision systems.

(b) “AI Innovation”: The term “AI innovation” means the processes by which artificial intelligence systems are designed, trained, or distributed for general availability. This includes: (1) research, development, and training of AI models; (2) activities necessary to prepare AI models for broad distribution in interstate commerce; and (3) decisions about the release, licensing, or open distribution of AI models intended for multi-jurisdictional use.

AI innovation does not include the tailoring, fine-tuning, or application of AI systems for specific local purposes, unless such activity has the primary effect of determining the features or availability of the system in interstate commerce.

(c) “AI Use”: The term “AI use” means the application of an AI system to a specific activity, service, or function within a state or locality. This includes: (1) deployment of AI systems in government services, education, health care, employment, policing, or other state-regulated sectors; (2) requirements imposed on AI users or deployers within the state to protect consumer rights, civil rights, safety, or other local interests; and (3) procurement standards or internal policies adopted by state or local governments.

AI use excludes requirements that effectively regulate how AI models are trained, built, or distributed for interstate commerce.

(d) “Extraterritorial Application”: A state law or regulation has an extraterritorial application when, by its purpose or practical effect, it governs or conditions unitary processes of AI innovation—including the training, fine-tuning, or diffusion of AI models—that are inherently national in character. Such processes cannot be meaningfully confined within a single state’s jurisdiction, and state regulation of them would necessarily alter the attributes, design, or availability of AI systems for persons outside the state. This includes, but is not limited to: laws that compel changes to AI models or training data in ways that affect distribution beyond state borders; laws that effectively determine the standards, features, or availability of AI systems offered in interstate commerce; and laws whose compliance obligations cannot be limited to in-state activity without impacting the development or delivery of the product nationwide.

A requirement shall not be deemed an extraterritorial application solely because compliance by a developer or deployer may incidentally have a de minimis influence on model design or features, so long as the primary effect of the requirement is limited to in-state use.

Section 4. Federal Preemption of State Laws Regulating AI Innovation

(a) Preemption of State Innovation Regulations: No state or political subdivision thereof may enact or enforce any law, regulation, or requirement governing AI innovation (including the development, training, or diffusion of AI models or systems) if such law or regulation applies to AI systems or developers in a manner that impacts interstate commerce or extends beyond that state’s own boundaries, except as expressly permitted in this Act. In general, state laws addressing the core development of AI models (such as setting technical standards for AI model training, licensing AI developers, restricting the release of AI models, or imposing liabilities or safety testing requirements on advanced AI model development) are hereby preempted, given the inherently national scope of these activities. Congress finds that allowing each state to separately regulate fundamental AI development would impose undue burdens on interstate commerce and impede the United States’ cohesive response to AI opportunities and risks.

(b) Moratorium Period: The preemption in subsection (a) shall take effect upon enactment and remain in force for a period of three (3) years thereafter (the “moratorium period”), unless Congress affirmatively extends or modifies this framework. During the moratorium period, states may not enforce laws regulating AI innovation except pursuant to an approved waiver under Section 5. Any waiver granted under Section 5 shall expire no later than the conclusion of the moratorium period. (Note: Congress intends this period to ensure a stable national approach to AI innovation in the formative years of this technology’s development. It may revisit the need for continued preemption or a different framework once a comprehensive federal regime is in place.)

(c) State Laws on AI Use (Savings Clause): Nothing in this Act shall be construed to preempt or invalidate any law or regulation of a state or its political subdivisions that governs the use of AI systems within the jurisdiction in service of traditional state police powers—including laws to protect consumer rights, prevent unlawful discrimination, ensure public safety, safeguard election integrity and privacy, or advance other legitimate local public interests—provided that such laws do not: (1) extraterritorially regulate persons or conduct outside the state; (2) conflict with federal law or the lawful regulations of other states; or (3) effectively regulate AI innovation itself rather than specific uses. In other words, states retain authority to enforce generally applicable laws (e.g., consumer protection, civil rights, fraud, or campaign transparency laws) as applied to AI-driven activities, and to enact targeted restrictions on AI uses (for instance, banning certain AI uses in policing or mandating disclosure of AI-generated deepfakes in political ads) so long as those measures address genuine in-state problems and do not impose requirements on how AI models are fundamentally built or distributed across states.

(d) Rule of Construction: Subsection (a) preempts state laws specifically aimed at regulating AI model development or deployment entering into interstate commerce. It does not preempt: (1) state procurement decisions or internal policies about AI (e.g., a state setting standards for AI systems purchased for its own agencies’ use), (2) enforcement of state laws of general applicability that incidentally affect AI (such as product liability or data breach laws not uniquely targeting AI), or (3) local regulations on business uses of AI that are narrowly focused on intrastate activity and have no more than incidental effects beyond the state. If a question arises as to whether a state provision is a regulation of innovation (preempted) or a regulation of use (generally preserved), courts shall consider factors such as: whether the primary focus and intent of the state law addresses local concerns; whether the law’s provisions require changes to the design, architecture, or training of AI models themselves (suggesting an innovation regulation) or merely govern the behavior and responsibilities of end users or deployers of AI within the state (suggesting a use regulation); and the extent of any interstate impact of the law relative to its local benefits.

Section 5. State Innovation Waiver Program

(a) Waiver Authority: Notwithstanding Section 4(a) of this Act, a state may apply for, and the Secretary of Commerce (acting through the Director of NIST and in consultation with NIST’s Center for AI Standards and Innovation (CAISI)) may grant, a temporary waiver allowing the state to enforce an AI-related law or regulation that would otherwise be preempted as an AI innovation regulation. The waiver, if approved, permits the specified state law to operate within that state for a limited duration, subject to the conditions in this section. The intent of this waiver program is to allow carefully controlled state-level experimentation in AI governance, in recognition of unique local priorities or insights, while preventing harmful interstate effects and ensuring national interests are not compromised.

(b) Application Requirements: A state seeking a waiver must submit an application to NIST in such form as NIST/CAISI shall require, including at minimum:

  1. Description of Proposed Law: The text of the state law or regulation for which the waiver is sought, and a description of its objectives (e.g., the specific AI-related harm or issue the state aims to address).
  2. Local Need and Public Purpose: Evidence and explanation of the distinct local problem or interest that the law addresses. The state must demonstrate that the regulation responds to legitimate public health, safety, welfare, or security needs of that state’s residents—i.e., an exercise of its police powers for a matter of local concern (such as a demonstrated risk from AI in a particular sector, or strong community standards requiring action).
  3. Putative Local Benefits: An analysis of the expected benefits of the law for the state’s residents or institutions. The state should show that the law is likely to produce substantial intrastate benefits or protections (relative to the status quo) and that these benefits cannot be achieved as effectively by relying on existing federal or state frameworks.
  4. Assessment of Extraterritorial Reach: Clear provisions or mechanisms in the law ensuring it does not unnecessarily regulate conduct beyond the state’s boundaries or unduly burden out-of-state actors. The state must certify that the law has been crafted to minimize extraterritorial application—meaning it will not, in purpose or practical effect, force persons outside the state to comply or interfere with other states’ ability to govern AI within their own jurisdictions. This includes ensuring the law will not require changes to AI models or systems in their development phase that are infeasible to confine within the state (given that AI model training and deployment typically have nationwide effect).
  5. Mitigation of Conflict with Other States’ Laws: An explanation of how the law will avoid or minimize conflicts with regulations or laws of other states. If similar issues are addressed by other states differently, the applicant state should describe how its approach can coexist without legal or practical conflict, or why its approach is justified despite potential divergence.
  6. Consideration of Alternatives: Documentation that the state has considered reasonably available alternatives to achieve the law’s purpose that might pose less risk of regulatory spillover beyond the state’s borders. For example, alternatives might include more targeted “use-specific” regulations, voluntary measures, or collaboration with federal authorities. The application should explain why the chosen approach is preferable and necessary compared to these alternatives.
  7. Public Safety and Welfare Rationale: If the law could potentially affect the pace or scale of AI innovation, the state must provide compelling evidence that the public safety or welfare benefits justify any such impact. In other words, the risk addressed by the law (e.g., preventing specific harms from AI) significantly outweighs any potential cost of slowing or deterring AI development. The state should reference evidence, expert assessments, or experience supporting the efficacy of its regulation in preventing harm.
  8. Constitutionality and Legal Basis: An analysis by the state of the law’s consistency with the U.S. Constitution and federal law. This includes ensuring the law does not violate rights (e.g., free speech, due process) and falls within the state’s authority. (Granting of a waiver by NIST does not itself constitute a determination of the law’s constitutionality, which remains subject to judicial review.)
  9. Sunset Provision: The state law must include a sunset clause providing for its automatic expiration not later than 18 months after its effective date, unless the state legislature affirmatively extends or reenacts it with a new waiver. This 18-month period reflects the rapid development cycle of frontier AI models, ensuring that state interventions are temporary and reevaluated regularly in light of new developments. The application must describe this sunset mechanism and the state’s plan for reviewing the law’s impacts before any renewal.
  10. Retrospective Review and Reporting: A commitment to collect data and metrics needed to assess the law’s actual effects on AI innovation and on the targeted local issue. The state must establish a process (such as forming an expert task force or partnering with academics/industry) to study the outcomes of the regulation during its effective period. This includes measuring any impact on local AI research activity, business climate, or migration of AI firms, as well as effectiveness in mitigating the intended risk. The state shall agree to provide interim and final reports on these findings to NIST/CAISI and make them available to the public. NIST may specify the metrics or questions to be addressed, in order to facilitate comparison across different state experiments.

(c) Criteria for Approval: NIST (through CAISI) shall approve a waiver application only if it finds that the proposed state law or regulation meets all of the following criteria:

  1. Local Benefit vs. National Cost: The law is expected to yield significant benefits for the state’s residents or address a unique local concern, and those benefits are sufficiently compelling to justify any potential costs or disruptions to AI innovation beyond that state. The law should not unduly burden interstate commerce relative to its local benefits.
  2. No Undue Extraterritorial Impact: The law is designed to avoid extraterritorial effects, and any incidental impact on out-of-state activities is minimal, clearly outweighed by local gains, and cannot be avoided through a narrower alternative. The law should comply with the principle that one state should not effectively set policy for the nation.
  3. Innovative and Necessary Approach: The law presents a genuinely innovative or well-reasoned approach to an AI-related challenge that is not adequately addressed by existing federal policies. There must be a sound rationale for why the experiment is necessary (e.g., addressing a novel risk or piloting a new regulatory strategy), and it should not simply duplicate efforts underway elsewhere.
  4. Time-Limited and Evaluative: The inclusion of an 18-month sunset and robust evaluation plan indicates the state’s commitment to learn from the experiment and adjust course as needed. The temporary nature ensures that any missteps can be corrected and that successful ideas can inform national policy in a timely way.
  5. Consistency with National Interests: The proposed law will not undermine national security, impede the United States’ international competitiveness in AI, or conflict with fundamental federal regulatory objectives. NIST shall consult with relevant federal agencies (such as the Department of Defense, the Office of Science and Technology Policy, etc.) for any waiver that might implicate national security or foreign policy concerns, to ensure granting the waiver would not be contrary to the national interest.

(d) Procedure: Upon receiving a complete application, NIST/CAISI shall publish a notice summarizing the request in the Federal Register and on an appropriate website, and provide a public comment period (e.g., 30 days) to gather input from stakeholders (including industry, academia, other states, and the general public). NIST may also convene a public hearing or seek expert advice as needed. A decision on the waiver should be issued within 90 days after the comment period closes, unless extended for good cause. Approval may include specific terms or conditions to ensure the criteria are met—for example, requiring the state to amend certain provisions for narrower scope, or requiring periodic check-ins with NIST. If an application is denied, NIST shall provide the state a brief explanation of the reasons, and the state may revise and resubmit its proposal. NIST’s decisions on waivers shall be published, and approved waivers shall be documented publicly, including the text of the state law and any conditions.

(e) Duration and Renewal: Nothing in this Act shall be construed to permit a state law authorized under a waiver to continue in effect beyond the moratorium period established in Section 4(b). Upon the expiration of that period, all waivers and associated state laws shall terminate automatically, regardless of any state provision to the contrary. Courts shall construe this Act to require that the federal moratorium period serves as an outer boundary for all state activity under waiver authority, ensuring that any continuation of state innovation regulation occurs only if Congress affirmatively provides for it.

(f) Emergency Suspension: If at any time the Secretary of Commerce, in consultation with NIST/CAISI, determines that a state law operating under a waiver is causing unforeseen and serious harm to the national interest (e.g., a significant impediment to interstate commerce or a national security risk) that outweighs its local benefits, the Secretary may suspend or terminate the waiver after providing notice to the state and an opportunity for the state to respond or cure the issue. Similarly, if the state is found to be noncompliant with the terms of the waiver (e.g., failing to conduct the required evaluations or deviating from the approved provisions), the waiver may be revoked. In such cases, the state law shall immediately become preempted under Section 4 unless and until the waiver is reinstated or the law is brought into compliance.

(g) Guidance and Assistance: NIST, through CAISI, shall issue guidelines to assist states in preparing waiver applications, including examples of acceptable approaches to avoid extraterritoriality and methods for measuring impacts on innovation. NIST/CAISI may also facilitate information-sharing among states and between states and federal agencies to highlight best practices and lessons learned from any state experiments conducted under this section.

Section 6. Judicial Review and Private Right of Action

(a) Cause of Action for Affected Parties: Any person or entity residing or primarily based outside of a state that enacts or enforces a law purportedly subject to a waiver or exemption under this Act (or any person/entity otherwise subject to or harmed by a state law that they allege is preempted by Section 4) shall have a cause of action to challenge that state law in federal court. This right of action reflects the fact that individuals and companies in other states may have no political recourse against another state’s law yet could be significantly affected by it.

  1. Scope: The plaintiff may seek declaratory and injunctive relief on grounds including: (i) that the state law is preempted by this Act (because it regulates AI innovation without a valid waiver, or exceeds the scope of any granted waiver); (ii) that the state law violates the extraterritoriality principles recognized by the Constitution (as informed by this Act’s standards), by effectively regulating conduct wholly outside the state; or (iii) that the state law otherwise conflicts with federal authority or another state’s law in a manner prohibited by this Act.
  2. Standing: A plaintiff must show that it is directly or imminently affected by the state law. For example, a company headquartered in another state whose AI model development operations would need to change in order to comply with the defendant state’s law, or an individual from another state who would be subject to obligations or penalties under the law when engaging in commerce with that state, would have standing. The intent is to broadly confer standing on nonresidents who, absent this provision, might lack a clear path to judicial review of an out-of-state regulation impacting them.

(b) Federal Jurisdiction: Actions under this section may be brought in the United States district court for any of the following venues: (1) the district in which the defendant state (or subdivision) is located, (2) the district where the plaintiff resides or has its principal place of business, or (3) the District of Columbia. The federal courts shall have original jurisdiction over these claims, as they arise under federal law (this Act and constitutional principles). State officials responsible for implementing the challenged law (such as the state attorney general or relevant agency heads) may be named as defendants in their official capacity.

(c) Expedited Review: Given the time-sensitive nature of AI innovation and the need to promptly resolve uncertainties, courts are encouraged to expedite proceedings for cases brought under this Act. A district court, upon motion, may give the case priority on its docket. In the event an injunction is sought to prevent enforcement of a state law, courts should weigh the potential nationwide impact on innovation and constitutional structure as part of the equitable analysis.

(d) Relief and Remedies: If the court finds the state law is preempted or otherwise in violation of this Act, it shall enjoin the state from enforcing that law (to the extent of the conflict). The court may declare the rights and obligations of the parties under this Act and the Constitution. In a case where a waiver was granted by NIST, the court may also review whether the state law stays within the scope of the waiver’s terms; if the state law is being applied beyond what the waiver allowed, the excess can be enjoined. However, nothing in this Act shall be construed to authorize an award of monetary damages against a state or state officials (aside from potential award of attorneys’ fees as provided below). The focus is on equitable relief to prevent ongoing harm.

(e) Attorneys’ Fees: In order to encourage valid challenges and deter improper state actions, a prevailing plaintiff in an action under this section shall be entitled to an award of reasonable attorneys’ fees and costs, pursuant to 42 U.S.C. §1988 (or a similar provision), as if enforcing an important federal right. Conversely, if a court finds that an action was frivolous or brought in bad faith, it may award fees to the state defendant.

(f) Savings for Constitutional Challenges: The availability of this statutory cause of action does not preclude any party from raising constitutional challenges to state AI laws through other legal avenues (e.g., under the Dormant Commerce Clause or First Amendment) in state or federal court. This Act provides an additional, explicit mechanism for prompt federal review, but does not displace traditional constitutional litigation routes.

Section 7. Oversight, Coordination, and Reporting

(a) Federal Oversight (NIST/CAISI): The National Institute of Standards and Technology (NIST), through the Center for AI Standards and Innovation (CAISI), shall be the lead federal entity overseeing the implementation of this Act’s provisions. In this role, NIST/CAISI shall:

  1. Waiver Administration: Receive and review state waiver applications, make approval decisions (in consultation with the Secretary of Commerce and any other relevant agencies as needed), and monitor compliance with waiver conditions.
  2. Guidance: Develop and issue guidance to states on the waiver process and on best practices for crafting AI regulations that minimize interstate burdens. This includes publishing examples of how states can address concerns like deepfakes, algorithmic bias, or AI in critical sectors without triggering extraterritorial effects or conflicts, thereby helping states legislate within the framework’s boundaries.
  3. Technical Assistance: Offer technical assistance to states formulating AI policies—for example, providing expertise on AI risk management (drawing on NIST’s AI Risk Management Framework) and suggesting standards or testing protocols that states might reference instead of creating their own. NIST can help states evaluate alternatives to heavy-handed regulation, such as regulatory sandboxes or voluntary certification, which could achieve local aims with fewer spillover impacts.
  4. Data Collection: Work with states to gather data on the impact of both federal and state AI governance measures. NIST/CAISI should track metrics such as: the number and nature of state laws preempted, the number of waivers requested and granted, the effects of state experiments on AI research or business formation, and any measurable differences in AI-related outcomes (safety incidents, public sentiment, etc.) in states with waivers versus those without.
  5. Interagency Coordination: Coordinate with other federal agencies with expertise or equities in AI (such as the Department of Defense, Department of Energy, the Federal Trade Commission, the Food and Drug Administration, etc.) to ensure that federal AI policies remain consistent. CAISI will also serve as a liaison to the White House Office of Science and Technology Policy (OSTP) and any national AI task forces or advisory committees, feeding in lessons learned from state-level innovation and ensuring federal strategy accounts for on-the-ground developments.

NIST/CAISI’s role under this Act is administrative and advisory. Nothing in this Act shall be construed to confer upon NIST/CAISI independent regulatory authority over AI innovation beyond the administration of waivers and coordination responsibilities expressly set forth herein.

(b) Reporting to Congress: Within one year of enactment, and every year thereafter for the duration of the moratorium period, the Secretary of Commerce shall submit a report to Congress (and publish it publicly) detailing the implementation of this Act. The report shall include:

  1. A summary of any state laws that have been enacted or identified as preempted under Section 4 (including brief descriptions of those laws).
  2. A list of waiver applications received and, for each, whether it was approved or denied (with a summary of the rationale). For approved waivers, describe the key conditions imposed and any preliminary results known.
  3. An assessment of how the framework is affecting AI innovation in the U.S., including any observable reduction in regulatory fragmentation (e.g., avoidance of conflicting rules) and any signs that the remaining state activities (regarding AI uses) are fostering innovation or not.
  4. Any legal challenges filed under Section 6 and their outcomes, to the extent known, and how those outcomes inform the balance of state-federal power in AI governance.
  5. Recommendations, if any, for adjustments to the framework. For example, NIST/CAISI may recommend that Congress consider ending the moratorium early if a comprehensive federal law is enacted or, conversely, extending it if needed. It may also suggest new areas where federal standards should be developed (perhaps prompted by patterns in state waiver requests) or where additional guidance to states is needed.

(c) Sunset Review Commission: To complement the ongoing reports, the Act establishes an AI Innovation Federalism Commission (composed of experts appointed by congressional leaders, including representatives from state governments, the AI industry, academia, and civil society) to convene at least six months before the expiration of the three-year moratorium. The Commission shall evaluate the effectiveness of this Act in achieving its goals (national cohesion, preserved innovation, appropriate local experimentation) and shall issue a comprehensive report with findings and recommendations on whether to continue, modify, or end the preemption and waiver structure. This ensures that Congress has a basis for deciding the long-term governance approach once the initial period concludes.

(d) NIST/CAISI Funding Authorization: There are authorized to be appropriated to NIST such sums as may be necessary to carry out the responsibilities under this Act, including appropriately staffing CAISI to handle waiver reviews and monitoring. The Act also authorizes NIST to provide grants or cooperative agreements to assist states in conducting the research and analysis required for waiver applications or retrospective reviews, recognizing that resource constraints should not prevent a state with a worthwhile experiment from participating.

Section 8. 18-Month Sunset on State AI Laws; Reassessment

(a) State Law Sunset Requirement: As a condition of any waiver under Section 5, the state law in question must expire within 18 months of its commencement (unless renewed or reenacted with a new waiver). Congress encourages all states, even when regulating permissible AI uses, to include similar sunset or review provisions in their AI-related statutes or regulations. This will foster a habit of continuous reassessment in this fast-changing field, ensuring laws do not outlive their usefulness or inadvertently stifle beneficial innovation as technology evolves. An 18-month cycle aligns with the approximate development timeline for major AI model advancements, prompting regulators to update their approaches in light of new capabilities or understandings.

(b) Retrospective Analyses: Any state that enacts significant AI regulations (whether subject to a waiver or not) should undertake retrospective analysis of the law’s impacts before renewal. The federal government (through NIST/CAISI) will support these efforts by providing methodological guidance and, where possible, data (e.g., NIST may share relevant national trend data or technical findings). States are expected to look at factors such as: economic impact on AI companies and startups in the state, any changes in availability of AI services or tools to the public, measurable improvements in the targeted issue (e.g., reduction in AI-related harms), and any unintended negative consequences observed.

(c) Termination of Federal Preemption: The provisions of Section 4 (preempting state AI innovation laws) shall expire three (3) years after enactment unless extended or replaced by Congress. Upon such expiration: (1) all waivers approved under Section 5 shall immediately terminate, and no state law previously preempted shall revive except as affirmatively permitted by subsequent federal or state legislation consistent with constitutional limits; and (2) the regulation of AI innovation shall thereafter be determined by whatever federal or state laws are then in force, subject to the Supremacy Clause, the Dormant Commerce Clause, and other applicable constitutional constraints.
During any such interim period, the regulation of AI innovation shall be governed by existing federal statutes of general applicability, and by state laws consistent with constitutional limits, until Congress enacts a superseding framework.

(d) Rule of Construction: The sunset or expiration of any provision of this Act shall not by itself validate any previously preempted state law; unless Congress affirmatively permits such laws to take effect, the preemption in place during the moratorium period would simply cease to have force, leaving questions of AI regulation thereafter to be determined by whatever federal or state laws are then applicable (subject to general constitutional constraints).


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
