Regulatory Misalignment and the RAISE Act
While the act attempts to address AI harms, its regulatory misalignment makes it a cautionary tale that counsels a more centralized approach.

As artificial intelligence (AI) becomes more powerful, there is an ongoing debate about whether the federal government or states should lead in regulating this rapidly advancing tool. One of the commonly proposed frameworks—tort liability based on reasonableness standards—often struggles to adequately address harms caused by AI. Under such a framework, plaintiffs may have difficulties proving the foreseeability of the alleged harms, establishing causation, defining a clear “standard of care” amid rapidly evolving technology, and demonstrating a breach of that standard. Statutory interventions like New York’s proposed Responsible AI Safety and Education (RAISE) Act, which imposes a tort-based liability scheme, attempt to fill these perceived gaps but fall short of that tall order.
Crafting rules that are both flexible enough to accommodate rapid technological evolution and robust enough to safeguard against significant risks is a delicate balancing act. Ill-conceived regimes risk stifling innovation, creating an unlevel playing field for competitors, or failing to prevent the very harms they aim to address. The RAISE Act serves as a case study of such a flawed approach. Asking whether the act fulfills the ideal aims of AI regulation—incentivizing innovation, fostering responsible development, providing redress, and ensuring predictability for all stakeholders—returns a clear answer: no. This conclusion should inform AI governance efforts in states considering similar models and provide insight into the present debate over allowing Congress to lead in shaping the AI policy landscape.
Background on the RAISE Act
The RAISE Act empowers the New York attorney general to enforce ex ante reasonableness standards on frontier AI models, with the primary goal of preventing critical harms. These harms are defined as the death or serious injury of 100 or more people, or at least $1 billion in damages, resulting from the creation or use of a chemical, biological, radiological, or nuclear weapon, or from an AI model engaging in autonomous conduct that would constitute a criminal offense if carried out by a human. The attorney general may impose a civil penalty of up to $10 million for a first violation and up to $30 million for each subsequent violation. While these objectives are laudable, analysis reveals that the act’s fundamental weakness lies in its regulatory misalignment—the mechanisms and standards it employs are poorly suited to achieve its stated aims, while creating significant practical complications for enforcement and compliance.
The choice of this regulatory approach warrants particularly close attention given that similar proposals have gained traction in other states, such as Massachusetts. As detailed below, excessive uncertainty undermines the utility of this regulatory approach as applied by state actors at this stage of AI progress. The imposition of vague standards enforced by government actors with limited technical wherewithal and regulatory capacity—combined with the possibility of 50 states’ worth of conflicting and even contradictory expectations—does not line up with the motivations behind the RAISE Act and related legislation. An onerous and poorly crafted regulatory environment will only increase the odds of bad actors shifting their operations to other jurisdictions, relying on models developed outside the United States, or both.
Regulatory Misalignment Under the RAISE Act
Regulatory alignment occurs when the selected approach fits the unique characteristics of the regulated subject, sets forth predictable rules, identifies a robust and uniform enforcement regime, and—most important—achieves its intended objective. A brief survey of the relevant parts of the RAISE Act shows that it falls short on each of these counts, rendering it a poor model for other states to adopt and follow.
With respect to that key alignment factor—whether the legislation will achieve its foundational objectives—several aspects of the act invite doubt. For example, it is far from clear that the source and scale of the penalty will have the intended deterrent effect on labs that might flout the RAISE Act’s specifications. More specifically, if a $10 million fine is a small fraction of a large lab’s overall revenue, then that lab will likely treat violations of the act as a cost of doing business. For comparatively smaller labs, however, such penalties could amount to an existential event. Labs that fall just short of the act’s coverage threshold may likewise face competitive barriers as a result of having to incur compliance costs in preparation for eventually falling under the act’s ambit. Consequently, the act may bolster the financial and competitive prospects of the labs with the greatest odds of deploying ever-more powerful models by decreasing the number of competitors.
There’s also the matter of enforcement capacity and capability. The attorney general’s office must have enough staff with sufficient technical expertise to enforce the law evenhandedly. Offices of state attorneys general, however, generally face shortages of resources and personnel. A short-staffed attorney general’s office may understandably prioritize enforcement actions against the smallest labs, which have less sophisticated regulatory compliance regimes. Rather than risk the scarce financial, operational, and political resources of the attorney general’s office on less certain claims against larger labs—which often have top lawyers and a bevy of product counsel to increase the odds of compliance—the attorney general may go after smaller players. In such a world, the dominant labs would again come out in a better competitive position.
A quick glance at other efforts to implement novel regulatory frameworks for emerging technology indicates that the impact of limited enforcement resources on regulatory alignment is commonly overlooked. The EU, for example, is struggling with this very question. Despite a significant runway leading up to enforcement of the EU AI Act, European Parliament digital policy adviser Kai Zenner observed that member states “facing serious budget crises will [likely not] choose to invest in AI regulation over public services.” A similar story played out in California with respect to its comprehensive privacy law. In that case, the California Privacy Protection Agency missed critical deadlines for rulemaking and ultimately delayed enforcement of some provisions. Colorado, too, has struggled with whether and how to move forward with its comprehensive AI legislation amid concerns that enforcement may have unintended consequences for the nascent AI ecosystem.
The capacity of judges to adjudicate the disputes brought by the attorney general’s office raises a number of concerns with respect to predictable rules. As I have thoroughly documented elsewhere and can speak to from personal experience as a former clerk on the Montana Supreme Court, state court judges often lack access to substantive ongoing education on highly technical subject matters, such as AI. This is especially problematic given the vague standards on which these cases will turn as well as the precedential nature of our legal system. The first few cases interpreting this law may rest on flawed understandings of the law, the technology, or both. Nevertheless, subsequent courts may feel compelled to closely follow those cases. This possibility, analyzed in depth by Alicia Solow-Niederman, demonstrates another area of misalignment. Well-intentioned judges who lack the time and resources necessary to master the complexities of AI development may lock in problematic interpretations of the law.
Regulatory ambiguity would raise fewer concerns if the act clearly defined the standard to which labs will be held. However, the conditions under which the attorney general may bring civil actions lack precision; the attorney general can pursue any lab for violating the following provisions (among others). First, prior to deploying a frontier AI model, the lab must “implement a written safety and security protocol,” which the act defines as “documented technical and organizational protocols that specify reasonable protections and procedures that, if successfully implemented, would appropriately reduce the risk of critical harm.” Here the previously flagged concerns about institutional capacity to enforce the act are amplified. A short-staffed attorney general’s office, a judiciary with minimal AI training, or both will struggle to interpret the contours of “reasonable protections and procedures” that will “appropriately” diminish the odds of critical harm. Academics, researchers, and the labs themselves have spent significant time attempting to design protections and procedures with those ends in mind. Among myriad proposals, no consensus has emerged. The RAISE Act effectively gives the judiciary the first go at entrenching such protections and procedures in law—a suboptimal result.
Similar regulatory misalignment issues arise from the act’s pre-deployment prohibition on deploying “a frontier model if doing so would create an unreasonable risk of critical harm.” A sea of gray area surrounds this provision. How can a lab establish ex ante whether its model—which may contain emergent properties and interact in unpredictable ways with other AI models—creates such an unreasonable risk? Presumably labs will attempt to comply with this provision by creating ever-longer reports on their pre-deployment testing—an extended paper trail that may amount to “accountability theatre.” Large labs may have minimal difficulty producing such documentation, but empirical evidence of regulatory compliance costs suggests that medium-sized firms will face disproportionate costs in complying with requirements of indeterminate efficacy.
Finally, it’s not clear how well some of the act’s provisions reflect the current state of the AI ecosystem. The act demands that labs hire an independent third party to complete an annual audit of their protocols to ensure compliance with the act. Tellingly, the act does not grapple with the current shortage of independent third parties with the competencies required to perform such audits. The shortage of AI experts across the economy suggests that such auditors may not be as readily available as the act’s sponsors assume. What’s more, as case law interpreting the act piles up, these audits will become more onerous, more costly, and more bespoke. This point will become especially important if more states adopt similar legislation with audit requirements. While one may presume that a single auditor could come in and certify compliance with one, two, five, or 10 different versions of the RAISE Act, this presumption runs aground after considering that courts may vary in their interpretations of each version of the act. In that world, it’s easy to imagine an industry of state-specific auditors serially appearing at labs’ doors to assess their protocols. This is disruptive and likely counterproductive. As made clear by allegations of “greenwashing” in the environmental sector, reliance on auditors is by no means a proven method for achieving a broad regulatory goal, such as decreased risk of critical harm.
***
A review of the RAISE Act illuminates not only its own internal misalignments but also the broader challenges inherent in relying on traditional liability regimes for comprehensive AI governance. The act’s shortcomings serve as a critical lens through which to analyze the suitability of such frameworks for a technology as dynamic and complex as AI.
The act’s state-centric, pre-deployment model highlights a core tension: AI’s diffuse, rapidly evolving nature often outpaces the capacity and jurisdictional reach of existing legal actors and fixed regulatory checkpoints. Traditional tort liability frameworks, whether applied retrospectively after harm occurs or preemptively through statutory requirements, face significant limitations in effectively guiding frontier AI development. These limitations become particularly acute when regulatory authorities possess insufficient technical expertise to evaluate complex AI systems. Furthermore, imposing regulatory controls at pre-deployment stages presents unique challenges, as these interventions often precede the point where critical risks become clearly identifiable or manageable through broad-spectrum standards. This timing mismatch undermines the ability of conventional liability approaches to meaningfully shape AI development trajectories toward safer outcomes.
The crux of the challenge of governing emerging technologies often lies in the selected regulatory approach, as liability regimes traditionally depend on defining a standard of care—a task profoundly complicated by AI’s emergent properties and the nascent understanding of its long-term societal impacts. The RAISE Act’s reliance on “reasonableness” standards for an imprecise goal of preventing “critical harms” underscores this difficulty. Such ambiguity risks creating a liability lottery rather than a predictable system for accountability or effective risk mitigation. If the justification for regulation is to proactively shape AI development toward safety and public benefit, then traditional liability—often reactive and focused on assigning blame after harm—proves an incomplete tool. It may struggle to foster the continuous, adaptive governance and collaborative safety culture that advanced AI necessitates, pointing toward a need for liability to be one component within a more diverse, agile, and technically informed governance ecosystem. Such an ecosystem would necessarily integrate liability with proactive measures, including the development of robust, consensus-based technical standards and auditing practices, dedicated support for cutting-edge safety research and testing protocols, and potentially adaptive regulatory frameworks that can evolve alongside the technology.
On the whole, the need for a predictable, uniform, and effective AI regulatory ecosystem points toward a centralized approach to AI governance operating at the federal level—a concept that Adam Thierer and I have previously outlined for Lawfare, which is gaining momentum on the Hill and attracting wider popular coverage. Though the RAISE Act and related legislation rightly aim to diminish the odds of catastrophic outcomes caused by AI, a practical analysis of such legislation reveals severe regulatory misalignment—which risks doing more harm than good.