For AI Safety Regulation, a Bird in the Hand Is Worth Many in the Bush

A puzzling thing keeps happening in the artificial intelligence (AI) regulation space. Writers whom I like and admire argue publicly that near-future AI systems could cause large-scale disasters—killing hundreds, thousands, or millions of people. They agree that, even if one is skeptical of AI regulation in general (which I am), targeted regulations addressing catastrophic AI risks are needed. But then, when an actual piece of legislation targeted at catastrophic AI risk is on the table, they oppose it.
The most recent example of this phenomenon is Kevin Frazier’s Lawfare piece opposing New York’s RAISE Act, which has now passed the state legislature and is awaiting a gubernatorial signature. Other notable earlier entries in the genre include, for example, Dean Ball’s various commentaries on California’s SB-1047 and Ball’s 2024 Lawfare article with Alan Rozenshtein calling for federal preemption of all state-level AI regulation.
I single out these three authors because I think they are among the most incisive and intellectually honest thinkers about large-scale AI risk regulation. Nonetheless, I think that they—and the politicians, lobbyists, and AI researchers who broadly agree with them—are making a kind of systematic error.
It is not that the critiques they level at proposed AI regulations are wrong, exactly. It is that those critiques fail to support the conclusion. No law is perfect, but laws like RAISE and SB-1047 are far better than the average proposed AI bill. In fact, they are better along the specific dimensions that Frazier, Rozenshtein, and Ball identify for concern. If such laws fail, the result is likely to be not better law but, rather, worse law or no law at all. Both are very bad outcomes when the laws in question are designed to prevent mass-casualty events.
Consider Frazier’s piece arguing against the RAISE Act. The New York bill has two main provisions: First, it requires large AI developers—labs spending at least $100 million on training—to write and publish a safety and security protocol for their most powerful models. Second, the same developers must implement “appropriate” safeguards to prevent “unreasonable” risks of their frontier models causing disasters that kill over 100 people or do over $1 billion in damage. If they fail to do either of these, the New York attorney general can sue the developers for between $10 million and $30 million.
Frazier has four main objections: (a) The penalties might be both too small (as to large companies) and too large (as to small ones); (b) the New York attorney general will be resource constrained and will allocate her limited enforcement capability badly; (c) certain terms in the law—such as “appropriate,” “unreasonable,” or “safety protocol”—are somewhat ambiguous; and (d) there may not be enough independent auditors to evaluate compliance, as the law requires.
Ball and Rozenshtein’s objections last year to California’s SB-1047 were mostly similar to Frazier’s qualms with the RAISE Act: The requirements were not sufficiently spelled out, state governments were too poorly resourced to enforce such a technically complex law, compliance might burden smaller open-source AI developers more than big closed-source labs, and so on.
The problem isn’t that these critiques are wrong, exactly. It’s that they tip easily into fully general objections—ones that would apply to literally any law. All laws have some capacious and, thus, vague terms. There are well-known trade-offs between legal “rules” and “standards.” More capacious standards, in fact, have some advantages in contexts, like AI, where optimal precautionary measures are not yet known. Likewise, all regulators face resource constraints, and they all may use their prosecutorial discretion well or badly. Any given fine, damages award, or compliance burden will harm smaller startups more than large incumbents. And so on.
There are, of course, better and worse laws along these dimensions. And better law is what Frazier, Rozenshtein, and Ball say they are demanding.
To that end, all three support a ban on state-level AI laws, in the expectation of a uniform federal regime. For Ball and Rozenshtein, the objection to state-level rules that affect nationwide industries is even stronger than Frazier’s. In addition to the uniformity problem, they worry that “a particular state’s viewpoints on the appropriate risk trade-off for AI may not be representative of the country as a whole.”
These heuristics—favoring regulatory uniformity and federal-level risk analysis—probably increase the quality of law on average. But like all heuristics, they should be abandoned whenever they cease to serve their purpose. And for AI safety regulation, as it stands circa June 2025, insisting on federal law, to the exclusion of the states, is a mistake. The approach will produce a worse AI regulatory regime, not a better one.
There are three important blind spots here. First, both SB-1047 and now RAISE are actually pretty good laws along the specific dimensions of risk that Frazier, Rozenshtein, and Ball identify. For example, to mitigate concerns about harming small startups, both laws were drafted to apply only to companies spending over $100 million on AI training runs. Very few tech startups have nine-figure compute budgets, and the ones that do can probably afford to write down and follow a safety plan. Similarly, early drafts of SB-1047 were critiqued for being too vague in defining when an AI system meaningfully caused a large-scale disaster. In response, the bill’s sponsors updated its language, excluding, for example, cases where the AI system merely produced information otherwise available online.
Regarding the worry that state-level lawmaking will overrate risk to the detriment of innovation, both SB-1047 and RAISE were drafted to be very light-touch and flexible. Neither imposes any particular safety intervention on AI companies. The laws simply require AI companies to make and publish some concrete plan and to act reasonably with regard to the risk of large-scale disasters. These are disasters that essentially every AI company agrees their technology could cause. And under SB-1047 and RAISE, it is the AI companies, not regulators, that get to decide which of the thousands of possible mitigations can be implemented with the least harm to innovation.
On procedure, too, these laws tread lightly to avoid stifling innovation. They forbid suits by private parties, curtailing the threat of vexatious NIMBY litigation. And they cap penalties at a level unlikely to be ruinous for covered firms.
These are not perfect solutions to the identified problems. That is in part because there are no perfect solutions. Each of these design choices involves trade-offs, including along the exact dimensions critics identify. Possibly, some law could be written that balances these trade-offs even better than SB-1047 did or RAISE does now.
But it only makes sense to reject these laws and demand federal action if you both can say what a better-drawn law would look like and have reason to believe it will be on offer from Congress. I don’t see much of either argument from the objectors.
Indeed, I think that Frazier, Rozenshtein, and Ball would agree with me that the average piece of proposed AI legislation is horrible—technologically illiterate, infeasible, and written with essentially no thought for competition. Shouldn’t we then expect a random draw from the congressional hat to be worse than what is on offer now?
The second blind spot is about federal preemption as a policy tool in this arena. Frazier, Rozenshtein, and Ball do not merely prefer federal AI safety laws over state laws. They support a ban on state-level action like the one currently making its way through Congress as part of the “Big Beautiful Bill.”
Even if it were true that an eventual patchwork of disuniform state laws, or some especially risk-averse piece of state legislation, would harm the AI industry, there would still be little argument for such a ban today.
Today, there are no state-level laws targeting catastrophic AI risk. So the enactment of one such law—like RAISE—would not in fact generate disuniformity. The second, third, or fourth might. But not the first.
Nor would the proliferation of state AI safety laws necessarily produce a tangled web of regulation. If one state, like New York, enacts one well-drafted law targeting AI catastrophe, other states wishing to protect their citizens might simply follow along. They could much more easily copy RAISE’s provisions than craft bespoke, incompatible ones.
Likewise for the stifling of innovation. It is a possible risk, but not a present one. And not one that RAISE, specifically, does much to raise. Other state AI safety laws might eventually be a problem. But they might not. Indeed, the substantive similarities between RAISE and SB-1047 suggest little legislative appetite for draconian rulemaking.
So, while the goal of enacting uniform, well-balanced AI safety regulation is a good one, opposing RAISE and supporting a federal ban on state AI safety rules does little to serve it. Today, there are no AI safety regulations, at either the state or the federal level. RAISE is not an unbalanced, innovation-killing rule. Opposing this law does not produce better, more uniform regulation. It simply maintains the status quo of no regulation at all.
Crucially, if the status quo begins to shift, and disuniform, badly written state laws begin to proliferate, then federal preemption will still be available to quash those rules. But today, with exactly zero AI safety laws enacted and one light-touch state proposal realistically on offer, the preemption of safety laws is a solution to a problem we do not have.
This leads to Frazier, Rozenshtein, and Ball’s third blind spot: There isn’t going to be any federal action on AI safety anytime soon.
The Trump administration is explicitly and stridently opposed not only to AI regulation generally but also to AI safety regulation specifically. One of Trump’s first actions on taking office was to rescind the Biden administration’s executive order on AI. That order was quite light-touch. It contained no substantive safety mandates; it merely directed agencies to partner with AI labs in studying potential approaches to averting AI catastrophe. Similarly, in a February speech at the AI Action Summit, Vice President Vance announced that the “AI future is not going to be won by hand-wringing about safety.”
Thus, while Congress has now introduced the federal ban on state AI laws that Frazier, Rozenshtein, and Ball favor, it seems very unlikely to enact any AI safety regulations alongside it. The state-level ban, as currently written, would last for 10 years. Trump will be president for four. Vance could be president for eight after that.
In the meantime, assuming the federal ban passes, there will be exactly zero American laws on the books designed to mitigate the risk that advanced AI systems cause large-scale mayhem and death.
Four years is an eternity in the world of AI—to say nothing of eight, 10, or 12. Four years is roughly how long it took large language models to go from useless curiosities to elite computer programmers. Many people close to the AI industry think that it will be substantially less than four years until AIs are smarter than most human knowledge workers. And even skeptics agree that, in a decade, the world may be utterly transformed.
So let’s not push off AI safety legislation to some unknown future date. And let’s especially not do it for the sake of incremental clarity in statutory language, modest expectations of higher legal uniformity, small-bore changes to the balance of compliance costs between bigger and smaller firms, or uncertain forecasts about the comparative use of prosecutorial discretion.
Let’s instead, as the kids say, “take the W.” If you care about catastrophic risk from AI, the RAISE Act is a pretty good bill. It’s a pretty good bill even if you think—as I do—that many AI regulations are both badly written and misguided at their core. Would RAISE be better if it were a federal law, enforced by a well-resourced federal agency? Possibly. Is there some hypothetical spelling-out of “unreasonable” that would both reduce compliance costs and prevent evasion? Maybe.
But such a bill is not on offer. There is little reason to think it will be offered anytime soon. Given the speed at which catastrophically risky AI systems may emerge, the expected-value-maximizing strategy here is to be thankful for the bird in the hand. It is not to wait and see what might someday emerge from the bush.