Lawfare Daily: David Rubenstein, Dean Ball, and Alan Rozenshtein on AI Federalism

Dean W. Ball, Kevin Frazier, Alan Z. Rozenshtein, David S. Rubenstein
Friday, July 5, 2024, 10:19 AM
What's in AI bill SB 1047 pending before the California State Assembly?

Published by The Lawfare Institute

Alan Rozenshtein, Associate Professor of Law at the University of Minnesota Law School and a Senior Editor at Lawfare; David Rubenstein, James R. Ahrens Chair in Constitutional Law and Director of the Robert J. Dole Center for Law and Government at Washburn University School of Law; and Dean Ball, Research Fellow at George Mason University's Mercatus Center, join Kevin Frazier, a Tarbell Fellow at Lawfare, to discuss a novel and wide-reaching AI bill, SB 1047, pending before the California State Assembly and AI regulation more generally.


Please note that the transcript was auto-generated and may contain errors.



Dean W. Ball: Well, in some sense, you could call SB 1047 an effort to legislate on behalf of all Americans. In fact, I would agree with that; I think it is. But it's not obvious that the model-based framework that Senator Wiener is proposing is the way that we should go.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, a Tarbell Fellow at Lawfare, with Alan Rozenshtein, Associate Professor of Law at the University of Minnesota Law School, and a Senior Editor at Lawfare, David Rubenstein, the director of the Robert J. Dole Center for Law and Government at Washburn University School of Law, and Dean Ball, research fellow at George Mason University's Mercatus Center.

David S. Rubenstein: The more that people learn, the closer we get to the details of any kind of laws, this is going to only become more difficult, not easier, to pass AI laws as more people become involved in the conversation.

Kevin Frazier: Today we're talking about efforts to regulate AI at the state and federal level, with a specific focus on SB 1047, currently pending in California. Let's start with the basics. The AI regulatory landscape seemingly gets more muddled each day. The EU took the regulatory lead by passing the EU AI Act, and now states such as Colorado are already passing their own AI regulations.

California is actively considering legislation that some regard as particularly restrictive, and the bipartisan AI working group, led by Majority Leader Chuck Schumer, recently released a policy roadmap that may spur regulation at the federal level. Dean, you and Alan recently penned a piece in Lawfare exploring some of these regulatory efforts.

Can you tell us more about the current status of SB 1047, also known as the Wiener Bill in California, and at a high level what some of its key provisions are?

Dean W. Ball: Yeah, absolutely. And thank you for having me on the podcast. So SB 1047 is a bill that San Francisco state Senator Scott Wiener introduced earlier this year.

And the goal of it is to regulate catastrophic risks from frontier, so-called frontier AI models, the most advanced, largest, most capable models, things like GPT-4o from OpenAI or Claude 3 (they actually just had a new one, Claude 3.5) from Anthropic, models like that. And it accomplishes that in a few different ways.

One way is through what they call the Frontier Model Division, which is a new regulator created at the state level that would have jurisdiction over these models and would receive mandatory safety certifications that the companies that make these models would need to submit every year.

And the second way is by creating a liability regime that would apply if somebody uses a model to cause a hazardous capability, which the bill defines as basically above $500 million in damages to critical infrastructure through a cyber attack, bio attack, really anything like that. It would make developers liable if somebody else used a model to accomplish a hazardous capability, if that hazardous capability were significantly harder to accomplish without the model in question. That is, on its face, what the bill claims to do.

Kevin Frazier: Thank you very much, and we will have plenty of time to dive into some of the more specific provisions of that bill and perhaps some critiques of that legislation.

But David, I'd like us to just zoom out a little bit higher. Senator Schumer famously or infamously asked participants in the Senate's AI forums for their P(doom), or the odds that artificial intelligence will cause a doomsday scenario. FTC Chair Lina Khan apparently has a P(doom) of about 15%.

Alan and Dean mention in their piece that, quote, doomers, end quote, are particularly supportive of SB 1047. David, who the heck are doomers? What are their concerns? What does this mean for regulation? And why are we seeing this manifest in bills like SB 1047?

David S. Rubenstein: First of all, thank you for the opportunity to be on this podcast with you and to bring federalism into the fold, because really, until now, there's been very little discussion about the way that our constitutional structures affect not only the content of AI policy, but more importantly, and I'm sure we'll focus a lot on this, the question of who decides what AI policy should be.

Because who decides AI policy is going to affect what is decided, how it's decided, when it's decided, and all the details that go into those considerations. And so there's been some discussion about federalism, and I'm grateful for the piece that is going to be published on Lawfare, because it's a very important contribution to that discussion. But I'd like to expand the frame a little bit, because, as important as federalism is, it is a small slice of the considerations that are at play when we start looking at the larger landscape and what federalism has to offer.

Okay, so in regards to the doomers, there's a story there. I mean, it's quite an important story, but I'll say that the so-called doomers are one of many stakeholder groups, one of many different perspectives, that are being brought to bear on very important questions about how to regulate AI technology, which many think is among the most important technologies of our generation.

And depending on who you ask, including some of the doomers, perhaps ever. Okay. So these so-called doomers have been around for quite a while. It is not as if they emerged as a philosophy or an outlook when ChatGPT was released. Theirs was mostly a very marginalized voice, an important voice, but very much in the margins.

And what the release of ChatGPT did, and then especially GPT-4 in the spring of 2023, was to elevate questions of catastrophic risk, existential risk, and all of a sudden thrust them into the mainstream. But all of the major technology labs, all of the big labs, have some number of people that have long subscribed to the belief that artificial intelligence is potentially something that could bring about catastrophic risk.

And so they've given it a label, you know, P(doom), and they track these P(doom)s. They talk about their P(doom)s at cocktail parties. And as you mentioned, now these discussions have been brought to the forefront of policymaking agendas. And so what's interesting about this bill in particular is that it was really sponsored, not sponsored in the political sense, but certainly backed by one group, you know, CAIS, the Center for AI Safety, which is among those that are especially concerned about existential risk and catastrophic risk. And so one of the ideas that will tie into some of the themes we'll talk about today is that that voice, as much as it was at the forefront of some of the early debates in Congress in 2023, really gets marginalized a year later by the time the Senate roadmap comes out, right? So to the extent that existential risk and catastrophic risk are things that some groups, some individuals, are deeply concerned about, the inability perhaps of them to get their viewpoints heard, or maybe now they're more marginalized on the national stage, opens the possibility for outlets at the state level.

And so this bill is really emblematic, and I think it's independently an important bill. Okay, so like what it's trying to do, but I also think it's a wonderful selection for sort of a case study, which is how I read in many ways the article coming out in Lawfare, because it really becomes a flashpoint, a focal point that brings to the surface a number of other very important issues that have to be discussed.

Kevin Frazier: So, with that, Alan, David just dropped the F word, federalism, and that is at the crux of your paper. So let's assume, and I think, having reviewed the remarks of all three of you, everyone here agrees that there is some need for regulation of AI. Why should this be at the federal level? What is the case for making sure that these efforts are centered in D.C. and not in Sacramento, or not in Denver, or pick your capital?

Alan Z. Rozenshtein: Sure. So, in my view, and, you know, I think we should distinguish again this question of whether we should have this or that kind of regulation from where the regulation should be. I'm going to use this opportunity to plug Dean's fabulous newsletter, Hyperdimensional, which everyone should subscribe to.

It's really excellent. And it's actually how I got to know Dean. And he just published a really interesting piece critiquing AI safety regulation at the model level generally, from a public choice perspective. And it's a very interesting argument. I'm still thinking about, Dean, you know, whether I agree with you or not.

I've kind of got to cogitate on it more. The reason Dean and I wrote this piece is because we agreed that, you know, whatever our views might be, whether they're the same or whether they differ about the merits of any specific type of regulation, what we did agree on is that it should be decided at the federal, not the state, level.

So why is that? There are a couple of reasons. You know, one is just about expertise: at what level are you going to have sufficient knowledge and the ability to do this sort of fact-finding, and not just that, but the ongoing regulation, being able to hire the best people to do this?

And we thought that it's likely that this is going to be more at the federal level than at the state level. Though, of course, California is basically the size of a, you know, medium European nation. So, to be fair, if there's any state that could do it, it's probably California. But more fundamentally, we think that this question of safety regulation, again, especially at the model level, right?

We're not at all against a state regulating the use of AI in a particular domain. We might agree or disagree on the merits of that, but we think it's totally appropriate for a state to decide, within our state, we don't want AI used in this way, for these purposes. Our concern is that at the model level, the regulation could have such a broad impact, not just on the way AI is used in that state, but on how AI is developed at the national level.

And this is particularly true in California, of course, because most of the major AI labs are in California. And the value trade-offs that we think are at the heart of any kind of safety regulation, ultimately, this is a question not just of, you know, what your P(doom) is on the one hand versus, I don't know what the opposite of P(doom) is, P(utopia) or something, on the other hand, but also at what margin you're willing to trade that off.

You know, what do you think the upside and downside risks are, and how much are you willing to trade one for the other? That's just a values question. That's not a technocratic question as much as a question of what kind of world you want to live in. And, at least personally, I don't begrudge anyone wanting to take a more small-c conservative approach to AI.

And maybe that even might be my approach, or that may become my approach over time; I still don't know. But I do think, and I think I speak for both Dean and myself here, that this is the sort of decision that should be made at the national level. And so what we're trying to do is distinguish between those areas in which states can credibly say, look, what we're trying to regulate is primarily the way a particular activity impacts our citizens, versus, we're trying to regulate something that is much more general and that therefore we think should be regulated at the national level.

Now, look, to be fair, and this is something that someone actually pointed out on X when I tweeted this article out, the line between those two is not entirely precise. So, you know, one might say, well, when California, for example, regulates car emissions, that has a massive effect on the car industry generally, not because so many cars are manufactured in California, but because, you know, even though only some percentage are sold in California, it's just easier for companies to have one assembly line.

That's true. And, you know, we can argue about whether it's a good or bad thing overall for California to set emission standards, but I'm okay with California doing that, because the harms of a particular level of emissions from cars are directly felt in California, right? You know, you can say, if emissions are at this particular level, there's going to be this much smog in Los Angeles.

And if we cut this particular level of emissions, it's going to be this much less smog in California. And, you know, if this has effects somewhere else, well, so be it. But the sort of regulation that the California AI bill is aimed at, or the sorts of harms it's trying to prevent, you know, catastrophic biological, chemical, radiological harms, it's unclear what they have to do with California, exactly.

You know, of course, if there is a novel bioweapon that destroys all life on Earth, I guess it's also going to destroy all life in California. But this doesn't strike me as the sort of thing where California has anything particularly Californian to add to this discussion. And so, for this reason, I think this is the sort of issue where, because the effects of the legislation are going to be at the national level, given the potential impacts on AI development, and also because the harms the legislation is trying to prevent are not really state-specific or unique to a particular state, this does seem the sort of thing that the federal government should ideally do. Now, to be clear, I don't think that what California is doing is unconstitutional, and certainly I think under current Supreme Court precedent regarding the Dormant Commerce Clause, it'd be hard to establish that California is violating that.

So I'm not really trying to invite a constitutional challenge here. I'm just saying, as a matter of policy, this is the sort of thing where it would make sense for the federal government to come in and say, hey, we're going to handle this. And even if that means there's a bit of a delay, that's okay.

And even if that delay means we don't pass safety legislation, well, if that's the national consensus, then that's the national consensus. That's how stuff works.

Kevin Frazier: So focusing on this policy question and going to that idea that there may be some delay, we've seen Congress is not exactly speeding towards AI regulation.

We had this whole series of AI Insight Forums, months and months of deliberation, and then a roadmap that was more or less the equivalent of a Harvard Kennedy School policy memo. And yet we still don't see a lot of movement towards specific legislation. So, Dean, we've heard that Senator Wiener has described this bill as light touch, as not micromanaging, as really just preventing the worst, worst-case scenarios.

And though many folks across the nation may not always agree with the policy preferences of Californians, I would be willing to bet that most Americans are opposed to $500 million in damage to critical infrastructure and mass casualties. So isn't this in some ways just a public service of sorts by Californians, saying, we're gonna get to this first, we're gonna address these worst-case scenarios, and then you, Congress, can fill in the details?

Why isn't this sort of a public good that California's doing for all of us?

Dean W. Ball: Well, let me say a couple of things, just also going back to the federal process and sort of where the federal policymaking process is. So ChatGPT, to put it candidly, freaked a lot of legislators out, and, you know, many, many experts from all over the world were convened in Washington.

Senator Schumer had his Insight Forums, where dozens and dozens of the top people in the world, from industry, from academia, civil society, et cetera, all met and shared their beliefs. And I think what you find is that over that period of time, there's been a softening in the legislative mindset towards this issue.

I think a year and a half ago, there was a lot more of, oh, this is a catastrophic risk, and we have to regulate this, and we need to have pre-approval licensing, and what's your P(doom), and things like that. And I think that, you know, David, you made the point that the doom community, that perspective, has been kind of marginalized again in the last year.

And I would contend that if that's the case, it's of their own doing, fundamentally, because they certainly had a lot of microphones pointed at them. They had a lot of invitations to all of these events, congressional hearings, Insight Forums, hundreds of millions of dollars of financial support in the form of, you know, many different nonprofits.

The Center for AI Safety is just one of those nonprofits. And so I think, you know, if they've been marginalized again, and I'm not sure that's true, but to the extent their influence has gone down, I suspect that's because their arguments and their policy proposals have not been persuasive to people in D.C.

And I think part of the reason that they've been a tough sell is that the regulations that this community has suggested would impose substantial barriers to innovation and to the diffusion of this technology throughout society. So, while it is, you know, certainly possible, certainly believable to me, that if you did certain kinds of polling, a lot of Americans would support a bill like SB 1047.

I'm not sure a lot of Americans would support delaying cancer cures by 20 years, or delaying solutions to climate change by 30 years. You know, all of these innovations that we could get. But unfortunately, it's exactly that kind of thing, you know, innovations, ideas that never made it out of the cradle.

Those are the ones that often don't get a seat at the table in regulatory conversations, and I think that that, unfortunately, is what SB 1047 would affect in the most pronounced way. So, while in some sense you could call SB 1047 an effort to legislate on behalf of all Americans, in fact, I would agree with that; I think it is. But it's not obvious that the model-based framework that Senator Wiener is proposing is the way that we should go, nor is it obvious that it's the way that Congress wants to go. You don't really see, I mean, Senators Romney and Angus King, maybe, are still proposing, you know, legislation of this kind, but most people who are engaged on this issue in Congress are not pushing model-based legislation at this time, nor are either of the presidential candidates.

That, to me, is a strong sign that things have shifted at the federal level.

Alan Z. Rozenshtein: Yeah, and just one thing to add on to what Dean said. I think there is a tendency among some AI safety advocates, some, I want to be clear, I'm not painting with a broad brush here, among some, to assume that if they could just explain their position better, everyone would ultimately agree with them.

And I think the reality is that people's evaluations of both the downside and upside risks of AI are just different, and people's risk tolerances, how much they care about those upsides versus those downsides, differ. And so, you know, I do think that, just as the accelerationists, as I guess we might call them, need to take the AI safety folks seriously.

And sometimes they don't, and that's a problem. I think the AI safety folks also have to take seriously the possibility that they might just lose this argument. And they might be right, and the rest, you know, their opponents, might be wrong. That's always possible. But in a democracy, sometimes you just lose arguments.

And I think that's an important part of this puzzle as well, that I would just urge AI safety folks to keep in mind: just the possibility, right, that the democratic process is just going to go against them in this particular debate.

Kevin Frazier: So I think it's important to call attention to the fact that whether this is a true expression of the democratic process to its fullest extent is very much subject to debate. We have seen recent numbers come out that show the industry lobbyists have woken up to the new battlefield going on in D.C. and have poured lots and lots of money into a fight that has now become David versus Goliath. So I'm not sure that this is necessarily an argument won on its merits, about who has the most compelling argument.

So, David, let's go to you to get a little bit more insight, perhaps, about the true democratic lowercase d benefits of empowering states to tackle this issue.

David S. Rubenstein: Yeah. I want to just, if I could, respond to a few different things and sort of tie that point in. I appreciate Alan's comment earlier about drawing a distinction between making a doctrinal or constitutional argument on the one hand, and what is essentially a political decision on the other.

So, whether Congress or the states should regulate in the way that California is now doing, if that is not prohibited from a constitutional standpoint, then it is a political issue. And so then the question becomes, okay, and I don't suppose I have an answer to this, but I just want to point out that, as a political matter, whether or not to preempt state law is also a political issue. States will have representatives in Congress that may not want federal law to preempt state law. And so when we think about Congress, we have to remember, certainly in the Senate, you know, we have two senators from each state. It is very difficult to pass a law at any time. And so while we're focusing on, maybe we're using 1047 as a case study, if you think about the substance, the details, the rights and the duties of any law at any level, but let's say at the federal level, those are all important questions on which there is virtually no agreement currently.

We've only barely begun to ask the questions. But it's really important to appreciate that preemption itself is a political issue. And I think this is probably illustrated nowhere better than in the ongoing debates around data privacy legislation. Congress has been trying, you know, for maybe a decade to get comprehensive data privacy laws passed, and for whatever reason, Congress has not been able to.

There were two real sticking points: one was preemption of state law, and the other was private rights of action. And then you have a number of other smaller, subsidiary issues that go into those. And I think that there are a lot of lessons to be drawn from the political economy around data privacy and so on and so forth.

The first thing is that the industry fought very hard, and has fought very hard, to prevent any kind of comprehensive federal data privacy legislation from being enacted for a long time. That had the effect of pushing the issue to the states. And so California, you know, came out of the gate, modeled mostly on the EU's GDPR, with the first comprehensive data privacy law in the United States. And I don't know what the latest numbers are, but I think up to 14 states now have something like a data privacy protection law. And they're different; they're not all the same, but there are similarities and there are differences, and there are political reasons for all of that.

But now what's interesting is that data privacy legislation has come back into the national spotlight. You have this bipartisan bill, and the most recent version of the comprehensive data privacy legislation gets really a lot of bipartisan support. And by the way, I think like 75 percent of Americans would support it.

And one of the issues that tanked the most recent iteration is preemption again. Because had a comprehensive data privacy law been passed years and years and years ago, before the states started enacting laws, preemption would still have been a political choice, like, should we preempt state law? But there would have been no state laws to preempt.

And the political trade-offs are very different. But now, once states start passing data privacy laws, and this happened, they are not willing to allow the federal government to pass a comprehensive data privacy law that is less protective, let's say, than what the states are doing. I think that industry should be very incentivized right now to get ahead and get Congress to pass comprehensive anything regarding artificial intelligence, because states are doing it.

There are 600-plus bills pending in the states. States have political polarization, but it operates very differently than at the federal level. And here I want to talk about democracy with a small d. The polarization that exists in the states is concentrated. So you have a record number, certainly since 1990, I don't know if there are even stats much before that, but since 1990, 40 states have trifectas, which I believe is the most ever, certainly in the last, I don't know, 30 years.

So you have party polarization, but it's state by state, which means that states will be able to enact laws that the federal government cannot, and it's really important to appreciate that that is also democracy. And some would say that is a better representation of democracy, because it more closely resembles the preferences of the people in those states, in those jurisdictions. And you can imagine that if there were a uniform national AI law about anything, it might actually satisfy fewer political preferences, not only because the Senate itself is sort of anti-democratic in the way it's built up, and the filibuster, but because, first of all, most people are ambivalent about these things.

I mean, people are really just starting to talk about AI regulation for the most part. But the more that people learn, the closer we get to the details of any kind of laws, it is only going to become more difficult, not easier, to pass AI laws as more people become involved in the conversation.

And what the states offer is the ability to actually satisfy more preferences, because one state could have their preferred set of laws and another state could have theirs. And actually, I'm not so sure that gives you less representation of what people want; you might actually get more. Because if you just split it down the middle, like, Democrats, Republicans, and by the way, it's not even clear which side of a lot of the AI issues each party is on; there are a lot of strange bedfellows in the AI space.

It's one of the interesting dynamics to follow. But if you just split down party lines, at some point, politicians are going to attach to particular AI proposals, because the public is going to be looking for signals and cues from their political leaders. And once that happens, you're going to have basically 50-50, let's just call it, politicians that support or don't support.

So when a federal law is passed, and by the way, that will happen, I hope, or maybe should happen at some point, it's not necessarily going to satisfy more political preferences. You might actually only satisfy 50 percent, let's say. Whereas the blue states could pass their laws, and the red states could pass their laws.

And I am not saying at all which is better or worse. All I'm saying is that this is what's going to happen. So, one of the things that I'm trying to do, I'm writing in this space, I'm coming out with an article called Federalism and Algorithms, and I'm doing a lot of things in this piece, but one of the things I'm trying to do is imagine what an AI federalism future could look like.

And there are infinite possibilities, nobody can know, but I do think it's useful to think about it in two broad, stylized images of the future. One I call federalism by default. The other I call federalism by design. And, like I said, with caveats, there's overlap, but there's some intuition here.

Number one, federalism by default, is what I imagine as really just an extension of the status quo, what's happening today politically, where Congress either does not want to or cannot pass comprehensive federal AI law, the executive branch will try to do it based on existing authorities, and the states will act, like the states already are.

And so we haven't talked about what should be, what's best; this is happening. So it gets back to my earlier point, thank you for having this discussion, because federalism has been pushed to the sidelines of these debates about the future of AI policy, but there is an AI federalism by default that's coming.

Kevin Frazier: I want to pause you there so that we can get Dean's thoughts on this sort of Jeffersonian ideal and why that isn't the case with respect to SB 1047, perhaps, in his opinion. I think there is a compelling argument that you set forth that this is the people of California theoretically showing that their preference for AI is this more onerous approach, perhaps, as we've debated here. Dean, what's your concern about this lowercase-d democratic approach?

Dean W. Ball: Yeah, so I'll say a couple things. First of all, I've spent most of my career in state and local policy. And so I'm a big believer in state government, and city government too, for that matter. And I think that there is a robust role for state and local governments to play in AI. What I think is more fundamental, though, is this question of what your regulatory approach is.

So there's this idea that we're going to regulate the models, and models are going to be regulated like airplanes are regulated. And that sounds logical and consistent with history, but the reality is that that is a radical step, in the sense that that's not how consumer software is generally regulated, right?

Like, Apple doesn't submit their operating systems to the government. They spend hundreds of millions of dollars, billions of dollars a year making their operating systems. They don't submit those to the government for approval, and there isn't a substantial regulatory regime that applies at the operating system level.

So this would be a step change from how software is generally regulated in the United States. And I think that if we're going to go in that direction, there needs to be a very serious appraisal of the costs associated with doing that. And also, I think, just a recognition that there's probably a reason we don't regulate software in that way, right?

And part of that is, David, you've mentioned this idea of comprehensive federal AI legislation. Comprehensive of what, I would ask, precisely, right? Like, we don't have comprehensive computer regulation. There's no comprehensive federal computer law, no comprehensive federal electricity law, no comprehensive federal internal combustion engine law.

These things don't exist. Now, those things are regulated in a thousand different ways, by all the different ways in which personal conduct and commercial conduct engage with public policy. There's not one law. And generally that's the case for general purpose technologies. We're not talking about something like cryptocurrency here.

I don't want to offend any crypto supporters, but I think the use cases there are more limited than a technology which can be used both to predict the structure of proteins and the next word in a sentence, among many other things. And so I would almost say it is incoherent to suggest that there's such a thing as a comprehensive regulatory approach to what is, in essence, a method of doing statistics.

Now, that's not to say that there isn't federal, state, and local law that will apply to AI. And I think part of what the Schumer discovery process, the insight forums, led to was the realization that AI is in fact already regulated by many, many different laws, and that AI policy over the next decade, rather than being about landmark legislation, is probably more about iterative improvements to existing law, to make it compatible with AI and to make AI compatible with it, in a bidirectional feedback loop.

So I think that there's room for the Jeffersonian approach, and I think state and local governments will play a big role in that. But this question of whether we want to create a centralized regulator for AI models is in fact a quite major one.

It is one that, well, certainly not incompatible with interstate commerce, but one that challenges interstate commerce in important ways, because of the way that software is distributed on the internet, and for the reasons that Alan mentioned earlier. So I think that a question like this in particular is one of the relatively few major questions that I would say should be decided at the federal level.

But, yes, how are we going to deal with deepfakes, name-image-likeness problems raised by AI, all sorts of things like that? Maybe even some aspects of intellectual property, which tends to be a federal matter, but some aspects of that question. There's all sorts of room there. And I think, actually, there are all kinds of unanswered AI policy questions where the laboratories of democracy model can be a structural advantage for the United States.

Because China doesn't have a bunch of sovereign entities inside of it that can pick different regulatory paths and converge on the best option. France doesn't have that, but we do. And I think it can be an advantage, but it can also be a disadvantage. And I think the specific question of model-based regulation is an example of where it's a disadvantage.

Kevin Frazier: So I really want to drill down a little bit more on this question of where the right point of intervention is from a policy perspective. And I do want to just raise the fact that, in comparison to electricity and the unknown unknowns about electricity, we weren't concerned about mass casualties from electricity when it first came out, nor about $500 million in potential damage to critical infrastructure.

So for those folks who may disagree with you, Dean, about the possible risks posed by AI, where would we go from a regulatory standpoint? Where is the proper regulatory point of intervention, if not at the model level, if not at the companies who have the most control and the greatest degree of understanding over these models?

Alan Z. Rozenshtein: Can I just jump in before Dean? Because I want to actually push back a little bit, Kevin, on what you said. I'm actually not sure it's true that with the development of electricity, or internal combustion, or the chemical manufacturing system, there weren't concerns about massive amounts of damage and even potential mass casualty events.

I mean, certainly there were concerns with other technologies, like nuclear. I think that the risks of AI, or let me put it this way, the uniqueness of our current concerns over some forms of AI relative to what's happened in the past, are somewhat overstated. And I actually do think that there have been periods in the past where we haven't known a lot about the technologies we were engaging with, but we didn't try to regulate them in the same way that some people now try to regulate AI.

Now, there's a question of whether we should have. I mean, that's a different question. But I do think that what can get lost sometimes is that concerns over AI are not quite as unique to AI as some AI safety proponents might argue that they are. But I jumped in ahead of Dean, so I want to hear what Dean has to say.

Dean W. Ball: Yeah. And I would even add that, you know, I think a lot of people believe our current situation with AI is unique in the sense that, it is often said, we don't understand how the most advanced AI models work. And that is a claim that is kind of true and kind of not. But the fundamental mechanisms of many important technologies that came to define our economy were not understood when they were first adopted, and that includes electricity. We discovered electricity and put it in buildings and had world's fairs based around electricity before we discovered the electron.

Steam engines were in widespread use before the science of thermodynamics was understood. So I would say that this might just be the common outcome at the frontier of technology: to be uncertain about all the mechanics, where part of how you learn how it works is by adopting it.

Kevin Frazier: We've raised a lot of important points about distinguishing AI risk from other sorts of risks. And I know David has some thoughts on this question as well. So let's go back to there.

David S. Rubenstein: Yeah, let me first say that because I didn't have a chance to offer what an alternative AI federalism future could look like.

I explained that AI federalism by default is, again, a sketch, a stylized vision of the future, but it's really anchored to the current political dynamics and the current political economy of AI. And so it's very easy to foresee a world where Congress does not regulate AI in any significant sense in the commercial market.

And that the states will, right? So this is federalism by default, and what to make of all this? Now, I want to say, though, that one of the things the states offer, and you hear a lot about the laboratories of democracy, is that the states are where the laboratory is for regulatory innovation. I think that's all true.

We talked about which one would be more efficient in terms of satisfying voter preferences, and I think that's a theoretical question and an important question to consider. But one thing that we didn't really talk about yet, and it would be remiss not to bring it up, is what the states offer as a platform for national debate.

You know, so we talk about California and what interest California has, and I don't want to go down into the weeds of that. I do think that precisely because the big technology companies are in California, and because the $500 million in catastrophic risk would apply to that type of damage in California, that alone should be sufficient, plus many other reasons I can imagine, for making a political argument, not a constitutional one, for why California might want to be involved in setting the tone of regulation.

But think about all the money and all of the voices that are pouring into California that are not from California. Kevin, you just mentioned I'm in Vancouver, you're in Oxford, and then we're all over the country, all over the world. But it's because there's a chance that California might pass this law that we have a global conversation, forget about a national conversation, about the merits of what is being proposed in California.

Whereas if the discussion were only in Congress, we all know that Congress is not going to do anything along the lines of what California is doing anytime soon. The question then is, well, could Congress just pass a preemption provision to say that nobody's going to regulate AI safety or regulate the risk of catastrophic harm?

It'd be one thing if the federal government were going to regulate it in some way and at the same time preempt state law. I mean, that creates more opportunities for political consensus. But the idea that Congress would just pass a preemption statute, which, the authors are right, it should be pointed out, is very fuzzy: the line between model level versus deployment versus use is very fuzzy. But if Congress is simply going to evacuate the entire field of AI safety, not put anything in that spot, and prevent states from doing anything in that spot, I just do think that, I mean, again, this is just my political sense, but I don't need to leave it at that.

I don't see how the American public, or Congress, is going to go for that, because you have to remember that members of Congress represent people in the states. And so I just don't see how that's going to happen if there are sufficient states, or even just California, given its prominence in the dialogue.

Alan Z. Rozenshtein: So I guess I just want to respond in one way to what David said, which is, I think he's right that, in its own way, the California effort is sparking a global conversation, and that's better than the alternative. But I almost feel like that actually proves the point that Dean and I were trying to make, which is: if you have a state law that is getting so much attention outside the state, across the country, around the world even, doesn't that suggest that the potential impact of that state legislation is by definition extraterritorial? Again, not in the strictly legal sense, just in the sense that it has so many effects outside the state that one might reasonably decide the effects outside the state outweigh the effects inside the state. And I think that's the point that Dean and I were trying to make: this is a situation where Congress could legitimately, and we think as a policy matter should, conclude that if California action doesn't just drag in people from across the country, but the Europeans and all sorts of other players, that may be an indication that this is the sort of thing that should be decided in Congress, not the California statehouse.

Kevin Frazier: So this will not be the last word we have on SB 1047, because there is a heck of a lot of time between now and whatever's going to happen to that bill. So hopefully we can have these three esteemed panelists back down the road. But we will leave it there for now.

David S. Rubenstein: Thank you so much.

Dean W. Ball: Thanks.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution.

You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website slash support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.

Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Goat Rodeo.

Our theme song is from Alibi Music. As always, thank you for listening.

Dean Woodley Ball is a Research Fellow in the Artificial Intelligence & Progress Project at George Mason University’s Mercatus Center and author of Hyperdimensional. His work focuses on emerging technologies and the future of governance. He has written on topics including artificial intelligence, neural technology, bioengineering, technology policy, political theory, public finance, urban infrastructure, and prisoner re-entry.
Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He is writing for Lawfare as a Tarbell Fellow.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
David S. Rubenstein is the James R. Ahrens Chair in Constitutional Law and director of the Robert J. Dole Center for Law and Government at Washburn University School of Law. He currently teaches constitutional law, administrative law, legislation, and jurisprudence.