
Scaling Laws: Sen. Scott Wiener on California Senate Bill 53

Kevin Frazier, Alan Z. Rozenshtein, Scott Wiener
Tuesday, October 21, 2025, 11:53 AM
What is the significance of SB 53 in the larger debate about how to govern AI?

Published by The Lawfare Institute in Cooperation With Brookings

California State Senator Scott Wiener, author of Senate Bill 53, a frontier AI safety bill signed into law by Governor Newsom earlier this month, joins Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to explain the significance of SB 53 in the larger debate about how to govern AI.

The trio analyze the lessons that Senator Wiener learned from the battle over SB 1047, a related bill that Newsom vetoed last year, explore SB 53’s key provisions, and forecast what may be coming next in Sacramento and D.C.


Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

This Scaling Laws episode ran as the October 24 Lawfare Daily episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.


Transcript

[Intro]

Kevin Frazier: It is the Lawfare Podcast. I'm Kevin Frazier, the AI innovation and law fellow at the University of Texas School of Law and a senior editor at Lawfare. Today we're bringing you something a little different. It's an episode from our new podcast series, Scaling Laws. Scaling Laws is a creation of Lawfare and Texas Law.

It has a pretty simple aim, but a huge mission. We cover the most important AI and law policy questions that are top of mind for everyone from Sam Altman to senators on the Hill to folks like you. We dive deep into the weeds of new laws, various proposals, and what the labs are up to, to make sure you're up to date on the rules and regulations, standards, and ideas that are shaping the future of this pivotal technology.

If that sounds like something you're going to be interested in, and our hunch is it is, you can find Scaling Laws wherever you subscribe to podcasts. You can also follow us on X and Bluesky. Thank you.

Alan Rozenshtein: When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's not crazy. It's just smart.

Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.

Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it?

Alan Rozenshtein: AI only works if society lets it work.

Kevin Frazier: There are so many questions that have to be figured out, and nobody came to my bonus class. Let's enforce the rules of the road.

[Main episode]

Kevin Frazier: Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI, policy, and, of course, the law. I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law and a senior editor at Lawfare, joined by Alan Rozenshtein, associate professor at Minnesota Law and research director at Lawfare, and our returning guest, State Senator Scott Wiener.

As debates over how to regulate AI become more partisan, folks on both sides of the aisle can certainly agree on one thing: Senator Wiener is a creative and tireless policymaker. We're fortunate to have him on to explain a major AI bill recently signed into law by Governor Newsom. And with that, we hope you enjoy the show.

So if states are quote unquote, laboratories of democracy, then, Senator Wiener, I'm pretty sure you are effectively a PhD-level scientist. You've made progress happen on housing, on transit, on homelessness, and so many easy issues, and then you picked AI. Why? Why have you spent so much of your very valuable and very creative political capital on this issue?

Scott Wiener: Well, I represent San Francisco, which is the beating heart of AI innovation. And I'm proud of that. We're all very proud of that. And a few years ago, people who I respect and trust in the community, some who were working at AI labs, some who were startup founders, asked me to have a conversation around AI safety.

And so I ended up attending a series of dinners and salons with a significant number of people, really impressive AI minds who talked about the work that they were doing and what they were seeing and the promise of AI, but also talked about the concern and some of the safety risks. And they were worried that not enough was happening, and that we were relying on some non-binding pledges from AI labs to do the right thing. And they wanted to explore a policy approach. And so we started having conversations with a lot of different people. And the first version of our AI safety bill, SB 1047, was the result of those conversations.

Alan Rozenshtein: I'm curious, Senator, since you do represent San Francisco, I imagine that AI, and AI regulation and AI policy as well, is kind of a day-to-day concern to some of your constituents in a way that it might not be to others in the state legislature or across the country. But at the same time, you know, you also, I assume, think of yourself as representing Californians in general.

And I mean, maybe compared to some of the other issues, whether it's, you know, housing or transportation or, you know, the ICE masking bill, you know, all the other things you've been involved in, I'm curious if you think that AI is a kind of bread and butter issue for, let's call it, normies in San Francisco and beyond.

Scott Wiener: Well, in San Francisco, you know, it's a little bit different. I did have people, invariably people who worked at AI labs and other tech workers, come up to me on the street throughout the last two years to thank me for my work on it. So that's the beauty of representing San Francisco: your constituents know exactly what you're doing. And it's always bad to try to BS your constituents, but in San Francisco, they particularly know if you're BSing them.

And so, does it resonate with people at the level of housing or some healthcare issues or even climate? No, not at that level. But people also appreciate that I'm focusing on it and they understand that. They're excited about AI and what it can do, and they're also a little nervous. Not just around the catastrophic risk that our legislation dealt with, but also around algorithmic discrimination, around the very, very rapid destruction of jobs unlike any other past technology, and how we're going to deal with that. People are worried about mental health issues, and they don't like it when a chatbot tries to convince a kid to commit suicide.

And so people do have concerns and they're glad that someone is actually trying to address them.

Kevin Frazier: As one of your former constituents, I can speak to the fact that San Francisco is in and of itself a bubble within the bubble that is California, although a very big bubble. I'm curious about your colleagues in Bakersfield, in Modesto, in the areas that perhaps aren't the beating heart of AI innovation, although Austin may have something to say about that, but we can fight about that another time.

Did they turn to you during these AI debates to say, you know, ‘Scott, help us, we're not familiar with these AI ins and outs,’ or was there sort of a group that was helping your colleagues better understand these issues?

What did that look like, just from a legislative understanding perspective?

Scott Wiener: Well, the good news is there are a number of members of the California legislature who are quite knowledgeable around technology and tech policy and AI. So, for example, one of my newest colleagues, Jerry McNerney, who used to be in Congress and was sworn into the state Senate last December, he's, I believe, a nuclear physicist? Or some sort of very impressive physicist, I can't remember which kind.

And he knows his stuff around AI. And he's been a great colleague on that. My colleague Assemblymember Rebecca Bauer-Kahan from the East Bay, who chairs the Assembly Privacy Committee, she's very knowledgeable and very active in the space.

And Senator Steve Padilla from San Diego has been very active. So there have been a number of them. Buffy Wicks, some of them are from the East Bay. She hasn't worked in the AI space, but she's worked in various sort of child protection spaces around social media and so forth.

And there are others as well. Josh Becker, who represents part of Silicon Valley. So we have a good crew of members who are knowledgeable and active. And we also work with members from other states, from New York, for example, and a few other states. So it's been a good collaboration, and we have a good knowledge base in the legislature.

Alan Rozenshtein: So we're going to dive into the details of SB 53, and SB 1047 before it, and all the future stuff in a second. But before we do, I want to ask a high-level question and get your sense, since you're very much in the trenches. I think, and please correct me if you disagree.

I think it would be fair to say that you are part of the AI safety movement, very broadly construed, and obviously that contains multitudes. And, you know, if we're going to simplify, on the other hand you have the kind of accelerationist types. Again, we're simplifying, but, you know, folks like the White House and a lot of the companies, a lot of the VCs.

My sense, following this for a few years, was that the wind was very much at the back of the safety types for a while, and then that has shifted somewhat in the last six to 12 months. Part of it being what's happened in Washington, but part of it just being the general vibe. Obviously we're in the very early innings of AI regulation and AI policy.

I am curious where you think the momentum is. Or feel free to push back on the premise of the question, if you don't think that's a useful way of thinking about it.

Scott Wiener: No, I think, in a way, you know, effective altruists versus accelerationists, I think most people are somewhere on that spectrum, but they're not at one end or the other.

And I think so many people in the AI world and beyond want to see AI innovation happen and want to see how it can make the world a better place and cure diseases and make people's lives better. And people want to protect public health and safety. And people understand that, you know, there are benefits to, you know, the internet economic model that's developed. And they also know that maybe we should have done something about data privacy a long time ago. They know that social media has had huge benefits for society, and they also see the detriments, and that maybe we should have done something about that.

So people are pretty sophisticated in how they think about this. And yeah, you have the people who are like, you know, accelerationism, just keep going and don't worry about it, and everything will work out one way or the other. And then you have, you know, people who just don't want to see anything happen. But that doesn't describe the vast majority of people.

And yeah, the pendulum has swung back and forth a little bit. But ultimately, you know, we just have to make the right policy, regardless of where that pendulum is at a moment in time. And yes, last year, during the SB 1047 debate, I became very intimately aware of this fight.

And, you know, what's happening now with this federal government, they just want to have pure accelerationism. And I don't think most people want that.

Alan Rozenshtein: I'm curious, how surprised were you that the fight over SB 1047 became such a huge deal? I mean, it was a real focusing flash point. Did you get what you were hoping for with that, or was it a little more than you expected?

Scott Wiener: It was a little more than I expected. When Governor Newsom vetoed it, it was actually the longest veto message that I have ever seen. It was multiple pages. Usually it's like one page, or maybe a page and a half. This was, I don't know, three or four pages.

And I think the governor said in the veto message that the bill had created its own weather system, which was true. When I introduced SB 1047, I thought this would be a typical tech regulation bill. We'll introduce it, there'll be, you know, some opposition, we'll have conversations, some negotiations, we'll figure it out, we'll move it forward, and we'll either pass something or we won't be able to pass something. I did not anticipate the scale of the dialogue and the fight.

And what was very interesting about it is that on both sides the bill became, in a way, like a, what do you call it, an avatar, or a vessel for everyone's hopes, dreams, fears, anxieties. There were people, both supportive and in opposition, supporters and opponents, who attributed things to the bill that were not in the bill. The bill itself was, I don't want to call it modest, it was an impactful bill, but there were opponents who, when they described the bill, were describing a bill that was way bigger and more expansive than what SB 1047 actually was.

And there were supporters who were putting their hopes into the bill and describing it and envisioning it as being bigger than it actually was. And so in a way, the bill triggered a really important conversation and forced people to talk about like, what do we mean when we say that we want to have smart guardrails?

What does it mean when the CEOs of all of the major AI labs go to Congress and go to Seoul, South Korea and go to the White House and promise that they're going to do safety testing and be responsible?

Okay, so we want to put that in statute now, and now they all hate that. And so what does that mean? How are we going to actually do this in a way that promotes innovation and makes sure we don't have a catastrophe for society?

It was exceptionally painful for me, even though I knew the public was on my side. The polling that was done, including polling that was done jointly by supporters and opponents, showed overwhelming support, pushing 80% support statewide for the bill, with higher support among tech workers than among the general public. So I felt good about popular opinion. But it was painful, and I have supporters who are no longer supporters as a result of it.

Kevin Frazier: Well, I will save a conversation about the quality of AI polling for another date, because I think it's somewhat akin to asking if people think quasars are going to cause the end of the world. Which is to say, no one knows, but it sounds scary. But we can talk about––

Alan Rozenshtein: Well, Kevin, I think they will. Thank you for that. There goes my good night of sleep. ‘Go tell your kids, Alan. Watch out for quasars.’ But Senator, I am eager to hear your explanation of the process that occurred between SB 1047 and SB 53.

Yeah, because I come out a different way on many of these issues than you, I think. The important point, though, as you've noted, is that you all were very deliberate about trying to learn from SB 1047, in a way that I do think speaks highly of the legislative learning process. You all were accused in the SB 1047 fight of being too insular, of having cloak and dagger type sessions, of really only hearing from one side of the safety debate.

What sort of steps occurred between the veto message, which was a little muddled about what exactly he wanted to see in the future, and the path to SB 53?

Scott Wiener: Yeah, and before I talk about that, I do want to address some of the conspiracy-brain stuff that was happening about cloak and dagger and secretiveness.

There were people who said that, there was a ton of conspiracy-brain, especially on the, am I allowed to say curse words?

Kevin Frazier: Correct.

Scott Wiener: Especially on the shitshow platform known as X.com. And I mean, truly conspiracy-brain. No legislative process is perfect, and you can always look back and say, hey, I should have done this or that.

The original version of SB 1047 was what we call an intent bill, which is like an outline of a bill where you're basically putting it out there to say to the world, hey, we're working on this. In 2023, at the end of our session, so, like, six months before 1047 went into print, I put in print an intent bill for the public to see, an outline that ended up being largely consistent with what became SB 1047. And we did that for the precise purpose of saying, hey world, hey everyone, love this or hate this, this is what we're thinking about. Tell us what you think.

And I literally took a link to that publicly introduced intent bill and started texting it around to people who I knew would like it or not like it, to VCs, to high-level tech people, all sorts of people, saying, hey, we just introduced this, we'd love to meet. Tell us what you think. And it was like you could hear a pin drop.

And it's not a criticism. People are running companies, doing investments, people, you know, are busy. So we got very little feedback. Then we put 1047 together, and before we introduced it, we were sending a draft around. I'm not going to say who we sent it to, but we sent it to various people who ended up being opponents of 1047 to say, tell us what you think. Some people didn't really respond. Some people said, hey, that looks pretty reasonable, the right direction.

And some of those people ended up opposing it. So we were, I think, hyper-transparent about what we were looking at. And people just were not focused, or maybe they thought, oh, this bill will die quickly. We then introduced it in, I think, February of 2024. And for the first three months, there was, like, silence, very little dialogue about it.

And then a few months later, a few of the big accelerationist accounts had a meltdown about it on Twitter, and that's when it started. And then after that happened, we made big amendments to the bill around open source, around the requirement to have a shutdown switch. So many significant amendments to the bill in response to feedback, particularly from the open-source community.

And so some of the conspiracy theories about it were, you know, just truly that: conspiracy theories. We really tried to be open and transparent about the bill. But anyway, with that said, after the veto, the governor empaneled this working group, including two people who were opponents of the bill and one who was more sympathetic to the bill.

And they did a lot of good work, and I introduced the original version of SB 53. We took the two pieces that were least controversial in 1047, which were Cal Compute, the public cloud that we wanted to create, and the whistleblower protections. And we put those in a bill and said, once the working group report comes out, we will consider putting pieces in.

That report ended up coming out in the spring of this year and it had some really solid recommendations. And we included those in the bill and proceeded from there.

Alan Rozenshtein: So let's talk about, then, SB 53. And so, you know, shortly after Governor Newsom signed it, Kevin and I did a kinda rapid response on this, but it's a real pleasure to be able to talk to the drafter of it.

As you mentioned, it has the Cal Compute part and it has the whistleblower part, both of which I think are great and probably not that controversial, though we'll get to them. I'm sure there are things to talk about.

Scott Wiener: I became, and always,

Alan Rozenshtein: I mean, everything is controversial with AI.

But I think it's probably worth spending most of our time focusing on the reporting requirements here. So I'd love to get, in your words, your description of what you are trying to accomplish and what you think the reporting requirement is. Because, having now read that provision a few times, I think there are some interesting, and we're all lawyers, right? This is what we do.

We look at legislative text and then we immediately see how far we can torture it. Some interesting corner cases that might be worth talking about. But before we do that, what do you think the reporting requirement does in the main, and what are you trying to accomplish from a policy perspective with that reporting requirement?

Scott Wiener: Sure. And by the way, the whistleblower piece actually is quite important because existing whistleblower law only applies if you violate the law. This allows people to blow the whistle even if it's not a violation of the law, but it's just something dangerous that’s proceeding.

So that is quite significant. Yeah. So, well, it depends on whether you're a lab with annual revenue of more than 500 million or less than 500 million, but you have to disclose some version of your policy around safety, around catastrophic risk. And if you don't have a policy, then you have to say that. But now we actually know, because they all say that they have these policies and that they're doing it.

Alan Rozenshtein: Okay, so I actually want to focus on that for just a second, if that's okay. Because this was the loophole, quote unquote, I'm not sure if it was a loophole, that Kevin and I were trying to figure out. As we read the language, and it's good that you're confirming this, you are allowed to say, yeah, we don't have one, right? And I assume, correct me if I'm wrong, that your calculation here is: there may be some companies that do this, but presumably the market blowback, the embarrassment of being a frontier AI company with no policy, will discipline that. Is that the theory here?

Scott Wiener: Yeah. So SB 1047 was a liability law: you have to do the safety testing, and if you don't and something bad happens, you have exposure.

That didn't fly. And so SB 53 is a transparency law. With the whistleblower and Cal Compute pieces as well, but it's largely a transparency law. And so it's not mandating what they have to do. It's saying you have to be transparent about it.

And yes, if you have decided you're going to blow it off and not have any kind of responsible scaling policy or safety policy, then you have to say that and be transparent about it.

And yes, there will be blowback. And we think that none of them are going to do that, or very few will. And so the large companies, the large labs, will have to disclose their full, detailed policy. They do have the ability to redact for trade secrets, and there could be some fighting about that, but they have to disclose it.

And then under 500 million in revenue, they have to disclose a summary version of it. We have a lighter touch for those smaller companies. And then if they have a critical incident, so, something happens that shows, you know, something dangerous happening or potentially happening, they have to report it to, I believe the agency we ended on was the Office of Emergency Services, OES.

One thing that, fun fact for people who don't follow the legislative process: one of the most absolutely annoying parts of legislating is figuring out what agency is going to administer a law. Because what happens is there's huge fighting.

Stakeholders have strong opinions about who should or shouldn't, 'cause they like certain agencies or hate certain agencies. The agencies themselves are like, I don't want to do that. And we're like, well, no, but you make perfect sense. No, I don't want to do it. And then you end up fighting about that, and then you have to work out who's going to do it.

So, we went with OES in the end. I love OES. It's great. It's a great department. But it was a process getting there.

Alan Rozenshtein: So, one more question on this reporting requirement. And again, I think the self-conscious willingness to let companies say ‘we don't have a policy,’ if that's what they have, may answer this question, but I'd love to get your thoughts on this.

So I'm sure you're familiar with the Zauderer case under the First Amendment. For our audience, this is the case that sets out what counts as permissible compelled speech in the commercial context. The short version is that under the Zauderer test, the government can force companies to say things they would otherwise not want to say, if it's related to a government interest in the commercial context and the disclosures are, I think the language here is, factual and uncontroversial.

Scott Wiener: Terrible decision, by the way, just for the record. I don't know if you agree. Horrible.

Alan Rozenshtein: Well, I'm––say more, actually. Why do you think so?

Scott Wiener: Well, I think this Supreme Court for years has been, first of all, fetishizing corporate speech. And we saw that with Citizens United, of course, which is just the most extreme and destructive and horrific manifestation of that.

But we've seen it elsewhere too. You know, when I was on the Board of Supervisors in San Francisco, I passed a law requiring health warnings on advertisements for sugary beverages that, like, give your kids diabetes. And ultimately that had to be repealed, 'cause it was losing in court.

And it got to the point where I authored this law requiring large corporations to disclose their carbon emissions. It's being implemented now. And the Chamber of Commerce took the position that it was compelled corporate speech to disclose data. That's been rejected by the courts, but the fact that they even feel like they can in good faith raise that argument shows how extreme the Supreme Court has gotten around corporate speech.

I'm not saying that corporations should never have any First Amendment protections whatsoever, but I think it's gone too far.

Alan Rozenshtein: So that's interesting. So your concern with Zauderer is that even though, at the time it was decided, which was I think the mid-eighties, it was meant to be more permissive of government regulation than the normal compelled speech doctrine, the fact that it still requires things like ‘factual and uncontroversial’ would prohibit some government transparency legislation that you would think is appropriate.

Scott Wiener: And I should say that decision was not in and of itself inherently terrible.

But where it's gone from there has been horrible. So I should rephrase what I said: it's not that decision, it's the doctrine that's flowed from it in the decades after, including now, that's terrible. So I'll reframe that.

Alan Rozenshtein: Fair enough. I do wonder though if, given that, you know, we have the doctrine, right, that we have. I would imagine your argument for SB 53’s reporting requirements would be, ‘well, they really only kind of have to report the stuff they've already done.’ And if they haven't done stuff because they don't want to, like––they, they don't, they internally don't want to speculate on this stuff, well then they'll report that they haven't speculated on this stuff.

So we're not really requiring them to say things that go beyond Zauderer. I'm just trying to think through, because I can imagine that this will be raised by someone at some point.

Scott Wiener: Yeah, I mean, it could. Listen, I mentioned the carbon disclosure. But the idea of saying it's a First Amendment violation to compel a corporation to disclose information around safety, or climate, that could mean that you can't require SEC filings. Those could also, quote unquote, be argued to be compelled speech, which would be ridiculous. So when we talk about protecting the public, whether it's investors or health and safety, you can require corporations to disclose information. That's not compelled speech, in my view.

Kevin Frazier: So while we're playing law review, excuse me, while we're playing law professor probing the bill, I would love to get your insights on a question that I've thought a lot about and that is garnering a lot of attention. You and other sponsors of bills similar to SB 53 have been quite outspoken about the fact that you recognize it would be wonderful if Congress were to pass similar legislation, and perhaps even preferable. And yet you all have proceeded nonetheless, in a way that, in some cases, folks have blatantly said they hope will have nationwide ramifications in terms of changing how AI labs are behaving.

And so we're seeing in real time a sort of intended Sacramento effect. And yet, on the other hand, in another area where you have done quite a bit of work, reproductive health, there's been a real pushback against states trying to apply their laws extraterritorially and subject non-residents to whatever state law they're imposing.

So how do you square this circle of, in one domain, saying states need to stay in their lane, that these pivotal decisions about, for example, reproductive health should be the decisions of Californians, not Texans, not Floridians, or whomever you want to choose.

And yet, on the other side of the equation, saying California cares so much about the rest of the nation with respect to AI that we should move ahead on what we think will be a bill with nationwide ramifications.

Scott Wiener: I personally think there's a huge difference between a state saying we want to put someone from another state into prison for helping someone get an abortion, and protecting California residents from, you know, damage from AI or from a company doing business in our state. The approach of SB 1047, SB 53, and many other state laws in California and elsewhere is: if you are doing business in our state, there are certain rules you have to comply with in terms of how you impact our residents.

And so, yes, does SB 53 have national impacts? It does. But our approach is, we want to protect California residents. And so if you're doing business in our state, you have to follow certain rules. And I think that's very different than, you know, we want to put you in prison if you help someone get an abortion.

And I understand you can, you know, try to say no state can ever do anything that affects anyone in any other state. I've never taken that position.

And I also think, frankly, you can distinguish them: it's bad to put someone in prison for helping someone engage in reproductive health services.

And it's good to protect the public from a chemical weapon event caused by a large language model.

Kevin Frazier: Certainly different contexts. I will say, though, that the fervor and the attention and the predictions about the importance of AI suggest that any changes, for example, to the training itself or the trajectory of AI will have not only nationwide consequences, but perhaps intergenerational consequences.

And so I wonder, when we think about the AI tech stack, from training to pre-deployment, to deployment, and then on to use, is there a certain line you would draw where you would say, ‘Hey, yes, regulation at this level of the AI tech stack, with its odds of nationwide and perhaps long-term ramifications, is something that is the exclusive domain of Congress’?

Scott Wiener: I mean, I think states have the inherent ability to say, if you're doing business in our state, you need to comply with the rules to protect our residents. Does Congress have the power to say, we're going to regulate and we're going to occupy the field, and states can't supplement or deviate from what we're doing?

Congress can have that authority. But that's theoretical, because Congress has not even come close to exercising it. I authored California's net neutrality law back in 2018; it's 2025, and there's still no federal net neutrality law, even though net neutrality has, like, 90% support. We passed a data privacy law in California in 2018.

It's 2025. There's still no federal data privacy law, which I think is just bizarre. And so we have various contexts where, you know, Congress has not acted. And the states have stepped up. We also have situations where the Republicans in the federal government decide they want to preempt without doing regulations.

So Ajit Pai tried to do that at the FCC, to say ‘we're going to eliminate all federal net neutrality protections, but we're going to ban the states from acting.’ That was overturned in court. And we saw Ted Cruz's effort to ban all state AI regulation without having a federal framework, and that was rejected 99 to one.

So, you know, Congress certainly has the ability. Is there really an inherent point where, like, the Commerce Clause kicks in and, even without congressional action, the states just lack the power? Maybe. But I do think states have broad police power to say we're going to protect our residents from harms, particularly where there's no congressional preemption.

Alan Rozenshtein: Lemme do one last cut at this federalism question, because I do think it's very important and a kind of very rich area. So I agree with you that, from a legal perspective, there's no bar to what California is doing. Congress has not preempted anything. And I tend to be pretty skeptical, based on at least my reading of the dormant Commerce Clause.

And I know Kevin and I have had some disagreements on this front, and, you know, TBD whether the dormant Commerce Clause really limits these kinds of regulations. The way I come at it, though, is thinking about what is a good long-term compromise between the need for federal uniformity, the need for national issues to be decided at the national level, state experimentation, and the need for states to be able to protect their own citizens.

And I guess I myself have fallen between the full preemption and no preemption positions. I don't necessarily mean congressional legislative preemption, but just how to think about who should do what. To me, it seems like if a state like California or my state of Minnesota wants to say, you know, we're going to regulate how data centers operate because they might have effects on electricity, or we're going to regulate how AI systems are used, and we don't want AI systems used in, you know, the rental process or the employment process or whatever the case may be.

That seems to me perfectly reasonable, and to fall on the side of the line of things states should do. But, and maybe I'm just repeating Kevin's point here, so I apologize if I am, when it comes to regulating these big questions of how AI companies are developing the frontier models, I totally agree with you as a legal matter.

Look, they chose to do business in California, so you can get your hooks in them. But that seems to be more of an accident of where they chose to develop. It's not that how Anthropic or OpenAI or Gemini or xAI builds the next frontier model especially affects Californians, 'cause it doesn't really affect Californians more than it affects Minnesotans. It affects all of us enormously. But isn't that kind of why we have Congress?

So this is kind of the circle that I end up coming to, or finding myself in, when thinking about, sort of, what is the optimal way for everyone to do this.

Scott Wiener: Sure. In an ideal world, we would have a functional federal government. I mean, I hate to have to even make that statement, but we would. And, you know, just to be clear, in the last Congress, when we weren't being governed by nihilists, we saw some pretty big things happen.

The Inflation Reduction Act, the huge bipartisan infrastructure law, the American Recovery Act: we saw some big things happen. So I don't want to say Congress can't do anything. I think when you have Democrats in control of Congress and a Democratic president who believes in government, they can do some good things.

We now have nihilists running the government. I hope that changes. But even when Democrats were running things, there was really no meaningful, significant tech regulation since, like, the 1990s. And so for whatever reason, when it comes to technology, Congress has been fairly paralyzed.

They've done some things. I don't want to dismiss that. But again, no data privacy law, no net neutrality law. These very basic things. And so, yes, ideally we will have a government not run by nihilists. And maybe the congressional politics of tech will shift so that they can do some things.

I want to make sure it's not going to be some bare-bones, de minimis, like, watered-down, ineffective law that then comes with preemption. That would be problematic. So federal action would be ideal, but I'm not holding my breath.

I will also say that California does have an important role to play. And I agree that having, like, a patchwork of 50 different regulations is not ideal. But California has a huge role to play because of our, no offense to Austin, but our dominant role in tech. It is dominant. It's still dominant, including in venture capital investment.

And since we're so dominant in innovation, we should also lead on safety, on smart regulation. And I'm not going to be that arrogant Californian who says all the other states should just defer to us and cut and paste what we do. That would be arrogant, and Californians sometimes do have that reputation. But I do think there's an argument to be made that we should collaborate as states and try to create that consistency. I think that makes a lot of sense.

Kevin Frazier: Well, we certainly have a lot to debate for another time. Maybe the next time I'm in SF. Or I will bring you to Austin so you can get some breakfast tacos and see a space that we clearly dominate in.

But I'll leave that for another day. For now, I want to talk about an even hotter topic among a bunch of law professors and law nerds, which is actual implementation.

In Colorado right now, there has been basically a complete breakdown in trying to figure out how they're going to effectively and efficiently implement SB 205, their major AI act.

We've seen delays, postponements, some degree of political fracas. So what gives you assurance that California is ready to go when it comes to implementing not only SB 53, but the myriad other AI-related bills that Governor Newsom just signed?

Scott Wiener: Yeah. Different bills have different enforcement mechanisms, whether it's private enforcement or the attorney general, and the attorney general does have a real role to play here. I think our attorney general, Rob Bonta, has been very clear that he wants to enforce these laws.

Not having private rights of action, and in the California legislature, probably in a lot of legislatures, it's very hard to get private rights of action through, does make it a little bit harder to enforce. But I think we will have enforcement. And I think there are enough people, enough eyeballs, on this that it'll be very public if you have labs that are blowing it off.

And with SB 53 in particular, my prediction is we'll have pretty good compliance. There could be disputes about redactions due to claims of trade secrets. You could have some companies that are abusive and redact out everything. And we'll have to deal with that as it comes.

Alan Rozenshtein: So before we move off the law, I do want to talk about the Cal Compute portion of it, because I think this should not go under the radar. I think this is a very cool idea, to have this public option for compute. You know, something very unusual about AI is that it's a major marquee technology that's been developed with very little, almost none, zero government support.

You compare that to the internet, for example, or semiconductors. And so I think it's great that you all are thinking about how to do this and create opportunities for these advances to not just be in the big labs, funded entirely through, like, VC money.

At the same time, there do seem to be some implementation challenges that I'm curious how you're thinking about. Two in particular come to mind, at least for me. One is just the sheer scale, right? My understanding is that the bill really sets up kind of a task force; the task force is going to think about it and come back to the legislature at some point, and the legislature is going to have to appropriate. In the 18 months that's going to take, if you just look at the scaling laws, the training runs are going to be in the billions, if not tens of billions, of dollars at that point. I mean, I've already lost track.

California is basically a country, right? It's like the fifth largest economy in the world, I think. Or somewhere in that range.

Scott Wiener: Sometimes the fourth largest. Depends on the day.

Alan Rozenshtein: My apologies. Right. It's a big economy, but there's a point at which even California taxpayers start saying, okay, this is a lot of money. So the first question is, do you worry about just the resources necessary to make this a reality?

And the second is, doing things is hard throughout the country, but doing things in California is often very hard, right? I mean, California does a lot of amazing things, but then you look at things like high-speed rail: not so great.

You know, how do we keep something like Cal Compute, with the building and the electricity and all of that, from kind of becoming another high-speed rail, I think I'm allowed to say, debacle at this point. Though of course you can feel free to disagree with me.

Scott Wiener: Yeah, a lot of good things happen in California, too. But absolutely. We do, in California, have to figure out how to make government consistently work.

You know, I think there's bipartisan support, essentially, for an industrial policy in this country, which I think is a good thing. And we have to be able to make that work. And I know there were some issues with implementation of the CHIPS Act, for example. And we have to learn lessons from that. And by the way, we're working on some permitting reform for high-speed rail to get that moving.

So sometimes California needs to just get out of our own way. Because we have a way of erecting unnecessary obstacles just because.

In terms of Cal Compute, the good news is people are excited about it. More than one UC campus has already told me that they want to host it and that they're excited about hosting it. So that's a good thing. And that was step one, 'cause, I believe, I'd have to look at the bill again, it's like a request that the UC host it, because under our constitution we're not allowed to mandate that they do it. And so they seem to be excited about doing that.

More broadly, we have an insane president, and insane people around him, who have decided that they want to just destroy the country's science capacity. They're methodically destroying every federal science agency, and they're trying to cut university science funding.

It's just, I mean, insane. So we're really focused on, how do we bulk up California's science capacity and make sure that we retain and expand our leadership on science? No offense to Texas, but we're way ahead on that. And we need to be even further ahead.

Alan Rozenshtein: I'm just gonna stick up for Minnesota. We invented the supercomputer, but it's fine. It's fine.

Scott Wiener: Minnesota's great. Minnesota's great. Texas is doing horrible things right now, but I know that's not Austin. That's Texas. And I think this fits into that. It is a great opportunity for California to, like, triple down on our leadership around AI innovation, and science more broadly.

And we structured Cal Compute so that it can be a public-private partnership. And so I think there are opportunities. But I also don't want to sugarcoat it: we have to deal with the funding. By creating this program, though, we at least get our foot in the door and can then figure it out going forward.

Kevin Frazier: You know, Senator, I do have to say that I think some of your innovative companies there in San Francisco are very happy with the data centers we're making available here in Texas. Hence, again, the need for some national harmony on AI governance. But like I said, this will not be the last time we're calling on you.

Scott Wiener: I will also say, and I have said this, 'cause I've been very critical of California around permitting in general, but especially permitting for clean energy: I think it's humiliating that both Texas and Florida, which are smaller states than California, and which both have governments that are climate deniers, both produce more clean energy than California.

That is humiliating to California. And I say that all the time.

Kevin Frazier: So, Senator, you've obviously thought a lot about a range of policy topics, and have in some ways become California's AI guy when it comes to new legislation. Should we expect anything from your office in 2026 in this area, or what are you already contemplating?

Scott Wiener: We don't have any specific plans for 2026. We'll see what happens. There will be AI work next year and I have, as I mentioned earlier, a broad array of colleagues who are in the space. And so I'm confident there will be good work next year.

It's too early for me to say. I'm still basking in the glow of the fact that this is my first year ever where I had no vetoes. The governor signed a hundred percent of my bills. It's never happened before, so I'm basking in that at the moment.

Alan Rozenshtein: Well, Senator, I think we're going to have to end the discussion there. Thank you so much for coming on the show. I suspect I speak for Kevin when I say it's an enormous pleasure to talk to a policymaker who's staking out really interesting positions, is very much in the weeds, and that you can really get into it with.

So, thank you very much and we'd love to have you back at some point in the future if you're willing to come back on.

Scott Wiener: Thanks for having me, and I would love to come back.

Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a material subscriber at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky.

This podcast was edited by Noam Osband of Goat Rodeo. Our music is from ALIBI. As always, thanks for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Scott Wiener is a California State Senator. He was elected in 2016 and represents the San Francisco Bay Area.
