Cybersecurity & Tech

Lawfare Daily: State Senator Scott Wiener on His Controversial AI Bill, SB 1047

Kevin Frazier, Scott Wiener, Jen Patja
Monday, August 5, 2024, 8:00 AM
Discussing state-level AI regulations.

Published by The Lawfare Institute
in Cooperation With
Brookings

Scott Wiener, California State Senator, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to explore his “Safe and Secure Innovation for Frontier Artificial Intelligence Models” bill, also known as SB 1047. The bill has become a flashpoint in several larger AI debates: AI safety v. AI security, federal regulation or state regulation, model or end-user governance. Senator Wiener and Kevin analyze these topics and forthcoming hurdles to SB 1047 becoming law.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Intro]

Scott Wiener: I've shown a real willingness to, to make significant changes to the bill. Unfortunately, there are others who are not engaging constructively, who are simply taking the approach of, you know, get off my lawn and just don't want any regulation at all of AI.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, assistant professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, joined by California State Senator Scott Wiener.

Scott Wiener: The existence of the, of Frontier Model Division has caused a lot of concern and anxiety in some quarters, and, and I am not wedded to that. So that's an amendment Anthropic has proposed that we are, you know, quite open to.

Kevin Frazier: Today we're talking about his Safe and Secure Innovation for Frontier Artificial Intelligence Models bill, also known as SB 1047, a controversial AI bill he's spearheading in the Golden State.

[Main Podcast]

Senator, you've fought some hard battles in your legislative career. In many cases, you've won and won big, earning praise from across the political spectrum, though mainly on the left. You now find yourself in one of the most heated political skirmishes you've perhaps ever fought. At a minimum, much of the tech world, including some of your constituents in San Francisco, stands in opposition to your bill.

And if you read between the lines, some may think that FTC Chair Lina Khan may be another person who is perhaps not exactly on your side. She recently listened to your defense of SB 1047, and while she declined to comment on the bill itself, voiced her support for protecting openness in the AI industry. So given the headwinds you've encountered, what's motivating you to persist with this fight? What's driving Senator Wiener to make this one of his key legislative efforts?

Scott Wiener: Well, thank you for having me, and thank you for the opportunity to talk about promoting innovation in AI while also acknowledging that this is incredibly powerful, transformative technology that can make the world a better place, but that also creates risks, and that we should not bury our heads in the sand, that we should be mindful of those risks, and that it's reasonable to ask the large AI labs to do what they have committed to doing repeatedly, which is to perform safety testing on their large AI models before they train and release them.

And so that, that's all we're doing with this bill. This bill does nothing more than ask the largest AI labs, when they're training and releasing huge, powerful models, to simply do what they have committed to doing, which is to perform safety testing and mitigate potential catastrophic risks.

That's what we're asking them to do, and it's perfectly reasonable. There is actually quite a bit of support for this bill if you ask people, including folks who are in tech, but there are a group of, of, of folks who are very opposed to it. I, I think whatever everyone's view is of AI, this is powerful, powerful technology, and we need to promote both innovation and safety.

Kevin Frazier: So you've emphasized that you do not want to cabin innovation, you've touted that you've long championed innovation in the state, and yet we have the folks who are supposedly the innovators themselves. If you look at groups like a16z, arguably Anthropic, and some of the other labs, Meta, they have come out either directly in opposition to the bill or perhaps questioning the bill or asking for some pretty large amendments.

Why do you think they're getting this innovation question wrong? Why are the folks trying to support these startups miscalculating when they say you're actually going to squash innovation in California?

Scott Wiener: Well, first of all, just to be clear, Anthropic has said in writing that they will support the bill if we make certain amendments, and as I've stated repeatedly in public, we're generally positive about the amendments that Anthropic has proposed. Listen, I, this has been a very transparent process. I started working on this a year and a half ago.

We actually took the extraordinary step of, of releasing a public outline of the bill very, very formally last September for the exclusive purpose of soliciting feedback from big tech companies, from startups, from investors, from academics, from activists, from anyone who wanted to comment, to say, tell us what you think of these ideas.

And we have engaged with anyone who will engage with us, and we have received some very good faith engagement from Anthropic, from GitHub, from various folks in big tech and small tech. And we've made significant changes to the bill along the way, including in direct response to concerns from folks in the open source space.

We've made very significant amendments, and we're very appreciative that Anthropic has come forward with feedback and ideas which, as I mentioned, we're favorably disposed towards. My goal here is not just to, it's not about winning or losing. I wanna get this right.

And so for the folks who have constructively engaged, we appreciate that and I've shown a real willingness to, to make significant changes to the bill. Unfortunately, there are others who are not engaging constructively, who are simply taking the approach of, you know, get off my lawn and just don't want any regulation at all.

And you mentioned Lina Khan. She, she, first of all, she spoke before I spoke, so she did not hear me talking about the bill, and I thought she took a very balanced approach where she, I think, absolutely did not say she was opposed to regulation of AI. She also said she supports open source. I support open source as well. And so there, there, unfortunately, in addition to the folks who are constructively engaging, which is great, there are people and, and organizations that have not constructively engaged and have put out fear tactics and misinformation about the bill.

Unfortunately, there are some within a16z who have participated in that, telling AI model developers that the bill will send them to prison, which is absolutely false. And yet they keep saying that, putting out information about how the bill will, will impose significant risk of liability that doesn't exist today.

That is inaccurate. AI model developers can be sued today and the potential liability that they face today is profoundly broader than the extremely narrow liability that is possible under the bill. So you know, it's politics. And I also understand that the tech industry does not want to be regulated, and that's why Congress has never enacted a data privacy law.

Here we are in 2024 and there's no federal data privacy law, which is supported overwhelmingly by the public, because the tech industry has incapacitated Congress from passing that law. There's no federal net neutrality law. There's, you know, very little with social media. They banned TikTok. It looks like there's now a child protection bill that's moving forward.

But we have had to act in California because the tech industry has prevented Congress from acting. And I am not just trying to shove something down tech's throat. We're working very collaboratively with anyone who will work with us.

Kevin Frazier: On that collaborative approach, I think you're spot on. Anyone who's tracked the bill can see that there have been significant amendments made in response to feedback, including changing the threshold for which models may qualify, which is obviously a huge deal.

Of the Anthropic amendments, I think one of the bigger ones is calling for not creating the Frontier Model Division, which would arguably have the main role in enforcing and, yeah, paying attention to this regulation. So is that one of the amendments you're open to?

Scott Wiener: Yes.

Kevin Frazier: Okay.

Scott Wiener: No, with the Frontier Model Division, it's, we, we created that division, it's not even its own agency, it's a division of the Department of Technology, in order to have, you know, an, an agency or a division that will receive the reports. And then they have one power, which is to, after a few years, adjust the size threshold. It's currently set at 10 to the 26 FLOP. They could adjust that, but not the hundred-million-dollar training threshold, which they cannot touch.

So the Frontier Model Division has very, very little authority. The attorney general is the one who really will enforce this, not the Frontier Model Division. But the existence of the, of the Frontier Model Division has caused a lot of concern and anxiety in some quarters, and, and I am not wedded to that. And so that's an amendment Anthropic has proposed that we're, you know, quite open to.

Kevin Frazier: So we know there are at least seven hills in San Francisco, and that's not one you will die on. Okay, duly noted. Of the other amendments, 40–

Scott Wiener: I think it's 48 hills, actually. 48 hills. Seven, it's seven miles by seven miles. 48 hills. Yeah, something like that.

Kevin Frazier: All right, all right, there's the complex geography. That's for our next podcast. For now, another major amendment I'd say that Anthropic has suggested is changing some of the requirements for what qualifies as a critical harm. They've called for excluding, perhaps, the use of models in national security contexts. What's your response to that exclusion that they're calling for?

Scott Wiener: There are a few items that they raise around that, 'cause we define critical harm as having to do with chemical, nuclear, biological, et cetera weapons; having to do with cyber crime causing more than $500 million in damage; damage to critical infrastructure causing more than $500 million in damage; or harms of a similar scale. In terms of national security, there are certain things in the bill that could be preempted under federal law, and certain aspects of national security could fall into that category. And so we're absolutely open to refining the bill to make that clear.

Kevin Frazier: So we may continue to see SB 1047 evolve throughout the month of August as it continues to receive scrutiny from the Assembly. But we know that SB 1047 isn't the only bill addressing AI in California. AB 3211, which would require generative AI systems to keep a log of any piece of potentially deceptive content, is also moving relatively quickly through the legislative process. So when we think about innovation in California, are you concerned that the cumulative effect of these bills may result in SFO being full of AI experts looking for greener pastures?

Scott Wiener: It's really interesting. Some of the critics of SB 1047 say, hey, don't regulate at the model level, regulate at the application level. If someone is using a model for something bad, a deepfake, revenge porn, algorithmic discrimination, whatever the case may be, regulate it at that level. My, my take is, of course we should, if someone's using an AI model or anything else to do something terrible, that should be illegal and there should be accountability for that.

But if the model can reasonably be designed in a way to reduce the risk that the model will shut down the grid or, or do whatever terrible thing, we should do that as well. The two are not mutually exclusive.

But I, I think what we're seeing is that some folks say, hey, don't regulate at the model level, regulate at the application level. Well, they're also opposing efforts to regulate at the application level. So, you know, query what that means in terms of some of the engagement that we're seeing and, and some of the desire simply to have no regulation in the public interest whatsoever. But in terms of, you know, the bills that we're seeing in the legislature:

There's the watermarking bill. There's a, a major bill about algorithmic discrimination. We have a bill about AI generated revenge porn and probably a couple of others. You know, of course we wanna make sure that all of this is coordinated, that it's all consistent.

I don't think that this is going to push, these bills are not gonna push AI innovation outside of California. SB 1047, and, I imagine, all of these bills, they're not limited to companies that are headquartered or doing AI development in California. That's not what triggers 'em. It's doing business in California. And so this whole notion that, you know, oh, they're gonna move, you know, to Austin or whatever the other, you know, flavor of the day is, I don't buy that.

I mean, we, we know that tech is spreading out regardless. That's been happening for quite some time. California is the fifth largest economy in the world. It is the absolute global epicenter of, of tech and tech investment, and having reasonable regulations protecting the public, that's not gonna drive this work outta California.

Just like when we passed the California data privacy law in, I believe it was 2018, because Congress had not acted. If you look at the opposition, it said it would drive industry outta California. Well, guess what? That didn't happen. So, you know, we wanna be, of course, mindful of, of wanting to support innovation in California.

And there are parts of this bill that specifically do that, and we're working with opposition to, you know, refine the bill. But I, I think this whole argument that if you do anything around tech regulation, you're gonna push the industry outta California, it's been proven not to be accurate.

Kevin Frazier: So given the nationwide implications of SB 1047, AB 3211, and any other AI legislation that may come down the pike, to the AI developer in Iowa or the AI developer in Louisiana, what's your response to: Well, Senator Wiener, that's great that you want those regulations in California, but why should I be subject to what a couple of folks in San Francisco and across the Golden State have to think about AI? This is a federal issue and should be decided at the federal level.

Scott Wiener: Well, first of all, those AI developers in Iowa or wherever else, they're highly unlikely to be covered by SB 1047. The, the bill only applies if you are spending more than a hundred million dollars to train your model. So if you're not spending a hundred million, you're simply not covered by the bill. And, and, and by the way, it's tied to inflation, so it'll go up over time. And, and so people need to really understand that.

But in terms of should this be handled at the federal level, absolutely. I would love for Congress to get it together to act in a number of different areas, not just AI, but as I mentioned, data privacy, social media. I authored California's net neutrality law in 2018 after Trump's FCC got rid of net neutrality protections. I wish that Congress would just pass a strong federal net neutrality law. Six years later, it hasn't done so. So yeah, this should be handled at the federal level, but the tech industry has made it impossible for Congress to do that. And so here we are in California wanting to protect the public, wanting to protect our state, and so we're, we're doing what we need to do.

And I think we're doing it in a thoughtful way with an open door, as you acknowledge, taking very significant amendments in response to feedback from folks in the AI sector, including in the open source sector. For example, we made an, an amendment making crystal clear that if you no longer have possession of a model, so if you open source the model and others are then using it, you no longer have a responsibility to be able to shut down the model, because that's one of the requirements of the bill, you have to be able to shut down the model that you develop.

But if you open source it and it's no longer in your possession, and someone else is using it, you do not have that responsibility any longer. In addition, if someone takes an open source model and fine tunes it to a significant degree, it's no longer your responsibility. It becomes effectively someone else's model.

So we have over and over again listened to feedback and made significant changes, and I anticipate we'll, we'll be making more significant changes in response to the Anthropic letter.

Kevin Frazier: Well, so I guess my friends in Des Moines and Baton Rouge can rest easy, at least in that regard. But with respect to the idea, well–

Scott Wiener: They should come to San Francisco too. It's amazing.

Kevin Frazier: You know, they've got an open invite.

Scott Wiener: Oh, housing is expensive. I apologize for that. We're, there we–

Kevin Frazier: So stay away from San Francisco, that's the message. No, I'm just messing. So with respect to the superiority, perhaps, in an ideal policy world, of Congress settling this issue, what sort of development would cause you to pump the brakes on pushing SB 1047? We've seen OpenAI has announced its support for at least three AI bills: increased funding for the U.S. AI Safety Institute, a bill supporting AI education initiatives, and one supporting an AI research resource. What if we saw one of those bills take off? Would you say, all right, we'll give, we'll give Congress some time to see if they can take on this regulatory challenge, or are you done waiting for congressional action?

Scott Wiener: Yeah, I think, I mean, Congress typically has the power to preempt state laws. So if in a year or two years or five years, Congress passed an AI safety law and said, we're preempting state laws, they can do that, just like they could tomorrow pass a data privacy law and preempt the California law if they wanted to.

Have they passed that data privacy law six years later? No, they haven't. They could preempt us on net neutrality. Have they passed a net neutrality law six years later? No, they haven't. So they have every ability to do that. And I don't expect them to, given Congress's track record.

Congress, by the way, does a lot more than people think. Congress has in recent years done a lot of really amazing things around infrastructure, climate, supporting working families. So I'm not saying, I don't buy into the idea that Congress doesn't do anything. Congress does a lot of things. But around technology in particular, the last major law that Congress has passed was in the 1990s, and since then, it's been like banning TikTok and now potentially this social media kids bill.

And so having bills introduced on AI, having, I'm glad that OpenAI is supporting some of these bills. That's terrific. But that doesn't mean that they're going to pass. And we also know I am gonna work really hard to, to help Kamala Harris get elected president of the United States.

If Donald Trump wins, he's already made clear, because the, the Republican platform is to repeal the Biden executive order, which is not binding, by the way, but it's still good. And they've, the Republicans have committed to repealing that executive order. Republicans in the House have been working to defund NIST. And, and so if the election goes in a certain direction, we could even see things moving backwards on federal efforts around AI safety.

Kevin Frazier: As the bill stands now, there's still a tremendous amount of ambiguity, and you've emphasized that some of these things will have to be worked out over time, including, for example, training costs: what will be included in that $100 million threshold for when you've crossed that bridge. And some commentators, fearful of the impact on AI development, have said that big labs, big tech, are going to dominate that after-the-fact process of refining some of the definitions within the bill to benefit them. What are you doing now to try to assist small labs in having a voice, those startups, those innovators, in refining some of these terms, if the bill gets enacted?

Scott Wiener: Well, there's the bill. I, I don't agree that the bill is, like, super ambiguous, and we'll continue to work between now and the end of August to, if, if, you know, if folks think there's anything that needs to be tightened, we wanna hear about that.

But we've been working very hard to tighten things up. So this is not a situation where some regulatory body is gonna be able to rewrite all sorts of aspects of the bill. You're always gonna have to strike a balance between being prescriptive in the bill and having some flexibility, and people are gonna criticize you if it's too prescriptive or too flexible. And, you know, we, we actually require open source representation on, there's an open source body that's being created.

So I, I, I think the whole goal is to have that kind of diversity of, of representation. There's been a narrative about this bill that it's some sort of regulatory capture by big tech. Of course, Google and Meta are opposing the bill, so it would be an odd thing for them to oppose a bill that's gonna allow them to engage in regulatory capture. I don't think there's gonna be any regulatory capture here.

Kevin Frazier: And you've emphasized that the bill is not intended to quash in any way open source model development, while you've also stressed that the animating factor is the prevention of some of these catastrophic risks. We recently saw Llama 3.1 get released. It's an incredibly capable open source model. Are you thinking now that perhaps there should be a more stringent approach to open source if we see that these sorts of models can have such extensive capabilities and are now going to be even more broadly available? If the goal is to prevent these catastrophic risks, might now be the time to be more hands-on with respect to open source models?

Scott Wiener: Listen, I know there's a whole debate happening around open source. There are people who hate open source. There are people who love open source. My view here is I support open source and I'm, and I'm, I'm not in any way opposed to open sourcing. I think open sourcing has huge potential benefits in terms of democratizing AI in terms of allowing really smart people to look under the hood and make improvements to a model, including improvements around safety.

So I'm not a critic of, of open source, and I know a lot of startups in particular really rely on open source, on the open sourcing of models, you know, including, including Llama. And so we also need to acknowledge that open source models, like other models, also can bring safety risks. And that's why we don't wanna exempt open source, because open source, just like any other model, perhaps in different ways, can cause good to happen and also harm. You know, so, so unlike some folks who wanna ban open source, I don't fall into that category at all. But I, I think we need to acknowledge that there are risks. Lawrence Lessig, who is like a major open source advocate, in technology in general, has raised concerns about Meta's approach in terms of releasing the model globally. So, and that's, that's his perspective, and he's a really smart guy.

I'm not going down that path. Whether it's an open source model or otherwise, perform the safety testing. Meta is one of the companies that went to Seoul, South Korea, that went to the White House, and has repeatedly committed to doing safety testing on its open source model, and we're simply asking them to keep their commitment.

Kevin Frazier: And with regard to that testing, a lot of focus has been drawn to the difficulties of what some say is proving a negative. How can you prove that a model isn't going to cause critical harms? What, what is your response to, yeah, to, to this take?

Scott Wiener: That, that is another sort of characterization of the bill that some of the opponents have put out. That's not accurate. You don't have to, wait, well, I forget, what was the word you used?

Kevin Frazier: Proving a negative

Scott Wiener: Prove, right. You have to prove that it's not gonna cause harm, you have to guarantee that it's not gonna cause harm, all of these extreme, categorical words that are used, that's not what the bill says.

The bill talks about reasonable assurance. This is not about guaranteeing that your model is not gonna cause harm. It's not about certifying that it can't cause harm. It is about conducting reasonable safety evaluation, determining whether there is a, a real, actual risk of catastrophic harm, and then, if so, taking reasonable steps to reduce that risk, not to eliminate the risk. Life is about risk. It's impossible to eliminate risk. And trying to eliminate risk can have its own very bad consequences, like undermining innovation. And that's why we don't require eliminating the risk or having, like, certainty that nothing can go wrong.

And so in terms of the testing, this bill does not, like, it's not that there's no testing and the bill says we're gonna will it into existence. The testing exists now. The, the labs, these labs say that they are testing, that they're planning to test, and they, and they've made formal commitments at the White House, in Seoul, et cetera.

And so I, I, I think that this, there's this narrative, like, oh, there's no testing, you're asking us to do something that's impossible, it doesn't exist. Well, they, they all say they're doing it. And so I, I, you know, I think what we're asking is being done and is perfectly reasonable.

Kevin Frazier: If we had an open source lab create Blama 3.1, some other open source model that's just as capable as Llama 3.1, how would they be able to show a reasonable assurance that a user, let's say a bad actor in a foreign country, wouldn't use it to further a critical harm, even if modified, less so than the threshold for qualifying them as being exempt?

Scott Wiener: Yeah, and, and to be clear, I'm not a technologist and I, I'm not the expert on how the testing works. And so you, you should absolutely ask someone who's an expert in, in testing to talk about how you can do that with an open, extremely large model. We know that there are different strategies, like, you know, doing extremely thorough red teaming, for example. They're doing that now.

And again, Meta says that they're doing this testing, so the people who actually know how to do this testing say that they're doing it. The bill is also flexible as to what kind of testing you do. The bill is not prescriptive, you know, red teaming is one example, but I know that there are, that there are others.

Kevin Frazier: So if you continue to receive feedback, as you've mentioned you're open to, saying that, you know, reasonable assurance with respect to some uses of open source models and some of these critical harms is just not feasible at this point, would that be something you'd be open to considering in further amendments?

Scott Wiener: I've had an open door for a year and a half now.

Kevin Frazier: You should probably lock your door.

Scott Wiener: I, I literally, like when we published our initial outline last September, I affirmatively like texted and sent it around, including people I thought might have questions or concerns because we want that feedback.

We're, we're near the end of the process now, but there's still time. And so we welcome the feedback and there's some people who are just categorically opposed, who are never gonna, you know, provide amendments. You know, a16z, I think falls into that category. They prefer to put a bunch of stuff on Twitter, which is fine. It's their First Amendment right to do that.

But there are other organizations, like Anthropic, like GitHub, like even Meta. Meta has been collaborative in talking to us and, and, and trying to brainstorm ideas. And we really appreciate that. And even if someone is opposing my bill, if they have a reasonable, good idea, I wanna know. I don't only listen to people who are supporting what I'm doing. I, I'm, I listen to anyone with good faith, constructive feedback.

Kevin Frazier: Well, before I let you go, I do have to ask, because you've mentioned that you're not quite as far as some may be with respect to, for example, limiting the risks posed by open source, but you're clearly concerned about catastrophic risks. So, Senator Wiener, what is your p(doom)?

Scott Wiener: I know that there has been a, there's a range. I forget what Lina Khan said. I think she said 15%, which is, you know, that's still concerning, right? And there are other people who say 40%, who say 1%. I, I'm not focused on doomsday. You know, we can talk about, you know, the, the doom scenarios.

I'm not a doomer. I think there are a lot of scenarios that are short of that. Helping shut down the grid, that's not a doomsday scenario, but that's a very tangible, significant harm that we can all envision, right? That's not robots coming and, and taking over and, and rounding everyone up and sending us off somewhere, or killing people. That's like a tangible thing that can hap-, that actually does happen today, right? There, there are criminals who do things like try to shut down the grid or shut down the banking system.

And, and so that's a very tangible thing that people can get their heads around today. It's very real, and that's what I'm focused on, those kinds of harms.

Kevin Frazier: Well, Senator, I know you have a busy August ahead, so we will leave it there.

Scott Wiener: Thank you for having me.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from ALIBI music. As always, thank you for listening.


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.
Scott Wiener is a California State Senator. He was elected in 2016 and represents the San Francisco Bay Area.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.
