Cybersecurity & Tech

Scaling Laws: A Year That Felt Like a Decade: 2025 Recap with Sen. Maroney & Neil Chilson

Neil Chilson, Kevin Frazier, James Maroney, Alan Z. Rozenshtein
Tuesday, December 30, 2025, 10:40 AM

Connecticut State Senator James Maroney and Neil Chilson, Head of AI Policy at the Abundance Institute, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, and Alan Rozenshtein, Associate Professor at Minnesota Law and Research Director at Lawfare, for a look back at a wild year in AI policy.

Neil provides his expert analysis of all that did (and did not) happen at the federal level. Senator Maroney then examines what transpired across the states. The four then offer their predictions for what seems likely to be an even busier 2026. 

 

This episode ran on the Lawfare Daily feed as the Jan. 9 episode.

Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Alan Rozenshtein: It is the Lawfare Podcast. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota and a senior editor and research director at Lawfare.

Today we're bringing you something a little different: an episode from our new podcast series, Scaling Laws. It's a creation of Lawfare and the University of Texas School of Law where we're tackling the most important AI and policy questions, from new legislation on Capitol Hill to the latest breakthroughs that are happening in the labs.

We cut through the hype to get you up to speed on the rules, standards, and ideas shaping the future of this pivotal technology. If you enjoy this episode, you can find and subscribe to Scaling Laws wherever you get your podcasts and follow us on X and Bluesky. Thanks for listening.

When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's, it's not crazy. It's just smart.

Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.

Kevin Frazier: Who's actually building the scaffolding around how it's going to work, how everyday folks are going to use it?

Alan Rozenshtein: AI only works if society lets it work.

Kevin Frazier: There are so many questions that have to be figured out, and nobody came to my bonus class. Let's enforce the rules of the road!

Kevin Frazier: Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy.

I'm Kevin Frazier, the AI innovation and law fellow at Texas Law and a senior editor at Lawfare, joined by my co-host, Alan Rozenshtein, associate professor at Minnesota Law and research director at Lawfare.

If 2025 taught us anything about AI policy, it's that vibes matter. Action or inaction at the state and federal levels ebbed and flowed as different officials and interest groups responded to varying and complicated public narratives around AI. One example: a quick vibe shift brought talk of a 10-year moratorium on certain state AI bills to a swift close.

To help make sense of these trends, vibes, and more, Connecticut State Senator James Maroney and Abundance Institute's head of AI policy Neil Chilson joined the pod.

To get in touch with us, email scalinglaws@lawfaremedia.org. And with that, giddy up for a great pod.

[Main episode]

Kevin Frazier: Alan, we're recording on December 29th, so Christmas and Hanukkah have passed, and yet we get to unwrap the gift of having Senator Maroney and Neil Chilson on the podcast all at once. My gosh.

Senator Maroney, Neil, welcome to Scaling Laws.

Neil Chilson: It's great to be here.

James Maroney: Yeah, thanks for having us.

Kevin Frazier: Alan, how did we get so lucky?

Alan Rozenshtein: I, I don't know. But I am grateful every day. That's what I'm grateful for on my look back on 2025.

Kevin Frazier: Wow. Now we're adding Thanksgiving—you're really making this the holiday season episode.

We have so much to be thankful for, but we have even more AI policy to talk about. So Neil, our goal today is to get a sense of what the heck happened in 2025 and what that might suggest for 2026.

And few people have talked more about what's going on in Congress and what's going on at the White House with respect to AI policy than you.

So, in your best truncated version of what the hell happened in what felt like several lifetimes, what occurred at the federal level with respect to AI policy in 2025?

Neil Chilson: Yeah, I mean, wow, it's been a wild year. It's actually hard for me to believe—I was going back through it in prep for this—how much ground was covered.

So I'll do a very quick high-level overview and then we can talk about some of the major touchpoints over the year. We could talk about what happened at the federal level all day, but what happened at the federal level actually has a lot to do with what happened at the state level, so I definitely don't want to chew up all the time on the federal stuff, because I know Senator Maroney has a lot to say about the state-level work that happened, and it's very intertwined with what happened at the federal level.

So I'll just jump in. I'd say obviously the Trump administration kicking off was a sea change in the federal approach of the executive branch especially towards AI as a technology. Very much shifted from, you know, frontier AI safety oversight to wanting to accelerate the technology and deregulate in this space.

And so that tone played out across the year in a bunch of different ways. We could talk about that.

The first thing that happened, I think very early on—actually the first day of the Trump administration—was the repeal of the Biden executive order and some of the other executive orders that Biden had set up.

And with the promise that more was going to come. Also in January, right out of the gate, there was the announcement of Project Stargate, which is this $500 billion private investment—but Trump was part of the announcement for it. And that really kicked off—I think it really signaled how the Trump administration was going to approach this technology.

And then, also in January, adding to the swirl of the discussion, DeepSeek, the Chinese company, launched one of their very powerful open-source models. And that really—I remember having a lot of discussions with people on the Hill and in the executive branch about what that meant for policy.

So that's kind of the early context—right out of the gate—in which the Trump administration entered this. We had a lot of follow-up later that winter and into the spring. There's, you know, J.D. Vance's speech in Paris, which was a barnburner aimed at the Europeans about how the U.S. needed to lead in this technology, and how leading was largely about a deregulatory approach—an approach that wasn't so much about safety as it was about innovation and investment.

Kevin Frazier: And just to pause there for a second, because I'm having to pinch myself to remind myself that this all occurred in 2025. I'm already like, no, Neil, you got it wrong, man. The DeepSeek moment has to have been at least seven years ago. There's no way that was earlier this year.

And I wonder—from your perspective, Neil, and Senator Maroney, I'd love to hear your take too—at that point in time, following the DeepSeek moment, following the rescission of the Biden-era AI EOs.

It still kind of felt like there was some degree of consensus, or at least not dissensus, on AI policy. Is that correct? Just in terms of trying to get a sense of what the vibes were as we saw this year progress, I think it is really fascinating to see just how quickly the narrative started to shift.

Neil Chilson: Well, I think there was, I don't know if there was consensus.

I think that it was still—maybe there was sort of consensus among the loudest voices, who up to that point had largely been AI safety folks, people who were really worried about frontier models. Those had been the primary voices in this, I think, at the federal level. And that was consistent with what was happening in the Biden administration.

I think we started to see that turn, in part because of the Trump administration saying that wasn't the right approach, but then I think it also became much more like a normal tech policy fight in some ways, where it was about innovation versus regulation. Whereas up to that point, with a lot of the safety debate, that really hadn't been the trade-off. And so I do think that was a sort of sea change.

Some of that just has to do with, you know, reactions to Trump as an individual and to his positions, but a lot of it also had to do with the shift back to the old innovation-versus-regulation dynamic.

But lots more happened after that. I'm happy to keep going.

Kevin Frazier: Yeah, keep it, keep it rolling.

So, I mean, we come out of the AI winter—referring here to just the season of winter—into the AI spring, and we get this AI Action Plan that we'd all been waiting for since the original AI EO on day one.

Neil Chilson: Yeah. And that followed up—you know, that followed both the sort of rebranding of the AI Safety Institute, another action that sort of signaled that we were moving away from safety being the primary concern.

It also followed Congress actually acting and passing the Take It Down Act in May, which is a big, you know, an important piece of federal policy as well.

And so, going into July, the AI Action Plan really did outline Trump's vision for this. And it's a 28-page piece, so it's not super long—it's not like an executive order. But it did really outline these three pillars in which the administration wanted to act.

Those are around, you know, deregulation and innovation; the need to build infrastructure, tying together the energy discussions and the artificial intelligence discussions; and then also, you know, how we're going to expand internationally, or how U.S. technology is going to interact with the international sector.

And there were EOs that sort of addressed each of those issues, and in addition addressed this 'woke AI' issue that the Trump administration had been talking about—some folks had been talking about it even during the campaign for the presidency. And so that was July, very active.

August, September, October, there were a lot of debates about export controls. I would say that's where a lot of the action was. But I can't skip the fact that, as part of the Big Beautiful Bill, probably the most electric moment I had this year was around this whole inclusion of a potential moratorium.

The House passed that in late May. There was a debate that went on in the Senate, back and forth, with a lot of modifications to what this provision looked like. It ended up mostly looking like a spending condition—much less of a, you know, a ban on states doing something. But ultimately, with a sort of last-minute, last-second negotiation agreement and then backing out, it failed, right?

And so, like, it did not happen. And so the states continued to move along, and I look forward to hearing Senator Maroney's perspective on that. And then, moving into the fall, that concern, that drumbeat about what was going to be the relationship between the federal government and the states, continued to be discussed.

We ran into November. There was a sort of test case of whether or not there was going to be consideration of including something like a moratorium or preemption in the NDAA.

That did not happen, ultimately. But what did happen was an executive order coming out of the Trump administration called 'Ensuring a National Policy Framework for Artificial Intelligence' that kicks off a bunch of other actions across the federal government, with the goal of creating incentives or disincentives for states not to build additional regulatory structures for AI, and a plan for, at least, the White House to lay out what they think the framework for AI governance should look like at the federal level. And then, you know, put it back in Congress's court, where it really ultimately does belong, to set the national framework.

And so that's a very high-level gloss on what happened. I'm sure we'll get into some of the dynamics there—there were lots of politics, lots of policy fights.

It's been a really interesting time to be in this space. And that's just the federal level; the state level is a whole 'nother ball of wax, so, yeah.

Alan Rozenshtein: Yeah. No that's great. That's a fabulous overview, Neil, and we'll get to the state discussion in a few minutes.

But I want to actually use this to kind of focus on Congress a little bit, because most of the things that you talked about, and I think correctly so, were about the executive branch and what the executive branch did.

And obviously there was some congressional action, the Take It Down Act is important, though I mean—I think it'd be hard to say that it's a sort of comprehensive attempt at AI regulation. It's taking a bite out of an important but pretty narrow slice of a problem associated with AI.

And so I think this raises the question of why we haven't seen more from Congress. I think, I think there have sort of been two different accounts that have been put forward.

I think the dominant account is that we haven't seen anything from Congress because Congress is dysfunctional and so Congress just can't get its act together. I think there's a different account—

Yeah.

—And maybe this one that I've been more sympathetic to, that what we're seeing from Congress is a kind of tacit approval of the status quo, a tacit approval of the current deregulatory environment.

Obviously, you can agree or disagree with that as a policy matter, but to me it does reflect that Congress, or at least congressional leadership, has—and this may change in 2026, but at least in 2025—has been okay with the status quo.

So I'm curious, Neil, from your perspective, what you think explains Congress's relative lack of affirmative action on AI policy issues? And then I want to go to Senator Maroney, because I think—I'm curious from a state legislator's perspective, looking at Congress, sort of how you in particular interpret what Congress is doing, or perhaps more importantly not doing.

So Neil, let's start with you and then we'll go to Senator Maroney.

Neil Chilson: Yeah. I think I'm pretty sympathetic to the second explanation as well. If you just measure by the number of hearings Congress has held to talk about AI, they've been very active.

And that goes back before, you know, the Trump administration. Both the House and the Senate had very comprehensive sets of hearings and came out with reports. And when you look at those reports, the substance is, 'we should do something, but we're not 100% sure what to do yet, because we're not 100% sure where the biggest problems are gonna lie and where there should be action.'

And where there was really broad agreement about a problem, like non-consensual deepfake pornography, Congress did act, right? And so I think that's primarily what I'm seeing here: Congress is looking at the landscape, and they're saying, 'we're not seeing problems that are so intense at this point that we need to step in and create a new federal framework.'

The problem that I think many people have been seeing—and this includes at the leadership level—is more about the balance between federal and state here. And so to the extent there is a growing concern about a specific problem, I think that is one of them—not the only one, but one of them—that we're getting a patchwork of laws here. And that might create some pressure for federal regulation.

And so I don't see this as a measure of dysfunction. I've often said—you know, people are saying like, you know, Congress is dysfunctional and they're not acting here.

And I'm like, well, Congress chooses when to act and when it doesn't. And inaction is an outcome of the democratic process as much as legislation is. So I tend to think it is more the latter: Congress hasn't identified a specific set of problems that require a big overarching construct here.

Although it has identified some specific problems where it has tackled them.

Alan Rozenshtein: And interestingly, just to jump in on that before we go to you, Senator Maroney—I think you could probably make that argument on both sides. So you can say Congress hasn't decided that there is a specific AI problem that needs regulation, okay.

But you can also say—and I think this really tees up nicely into, Senator Maroney, I'm curious your perspective on this—Congress also hasn't identified that state regulation is such a problem that it requires full-throated congressional preemption, right? We can put sort of questions about the dormant commerce clause aside.

That's not really a congressional issue. And so whether you're an AI regulatory skeptic or you're pro-AI regulation, I think you can look at what Congress is saying—or doing or not doing—as supporting your position. And so that's a thing, I think, worth thinking about. But Senator Maroney, jump in.

How do you, from where you're sitting in sunny Connecticut, view what Congress is or is not doing?

James Maroney: Yeah. I think it is the status quo when you look at tech regulation, right?

I mean, you can go back—before the Take It Down Act, the last major piece of comprehensive federal privacy legislation was COPPA in 1998, the Children's Online Privacy Protection Act.

I think there's broad agreement that something more should be done to protect children. You know, remember, 1998 was before social media, before the iPhone, before all of these different things—and that's what we're relying on. And there is consensus that we need to protect children, but they haven't done anything federally, right?

A number of the states have acted, and I think that's similar to what we're seeing when it comes to AI regulation. And, you know, you can read back in the history of the argument over data privacy legislation—the same arguments are coming back as were made back then. 'Regulatory leader' versus—you know, they use 'regulatory leader,' we say we're an 'innovative leader,' right?

We're repurposing some of these arguments; they're coming back. So I think it's just something we've seen in tech regulation. It's not necessarily dysfunction at this time. You know, we always love to say it's the worst time right now, or whatever, and that there's so much dysfunction.

But I think we can go back through history, and there have always been arguments and a level of dysfunction. So, I just don't think—you know, there is bipartisan agreement that we should do something. It's just, what is that something, you know?

And when you go to the people, right—most of the polls, like when Pew does its surveys—usually it's around two-thirds of people who think that government won't go far enough in regulating AI. It's just, again, what does that mean? We can't agree on what that means and what we should do.

Kevin Frazier: And just to stick with you for a second longer, Senator Maroney, I know you've been on the Hill, you've been coordinating with other state legislators and actively meeting with members of Congress.

Over the course of 2025, with a particular inflection point at the moratorium debate in July, how did you see AI become more and more salient for members of Congress who otherwise were listing other things on their agendas and on their websites—and then suddenly everyone was the AI expert, or putting this near the top of their blog posts, and so on and so forth?

How did you see that over the course of the year?

James Maroney: Yeah, it's interesting. And you're right, you notice a lot more congresspeople—federal legislators—who are now making AI a priority for them.

And I think it's entered—I mean, it's been in the national consciousness, right? But we're seeing more examples of harms. And again, I am an AI optimist, not a pessimist by any means, but we are seeing real-life harms, right?

You're seeing, you know, the Adam Raine case, where he died by suicide. Sewell Setzer, who died by suicide, encouraged by Character.AI.

And then in Connecticut we had a murder-suicide where ChatGPT validated a man's delusions about his mother and didn't talk him down. So I think as you're seeing some of these egregious harms coming out, there are people who are like, yeah, we need to do something about that. But how?

You know, again, that's what we don't necessarily agree on, right? The what and the how. It's 'something needs to be done.'

And I think until problems become visible, right, they don't become as salient or, you know, as recognized. I remember reading a book once about surgery—there were two major innovations that came around at about the same time.

One was the realization that we needed to be cleaner, right, sanitary. The other was that anesthesia was developed. And surgeons adopted anesthesia more quickly, because they could see a patient writhing in front of them in pain. But when a patient died of an infection, they had gone home, right? It was two weeks later before it showed up and they died.

But now—again, that moment, November 30th, 2022, when ChatGPT was launched, everyone could see it. And for most people, AI means ChatGPT, right? Let's just be honest. It means generative AI, a chatbot—versus the AI that's been around for a long time, powering our Google Maps, powering so many different things.

But once you could see that power, right, in your hand, in your own phone, it became part of the consciousness. And I think it's the same for more of the elected officials, once they've seen some harms to their constituents.

And it's the same with, you know, the victims of non-consensual intimate images, right? When you have constituents call and complain that this happened to their teenage daughter, it becomes more real for people. And you agree that you need to do something.

And again, we always like to say we want to do something. It's just figuring out, you know, definitions and what that something is—that's where we can't agree.

Kevin Frazier: And for the record, I would've been an early adopter of anesthesia. Just to make that clear, I would've, yeah. Sign me up.

Neil Chilson: Washing his hands, not so much. Not—

Kevin Frazier: Yeah. Washing my hands, overrated, overrated. Just knock me out.

Alan Rozenshtein: I don't know. I'm still pretty convinced by the miasma theory. But then again, I have two small children, so they're just, yeah, that's my lived experience.

Kevin Frazier: There you go.

Well, Neil, it wouldn't be a good end-of-year episode if we didn't force our guests to try to make sense of very complex, disparate trends and neatly summarize it all.

So thinking about the White House's posture with respect to AI, we had the AI Action Plan. Most recently, we had the AI EO directing the DOJ really to prioritize challenging state AI laws that may be unconstitutional or otherwise unlawful.

Then we get the Project Genesis announcement, directing the Department of Energy to really spearhead AI for science and some of these materials science, data science, drug development-type initiatives. And then of course we have the very convoluted world of chip export controls, which seemingly makes it easier for China to get its hands on some of the more sophisticated chips, albeit not the most sophisticated chips.

What should we take from all of that? What is the Trump AI doctrine, if there is one, and can we get our hands around it? And do we think it'll be around for 2026?

Neil Chilson: So I think if you were to cut across all of those, the theme is that AI is a normal technology.

It's a super powerful, really interesting, compelling technology that will probably transform the world like the internet did.

But it is a normal technology in the sense that this is not a separate entity. This is not a separate entity that is, you know, going to develop and is going to compete with humanity.

And that had been the sort of sci-fi frame around the AI safety space for a while. But the Trump administration, I think, just does not think that way about this technology.

They think of it as something that we need to work through in all the processes that we have, but that has a huge amount of promise. It's something that we need to invest in and innovate and we need to lead the world in. But ultimately, it is a normal technology in that it is one that is managed and developed and created for and by humans.

Alan Rozenshtein: So would you say then that 'this is a normal technology and we want more of it' is kind of the throughline? Because at the very least, that's how I see it, right? 'We want less regulation, we want more of its use in science, we want to sell more chips to China. We want more AI, even if that AI is going to be used by our peer adversary.' The answer is just, 'we want more of it,' right?

And again, we can have a debate about whether it's good or bad. But you know, I think what Kevin and I are trying to get at is: is there sort of a Trump doctrine on AI?

And if so, is “more” a reasonable articulation of that doctrine in your view?

Neil Chilson: I would say 'more American AI' is the doctrine. More American AI.

Alan Rozenshtein: And so then from that perspective—again, let's just assume that is the goal—you know, if you're David Sacks doing his sort of end-of-year 'how did 2025 go?' review: do you think that, from the perspective of more American AI, this White House has done a good job?

And let me be maybe a little bit more concrete about where I'm coming at this from. You can look at some of the policies—let's say a lot of the AI Action Plan, this Project Genesis stuff—that seem to be straightforwardly, 'yes, we're gonna support AI and we're gonna try to improve the zoning process for data centers.' Fine, fair enough.

But then you look at other parts of the administration's strategy—and again, taking more American AI as granted here—let's say the export controls. They'll certainly be selling more American AI chips to China in the short term; whether that leads to more American AI over the long term is hotly debated.

Or you could look at, let's say, the preemption EO. You could say, okay, in the short term, clearly that's meant to create more American AI by getting in the way of, you know, the 'Senator Maroneys' of the world. But you could also imagine that it backfires and alienates potential allies, whether in Congress or in the states, and short-circuits the hard work of working with Congress on real preemption, not this somewhat more tenuous executive order preemption. And so I'm just curious how you would grade this administration on what it's done in 2025. And then I'll kick it to you, Senator Maroney, 'cause I'm curious about your answer to the same question. Are they succeeding at more American AI?

Neil Chilson: I think if you look at the stats, I mean, yes—the U.S. dominates in this space. It still does, all across the tech stack. We have some vulnerabilities, maybe, in the hardware supply chain, especially on some of the inputs to data centers, on the electricity generation side especially.

And so I think they wanna shore those up, but I think the stats would show that we are full steam ahead in a way that is very aligned with what the administration is trying to accomplish. Now, to your point: are these the right tactics? That's hard to know in advance, right? There might be some ways in which these would undermine their—

Alan Rozenshtein: The whole point, Neil, is I want you to speculate wildly and irresponsibly. That's the whole Lawfare way.

Neil Chilson: Well, I won't speculate—'cause I don't think I need to speculate. What is extremely obvious from this is that they're optimists about AI technology.

They want more, they want it faster. And they do not buy the idea that this is something we should be slow in pursuing, that we should be fearful about the technology. They do not buy that at all. And so I think they want more and they want it faster, maybe, than some people are comfortable with.

But that very much is their vibe. And if you measure against that, I think they've done a good job—certainly compared to, you know, the massive amounts of threatened red tape that would've come out of the Biden executive order, for example. So by their own goals, I think you could grade them well, at least so far.

Now, will some of it backfire in the future? Maybe. I think it's more possible on the export control side than it is on the preemption side. I don't see that sort of backfiring happening there. I mean, just judging from the history of privacy negotiations, like Senator Maroney was talking about, it's a hard thing to happen anyway. So being able to assign that to attempts by the administration to preempt, I think, is gonna be very hard. It might be a small drop in a big bucket that Congress is gonna have to lift. But I don't think it's decisive.

Alan Rozenshtein: And Senator Maroney, yeah, let me kick it to you. I mean, do you agree that the Trump administration is trying to do, you know, more American AI, and do you think they are succeeding by their own lights? And I think it's particularly interesting to ask you because—and this is a good segue to our next conversation, which is gonna be about state regulation—although you've obviously been a proponent of pretty far-reaching state regulation, I don't take you to be an AI skeptic, exactly. I think in some ways you're also quite an optimist. And so, you know, perhaps you also want more American AI, even if in a somewhat different form than this administration.

I'm just kinda curious on, on your thoughts on this last year.

James Maroney: Yeah, no, I agree that it seems the administration wants more, and as Neil said, it's not just more, it's more, faster. And I think, you know, they look at regulations as slowing it down, right? Whereas I take kind of, to paraphrase the John Wooden approach, the 'hurry up, but don't rush.'

Like, we obviously wanna innovate. We want to be the most innovative country in the world, right? We want to unleash all of the potential when it comes to improving people's lives, right? Helping people live longer with medical discoveries, with making us more efficient in how we work and how we deliver services.

But we don't wanna make mistakes. And so we don't want, you know, the 'move fast and break things' mantra when you're going into medicine, when you're going into making financial decisions and decisions that have real-world impacts. It's okay to slow down a second, I feel, and test it. And for a state like Connecticut, I see the true economic benefit of AI as adoption.

We're not going to grow the next OpenAI in Connecticut, right? Or the next Google. But we are gonna be able to utilize, you know, different automations and technologies—machine learning—to make us better at the things we do: to make us better at defense manufacturing; we are a healthcare leader, so to make us lead, hopefully, in insurance tech and health tech and in other areas.

And so, for me, I think most people wanna feel safe before they use something, and they want to know that it has been tested to be fair and accurate. And that's why—I'm not necessarily, I don't believe in banning anything; I'm more on the testing, right, the 'hurry up, we wanna innovate'—but I feel that to get the full benefit, we want people to feel safe so that they will use it, and we will have more adoption.

Kevin Frazier: Well, Senator Maroney, you teed me up to lean into one of my favorite topics, which is college basketball. And of course, UCLA and John Wooden had Lew Alcindor to help lead the team to manifold national championships. And if there were a single person—perhaps outside of your rival on the West Coast, Senator Wiener—you may be the point guard of state legislative efforts here.

And so, from your vantage point, running point, distributing ideas about how to regulate AI—and I promise I'll be done with the basketball metaphors.

James Maroney: No, I'm gonna go with it!

Kevin Frazier: There we go. Alright, I'm gonna pass the ball to you then, Senator, and have you give us your overview of what occurred at the state level. We saw bans in Illinois. We saw big proposals in California. We saw lots of things happen in New York. What the heck is the state of play at the state level with respect to AI?

James Maroney: It was a busy year. And actually, when Neil was going through everything that happened federally—as you said, Kevin—I couldn't believe that was all in just this year.

And I think the same when I think of all of the state bills that have passed—I forget that a lot of them were this year, right? And so, TRAIGA, the Texas Responsible AI Governance Act: it has some prohibited uses, but they also put in the sandbox, right, a pro-innovation piece, which we had seen from Utah before. And then out of Utah's sandbox came chatbot legislation on mental health chatbots.

And that's an area that was kind of big this year, but what I think will dominate next year will be the chatbot legislation, especially when you're looking at the children's protections next year, because the executive order—it was a little confusing, I think—did say it exempted things around children's protection.

So we'll see more of the chatbots. But Utah had the mental health chatbot law. As you mentioned, there was an Illinois chatbot law; I think it was also on healthcare licensed professions. In other chatbot legislation, California had two: one was vetoed and one was signed into law. New York state, in their budget, had chatbot legislation as well; theirs was around suicidality, suicidal ideation, and labeling. And then in Maine, Representative Amy Kuhn passed a labeling bill on chatbots.

Then there were the two big bills. You mentioned Senator Wiener—who would be the center, I guess? How tall is he, 6'6"? He's a tall guy.

Kevin Frazier: We forgot to ask if he could dunk.

James Maroney: Yes.

Kevin Frazier: Senator Wiener, come back on. We have some—

James Maroney: And I was a point guard when I played, so thank you. Pass first. And we do have a good basketball team in Connecticut, the UConn Huskies.

Kevin Frazier: You know, now that I'm thinking, we need an AI March Madness—a little basketball. David Sacks, you're also welcome. Come on, tell us whether you can go through the legs.

James Maroney: I think I've seen—what I think was Keir Lamont at the Future of Privacy Forum—put up a privacy March Madness, pairing bills and seeing which would pass.

But, you know, those frontier model bills: Senator Wiener's SB 53 in California was signed into law. And the RAISE Act—Rep. Bores, the assembly member in New York; the other sponsor I forget. Bores had passed that, and it was signed into law with some chapter amendments that made it, I think, substantially similar to SB 53. There were some differences.

So those were big bills. Another big one that passed this year, in a different vein, would've been Montana and Senator Zolnikov's Right to Compute bill—you know, you have the right to compute as long as you're not violating people's rights. And then the government use bills: there were still a lot of government use bills this year.

There were some of the pricing bills, in both California and New York state; New York state's was more just a disclosure. And that's a lot of what I think we're gonna see—the transparency. It will probably be a key theme next year, you know, the transparency.

In addition to the pricing bills, there were some states that did the rental algorithm—the price-setting algorithm—legislation. Connecticut actually did have one, passed in our special session as part of a housing bill. But there are a number of states that had done that last year as well.

I'm trying to think—what other bills am I missing? Okay, employment. California did have one, and from what I remember, it focused more on the employment context—task allocation and disclosures for using AI. But some of the bigger things were actually in the regulations.

So in California, the automated decision-making regulations came out, which were similar to, I would say, the Colorado AI Act. And there were some data privacy laws that did some things you could look at as disclosure when using profiling—Connecticut, with our data privacy update—but then also New Jersey's proposed regs looked at ADMT, automated decision-making technology.

So the automated decision-making work seems to have come through the data privacy bills or through regs; this last year there weren't as many that passed. New Mexico got close and then did not pass, and the way their sessions work—I think next year's a much shorter session—I don't think that's coming back next year, but it may come back in the future. I think that was Rep. Christine Chandler who had run that bill in New Mexico.

And let's see, the other sandboxes. So three states had the sandboxes, right? Utah was not this year, it was last year, but they had their first companies go through the sandbox—their first agreements, and proposed legislation out of that. Then Texas. And then Delaware is looking at creating a sandbox; that was in an executive order. Their AI task force is looking at that, and the task force is scheduled to deliver its legislative recommendations. Their sandbox wants to focus on agentic AI.

And so they're gonna make their recommendations in January. And I spoke with Rep. Krista Griffith in Delaware—I think she's the co-chair of that task force. So they'll be delivering them, and they're on pace to do what they were tasked with by the executive order. I'm not sure if I missed anything—let me see what I missed.

Alan Rozenshtein: Well, I think that's a great overview, and you know, obviously we have so many state legislatures and so many proposed bills, but I think it's a great lay of the land. I mean, one thing that I noticed, in your description and also just following this, is that a lot—I'd say the vast majority, at least—of these legislative efforts are really about AI harms. And obviously harms are bad and we should fix them, right?

There's been a lot less, though, about trying to affirmatively encourage the benefits of AI, whether it's, you know, zoning reform for data centers—obviously that's a complicated issue—or encouraging AI integration into parts of the economy or even into state government.

And I'm curious, (a) if you agree with me just descriptively about that, and (b) if so, why? Why is it that state governments seem more focused on tamping down the harms rather than encouraging the benefits? Is that because that's just how they see the relative priority, or is there something structural about state governments where it's easier to focus on harms rather than push benefits?

James Maroney: Yeah, no, I think there are some things going on, right, when you're looking at investing, and they're not necessarily legislative—they're also from the governors. New York state has put a lot of money in, right? Massachusetts, they've put a lot of money—hundreds of millions of dollars—into trying to incentivize AI use and growing businesses within their state.

But I think for a lot of legislators, yeah, the harms are what get the attention. We are looking longer term at education—there's some workforce development and education work. We've done some things in Connecticut, right, where we created an online AI academy with free training for our residents, and some other states have done similar things. But we've seen some states do some innovative things, like the LA Office of Innovation—and, you know, Maryland has an Office of Innovation, but they focus on innovation and government use of AI.

Louisiana, through their economic and community development efforts, is trying to help, but it's AI adoption in certain industries, right, working with them. Mississippi's done some interesting things and put some good money into education and workforce development. So we are seeing some of those things. I just think the harms get the headlines, right?

And I think that's why that's what we hear about more, unfortunately. And we're still trying to figure out a lot of, like, what's the best way to approach this in education. You have to look short term and long term, right? And long term with kids—like, how should we be teaching them responsible use?

Especially when you're talking K through 12, lifelong learning is the most critical. We know it's gonna change, right? And so, again, one of the good things here is we're going back to emphasizing traditional skills, like critical thinking, right? Those are becoming more and more important.

Not that they ever weren't important, but we know you're gonna have to update your skills. And when you go around to the different conferences, different companies—a lot of the top AI professionals who are speaking at these conferences were, like, French literature PhDs or philosophy majors.

They're not always just straight computer engineers. And so there's that diversity of thought.

Alan Rozenshtein: So I know Kevin is excited to talk about predictions and looking forward, but I do wanna ask one last question—for you, Senator Maroney, but also for you, Neil—and that's about the politics of this, and particularly kind of the partisan valence of this.

I think one thing that's been at least interesting for me to look at is that this is not obviously a left-right, Republican-Democrat issue. So obviously, Senator Maroney, you've proposed a lot of legislation, and some of that's been blocked by your own governor, who's a fellow Democrat—he was very complimentary towards you, but nevertheless.

And then on the other hand, you have people like Florida Governor Ron DeSantis, who I think is really trying to take the mantle of kind of right-wing AI skepticism. He's put forward this kind of AI bill of rights, and he's actually come out pretty strongly against the administration's preemption executive order.

So from your perspective, Senator Maroney, maybe on the state level—and then I wanna ask you, Neil, on the federal level—how do you see the politics of this shaking out? I mean, is this really a kind of free ball, right, as it were? I'm not the sports—Kevin's the sports ball guy on this podcast.

But I'm doing the best that I can. Probably the—

Kevin Frazier: Loose ball.

Alan Rozenshtein: Loose ball! You can tell I'm just the worst when you talk sports, man—absolutely the worst. Is this a loose ball, politically speaking? And if so, given how kind of brutally partisan everything is in our society, is that gonna last for very long?

Or, inevitably, is one side of the aisle gonna become the quote-unquote pro-AI side, and the other side gonna be the AI-skeptical side? So let's start with you, Senator, and then Neil jumping in right afterwards.

James Maroney: I think for the most part, tech legislation—one of the nice things, and why I enjoy working in this area—is bipartisan.

I think that we sometimes have different approaches with the same goal. And so, I would say, to go back to the sports: it's offense-defense, what you'll see in the next year, in that some of the more solidly Republican states may have more of the pro-innovation type legislation—the right to compute, perhaps, but sandboxes, right.

We're gonna see a lot of the sandboxes coming, and again, trying to encourage use and give more regulatory certainty to businesses, and then gathering data—like they did through the Office of AI Policy in Utah—to come up with future legislation when you see problems.

Versus there may be some more transparency legislation—so, defense—in some of the Democratic states. I think both will look at children's protections, right? That's why I do think chatbots—when you've seen harms to children—and children's privacy and other legislation like that seem to get more of a priority.

But I think for the most part it is a bipartisan issue; there are just different approaches.

Kevin Frazier: Neil, how about yourself?

Neil Chilson: Yeah, so at the very highest level, part of the challenge is that, because AI is such a vague cluster of technologies, you can kind of pick out the part of it you wanna focus on, and depending on what party you are, you're gonna say different things about it.

So there are very much optimists about parts of this technology in both parties, and there are pessimists about parts of this technology in both parties, so it's really hard to paint with a broad brush. I do worry a little bit about, you know, the sort of pro-innovation, go-faster, build-faster position being associated primarily with the Trump administration, and that reflexively creating a backlash in the Democratic Party—when I think, overall, there's no particular reason, at least for big chunks of this technology, that they should be concerned about it as, you know, a Republican technology of some kind.

And so I do think it might be getting more polarized, but right now it's not. And there are some specific areas where there is, you know, consensus—both in the 'let's do something about this' and also in the 'well, I don't know if we really need to do something about this.' And so I'm hopeful that will continue to be the case, not just because it's more interesting to work on as a policy person when it's not purely politics, but also because I think it leads to better outcomes.

Kevin Frazier: So, because it's an end-of-year episode, we have to end with predictions. And one of the things I hate whenever I listen to an end-of-year episode is that the hosts always take themselves out of having to make difficult predictions. So I'm going to do one quick one, which is to say: we are going to see more legislators realize the power of AI use cases and start to see that things like deep research can actually be incredibly valuable and useful.

This is emerging from a recent Politico article in which Senator Warren realized that AI might actually be helpful. And I'm optimistic that perhaps she is clearing the way for more folks to actually experiment with and use AI, which I think can lead to more nuanced conversations as we walk through this offense-defense debate.

But Neil, let's start with you and leave the final word for the Senator—unless Alan wants to jump in with his own prediction. But Neil, tell us—

Alan Rozenshtein: Well, I'll just go quick. My prediction is that we will not have preemption.

Kevin Frazier: Okay.

Alan Rozenshtein: We will not have federal preemption, and the White House preemption effort will almost entirely fail.

The funding restrictions will be blocked by the courts, dormant commerce clause cases will go essentially nowhere, and we're just going to have a bunch more state-level regulation, in bits and pieces, and the industry will just learn to live with it. That's my prediction for 2026.

Kevin Frazier: Hard and fast. Alright. Okay. Neil hit us.

Neil Chilson: My prediction is that data center buildout is gonna merge with environmental issues and local construction issues, and that's gonna dominate—like, when people think about AI fights in 2026, a big chunk of it is gonna be about stuff that is not really about AI.

It's about construction and energy production.

Kevin Frazier: And so you just—

Neil Chilson: And that's gonna be a problem. I, I think that's a big problem.

Kevin Frazier: Yeah. And you wanna keep things interesting for yourself—studying construction law and policy, just to keep your mind nimble. I like it. Running some suicides, as it were, for all those basketball nerds.

All right, Senator Maroney, give us a big, bad, bold prediction.

James Maroney: Right, I'm gonna start with basketball: UConn men and women both win the national championship.

Kevin Frazier: Oh my. Okay. Alright. You're going both.

James Maroney: I'm going both. And then I would say, on state-level AI regulation, there still will be legislation—a lot of legislation. The common themes are gonna be chatbots, and then more industry-specific rather than larger, sweeping bills.

I think you'll see—you know, whether it's healthcare, insurance, finance—industry-specific regulations popping up more and more next year. And then some transparency legislation, just on disclosures for use.

Alan Rozenshtein: And I'm curious, Senator, if you're willing: any previews of what you're gonna be focusing on?

James Maroney: Yeah. So it's a similar bill, three parts: protect, promote, empower. So, the protect: it's gonna be narrower than in the past. We will have chatbot legislation. The automated decision-making piece will be narrowly tailored to just employment, instead of broader automated decision-making regulations. We also may look at some disclosures in pricing, similar to New York state, for the use of, you know, dynamic pricing and others.

The promote: you know, we are looking at how we create a confidential computing cluster in Connecticut so we can share data in a privacy-enhancing way, to encourage research for healthcare. So that's one of the areas. We're looking at expanding our workforce development, and looking at increasing some professional development for teachers, as well as potentially—and it's all budget-dependent, so who knows what ends up happening—looking at the state's role. Right now, not all of the models are compliant with our state education data privacy laws. So having the state get some contracts to allow some of the school systems, right, to utilize the models there.

And then empower: we'll be looking at how we're effectively using AI in state government, right, to better serve our constituents. I have two fellows who've been working with me, so we're gonna look at some use cases that they're working on putting in place for some pilot programs. We wanna look at expanding those types of pilot programs and, you know, how we better serve—how we make it easier for people to find information and to get access to services.

Alan Rozenshtein: Terrific. Well, I think this is a good place to leave it. Neil Chilson, Senator James Maroney. Thanks so much for coming on Scaling Laws.

James Maroney: Thanks Alan. Thank you for having me.

Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky. This podcast was edited by Noam Osband of Goat Rodeo. Our music is from Alibi. As always, thanks for listening.


Neil Chilson is the Head of AI Policy at the Abundance Institute.
Kevin Frazier is a Senior Fellow at the Abundance Institute, Director of the AI Innovation and Law Program at the University of Texas School of Law, a Senior Editor at Lawfare, and an Adjunct Research Fellow at the Cato Institute.
Sen. James Maroney represents the 14th District (Milford) in the Connecticut State Senate. Sen. Maroney currently serves as the Co-Chair of the General Law Committee and serves on the inaugural Leadership Council of the Future of Privacy Forum Center for Artificial Intelligence. Prior to politics, Sen. Maroney founded and ran an educational consulting business in Milford.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and as a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.

Subscribe to Lawfare