Cybersecurity & Tech

Scaling Laws: Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier

Alan Z. Rozenshtein, Cullen O'Keefe, Kevin Frazier
Tuesday, February 24, 2026, 10:00 AM

Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy.

The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.


Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

This episode ran on the Lawfare Daily feed on March 6.

Please note that the transcript below was auto-generated and may contain errors.


Transcript

[Intro]

Kevin Frazier: It is the Lawfare Podcast. I'm Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, and a senior editor at Lawfare. Today we're bringing you something a little different. It's an episode from our new podcast series, Scaling Laws. Scaling Laws is a creation of Lawfare and Texas Law.

It has a pretty simple aim, but a huge mission. We cover the most important AI and law policy questions that are top of mind for everyone from Sam Altman, to senators on the Hill, to folks like you. We dive deep into the weeds of new laws, various proposals, and what the labs are up to, to make sure you're up to date on the rules, regulations, standards, and ideas that are shaping the future of this pivotal technology.

If that sounds like something you're gonna be interested in, and our hunch is it is, you can find Scaling Laws wherever you subscribe to podcasts. You can also follow us on X and Bluesky. Thank you.

Alan Z. Rozenshtein: When the AI overlords take over, what are you most excited about?

Kevin Frazier: It's, it's not crazy, it's just smart.

Alan Z. Rozenshtein: I think just this year, in the first six months, there have been something like a thousand laws—

Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it?

Alan Z. Rozenshtein: AI only works if society lets it work.

Kevin Frazier: There are so many questions that have to be figured out and—

Alan Z. Rozenshtein: Nobody came to my bonus class!

Kevin Frazier: Let's enforce the rules of the road.

Alan Z. Rozenshtein: Welcome to Scaling Laws, a podcast from Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota and research director at Lawfare.

Today I'm talking to Cullen O'Keefe, research director at the Institute for Law and AI, and to my very own Scaling Laws co-host, Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a senior editor at Lawfare. Cullen and Kevin have written a new paper and accompanying Lawfare article arguing that AI itself could dramatically lower the costs of complying with AI regulation.

We discussed the concept of automated compliance, the limits of compute thresholds, and a novel proposal for automatability triggers that would tie the activation of new regulations to the availability of cheap compliance tools.

You can reach us at scalinglaws@lawfaremedia.org, and we hope you enjoy the show.

[Main Episode]

Alan Z. Rozenshtein: Kevin Frazier and Cullen O'Keefe, welcome to Scaling Laws.

Cullen O'Keefe: Thanks for having me.

Alan Z. Rozenshtein: So you all wrote a really interesting paper about the effect of AI on potentially lowering compliance costs for regulation, specifically in the context of AI regulation. But before we get into that paper, let's just set the scene.

Lemme start with you, Kevin. What is the general problem of regulatory compliance costs, just outside the AI context? I mean, in the paper you provide some really striking examples. For example, you know, $55 billion for California's privacy law, or, outside the tech context, the quote unquote nuclear premium, which adds double-digit percentages to the cost of construction materials, and on and on.

So just describe overall what the current landscape of compliance costs looks like, and then how it maps onto the AI policy debates that we're all having.

Kevin Frazier: Yeah. So I think what's really important here is to frame that compliance costs vary by the size of your company, right? So for the sort of largest companies, let's talk about Meta, let's talk about Google.

Let's talk about OpenAI. They have whole compliance teams, oftentimes hundreds if not near thousands of lawyers, who are just paying attention to: what's the latest regulation? How can we streamline compliance with that regulation? And they're generally going to kind of float and get by whatever regulatory hurdles are thrown their way.

While that's going to be a substantial cost, as a fraction of their total operational expenditures, or as a fraction of their revenue and profits, it's kind of de minimis. And so they'll be able to comply in a fairly straightforward fashion.

But if you look on the other end of the spectrum and think about the startups, whether in the AI space or not, complying with any set of regulations is going to be a lot more onerous for any small firm. Because when you start something like a new business, your first hire isn't usually an attorney, right? We're expensive, we're not exactly fun. You don't wanna have us around. And so instead, what do you do if a new law gets enacted?

Maybe you just ignore it, and then you're kind of screwed when you're found in noncompliance, or you have to turn to outside counsel. And that means looking to a big law firm that charges big-dollar, big-law-firm fees. And suddenly, something as small as just updating your privacy policy, for example, may cost around $5,000 in outside counsel expenses.

And for a startup, that's a significant amount of money when the usual average operating expenditures for a startup are around $55,000 per month. And so compliance costs are really this question of, number one, how is it impacting you in terms of just those pure operational expenditures? But then, as we also point out in the paper, you have to pay attention to the opportunity costs.

All the time that you spend collecting the requisite forms, touching base with the right administrators, so on and so forth, that's time you could have spent doing other things, other more productive things for your business in particular.

Alan Z. Rozenshtein: So, Cullen, I mean, you've been involved in a lot of efforts to develop frontier AI regulation at your organization, the Institute for Law and AI, of which I should say I'm currently also a part, as is Kevin, in a kind of part-time capacity.

I'm not sure I would necessarily call you guys an AI safety organization, but I think it's fair to say that you're AI safety adjacent, or AI safety curious; certainly you're in a lot of those same conversations as AI safety folks. How do you, and maybe more generally the AI safety and AI regulatory community, tend to think about compliance costs, to the extent that you even do, and should you think about it more?

Cullen O'Keefe: Yeah. So as for ILAI, I think, it's right to say that we take AI safety related issues pretty seriously and have done work, kind of sketching out what forms of frontier AI regulation might look like. But I think we, and some, maybe not all, but definitely some of the actors in this space try to be attentive to how you could tailor frontier AI regulations to capture a lot of the safety benefits while also minimizing the costs on actors that are maybe not contributing as much to some of the frontier AI risks that we are worried about.

And historically, one of the main ways that people in the kind of frontier AI safety community have tried to thread that needle is by using something called compute thresholds.

This is a topic that I assume has come up on Scaling Laws before, but just to refresh your audience: the idea here is that AI systems can be trained with different amounts of compute. There tends to be a relationship between the amount of training compute and the capabilities, and therefore maybe the risks, of AI systems, and compute is also quite expensive, as people probably know.

And so one nice thing that you can do, potentially, is set what's called a training compute threshold, where you say that this type of regulation will only apply to models trained with, say, at least 10^26 floating point operations, FLOPs. And what this means is that this would only apply to firms that could afford that amount of compute.

And even though it's not an iron law or anything, those firms would tend to be the better-capitalized firms of the sort that Kevin kind of led with, and therefore might be better able to absorb compliance costs, and then firms operating below that threshold, you know, would be exempted. So that's one way historically that people have tried to address this problem.

So maybe one way of framing and motivating the paper is: can we improve on that as a methodology for differentiating between firms that can easily eat compliance costs versus not, or otherwise make the tradeoffs a bit more sensible?

Alan Z. Rozenshtein: Well, let's stay on the compute threshold point for a second, because, as you point out, that has been the standard way of doing it, and it has certain intuitive appeal.

But you all point out in the paper that increasingly that may not be a useful way of distinguishing, on the one hand, the models that we are potentially worried about and, on the other hand, the sorts of companies that can afford to pay these compliance costs. Lemme stay with you, Cullen. Why is that?

What recently has been happening that is making the compute threshold approach perhaps no longer fit for purpose?

Cullen O'Keefe: Yes. You know, this is somewhat old news in the fast-moving world of AI, but, you know, over the past two years, or—

Alan Z. Rozenshtein: You mean it’s two weeks old?

Cullen O'Keefe: More or less. You know, over the past two years we've seen this emerging paradigm called reasoning models, right?

And one of the key insights of reasoning models is that you can, in some sense, trade off training compute for test-time compute, or inference compute. That is to say, a model that took less compute to train can kind of think for longer when you ask it for the answer to a question, and perform as well as a model that took more compute to train but is only given a single kind of forward pass to complete its answer.

And I think a lot of people expect this to mean that over time, the amount of compute needed to give rise to a certain capability level will go down. There are other reasons to expect that as well.

Firms are always finding new ways to make their training runs more efficient. And compute costs are also coming down, right? So there are all these kind of secular trends that tend to point to fixed FLOP amounts being cheaper to hit, and also to fixed FLOPs corresponding to greater and greater capabilities.

So, I think, you know, if training compute is a reasonable proxy measure, and I don't have a strong view on whether that's still the case, I think it's a reasonable guess that it might be appropriate, but if it is, there's a bunch of secular trends that mean it's not going to be forever, and may not be for very much longer either.

Kevin Frazier: And just one small thing to add on here: I think that FLOPs-based governance, or a FLOPs-based trigger for compliance expectations, also misses some of the new risks that are emerging in a lot of the AI discourse. So, for example, in state legislatures around the country, AI companions are now among the top issues that they're focusing on.

You don't need, pardon my French, a shit ton of compute to design an AI companion that's gonna drive young users towards certain behaviors. And so, you know, whether to ground a lot of AI legislation on that proxy depends on the risks you're focused on. I agree with Cullen that, especially for those sort of frontier risks, it may be a reliable proxy, but for the folks who are concerned about the AI issues that are oftentimes headline news these days, I think it's particularly ill-suited.

Alan Z. Rozenshtein: So it sounds like we have the following problem, which is that the current compute thresholds are insufficient to capture the world of things that we might wanna regulate. So then the response would be: we'll just regulate all the things. Maybe do it by some capability threshold, or maybe just by a sort of general rule that if you're building an AI system, you have to satisfy these obligations.

On the other hand, though, that runs into the compliance cost problem. And so I think this is a nice segue into what I take to be the core insight of your paper. And I'll start with Kevin here: maybe we can solve this problem. Maybe there are some kind of efficiencies to be had through this idea of automated compliance.

So, Kevin, what is automated compliance?

Kevin Frazier: Yeah, automated compliance is exactly what it sounds like. Thankfully, it's pretty on the nose here, which is to say: taking compliance tasks and delegating them, essentially, more or less, to AI systems. And trying to find efficiencies with respect to complying with complicated sets of requirements, or new expectations from the state or the federal government, is not new.

If you go talk to any business, they'll tell you about how they're always trying to streamline how they can comply with various expectations and to create new workflows and so on and so forth. And this is really just saying, Hey, we have these new tools that are really good at a couple of things. They can aggregate a lot of data, they can parse through that data, and they can share that data.

And so when we think about some of the AI regulations that we're seeing pop up around the country, we've got SB 53 in California, the RAISE Act in New York, and there's, I'll say, an SB 53 sister or sibling that's been proposed in Utah. I suspect we'll see similar kinds of transparency requirements.

Well, what are we really asking companies to do with respect to those efforts? It's to compile transparency reports about how an AI system is performing and then share that information with a regulator. Well, what if we can have AI do that? And Cullen and I think AI will get to the point of being able to do just that.

Well, suddenly your somewhat trite, although accurate, statement, Alan, of "why not just regulate everyone?" holds up: if it's costless or near costless, then yes, why not? Then the disproportionate burden that currently exists under a lot of compliance regimes would essentially disappear.

But I'll also flag that there are some other key things that we expect AI will be able to do, if not now then in the near future: performing, for example, automated evals on AI systems, and monitoring safety and security incidents, which is another thing that a lot of state legislators are looking at.

And then, finally, providing incident disclosures to regulators and consumers. And so there's a range of really important, kind of essential regulatory mechanisms that AI may be able to handle in the near future. And our argument under automated compliance is that AI can lower those costs and make compliance far more efficient for companies of all sizes.

Cullen O'Keefe: Yeah, and completely agreed. Maybe just two things I'd add to that. First, I would direct people to a great article by Paul Ohm, called something like "Toward Compliance Zero," that came out a few months before ours, where he makes a lot of similar points and elaborates them very well.

And then maybe the other framing that I think people might wanna bring to this conversation is that, you know, most new technologies kind of expand the production possibility frontier, right? They make new things possible. And so, you know, that's what makes a lot of us excited about AI technology, and maybe sometimes also apprehensive.

But this is really just pointing out one logical consequence of that for AI technology: it's going to make new forms of compliance automation possible that wouldn't have been possible before.

Alan Z. Rozenshtein: Cullen, I think it'd be helpful to get a little more specific as to what sorts of things are automatable and what sorts of things are not.

You know, compliance is a very general term. It encompasses a lot of behaviors. And so, just to give a sense: when you and Kevin talk about automated compliance, what sorts of tasks, specifically, are you all anticipating? And maybe more importantly, what is not automatable? And is it not automatable yet, or is it in principle not really automatable?

Cullen O'Keefe: Yeah, great question. And I think this task-based framing that you introduced is really the way, at least, that I think about it. So Kevin mentioned a few examples of things that we could imagine AI safety regulations requiring people to do. And a lot of these seem like things that in principle AI could do today, if you put, you know, a little bit of elbow grease into working out the workflows and plumbing to make it work.

So things like compiling information about how an AI system was trained, right? Transparency-type obligations. Maybe intervening in the training process. You know, there are different ideas for how you can intervene in the training process to make AI systems safer or behave in certain ways, right?

And so that's another type of thing where AI systems are, you know, quite good at coding. The AI labs are already using their AI systems to help them build the next generation of AI models. Well, you know, if you require the AI system to incorporate some regulatory requirements into that, maybe it's not too much extra work.

But there definitely are things that you could imagine AI safety regulations requiring that would seem a lot harder to automate. So, just one example: a thing that's often considered a kind of best practice in AI safety is something like human red teaming, where humans try to cause the AI systems to behave in undesired ways. Kind of by definition, that has humans involved.

There's definitely a lot of interest in AI-driven red teaming, or AI-aided red teaming. And so, you know, we will see whether that is ever competitive with human red teaming. But you might want there to be a requirement that humans red team the system; at least if that was a requirement, it would obviously be hard to automate.

Though maybe, you know, with AI assistance, they could do it quicker. Who knows. And then maybe another thing you might consider is some sort of clock-time requirement, right? So, one idea that people have talked about is something like an exclusivity period, where, you know, a company kind of has to sit on an AI model, and maybe can only offer it through an API or through a chatbot or something, but can't release the weights publicly for maybe six months, while people kind of see how it behaves and assess whether it would be safe to release the weights of this model broadly.

Kind of regardless of whether you think that's a good idea, obviously you can't automate away six months. Although, again, maybe you can do more in those six months, and maybe that means you would get the same safety benefit in three months post-AI that you would get in six months pre-AI. So, nevertheless, if you think about how different requirements might be specified, some of them will be hard to automate.

Yeah, which kind of gets to part of the point of our paper, which is that you should think about which types of safety requirements will be more automatable and which less, and maybe there's some reason to prefer the ones that will be more automatable.

Alan Z. Rozenshtein: How do you all think about what we might call the Goodhart's Law objection to your account?

So Goodhart's Law is the famous dictum that once a measure becomes the goal, it ceases to be a useful measure. And we see this sort of throughout society: you know, we all focus on such-and-such statistic, be it education performance or healthcare performance, and then the regulated industries start optimizing for that. And that ends up distorting the very goal that they were trying to accomplish.

And I can imagine a similar concern with automated compliance, where, okay, you know, once you've made compliance kind of machine readable, in a sense, then you could imagine companies having an incentive to try to game the system, to train the models to sort of satisfy the letter of the requirement. You know, in legal terms, you might think of this as a kind of letter-of-the-law versus spirit-of-the-law concern.

But I can just imagine a world where you have this amazing automated compliance framework, but in the end it's not actually serving the purpose for which the legislatures or the regulators put out whatever compliance requirement they did, whether it's safety or anything else.

And I'm curious how you all think about that potential concern.

Kevin Frazier: I'm happy to take a first stab at this one. I think, for me, the difference here is that Goodhart's Law involves some sort of reward mechanism that values changing your operations to achieve that result, right? So the assumption is that by virtue of changing your operations, you'll send some signal to the world, to your stakeholders, to your consumers, so on and so forth, and be recognized for achieving that metric.

Whereas what we're proposing is basically just continuing the status quo. Whatever you are doing, the background tasks that you were ignoring to begin with, or perhaps not paying an incredible amount of attention to, or not gathering in the way you previously imagined, now AI's just doing that. But it's not saying that we're necessarily going to reward you for this outcome or give you relief from some other regulatory paradigm or something like that.

Basically, you get to carry on as is, but just have this tool do your compliance tasks for you. And so I don't have the same concern about an AI startup that suddenly faces some regulation for which automated compliance is possible. They just don't really have an incentive, in my opinion, for changing their behavior.

But I'm always intrigued by what my co-author has to say.

Cullen O'Keefe: No, I think I generally agree with that. I think, you know, Goodhart problems are endemic to the process of setting measures and then people optimizing against them. And, you know, one way people think about AI systems is that they're optimizers.

And so they might find ways to optimize against whatever measures, and do so more aggressively than humans might be able to. So I think this will be a general issue that the law, and a lot of other sectors, will have to grapple with in the future. I guess the way I would think about it as it relates to this paper is that, you know, it remains the duty and burden of legislatures and regulators to think about what types of behaviors they wanna inculcate, find the best ways to do that, and then specify them. And, you know, the best that we can do is help regulated parties achieve those specifications as efficiently as possible.

And I guess, yeah, I could see ways in which introducing AI into that process introduces more optimization. But I could also see ways in which it helps, for example, regulators think through their drafting process more clearly, and think about ways in which the measures that they're picking might be Goodhart-able, for example.

Alan Z. Rozenshtein: Let me pose another potential objection to the project, which is: if the problem that you are trying to solve for is, let's say, Silicon Valley's resistance to regulation, and your solution is, well, it's actually gonna be a lot cheaper than you think because of automated compliance, that might only get at one part of the reason why the technology industry might oppose regulation, right?

So it may very well be that, you know, especially for the big companies, where the compliance costs, while not trivial, are, you know, fundamentally rounding errors, their concern is actually not cost at all. It's the actual substance of the regulation, right? They may say, you know, you could drive the quote unquote costs of complying with the regulation to zero in the sense of lowering the administrative costs, but automated compliance does not lower the non-administrative costs of regulation.

So I'm just curious how you all think about that, or whether that's just a different problem: we're solving a problem over here, there's still a problem over there, and we might as well solve the problem over here even if it's not the entirety of the problem.

Cullen O'Keefe: Yeah, I can jump in on that. I mean, I think that's right. Part of what's exciting about this is it enables us to focus on the first-order question instead of the second-order question: do we think that these regulations are worth the kind of first-order costs and benefits?

Is it worth, you know, preventing AI companies from doing the profit-maximizing thing that we assume they will do by default in order to, you know, achieve some additional degree of public safety or whatever other type of good we're trying to achieve? And people can and will disagree about that; those disagreements, you know, are healthy and part of, you know, normal democratic debate.

And I think it's actually just more productive if AI technology enables us to focus on those disagreements eventually.

Kevin Frazier: And I'll jump in there to say that one thing that particularly excites me about this idea is the ease with which we can now switch to a different regulatory paradigm in which automated compliance is possible.

And so one of my gravest concerns is premature regulation. We outline a sort of spectrum between pro-regulatory and deregulatory positions, and Cullen and I occasionally end up on opposite sides of that spectrum, but I think everyone agrees we want evidence-driven policy. And we really want to avoid path dependence created by laws that are well-intentioned but perhaps send AI development down a certain direction when in reality, you know, we want it to go a different route that perhaps is even safer and even more innovation enabling.

And so if we have automated compliance be the norm, and it doesn't require you to effectively change your operations such that you're fulfilling some expectation of the regulators, well, now both regulators and companies can be more innovative and more evidence-driven, and that is super exciting.

Alan Z. Rozenshtein: Okay, so that's great. Let me repeat back to you what I heard, and you can tell me if it's right. I always find the sort of production possibility frontier diagrams from, you know, first-year microeconomics really useful.

I'm now waving my finger in the air, because podcasting is a very visual medium, as everyone knows.

But, you know, I take it that what you're arguing is: look, there are real trade-offs in regulation, safety versus innovation being kind of the classic example, and your paper is not responding to that as a general matter.

What you're saying is, yes, but there's a whole other set of trade-offs that are actually dissolvable; like, you know, for any given amount of safety, we can have more innovation, or vice versa.

As long as we get rid of this compliance sludge, and we should all want to get rid of compliance sludge, 'cause then we can start fighting about the thing that actually matters. Is that a kind of fair description of the project?

Cullen O'Keefe: I would say so. I mean, yeah, I think we say as much, right?

Like, if you hold the level of safety that you want constant, you get it for cheaper. If you hold constant the amount of regulatory costs that you're willing to eat as a society, then you get more safety. Either way of framing it works. And that's the beauty of positive-sum innovation.

Alan Z. Rozenshtein: So let's now talk about another part of your paper. And this, to me, was the most interesting idea: your proposal for what you all call automatability triggers. So, Cullen, what are these triggers? And again, what problem are they responding to?

Cullen O'Keefe: Yeah. So this really goes back to the central tension that often motivates some of these debates. Let's say that Kevin and I agree that we need regulation at some point, and Kevin's refrain is: ah, but if we regulate now, you know, you might have all these bad things. You might go down a path-dependent route of technological development that's hard or costly to reverse, you could kind of lock in incumbents, et cetera.

And I retort: well, I'm quite worried that if we don't regulate now, there will kind of never be another opportunity to regulate, or by the time there's another opportunity to regulate, it'll be too late. We'll have already had some sort of catastrophe that we really would've preferred to prevent. But, you know, Kevin and I share an underlying worldview, which is something like: AI is going to unlock a lot of very, very beneficial capabilities in the future.

And among those, it really looks to us, is the ability to automate a lot of core compliance tasks. And the way that I kind of initially came up with some of the ideas behind this is: I think this suggests a very natural trade, which is that we agree to regulate, but not now.

We agree to regulate when that AI capability improvement that we both expect drives automation costs below some level. That's the fundamental idea of an automatability trigger. It says: this regulation will not be effective now. It'll become effective only when the costs of complying with it are lower than they are today.

Because presumably AI technology will by then be better at doing the compliance tasks.

Kevin Frazier: And, just to add something quickly, it's worth flagging that conditioning the application of a law on a certain event is not a novel concept. These are known as sunrise clauses. A lot of folks know about sunset clauses, and don't get me started, 'cause I can go off for another 90 minutes about the importance of sunset clauses.

But sunrise clauses are also essential, and basically condition the enforcement of a law on some trigger. That may be: okay, now an AI tool exists to allow for compliance. Or it can be something like: hey, we're not going to start to implement these privacy laws or regulations until we've actually created the privacy agency and hired the requisite number of staff, and so on and so forth.

There have also been states that impose sunrise clauses with respect to occupational licensing provisions. This is an interesting use case where they say: we will not allow for a new occupational license until there's a study done indicating that we actually need one, which is, like, no shit.

I would hope that's the law, but sometimes we just need these reminders to be baked into the legislation itself.

Alan Z. Rozenshtein: And just to make sure I understand how this would be implemented: two things would have to happen, presumably. One, someone would have to set the kind of threshold: how much automation do you want to make sure there is before the law goes into effect?

I imagine that would be something for the legislature to decide. And then there's someone, I assume in the executive branch, who has to say: okay, I've done a study, and I believe that the time is now in terms of satisfying the legislation. Do you have in mind who would do that? My instinct would be the Secretary of Commerce, because of NIST, the National Institute of Standards and Technology, or the AI Safety Institute, or whatever they're calling it these days. I'm like, who actually does this?

I'm kinda curious about the sort of ad law minutiae of this a little bit.

Cullen O'Keefe: Yeah, I mean, you know, I think as a first-order matter, there's a lot of different ways you could imagine this being implemented, and since it is a new type of mechanism, you know, I wouldn't say that members of Congress tomorrow should rush out and try to copy and paste the language from our paper into their hot new AI regulation bill.

There still needs to be a lot of work done to think through how this would be implemented. That said, yeah, I think the basic schema that you're pointing out sounds about right, where Congress would say: you know, we want this law to come into effect only when we think that compliance costs have dropped to X dollars per relevant task.

And so you might think that the relevant task is evaluating a single AI model, just to take a very simple example of what an AI safety regulation might require. We think that right now it would probably cost firms, if you include kind of overhead, maybe a million dollars to run a single model evaluation, and that's too much.

But if it only costs $10,000, then we think that's great, just to make up numbers, right? And so, yeah, Congress would say that. And then maybe the Secretary of Commerce, who seems like the best-placed person in the federal system since we don't have a Department of AI yet, you know, says: we think the day has come. We think that the cost is $10,000. Here's why.

And then the enforcer starts bringing enforcement actions. Maybe then litigants could challenge that determination in court; that is itself a, you know, statutory and administrative procedure question that I am not necessarily an expert on. But, yeah, that's just one example of how you might implement this.

Kevin Frazier: And something that we talked about in the initial formation of this idea was the fact that this could lead to a really interesting market on the private side, of saying: hey, I want to develop the tool that then gets adopted or offered as one of the options for this AI compliance. And we don't necessarily have that right now.

Obviously, there are a number of startups that are trying to think through how they can ease your compliance burden with various AI regulations and other regulations, but actually developing this sort of AI compliance tool is a really interesting market that could be created. And I also think it's worth flagging that this concept could have a lot of positive spillover benefits in other areas of regulation where we're also concerned about a disproportionate impact on smaller businesses.

Alan Z. Rozenshtein: Lemme actually stay with this question of who would develop these tools, because I wanna prod at this idea a little bit. I think it's really interesting. But one objection you might have is: well, why would Silicon Valley have an incentive to develop these tools if it's not until the tools are developed that they actually have to do the compliance, that the regulation comes into effect?

So how do you incentivize, and of course, Silicon Valley is a they, it's not an it, but how do you incentivize Silicon Valley to build these tools when in some sense it's against their interests to do so?

Cullen O'Keefe: Yeah, I think that's a great question. And I think, number one, there's a coordination problem or something, right?

So, you know, if firms see that there's going to be a lot of business to be made by offering this compliance tool, it would probably be illegal under the antitrust laws for them to coordinate not to make it, so they couldn't get together and do that. But then also, it's probably the type of thing that is built, you know, by someone building on top of a foundation model, is my guess; that's the most likely way this would be implemented. And it's just hard for firms to prevent them from doing that. You could imagine having additional restrictions that make it hard for firms to stop people from building compliance tools on top of their models.

I don't know if we want that, but, yeah, I guess I'm pretty optimistic that, you know, compliance-automating AI will find a way. You know, at the very least there are open-source models that are not too far behind the frontier, and those would be, you know, even harder for anyone to hold back intentionally.

Kevin Frazier: Yeah. And I think that so long as the government is saying we're going to pay for this, whether it's the federal government or 50 state governments or governments around the world that wanna emulate this automated compliance mechanism, there will be a market for saying: hey, yeah, we'll procure and then make available this AI compliance tool or set of tools, and we'll give you this contract, and so on and so forth.

And so someone will wanna make that money.

Alan Z. Rozenshtein: So, a couple more potential objections. Let me ask this one of you, Kevin. You know, one thing I can imagine a safety-focused critic saying to this idea is: well, automatability triggers just sound like a way of delaying regulation, you know, if not indefinitely then for quite some time.

I mean, the way that you all present this in your paper is as a way of calibrating lawmakers' preferences around sort of safety versus innovation. But a different way of saying it is: well, just the very idea of delaying this is kind of putting a thumb on the scale for deregulation, because of course in the vast majority of other domains, we don't actually do this.

So you gave some examples of sunrise provisions, which I think is very interesting to think about. But, you know, the counterexample that came to my mind, and I've not done a sort of deep study into this, but I think what I'm saying is reasonably accurate, is when, you know, the EPA, or let's say the state of California, which has really taken the lead on this, tells car companies: you must drive emissions down by such and such, you know, 10%, 20%, whatever the case is.

They actually have not always done that knowing that such technology existed. Often it was: we're going to make you do this. We'll set the effective date of this sometime in the future to allow you to prepare, but it's kind of on you to figure out how to do this. So why isn't that the better answer? You know, if you're worried about the companies not being able to do this now, tell them: okay, you have two or three years to do this.

This is going to go into effect. And instead of saying it'll only go into effect once someone else has figured out how to do it cheaply, you say it's gonna go into effect, period. So if you, Meta, Google, OpenAI, Anthropic, X, whatever, wanna save money on the compliance, which presumably you do, you figure this out.

Kevin Frazier: So it's a really valid critique, and a good one. I think the assumption that Cullen and I are making, and that folks like Paul Ohm have made, and that other folks in the space have made, is that AI seems to be closer to facilitating a lot of these kinds of compliance tasks than the equivalent technology might be in another domain or a different sort of automated compliance scheme. So I think that day is sooner rather than later.

So that's one response. Another response is: yes, this is certainly putting a thumb on the scale with respect to assuming some degree of delay. Now, that's a reflection of the fact that every single policy we enact always has costs and benefits. And this is sort of a forcing mechanism that says: are you really weighing those as seriously and as thoroughly as you can?

And one aspect of that is the loss in innovation, loss in safety, loss in just greater and novel technological development that may come as a result of that sort of premature regulation. Now, we didn't consider this in the paper, but I'd be curious, or perhaps we could add something on at some point, to explore the notion of: okay, if these tools aren't available within three years, or within 18 months, or within however long, then the law will go into effect anyway, right?

And that way you're kind of feeding two birds with one scone. Hashtag you're welcome, PETA. That is a different approach that we could certainly rely on, one that tries to get both of those mechanisms you were mentioning, Alan, going at one time: putting folks on notice that they may have to comply with this, while also giving those innovators who want to develop the automated tool an incentive to giddy up and get going on whatever that automated compliance tool may look like.

Cullen O'Keefe: Yeah, and maybe to add a few things—

Alan Z. Rozenshtein: Cullen, lemme—Oh. Yeah,

Cullen O'Keefe: As the person who tends to worry a bit more about us not regulating in time: first, this dynamic works both ways, right? This is a way of credibly, and bindingly, signaling that a regulation will come into effect if this milestone is met, right?

It's definitely, in some sense, if you don't do the disjunctive thing that Kevin just said, more flexible than a, you know, date-certain sunrise provision. But it's, you know, more certain than a "well, we'll revisit it if there is a problem that requires us to legislate," which I think frankly is the default outcome.

The default outcome in legislation is nothing happens. And so I think this is a way of trying to strike a deal that, in principle, principled parties can agree to. And then, yeah, it also creates an incentive to order the technological innovations in a way that I think reflects what people should want, right?

We should want the technology that helps us solve these thorny trade-offs before the applications of the technology that create hard problems, right? And so this is saying that, all else equal, we would prefer to have the compliance-automating technology sooner, thank you. And if you do that, you'll be rewarded by the market, because there will be a captive market that is basically, you know, strongly incentivized to buy it.

But there are situations in which, you know, you might worry that this is not ideal, right? So this makes the most sense for problems where you think you won't have catastrophes that arise before you have the compliance-automating AI that could have prevented those catastrophes.

And that may or may not be the case. So, you know, legislators would have to think carefully, empirically, and strategically about whether this is the right solution for the problem that they're facing. And it might not be; you know, other things will make sense for other problems.

Alan Z. Rozenshtein: So I posed the sort of critique from the safety side to Kevin.

Let me propose the opposite side of the critique to Cullen, which is: this all seems very complicated. Why are we trying to regulate stuff in the future, conditioned on technology that we don't really understand coming to exist? Like, this is not how we do stuff generally. The way that legislatures usually work is that they identify a problem, they make sure they can fix it, and then they implement the fix.

Why are we singling out AI for this sort of additional regulation? You know, if the regulation is cost-benefit justified today, fine, we can have that fight. But the idea of these automatability triggers kind of implies that it's not cost-benefit justified today.

So why would you push it out to the future? What are we doing? There are so many other things that Congress could be doing today. It seems weird to, you know, have them guess, and it also just seems weird, one might argue, to have them spend their precious current political capital on stuff that, again, by definition is not gonna happen for a while and may never happen.

Cullen O'Keefe: Yeah, again, I think there's a lot of validity to that critique, especially as applied to different AI problems. You know, different problems in AI policy have different dynamics and require different solutions. And I think, you know, one of the best parts of Scaling Laws is bringing more nuance to all the various AI policy problems that exist.

And so, you know, there are problems that I spend a lot of my time worrying about where society would probably have, I think, a very low risk tolerance. So one example might be AI systems that would aid in the engineering of novel pathogens that, yeah, we may not have immunity to, that may be quite costly to respond to. You know, COVID cost trillions and trillions of dollars, right?

And so, to prevent the next COVID, we should be willing to spend, you know, a lot of money, right? And so, I guess the way I think about this is that, number one, the use of an automatability trigger sends a useful signal: we would, you know, prefer there to be lower costs to implement this type of regulation.

We are not willing to implement it under the current cost-benefit analysis, but we would be under a different one. Number two, we're going to make that commitment credible in a way that simply delaying until the problem has happened is not; that's not a credible signal for market actors to be working toward in the meantime. Maybe, you know, sometimes it is, sometimes it isn't.

So it's a way for legislators to really put out a credible signal that there will be market incentives to regulate in the future, or, sorry, to provide a certain type of AI service in the future.

Alan Z. Rozenshtein: Before we close, I want to talk a little bit about what I thought was a particularly interesting scenario that you all sketch. It's a little speculative, as you all describe, but it's a very interesting potential preview of the future, which is, quote, automated compliance meets automated governance. So I could try to summarize what you're all predicting, but I'd rather just hear it from you all.

What is this potential Jetsons-like world where essentially robots talk to robots to figure out what the law says? Cullen, lemme start with you.

Cullen O'Keefe: Yeah, great. You know, I think if you just imagine a kind of human-staffed regulator and then the automated-compliance regulated party, you're kind of playing half-court tennis, right?

So I think this probably works the most efficiently when the compliance-automating AI can talk, at the speed of AI, to some sort of other AI systems in the regulator's offices that can help it, like: hey, can I get additional guidance on this, for example? And, you know, I dunno how long that would take in a typical regulatory process.

My guess is on the order of months, but maybe it can be provided in a matter of seconds, right? And that's just one benefit that kind of automated governance could bring to this process, the speed of AI, and there are lots of others too. So, you know, why don't firms just share a bunch of information with regulators and, you know, try to get better signal from them about what's tolerated and what's not?

One plausible answer is that they're afraid that the regulator is going to use that selectively against them, or hold it over their head or something. I mean, part of the reason that is worrying, right, is that regulators are staffed by humans, and humans can't just forget things that they've learned about regulated parties.

But maybe you could design AI systems that could.

Alan Z. Rozenshtein: I have two small children. I can forget anything.

Cullen O'Keefe: I envy you, Alan. But, you know, maybe one thing that regulator-side AIs could do is have a kind of quasi-privilege arrangement, where the regulated party says: we wanna get regulatory guidance on this type of thing.

We're going to provide you a bunch of super sensitive documents that we wouldn't share with anyone normally. But because we have strong, you know, trust in the regulator-side AI setup that you have, we know that you're not going to use them for other enforcement actions. You're just going to give us, you know, your regulatory approval.

And then we're good to go. And, you know, we can have a kind of secure record of that that we keep, so when you ask us later, you know, hey, why'd you do this? we can say: well, we showed this to your regulator AI, and it said it was okay. And then, you know, everything's good.

So I think ideas like this, about the potential synergies between these two things, are going to be a really important dynamic to consider in the 21st century.

Kevin Frazier: And I'll just add what I think could be a concrete example of this. So I am thinking a lot about workforce and job displacement issues right now, and there's a lot of conversation about how we can update the WARN Act.

And for folks who aren't steeped in 1980s policy, this was the idea that when you lay off 300 folks at your factory in Buffalo, New York, you have to tell, not the Department of Labor, 'cause that would make too much sense, but the local officials in your state that you're about to lay off 300 people. Well, now we have a lot of concerns.

For example, we're talking on January 28th, 2026, and Amazon announced it's going to lay off 16,000 people. And some people are attributing that to AI. And so there's a lot of conversation about how we can manage the labor market in a more productive fashion. Now, no company wants to send to the Department of Labor: hey, here's all of our information, three weeks in advance, we're about to lay off these people. Please don't do anything mean or give us bad press or anything like that.

What they may be willing to do is, let's say on a quarterly or monthly basis, submit data via automated compliance to the Department of Labor, which can then aggregate and share out really valuable insights that could trigger congressional hearings, or a response by the Department of Labor, or new job retraining programs and things like that.

That's a whole new workflow and kind of regulatory approach that we just don't have, that automated compliance, and by extension automated governance, could realize, and that to me is really exciting.

Alan Z. Rozenshtein: So I wanna end by asking you two to reflect a little bit on your journey in writing this paper.

And I think, Kevin, as you pointed out earlier in the conversation, you two are on, I don't wanna say opposite sides of the pro-regulatory versus deregulatory spectrum, but there's obviously some sort of daylight between you two, which I think is actually always a really fun way to collaborate.

And I'm curious: having thought through this issue, and after the many conversations I'm sure you two had in writing this paper, has it changed your views on either the optimal timing or content of AI regulation?

So let me ask you, Kevin, your version of this question, and then I'll close out by asking Cullen his version.

You know, Kevin, has it made you more sympathetic to some forms of earlier or more intensive regulation on AI, let's say?

Kevin Frazier: Yeah, I think I'm very sympathetic to the argument that there are certain things that we may not be as able to measure. And this is where Cullen and I, I think, had a meaningful discourse about how automated compliance can only go so far.

And so, by virtue of writing this paper and having that experience, I think it did shine a light on the areas of AI governance where we're still going to have to have a sort of human-driven conversation about what risks and what benefits we are willing to tolerate, because quantifying all of that, and using AI to derive all of the requisite inputs and data, may not always be possible in the near term, given the sort of risks that we often talk about from a more long-term perspective.

And so to me it was just a really useful exercise to try to bifurcate: what's the sort of information where automated compliance could be really useful, and what are the sorts of tasks that will not allow for that sort of compliance? And then, with respect to those tasks, who has the institutional capacity to handle those regulatory questions?

So to me, it just added more nuance, to use Cullen's word, and more nuance, in my opinion, is always better and a heck of a lot more fun.

Alan Z. Rozenshtein: So, Cullen, let me ask you your version of the same question to close out. Has it made you more sympathetic to the concerns from the quote unquote pro-innovation side around compliance costs?

Cullen O'Keefe: Yeah, I mean, I think the pro-innovation side has done a really good job of hammering on, or injecting, a few different very important memes into this discourse, and I think working on this paper was a great way to grapple with them. And among these, one thing that I hope comes through clearly is that we're both big believers in the idea that technology is generally positive.

A lot of discourse tends to lose sight of that fact. And this is, in some way, applying this general positive-sum dynamic to a domain where there's often assumed to be a zero-sum kind of trade-off, right? So I think grappling with that has been fun.

I think that grappling with these timing problems is also kind of important. You know, when I was at OpenAI, one thing that OpenAI talked about a lot is the benefits of iterative deployment, by which they mean that the process of society seeing AI progress and learning how to deal with it incrementally is beneficial to the kind of long-term challenge that humanity has of figuring out how to deal with AI systems.

You know, people can agree or disagree with the specific ways in which OpenAI has been going about that kind of iterative deployment philosophy. But I think the core insight, that learning from the technology and leveraging some of its beneficial uses as it advances has a lot of benefits, is something that AI safety and policy discourse, you know, four years ago or something, might not have appreciated.

And I do think this general bet of trying to sequence AI innovation in the way that, you know, gets you the most socially beneficial applications first, instead of just framing it as a progress-versus-stasis kind of problem, is maybe a more productive framing. And thinking about ways to do that, I think, is a fruitful policy endeavor of which hopefully this paper is just the first of many. Because I think everyone agrees that different forms of progress, you know, have different social values, right?

Progress in more addictive drugs is probably not a good thing. Progress in providing legal services to people, medical innovations, et cetera, is better. And so, you know, when we can kind of selectively pick beneficial forms of innovation, all else equal, we should prefer to do that. And, yeah, this is just one way to do that.

Alan Z. Rozenshtein: Well, I think that's a good place to leave it. It's a great paper. We'll link to the original paper that ILAI is hosting, and then to a shorter Lawfare post that should be up by the time this is released. But thank you, Cullen and Kevin, for coming on the show and talking about it.

Cullen O'Keefe: Thanks, Alan.

Kevin Frazier: Always a hoot. Thanks.

Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad-free version of this and other Lawfare podcasts by becoming a material subscriber at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky.

This podcast was edited by Noam Osband of Goat Rodeo. Our music is from Alibi.

As always, thanks for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Cullen O'Keefe is the Director of Research at the Institute for Law & AI (LawAI) and a Research Affiliate at the Centre for the Governance of AI. Cullen's research focuses on legal and policy issues arising from general-purpose AI systems, with a focus on risks to public safety, global security, and rule of law. Prior to joining LawAI, he worked in various policy and legal roles at OpenAI over 4.5 years.
Kevin Frazier is a Senior Fellow at the Abundance Institute, Director of the AI Innovation and Law Program at the University of Texas School of Law, a Senior Editor at Lawfare, and an Adjunct Research Fellow at the Cato Institute.
