Scaling Laws: Rapid Response to the AI Action Plan

Janet Egan, Senior Fellow with the Technology and National Security Program at the Center for a New American Security; Jessica Brandt, Senior Fellow for Technology and National Security at the Council on Foreign Relations; Neil Chilson, Head of AI Policy at the Abundance Institute; and Tim Fist, Director of Emerging Technology Policy at the Institute for Progress, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, for a special version of Scaling Laws.
This episode was recorded just hours after the release of the AI Action Plan. About 180 days ago, President Trump directed his administration to explore ways to achieve AI dominance. His staff has attempted to do just that. This group of AI researchers dives into the plan's extensive recommendations and explores what may come next.
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
This episode ran on the Lawfare Daily podcast feed as the July 25 episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Alan Rozenshtein: It is the Lawfare Podcast. I'm Alan Rozenshtein, associate professor of law at the University of Minnesota, and a senior editor and research director at Lawfare. Today we're bringing you something a little different, an episode from our new podcast series, Scaling Laws. It's a creation of Lawfare and the University of Texas School of Law where we're tackling the most important AI and policy questions.
From new legislation on Capitol Hill to the latest breakthroughs that are happening in the labs, we cut through the hype to get you up to speed on the rules, standards, and ideas shaping the future of this pivotal technology. If you enjoy this episode, you can find and subscribe to Scaling Laws wherever you get your podcasts and follow us on X and BlueSky. Thanks for listening.
When the AI overlords take over, what are you most excited about?
Kevin Frazier: It's, it's not crazy, it's just smart.
Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.
Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it?
Alan Rozenshtein: AI only works if society lets it work.
Kevin Frazier: There are so many questions that have to be figured out and nobody came to my bonus class. Let's enforce the rules of the road.
[Main Podcast]
Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI law and policy.
Six months of waiting is finally over. Yes, it's Christmas in July, but a different kind than the one you usually look forward to: It's AI Action Day. Or depending on who you ask, you could say that this is the first meeting of the AI Reading Book Club, and today's read is the AI Action Plan. You can see everyone has their own edited copy. It's gonna be a fantastic Scaling Laws episode.
Thanks for listening in. Folks, we have so much to get to, but I just want the quick and easy takeaways. So, Neil, at a high level, if you were to send out one Tweet gut reaction to the AI action plan, it would be?
Neil Chilson: It'd be the, the AI Action Plan continues the pivot of this administration away from an overly cautious, risk-fearing approach of the previous administration and really emphasizes that AI is a giant opportunity that the United States needs to seize.
Kevin Frazier: So we've gotta pivot. I do think you maybe breached the 140 character limit, but I'll, I'll let listeners figure that out. Janet.
Janet Egan: I'll back that up. I think AI is a big deal, and this shows it. It's got actions for agencies across government. I'm really interested in, in the focus on adoption. It's moving away from only focusing on frontier models to actually looking at how we can diffuse AI across government, society, and education and workforce.
Kevin Frazier: Very exciting. Neil? Well, I already went to Neil. Tim, I was just so excited. Neil's take dominating. He already, yeah, he's dominating. AI dominance has been achieved.
Tim Fist: Yeah, the vibes are great. I, yeah, what one thing I like about it is it focuses on, I think, maybe the most underrated thing about AI policy, which is that it's like really hard to know what's gonna happen over the next few years.
So focusing in on improving the government's ability to kind of measure and evaluate progress, and especially domestic versus foreign models, is a big deal here. And then I think it focuses in on the things we do know are problems and that we should be trying to solve, which is, you know, AI and energy is a really big deal and we need to un-bottleneck that.
And then also that we need to reform science to sort of take advantage of like the affordances that AI provides. So yeah, excited to chat about all this.
Kevin Frazier: And good vibes. Yes, Jessica?
Jessica Brandt: Yeah, I'd agree there's a lot to like here. I mean, good stuff on cybersecurity for critical infrastructure, on building, you know, capacity for AI incident response, investments in biosecurity, all kinds of good stuff. The woke AI stuff is, I think, predictable and silly and unhelpful, but broadly, I think there's a lot here to build on.
Kevin Frazier: Well, speaking of building, we have a lot to construct here. So there are three pillars, three principles, and 30 different sections, and that's too much to cover in one Scaling Laws podcast.
Unfortunately, I would prefer to spend this whole AI book club with you all until the end of the night, but we have other obligations apparently. So I wanna start off though, with this AI innovation pillar. Which is probably the longest section or has the most recommendations. And Neil, you've dove into this extensively. Are there a couple provisions or sections that stand out most to you, or how do you think this perpetuates the pivot you were talking about earlier?
Neil Chilson: Yeah, so there's a, there is a lot here, and it actually lumps, this might be the least well-defined category. There's, there's some stuff in here that doesn't fit neatly; there's just a lot of different things in here. But I'll pick out a couple that I think characterize the overall tone, and those start very early.
And so there's a real emphasis at the very beginning about, like, removing red tape and regulation and, and seeing that as a primary risk to the U.S. maintaining and achieving global AI dominance. And so this is a focus not just on the federal level.
And there's some recommendations about how federal, that federal agencies should examine how they can remove barriers to innovation in this space through their regulatory reform. But also there's a discussion of states and the potential challenges that a wave of state regulation might present to continued U.S. innovation. So that's one big component that I picked out of this.
Kevin Frazier: And just to just hang there for a second, because the language here is suggesting that federal funding for some projects will be contingent upon the extent to which a state has AI regulations that are deemed burdensome.
I, I can just hear the Twitter howls right now of, well, my gosh, you know, how are we gonna figure this out and what does that mean? And I do think this is probably going to be one of the most contentious and controversial provisions, and we get it right at the start of just saying, hey, here we go.
Neil Chilson: It's right up front. And I think apparently Trump just mentioned it in the speech that he's giving, he's probably still giving it right now. And so, you know, that, that recommendation borrows some language from actually the Senate version of the moratorium, the pause. And so I, I think that shows some desire of this administration to deal with it. However, the recommendation there is pretty soft. There's a lot of squishiness in it.
And so, it, you know, I, I think people who are howling about this should spend some time looking at the language, 'cause I, I don't find it strong enough for my taste. I'd prefer it were stronger. So maybe, maybe that gives some comfort to, yeah, people who are on the other side.
Kevin Frazier: They can go howl at the moon or something else.
Yeah, so Tim, I wanted to come to you specifically on this section as well, about the focus on science in the innovation bucket. And as Neil pointed out, maybe it doesn't necessarily fit squarely in how folks were maybe previously defining innovation. But what excites you about this? Or what are some of the big takeaways about these science provisions?
Tim Fist: Yeah. So I think directionally there's a couple of things, and I think the big thing to kind of know here is that, you know, the way that, well first that, you know, AI could obviously just massively accelerate scientific progress, especially in fields that are particularly amenable to it. So I think anything related to code, math, and biology in the near term especially fits the bill here.
But I think we also need to recognize that the way science is traditionally funded and structured by the federal government isn't that well suited to how AI research should be funded. Especially like AI-driven scientific research as well. And so, you know, if you think about like the classic way that science is funded, it's, you know, you have like a PI at a university and they sort of, they hire a few students and you have like 20% overhead, and, you know, most scientists in this situation spend half their time applying for grants and half their time actually doing the research work.
It takes sort of an average of like 10 months to sort of get access to funding after you apply. And if you think about sort of fast-paced fields that take advantage of AI and the automation that provides, as well as sort of the needs for centralized compute and engineering and infrastructure, that kind of grant structure really doesn't work. And so what's really cool to see in the action plan is calling out focused research organizations, sort of like a new model.
So basically you have a big institution, you kind of give them a block grant. So, you know, they can hire the engineering team, sort of like build up the compute, have that sort of centralized support, and basically let scientists run wild with, you know, delivering AI discoveries sort of in areas like biology and materials science.
So that's really cool to see. And we've already seen success with sort of, you know, outside of publicly funded research, with the Arc Institute, who you might have seen, you know, has been super successful with like this focused research organization model. You know, they put out this Evo model, which is sort of like a, a foundation model for biology that's been incredibly successful.
So I think yeah, this, that especially is like very, very nice to see in this. But you know, I think the big meta thing with this is, you know, this is a very long list of recommendations. Some of them more tentative than others. And yeah, I'm really interested to see which of these things, hopefully all of them find their way into executive orders and how actually targeted and concrete they are.
Like, do they give mandatory action with a deadline to an agency? 'Cause if they don't, it's very unlikely that the specific things will happen.
Kevin Frazier: Yeah and I, I omitted my own high level summary. I'm sorry for breaking my own rules, but it was going to be, now it's execution, right? It's very easy to put a bunch of things on a piece of paper and then just say good vibes as, as we did earlier, it's another thing to get the executive order going, get the agencies actually acting on it.
I mean, the number of action items now for a number of these agencies who are already thinking, hoo boy, I don't have enough staff, or perhaps staff have been leaving, or whatever's been happening. There's a lot of capacity questions now that I think are going to be raised by this.
But to your point also, Tim, I feel like this is kind of like Christmas morning when you go down the stairs, you're super excited and you see like the outline of a bike and you think, oh, that's going to be my bike, but when do I actually get it? When, you know, when's it going to be assembled? Who's going to build it? And so it's been funny seeing the reactions on social media so far where everyone has their own thread and they're all like, here are my top 10 provisions.
And we see, you know, if you're an “Abundance” person, you're super excited about the scientific provisions. This could have been lifted from Derek Thompson and Ezra Klein's entire chapter on scientific discovery, right. And then at the same time, folks across the spectrum really welcoming this. So within this bucket, is there anything else that stands out to you, Neil, that you really wanted to, to drill in on?
Neil Chilson: Yeah, I mean, one thing that I found particularly gratifying was there's an early, very early emphasis, a section, Section III, is on open source, right? And, and it talks about the vital nature of open source as a, a source of competition, but also a source, an input to research and scientific discovery. And I found that very gratifying. That was not a foregone conclusion, that, that open source would have such a strong position in the, the plans of this administration. And so as a strong supporter of open source, I, I found that very gratifying.
Kevin Frazier: Yeah, and I'm, I'm eager to hear from Janet and Jessica on this one, because open source has been, it's gone through many lives of the worst thing we've ever heard about in AI, of open source means we're instantly going to cede everything to China. Or open source means, you know, bad actors are going to be developing bioweapons tomorrow.
Janet, what's your read of this kind of transition to a more favorable posture towards open source, and, and what's behind it? Is this all just DeepSeek and our reaction to it?
Janet Egan: Yeah, I think what really stands out to me in this plan, and the administration has done this really well, is that threading of the needle between innovation and risk management.
And so if you look across Twitter today, you'll see a lot of folks from the safety camp saying, hey, this is actually pretty good. And then you see the same commentary coming from the abundance and accelerationist camp as well. And I think what's really interesting here is that it's, it's done a good job of grappling with the uncertainty as to what's gonna be emerging at the frontier of AI capabilities.
So even in the section on open source, it mentions that, you know, you have to protect and disseminate and manage that balance. And we see this coming out too with the, the initiative here around protecting commercial and government AI applications and sensitive IP. So at the moment, it's the private sector that are custodians of this sensitive IP which is the amalgamation of like billions of dollars of investment in research and compute.
And these model weights are stealable. And at the moment, we don't really have that channel necessarily from the intelligence community, the threat intelligence channel through into the people who are actually taking action on protecting this sensitive IP. And so that's another thing I'm really excited to see that there's this focus on protecting the U.S. advantage as we look to bolster that lead.
Kevin Frazier: Yeah, and that tees up an interesting, expand, expanded version again, of innovation that we're seeing in this section, where it's not only innovation in the sort of typical commercial fashion that we would typically discuss innovation in, but also innovation in AI R&D itself and in AI research itself.
So Jessica, what stood out to you about that expanded notion and really looking at how we can understand AI better as a sort of key innovation principle?
Jessica Brandt: Yeah, I think there were two things that stood out to me in this section, and one is, you know, the sort of support for investments in the science of evaluations, understanding interpretability, control.
I do think that these things can help accelerate adoption, because if we understand how a model's going to perform, you know, it, it sort of enables us to sort of understand and therefore mitigate risks. I think that's super important. You know, the other piece of this was around sort of supporting, you know, companies' or firms' efforts to protect themselves from malicious cyber actors and insider threats.
I thought that was a really important add. I, I, I'm obviously sort of in full-throated support of that principle. There was almost nothing in there, though, about how they were gonna achieve that. So I actually thought there was quite a bit more room to talk about how, you know, what kind of information sharing mechanisms would be useful on this issue.
And, you know, I think room to consider things like a NIST-led, you know, framework for AI security that might help kind of build a baseline. What about incentives for, you know, secure development? There's kind of a lot more that could be done there, so glad to see it mentioned. We'd love to see it fleshed out.
Kevin Frazier: Yeah. And it brings to mind CISA, which CISA doesn't have the best reputation of folks buying into, hey, let's share a bunch of sensitive cybersecurity information and ensure we actually engage in this practice of sharing whatever exploits we're seeing and things like that. So there's a lot of room for improvement.
Jessica Brandt: I think that's also gotten harder recently, but, but let's hope we can get ourselves on the right track.
Kevin Frazier: Yeah. And I, I do wanna circle back to Janet's point about adoption generally being in this section, which, and this incorporates your perspective too, Jessica, on when I go and I speak to law groups, they're already such AI skeptics of, oh, well the second I enter that information, it's just gonna regurgitate out the whole paragraph I entered and there goes all my client information.
There's a whole ’nother can of worms. I don't want to get into the professional rules of, of, yeah, that's not fun. But if you have that level of explainability and interpretability, then when and if we achieve AGI or superintelligence or whatever you have, that adoption component is going to be so central. Which brings up the, one of the first principles, the leading principle according to the plan: worker readiness and kind of workforce development.
And so, Neil, what was your takeaway on the focus here on trying to get more employers and employees themselves up to speed on AI?
Neil Chilson: Yeah, I mean, I think this is a general, generally, right. It's hard to disagree with this point, right. So it's,
Kevin Frazier: I don't want workers to learn anything. Right, right.
Neil Chilson: Yeah, yeah.
Kevin Frazier: Let 'em figure it out.
Neil Chilson: I mean, and, and I think given that the plan is very focused on this as an opportunity, they want the opportunity to be, to be broad. This isn't just an opportunity for companies who are building products. It's a, it's a big, it's a big opportunity for everybody. And a lot of that worker development is, is not only training on how to use AI, but some of that worker development's also, like the skill sets that we need to actually build this, all this infrastructure that we need as well.
So it's a pretty well thought out, like, idea that basically we need to get people up to speed. Obviously, there's a lot of details to figure out there. I did wanna say, you know, you sort of framed the adoption question, which, like Janet, I am a huge fan of the fact that there's more focus on adoption, as a question that we'll have to answer after we get to AGI or ASI. I would say if we stopped development right now, there is a huge amount of benefit that could come from adopting these products across the economy. And so I think adoption is a challenge right now. I think the plan recognizes that, and I think that's right.
Kevin Frazier: Yeah and I, I would applaud that if it wouldn't mess up the microphone, so I, I will not do that.
And I wanted to kind of wrap up this section quickly on the repetition here of combating synthetic media. And Jessica, you and I, and this whole group, could talk for four hours about mis- and disinformation and AI. What was your takeaway from this slight mention of the Take It Down Act, saying yes, we liked that, and let's do more of that? Is this another instance of good messaging? Let's wait to see it.
Jessica Brandt: Yeah. There's a handful of things here that I think are relevant. One, you saw that it calls for the NIST risk management framework to remove the word misinformation, along with climate change, DEI, et cetera. I think it's a little bit ironic to be purporting to promote and protect free speech by creating a government list of banned words. I think that, you know, is sort of unhelpful. And I think, you know, we do actually need to have kind of an honest conversation about what the last five years of content moderation debates, which I personally think have served nobody, you know, mean for a world of LLMs.
Like what do we expect you know, these models to, to say when asked questions about sensitive political topics. So we have to grapple with that. I just don't think this is like an honest grappling with that. You know, on the specific provisions around synthetic media, I thought there was, you know, some interesting stuff here around particular use cases you know, regarding within the legal system you know, useful. And I think broadly, like not being drawn into a broader narrative about deep fakes are coming for us all in every context, everywhere, helpful.
Kevin Frazier: Yeah. And it does bring a lot of technical questions to mind. Specifically, I'm jumping to a different section, but I'm trying my best to adhere to the buckets. The provision requiring government procurement of models that show objective truth, I believe is, is something akin to the language called for here. Not sure how you train necessarily for objective truth, if anyone here would like to share their insight
Jessica Brandt: Or who should be the arbiter of objective truth.
Kevin Frazier: It's a tough one.
Tim Fist: Yeah, yeah, yeah. I guess there's like kind of two challenges here. Yeah. One is defining it and then two is ensuring that the model sticks to whatever your definition is.
This comes back to this robustness, interpretability, control question. Just like, currently we don't have good methods for reliably getting AI models to do what we want, as evidenced by a lot of the things that have recently happened, especially around Grok. And so, yeah, I think there's like kind of two challenges to deal with here.
The first is obviously much more contentious, like, you know, what should the specific values be and how do you adjudicate that in really specific cases, but also just technically like this is, you know, currently like, well outside the reach of what the field is able to do. So I-
Jessica Brandt: Which is to say, sorry, but just that it might be sort of hand-wavy, right. You know, for the atmospherics, and then, you know, we'll see where the rubber hits the nail.
Neil Chilson: Given all the reporting on this, I thought that this would be a much bigger focus in the plan. And it really isn't that big of a, I mean, bias only appears twice in the entire plan. One of 'em is in like the sort of preamble and one of 'em is in this section.
But one thing that I did notice, there were lots, lots of people talking about this online, but I don't think anybody really focused on the, there's a, there's a, yeah, there's two words here: top-down ideological bias, right? And so I think the concern of the administration is not the sort of general algorithmic bias, right?
That, you know, might come from having a specific data set. It is that they don't want, they don't want developers putting their thumb on the scale on ideological bias. And that's the sort of thing that I think is technically easier to clear up in many ways, because it often happens in the inference phase, it happens with direct instructions to the AI about how it should answer certain things. Not that it can't happen in other ways too, but that's the easiest way. And we saw that with Grok. It was relatively easy to figure out why it was doing that in some ways, because it was kind of instructed to do some of that, right?
Right. And so, so I think that that's what they're going at here. And that to me, as a big First Amendment guy, classical liberal, gives me a little less heartburn than, than it might if it was talking about, from a practical point of view, eliminating all bias, because that, that just isn't possible.
Kevin Frazier: I do think this is probably one of those points where we may see further information from an executive order in the coming days on this. And I would also point out that this does seem to be one of the issues that if I had to bet, and I'm a podcaster, so I have no money to bet, but if, if I could bet, I would say this cultural issue of thinking about what values are implicit to models is just going to become more and more of a concern, especially as we get AI adoption increased among in particular kids, right?
If you have any indication that a kid's engaging with an AI companion that's saying, hey, you know, let's do X or let's do Y or we don't care about this, but we really care about that, then I think we'll see even more of a focus here. But that's enough innovation for today. We can, we've done so much. Tim, you are my go-to for any idea about AI infrastructure.
Your ‘Special Compute Zones’ paper is bookmarked on my Google Chrome and I'm just going back and comparing notes today very briefly of what you were proposing there and what we see in the plan. Well done, sir. You were clearly persuasive. So I wanna start at a high level of just how important is this AI infrastructure question to achieving AI dominance?
And where do we stand today with respect to where we need to be and what our infrastructure looks like right now?
Tim Fist: Yeah. So, you know, if people know anything about AI sort of recently, it's that it, you know, consumes a lot of energy. And I think there's a lot of caveats to this, like on an individual basis, your usage of something like ChatGPT is a very small fraction of your own energy consumption, but sort of writ large for the industry.
Sort of developing and deploying these models at scale requires a lot of, you know, concentrated power that is going to chips and data centers. And yeah, building these things is rapidly becoming this industrial-scale undertaking, one in which the U.S. has historically led the world, but we're sort of now running up against the sort of classical barriers that we normally hit when trying to do large-scale infrastructure projects, which is environmental permitting.
And we've talked a lot about the specific issues here and everything that goes wrong. But yeah, in general, sort of infrastructure projects in this country are held up for years largely because of the National Environmental Policy Act (NEPA). And yeah, unfortunately it's the case that sort of like a lot of the large scale projects that are prevented from being connected to the grid are clean energy.
So sort of those environmental groups who are typically, you know, suing these projects and causing them to sort of get held up endlessly in permitting delays are often undermining their own supposed goals in terms of protecting the environment. And we see this play out in AI. And so with AI in particular, sort of thinking of the requirements over the next few years, Anthropic just recently put out this great report saying that they think they need five gigawatts of power to train their most advanced models in about two years.
I think it was 2028 was the year that they gave, which to put that in perspective, you know, a single gigawatt is about like one large nuclear reactor. And so that's five large nuclear reactors to train a single model. So, you know, running continuously for presumably a few months, which is kind of the average time that you train these models.
Kevin Frazier: And just to check my facts, yeah, we don't see those coming on board in the next two years. Yeah.
Tim Fist: Yeah, so there's, yeah, there's sort of been this effort over the last, I'd say two years to kind of snap up all the excess energy capacity, including at nuclear power plants across the country. And we're now seeing sort of companies push up against that limit of that approach and also start to look overseas to get access to large scale energy resources.
So, you know, the deals in the UAE in the Gulf are sort of like the biggest example of this that we've seen, with OpenAI and others sort of going over there and wanting to build, you know, their Stargate UAE over there and get access to both energy, but also, you know, the large amounts of money that these countries have in terms of sovereign wealth funds.
And so, yeah, I think there's basically, you know, to boil it down solving this environmental permitting issue and sort of like the delays that, that imposes on these kinds of projects, both data centers themselves as well as power plants and transmission lines is really important if you want to maintain leadership in AI. And -
Kevin Frazier: And to pause there for, for one second, because there's, I want to hear your takes on a lot of these specific provisions, but
Tim Fist: Yeah.
Kevin Frazier: Yeah. Janet, can you tee up the national security ramifications of building this infrastructure here as opposed to just saying, anytime we need more power production or another data center, we're always going to look overseas, or how do you think through that calculus?
Janet Egan: I mean, we're looking at the most transformative technology potentially in human history, and a willingness to cede leadership to an overseas country, I don't think, is in the U.S. national interest. And even to put it more in perspective, I saw in Anthropic's report, and some other places too, that China brought online 400 gigawatts of energy last year.
And so, you know, there's multiple factors that enter into like your competitiveness in AI. You've got compute—U.S. leading, but then you've got energy to power that compute, you've got data. I mean, we've got copyright and other privacy laws here. Other countries don't necessarily, and then you've got talent.
And across this spectrum, I feel that energy is really posing a risk to U.S. leadership in this space. And sure, you can partner with overseas countries, but you wanna be very careful about what capabilities and to what extent you are transferring the most powerful capabilities offshore. It's a very different situation if you're looking at your most, your closest partners, in top secret facilities, or like with American protection and ownership. But if you are looking at a country that has ties to China, I think it's a very different story.
Kevin Frazier: Neil, did you want to jump in here? No, no, you're good. You're good. That's great. No jumping. Stay, stay, stay, stay seated, please. Jessica, I think looking again at these provisions, what stands out to me is that holistic picture yet again, where we're not just saying how do we improve the policy, but thinking through some of those second order effects here. The cybersecurity concern, what stood out to you?
Jessica Brandt: I thought this was really good stuff. The cybersecurity provisions and also the stuff about sort of maturing the federal capacity to do incident response. I mean, we actually, I think do need to kind of update our homeland security approach for an era of 21st century geopolitical competition that is technology enabled.
And I think, you know, obviously again, like really matters where the rubber hits the road, but this sort of tees up and elevates the need for that. And so, you know, my hope is that it can be followed through on.
Kevin Frazier: Yeah, and it's wild to think that cybersecurity really has to be something that is baked in throughout the entire AI stack.
And I'm not sure that's always been appreciated and it rings through this report which is impressive. So Tim, for you, given that you are well versed in all of these issues, are any of these provisions something you just circled with a big red pen of most excited about, or that you think is most transformative that you'll be paying attention to?
Tim Fist: Yeah, I'd say like ultimately, in sort of the environmental permitting space, congressional action is the thing that actually moves the needle. We've talked a lot about this at the Institute for Progress in terms of, you know, specific things we wanna see in a NEPA reform bill, but I think the federal government has a range of authorities to speed up these kinds of projects in specific circumstances.
And so, I'd say my take on this so far, and obviously a lot of the devil is in the details on a lot of these things, is: these are all pretty modest, but together they'd have a potentially sort of, you know, moderate effect on our ability to sort of build data centers and associated energy infrastructure faster.
A couple things I'll call out: you know, the categorical exclusions thing is very useful. So this-
Kevin Frazier: Can you flesh that out in one?
Tim Fist: Yeah, so basically a categorical exclusion is where an agency basically determines that a particular type of project doesn't need to go through the environmental review process 'cause it's sort of, you know, excluded.
And so this, obviously you can't do this for an entire big nuclear reactor, but you can do it for a lot of the activities that sort of add calendar time to the project. So site characterization, going and taking soil samples, doing preliminary design, you know, installing transformers and switchgear sort of at the site before you sort of go ahead and do the big build.
These are all potentially activities that you can categorically exclude. And if one agency determines that it can do this, the other agencies can all do this as well. And so, yeah, I think directing agencies to be very creative with figuring out which categorical exclusions they can find and use is great. Yeah.
Janet Egan: Can I ask a follow up question on this?
Tim Fist: Yes.
Janet Egan: Because I think, I, I know you said that congressional action is what's needed to drive this forward. And people have different views as to-
Tim Fist: It’s the most helpful thing. Yes.
Janet Egan: Okay. I wonder, how much do you think can be achieved with purely executive action, particularly given the litigious nature of these, these environmental permitting ideas and, and build-outs?
I just think that some of the biggest delays seem to be just really protracted legal cases, particularly across jurisdictions, like the transmission lines. How optimistic are you that in the next few years, which are critical for winning the AI race, this is gonna be enough?
Tim Fist: Yeah. Not optimistic overall.
So, yeah, like I said, kind of modest effects, like we need more. And I think what I'm optimistic about is that this at least plants a flag in the ground for this administration, where, you know, I think, you know, Congress has been waiting to some extent on the administration, especially sort of Republicans, to figure out what they should be prioritizing in AI. And I think this sort of, you know, puts a flag in the ground and says, we really care about AI and energy. Here's the things we're excited about. We think, you know, NEPA is this huge burden, and hopefully we do see bills coming off the back of this that try to tackle the core underlying problems.
Kevin Frazier: Well, now you're just sounding like Oliver Twist. Please, sir, may I have some more. We'll see what happens, we’ll see if my accent improves. Yeah, so before we move on to a different section here, I do think this is one of the, there's a provision here that didn't maybe find a home in the right spot, or they weren't sure where to place it, which is the workforce for AI infrastructure.
And, and we mentioned workforce earlier. Again, as I noted, workforce was the number one principle that was flagged. Although I will say if you look at the introductory statement by the president at the very start there's just a short maybe, oh, it may be one long run on sentence, but, there is no mention of workers.
There's reshaped global balance of power, address our global competitors in this race, focus on national security imperative, global technology dominance. So I understand that we listed workers as the first principle. I'm not sure I saw that as thoroughly fleshed out, but did anyone have a different take on whether this focus on workers really did ring through, and what to watch for in the coming months and years, I guess?
Neil, any indications? I will say, I'll, I'll tip my hand and say that we have been hearing about reskilling and up training, excuse me, reskilling and upskilling since globalization, since probably time immemorial. I mean, you can go to the 1955 report on automation, as one of us has. For the audience, I am putting my finger on my nose.
And we see this language brought up again and again and again about, oh, we'll just reskill or upskill folks, and they'll adjust and we will carry on. I didn't see the level of detail, and it is hard in a Christmas tree of a report to say, I'm gonna list out every single new program and innovation. Did you get the sense that this will be something that they're diving into and trying to improve upon past models of re-skilling?
Neil Chilson: I, I don't, I, I mean, I don't know. I mean, there's so much left to be developed in this space. I will, I will point out that when they talk about that first principle, the, the wording there is pretty careful, actually. It says, first, American workers are central to the Trump administration's AI policy. The administration will ensure that our nation's workers and their families gain from the opportunities created in this technological revolution. So it's, it's not clear that that first principle is entirely about gaining from this technology in their jobs. It could be as, you know, American citizens or American residents generally.
And, and that, that sort of family category adds that. I will say the thing that is relatively new here: a lot of people talk about re-skilling and retraining because AI is taking your job. Here, there's a whole section in this infrastructure section that's about making sure we have the workforce needed to build this infrastructure. That's different, right?
Those are new jobs. That's not like re-skilling people. That's like, there's a huge number of plumbers, electricians, et cetera, that we need to be able to build these things. And so I think that focus is somewhat new to the conversation. It's something plenty of people have been calling for, but I think that emphasis here is really important.
Kevin Frazier: Yeah. And it's going to be an incredible effort to see that whole scale development of, yeah, where do we get all these plumbers? Where do we get all these engineers? Who's gonna raise their hand?
Neil Chilson: Right.
Kevin Frazier: I'm not sure. But it will be an impressive kind of big scale initiative yet.
Tim Fist: Yeah. And I think I'm not sure if any of you Ctrl-F'd the word immigration in this, but it does not appear.
And I think, you know, this is something, you know, we came, we came up against this in like the CHIPS Act, and I think it really wasn't solved well, like where the government recognized that there's a massive shortage of workers to actually both build fabs but also work in them as fab technicians. And this is a problem that, you know, there's just not the workforce here in the States to do this.
There were like re-skilling and training programs that I think were not hugely successful. But also other countries are having these same problems as well. Like, you know, Taiwan and South Korea also have huge shortages of workers here. And so, yeah, it's not surprising to me that sort of, yeah, for these kind of like construction-related jobs, immigration potentially isn't the most helpful thing.
But, you know, when we think about winning the AI race more broadly and sort of like the really sort of high skilled talent that we actually need to bring in we did sort of a data analysis where we looked at, you know, top AI startups that were founded over the last two years and found that, you know, more than half of them had at least one immigrant co-founder.
And there's nothing in here around that. We had a bunch of recommendations in our action plan response basically looking at sort of what are the targeted moves that the federal government could make to make high-skilled immigration much easier. Unfortunately, none of those are in there, for, I think, you know, understandable reasons; it's a very political hot topic at the moment, but yeah,
Kevin Frazier: Hot topic, and there were 10,000 comments, so maybe they just didn't get to that second page of yours. They just really focused on that infrastructure workforce almost. And before we get to our third bucket, because this one doesn't fit as squarely into our, our last topic here.
Another key omission we've talked about is IP, right? We don't see the big data question taken head on. And yeah, Janet, do you think that was a strategic decision to leave that out even though right before recording we did hear that President Trump mentioned that access to data was going to be a key consideration of his and perhaps leaning into a sort of fair use exception.
I'm, I'm doubting that that actual formal language was used, but saying we need to make sure that this data question gets answered and that we don't get hung up by copyright. What do you think the data-sized hole in this, or the IP-sized hole in this plan, signifies?
Janet Egan: Yeah, it flirts around the, the data issue.
It's got a little bit in terms of the scientific data sets, yes, but it just does not touch copyright, and I think that's because it's such a contentious political issue. If you're looking at a national security angle, you've got other countries who have much greater access to data or much fewer limitations on what data they can access and use that I think will be of concern if copyright decisions come out a certain way. I think it's something that administration might need to act on at some point, but I can imagine it's contentious.
Kevin Frazier: And at a later day. Yeah. And here again, I think the comparison of what adversaries are doing is fascinating, where in China we know there are national data exchanges, for example, that remedy a lot of this data shortage issue or at least partially solve it. Obviously AI labs would like as much data as possible.
Neil Chilson: Part, part of the challenge here is to, you know, there's, unlike some of these other areas, the, the levers that the executive branch can pull on copyright are relatively limited. I mean, they could have the copyright office do a report of some kind.
Kevin Frazier: Or fire the copyright office. Yeah.
Neil Chilson: Maybe they could have, yeah. So. But the, the levers are limited, so maybe that's another reason why they don't talk about it much.
Kevin Frazier: So, Jessica, I'm excited to go into our third pillar here, Lead in International AI Diplomacy and Security.
What were your vibes from, from this section and did any of these provisions and recommendations stand out in particular for you?
Jessica Brandt: Yeah, you know, I think it's a worthwhile ambition. It's not clear to me how that squares with recent changes at the State Department. I mean, I think some of the capacity to do this kind of work has been retained, but also, you know, my understanding is, like, the science and technology advisor to the secretary and kind of some of that, some of that infrastructure is gone.
So I just think the administration's gonna wanna think through, you know, ensuring that it retains the capacity to do the kind of work it's, it's said it wants to do. On the, you know, sort of competing in standard-setting bodies, great. You know, let's just also be mindful that these should be scientific and technical bodies, and we would like to, I think, we, again, this is a place where we have to thread the needle between understanding sort of geopolitics and, and engaging in our interests.
And also like not making it a race to the bottom, where this, like, place where science and, you know, where like technical experts can really exchange free from politics kinda becomes this locus of geopolitical competition. I think we'd all be, you know, the poorer for it.
Kevin Frazier: Do you think if you are sitting in the EU right now, what's your reaction to this section? Are you saying, well, at least it wasn't as angry as Vance's speech in Paris, so that's a step forward.
Jessica Brandt: Yeah, I mean, all the stuff about like American values and the cultural references, I think do not help to create the sense, right?
There was language here, I don't have it at hand, but there was language here about engaging with like-minded partners, but it's not clear how like-minded we are at the moment. And I, and I think some of the language and the rhetoric does not help to reinforce the perception that we're sort of on the same team. So it remains to be seen how that unfolds.
Kevin Frazier: Yeah. Janet, how do you think this is being received abroad right now? If you were to, to tap into your spidey international diplomacy senses?
Janet Egan: I think there's some allies and partners that are going to be welcoming this. I think particularly the first line on exporting American AI.
And for me that's one that's actually, that contains a lot. So, you know, last administration had a really restrictive approach to AI exports and primarily focused on the frontier of AI and chips. But what we're seeing now is this shift to full stack technology diplomacy. So they're looking at not just the hardware, but also the software, the models, the applications, and I think that's a really exciting shift.
Like Neil, you were saying that if, even if AI development stopped today, there'd be so much we could do with diffusing existing capabilities. I've, I've heard the phrase, no one's bothering to learn how to ride the bike because the cars are coming. And I think when you engage with international partners, they aren't AGI pilled like the US in most cases.
And their questions are, well, where are the breakthrough health applications? Where are the public good applications? And there, there's a real demand in international countries for those things. So at the moment, I think we've actually risked ceding, there's been an underinvestment in public good applications, in scientific breakthroughs, and that risks ceding space to China, where you don't need to have the frontier of capability to provide wraparound, full stack, full stack technology offerings and support. So for me, this seems super strategic. You need government involvement, because the labs are focused on the frontier, and there's economic, like it's rational, economically rational to keep pushing that frontier forward. But at the same time, there's so much more that we could be doing to support the economic and growth ambitions of partners.
Kevin Frazier: Well, and it seems too that this section can't be analyzed without looking at the open source language of the pivot to saying, hey, we wanna make sure it's U.S. AI that folks are using around the world. And you don't get that with closed models in the same way you can with, with an open source approach. But Neil, yes.
Neil Chilson: This is the section that, you know, other than the deregulatory part in the innovation section, stood out as the most direct contrast to the Biden administration approach. The diffusion rule was kind of an anti-diffusion rule. And it swept in many of our friends and allies in a way that I think surprised many of us.
And this is a, the, this takes the exact opposite position, that basically those people should be getting the full AI stack rather than having to, you know, beg for chips from America. And so, I think that really is a, a very big shift. And I, I think it's a positive one because the, the dynamic here is the rivalry with China.
I think the Trump administration sees it that way, and that if we don't fill that gap for many of these countries China, China will.
Janet Egan: I think there's, I think there's a yes-and approach for the last diffusion rule as well, where I think there's still probably gonna be quite strict controls on the chips, but by broadening your exports to more of the technology stack. That means, like, if you think about compute, the chips are the most defensible advantage that the U.S. and its closest allies in the ecosystem have; it's something you wanna be really careful about ceding leadership over. Like if, if China's out-competing us on energy, data, talent, potentially one day, this could be really key to U.S. leadership.
And I think it's unclear where they're going to, you know, fall on that spectrum as to how permissive to be with partners like the UAE and Saudi Arabia. There's some deals happening, but there's strict controls on those. And I think as we look forward, there'll be work to codify, in a new AI diffusion rule or otherwise, more controls on the chips while nevertheless promoting export of other things.
Neil Chilson: Yeah. And I had actually flagged the recommended policy actions here as being pretty managed-trade in many ways. I mean, the way they sort of recommended this, there's a lot of, a lot of potential for micromanagement on the trade there in a way that maybe achieves some of those goals that you were talking about. So it's not, it's not laissez-faire here by any means, it doesn't seem like.
Kevin Frazier: Yeah. And last but not least in terms of specific provisions, the invest in biosecurity kind of tucked away at the very end. Jessica, I don't know if this stood out to you or if anyone had a hot take on this addition.
Jessica Brandt: Good stuff. Not totally fleshed out, not clear how it aligns with, you know, proposals. I'm not sure I wanna say what I was about to say.
Kevin Frazier: You're good. You're good.
Jessica Brandt: I'll leave it out, sorry, because I don't wanna-
Kevin Frazier: So just not totally fleshed out. Yeah, you can just carry on from there.
Jessica Brandt: Okay. So just you know, good at a high level, not totally fleshed out. We'll see where the rubber meets the road. Yeah. Yeah.
Kevin Frazier: I, I do think it's telling that we're seeing a sort of convergence now between the folks who are accelerate- or accelerationist with respect to AGI and the folks who are maybe taking a safety bent, of saying, hey, we, we may want AGI, and both may want some pursuit of the best technology, while very much acknowledging that these biosecurity threats in particular are worthy of concern. Tim, do you want to jump in here?
Tim Fist: No, yeah, just agree and I think synthesis screening is a good approach here where, you know, you have these like centralized providers who are synthesize, synthesizing sequences for you, like you send orders to them, like a 3D printer for like proteins and such. And yeah, at the moment, like especially with like AI biological design tools, it is the case that it's becoming easier to, you know, design super COVID if you wanted to. And so it kind of makes sense that we would want these centralized syn, synthesis providers to check, is my customer a terrorist organization trying to synthesize super COVID? And if so, probably we shouldn't let them do that. I think it's a good, good thing to do.
Kevin Frazier: I, I would double down on that take just, alright. Yeah.
Janet Egan: Yeah. I agree. But I think in the, in this section, there's one thing missing. It talks about engaging with international partners and allies, but if we look at the COVID example, that's a really clear demonstration that it doesn't have to be from a partner and ally that a risk emerges that can really devastate your national interests.
And I'd like to see some engagement with China on these things, like risk-managed, not sharing any sensitive IP or upskilling at all, but there's some good practices that should be shared across, even with competitors, minimally.
Tim Fist: Yeah, totally agree with that. I think there's like clear like, you know, shared international interest in preventing transnational crime especially around like terrorism and like drug trafficking and other things.
And like, you know, in AI these things could be exacerbated. Yeah, it makes total sense. Yeah.
Jessica Brandt: One thing I thought was missing in this section was references to mitigation. So, great that we're gonna focus on evals, but what happens when an eval shows that a model crossed a threshold? All these companies have signed, you know, safety frameworks that say they'll pause deployment absent sufficient mitigations.
But I don't think we have anywhere near, you know, anything approaching consensus on what constitutes a sufficient mitigation, who gets to decide what is sufficient. And I just, I, I think this is sort of coming, like we might not get there this year, but it's like 18 to 24 months before one of these models, like, can cause real-world harm at scale.
Maybe it's in bio, maybe it's advanced cyber, I don't know. But like really what is the national security policy community gonna do when that happens? I mean, these companies are spending upwards of, I don't know, $20 billion on a training run. And so how are we gonna, how are we gonna sort of manage sort of promoting innovation and, and, you know, fulfilling those investments and also managing these kinds of risks.
Kevin Frazier: And one maybe undercover provision that I don't think will get a ton of attention, but should is this focus on talent development specific to the Department of Defense, where if you go talk to folks in the armed services right now, they will tell you, hey, we don't know where we're gonna get all this AI talent that we need, all this AI expertise.
And there are provisions in here, for example, of encouraging the senior military colleges to have AI-specific training and bring in instructors. And that sort of upskilling and those educational efforts are so exciting to me, of saying we need to start thinking about the next generation, because, to your point, Janet, earlier about talent, it's not just today's talent, but how are we thinking about the entire pipeline of folks who are going to have this level of expertise, who can begin to answer those mitigation questions and whatever comes down the road?
Well, we've officially reached the vibe portion of, of this episode. So Neil, earlier you were talking about how we've seen a pivot, arguably from a pervasive sense of AI safety being the dominant narrative on the Hill to perhaps now a complete or more thorough embrace of, we may not agree on all the details as we just saw from the AI moratorium debate, but we do need to lean into AI dominance as a national priority.
How secure do you feel about that narrative sticking? Is it, is it here to stay, or, or should we be looking for another pendulum swing?
Neil Chilson: Well, I, I think the rivalry with China like anchors it somewhat, right. As that narrative took over, I think the, the idea that we were gonna sort of, we'd be careful on our side and that would be enough to, to get there I think that argument gets weaker, or at least it has less narrative juice. So I think this sticks around for a while, as long as China's in the race and I think they're in the race. One thing that I did find interesting and that anchors to this pretty well is you know, the, the Center for AI Standards and Innovation, the renamed AI Safety Institute is clearly sticking around.
It's referred to many, many times in here. Right. And so, I think they're gonna have a lot of work to do because they're collaborators on so much of the stuff that's in here. So that, that is new. I think that's, that's interesting. And we'll see how that, like, how, like how the staffing of that affects, you know, the, that the interplay between safety.
Tim Fist: And yeah, this is super interesting, because currently that team is, I think, around 20 people with a shoestring budget, not officially authorized. There's, you know, maybe an appropriations bill cut for them coming. But yeah, the amount of things that they have in here, as well as just, you know, what Commerce Secretary Lutnick asked them to do in his recent announcement, is way beyond what they currently have the budget to be able to do.
So yeah, it would be good to see specific appropriations. I guess they're gonna be
Neil Chilson: the first of those government employees who need to make sure that they have access to AI tools. Yes. Yeah. Yeah.
Janet Egan: Just to double down on that, I don't think there's anything more important than ensuring that the public sector has an awareness and a situational understanding of AI capabilities as they're emerging.
And I think there's a lot of people out there who want to make sure that AI goes well. So I feel like they might soon be hiring and I'd just encourage people to, to yeah, go work for the government. You've heard it here.
Kevin Frazier: Scaling Laws is now the Craigslist of podcasts. We will be listing off job opportunities starting next episode.
I do think, to your point, Neil, as I brought up earlier: if you're in the Department of Commerce tonight, if you are at CAISI tonight, if you are at BIS tonight, you're thinking, whoo boy, when's my pay raise coming, based off of all of the initiatives that are coming down the pike? And I wonder if this is another area where, Tim, Congress too is saying, okay, well, how are we going to respond to this in terms of budget allocations, in terms of prioritizing what happens next session?
We know the House has closed its doors through September, so we have a little bit of time to think through what they're gonna focus on.
Neil Chilson: But, well, to be fair, I mean, this is quite a bit smaller than the massive executive order from the Biden administration. I think it has far fewer directives.
Kevin Frazier: That is true. That is true.
Neil Chilson: Maybe they have some resources left over from that.
Kevin Frazier: So yes, they can go back to that playbook. And I'm wondering, too, how you all think this might shape the discourse at the state level. So following the conclusion of the AI moratorium debate, which I think everyone can agree was a, I'm going to just say bleep show and not complete my own sentence there.
Is this going to be something where folks are now going to their governors or going to their state legislators and saying, hey, we've got some guidance now about what the executive really wants to see, let's make sure we're aligning here? Tim, I'm eager to hear from you on this in particular, because states have such a key role in a lot of these permitting issues and a lot of these decisions around energy allocation and how land may be used.
Obviously, we're talking mostly here about federal land, but how do you see states maybe buying into this or acting as bulwarks to it?
Tim Fist: Yeah, so I think permitting is a key area where productive state regulation is really possible for the goals outlined here. Realistically, though, I don't think we're going to see a massive slowdown in proposed state AI bills as a result of this. Neil, you often draw attention to this: I think, you know, last year there were around 600 AI-related state bills introduced, and I think so far this year we're coming up on double that already, about halfway through the year. So the velocity of state AI-related bills is coming really thick and fast.
And from my perspective, the median state AI bill is likely pretty bad, but some of them actually can be, you know, pretty productive and do useful things. One of the main things that's really difficult at the state level is that the median state legislator has like three or four staff, and they're getting paid like $30,000 a year. They do not have the capacity to hire the kind of team and do the kind of research that's required to grapple with big questions around AI. Like, what capabilities are we gonna see in two years? What jobs are gonna be automated?
Are we gonna see recursive self-improvement? When's ASI gonna come? These aren't the kind of things you can resolve at that level. But there are a lot of things that do make sense to adjudicate at the state level, including the effects that state and local permitting laws have on AI development.
And yeah, I'm especially interested in your take, Neil, on the sort of moratorium-esque provisions here. Like, this FTC one is extremely interesting, but I don't know what the effect would be yet.
Neil Chilson: I don't fully understand the FTC one, either. The one that drew my attention, because of my background having been at the FTC, was the FTC one, which I thought was a sort of repudiation of the previous FTC in a way that I don't think has been very vocal. Right. And that was interesting.
Kevin Frazier: I would love to get your perspective on that. So this FTC provision basically says, hey, we're gonna reexamine a lot of the agreements we've reached with companies and see, maybe we don't wanna adhere to that agreement anymore. What's on the table with respect to what could actually happen at the FTC? Chair Ferguson tonight, what's he thinking?
Neil Chilson: I mean, so first of all, it says, like, we're gonna look at the active cases that were started under the previous administration, and interestingly, it's not just whether the case itself would harm innovation, but whether the theories of liability that are advanced might harm future innovation, right? It doesn't have to be about that case. That's interesting. And then, yeah, all of the major tech companies, I mean, I don't know what we're counting as major anymore because there's more of them now.
But all of the major tech companies are under a consent order with the FTC on something. And so there's lots of existing agreements that could be reviewed to see whether or not this makes sense if we want these guys to build, like, the AI that's gonna defeat China. So, I don't know exactly what's on the table. This is not an FTC that's particularly, like, super friendly with U.S. tech companies. So there's a bit of trying to figure out exactly what this provision is saying, and I'm not really sure.
Kevin Frazier: So to wrap things up and put a little bow on our Christmas present, I want to know what everyone is either gonna be keeping their closest eye on or what they think is most in need of further examination. I'm gonna go ahead and start because I want to be a fair professor to all of you. I think for me, with the school year coming up, I am really, really interested in the execution.
And this is pulling from a different executive order, but it gets brought up in the AI Action Plan: the execution of making sure schools have the resources they need to get their teachers and their administrators up to speed on AI, because the number of state superintendents, for example, who haven't even outlined an AI policy is just ludicrous.
The fact that we don't know how we're using AI in schools, or what's permissible and what's not, is just bonkers to me. And we can't expect the next generation of students to just vibe on AI and suddenly become the AI-ready workforce that is called for here. So that's something that I'm gonna be studying closely, and I hope the folks at the White House are leaning into heavily. Neil?
Neil Chilson: Well, because I spend so much time looking at state legislation, I'm gonna really be keeping an eye on how these provisions get operationalized, the interaction between the federal government and the state governments. There's lots of appetite in Congress still to do something to assert, you know, a congressional role here.
Obviously the administration thinks that there's a strong federal role in this policy space. And that's kind of counter to at least some of the proposed efforts from some of the, especially blue, states on how they want to do AI regulation. And the second thing I'm gonna look at is this woke AI stuff. You know, the term isn't really in here.
Kevin Frazier: Where is it?
Neil Chilson: There's a little bit of bias
Kevin Frazier: Right
Neil Chilson: But there's an executive order. I'm really curious about that. I think there are constitutional ways they can achieve some of those goals, but there are some very unconstitutional ways they could pursue those goals as well. And it'll be interesting to see how that plays out.
Kevin Frazier: Yeah. And I think this might be more like Hanukkah than Christmas, because I think we're going to get multiple days of gifts that we're going to have to grapple with. Yeah. Yes. Janet?
Janet Egan: Two stand out to me. The first is the interpretability, robustness, and control work. I think these are the tricky issues that, until we get them right, you can't actually adopt these things in real critical systems and functions. We still don't really understand why AI systems go weird sometimes, and until we get that right, it really inhibits our ability to use them for defense and national security and critical infrastructure outcomes.
And the second I'm really interested in, and we're doing some work on at CNAS at the moment, gentle plug, is getting the right balance between the protect and the promote when it comes to U.S. AI offerings. So on compute, we probably want to be, not restrictive, but careful about the amount and to whom large quantities go, given that it is what underpins America's AI edge at the moment and is very defensible.
But on the other side, how do you promote other aspects of the tech stack broadly to make sure that it's U.S. AI that the world runs on?
Kevin Frazier: And Janet, sorry before I let you go, I can't let you off that easy when you tee up a great ball like that. Were you surprised by the relative lack of analysis of U.S. military integration of AI here?
Or do you think that's something we'll see forthcoming? I mean, we had some mention of wanting to encourage DoD adoption of AI, but we didn't get a play-by-play analysis of what our geopolitical ambitions are with respect to integrating AI into the armed forces, in my opinion. But yeah, if you had a different read, I wanna hear it.
Janet Egan: I think there's still more to be done in that space. I'm interested in what happens to, like, National Security Memorandum 25 that came out under Biden, which had a whole range of efforts for agencies to take part in, to understand and stress test AI capabilities to the very edge with classified data.
I'm interested to see what happens in this space, but some of the things that don't talk about the military side, like the interpretability, robustness, and control work, I keep coming back to those because that's what needs to underpin it.
Kevin Frazier: I like tying a nice bow on our Christmas gift. All right, Tim?
Tim Fist: Yeah. Two things I'll be very interested to see how they're implemented, and excited to see if they go well. One is, there are all these measures throughout the action plan that gesture at setting up an infrastructure and processes for measurement and evaluation. So, you know, at the moment it's just really hard to predict what's coming, and the government is sort of even further behind everyone else at the moment.
It just doesn't have the team and people and processes and sort of the science that it needs to be able to do proactive policymaking: figure out, you know, where do our capabilities sit right now? Where will they sit in six months? How does this apply to China? What's the so-what in terms of what we should do from a policy perspective, both around export controls, but also basic R and D and security and things like this.
And so, yeah, there's a lot of things in here that try to resolve this, but, you know, I think we've been trying to do this for a while and it's extremely hard, especially with the level of resourcing available to places like CAISI. So excited to see how that plays out. And another thing I'm just glad to see, and I hope it plays out well, is the sort of focus on full-stack security, so for data centers, software, models, et cetera. And I think this action plan really acknowledges that, you know, AI models are becoming an important source of new critical infrastructure that is going to increasingly drive economic growth, but also scientific discovery.
And there's a bunch of types of attacks that an adversary could carry out on that, from, you know, denying the operation of data centers by sabotaging power, through to data poisoning attacks on models, making them love owls, if you saw that recent paper on this. You know, you can do very nefarious things.
So there are very AI-specific threat models that we need to figure out how to address. And so, yeah, excited to see R and D, but also sort of processes and standards around that kind of thing. Yeah.
Kevin Frazier: Okay, close this out, Jessica. No pressure.
Jessica Brandt: Yeah, no pressure. I guess three things for me.
One is just, given the levels of uncertainty around, you know, vulnerabilities, threats, capabilities, the information sharing piece is so important, and I don't think that's fully fleshed out. We've got this sort of intra-firm information sharing agreement, useful and valuable, but what is it that government expects from firms, and what should be voluntary and what's mandatory?
We need way more structured thinking about that. And I think this kind of moves us in the right direction on a couple of pieces, but the puzzle is much, much bigger. The other is on this protect front: great that we are making all of these investments and tightening export controls, but all of that is kind of useless if China can come in at the end and just, you know, scoop up our trained models. So we really actually do need to put meat on the bones there, and there's a lot that we can do. I think this was a good baseline, but I'd like to see where that goes. And then the last is this mitigations piece: the evaluation stuff is great, and we need to take it to the next step.
Kevin Frazier: Yeah, well huge thanks to Neil, Janet, Tim, and Jessica for joining this episode of Scaling Laws. Listeners, don't worry. Our next AI book club book will be 50 Shades of Gray. So get ready for that. It should be a slightly different conversation. Scaling Laws is a joint production of Lawfare and the University of Texas School of Law.
You can get an ad-free version of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.
Check out our written work at lawfaremedia.org. You can also follow us on X and BlueSky, and email us at scalinglaws@lawfaremedia.org. This podcast was edited by Jay Venables from Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.