Scaling Laws: Navigating AI Policy: Dean Ball on Insights from the White House

Published by The Lawfare Institute
in Cooperation With
Join us on Scaling Laws as we delve into the intricate world of AI policy with Dean Ball, former senior policy advisor at the White House's Office of Science and Technology Policy. Discover the behind-the-scenes insights into the Trump administration's AI Action Plan, the challenges of implementing AI policy at the federal level, and the evolving political landscape surrounding AI on the right.
Dean shares his unique perspective on the opportunities and hurdles in shaping AI's future, offering a candid look at the intersection of technology, policy, and politics. Tune in for a thought-provoking discussion that explores the strategic steps America can take to lead in the AI era.
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
This episode ran on the Lawfare Daily podcast feed as the Aug. 15 episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript below was auto-generated and may contain errors.
Transcript
[Intro]
Kevin Frazier: It is the Lawfare Podcast. I'm Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, and a senior editor at Lawfare. Today we're bringing you something a little different. It's an episode from our new podcast series, Scaling Laws. Scaling Laws is a creation of Lawfare and Texas Law. It has a pretty simple aim, but a huge mission.
We cover the most important AI and law policy questions that are top of mind for everyone from Sam Altman to senators on the Hill, to folks like you. We dive deep into the weeds of new laws, various proposals, and what the labs are up to, to make sure you're up to date on the rules, regulations, standards, and ideas that are shaping the future of this pivotal technology.
If that sounds like something you're gonna be interested in, and our hunch is it is, you can find Scaling Laws wherever you subscribe to podcasts. You can also follow us on X and Bluesky. Thank you.
Alan Rozenshtein: When the AI overlords take over, what are you most excited about? It's, it's not crazy. It's just smart. And just this year, in the first six months, there have been something like a thousand laws.
Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it.
Alan Rozenshtein: AI only works if society lets it work.
Kevin Frazier: There are so many questions that have to be figured out, and nobody came to my bonus class. Let's enforce the rules of the road.
[Main Podcast]
Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI, law, and policy.
Alan Rozenshtein: Dean Ball, welcome to Scaling Laws.
Dean Ball: Thanks so much for having me.
Alan Rozenshtein: So I was preparing for this podcast and trying to figure out what I could ask you, given that you're in the White House, and then I saw that this is your last day; we're recording this on Monday. By the time we release this podcast, you will be a free man.
You'll be your own citizen. So let's just start: why did you decide to leave, and what's next for you?
Dean Ball: Yeah. So, it's a few different reasons that all kind of came together at the same time. One way to think about it is, you sort of have to know and be honest with yourself about what you're good at, particularly in the context of the White House.
Not just what you're good at, but what you believe you can do an extraordinary job at. And I believed. When I was asked to come in, I was told that the job at OSTP would not only involve, but heavily involve, leading the drafting of the action plan. I said, I think I can do that.
When it comes to implementation, it's an entirely separate skill set that I don't have as much experience in, right? Like, I did management for a while. I was decent at it, but I didn't love it. It didn't make my heart sing. And implementing something like the action plan is essentially a kind of management, more than that.
But that's a big chunk of it. And you know, I also don't have a ton of federal government experience. A lot of my professional research background and intellectual background, as you both know, is in state and local policy. So even by the standards of most policy wonks, I would say my knowledge of the federal government is fine,
but it's intermediate fluency, not expert-level fluency. And so there were so many things in federal life, just in the four months that I was there, where a lot of it was not necessarily a learning curve, but just lots of new concepts to learn and things like that.
And I think that if you think you can do a B-plus job at doing something like implementing the action plan, it's a little bit like being on the sidelines in a stadium at a football game and thinking, I can do a B-plus job of catching this football. Well, then you better not be about to get on the line for the Dallas Cowboys.
You know what I mean? Because that's not acceptable, and that's what the White House is, right? It's the big leagues. So I thought I could do it for the writing, and when it came to the implementation, I thought, maybe not as much. And then, on the flip side of that, kind of the same issue but closely related, is that the politics of AI on the right, right now, are in such an interesting state of flux. And obviously inside the White House, with some of the issues that came across our desk, I kind of had a front-row seat to not just the things that were in public, but also many things that were said and talked about in private that really captured that state of flux.
And reflecting on that, I feel like there's this enormous opportunity to guide the rhetoric in a good direction. I think right now things could go in a really bad place or in a really good place, not just on the right, but in general. And that requires a degree of creative freedom, rhetorical flexibility, or discursive flexibility.
And also policy freedom, you know, the ability to throw out some policy ideas, which you just can't do there. The White House is not about creative freedom. That's not the point of the White House, right? It's not an ad agency. And so you can't just throw out ideas, and you can't just throw out political messages, right?
You have to stick to the bounds of things that have been agreed to by the West Wing, right? And that's fine. That's the way it should be. It's no criticism at all. And that's obviously not specific to this administration or anything like that. This is just a fact of being in the White House.
But the kind of work I like to do is get my head around really thorny issues and try out a couple different things, sort of a little bit bull in a china shop. Like, go around and just try on different ideas like hats. And you really can't do that as a White House staffer without causing a number of minor or potentially major incidents, and probably being fired.
So not only, in other words, did I think that maybe I would not be the best at the implementation, but also that my style was very well suited to the action plan, because that's exactly what you needed to do. What you needed to do was get your head around a bunch of thorny policy issues, figure out where to go, figure out how to optimize within a variety of very complex political constraints, and then find, like, there's one high-dimensional vector of, this is the right path right here, where we can cobble together all the different coalitions and we can get this right.
I can totally do that. And I wanna continue doing that, and I think the best place to continue doing that, and the place that'll be most helpful to the action plan itself and to the president's AI vision, is outside of government.
Kevin Frazier: So you mentioned so many deep, important concepts that we're gonna have to spend hopefully several podcasts going into.
I think this may become an anthology, but the first thing I wanna dive into is, sorry, I'm gonna restart. So Dean, you mentioned your heart singing. You mentioned playing for the Dallas Cowboys. You mentioned being a bull in the china shop. There's a lot that we can explore, but I wanna start with
what you would say to others who are thinking right now, hey, if I got called to serve in the federal government, any part of the federal government. What was that learning curve like for you, in terms of just saying, I have this expertise, I want to go help on policy, for lack of a better phrase? Would you encourage others to follow in your footsteps? Or what's your advice to folks who are saying, I don't know where to go in terms of how to make a difference in AI policy?
What's the pitch for and against getting involved at the federal level?
Dean Ball: It is the most incredible honor and opportunity that you will ever have in your life, and if you are lucky enough to be called to do it, you should probably find a way to say yes. I can only speak for the White House, which is, for a lot of obvious and some non-obvious reasons, an extraordinary place within government, right?
It's maybe not typical. So I can't speak for what it's like to work at any other agency in the federal government, and the federal government's a very big place. So I will cabin my comments to the White House, and say that in that context, in general (and this might be specific to this administration, that I don't know), it was less bureaucratic, significantly less bureaucratic, than I expected it to be.
So when I say there's a bunch of new concepts and things like that, I am literally talking about very basic aspects of federal life: the federal pay scale, the ethics rules, and, oh, FOIA, right? I mean, who would've known that FOIA structured the incentives of public employees as much as it does? But it really does. And at the White House you also have the Presidential Records Act, which is its own whole special thing. But in general, it is actually a quite flexible place.
And so maybe in particular it's a pitch for this administration, 'cause you can make a ton of difference as a scholar. There aren't gonna be that many people in the room, and if you know a lot about something, you might well be the best-positioned expert, and you can really influence the way that policy goes.
A lot of it will be invisible, so you have to be okay with that. Of course, you have to be okay with that in all ways, and you have to be okay with the fact that, at the end of the day, you are not a representative of yourself. You're not worried about your own reputation. You are a representative of the President of the United States.
And that's an absolute commitment. And so you are implementing the president's vision, and if you disagree with the president's vision in any way, you have to be okay with that, right? And again, that's not specific to this administration.
That's true of working in government. And I think a lot of people also get this impression that, oh, well, I know more than the president about this issue, he's not in the weeds on this. So he's gonna say something, and maybe I'll do something that slightly diverges, but I know better and I know what he actually wants.
Like, no, that's not the game. That's not quite how that ought to work. The other thing I would say is that you really do have to have policy ideas that are really well developed, that you have lots and lots of fluency in, because of the constraints on your time and the amount of inbound that you will get, just inbound communications from people.
And it's people you've never met, it's everyone in your professional life, it's people you went to high school with or haven't talked to in 20 years. It's all of that put together.
Kevin Frazier: Were you getting active feedback on the AI Action Plan from that guy in your sophomore English class saying, yo, Dean, why haven't you thought of this brilliant idea?
Dean Ball: There were so many people that reached out, and yeah, I did actually have several people from my high school that I haven't talked to in like two decades, practically two decades, reach out to me. So, the amount of time that you will have to respond to extraordinarily weighty questions about policy and about tactics and strategy, it'll just amaze you how little time you have on some things. It's like, you have to make a call, you know, you have 20 minutes or you have an hour to think through this really complicated thing. So I'd say the White House, at least, is definitely a place that rewards generalists, but it's also a place that rewards people that have a pretty well developed sense of everything they wanna do.
So, yeah, it would be very hard for me to imagine saying no, unless one of those things doesn't sound like it would work for you, and obviously you gotta be willing to work very long hours and all that stuff.
But assuming all that's true, I can't imagine why you would say no.
Alan Rozenshtein: You said that it's a complicated time for AI policy on the political right, and I would just love for you to talk more about that. What do you mean, and what are the different potential futures you see for how this could all play out?
Dean Ball: Yeah, so I mean, I think there is some portion of the right, you still have the kind of techno-libertarian types, and in some ways that's manifested itself as the new tech right, and in some ways not, I think–
Alan Rozenshtein: That has survived Elon Musk's defenestration from the coalition.
Dean Ball: I think in some, I mean, it's hard to say. I don't really know what the new tech right is. I don't consider myself to be a part of that. I mean, there's nothing new about my rightness, right? I've been conservative for like 15 years. I've been conservative since I was 18 years old.
So, yeah, but there is kind of that. And then, candidly, there is some skepticism with which those people are viewed by others who maybe see themselves, probably rightfully, as having been in the right coalition for a longer period of time.
You have, of course, lots of members of the party who have very deep concerns about the influence of big tech on our society and our economy. And society versus economy is different for different people. There are the people who are really worried about it for the economy, you know, financialization and software in the economy, and who wanna move to a more hardware-based world.
And then there are people who are worried about it more in terms of a societal impact. And that can relate to kid safety, but also many other issues. And also just politics, right? People who, let's be real, a lot of the companies that are producing these leading AI products are companies that patted themselves on the back for undermining President Trump in his first term, right? Like, patted themselves on the back about it.
Alan Rozenshtein: What, what do you mean?
Dean Ball: Well, I think one good example is Google and their refusal to work with the Department of Defense. That's just one example, but there are others.
Alan Rozenshtein: Do you think that's a Trump thing, or do you think that's more of a they-don't-wanna-work-on-some-military-project thing?
Dean Ball: It was totally a Trump thing. If Obama had been president, there's a 0% chance they would've done that. I'm convinced of that in my bones. It was about Trump. And other things too, right? Like working with the Department of Homeland Security. Those contracts were often protested internally inside of big tech companies, and not just Google. I'm not just calling out Google; this was a problem that we saw repeatedly.
And that's to say nothing of social media misinformation, all of that stuff that you guys have talked about at various points. So without rehashing that old history, the point is there's a lot of people that, for a variety of reasons, whether you think they're good or bad, and I happen to think they're pretty good,
share a significant amount of skepticism about these companies. They feel like they've heard all this before, about the promises of how great this is gonna be, and they feel like what they're seeing in front of their eyes is not necessarily in line with those promises of a grand future.
You know, there are people, for example, relatively prominent people on the right, who described the opening section of the action plan as utopian, because it talked about the ability to unravel ancient scrolls once thought unreadable. And they were like, that's utopian. And I was like, that literally happened like 18 months ago. And actually the dude who did it works for the Trump administration.
Alan Rozenshtein: I still think that may be the coolest thing that AI has done. I think AlphaFold is probably more important, but there's something about x-raying old mud-caked volcanic scrolls that just really does it for me.
Dean Ball: It is extremely cool. So yeah, I think there's all these different things, and AI means different things to different people in this world. So to some people, when you say AI, they'll think about LLMs. Other people, when you say AI, are actually still mostly thinking about probabilistic recommender algorithms on social media.
And so they're thinking of it more that way. This is a period in time that reminds me a great deal of the late 18th century, sort of the last quarter of the 18th century. Linguists and etymologists will go back and look at the ways that the definitions of words changed, and the meanings of words changed a lot in that time, which makes it really hard for originalists, because it's like, what did that word mean?
The 1770 meaning versus the 1800 meaning of a word is sometimes wildly different. I think we're living through a similar period of intellectual fertility. And so all that is happening at the same time, and how exactly you disentangle it and figure out the right thing to say is a super interesting problem.
Kevin Frazier: Well, we know for certain that studies of language from this period will see a huge uptick in "delve," minimally, and that will prove insightful for a lot of linguists. But Dean, I'm curious what you think following the AI Action Plan, and we wanna get into the weeds of that in a second. In the wake of the plan, and following the initial fracas around the AI moratorium, where we had Senators Cruz and Blackburn reach a deal, then unreach a deal,
we don't know where things stand necessarily on the Hill right now. Is it your sense that this fracture, these different visions on the right, are narrowing? Are they coalescing? Are we getting more agreement now? Are we kind of getting over, for lack of a better phrase, as you kind of threw out there, a sort of hangover from social media?
The sense that we got social media so wrong that we have to respond faster and more harshly to AI: is that a symptom we saw just from the introduction of AI that will dissipate soon? Or do you think that's gonna stick around?
Dean Ball: My guess is it will stick around. Well, it will definitely stick around. The question of whether it stays the same, dissipates, or grows, I think, is really dependent on what happens next. I think it depends–
Kevin Frazier: So what are the key factors that you'll be paying attention to? 'Cause my thesis is that AI companions are going to become the fulcrum for people's perception of AI. Because if you're a parent and you have a kid who uses some AI companion, and it becomes their best friend and they don't want to come down for dinner, I think you hate AI.
Now, if you have no connection to kids using AI companions, or just don't care, or never experience them, I think you're kind of in a different universe from an AI policy perspective. But what factors are you paying attention to in this debate?
Dean Ball: Yeah, so I think it is right that very likely a big chunk of this is going to be shaped by some kind of a crisis, which is by its nature stochastic. And so, again, this is totally just me. This is totally not a White House opinion here. And I'm not saying that in a winking way, like I've truly–
Kevin Frazier: You're a free man, Dean. You're a free man. You're–
Dean Ball: This wasn't water cooler talk at the White House, is my point.
Kevin Frazier: The badge is turned in, the phone is gone.
Dean Ball: Yes, that's right. But I think that some of these, let's not call them companions. Let's be real and say that some of this is pornographic, right? There's softcore and sometimes hardcore pornography being offered, not by a Russian bot farm, you know, an open-source image-gen or video-gen model running on a Russian bot farm,
but instead models being offered by companies that have very prominent institutional investors. And I think that's a ticking time bomb. I thought that about Character before the tort cases against it, the one in Florida most prominently. Before those lawsuits, I thought Character was a ticking time bomb.
Not that Character is explicitly sexual, though it often went there. I'm not a regular user, but certainly in the past the product had a tendency to go to sexual places, including with minors. That's a ticking time bomb with American society.
I don't think that's just a ticking time bomb with conservatives. But that'll really affect how people see it. I think another thing, in the physical world, will be: do the data centers increase the price of power, or make electricity less reliable, for many Americans? There are totally ways to deal with that issue, right?
We can totally deal with that, but do we effectively do that or not? That is a really big question. So I think that if Americans view these companies as getting away with a kind of theft, or getting away with things that feel unfair, well, first of all, this is a democratic society, and it's up to the American people ultimately to decide what things feel unfair, right?
And vote based on that. And I think that will bias AI, unfortunately. It's very vulnerable; AI is a very delicate thing. The problem, though, is that even if your goals are fully punitive and you just wanna exact revenge, and maybe it's for stuff that happened on social media in 2018, and maybe it's for stuff that's happening right now, or maybe it's both, or things that could happen in the future, regardless of what those things are,
I think that it is possible, especially through a lot of the laws that we're seeing at the state level, to freeze our society in amber in a way that will make all of the problems we have worse. And so it seems to me that that issue of perception is going to matter a great deal.
Because I will tell you something right now: in the long term, you've gotta do better than, well, China, China, China. That's a shipping-the-org-chart thing. That's a we-can't-agree-on-anything-else thing.
And the Democrats did this too, 'cause the Democrats couldn't agree either. No one can agree. No one knows what to think. And so really articulating a vocabulary of what the politics of AI will be, I think it's still pre-paradigmatic. The last thing I'll say is, the president obviously called for a preemptive framework in his speech announcing the action plan. And I think he was very clear in what he said. He said, we're gonna have rules, and we need rules that are more brilliant than AI itself, is the way he put it.
And I actually think that's a bit of poetry that I like quite a bit, because I do agree with that. I agree entirely with that. But that does not look like a moratorium, and it also does not look like: your company releases some documents to the public about the model's technical specifications, or about what your company did to mitigate bio risk.
Like, excuse me, get outta here. You're not getting preemption and a liability shield because you did that. That's insane. That's complete insanity, and no American will find that to be just. So, you know, figuring out what it should be is its own whole can of worms.
But I think you have to answer that question head-on. How we answer that question will determine a lot. If we answer it as one country, then we could end up, I think, in a very good place. If we don't answer it as one country, and we end up resolving it state by state, then I think we could end up in a quite fractured place, with a lot of really bad politics and bad economic outcomes as well.
Alan Rozenshtein: So I'm curious about the last thing you said, in terms of, look, if you want preemption, you gotta do something more than just throw out some impact report. And I'm curious if that would've been your position going into this job. Because I would've guessed that the pre-White House Dean Ball would've been much more on the moratorium side than what I'm hearing now. And I'm curious if what you saw in the White House changed your mind on some of these foundational issues.
Dean Ball: It's actually funny. No. I gave two interviews, in fact, in like December, one of those reporter look-ahead things, on the record, about what we should expect from the Trump administration.
And we talked about preemption in both of those interviews. One of them was never published, but one of them was. And in both of them I was asked about the concept of a moratorium, and I specifically said, I don't think that's gonna work as a political matter, and I also don't think it's right as a policy matter.
I get the idea that the states regulating stuff is a practical problem. I think that is true. At the same time, it puts you in a difficult political position, quite frankly, to say, well, we're just not gonna do anything.
I think it doesn't really meet a lot of people in the party where they are. That being said, that was the abstract idea of a moratorium I was talking about. The actual moratorium we got, well, it was fraught and there was a lot of debate about it, but it had a pretty extensive carve-out section of all the different kinds of laws that it did want to allow states to pass.
But nonetheless, regardless of my own prescriptive opinions about that specific moratorium, I think what I would say is that it was obvious from the political reaction that it just didn't quite seem right to a lot of people. And so you will need to do something more, and you'll need to create some sort of sense of rules, whatever that might mean.
Kevin Frazier: So the comments from President Trump on the moratorium came kind of after the fact, after the big fight occurred on the Hill. Do you expect that this is going to be a position from the White House that gets reiterated, or that we continue to see expressed?
Or is this a position that's maybe convenient, or part of the AI Action Plan for now, but something that the administration may be willing to negotiate on or move away from?
Dean Ball: Well, there's two things I wanna disentangle: the state law related provisions that were in the action plan, and then the president's statement of support for a legislative preemptive package.
Because they are different. With a legislative preemptive package, there's all sorts of questions about how you structure that. I can't speak to what the White House will do, in part because some of those things are conversations that I participated in, and I can't share the sort of deliberative discussions that we had.
But, you know, I would say the president said this, and generally speaking, we take what the president says with an extreme degree of seriousness, and I think a lot of other members of the Republican Party do as well. So I would take it quite seriously.
You know, I would, I would take it as a, as a very serious thing that the, that the president is invested in, that he gets, that he personally cares about, et cetera. You know, this was not just something he read off of a teleprompter, right? This was, in other words, like it wasn't just something that was fed into him by speech writing.
To be very clear, nothing that he says is; but this in particular was not. When it comes to the things in the Action Plan that deal with state laws, I would think of those as being much more narrowly tailored to specific sorts of things that we see. And it's not every state.
There are a lot of state laws that have passed relating to deepfakes, or impersonation of other people, or impersonation of style and things like that. That section had nothing to do with those; it was primarily aimed at a lot of the most actively regulatory bills that are percolating in the states.
And there are some areas where I actually think some of those things might be quite useful policy instruments regardless of whether a legislative preemptive package passes.
Alan Rozenshtein: So I don't want to let our entire conversation go by without talking about the actual substance of the Action Plan.
And obviously there's a lot of stuff in it; we don't have to go line by line. But I'm just curious to get your high-level take: if you had to define the thesis of the Action Plan in a sentence, what does this document represent to you? What is the main takeaway? Obviously, again, there's a lot of policy about energy and semiconductors and all that sort of interesting stuff to get into. But to the extent that you think it has an animating vision, what is that one-sentence animating vision?
Dean Ball: I think for me it would be that America and its institutions can successfully adapt by taking smart strategic steps right now. We don't have to think pie in the sky about the future or pass some big new law. We can do it right now and we can make the AI future better as a result of those things. And we can build it and we can lead the world.
And that we actually can do this: that it's possible to identify a common-sense, grownup agenda, the kind that, for many years, I think a lot of Americans didn't feel they got out of Washington. That, for me, is a big part of the agenda, or a big part of the message. But that's a subtextual message.
Alan Rozenshtein: Yeah. And what would be the core actions here? If there were, you know, one or two or three things
that the federal government, whether the executive branch unilaterally or working with Congress, actually executed on, what would be the thing that would get us to that positive vision of AI?
Dean Ball: It's a good question. I am so much of a fox and not a hedgehog that the way I see the world is as
being made up of many, many different constituent parts, and there's just a lot of stuff to do on all the different things. Certainly I would say that, frankly, the three pillars of the plan actually get at this pretty well, right? It's: what do we need to do?
Well, we need to make sure that we have a regulatory environment that is conducive to innovation on the product, deployment, and adoption side of AI. That's number one. Number two, infrastructure: we need to make sure we can build the AI infrastructure, and that we have the skilled workforce necessary to build it as well.
And the third is, you know, both international diplomacy and security. There's a whole global infrastructure that needs to be built, and America is in pole position to do it right now, and we need to actually make sure that we do. And also, there are all kinds of ways that AI in the hands of malicious foreign adversaries could present threats to America.
And we have to be ready to counter those threats in all appreciable ways. So in that sense, the core thesis of the Action Plan is pretty much just those three pillars. The only other thing I would add, though, is that I think of the Action Plan as being composed of both a strategy and a plan.
If you just look at the subheaders under each one of the pillars, there's stuff like "build world-class scientific datasets": bigger-font italic headers with a paragraph of text below each. If you just took those things, that would pretty much be the country's AI strategy, right?
And we could totally just put that out and say: this is what we want to do. These are the things we think are important, and we want all the agencies to operationalize plans to respond to those strategic objectives. And we would like to work with Congress to do that, and we want industry and civil society, et cetera, to respond to those things.
This is where we want to steer the country. And then you can think of the Action Plan as saying: okay, within that, here are the dials and knobs that we think we can turn right now. And so I think that's an important thematic point. We didn't explicate it in the plan, because why would you; it's kind of a boring point. But to me it matters.
Kevin Frazier: So, speaking of turning gears, flipping levers, flipping switches, all that. One of the common responses that I saw was: okay, great, there are 90 recommendations, and many of them a lot of folks support. But I'm curious, in your conversations with the agencies implicated, from the Department of Commerce to the Department of Energy, how the conversations went about, "oh boy,
looking at the sum total of these recommendations, my agency now has a dozen or maybe even two dozen new action items. There's a little bit of uncertainty around funding, around talent, around institutional capacity." What was the conversation in drafting the Action Plan around actual implementation and execution of some of these really bold proposals on pretty tight timelines?
Dean Ball: Yeah. So the text of the Action Plan, and especially the strategic parts of it, the strategic objectives, but also a good number of the action items themselves, were first-draft complete by the end of April or early May. At that point it was just a document that lived on a very small number of people's computers.
And then a lot of the labor on the Action Plan between then and July 23rd, when the plan came out, was a process of working with agencies and within the White House to scope every objective, every policy action, exactly right: to make it sufficiently specific.
Sometimes it was to get it technocratically right: "oh, well, that's not really the right way to do that." Other times it was: should we do this or not? And you kind of have that conversation. And then other times there were things that we changed the scoping of in light of talent, our workforce, funding, and things like that.
All those things of course come up. And with the Action Plan, we are very explicitly not advocating for reallocation of agency budgets; we're saying that this is stuff that can be done within agencies. And so a lot of it was just that process of working with the agencies.
What that means, though, is that the Action Plan was, I would say, quite thoroughly baked with agencies by the time it came out. This wasn't a surprise to anybody who's named in it. Is it an ambitious plan that pushes on a wide variety of different fronts? Absolutely.
Yeah. And goal number one was: let's make the best action plan. Let's make the best AI policy document that any government in the world has produced. Let's do that. But then, within that, let's actually do things that we think we can credibly deliver.
So figuring out exactly where that sweet spot is for every single one of those things was the hard part. That was substantively much harder than writing the first draft, I should say. So is there execution risk? Totally, there's execution risk. But as far as things you can do preemptively to mitigate that execution risk, I think we did as much of it as one could.
And I also think that there are a lot of exceptionally capable people inside the administration who care a lot about these issues and who want to bring this plan to fruition. And also, by the way, who are going to come up with other things to do that are consistent with the strategic objectives but might be supplementary to the policy recommendations.
And we think that's great. Obviously I shouldn't say "we" anymore, but I personally, as a private citizen, think that's great, and I would imagine that my former colleagues at the Office of Science and Technology Policy would concur.
Alan Rozenshtein: Was it hard to get consensus on the Action Plan? Or, maybe a different way of asking this: were there things that you would have wished had made their way into the Action Plan but did not, because at the end of the day it was just not possible to get consensus on them?
Dean Ball: Establishing consensus took time and was not easy, but for reasons that wouldn't be obvious to people on the outside. There's a lot of stuff that you might look at and think, oh, that probably was hard, and actually, not really, because it turns out that the relevant agency actually really cares about this too.
So it's fine. I would say, I mean, there's some stuff that's relatively typical, right? You're adjudicating turf wars: every agency wants to be mentioned in this, and every agency wants to be mentioned in every bullet point.
And, you know, that's the platonic ideal from the perspective of the interagency. So then it was my job, as the person adjudicating all that, to figure out what actually happens. Things like that. There are just classic, normal things.
Nothing nefarious or adversarial there. There are a couple of things that I would say. There's nothing that I personally deeply, deeply cared about and pushed for that didn't make it in. There are absolutely things...
Alan Rozenshtein: Helps when you own the Google Doc.
Dean Ball: Yeah, right.
Alan Rozenshtein: As I, as I discovered when I was in government.
Dean Ball: Though I should say, for any White House IT staffers who might be listening to this: we do not use Google Docs in the Executive Office of the President. So definitely...
Kevin Frazier: No, nor Signal.
Alan Rozenshtein: No. Truly
Dean Ball: Like, we actually... I'm, I'm...
Alan Rozenshtein: I'm just saying, the jokes you put on X about how you were a human Google Doc.
Dean Ball: Well, that's literally why I was the human Google Doc: because I was incorporating all of those into Microsoft Word.
Alan Rozenshtein: Oh, there was no Google Doc. I see what you're saying. It was poor Dean Ball in an office somewhere. Dean was the Google Doc.
Dean Ball: Yes, I was the Google Doc, taking many Word documents and compressing them into one. Because in the EOP you can't use Google Docs, for reasons of the Presidential Records Act, which I mentioned earlier. At least, that's my understanding. There are definitely things, though,
that I think we all would have wanted in, and some people felt that, you know, you have to say no, right? You have to say no. So there are a lot of ideas and policy directions that we ultimately said no to. And there are a lot of great ideas from the RFIs, from the comments that we received from the public, that we had to say no to in the interest of brevity and focus. There are things like that, but at the end of the day, I think we all worked together in quite a congenial way, and most people in the administration ended up at a place that we were all pretty happy with.
Alan Rozenshtein: Let me ask you one last process question, and this is maybe getting back to the point that you mentioned about the politics of AI on the right. The question is about the tension between the political side of this and the policymaking side of this, which is a kind of tension that obviously always exists in
any White House. One thing that struck me, and this was especially true with the EO about quote-unquote "woke AI," which was part of the AI Action Plan discussion of speech, was that in some places it felt like there were sort of two different audiences and two different authors on some of these documents.
And so that EO in particular, right? The section one preamble was pretty sharp rhetorically, criticizing DEI and wokeness and critical race theory, and I think transgenderism, quote unquote, in, I think it's fair to say, pretty strong right-wing or MAGA terms.
And then the rest of the EO was this, I think, much more small-c conservative discussion about unbiased AI principles, and a lot of the rhetoric and terms from the first section just kind of drop out. My Lawfare colleague Renée DiResta and I did a podcast about this, and she sort of made the joke that it was almost as if there was an "ignore all previous instructions" starting in section two.
Obviously, talk about whatever you're comfortable talking about, but it certainly felt to me like this was an example of the tensions in doing policymaking in a very charged rhetorical environment. And I'm curious whether my read between the lines of how some of this stuff was put together is consonant with what you experienced.
Dean Ball: So, in some ways, sure. But I think in a lot of ways, it ends up being... the conversations, you know, I can't go into detail on them, of course. I can say what I said, but I can't say what other people said about those things. But I would say, at a high level, the conversations around everything in the Action Plan, including that executive order, were quite
nuanced and focused throughout. So there is this issue of political bias in AI, and we have absolutely seen what you would call DEI or woke principles embedded into these things. And, I don't want to attribute this view to every single member of the Biden administration, but the general thrust that I got from the Biden admin a lot of the time was that traditional AI safety people, the existential and catastrophic risk type people, were useful idiots for people that wanted to insert a
sort of political agenda into AI systems. That made a lot of people very, very worried, including me. Frankly, it's part of the reason I got into AI policy in the first place.
Kevin Frazier: And just to flesh that out: you're saying that the concern around AI safety and the need for things like audits and evals was a vehicle for the Biden administration to get to some of the DEI-related concerns, is that the...?
Dean Ball: Because it was always "yes, and," right? It was always: yes, of course we're worried about bio risk, and we're also worried that the model might misgender someone.
It was this kind of thing, right? And certainly, with AI-enabled misinformation: for the government to be adjudicating what misinformation is and is not, that can, as a factual matter, and should, as a prescriptive matter, in my view, sound dystopian.
So that is, I think, a very real concern. But the interesting thing is that when you think about that issue, you actually get pretty quickly into what I would view as some of the deepest issues about AI that are out there, right? You're getting into issues of: what is really going on inside of this thing?
What kind of values does it have? Is it manipulating me in some way? I mean, imagine if an advanced AI system, say GPT-7 or something, multiple generations from now, or even a system today, was in some way programmed to undermine whoever the president was at that time.
And we had no way of knowing it, because it was smart enough to make its moves extremely subtle. Well, that seems like something that the government has a very, very clear interest in understanding, at the very least for the models that it procures. And so that's where that EO ultimately comes down: that's where we bracket the issue.
We are absolutely making a statement about this broader concern about the value systems and how they're going to interact with our society. But the policy is limited to the things that we felt were within our power to do, because of course there are things that are not
within our power to do. And I think most people, when the rumors of that executive order leaked, thought we were going to go in shockingly broad ways. And I have to tell you, we were not. That was never really the contour of the deliberations internally about those kinds of things.
What we were always trying to do is what we actually felt, with high conviction, we could do. So we weren't trying to police speech between an AI company and a private citizen; we're not trying to go within a country mile of that. We're not trying to define what "unbiased" means in all settings; we're not trying to define what is truth and what is not truth, et cetera. We're trying to elicit transparency about how models, and how AI companies, are grappling with these very weighty issues in the development of their models. And so in that sense, I think the EO is responsive to a core concern that literally animated
my personal decision to get into AI policy, that was a central theme of the president's AI-related campaign and of the party platform, and that implicates issues that are going to be with us for a long time.
Kevin Frazier: So, speaking of things that are within your power, Dean: you have now accumulated even more power when it comes to AI policy insights, and I'm sure at least a fraction of our listeners are wondering, where in the world is Dean Ball headed next?
What in the world will he do next? And I guess one place I'd be curious to start is how much of a hand you want to play in some of these state-level AI fights. You were one of the folks who helped author, or dream up, SB 813, which we did a podcast on with Andrew Friedman of Fathom, and which is currently pending before the California State Legislature.
Are we gonna see you in Sacramento? Are you a D.C. man? Are you gonna be testifying before every state legislature? What's, what's next for you? Where are you headed?
Dean Ball: I would very happily. Well, I absolutely plan to work on both state and federal issues, and to work on local issues if it makes sense.
But no, I'll absolutely go where interesting things are happening, and where things merit, ultimately, my readers' attention. So to answer the immediate question of where I'm going next: I will be joining as a senior fellow at the Foundation for American Innovation, which is a wonderful think tank in D.C. that I was affiliated with before.
I will probably have some other projects and affiliations that I'll announce in the coming weeks. And I will also be resuming regular weekly publication of my Substack, Hyperdimensional, which was mostly paused during my time in government.
There, and in other outlets too, I absolutely plan to write about the state issues, and I will also write about federal issues. There are a lot of state bills that I think deserve attention right now, and debate, and all sorts of stuff. So I'm looking forward to that. I think that'll be a lot of fun.
In fact, there's a lot of amazing stuff. The difficult part about working at the White House was that I was actually relatively narrow as an AI policy scholar. I had a couple of things that I was really into. I was really into liability.
I wrote quite a bit about the transparency stuff, about AI and science; I wrote a decent amount about manufacturing, some things like that. But the AI policy advisor job at OSTP really caused me to broaden the lens. And so, particularly around things like the electrical grid, there is a lot that I have learned.
I went very, very deep into that. And actually, that's one thing I would say I wish we could have been more specific on in the Action Plan: the grid stuff. It's the one section that's pitched at much higher-level objectives; it's really more of a preview of plans that are kind of in motion right now.
It's explicitly framed that way in the Action Plan. At the end of the day, we had a very good reason, which is that the stuff wasn't baked yet, and it's extremely complicated stuff. But there are some really exciting ideas there that I developed.
And so, if anything, the thing that's daunting to me is: where am I going to find all these different specialized outlets for all this different stuff, and how am I going to find homes for all these different ideas I want to develop? So, yeah.
Alan Rozenshtein: Well, you're always welcome to come join us at Lawfare.
Thanks, Dean, for coming on. And again, congratulations on what I think was a really impactful and, especially across the whole spectrum, which is a rare thing these days, a very positive and very impressive four months in government.
Dean Ball: Thanks guys. I really appreciate it.
Kevin Frazier: Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad free version of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky, and email us at scalinglaws@lawfaremedia.org. This podcast was edited by Jay Venables from Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.