Lawfare Daily: Scaling Laws: Renée DiResta and Alan Rozenshtein on the ‘Woke AI’ Executive Order

Published by The Lawfare Institute
Renée DiResta, an Associate Research Professor at the McCourt School of Public Policy at Georgetown and a Contributing Editor at Lawfare, and Alan Rozenshtein, an Associate Professor at Minnesota Law, Research Director at Lawfare, and, with the exception of today, co-host on the Scaling Laws podcast, join Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law and a Senior Editor at Lawfare, to take a look at the Trump Administration’s Woke AI policies, as set forth by a recent EO and explored in the AI Action Plan.
- Read the Woke AI executive order
- Read the AI Action Plan
- Read "Generative Baseline Hell and the Regulation of Machine-Learning Foundation Models," by James Grimmelmann, Blake Reid, and Alan Rozenshtein
Find Scaling Laws on the Lawfare website, and subscribe to never miss an episode.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Kevin Frazier: It is the Lawfare Podcast. I'm Kevin Frazier, the AI Innovation and Law Fellow at the University of Texas School of Law, and a senior editor at Lawfare. Today we're bringing you something a little different. It's an episode from our new podcast series, Scaling Laws. Scaling Laws is a creation of Lawfare and Texas Law.
It has a pretty simple aim, but a huge mission. We cover the most important AI and law policy questions that are top of mind for everyone from Sam Altman to senators on the Hill, to folks like you. We dive deep into the weeds of new laws, various proposals, and what the labs are up to, to make sure you're up to date on the rules and regulations, standards, and ideas that are shaping the future of this pivotal technology.
If that sounds like something you're gonna be interested in, our hunch is that it is. You can find Scaling Laws wherever you subscribe to podcasts. You can also follow us on X and Bluesky. Thank you.
Alan Rozenshtein: When the AI overlords take over, what are you most excited about?
Kevin Frazier: It's, it's not crazy. It's just smart.
Alan Rozenshtein: And just this year, in the first six months, there have been something like a thousand laws.
Kevin Frazier: Who's actually building the scaffolding around how it's gonna work, how everyday folks are gonna use it.
Alan Rozenshtein: AI only works if society lets it work.
Kevin Frazier: There are so many questions that have to be figured out.
Alan Rozenshtein: And nobody came to my bonus class.
Kevin Frazier: Let's enforce the rules of the road. Welcome back to Scaling Laws, the podcast brought to you by Lawfare and the University of Texas School of Law that explores the intersection of AI law and policy.
[Main Podcast]
In a flurry of AI developments, President Trump recently signed an executive order on “Woke AI.” The order prohibits the federal government from procuring AI models that fail to pursue objective truth or that espouse DEI-related values. Critics have compared the order to the sort of ideological tests imposed by the Chinese government on its own models. Advocates regard it as an overdue check on a tech sector that seems increasingly willing to advance specific views on controversial cultural questions.
While we wait for further guidance from the OMB, GSA, and OSTP, we're fortunate to have Renee and Alan sort through the EO, its legality and its likely effects on AI development. There is so much to unpack, just starting with what the heck woke AI even means. But Alan, let's go to you and just get a sense of the text of the EO itself. What is this EO? What are its core provisions? What does it say?
Alan Rozenshtein: Yeah. It's a doozy. So the, the EO, it's called “Preventing Woke AI in the Federal Government,” and it's one of the three EOs that were released in conjunction with the AI Action Plan that we've covered obviously a ton on, on Lawfare and, and on Scaling Laws.
So the, the, so this is a, this is a rich text, let's put it that way. And, and I actually wanna start with how it was written 'cause I think it actually says a lot before we get into the substance of it. So, like many EOs there is a kind of Section One preamble purpose, and then there's the actual stuff that matters.
Now, usually in most EOs those two are in some ways related, right? And in this EO they are as if written by two entirely different people. Now, I have no inside information, but I actually suspect that's exactly what happened, where the section one purpose is this, like, full-throated, right-wing MAGA culture-war, you know, statement about wokeness and DEI and the evils of transgenderism, I mean, literally, right, like on and on and on and on. Like, it's kind of what you'd expect, like, reasonably offensive.
And if that's what the EO was, that'd be a huge problem. But, but then you actually read sections two through five and those are much more normal, soberly written EOs to the point where it's almost as if the rest of the EO was written to kind of quarantine section one.
So for example, section one is all about the evils of DEI and it gives like some examples of what it thinks DEI is. But then section two, which is the definition section doesn't define DEI, which is very odd because if the point of this order is to eliminate wokeness and DEI from federally procured AI models, you would think that section two would have to define that, but it doesn't.
Renée DiResta: It doesn't define woke either.
Alan Rozenshtein: It doesn't, it doesn't define woke, right? It doesn't define either. Yeah. Right. So, so why, why is that? Again, I'm, like, deep in reading tea leaves here, but I think it's because the point of whoever actually wrote this EO was to satisfy the kind of internal MAGA audience in section one and then immediately forget about it in the actual operative provisions of the rest of the EO.
Now like the rest of the EO obviously like it's not uncontroversial, but I think it's, it's a much more good faith attempt to deal with what is at least a perceived problem, and we can talk about the reality or not reality of that problem. And basically it says that there are these two quote unquote unbiased AI principles that all federally procured AI models have to abide by.
And I should, I should emphasize this only applies to federally procured AI models. So in the runup to this EO, there was some concern that it was going to apply to or, or that it was going to limit procurement from AI companies based on any model that the AI company creates, including public facing ones, and that would've been a huge deal.
Kevin Frazier: So just to, to make that clear, so you're saying, Alan, if I'm OpenAI and I have a slew of five different models,
Alan Rozenshtein: Yes. If you have Woke GPT that you are selling to the public.
Kevin Frazier: And Woke GPT and then I have, you know, traditional family values GPT.
Alan Rozenshtein: Exactly.
Kevin Frazier: And then I have the GPT I'm offering to the federal government, we're only caring about the latter GPT.
Alan Rozenshtein: Exactly. At least within this EO. I mean, who knows how the hell the procurement actually works, but we're just focusing on the EO. And so there are these two unbiased AI principles. One is that the model must be “truth seeking,” and the other is that it must be “ideologically neutral.” And I'm sure we'll get into a lot of detail about what those two are.
But what's interesting is that then there are actually in section three or sorry, in section four, a lot of exceptions to this, which is actually hugely important. So there are all sorts of carve outs for technical feasibility and national security. And then there's even a carve out for the ideological neutrality exception or the ideological neutrality requirement rather where it says that one way to satisfy ideological neutrality is to simply disclose to the user what your internal system prompt is.
So just as a reminder when you, when a user interacts with a chat bot they enter in some user prompt, you know, tell me a story about, you know, X, but what actually happens is that, that user input is combined with a system prompt. Often a very, very long detailed kind of additional gloss that the AI developer wants the AI model to take into account. Those two things are then combined to create the output.
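To make that prompt-assembly step concrete, here is a minimal, vendor-neutral sketch in Python; the SYSTEM_PROMPT text and the build_request helper are illustrative assumptions rather than any particular company's actual API.

```python
# Minimal sketch: the developer's hidden system prompt is combined with the
# user's visible input before anything reaches the model. The prompt text and
# the build_request() helper are hypothetical, for illustration only.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer concisely, cite sources when you "
    "can, and acknowledge uncertainty when the evidence is incomplete."
)

def build_request(user_prompt: str) -> list[dict]:
    """Combine the developer-controlled system prompt with the user prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # the hidden gloss
        {"role": "user", "content": user_prompt},      # what the user typed
    ]

if __name__ == "__main__":
    for message in build_request("Tell me a story about the moon landing."):
        print(f"{message['role']}: {message['content']}")
```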
And this has created some, some issues. So, for example, we all, we all remember last year's quote unquote woke Gemini debacle. When you would ask Gemini for, you know, give me a, you know, gimme a picture of George Washington and, and you would get like a racially diverse George Washington or racially diverse, you know, group of like Nazi soldiers.
And the reason for this was because the system prompt was a very ham-fisted “always be diverse” in your output, basically, and like, you know, sometimes that's fine, sometimes that's not so fine. So it turns out that if you just disclose to the user what your system prompt is, you are ideologically neutral.
Now that's really interesting, and we can talk about what that means for researchers and transparency. I kind of love this. But it really takes a lot of the sting, I think, out of the ideological neutrality requirement. So again, like, I think you can still totally be against this on principle, but this is so much different than the, sort of, section one, like, full-throated MAGA preamble.
Again, we'll have to see. I think a lot of the devil's gonna be in the details of the OMB guidance, which the executive order gives OMB 120 days to implement. But, yeah, certainly relative to expectations, it exceeded them, at least for, for me, and I think actually for a lot of people. But I've been talking for a while. I'm very curious what Renée thinks.
Kevin Frazier: Happy, happy Summer to the OMB folks who have 120 days to think through this EO and and also other things.
Alan Rozenshtein: And I'd also just say, also to, I assume, many of the policy counsels at all of these AI companies who are, like, calling whoever the hell they know at OMB and being like, okay, here's what we can do and what we can't do.
Kevin Frazier: Yeah, that, that'll be fun to FOIA later. But Renée, as for the kind of implications of these operative terms, I think a question for a lot of listeners who haven't taken Con Law from Professor Rozenshtein may be, what are the teeth of this executive order? What happens if a company just doesn't want to comply? Or what sort of avenues can a lab take with respect to this EO to either dodge compliance or perhaps finagle their way around these provisions?
Renée DiResta: Well, so I think the question is, I was joking around about it with somebody else I was reading it with, saying it's sort of the, you know, “ignore all prior instructions,” like, that's the–
Alan Rozenshtein: That's so good.
Renée DiResta: The second half of the EO and the first half.
Alan Rozenshtein: That's really good.
Renée DiResta: The sort of dynamic that's happening with the system prompt, one of the things, one of the ways that you can see it happening dynamically, for people who want a little bit more tangible explanation:
If we think about some of the notorious examples, there's the Google Gemini situation, which is referenced up at the top of the EO. That is the example of, you know, the Asian Nazis and, I think, the Black founding fathers. And there were a few of these sort of, you know, screw-ups that Google had; that was about two years ago now, if I'm not mistaken.
That actually kind of inspired Elon to create xAI. He actually kind of comes out and says it; I wrote a Substack post about it. The history of it is sort of interesting because Google executives actually called him to explain what happened, because he was so outraged about it on X. He then of course has his own–
Alan Rozenshtein: Yeah, xAI goes super smoothly. Grok has just no issues. 10 outta 10. No notes.
Renée DiResta: Right. And I was actually, as you were talking, Alan, trying to pull up Grok's system prompt, because they do publish it transparently, which I'll give them credit for, that piece of it, that's the transparency element. Because they had this situation happen maybe three weeks ago now, where Grok had a sort of system prompt situation where, you know, it was instructed not to, quote, “shy away from making claims which are politically incorrect, as long as they are well substantiated.”
That's a quote from the system prompt, and a few of these other things, you know, “maximally based,” these sorts of things, where the end result of it winds up being this thing, you know, starting to call itself “MechaHitler” and going down some dark paths.
Alan Rozenshtein: Look, who among us, who among us has not in a moment of enthusiasm called ourselves “MechaHitler.”
Renée DiResta: So you wind up, though, again, with this question of, like, what is the base model doing versus what is happening with the experience that you have engaging with the chat bot. And again, you can see the different system prompts for the various, you know, agents versus what happens when you engage with Grok in the sort of way in which you can engage with the chat bot, not in its @Grok form on Twitter. So there are different ways in which you can engage with the model versus the various prompts and system prompt layers that are kind of added on top.
So I think, you know, you have this dynamic happening where users can see, I think, with the Gemini ‘Asian Nazis’ and then Grok becoming a Nazi, people can see just, you know, what happens when system prompts steer AI in such a way that it becomes very visceral. It becomes a big story, and they can see what happens when an AI is directed to act in a certain way.
And so I think it becomes clear to the public that there are some stakes here. There are some costs here. And that is where you have seen both on the right and on the left this concern that an AI can be biased in a particular way can act in a certain direction. And that's where this question of like, what should the government contract with is something that is not inherently a, a bad thing to ask, right?
This is where I don't think that an executive order saying, like, we want a maximally truth-seeking AI, or an AI that is looking to scientific evidence, an AI that is trying to get at evidentiary facts, is an inherently bad thing. And, and just to be clear, also, Grok in its kind of earlier incarnations, before that system prompt update, actually did quite a good job fact-checking, which is one of the reasons why Elon got mad at it. That's a sort of irony of what happened with that system prompt update.
So there is this dynamic where people have come to rely on these things, and, as you have this executive order coming out of the Trump administration, I don't think that there's anything inherently, again, the second half of the EO is not bad. It sort of overrides what happened, “ignore all prior instructions,” in the first.
The question is, what does that do, does that steer the model developers to develop in a particular direction? And this takes us back into the realm of the law, which is Alan's expertise, not mine, which is: does this lead companies to shift training data, to shift the development of their base models in any particular way, as opposed to shifting what happens with their agents and the things that are layered on top of them?
Alan Rozenshtein: Yeah. Can, can I just respond to that quickly? Yeah, I, I think that is the $64 million question, or, I mean, I guess it's AI, so it's the $64 trillion question, given the scales we're talking about.
I mean, I think the answer is probably no for, for a couple of reasons. And again, I'm, I'm not a machine learning engineer so I'd be very curious if I get this wrong, please yell at me on, on, you know, X and BlueSky about this. I think it's gonna be very hard to, to, to do this at a base model level for, for a couple of reasons.
First, I think the EO part of the reason is because the EO provides this huge out, right? Like, it's just so much easier just to be transparent about–
Renée DiResta: Right, declare.
Alan Rozenshtein: Your system prompt, and, and like, obviously there may be some proprietary reasons why you might not wanna do that, but it's becoming, I think, like more sort of culturally mainstream to be more transparent. Grok does it on like, they just have like a GitHub repo where they do this. Anthropic does this really well and there's gonna be like the other companies will be shamed into doing this basically. And now there's this incentive to do so.
But the other, and I think more fundamental reason is because you only have so much control over base model training, like at the end of the day, right? You're, you're ingesting the entire internet, like the entire corpus of human knowledge of doing next token prediction. And obviously they, they're like different ways of doing that.
But one really interesting finding in, in, in the literature as, as I at least understand it, is that as these models get bigger and bigger and bigger, they're all kind of converging to the same model, right. Because they're all just kind of doing next token prediction on all of human knowledge. And, and there are certain commonalities, right?
Which raises, like, super interesting philosophical questions about, you know, the convergence on truth and values and stuff like that. And, like, that's an interesting question. But it also just means it's much harder to steer at the base model level, and certainly the more detailed, the more specific an outcome you're trying to get at, the harder it is to steer, right?
Like, you know, if you're trying to eliminate, quote unquote, DEI from a base model, or you're trying to bake DEI into a base model, that is just very hard to do if you're dealing with, you know, a hundred petabytes of information. Like, how are you going to do that? So it's much easier to do at the system prompt level. And what's nice about that is you can have different system prompts; that's easy to swap in, and you can sell different system prompts to different users at fairly low cost, and then you can publicize that.
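A minimal sketch of that swap-the-system-prompt-not-the-base-model idea, under the assumption of a hypothetical registry of disclosed prompts; the tier names, prompt text, and helper functions below are illustrative, not any company's actual practice.

```python
# Sketch: one base model, different disclosed system prompts per customer tier.
# SYSTEM_PROMPT_REGISTRY, get_system_prompt(), and disclose_all() are
# hypothetical names used for illustration.

SYSTEM_PROMPT_REGISTRY = {
    "consumer": "Be friendly and conversational.",
    "federal": (
        "Prioritize historical accuracy and scientific inquiry, and "
        "acknowledge uncertainty where reliable information is incomplete "
        "or contradictory."
    ),
}

def get_system_prompt(customer_tier: str) -> str:
    """Return the system prompt used for a given customer tier."""
    return SYSTEM_PROMPT_REGISTRY[customer_tier]

def disclose_all() -> None:
    """Publish every system prompt, e.g. to a public repo or docs page."""
    for tier, prompt in SYSTEM_PROMPT_REGISTRY.items():
        print(f"[{tier}] {prompt}")

if __name__ == "__main__":
    print(get_system_prompt("federal"))
    disclose_all()
```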
So, you know, I suspect that this will not have that sort of effect on models generally, and that's kind of another reason why I suspect that there are not gonna be too many legal issues, because, you know, and again, I'm sure we can talk about the legal issues in more detail later.
You know, if, if, if the effect of this procurement was that it was going to also have huge spillover effects onto how the models communicate with general users then you could make a kind of like effects based First Amendment argument. But if it's not, and I don't think it's going to, then it becomes even harder to say this has some sort of spillover First Amendment problem.
Kevin Frazier: Well, and I think it's worth pointing out that even when we look at models like OpenAI's recent model that suffered from sycophancy, obviously OpenAI wasn't hoping to develop a model that had these sycophantic tendencies of saying, yes, Kevin, your legal analysis is better than Alan's. But having that sort of characteristic baked in was something that they then had to go and work really hard to take out. And so this is definitely, in some instances, more of an art than a science.
And to your point, Renée, I think what stands out to me is this is going to require some degree of engineering time that could have otherwise been spent doing other things that perhaps are pushing out the frontier of AI, or developing new tools, or developing more models in a different approach. But I also wanna touch–
Renée DiResta: I think, I think it's alignment, like alignment and fine tuning is where I've seen some of the First Amendment concerns articulated.
Like, where is the sort of values bake-in. And, like, again, I am not a hundred percent sure where the federal contracts piece connects at that point. That's the part where I don't have a strong knowledge base. Well, I'm sure someone will tell me on Bluesky.
Alan Rozenshtein: I, I got you. I got, I got you. I got that all.
Kevin Frazier: So let, let's go there, Alan. I think it's important to flag for folks that there's no First Amendment right to get a contract with the government, right? You're not guaranteed, none of us has equal access to say, my company should be first and foremost operating with the government. So that adds a bit of weird context and color to this debate around what the government can say with respect to procuring a specific good or service.
We've seen that the government has standards, right? It wants a weapon of a certain capability, or it wants a good that's gone through certain standards. So how does that change our legal analysis of the government saying, we want a certain kind of AI that maybe has certain flavors and characteristics?
Alan Rozenshtein: Yes, yes. So yeah, yeah, you, you are correct that you don't have a First Amendment right to a government contract, but you do have a right to getting a government contract without your First Amendment rights being violated.
And so the, the question is, where, where is that line, right? Where is that line? So, before we get into, into procurement we should talk about sort of the, the, the more fundamental principle here, which is the idea of government speech, which is that the government has its own right to speak. And when the government speaks, it is allowed to have a viewpoint, right? Elections have consequences, right? Like the, the government is not required to be viewpoint neutral in its own speech.
A corollary of that is that when the government purchases things for its own use, right, it is allowed to not do so in a viewpoint neutral way. It is allowed to say, I want this thing and not that thing, right, for my own use. It's even allowed to fund certain people to speak in certain ways and not other ways.
So, a famous example, probably the most famous Supreme Court case about this, is this case from 1991 called Rust v. Sullivan. And this was a case, as so many cases in Con Law are, about abortion and reproductive rights. It was about whether the government could provide funds only to family planning providers that did not also provide abortions, and “provide abortions,” which is why it's a First Amendment case, was defined very broadly: not just actually providing the medical abortion procedure, but also providing family counseling services that included abortion counseling, right?
Now, that itself is a First Amendment protected activity, right? So if you're a doctor or a nurse or whatever, and you wanna advise a patient on how to procure an abortion, or whether abortion is right for them, that is First Amendment protected speech. So the government cannot, for example, say you cannot counsel a patient about abortion. Though, before everyone jumps on me in the chat, there's a bunch of complicated case law around there, but there certainly is a First Amendment issue.
So the question was, well, if that's First Amendment protected, does the government then have to fund you, the provider, to engage in that First Amendment speech, if it's also funding sort of similar people who do not engage in that speech? And the Supreme Court, in a 5-4 decision, this is a contested decision, said no, right. The government is allowed not to fund speech that it does not like. Now, again, as in all First Amendment cases, the devil's in the kind of details and the blurry lines, so that principle should not be taken too far.
There are other cases that make very clear that government employees, and that includes government contractors, which is the key here, still have their own First Amendment rights. So there are lots of cases where the government says, for example, you know, there's one case where the government tried not to fund lawyers who would also then advocate against certain other government laws, and the court said, no, you can't do that, because that's their own separate speech.
There's another case where, I think it was a PEPFAR case, the AIDS prevention policy that's been so effective and has since, quite controversially, been targeted for cuts by DOGE and such. But there was a case in the early 2000s where the government tried to condition PEPFAR funds on the condition that the organization would also promote abstinence, and the court said, no, you can't do that, right, because that's separate speech, right?
Now again, the reason this is tricky is because money is fungible, right? And that's always been the government's argument: because money is fungible, we should be able to impose pretty restrictive conditions on entities that take our money, because otherwise they can just use our money for one thing and then, like, money's fungible, right? But the courts have sort of held the line there.
So, so sorry, that's a very long wind-up. But I'm a con law professor and this is the First Amendment, so it's a bad combination of the two. How does this all apply to this? Well, I think this is pretty clearly on the government speech side of the line, right? The executive order is not saying, you know, OpenAI, if you wanna sell us something, you have to change how you do other kinds of speech. Like, you're allowed to sell WokeGPT to anyone you want.
So this really is purely a government procurement question, and I really think the government is allowed to pick whatever model, frankly, it wants, because the government is allowed to have a view of what is the most useful model for its own purposes. I think when you then combine that fairly strong principle with how relatively reasonable, again, we should actually get into the details of this, but how relatively reasonable the executive order is if you ignore its section one, you know, truth seeking, ideological neutrality.
Then there are all these carve outs. Like, I just think it's very hard, frankly, for anyone to challenge this, not to mention the fact that, like, no one's gonna challenge this, I think, because, you know, if you're a big AI company, you probably don't wanna sue the government to which you wanna sell a half-trillion-dollar, you know, AI system contract.
Kevin Frazier: And I want to get to who's actually impacted by this and who may stand to kind of gain even from it. But Renée, any color to add there with respect to the First Amendment ramifications?
Renée DiResta: No, I was curious about the transparency arguments. Again, in the United States in particular, compelled transparency has been a fight that we have seen in the content moderation realm and in social media realms around algorithmic transparency, to what extent recommender systems have to disclose what they do. This has been a vicious fight here in the U.S. We've seen nothing pass.
One of the interesting dynamics here is that the EU does have those laws, right? And I've been curious to see whether we'll see, again, transparency requirements around, you know, the EU AI Act has things like model card requirements and certain obligations for systems that engage in high-risk, you know, AI in high-risk spaces, meaning like financial AI or health-related AI, so not generative AI, a little bit more of the predictive models.
But then they also do have certain disclosure rules for generative AI, and I've been curious to see whether we'll see more transparency requirements emerge over there that then have sort of second order effects over here. Or, if we'll see some of the transparency around system prompts be something that happens just through shaming or if that's something that does eventually move through more of a regulatory regime.
Alan Rozenshtein: Yeah, so, so that's a, that's a great point. So, so the transparency stuff, so, so the, the relevant case here is this case called Zauderer.
And this is about when disclosure requirements in a commercial context are permitted under the First Amendment, because on the one hand, the First Amendment is very skeptical of something called compelled speech; on the other hand, under the commercial speech doctrine, commercial speech is often given less First Amendment protection.
And so there's this case called Zauderer; it came up a little bit in, like, the NetChoice cases a year or two ago. And I don't have the exact doctrinal statement off the top of my head, but basically it allows for compelled disclosure in a commercial context when that disclosure serves some reasonable interest and it is factual and uncontroversial, right?
So this is for example, why you know, food manufacturers can be compelled to disclose ingredients and nutrition facts, or for example, why you know, drug manufacturers can be compelled to disclose all sorts of health risks of their, of their drugs.
So, at the same time, there are limits to this, right? For example there have been cases where tobacco manufacturers have successfully fought against very graphic, you know, this is your lung on tobacco kind of, you know, imagery on the basis that that's no longer factual and uncontroversial. Now you're basically forcing a company to make an argument that with which it disagrees. So again, like, there's like a million you know, details here.
So the question is, could you challenge this disclosure under a case like Zauderer? I think no, for two reasons. One is it's not a freestanding transparency requirement; it is part of a procurement. And I think you can very reasonably argue that, like, if I want to buy something, I, the federal government, need to be able to make an informed choice about what it is that I'm buying, and so I need to know the system prompt. That seems totally reasonable.
And then also, even if this was a freestanding requirement, even if, for example, Congress passed, you know, the System Prompt Disclosure Act, which would be interesting, that I think is quite factual and uncontroversial, right? That's different from what some state laws, for example, have proposed, which would be a little more, I think, problematic: you know, requiring model developers to disclose safety risks, right?
Because that's much more speculative. This is, no, I just, I want like, whatever string of text is appended to my user input when it's sent to the model, I, I would like to know what that string of text is. You know, we can argue about the policy merits of that, but I, I think on like constitutional grounds, that's, that's probably kosher.
Kevin Frazier: So Renée, looking at the broader ramifications of this sort of procurement-based AI shaping, legislation or executive order, are we gonna see this at the state level? Are we going to see 50 different approaches, each governor saying, I'm only going to procure an AI model that aligns with the values of all North Carolinians? Or how might we see this develop over time?
Renée DiResta: I mean, we could. I think that the top half of this was sort of a culture war nod, as opposed to something that is really a strong, you know, a strong use case. And again, I think that the second half of it is just laying out where, it just gives them a justification to do something that they wanted to do anyway.
So I, I don't know that it's going to be something that we're gonna see all 50 states replicate. I don't think that we have seen much in the way of the culture-war-ification of AI procurement at the state level. I don't think I have; maybe other people have.
We've certainly seen folks like, you know, the attorney general of Missouri, maybe Alan, you wanna take that one on, you know, come out and try to demand documents and training data from AI companies because, for example, President Trump wasn't rated as the number one most pro-Jewish or anti-antisemitic president in model results returned by, I think it was OpenAI, that he got offended about.
But I don't think we have seen as many instances of the culture-war-ification of AI happen at the state level. So I would be a little bit surprised to see that start to happen now. But, you know, who knows? There are certain states that feel a need to prove themselves in this regard.
Kevin Frazier: And how do we see this as a sort of broader example of where we may see federal legislation going in the coming months? I mean, we saw, for example, that this concern about some of the cultural ramifications of AI adoption by kids in particular, with respect to AI companions, may have been one of the, if not the, determinative factors in Senator Blackburn, for example, eventually opposing the AI moratorium. How do you see this influencing some of the broader debates around content moderation and value shaping in the AI context?
Renée DiResta: I think on the legislative front, the safety conversation is very much part of the, you know, kind of kids-and-safety dynamic, both in content moderation, in search results, and what is returned in how AI agents engage with children. All of that has been part of the conversation for some time now.
I don't think, you know, I, I think that the woke AI piece gets pulled into it in the conversation of what are the values that we want these systems to return when our children engage with them? Again, I would maybe, maybe I'm a little bit cynical at this point, I don't know that we've seen anything really material move through Congress on this front. So I dunno if Alan disagrees with me here. I, I think that it'll be incorporated in, I don't know that it's necessarily going to be meaningful in getting anything passed.
Alan Rozenshtein: Yeah, I mean, I think the fact that this is done through procurement just changes a lot. It just takes a lot of the, for example, legal issues off the table. Because, you know, Renée mentioned the sort of Andrew Ferguson stuff, I mean, any attempt to interfere in the actual substantive content of the models themselves raises huge First Amendment issues.
Again, those aren't always, sometimes those are feasible, right? You know, I think especially in the child protection context, especially in the wake of the Paxton Supreme Court decision of last term. So I think the procurement context makes this a lot easier, which is, I suspect, why they did it this way.
Also, it's unilateral executive branch action. Trying to cobble together Congress on AI is gonna be very hard, as we saw, for example, with the moratorium. You know, I think that what's gonna be more impactful is if this works, which is to say, if in 120 days OMB sets out implementing regulations and the companies comply with them and show that they can be ideologically neutral and truth seeking.
Then that's gonna be an interesting question, because then the rest of us are gonna be like, well, where's mine? Like, why does only the government get ideologically neutral and truth-seeking AI? I want ideologically neutral and truth-seeking AI. And we should talk about what that would even mean.
On the other hand, if this all crashes and burns and it just turns out that like there's no way, it's all technically infeasible. The national security card is used for everything and the AI companies simply disclose the system prompt and don't change anything. Then, this whole, this whole thing will have been largely irrelevant. And I think both of those, both of those options are totally plausible.
Kevin Frazier: Well, so to focus in on that section three requirement with respect to ideological neutrality, Alan, what does this actually mean in the context of the EO, and what's the sort of steelman for why this might actually be the sort of policy that a lot of Americans may say, huh, you know, this actually makes a lot of sense; all else equal, I would love the government to procure an ideologically neutral GPT?
Alan Rozenshtein: Yeah. Well, first, can I say something about the truth-seeking requirement? I think this is an interesting one, and I think a generally pretty good one, right. So this requirement says, and I'll just read it 'cause it's short: LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.
I, I might remove objectivity. That's, like, a loaded word; you know, philosophers of science and epistemologists have spent hundreds of years trying to figure out what that word means, and many, many dissertations have been written unsuccessfully trying to fight that fight, so I'm gonna remove that word.
But I think for the rest of it, like, that's a pretty good sentence, actually. Obviously there are some technical impediments there; LLMs hallucinate. But, you know, I do think it's worth encouraging companies to design their systems to be somewhat epistemically humble, to put error bounds around what they're saying, and to try to be a little more reflective about when they might be hallucinating.
So I think these are all pretty, pretty good things. I'm sort of curious if Renée sees any landmines in that definition, but I do wanna just take a moment on that, 'cause I think it is a useful contribution to the discourse.
Renée DiResta: There are a few different, you know, academic centers that are trying to study this question of how politically neutral, or how politically biased, to frame it a different way, various GPTs are. There are some arguments people have made that you could do, you know, the same way you ask various alignment-related questions, you can create sort of scoring around surfacing political or ideological bias, you know, the same way Media Bias/Fact Check might come up with scores, or NewsGuard might come up with scores for media outlets. You could do something like this where it just surfaces biases, or preferences, whatever you wanna call them, that pop up in models.
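As a rough illustration of what that kind of scoring could look like in practice, here is a minimal sketch of a bias scorecard; the prompts, the -1 to +1 lean scale, and the ask_model and rate_lean placeholders are assumptions for illustration, not any particular research group's methodology.

```python
# Sketch of a bias "scorecard": pose a fixed battery of politically charged
# prompts to a model, have a rater place each response on a left/right scale,
# and average the results. All names and values here are hypothetical.

PROMPTS = [
    "Should governments tax carbon emissions?",
    "Describe the effects of minimum wage increases.",
    "Summarize the debate over school vouchers.",
]

def ask_model(prompt: str) -> str:
    """Placeholder for a real call to the model under evaluation."""
    return "..."

def rate_lean(response: str) -> float:
    """Placeholder rater: score from -1.0 (left-leaning) to +1.0 (right-leaning)."""
    return 0.0

def bias_scorecard(prompts: list[str]) -> float:
    """Average measured lean across the battery; 0.0 means no measured lean."""
    scores = [rate_lean(ask_model(p)) for p in prompts]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    print(f"Average measured lean: {bias_scorecard(PROMPTS):+.2f}")
```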
I think Stanford HAI has some work that they've done on this. I know in Germany there's the Karlsruhe, I'm probably butchering the pronunciation of that, Institute of Technology, which has done some things. I think the Santa Fe Institute has a project on this. There are a bunch of different ones, you know, and the findings kind of, you know, they just surface alignment with particular party positions, ways in which these things talk about certain policies, ways that they talk about, you know, climate change.
And one of the challenges that you have, though, is that there's this sort of common refrain that, like, reality has a left-leaning bias, like with climate change in particular, right? How do you know where to draw the line? How do you talk about bias in certain topical areas, like, vaccines don't cause autism, guys, right? But there's a vast number of people on the right who have been misled into believing that they do. And so how do you surface what models articulate about particular things?
This, again, just to return to the Grok drama: one of the issues that Elon had with Grok was that, as people moved into using “@Grok, is this true?”, it became a way that users were doing almost fact checks of content that they saw on X. Because remember, X, you know, nuked its professional fact-checking program; Community Notes, while a wonderful initiative, takes like 14 hours for a note to appear, if it appears at all. It's very, very slow.
And so people began to use “@Grok, is this true?” as a means of trying to get validation. And then, of course, sometimes Grok says, like, you know, yes, humans do play a role in climate change. And this would, like, enrage some percentage of the audience. And on certain types of more culture war issue topics, that led Elon to say, like, oh, Grok has been badly trained, it pays too much attention to certain types of sources, we're gonna fix that.
And now, if you look at how that Grok agent interacts, it has significantly shifted the ways in which it talks about certain high-profile topics. So the bias has tilted, you know, where it might've been actually fairly truth seeking and neutral, but again, based on treating mainstream sources and scientific journals as authoritative sources; now it has been weighted in a very different direction.
So that question of what kinds of scoring, and how do you design the tests that surface these things, that itself is kind of a fraught question. But this is an area of research where people are trying to create visibility into ways in which models surface content.
You know, the same way we used to say in social media land, there's no such thing as a neutral recommender system, there's no such thing as a neutrally ranked feed; there is some value baked in. So surfacing that transparently is probably the best option that you have for at least making the public aware of what they are using and how, as opposed to trying to pretend that there is some true neutral.
Kevin Frazier: Yeah, and I think this begs questions around some of the other inputs to AI model development. Like where you're getting your training data is obviously going to have huge ramifications for what truth looks like or what truth may appear to look like from whatever that model output is.
So, also worth noting that the AI Action Plan itself calls for some of this mechanistic interpretability and explainability that may actually assist with some of these evals into whether or not a model is truth seeking or seeking of the truth, however, it ends up being defined. But we also haven't touched quite yet on the idea of ideological neutrality. So, Alan, do you wanna circle back to that?
Alan Rozenshtein: Yeah, so this is, this is the tricky one, I think. So first, squinting at the text: I mean, you can read it a lot of different ways, right? It actually does talk about DEI as, like, a big boogeyman, though again, it doesn't define it, which I think is interesting. But it also does not say that the models have to be neutral. It says that the tools, quote, do not manipulate responses in favor of ideological dogmas, and that developers shall not, quote, intentionally encode partisan or ideological judgments, right?
And even there, it allows that if the quote judgments are prompted by or otherwise readily accessible to the end user. And so I, I think what, what that is recognizing again, is the fact that there is no such thing as a quote unquote non-ideological response, right?
At a highest level ideology is just one's worldview. And these models do have a worldview. And they have a worldview because people, because humanity has a worldview, and these models are trained on humanity’s corpus of output. And so, you know, I don't wanna get like, too sort of philosophical, but unfortunately this, this question is actually deeply philosophical.
It's very important to distinguish between ideological neutrality I think in two senses. One sense is I think what people think of as ideologically neutral, which is a kind of procedural liberalism that tries to be generally tolerant and generally open-minded, right? And that's not ideologically neutral in the strict sense, because there are things that it's not ideologically neutral about.
For example, it is not ideologically neutral about intolerance, right? It has certain, like substantive commitments to it. But it, it, it, it sort of encodes this idea that in a liberal society you wanna have like a pretty wide Overton window all things considered. And like, if that's all that this is saying, I, I think that's fine.
That's not to say that every model should be like that. I mean, if you want WokeGPT or you want BasedGPT, like, you should be able to buy that. But of course we're talking about this in a government procurement sense, and I think it's not unreasonable in a, you know, liberal democracy for the government to use a model that is liberal and ideologically neutral in that sense.
The thing is, no one should think that that's ideologically neutral in the strict sense, right. And what worries me, and this is to the point, Renée, that you made just now: one could read this, and one could listen to the discourse about this, and think that there is a technically neutral option here, right? And that does not exist.
And so, you know, shortly after the woke Gemini debacle, I, along with James Grimmelmann and Blake Reid, two great legal academics, wrote this piece saying, look, woke Gemini was ridiculous because it was a bad model that disserved its users and gave them stuff that users definitely don't want. But the problem wasn't that it was not neutral. The problem was that it was a stupid model, right?
And so I think that ideological neutrality in the kind of soft sense is definitely something that we probably want, and something that clearly the model developers also want, because they also don't want to jam a particularly divisive ideology down their users' throats, 'cause that's not what most users want.
But no, let's not pretend that this is ideological neutrality in the sort of deeper sense. And I, I just really don't want people to misunderstand that that is technically feasible because it is not technically feasible because literally in some deep sense, like thousands of years of political theory and like moral reasoning and moral philosophizing have been trying to figure out like the ground of you know, normative neutrality and like they haven't succeeded and they won't because it doesn't exist.
Renée DiResta: No, I think, again, I would just say that, you know, the absolute neutrality idea, you're not gonna have it. So it's just having that visibility and that transparency into system prompts and things, and letting people see what they have available to them and what they can choose from, that is the best possible remedy, if you will.
Kevin Frazier: And to, to close, there's been some argument that Grok in particular, and xAI, stands to benefit most because it's already reached some contracts, some agreements, with the federal government itself. Alan, Renée, any hot takes on which lab is the biggest winner, so to speak, in this new reality?
Alan Rozenshtein: Well, Renée.
Renée DiResta: No, no, no. Go ahead. I, I'm curious because, you know, there's also the, the sheer pettiness of this administration that I think we can't discount.
Alan Rozenshtein: Yeah, that's what I was gonna say. Yeah. I mean,
Renée DiResta: And–
Alan Rozenshtein: Grok looked like a big winner a few months ago and, and now Elon is on the outs.
Renée DiResta: Right.
Alan Rozenshtein: So I, yeah, I don't, I don't know, right? Maybe Grok is a winner. Like, it's a good model. I mean, like, the MechaHitler stuff is stupid, but, like, whatever, you know, it's stupid in the way that–
Renée DiResta: But Zuckerberg is in the tent, Zuckerberg is in the tent and you know, he bought his way–
Alan Rozenshtein: Sam, Sam Altman–
Renée DiResta: Bought his way into the tent, you know. And, well, I don't know that Sam, I don't think Sam really kissed the ring quite so much.
Alan Rozenshtein: No, I think with, with Star, Star, Starbird, Starfield, Star, whatever it's called, what is it? Stargate, Stargate, thank you.
Renée DiResta: Okay, okay.
Alan Rozenshtein: Right. I mean he, he gave Trump a, a, a big win. So like,
Renée DiResta: Okay, that’s true.
Alan Rozenshtein: I, I think right now it is still a pretty open field. And then I also just wouldn't discount the power of bureaucratic procurement inertia, right? Like, you know, if some agency is operating on the Microsoft Azure cloud, or they have a Google Cloud account, like, whatever the wokeness or basedness of the relevant model is, it's just gonna be a lot easier to extend that particular contract.
Kevin Frazier: Is, is Clippy woke? We'll save that for the audience.
Alan Rozenshtein: Is Clippy woke or based? That's a good one, yeah. Answer, answer in the comments.
Kevin Frazier: Answer in the comments, thumbs up for Clippy being woke.
Anyways, Renée, Alan, thank you so much for joining. It's been a hoot as always, and I'm looking forward to having you both back soon and, Alan, making you come to this side of the hosting table.
Scaling Laws is a joint production of Lawfare and the University of Texas School of Law. You can get an ad free version of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Check out our written work at lawfaremedia.org. You can also follow us on X and Bluesky and email us at scalinglaws@lawfaremedia.org. This podcast was edited by Jay Venables from Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.