Cybersecurity & Tech Executive Branch

Lawfare Daily: The Pentagon Designates Anthropic as a Supply Chain Risk

Benjamin Wittes, Alan Z. Rozenshtein
Tuesday, March 3, 2026, 7:00 AM
What has been the reaction to the designation of Anthropic as a supply chain risk?

In a live conversation on March 2, Lawfare Editor in Chief Benjamin Wittes spoke to Lawfare Senior Editor and Research Director Alan Rozenshtein about the Pentagon's designation of AI company Anthropic as a supply chain risk, the implications of a designation, how other AI companies have reacted, and the legal challenges the designation may face.

Read Rozenshtein’s article on the topic, co-authored with Michael Endrias, here.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Intro]

Alan Rozenshtein: It is completely insane to simultaneously say, this product is so important, we're gonna force you to give it to us. It's so safe that we're gonna use it during an active military engagement. And it's so dangerous that we're gonna burn you to the ground.

Benjamin Wittes: It's the Lawfare Podcast. I'm Benjamin Wittes, editor in chief of Lawfare with Lawfare Senior Editor and Research Director Alan Rozenshtein.

Alan Rozenshtein: So it's quite possible that, you know, the people at OpenAI are, in good faith, assuming that when the government says, oh yeah, according to law and practice and regulation and this DOD guideline, we're not gonna use autonomous weapons like that, that protects them. And DOD is saying, okay, that's great for us 'cause we can change the guidelines.

Benjamin Wittes: In a live recording on March 2, we talked about the Pentagon's designation of AI company Anthropic as a supply chain risk, the reaction from Anthropic and other AI companies, and the legal challenges the designation is surely going to face.

[Main Podcast]

I am here with Lawfare Senior Editor Alan Rozenshtein, professor at the University of Minnesota Law School and sudden expert on procurement law. Alan, how did you spend your weekend?

Alan Rozenshtein: Reading a lot of procurement law. I, I have to, I just have to say, the nature of expertise is really relative. I, I really always think of the, of the, in the land of the blind, the one-eyed man is king. And so, I spent a, a fun weekend going from zero to 60, at least 45.

Benjamin Wittes: So what caused you to do a crash course in defense department procurement law?

Alan Rozenshtein: So there's this, there's this little company that some of us may have heard of called Anthropic, and they make an artificial intelligence system called Claude. And Anthropic has actually, for the last few years, been at the forefront among the major AI labs in working with the government, in particular on military and classified systems.

And actually, last summer, the company signed a deal with the Pentagon to increase the Pentagon's usage of those systems, again on its classified networks and for military purposes. As part of that contract, Anthropic had a couple of what are now being called red lines, primarily that its systems would not be used for mass surveillance of Americans and also that its systems would not be used for fully autonomous military operations.

And the Defense Department clearly was okay with that. But in January, Secretary of Defense Pete Hegseth put out a memorandum on the use of AI. And in that, he demanded, or he set the policy, that the military would only allow AI contracts where those contracts permitted, quote, all lawful uses of those systems.

So basically he got tired of companies imposing sort of additional restrictions on the use of their systems beyond what was kind of required or permitted under U.S. law. That, plus the use of Claude, or the reported use of Claude, in the operation to capture the Venezuelan president, Nicolás Maduro, and some reports that maybe some folks at Anthropic asked some questions about that, clearly caused some alarm bells to go off in the Pentagon.

And over the last two weeks, we've seen an increasingly tense, growing standoff between the Defense Department, which has been pushing Anthropic to remove usage restrictions from its contract, and Anthropic, which has been willing to play ball somewhat but has these couple of red lines. In the last week, Hegseth threatened to invoke this law called the Defense Production Act, which would've potentially required Anthropic to provide Claude without the restrictions.

But in the end, what ended up happening was that on Friday, President Trump wrote a social media post banning its use on any government systems. And then, soon after, Hegseth put out a post on X purporting to designate Anthropic as a, quote, supply chain risk. We'll get into, I'm sure, what that means in detail. But in particular, it bans not only Anthropic from government contracts, from DOD contracts, but also, and this is the key, bans any business that has government contracts from itself doing any business with Anthropic.

Benjamin Wittes: So it's kind of like a secondary boycott?

Alan Rozenshtein: Exactly. I mean, it's, it's a sanction, it's essentially a sanctions regime against Anthropic. And the reason this is so important is not only does Anthropic really rely on its enterprise customers, many of which also do business with the government.

But two of Anthropic's main cloud compute providers, Amazon and Google, are themselves defense contractors. So if Hegseth's designation is read to its utmost, it's effectively a death sentence for Anthropic, because it loses its capacity for compute.

There, there's a little bit of confusion right now whether Hegseth has formally designated Anthropic or he's about to designate Anthropic. Unsurprisingly, the process in DOD has not been great on this, but at the very least, in response, Anthropic has put out a statement saying that it would sue in court against any supply chain designation. So where we are right now is, I think, we're waiting for the formal designation to come in and then for Anthropic to, you know, run to the nearest courthouse and sue to enjoin that designation.

Benjamin Wittes: Alright, so first of all, memo to Anthropic. Please sue in the District of Columbia, it would be much more convenient for Lawfare than, you know–

Alan Rozenshtein: Cut down on airfare costs for our reporters.

Benjamin Wittes: Yeah. You know, we already have Anna Bower flying all over the country to Tennessee, to Florida to Georgia.

Alan Rozenshtein: Don't bring California into this.

Benjamin Wittes: Alright, so before we get into the details of the law, I wanna try to isolate what this is really about, because Pete Hegseth knew we were about to attack Iran when he did this on Friday, right? This happens Friday evening. By the time we all wake up on Saturday morning, we are at war with Iran.

So Hegseth made a conscious decision–and Trump made a conscious decision–let's go to war with an American defense contractor the day before we go to war with a significant foreign adversary. So they must have cared about this a lot for some reason. Yet Anthropic has made clear that they don't engage in mass surveillance of Americans and they don't aspire to; they have even issued a statement over the weekend that they're not building any fully autonomous weapons.

So is this just chest thumping, we are gonna beat up on you and make you do what we wanna do, for the symbolism of it? Or do you think there is something the Defense Department actually wants to do that Anthropic does not want Claude to help with?

Alan Rozenshtein: So that is an excellent question. So let me first say, I think in this administration in particular, you can never rule out personality-driven decision making, which is obviously kind of unsatisfying from, like, a legal and policy analysis perspective. You know, like, we all are used to administrations where, you know, good people, bad people, whatever, they're fundamentally rational actors. We can kind of game out what they're doing.

This is just not the case for this administration. So it would not surprise me at all if a lot of this is driven, you said ideology, I'd maybe just say pique. Like they just got pissed off because, you know, some nerd with, you know, curly hair from San Francisco is purporting to tell, you know, quote unquote Secretary of War Pete Hegseth how to use his war fighters.

And you know, just as Donald Trump once bragged that the way he set Switzerland's tariffs is that after he set them, the Swiss prime minister called him and complained, and because, quote, I didn't like her tone, I just doubled them instead of lowering them. And to be clear, he was bragging that this is how he sets tariff policy. I really think, and this is a family show, so I won't use the analogy I wanna use, but I really think there's an element of, I'd like to show you that my stick is bigger than your stick.

And maybe that is for some rational purpose down the line, but I think a lot of it is because I just wanna show it to you because domination and this kind of symbolic politics is how these people think, right? And it seems kind of crazy to potentially threaten the very foundation of American AI. And we'll talk about why I think that is part of this, right, based on chest thumping.

But I mean, we just went to war with Iran for reasons I don't fully understand. I mean, this is well within the realm of possibility, right? So I think there's a lot of this that might be that. But I do think there also may be substantive issues here. I think there is an ideological component here, that the military does not want its contractors to be telling it how to use its tools and how to set military policy, right?

That if we're gonna have a military industrial complex, it's the military side, not the industrial side, that should be running it. I should be clear, I am actually very sympathetic to that position, and I think it's important, you know, even as we're analyzing this sort of shambolic policy that Pete Hegseth is implementing right now, to try to abstract a little bit from the personalities involved and say, okay, but what should the overall relationship be between the military and AI? And I think it's actually quite defensible that it's the military that should ultimately make those decisions.

Now, that's separate from, okay, what should the military do if a company doesn't wanna play ball? But I think there's an ideological component here, and I think that's one worth taking very seriously.

And then finally, it may be the case that the military is trying to build autonomous weapons and do some surveillance that Anthropic would be less comfortable with. Now, the military is saying it's not doing that, but a lot of this depends on, you know, how you define fully autonomous and how you define unlawful surveillance.

And you know, I don't have to tell you, Ben, that, like, depending on how you squint, you can call things mass surveillance or not. And so obviously the military does conduct a huge amount of surveillance. The NSA is part of the military, and a lot of it's lawful. And so maybe they never wanna put themselves in the position of having to, you know, call Amodei and ask for permission.

Benjamin Wittes: Right. Although I do think that software vendors and software-as-a-service vendors are different from other vendors. It's not like, you know, you buy an F-16 from whoever makes the F-16, Lockheed, and then you decide how to use it as a military.

Whenever you're buying software, you're actually buying a license to use software, not the software itself. And that always comes with a long click-through agreement that is the company's terms, right? And so why is this any different from Microsoft saying, you know, you can't use Microsoft Word to do X, Y, or Z?

Alan Rozenshtein: Yeah. So it's a fair point. And the idea that is circulating in at least, you know, some parts of the internet, that it's totally unprecedented for a company to try to impose conditions, that's just not true. Conditions are imposed all the time. That's totally standard now.

The question is, what is the nature of those conditions? So I don't, I have not read, like, the master services agreement between Microsoft and the military. I'd be very surprised if you can't use Microsoft Word to plan an invasion of Iran or something like that. Right. I mean, maybe it says that–

Benjamin Wittes: You can only, you can only use Google Docs for that.

Alan Rozenshtein: Exactly. Exactly. Good luck with that. So, but look, I think the real question is not, you know, is this legitimate or not. It's, in this particular case, does the government wanna abide by these restrictions?

The company has every right to insist on them, and the government has every right to say, no thank you. Right? And like if we were civilized people, we would just shake hands and this would not be a story.

Benjamin Wittes: And so the way this would resolve, presumably, if we were civilized people, is the government has some kind of exit term from the contract, or it simply doesn't renew the contract when it comes back up.

Alan Rozenshtein: Exactly.

Benjamin Wittes: Right. And it, it chooses not to do business with Anthropic because the terms are not adequate.

Alan Rozenshtein: Yeah. And then it's, you know, a 24-hour story, kind of interesting, and then we move on with life.

Benjamin Wittes: All right, so the government doesn't do that. Instead, it designates it as a supply chain risk. Let's pause here and say everybody was expecting them to do something under the Defense Production Act. What could they have done under the Defense Production Act?

Alan Rozenshtein: Yeah. Yeah. And, and I'll say I, I was quite surprised by, by this, I, I did not have on my bingo card that I'd be spending the whole weekend studying supply chain risk because it seems so outlandish. But here we are.

So the Defense Production Act is a Korean War era statute passed largely to kind of regularize what had been done in World War II, where the entire economy became a military economy under some combination of cooperation and cajoling from FDR, right, our first "you can just do things" president.

The government can require companies to fast-track government contracts; that's the most straightforward thing. But in addition to that, it can require companies to enter into contracts with the government to sell the government standard commercial goods and services.

And, on a particularly extreme reading of the Defense Production Act, which has not been tested in court, so there's some question about that, to even produce new products for the government. And although the Defense Production Act is quite old, it's actually been renewed something like 51 times; it has a very short sunset clause. And at some point, I think 10 or 20 years ago, Congress explicitly included, you know, software and high technology as part of it.

Benjamin Wittes: So the Defense Production Act really is just, if we need a command economy situation: A, the government can do it, it can compel you to produce stuff, and B, it's gotta pay you for it.

Alan Rozenshtein: Yes. And I should say, the extent to which it can, you know, force a company to produce wholly new things is somewhat unclear, but the text is certainly very, very broad. And my thought was that if the government wanted to require Anthropic, at the very least, to provide Claude, right, like this current system, but under different contractual terms, it could do that pretty easily.

And while I wasn't a fan of that as a policy matter, I thought the government would have a pretty good legal case. So naively I thought that that's what the government would, would do. That is not what the government chose to do.

Benjamin Wittes: All right, so what the government chose to do, I wanna just assert, is closely analogous to what it did to Harvard University and what it did to law firms, right? Which is to say: you have asserted your rights, in this case, rights under a contract, in the case of Harvard and the law firms and NPR, First Amendment rights, that we don't like.

So we are going to take retaliatory action against you using, in this case, not money, which is what they did with Harvard, but our ability to prevent other entities from doing business with you, and to prevent you from contracting with the government.

Alan Rozenshtein: Which is to say money.

Benjamin Wittes: Right, but it's not direct, it's a little bit more indirect, except in the government contracting sense.

Alan Rozenshtein: Yes.

Benjamin Wittes: My instinct looking at this is that, first of all, as a normative evaluation matter, we should be exactly as skeptical of it as we are with Harvard or the law firms or NPR. And secondly, we need to interrogate, which is the point of your article, the legal basis for the actions that they're taking. So let's, let's take the easy one first. Is there any reason to think of this in a different framework from the Harvard action or the law firm's action?

Alan Rozenshtein: No, but I, I, I might zoom in just a little bit 'cause I actually think, and again, it's been a while since I really dug into like the exact details of the Harvard and law firm stuff.

I think this is actually more like the law firms than it is like the Harvard action, because, if I understand it, the main Harvard issue was the withdrawal of federal funds. I mean, I guess that's not true as I think about it, because I think Harvard also was banned from getting foreign students, which is actually a little bit more like what's happening here.

But the point I'm trying to make is this isn't just withdrawal of funds, right? This is essentially kind of persona non grata treatment of an entity, right? And for Harvard, that was done by restricting international students. For the law firms, it was done, you know, actually very similarly to what's happening now. And yes, I think this is the right way of thinking about what is happening, right? It's almost a sanctions regime attempt against a domestic company, right? Which is wild.

Benjamin Wittes: All right, so what authority does the government have to–let's start with the one where their authority should be stronger–point at a company and say, you're a supply chain risk and nobody, nobody in the government is allowed to do business with you. Let's hold aside for a minute the secondary sanctions issue.

Government decides it doesn't like Alan Rozenshtein, Inc. The president issues a tweet or a Truth Social post that says no government agency can do business with Alan Rozenshtein, Inc., 'cause he's a supply chain risk. What authority do they have to do that?

Alan Rozenshtein: Well? Well, first, Benjamin, you have to promise that you'll, you'll frame that for me, for my office.

Benjamin Wittes: I'll frame it for you.

Alan Rozenshtein: Thank you. Thank you. So there, there are two statutes, both from the 2010s. One is the Federal Acquisition Supply Chain Security Act, FASCSA, which is very hard to say.

Benjamin Wittes: Yeah, it's a bad acronym.

Alan Rozenshtein: It's a very bad acronym. And the other is the statute 10 U.S.C. 3252, which was initially enacted as part of the 2011 National Defense Authorization Act and then was made permanent in the 2018 National Defense Authorization Act.

I, I mentioned them both because although it seems that the government is acting under Section 3252, they're still both useful to think about because I think they express kind of how Congress was thinking about the issue of supply chain risk at a certain time in the 2010s.

They did enact two somewhat different statutes, but I think you can sort of read them together. And that's important because the language of both statutes is reasonably broad. But you have to understand the context here.

But lemme just focus on 3252, which is what we all think the government is acting under. It's certainly what Anthropic thinks the government is acting under. It's what other knowledgeable people think the government is acting under. And the reason is that FASCSA requires, like, an interagency process and 30 days' notice and, like, a whole thing; it's a more regulatory statute.

That's really not what's happening here. 3252, this other statute, basically allows the secretary of defense, essentially on his own authority, to find that a particular supplier is a supply chain risk, and then immediately exclude that supplier from government contracts for quote unquote covered systems, basically national security products.

Benjamin Wittes: Now, I wanna pause you right there, because when you designate something as a supply chain risk, it does not sound to me like you can say, out of one side of your mouth, give this to me on the terms that I want, and, out of the other side of your mouth, you are a supply chain risk. That feels a little bit like, you know, the food here is terrible, and in such small portions, right?

Alan Rozenshtein: I mean, yeah, yeah. Yes. I mean, let, let me try to steelman that argument 'cause I, I can imagine.

Here's what I can imagine a DOJ, you know, federal programs attorney saying in court when the judge quotes Annie Hall to this effect: Well, look, we think that under the current contract term regime, the use of Anthropic is intolerable. And the very idea that we'd have to call Dario Amodei for permission, right, even that that's a possibility, is totally intolerable. But if you remove the contract restrictions, suddenly it's not a problem anymore. Look, I'm just saying, if you had to speak out of both sides of your mouth, that is what you would say.

Benjamin Wittes: But what's the language of the statute? I mean, what does the secretary of defense have to find? That the terms at which a product is being provided, the contractual terms, are intolerable?

Alan Rozenshtein: No. That this supplier, you know, is an adversary whose products will, quote, sabotage, subvert, or maliciously introduce unwanted functionality. I'm not saying it's a good argument, Ben.

Benjamin Wittes: Yeah, it seems-

Alan Rozenshtein: I'm just saying.

Benjamin Wittes: It seems like that is not, like, we negotiated a contract that, in retrospect, we regret, and we don't wanna wait until the contractual terms are up to renegotiate it. And we don't like Dario Amodei's hair.

Alan Rozenshtein: Look, I'm trying to play along here, but as our esteemed Lawfare colleague, Anna Bower likes to put it, we live in the dumbest of all possible timelines.

Yes, of course. It's completely insane to simultaneously say, this product is so important, we're gonna force you to give it to us. It's so safe that we're gonna use it during an active military engagement, and it's so dangerous that we're gonna burn you to the ground. Yeah. You, you can't have all, obviously you can't have all three of those at the same time.

Benjamin Wittes: Right. It, it just seems like that dog won't hunt.

Alan Rozenshtein: It's, it's bad. It's bad, man.

Benjamin Wittes: All right, so on we go. What is Anthropic's argument going to look like, beyond what I just said, that this does not cover it? I mean, it sounds to me, just listening to the statute, like it's directed at Kaspersky, or directed at, you know, some foreign entity that you wanna keep the U.S. supply chain pure of, not at an existing American defense contractor that you have a contract dispute with. Am I overstating it?

Alan Rozenshtein: I mean, funnily enough, I, along with the fabulous Howard University law student Michael Endrias, published earlier today a 3,500-word analysis of all the things I expect Anthropic will say when it sues. This is, how do we say, a target-rich environment. Every layer of this is just a disaster for the government.

Benjamin Wittes: Right. So give us an overview of the before the secondary sanctions problem–

Alan Rozenshtein: Yes, yes.

Benjamin Wittes: –what, what are the major arguments that are available to Anthropic?

Alan Rozenshtein: Well, the first argument is that it's not actually clear that this law can even, in principle, apply to a U.S. company like Anthropic.

Now, it is true, let's be fair here, that the text of the law does not single out foreign companies. This is not one of those laws. But when you, for example, read the legislative history of this particular law, it's all about the threats from globalization to supply chains. But Anthropic, of course, is headquartered in a lovely office building in San Francisco. When you look at the other law, the FASCSA law, and again, they're different laws, but I think they're getting at the same thing, that law's legislative history is all about Kaspersky, Huawei, and ZTE.

And then, I think, another thing that is worth mentioning is, whereas FASCSA actually gives the targeted company some procedural protections, 30 days' notice, some D.C. Circuit review, 3252 provides essentially no procedural protections. Now, that's fine; it doesn't have to provide procedural protections. But given that FASCSA basically only applies to foreign companies, it'd be very weird if a law that applies to domestic companies provided fewer protections than a law that applies pretty clearly to foreign companies, right?

That's the exact opposite of what you would think, because, of course, domestic companies have due process rights. You know, no one is owed a government contract, but they are definitely owed some notice and an opportunity to be heard, something reasonable, if the government is gonna suddenly cancel contracts and especially impose a secondary boycott.

So it's just not at all clear that, as a threshold matter, any of this applies. And the reason that's important is because courts are generally, and I think rightly so, loath to really second-guess the specific national security determinations of the executive branch. So it's a much stronger argument for Anthropic to go in and say, it's not that we're not a supply chain risk, though I think they can win that argument; it's that this just doesn't apply to us.

This is a classic example of what's called ultra vires action, where the government is invoking a law that just does not apply to the situation. So I think that's a primary argument here. Right. But it is also the case that a court, I think, will be able to, you know, under the Administrative Procedure Act, review the actual determination for being arbitrary and capricious. And here, I think, Benjamin, the exact point we were just talking about. Again, you can't simultaneously–

Benjamin Wittes: It's almost the definition of arbitrary.

Alan Rozenshtein: It's, it's almost the definition. Yeah. Yeah. I mean, I, I, I may literally use this to teach the concept next year when I teach administrative law.

Benjamin Wittes: And then add to it that they've delayed enforcement for six months. So it's like, it's so dangerous that it poses a supply chain risk, so six months from now we're gonna stop using it and ban everybody else from using it.

Alan Rozenshtein: Exactly. And then, finally, there are concerns about pretext here, and the pretext comes in sort of two flavors. One is that when you look at the public statements that Secretary Hegseth and President Trump have made, they are not exactly sort of sober-minded, we-have-analyzed-and-we-have-decided statements.

No, no, no. It's all about how Anthropic, you know, Trump says, is radical, left, woke, something something, and Hegseth insulted it a bunch of times. It's pretty clear that they don't like Anthropic, they don't like Amodei, they don't like, I don't know, whatever ambient leftism they are imputing to Anthropic, which I actually don't think is accurate, but that's kind of beside the point.

So there's a pretext concern there. There's another pretext concern, which is gonna get us to a kind of equally interesting side quest, maybe we can talk about this later in the conversation, about OpenAI. Because, just to preview very briefly–

Benjamin Wittes: Yeah, we're gonna, we're gonna get to OpenAI and Grok momentarily.

Alan Rozenshtein: Yeah, yeah. Right on the, I think, very same day, or basically simultaneously, as Hegseth is setting fire to Anthropic, he's also signing an agreement with OpenAI that, and this is where it gets very bizarre, OpenAI claims is actually as, if not more, restrictive than what Anthropic wanted.

Now, we're gonna get in a few minutes to whether that's true or not, but let's assume it's true. Well then, now I'm utterly confused, right? 'Cause how could it be that Anthropic is such a dangerous supply chain risk if OpenAI, which is bragging about how it's gonna go forward, deploy engineers in DOD, impose all the safety stack stuff, and have all these red lines, is not a supply chain risk?

The math does not math. And again, we haven't even gotten to the secondary boycott issue yet.

Benjamin Wittes: So let's get to the secondary boycott thing. Let's imagine we were dealing with Kaspersky.

Alan Rozenshtein: Yeah.

Benjamin Wittes: And we were dealing with something that was generally understood to be a legit supply chain risk. Again, not making any comments about Kaspersky, but that's how it's understood rightly or wrongly.

So imagine that SecDef Hegseth had said, all right, any company that does business with Kaspersky, even if it's insulated from its business with the Defense Department, can't do business with the Defense Department. Does the SecDef have the authority to do that?

Alan Rozenshtein: He almost certainly does not. So what the SecDef does have the authority to do, and this makes sense, is he's allowed to say you're a supply chain risk. You can't sell your products to us. And also anyone who is building a national security product for us cannot use Kaspersky as part of that product. Right?

And maybe you could even make the following argument: that the nature of Kaspersky, or the nature of a model like Claude, is such that you can't kind of isolate it from the business. So if you use it anywhere, you can't sell us a product. Maybe you can make that argument; that'd be more specific. But what you definitely cannot do is say, and also you can't do any business with Kaspersky, like, you can't provide financial processing to Kaspersky, right? There's nothing in the statute that allows you to–

Benjamin Wittes: I mean, there are other statutes that give you that authority, like IEEPA, which does not allow you to impose tariffs, but does allow you to designate an entity and say, you're not allowed to do business with that entity.

Alan Rozenshtein: Well, it gets even better, because this issue has come up. So in the 2019 National Defense Authorization Act, there's a Section 889, you know, open your hymnals to Section 889 for those following along. And there, Congress basically imposed a full-on secondary boycott of Huawei and ZTE, right, the two Chinese telecommunications firms.

So in that situation, Congress said anyone who uses Huawei or ZTE anywhere in their systems, right, in a substantial way, cannot do business with the government. Now, interestingly, even that did not go as far as what Hegseth is purporting to do. Because remember what he's purporting to do.

At least based on his X posts, which apparently is how we do national security policy now, it's to prevent, let's say, cloud compute providers from selling compute to Anthropic, right? That's actually even beyond what Congress did in Section 889.

So again, all of this is very strong. And, by the way, you mentioned IEEPA and you mentioned the tariff case; we haven't even gotten to the sort of brooding omnipresence in the sky that is the major questions doctrine.

The idea that, you know, especially these days with a somewhat conservative Supreme Court, we don't read really dramatic grants of policymaking authority into unclear statutory text for the executive branch. And burning an American frontier AI company to the ground because you don't like how it contracted with you, that's a pretty major question.

Benjamin Wittes: Right. And we know from the tariffs case that the major questions doctrine now does apply to presidential action, and to actions in the national security space, which was a bit of a question prior to last week.

Alan Rozenshtein: Yeah.

Benjamin Wittes: Alright, so let's talk about OpenAI and Elon Musk, because we have two different reactions to this demand from Hegseth. From these two: Elon Musk says Grok will absolutely do anything the government wants it to do, and OpenAI says it has a contract that's more restrictive than the one that Anthropic is in trouble for. So what do we actually know about OpenAI's actual contract? And do we know that there are real restrictions in it?

Alan Rozenshtein: Yeah. Well, let me just say one thing about Grok for a second. Not super surprising that this is Elon Musk's position. I think, you know, putting aside my feelings about Elon Musk, it is a perfectly coherent position.

I will say, it is worth it for everyone involved, please, to try to think more than six months ahead, because, you know, no party is in power forever. And again, you wanna be careful about the precedents you set, right? So just as every Democrat should always think about what happens when Trump and JD Vance are in power, every Republican should think about what happens when President Newsom or President AOC are in power. Right.

Benjamin Wittes: Or when Hakeem Jeffries is speaker of the House.

Alan Rozenshtein: Oh, or even then. Yes, exactly right. So I'll just leave that there.

The real question is OpenAI. This I find to be one of the most bizarre scenarios I have ever witnessed in my time studying AI policy, because you have this contract that OpenAI has signed with the government. Now, OpenAI has released several important provisions from that contract, okay?

They did this in a blog post. Those provisions, I think pretty clearly, and this is essentially the near-unanimous consensus of all the law types engaged on this issue, at least on X, do not impose meaningful red lines. They just do not, because essentially what they say is: you will not use our systems for autonomous weapons where such use is banned under law, policy, or practice. Okay?

But even if today it is banned under law, policy, or practice, which I'm not at all clear it is, well, what happens when tomorrow it's not banned? That's not a red line. That's just restating all lawful uses. Similarly: you will not use our tools for mass surveillance where that is banned by the Fourth Amendment and FISA and 12333.

It's like, okay, but a ton of mass surveillance as normal people understand it is perfectly legal under the Fourth Amendment, FISA, and 12333. Right? Again, we can have a totally separate conversation one day about whether we should have AI mass surveillance and how. That's not the question. The question is just, what did OpenAI agree to? Okay.

Benjamin Wittes: And the terms of the contract are not public, I take it?

Alan Rozenshtein: Well, not the whole contract, but these paragraphs are. So OpenAI releases this, and everyone, including myself, but not just me, starts pointing out: guys, what is happening here? These are not red lines.

So then Sam Altman, and I'm going into detail here because I really wanna emphasize how important it is for the future of technology and the American democratic experiment that we get AI right. And this situation is not providing me with a lot of confidence.

Sam Altman says, hey, we're gonna do an ‘Ask Me Anything’ on X. Ask me questions, and I'm gonna have some of my senior people, my national security person and some engineers, come join. And it just gets worse and worse from there, because people start saying, politely: these are not red lines.

And then what you're getting is responses saying, oh no, you know, there are other parts of the contract that actually fix the state of the law at the time we signed it, so DOD can't change its mind. But we're not releasing that part of the contract. And I don't know why they're not releasing that part of the contract.

So, just from a comms perspective, I think this is honestly kind of a disaster. I mean, no one needs to take my comms advice, but reputation matters here. And look, I should say, I know a lot of people at OpenAI, I respect a lot of people at OpenAI. But I will say, OpenAI, I do not think, is covering itself in glory in terms of pushing back against what has always been its reputation, whether fair or not, as a little bit of a shady, talk-out-of-both-sides-of-your-mouth actor. That's not great.

Obviously the comms issue is whatever it is. The real question is, well, what did OpenAI agree to? And there seem to be three possibilities, and I honestly cannot tell you which one is right. One possibility is that OpenAI has in fact gotten the red lines Anthropic wanted, and there's some other part of the contract that will clarify that. That's possible. That then raises the question of, well, why are we trying to burn Anthropic to the ground, then?

Benjamin Wittes: Right?

Alan Rozenshtein: But whatever, that's not OpenAI’s problem. That's possibility number one. Possibility number two is OpenAI’s just lying. Right? These are not real red lines. Well–

Benjamin Wittes: Or playing too cute by half.

Alan Rozenshtein: Yeah, or whatever. But that these are not red lines, that OpenAI lawyered this very carefully so as to give itself wiggle room, and it's hoping no one notices.

Or third, and I don't know, but this might actually be what's going on, and this is maybe even scarier: OpenAI thinks it has red lines, but the DOD does not think it has red lines. And this happens all the time. You know, it's a very frequent thing in contract drafting. You just don't have what's called a meeting of the minds.

People think it means different things. Right? And so it's quite possible that, you know, the people at OpenAI are in good faith assuming that when the government says, oh yeah, according to law and practice and regulation and this DOD guideline, we're not gonna do autonomous weapons like that, that protects them.

And DOD is saying, okay, that's great for us because we can change the guidelines. But it's completely unclear which of those three it is, which is kind of maddening.

Benjamin Wittes: Let's play a little inside baseball here, which is that OpenAI's counsel are serious national security lawyers, including former colleagues of yours at NSD, people who know their way around terms like mass surveillance. And it seems to me hard to believe that you could have a loophole here big enough to drive a DOD-sized drone through and have that not be apparent to the legal team that negotiated this contract for OpenAI.

Alan Rozenshtein: I mean, yes, that seems to me to be the most likely answer. Right?

Benjamin Wittes: You think? You think that's more likely than that they negotiated something that allows them to say they have the red line and allows everybody to know that they don't really, but it allows them to say it.

Alan Rozenshtein: But-

Benjamin Wittes: Whether you call that lying, or whether you call it spinning, or whatever you call it.

Alan Rozenshtein: But I thought that's the same thing. Isn't that just what you said, right? You have these brilliant lawyers inside OpenAI. They have access to the best lawyers in the universe.

Benjamin Wittes: No, no, but what they're saying publicly and what they're advising the client could be quite different. What they're advising the client is: look, you go say whatever you want about this contract and its red lines.

But these red lines are not enforceable under the terms of the contract. And at the end of the day, DOD is going to, you know, type: hey, ChatGPT, can you loose the fully autonomous drone against the guys we don't like and have it make decisions about who to kill? And there's nothing in the contract that will stop that, as long as that is legal under U.S. law at the time.

Alan Rozenshtein: Look, based on what I've seen of the contract, that seems to me the most likely outcome. The reason I'm hesitating to say that is because it raises real, and honestly disturbing, questions about the candor of OpenAI's public statements. You know, given that OpenAI has for years talked a very big game about how dangerous artificial intelligence is and how important it is to be a good steward of it.

It's not great. Look, I would much rather, if it's gonna go down the route of X or, you know, Palantir or Anduril or whatever, that it just say so. Right. Just say so, and then we can have that debate.

Benjamin Wittes: Right.

Alan Rozenshtein: But I really cannot emphasize enough how big of a reputational disaster I think this is. And look, they don't need to care what I think.

Right. But I will just say, in Silicon Valley, the only thing that's more valuable than compute is talent. Getting the best engineers is, you know, possibly a billion-dollar or $10 billion asset. That's why Meta is paying these people a hundred million dollars to come over.

A lot of these engineers are motivated by money, they're human, but a lot of them are also motivated by wanting to do the right thing. And so, if I were OpenAI, I would really worry that my reputation, atop all the other stuff that's been happening, is gonna be very seriously and durably harmed in this very, very small community of, you know, elite AI engineers in San Francisco. And the fact that they have not resolved this, I find puzzling.

Benjamin Wittes: Alright, so I wanna go back to a point that you made in an earlier piece you wrote. Which was: leave aside the merits of this dispute, this is a truly horrible way to make the rules under which the U.S. AI industry is gonna interact with the Defense Department over major policy questions. So I want you to flesh that out, because it's kind of lying in the background of a lot of what we're talking about.

What should be the mechanism by which we decide, as a society, whether Anthropic should or should not have its product used for, you know, autonomous weapons and for mass surveillance? And why is a contract dispute not the right answer to that question?

Alan Rozenshtein: Yeah, and I should say, I mean, a contract dispute is the right answer for how you operationalize it. It's just a weird vehicle to set the substantive principles. Let me separate your question into a substantive component and a procedural component. So the substantive component is this. And, you know, as much fun as it is to sit and mock incompetence,

I think it is useful to try to zoom out a little bit and kind of think about this issue in the broader context. The way I view this is that this is the opening shot in what will be by far the most important AI regulation question of the next several years, which is to what extent will we nationalize the AI industry?

Okay. It was never going to be realistic, and people who have thought about AI for much longer than I have understood this, that the government, which is to say the people through their elected representatives and bureaucrats, was gonna sit in D.C. and twiddle their thumbs and go, oh, how interesting, while a very small group of people in San Francisco built the machine God. That was just never gonna be realistic, right?

And so we as a society are gonna have to figure out one way or the other how much control we're gonna have over these companies. And all the regulations about, you know, privacy and this and that and labor and corporate, they're important, but I think they pale in comparison to this more foundational question.

Benjamin Wittes: Can it kill you?

Alan Rozenshtein: Well, but also, just at the end of the day, how much are we going to say, this is a cool product made by the private sector, versus, this is an epochal transformation in human civilization, and therefore we're gonna have to control it a little more invasively.

Benjamin Wittes: Right. But my point is it's not an accident that we confront that question here, that we don't confront it ultimately over, will it discriminate against you? Can it, you know, judge you for real estate transactions?

Alan Rozenshtein: Yeah, yeah, yeah.

Benjamin Wittes: All these kind of consumer-protection-y things. Where the rubber hits the road, where the Defense Department says, no, we are in charge of this, and the AI people say, no, we are, is when it comes down to: can it kill you?

Alan Rozenshtein: Yes. Killer robots have a fabulous way of focusing the mind.

Benjamin Wittes: Concentrating the mind.

Alan Rozenshtein: Concentrating the mind. Okay. This is a legitimately difficult question, right? And, you know, we'll be thinking and writing about this for a long time. But that's one question.

Then there's a separate question. Okay, whatever the answer to that question is, who's gonna determine it? Right now this is being determined in, like, not a great way, right?

You know, by not the best secretary of defense that you could imagine, right? And the CEO of a company. Now, look, I'll be honest, I like Anthropic. You know, I know some people there. I've met some of the co-founders. I've never met Dario, but he seems like a really thoughtful and interesting guy. If someone's gonna build the machine God, I can imagine worse people to do it than Anthropic. But look, I didn't vote for-

Benjamin Wittes: Like Grok.

Alan Rozenshtein: Like Grok, right. But look, I didn't vote for any of these people, right? I'm not comfortable with them setting broad societal policy. But I'm also not comfortable with, like, the current people in the Pentagon and the White House doing it either, 'cause they're not great. No.

We do have an institution that is supposed to do this. It is called the United States Congress. And, you know, just saying that phrase should fill everyone with a certain degree of existential dread and malaise. But at the end of the day, lemme put it this way: this is not how this is gonna be decided. Who knows how this is gonna be decided.

But under any rational system, Congress would be the one to decide it, right? It would be Congress that would say: hey, DOD, here's how you can and cannot use autonomous weapons. Here's how you can and cannot do surveillance. Here is the ongoing oversight that, you know, our armed services committees and intelligence committees are going to do.

And I would imagine that had Congress done that, or had Congress even shown any sign that it will do this in some rational way over the next few years, someone like Dario Amodei would feel a lot better about signing onto an all-lawful-usage-

Benjamin Wittes: Right

Alan Rozenshtein: -policy. Because look, you know, as much as Anthropic is often criticized for a, you know, kind of holier-than-thou, we-know-what's-best attitude-

And maybe there's some of that. But I actually don't think, at the end of the day, they want to be the ones doing this. Like, I think they wanna go and, like, cure cancer or something. And they would much rather have the democratic process come to some reasonable resolution that, even if they don't agree with it in every particular, they can live with. But in the absence of that, this is how we do it, and it's bad.

Benjamin Wittes: Alright. Let's wrap up with the question of what's gonna happen. So we assume at some point there will be a document that translates the Trump Truth Social post and the Hegseth statement into some kind of policy or some kind of action. I mean, have they actually done anything yet, or have they merely said they're doing something?

Alan Rozenshtein: So it's unclear. There's the Trump Truth Social post that orders U.S. government agencies to not do business with Anthropic. There's no secondary boycott there, and some agencies are doing that. I think Treasury and, like, the housing and mortgage agencies are doing that.

But they're just doing that kind of as a Truth Social executive order thing. Maybe that'll be challengeable, but that's kind of separate. Then there's the Hegseth post, which is a little confusing, because it ordered some undersecretary of defense to do the designation, but then it said, effective immediately, no one can do business with Anthropic, which certainly sounds to me as if Hegseth was purporting to do a designation.

Now, the law requires Hegseth to make some written findings and to transmit those findings to Congress in classified or unclassified form. It does not, oddly enough, seem to require the executive to tell the company. I think that's gotta be a drafting oversight, because how's the company supposed to know?

My understanding is that Anthropic has not yet received a piece of paper, but they're certainly acting as if this is real, and they have said explicitly that they will challenge this in court. So let's assume the procedural stuff gets resolved one way or the other. What I would expect to happen is that Anthropic will go to court, and hopefully on the East Coast, not the West Coast, but, you know, venue questions aside, you should do it in the place where you're supposed to do it.

And they'll say, this is really illegal. Here's a Lawfare post, here are our briefs, right. And I would expect that if they get a remotely-within-parameters judge, and I should say I'm not a litigator, so I dunno the exact details of how this works, they'll get a temporary restraining order.

They'll get an injunction, a something or other, and this will all kind of pause, at which point there will begin a lot of litigation. I expect that Anthropic will win that litigation, for all the reasons we talked about. And I think it's so obvious that they will win that I almost wonder whether suing the government would be de-escalatory for Anthropic, because of what I suspect may very well happen.

And here I would just note that I think an hour ago the Wall Street Journal reported that the White House, or the government, has decided not to appeal the injunctions against the law firm punishments that it tried to impose a long time ago.

Benjamin Wittes: Withdrawing the appeals.

Alan Rozenshtein: Withdrawing the appeals. I suspect some of that could happen here, where the White House designates them a supply chain risk so it can beat its chest, this all gets stopped in court, and then everyone loses interest in a few months. And maybe people will even kiss and make up, because, as we can see from Anthropic's services being used in the war against Iran, these are useful services.

From Anthropic’s position, I mean, that's obviously a better outcome than losing the supply chain risk fight. But it's still very dangerous, because, you know, Anthropic, although it's absolutely a leader in AI, is, unlike a company like Google or Meta, let's say, which has kind of an infinite cash machine through advertisements that it can use to shovel money into the money pit that is AI, just an AI company, right?

Benjamin Wittes: And it's a very young one.

Alan Rozenshtein: And it's a very young one, right. And so it's not like its relationships with its clients are super, super deep. And even if it loses a small number of clients who are just scared away by all this noise coming out of the White House, you know, that could meaningfully set back its AI business.

Right. And I'll just note something that Dario Amodei said on a podcast, I think with Kash Patel, a few weeks ago or last month. You know, he said: I'm very bullish on AI, we're gonna get general intelligence, we're gonna do all the things. But also, if something gets screwed up for 12 months, we could go bankrupt.

Right. Which is to say, their margins here are thin, which is weird to think about for a company that has hundreds of billions of dollars in revenue, but every cent of that gets plowed back into compute and training. So from Anthropic's position, what I think is really scary is not that they lose the lawsuit, but that this does enough damage to their enterprise relationships that it, you know, wounds them permanently.

I mean, I'm optimistic. I think they'll get a lot of goodwill out of this as well. The prediction markets seem to think that Anthropic is not going to be severely hurt by this, and, you know, take that for what it's worth. But I think the risk for them is much more business risk than it is fundamentally legal risk.

But I'm just a lawyer so I focus on the legal risk.

Benjamin Wittes: We are gonna leave it there. Alan Rozenshtein, thank you for joining us today.

Alan Rozenshtein: My pleasure.

Benjamin Wittes: The Lawfare Podcast is produced by the Lawfare Institute. You can get ad-free versions of this and other Lawfare podcasts by becoming a material supporter of Lawfare at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. The podcast is edited by Jen Patia, and our theme music is from Alibi Music. As always, thanks for listening.


Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
