
The Situation: Stand With Anthropic

Alan Z. Rozenshtein, Benjamin Wittes
Monday, March 2, 2026, 5:54 PM

You might not like the creator of Claude, but its fight with Pete Hegseth is important for the rule of law. 

Anthropic's Claude AI interface. (https://tinyurl.com/3rec7vfe; CC BY-NC 4.0, https://creativecommons.org/licenses/by-nc/4.0/)

The Situation on Thursday catalogued what certain federal judges are saying about the conduct of the Trump administration and the lawyers who represent it.

Today, let’s talk about the Pentagon and Anthropic.

AI companies worth gazillions of dollars aren’t exactly the victims of administration lawlessness most likely to garner public sympathy. The AI that is coming for your job on its way to posing a catastrophic risk of ending the human race on Earth—and the tech bros and gals who are making billions of dollars by building such products—are figures of public fear, and sometimes loathing and resentment, more than admiration and intuitive identification.

So there may be a tendency to regard the ongoing attempts by Secretary of Defense Pete Hegseth to bully Anthropic into removing all restrictions on its AI product Claude for Department of Defense use as a battle between powerful malign forces in which the anti-authoritarian member of the general public has no dog.

And to be sure, Anthropic is no powerless victim like those who have been scooped up on the streets by masked ICE agents and held in detention or deported summarily under the Alien Enemies Act. The battle between the AI company and the Pentagon is, indeed, a clash of the titans.

That said, the battle between Hegseth and Anthropic is not one from which the public should turn away in disgusted neutrality. The attack on Anthropic is no different from the attack on Harvard University, the attack on any number of law firms, or the attack on National Public Radio. It is, to put the matter simply, a retaliation against a private actor for asserting its rights—in this case rights under a contract the federal government signed that protects matters of conscience important to executives of the company—aimed at destroying an entity that has displeased the administration. It is designed both to target and punish Anthropic for not submitting and, by doing so, to intimidate the other frontier AI labs into a more accommodating posture.

Whether this effort succeeds on either front remains to be seen. What is clear already is that, like the attacks on other universities, law firms, and others, the attack on Anthropic is poisonous conduct in a society that purports to be governed by anything like a rule of law.

The law that governs this situation is not all that complicated. One of us summarized it in detail earlier today. Without repeating that analysis here, let’s just say that the government can’t simply label an American company a risk to Defense Department—or wider government—supply chains for a product that same government actively wants to deploy because the company that makes it won’t give the government its preferred terms of deployment. And the government really can’t impose a secondary boycott on that product, forbidding business with any company that itself does business with the blacklisted entity. The statutes simply don’t give the government the power to engage in these sorts of extortionate activities, and when Anthropic litigates the matter, expect it to prevail.

What the government can do, merely by dint of being the government, is scare the crap out of investors and enterprise clients of a company like Anthropic. Merely having the president and the secretary of defense—of course fashioning himself the “secretary of war”—announce that they are blacklisting Anthropic and anyone who does business with it creates doubt about the company and its viability. This is, after all, still a young company. And it’s a company going up against major players in a ferociously competitive environment: Google, OpenAI, Meta, and Elon Musk’s xAI. An actor as powerful as the United States federal government doesn’t have to have much of a legal leg to stand on to raise doubts about the company among investors and the enterprise clients who make up the core of Anthropic’s business.

And it doesn’t need to have a legal leg to stand on to send a loud message to Anthropic’s competitors. 

That message, at least, is clearly being heard.

In response to the dispute with Anthropic, xAI was quick to agree that its product, Grok, would have no restrictions—as the government has demanded.

OpenAI's response was more equivocal. The day after Anthropic's designation, OpenAI announced its own classified deployment deal with the Pentagon and publicly asserted three red lines that mirror Anthropic's: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions.

But the actual contract language OpenAI published tells a different story. The restrictions on autonomous weapons apply only where "law, regulation, or Department policy requires human control"—meaning that the operative safeguard is a Defense Department directive that Hegseth can rewrite at will. The surveillance restriction prohibits only "unconstrained monitoring" of "private information"—leaving plenty of room for slightly constrained surveillance of private information, or unconstrained surveillance of information the government deems public.

In other words, OpenAI's “red lines” track whatever the government decides the law and its own policies already require. That is not a constraint on the Pentagon; it is a restatement of the status quo with better PR. As LASST, a legal advocacy group focused on AI, put it, the contract language “does not purport to prohibit the government from any uses beyond what is already prohibited by law.”

The sort of extortionate relationship between an administration and private institutions the administration is engaged in here is toxic stuff in a democratic society. It’s toxic if the institution is a university. It’s toxic if the institution is a law firm. And it’s toxic if the institution is a business with clients and investors.

And just as it was important that the law firms not cave but instead challenge the administration’s executive orders, just as it was important that Harvard University take the administration to court, and just as it was important that NPR do the same, it is important that Anthropic not capitulate.

And there is reason for optimism: The administration's legal position in these fights is often far weaker than its bluster suggests. Indeed, only today, the Wall Street Journal reported that the Justice Department plans to withdraw its appeals defending the punitive executive orders against law firms—a reminder that this administration frequently backs down when its targets actually fight back in court. Anthropic's legal position is no less powerful than that of the law firms. 

We are not AI engineers, much less are we businesspeople who have ever been responsible for managing either the client side or the investor side of a business like Anthropic. That said, it is fair to observe that the law firms that fought the administration seem to be doing okay. None has been obviously denuded of its client base. And the courts have been effective in protecting the firms from extortionate predation by the administration.

There are, of course, many more law firms and universities than there are frontier AI labs. So there are more opportunities for some firms and schools to capitulate and still leave others to fight.

In the case of AI labs, there are only a small number of total players, and there is only one—Anthropic—that has centered its identity on standing for an ethical approach to inventing god-machines that might just mean the end of humanity. If Anthropic doesn’t fight, in other words, it’s completely unclear who will.

It thus seems of no small importance that the administration not get away with a frankly lawless assertion of power to force the company into designing the future the way Pete Hegseth prefers—important for the future of AI, important for the future of the relationship between the administration and big tech, and important for the notion that the law meaningfully constrains presidential actions.

You might not like Anthropic, but as Donald Rumsfeld might have put it, you go to war with the plaintiffs you have, not the plaintiffs you wish you had. 

The Situation continues tomorrow.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland. He also speaks and consults on technology policy matters.
Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
