The Right Remedy in the Anthropic Case
The government can stop buying from Anthropic anytime it wants. It just can't bypass the procurement system Congress built to do it.
On Tuesday, March 24, Judge Rita Lin of the U.S. District Court for the Northern District of California will hear Anthropic's request for a preliminary injunction against the Department of Defense, the White House, and 16 other federal agencies. After Anthropic refused to remove usage restrictions on its Claude AI model—restrictions on lethal autonomous warfare without human oversight and mass surveillance of Americans—the Trump administration designated the company a "supply-chain risk to national security" and ordered every federal agency to stop using its products. Having read the government's opposition brief—and having filed an amicus brief myself arguing that the supply chain risk statutes were never designed for this kind of dispute—I think the case comes down to a simple question: Can the executive branch use extraordinary national security authorities to bypass the ordinary procurement system Congress built for run-of-the-mill contract disputes?
The answer is clearly no, but the remedy should be surgical. Judge Lin should set aside the supply chain risk designation, enjoin agencies from implementing the government-wide ban, and—critically—prohibit the Defense Department from pressuring defense contractors to sever commercial relationships with Anthropic. Defense Secretary Pete Hegseth originally declared that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," and Anthropic's complaint alleges that a Pentagon official threatened to "require that all our vendors and contractors certify that they don't use any Anthropic models": a secondary boycott that goes well beyond anything Section 3252 authorizes.
But Judge Lin should leave the government free to stop buying from Anthropic through ordinary procurement channels: terminating contracts for convenience, declining renewals, directing prime contractors on specific contracts not to use Anthropic as a subcontractor, and, in the extreme case, seeking to debar and exclude Anthropic as a government contractor under the proper statutory authorities. The government doesn't need extraordinary powers to break up with a vendor. It just needs to follow the law.
The Supply Chain Statute Doesn't Fit
The government's core argument is that Anthropic qualifies as a "supply chain risk" under 10 U.S.C. § 3252 (parallel litigation over the government’s designation under the Federal Acquisition Supply Chain Security Act is ongoing in the D.C. Circuit). That statute was enacted in 2011 after a foreign intelligence service compromised Department of Defense classified networks through malware on a USB drive. Congress gave the secretary authority to exclude sources that might "sabotage, maliciously introduce unwanted function, or otherwise subvert" military systems. Those are espionage verbs describing covert, hostile acts by adversaries seeking to compromise U.S. military infrastructure from within.
The government argues that Anthropic fits this description because, as the developer of Claude, it retains "privileged access" to the model and could theoretically alter its behavior during military operations. A declaration from Under Secretary of War for Research and Engineering Emil Michael characterizes Anthropic's refusal to accept an "all lawful use" contractual term—combined with its public advocacy on AI safety—as evidence of an "adversarial posture" creating risks of "model poisoning" and "denial of service."
The technical point about AI vendor dependency is real. AI providers do retain the ability to update models, and that creates a form of ongoing access different from buying static hardware. But the technical characteristics the government describes—software maintained by an outside vendor that retains continuous access and can push updates—are not unique to AI. They describe most of the vendor-hosted software the government uses, from cloud storage to collaboration platforms. If those features suffice to trigger Section 3252, the secretary could designate virtually any software vendor a supply chain risk over an ordinary contract dispute. The statute would become a general-purpose tool for managing vendor relationships rather than a targeted authority for addressing foreign espionage. And the government's brief argues that "adversary" in the statute encompasses any "opponent in a contest, conflict, or dispute"—a reading under which any vendor that contests contract terms becomes eligible for designation.
Blacklisting by Tweet
As procurement law expert Jessica Tillipman points out, "[W]e have tools in our procurement system to exclude contractors, and blacklisting by tweet isn't one of them." The Federal Acquisition Regulation's suspension and debarment framework—FAR 9.4—covers exactly this situation. As Tillipman explains:
If the government believes a contractor is not responsible (i.e., it can't be trusted to perform reliably or poses a risk to government interests), there is a detailed, decades-old framework: suspension and debarment. It includes notice to the contractor, an opportunity to respond, a debarring official who is insulated from political pressure, findings based on grounds specified in the regulation, and judicial review. The government has long understood that excluding a contractor can have severe consequences. That's why we call it the corporate death penalty.
The government bypassed all of that. Tillipman traces the sequence: "The President tweeted, then the Secretary tweeted, then the Department reverse-engineered an administrative record to backfill the justification." The government concedes this ordering, arguing that the secretary's Feb. 27 post was not final agency action but just "the beginning" of the process. As Tillipman observes: "Let's take them at their word that the post was just the beginning. This means that the Secretary publicly announced the outcome, directed his subordinates to produce the justification, and the justification confirms the predetermined conclusion. Just to be clear, this is not how our procurement system was designed to work."
The presidential directive is worse still. At least the Section 3252 designation has a statutory hook, however stretched. The directive—ordering every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology"—has no statutory basis at all. The government defends it as routine Article II supervisory authority. But a blanket order directing all federal agencies to terminate a named vendor's products displaces the procurement frameworks Congress vested in each agency. The Department of Health and Human Services, NASA, the Social Security Administration, and the National Endowment for the Arts all make their own procurement decisions based on their own needs. None conducted an independent analysis of whether Anthropic's technology serves their needs or poses a risk to their operations. They're complying with a presidential order, not exercising the discretion Congress gave them.
The Remedy
Judge Lin should enjoin the Secretarial Determination and prohibit agencies from implementing the Presidential Directive absent independent, agency-specific justification. But she should not order the government to continue using Anthropic's products. She need only prevent the government from using an anti-espionage statute to brand a U.S. company a national security risk over a contract dispute—and from using a social media post to override the procurement judgments of federal agencies.
This approach also lets the court avoid the First Amendment question, which is genuinely hard and, in any event, likely cuts against Anthropic. The record of viewpoint-based animus is striking—the president called Anthropic a "RADICAL LEFT, WOKE COMPANY" and the secretary attacked its "defective altruism" within hours of Anthropic's CEO publicly refusing to capitulate. But even if a plaintiff shows that protected speech was a motivating factor in the government's action, the government can still prevail by demonstrating that it would have reached the same decision regardless. The government argues that Anthropic expressed its AI safety views for years while the government happily contracted with it, and that the adverse action followed only after Anthropic refused the "all lawful use" contractual term.
There is also a practical problem for Anthropic's First Amendment argument: If the court credits the government's national security justification for the supply chain designation on the merits, it is hard to imagine the same court then enjoining the designation because the officials who carried it out were intemperate on social media. Rightly or wrongly, courts are unlikely to override a national security finding on the basis of mean tweets. The statutory and structural arguments are sufficient. The government can stop buying from Anthropic whenever it wants—it just can't bypass the procurement system to do it, and it can't use an anti-espionage statute to coerce private companies into joining the boycott. The court can say so without reaching the constitutional question.
* * *
There is no question about the government's right to stop doing business with Anthropic. If the Pentagon needs AI models without usage restrictions, it can transition to vendors willing to provide them. The government's purchasing discretion is broad, and courts should respect it. But that discretion operates through a system Congress built over decades, with competitive bidding requirements, individualized judgments by contracting officers, and—if the government truly believes a vendor cannot be trusted—a debarment process with real procedural protections. When the government bypasses it through an anti-espionage statute repurposed for a contract dispute and a social media post that overrides dozens of agencies' procurement authority and procedures, the court should push back—not by forcing the government to buy a particular product, but by making it follow the law.
