Congress—Not the Pentagon or Anthropic—Should Set Military AI Rules
The Pentagon's threat to designate Anthropic a "supply chain risk" over its AI use restrictions is extreme—but the deeper problem is that the rules for military AI are being set through ad hoc haggling instead of by Congress.
The Department of Defense is threatening to designate Anthropic, the maker of Claude, a "supply chain risk," which would not only bar Anthropic from government contracts but also force Pentagon contractors to cut ties with the company. That's a crippling penalty normally reserved for foreign adversaries such as the Chinese telecom company Huawei and the Russian cybersecurity company Kaspersky. Anthropic's offense is insisting that any military use of its artificial intelligence (AI) adhere to two red lines: no mass surveillance of Americans and no fully autonomous weapons. In response, a senior Pentagon official told Axios that the Defense Department will "make sure they pay a price."
Both sides have real claims here—though the way the government is pressing its position is, to put it mildly, disproportionate. But the deeper problem isn't who's right in this negotiation; it's that the negotiation is happening at all. The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO, with no democratic input and no durable constraints. Congress should be setting these rules. And it should do so in a hurry.
Both Sides Have a Point
In a system built on private property and the rule of law, companies get to choose whom they do business with and on what terms. Anthropic has no obligation to sell its products to the military without conditions. And the company's red lines are hardly frivolous—they reflect concerns that CEO Dario Amodei has articulated publicly and consistently, most recently in a January essay arguing that democracies should pursue national defense applications of AI "except those which would make us more like our autocratic adversaries."
But democratic governance requires that the military—the public side of the military-industrial complex—be in charge of how it uses its tools. We wouldn't want Lockheed Martin selling the military an F-35 and then telling the Pentagon which missions it could fly. What I've argued in the context of government surveillance also applies to military affairs: Decisions about government power should be made through democratic processes, not by private companies unilaterally constraining the government through product design.
In principle, these two points resolve fairly cleanly. Companies shouldn't be forced to participate in work they find objectionable—but that means they might lose the government's business. Anthropic can maintain its red lines; the Pentagon can find a different vendor. Neither side gets to conscript the other.
The Easy Case and the Hard Case
In practice, things aren't always so clean. Given the Trump administration’s behavior, it's easy to side with Anthropic. Its red lines are certainly a plausible starting point for thinking about responsible AI use. And the Pentagon isn't just asking for flexibility; it's demanding the right to use AI for "all lawful purposes" without limitation. That might sound reasonable until you consider that existing surveillance law was written long before AI could monitor millions of people simultaneously. "Lawful" covers a lot more territory than it used to, and I don't trust this administration to stay within even those capacious boundaries.
The Pentagon's supply-chain-risk threat makes the situation worse. If the Pentagon simply wants a different contractor, fine. But a designation would amount to a secondary boycott—banning government contractors from using Anthropic as a subcontractor and potentially requiring them to drop Anthropic's services entirely. That's a lot of pressure: eight of the ten biggest U.S. companies reportedly use Anthropic's products.
It's also far from clear that a designation would even be legal. The relevant statutes—10 U.S.C. § 3252 and the Federal Acquisition Supply Chain Security Act (FASCSA)—were designed for foreign adversaries who might undermine defense technology, not domestic companies that maintain contractual use restrictions. The statutes target conduct such as "sabotage," "malicious introduction of unwanted function," and "subversion"—hostile acts designed to compromise system integrity. A company that openly restricts certain uses of its product through a license agreement is doing something categorically different. The only time a FASCSA order has ever been issued was against Acronis AG, a Swiss cybersecurity firm with reported Russian ties. Anthropic is not Acronis.
And the designation is strategically counterproductive. Anthropic has been more willing to work with the military than most AI companies—it was the first frontier lab to deploy on classified networks, and Claude was reportedly used in the military operation to capture Venezuelan President Nicolas Maduro. Punishing the one company that showed up sends exactly the wrong signal and threatens to cripple one of America's national AI champions. As Dean Ball—who served in the Trump White House and was the lead drafter of the administration's AI action plan—put it, the supply chain risk designation is unnecessary when "cheaper options are on the table."
But siding with Anthropic over this administration is the easy case. The hard case requires thinking beyond President Trump and Secretary of Defense Pete Hegseth. Imagine a different, more normal administration in 2029, Democratic or Republican. Shouldn't a future president be able to walk away from, say, xAI's technology if that company refuses to support asylum processing or civil rights enforcement? The case for that flexibility is concrete: As J.B. Branch has described on Lawfare, the current administration has deployed xAI’s Grok across classified Pentagon networks and at the Department of Energy’s Lawrence Livermore National Laboratory, despite Grok’s “documented history of biased, misleading, antisemitic, and harmful outputs.” A future administration should absolutely be able to end that.
And even Anthropic's own red lines get more complicated once you remove Trump as a variable. How much automated surveillance is actually appropriate? AI-assisted analysis of publicly available information might be exactly what intelligence agencies need to identify genuine threats. Could autonomous weapons reduce both civilian and military casualties in some scenarios? Reasonable people disagree, and in our system Congress is supposed to make those calls.
The Case for Congress
The rules for military AI shouldn't depend on the ethical commitments of whichever CEO happens to be in charge, or the political preferences of whichever defense secretary happens to be in office. Congress should be deciding. But while Congress has imposed some limited reporting requirements and governance structures, it hasn't set substantive rules about which AI applications the military can and can't pursue—even as AI stands to dramatically expand presidential authority, enabling mass enforcement and a national security apparatus that resists oversight.
There's also a practical reason Congress needs to act: Anthropic's stance can't actually constrain the government. If Anthropic holds firm, the government will simply get unconstrained AI from someone else. Only legislation creates constraints that survive a change of AI supplier or of the occupant of the White House.
Congress already regulates military acquisition extensively—through standing procurement law and annual defense legislation—and imposes conditions on weapons systems, intelligence collection, and contractor behavior. On the buyer side, Congress could specify which AI systems the military can purchase, and under what conditions. On the seller side, it could establish what companies are required—or forbidden—to build into AI systems sold to the government. And it could impose additional transparency and reporting requirements that give the public visibility into how military AI is actually being used.
The Anthropic-Pentagon dispute will resolve one way or another—either Anthropic loosens its terms, the government finds substitutes, or some awkward compromise emerges. But without congressional action, the underlying problem will remain: The rules governing military AI will be set through ad hoc negotiations between executive officials and individual companies, with no democratic input, no durable constraints, and no framework that survives the next change of administration. That's not how a democracy should make decisions.
