Cybersecurity & Tech | Executive Branch | Surveillance & Privacy

The Situation: Thinking About Anthropic’s Red Lines

Benjamin Wittes
Tuesday, March 10, 2026, 1:57 PM

What does the AI company mean by “mass surveillance” and “lethal autonomous warfare”?

Anthropic's Claude AI interface. (https://tinyurl.com/3rec7vfe; CC BY-NC 4.0, https://creativecommons.org/licenses/by-nc/4.0/)

The Situation on Friday compared the president’s Iran policy to BASE jumping.

Yesterday, the AI frontier lab Anthropic sued the Department of Defense and other federal agencies over the Trump administration’s designation of its products as a “supply chain risk.”

I have some friendly advice for the company: Your red lines need some refinement.

Let me be clear: I support the company's lawsuit.

The government’s action against Anthropic is a gross abuse—no less so than the actions taken against law firms and universities. It is simple and overt retaliation for the company’s placing use restrictions on its Claude product.

As Anthropic describes those restrictions in its complaint: “Anthropic’s Usage Policy has always conveyed its view that Claude should not be used for two specific applications: (1) lethal autonomous warfare and (2) surveillance of Americans en masse.”

The crux of the dispute is that the Pentagon demanded that Claude be available for all lawful uses. And, as Anthropic summarizes, “[w]hen Anthropic held fast to its judgment that Claude cannot safely or reliably be used for autonomous lethal warfare and mass surveillance of Americans, the President directed every federal agency to ‘IMMEDIATELY CEASE all use of Anthropic’s technology’—even though the Department of War…had previously agreed to those same conditions.”

In my view, it should be a simple case. A company is entitled to draw lines about the uses for which it does and does not wish to sell its product. If the government doesn't like those lines, it can use a different product. It can't move to destroy the company because it doesn't like those lines.

I also, for the record, like the fact that Anthropic is trying to draw lines—which is more than one can say for any of its competitors. The AI world is full of impossibly difficult questions, difficult on legal, moral, ethical, practical, and philosophical grounds, and the AI industry is filled with companies that are all too willing to sidestep those questions entirely in their pursuit of innovation and progress at any cost. It's not a bad thing that Anthropic is imposing use restrictions on its products.

That said, I’m not honestly sure that the company is drawing the right lines here, or even that it’s drawing lines whose meaning can be easily discerned. 

The term “lethal autonomous warfare” seems clear enough on the surface. When Anthropic objects to Claude being “used for” this work, it seems to be objecting to Claude’s use in killer robots. 

Scratch beneath the surface even a little, however, and things get more complicated.

The trouble begins with the fact that, as the UN Office for Disarmament Affairs puts it bluntly: "At present, no commonly agreed definition of Lethal Autonomous Weapon Systems (LAWS) exists." What Anthropic means by "lethal autonomous warfare" is defined with only modest precision—at least in the company's public statements. In its complaint yesterday, the most it says is this: "By its terms, the Policy has always prohibited the use of Anthropic's services for lethal autonomous warfare without human oversight."

The company’s CEO, Dario Amodei, has elaborated a little bit in other public comments:

Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today.

In this statement, Amodei seems to be saying that the problem is limited to Claude powering fully autonomous weapons systems based on current technology, and he defines full autonomy as systems that “take humans out of the loop entirely and automate selecting and engaging targets.” 

But Amodei here is also acknowledging that such fully autonomous systems “may prove critical for our national defense” in the future and that research on such systems is therefore desirable. In other words, his objection is based on current technology and its limitations only. It is not a point of principle. And he actively wants to work on R&D towards full autonomy.

But that raises a different issue. The military does not currently use fully autonomous weapons without humans in the loop, nor is it clear that such weapons would pass legal review under the laws of armed conflict—for some of the same reasons Amodei articulates. So if Anthropic's position is that research is okay and the problem is limited to actually powering deployed systems based on current technology, and the military doesn't have any such systems it wants to power with Claude, is the dispute here based on a null set of real-world cases?

At least for the moment, I suspect the answer to this question is yes—that this issue is largely or entirely hypothetical. That said, Anthropic would benefit from a clearer public definition of the autonomy it will and won't support, particularly with respect to defensive weapons. Specifically, Amodei defends the partial autonomy of weapons being used now in Ukraine. Would Anthropic really object to deploying Claude in more fully autonomous drone and anti-missile defenses? Such systems are not "lethal" in the sense that they target machines, not humans, but the falling debris from intercepted weapons kills people regularly.

Anthropic's objection to "mass surveillance" of Americans is a bigger problem. Unlike autonomous weapons, this is not a hypothetical issue. There are presumably use cases right now in which the Defense Department wants to acquire or process data in ways that Anthropic's Usage Policy forbids.

But what is “mass surveillance”? It’s not a term of art in American surveillance law. In fact, it doesn’t map onto American law at all, even though civil liberties activists use it constantly.

Some mass surveillance is perfectly legal—like, for example, installing cameras outside the Pentagon and filming everyone who walks up to the door and matching their faces against images of known terrorists. Using satellites or airplanes to film automobile traffic is also mass surveillance. But I know of no law that forbids it. 

By contrast, some forms of mass surveillance would be wildly illegal—for example, the targeting in bulk of American communications without warrants directed at the individual subjects.

So my first question is what Anthropic even means when it says it doesn’t want Claude engaged in mass surveillance of Americans. Does it mean it doesn’t want Claude engaged in any non-individualized surveillance—including, say, surveillance of military bases or other sensitive sites? Surely that is overbroad. Does it mean it doesn’t want Claude engaged in bulk acquisition of communications data involving Americans? 

Another key question: Is the objection here limited to collection—in other words, the actual acquisition of data obtained about Americans based on any kind of non-individualized authority? Or is Anthropic also objecting to using Claude to analyze data that may have been obtained by means of non-individualized surveillance? If the latter, be careful. What about large datasets of, say, COVID vaccination patterns or other disease surveillance? Or, particularly pertinent to military applications, a large dataset of where Americans live in an area one is thinking about bombing?

One possibility here would be to define “mass surveillance” with reference to some existing category of surveillance law. 

The most obvious approach would be to bar Claude from participating in unlawful surveillance (leaving aside the question of whether Claude should be allowed to analyze the fruits of poisonous trees). But that restriction would be redundant given the Pentagon's position. The Department of Defense, after all, is demanding access to Claude for all lawful uses. So Anthropic is presumably aiming to restrict some lawful uses that constitute mass surveillance. Assuming the folks who wrote the Usage Policy know their way around American surveillance law, there must thus be a category of lawful mass surveillance of Americans that the policy restricts.

A second possibility has a certain intuitive appeal: Restrict Claude’s surveillance of communications to statutorily authorized collection. This would allow Claude’s participation in, for example, the FISA 702 program but disallow its participation in surveillance under Executive Order 12333.

This approach has the benefit of a certain logical coherence: FISA 702 is targeted surveillance, though vast in scale. It’s not technically “mass surveillance,” and it has been specifically and repeatedly authorized by Congress. It also specifically disallows collection targeted at Americans or targeted at people believed to be in the United States. So by tying “mass surveillance” to statutory law, Anthropic would be effectively taking the position that Claude can participate in surveillance programs that are: (1) specifically contemplated and approved by Congress; (2) targeted at individual selectors (no matter how many of them); and (3) targeted at individual selectors who are both overseas and not Americans. 

It’s clever, but it doesn’t quite work. Collection under 12333 against Americans is generally restricted too, after all—albeit under different rules. And FISA, in any event, is predominantly a statute limited to communications. But there are all kinds of other mass surveillance. What about satellite imagery? What about bulk acquisition of purchase records and banking transactions? What about ubiquitous cameras? What about the bulk collection of public records?

At some level, all collection and processing of very large datasets about humans involves mass surveillance. So the principle of barring Claude from mass surveillance per se is necessarily an overinclusive one. Conversely, limiting mass surveillance to mass surveillance of communications, as the above approach would do, is also an underinclusive principle. A giant DNA database of Americans overseas or a collection of medical records, for example, would be horribly intrusive but wouldn’t violate 702.

Anthropic isn’t asking for my advice, but I would suggest that the concept of mass surveillance from which Claude is barred requires refinement. Specifically, I would clarify the concept in the following directions.

First, surveillance for this purpose is the acquisition of material—for example, in offensive cyber operations—not the processing or analysis of that material. Claude doesn't have an exclusionary rule wherein it is prohibited from thinking about material acquired by means it couldn't participate in. In other words, Anthropic's goal here should be to keep Claude away from spying on Americans, not to regulate the government's internal handling of data it has lawfully acquired.

Second, surveillance for this purpose should be understood as covert surveillance only. I don’t think Anthropic wants or means to wall Claude off from mass surveillance of disease spread or COVID cases. And I don’t think it makes sense to have a policy that by its terms would prevent the study of macro-economic data. A simple rubric here is that any collection the government acknowledges doing and does in the open for purposes other than intelligence, law enforcement, or defense is presumptively outside of the “mass surveillance” walled off by the policy.

If this sounds like weakening Anthropic’s red line on mass surveillance, let me add a third point that would strengthen it: The bar against mass surveillance should not be limited to communications surveillance. If the Defense Department is using satellites in a fashion that constitutes covert mass spying, Anthropic might perfectly reasonably apply the policy there too. In other words, to make sense, the policy should apply anywhere the Defense Department is covertly acquiring or stealing large datasets on Americans, whether it is doing so legally or not.

The policy, in short, should cover the use of Claude for covert intelligence gathering conducted without a warrant or other legal process, in circumstances where the purpose is to collect bulk data on some large number of Americans.

Now, I know what you’re thinking: You’re thinking, wait a second, this won’t stop ICE from using Claude to locate and round up migrants. That’s probably right, but no contract with the military is going to prevent that. The solution to that problem is for Anthropic not to do business with ICE and not to make Claude available for immigration enforcement at all. According to press reports, Anthropic does not have contracts with ICE.

Anthropic's position here is a righteous one. It is not, however, a particularly clear one, and clarifying it might mean narrowing it in certain ways. But when this matter goes to court, Anthropic is going to have to explain to the courts what its position actually means. And it's going to need to be able to do so with much greater specificity than it has managed in public so far.

The Situation continues tomorrow.


Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
