
Blame the Pentagon, Not AI, for Preventable Targeting Mistakes

Rebecca Crootof
Thursday, April 23, 2026, 1:00 PM
Yet another preventable tragedy—with an AI twist.
The Pentagon (Photo by Master Sgt. Ken Hammond, bit.ly/3Gq1NKB; Public Domain)

In October 2015, a U.S. strike on a medical facility in Afghanistan caused the deaths of at least 42 people. The United States characterized it as a “tragic incident.”

In August 2021, a U.S. strike on misidentified individuals in Afghanistan killed 10 civilians, including seven children. The United States characterized it as a “tragic mistake.”

And in March 2026, what appears to be a U.S. strike on an Iranian elementary school resulted in the deaths of at least 175 people, mainly schoolgirls. Once the ongoing investigation is completed, this, too, will likely be described as some variation of a “tragic accident.”

Unlike the prior mistaken strikes, however, this last one came days after a public spat between Anthropic and the Department of Defense, sparking speculation that Claude Gov—Anthropic’s defense-focused generative artificial intelligence (AI)—might bear some of the blame.

Children and other civilians were killed, needlessly, in a mistaken attack. But the problem is not whether AI was incorporated in the targeting kill chain. Rather, it’s that the Defense Department employed—and is likely still employing—a deeply flawed decision-making process for target selection.

The U.S. military already has the tools it needs to deploy AI responsibly and reduce civilian harm. It just needs an institutional commitment to doing so.

Absent that, there will be more preventable tragedies.

Increased Risks Associated With Military AI

Incorporating AI into targeting decisions introduces new risks. AI can supercharge warfighting by increasing the number of targets an operator can engage. It can also supercharge accidents in war, both by magnifying old sources of error and by creating new ones. These issues make the thoughtful design of lethal human/machine decision-making processes all the more critical.

AI depends on data, so feeding insufficient or inappropriate information into a target-selection system can result in unintended engagements. This is likely what led to the elementary school strike. Bad data leads to bad results, and AI enables bad results at unprecedented speed and scale, with a dash of tech-powered overconfidence.

The speed and scale of AI-supported decision-making can also amplify the impacts of human errors. Confirmation bias—our inclination to interpret facts in accordance with our goals and beliefs—caused the operators in both Afghan strikes to interpret ambiguous information as evidence that they were attacking legitimate targets; add in AI, as Israel reportedly did to identify potential targets in Gaza, and you have confirmation bias-based misidentifications on steroids.

Ensuring a human is in the loop is not a solution. Human/machine systems have their own unique problems, including automation bias and interface issues. Automation bias is the human tendency to defer to machine conclusions, even when the human-in-the-loop is supposed to act as a corrective safeguard. This was a primary cause of the 1988 USS Vincennes incident, a U.S. strike in the Strait of Hormuz that resulted in the deaths of 290 civilians. The ship’s Aegis Combat System mislabeled an Iranian passenger plane with an icon of an Iranian fighter jet. Despite contradictory hard data—such as the fact that a fighter jet is half the size of a passenger plane—the crew shot down the plane. (Per the U.S. investigative report, this was “a tragic and regrettable accident.”)

Poor design choices can also introduce translation errors between humans and machines. A confusing navigation interface led to the 2017 USS John S. McCain accident, where 10 sailors died in the U.S. Navy’s worst mishap at sea in 40 years.

Generative AI introduces additional sources of vulnerability and error: Large language models hallucinate answers, are easily corrupted, and, when used in war games, are biased toward actions that escalate conflicts. They may also foster moral offloading, where an operator’s qualms are calmed by delegating hard ethical decisions to a machine.

And all AI systems can be hacked, gamed, and spoofed. One of many examples of this was a test by the Defense Advanced Research Projects Agency (DARPA), where eight Marines tricked a trained Marine-recognition system into not detecting them by ... not acting like Marines. Some somersaulted, some hid under a cardboard box, and one acted like a fir tree. Per Paul Scharre, these simple tricks were “sufficient to break the algorithm.”

Most of the issues with incorporating AI in complex decision-making processes can be mitigated through smart system design that accounts for the strengths and weaknesses of humans and AI. But this requires an institutional commitment to using AI only where appropriate.

Private Companies Are a Structurally Insufficient Check

The outcome of the Anthropic-Pentagon dispute showcases the current administration’s disinterest in responsible AI practices. Likely for reasons similar to those outlined above, Anthropic informed the Department of Defense that Claude is insufficiently reliable to be used with autonomous weapon systems. Rather than deferring to Anthropic’s expertise, the Pentagon took the ludicrous step of labeling Claude Gov a supply chain risk, in a transparent effort to punish the leading generative AI company for daring to act as a check on irresponsible AI deployment and to deter other companies from doing so.

This public spat is an outlier in part because military contractors such as Anthropic are not prone to downplaying their technologies’ capabilities. All of their incentives point in the other direction, especially as contractors are indemnified from product liability (the costs of defects are instead borne by taxpayers, our warfighters, and foreign civilians). Military contractors produce dependable products because they want to win future lucrative defense contracts. If Anthropic states that its technology is unreliable and is willing to make this dispute public, it’s fair to assume that generative AI is far too unreliable to be used with autonomous weapon systems. This wasn’t civilians trying to tell the military what to do; it was a contractor flagging that this technology wasn’t fit for purpose.

Private companies should not be expected to guard against the inappropriate use of military AI. The fact that Anthropic did so, publicly—and was publicly punished for doing so—is a warning klaxon, not a comfort. It speaks to a broader culture of disinterest in minimizing wartime mistakes.

Scrapped Practices to Minimize Civilian Harm

Responsible military AI integration requires an institutional commitment to responsible warfighting, a commitment that appears to be sorely lacking in today’s Pentagon.

The more information comes to light, the more it seems the elementary school strike was just the kind of mistake the United States was previously working to prevent. Under the Biden administration, the United States took a proactive, evidence-based approach to avoiding these types of mistaken engagements. This began, publicly, when the Department of Defense released the August 2022 Civilian Harm Mitigation and Response Action Plan, which included instructions to use AI tools to minimize risk. Shortly thereafter, the Pentagon published a formal policy, hired approximately 166 people across the department, and established internal institutions to implement practices for reducing civilian harm in war.

This effort was driven in part by a desire to improve military effectiveness. Mistaken engagements are horrific. They are also a huge waste of resources. After studying more than 2,000 incidents, experts found that “half of all civilian harm incidents [in Afghanistan] were caused by misidentification.” Every mistake diverts resources from valid military targets. Fewer misidentifications don’t just save innocent lives and reduce our warfighters’ moral guilt—they improve military effectiveness. Indeed, the experts found that as special operations missions’ rate of civilian harm decreased, the rate of successful missions increased.

Under Defense Secretary Pete Hegseth’s mantra of “lethality, lethality, lethality,” however, these policies were scrapped, the positions eliminated, and the institutions shuttered. A culture of tolerance for civilian casualties flourished.

The dead schoolgirls paid the price.

We Need Accountability for Accidents in War

Arguing for more accountability for accidents in war right now admittedly feels like shouting into the wind. In recent months, the United States has repeatedly violated international and domestic law governing the commencement and conduct of hostilities. Hegseth seems to approve of and advocate for war crimes with shocking regularity. And in the brief period while this piece was being prepared for publication, President Trump arguably committed a war crime by threatening Iran’s “whole civilization” and began a possibly unlawful blockade.

There are undoubtedly many reasons for our leaders’ presumed impunity and the Pentagon’s current disregard for law. Factors that may be enabling this apparent institutional indifference include the (appropriately) high bar for war crimes and the (deplorable) lack of accountability for accidents in war.

I will not tackle the challenges of ensuring that individuals are held liable for their war crimes, except to note that, if no one acts willfully, no one is or should be held criminally liable for wartime acts. And, if there is no internationally wrongful act, states cannot be held responsible under the law of state responsibility. These are appropriate standards, but they create a legal loophole for accidents. Both in formal courts and courts of public opinion, those accused of criminal recklessness can argue, “It’s war. Accidents happen.”

Absent institutional incentives to learn from past mistakes and avoid future ones, preventable accidents will multiply. And neither international nor domestic law provides a remedy for accidental civilian harm. Victims and their advocates often look to international criminal law as a route to redress, but even in cases where individuals are held liable for war crimes or states are held accountable under the law of state responsibility, there is no guarantee that individual civilians will be compensated for their individual harms. Accordingly, I have long argued for creating mechanisms to compensate harmed civilians and to promote greater accountability for wartime accidents.

Such mechanisms would not prevent war crimes or eliminate accidents in war. But they would create a pathway to formal recognition and at least some redress for those who are harmed. And they would create additional institutional incentives to minimize civilian harm, ones which might be more difficult for careless and cavalier leaders to undermine in the future.

We Are All Within the Diameter of the Bomb

The Department of Defense has “leverag[ed] a variety of advanced AI tools” in the Iranian war. Should it be confirmed that AI played a role in the elementary school strike, it will be tempting to blame the technology. This would be a mistake.

Instead, we need to overhaul the targeting decision-making process. If anything, the integration of AI, with its attendant benefits and risks, makes a focus on the process design all the more critical. Thoughtful and responsible process design can both reduce accidental engagements and improve military effectiveness.

Americans cannot undo what was done in our name. We can take responsibility for it, we can hold our leaders accountable for their reckless choices, and we can take steps to prevent future tragic accidents. We can learn from studies of civilian harm how to reduce common sources of battlefield mistakes. We can learn from high-risk industries how to effectively integrate AI into human/machine decision-making processes. And we can develop legal structures that incentivize taking greater care in targeting, regardless of who is in charge.

There will always be accidents in war. As Hegseth observed, “War is hell” and “bad things can happen.” But this is cause for sorrow and proactive care, not an excuse for failures to take basic precautions.


Rebecca Crootof is an Assistant Professor of Law at the University of Richmond School of Law. Dr. Crootof's primary areas of research include technology law, international law, and torts; her written work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberspace, robotics, and the Internet of Things. Work available at www.crootof.com.
