
Military AI as ‘Abnormal’ Technology

Scott Sullivan
Thursday, March 12, 2026, 10:03 AM

AI may be a “normal” technology in the boardroom. In the military, where costs are externalized and secrecy is the default, it’s anything but.

Lance Corporal David Fierro launches the Dragon Eye Unmanned Aerial Vehicle, a small computer-guided plane that provides real-time video of the terrain below it. (Photo: USMC/DVIDS, https://tinyurl.com/mr4akj5e; Public Domain).

On Feb. 27, the Pentagon labeled Anthropic a national security risk over usage restrictions the company imposed on its military contract. But less than 24 hours later, the U.S. military reportedly used Anthropic’s frontier artificial intelligence (AI) model, Claude, in initiating its operations in Iran. The juxtaposition is not merely ironic—it is analytically instructive. Both the magnitude of Claude’s integration into military operations and Anthropic’s concern regarding boundaries on the expansion of AI integration for military purposes are part of the same story: The diffusion of AI in the military is not following the patterns of the private sector.

Commentators like Arvind Narayanan and Sayash Kapoor are leading advocates for understanding “AI as a Normal Technology,” akin to other general-purpose technologies such as electricity or the internet. They hold that AI’s diffusion will be gradual and uneven, shaped by the same societal and industry dynamics that have governed every prior general-purpose technology: institutional inertia, risk aversion, regulatory friction, and the high costs of integrating novel systems into established workflows.

But as Narayanan and Kapoor explicitly recognize, military AI is an exception possessing “unique dynamics that require a deeper analysis.” While most sectors are adopting AI incrementally—held back by legal risk, institutional inertia, or economic caution—the military domain is charging ahead around the world. Israel has put AI systems front and center in its conflict against Hamas. AI-enabled weapons are proliferating in Sudan and elsewhere in Africa. The United States, China, Iran, Saudi Arabia, and the United Arab Emirates are racing not only to integrate military AI capabilities widely, but to push the boundaries of the technology itself. AI may be marching slowly through civilian institutions, but on the battlefield, it’s in a full sprint.

The difference is not merely one of speed. Military institutions operate within incentive systems and governance environments that weaken—or in some cases invert—the feedback mechanisms that ordinarily restrain technological deployment. Strategic competition rewards early integration under uncertainty, costs of experimentation are frequently externalized, and operational secrecy limits opportunities for external scrutiny. These dynamics suggest that military AI is better understood not as a normal technology, but as an abnormal one. Drawing on prominent use cases, this article examines each of these structural features to illustrate why military AI requires a distinct approach to governance.

Differing Incentives

Many industries have few incentives to deeply integrate AI into current operations. Commercial entities with a healthy bottom line are generally hesitant to deviate from proven systems and processes. This reluctance is amplified where the costs of failure are high.

Health care exemplifies the phenomenon. There may be no industry more mature in developing AI tools, yet adoption is obstructed by risk-averse practitioners who prefer the “tried and true” to technological tools they perceive as untested and opaque. These commercial and professional incentives are often reinforced by law and policy. Corporate legal regimes hold executives accountable for decisions that fail to serve shareholder interests, while medical professionals face liability for harms traced to the use of AI outputs that are not already embedded in standard practice.

However, the incentive structure surrounding military incorporation of technology is quite different. The strategic logic of warfare rewards even marginal operational advantages, especially when derived from technologies that increase speed, precision, or decision-making superiority.

Moreover, military innovation is often embedded in a competitive arms-race logic. Military commanders often remind us that “the enemy gets a vote”—meaning that an operation is not successful in a vacuum, but hinges on how well it assesses an adversary’s capabilities and anticipates its response.

At the strategic level, this sentiment incentivizes militaries to prepare for worst-case scenarios in the development and fielding of new technologies. Technological uncertainties surrounding the limits and advantages of AI amplify this dynamic and place the speed of integration at a premium. This compressed timeline is further facilitated by defense acquisition systems that allow for fast-tracked or classified procurement, experimental deployments under “urgent needs” authorizations, and a permissive environment for dual-use technology.

The Pentagon’s AI Acceleration Strategy crystallizes these dynamics. The strategy explicitly frames AI integration as a “race” in which “speed wins,” directing the department to “accept that the risks of not moving fast enough outweigh the risks of imperfect alignment.” The strategy demands that exercises and experiments failing to “meaningfully incorporate AI and autonomous capabilities” be reviewed for “resourcing adjustment,” effectively penalizing units that do not adopt AI at a sufficient pace. This top-down mandate to accelerate has no analog in the private sector, where large numbers of firms retain independent discretion over the timing and scope of technology adoption.

Externalized Costs

In civilian applications, the costs associated with integrating “normal” technologies are generally felt internally, whether as a matter of practical reality or legal design. Commercial investments in technology come from limited funds and at the expense of other ventures. Early AI enthusiasts in the commercial world discovered that rushed deployment of fledgling AI tools can impose significant reputational harm or legal liability on their firms. Companies that have utilized AI in employment screening, customer service, and content creation have all faced significant costs due to tools that failed either to meet customer needs or to comply with legal requirements.

In the military context, these error costs are largely externalized. History is littered with costly weapons projects that either fail to reach their potential or never make it to the battlefield. The consequences of such failures are borne not by the institutions that procure them, but by the taxpayers who fund them.

For wealthier states, the ready availability of resources, combined with the desire to gain or preserve military advantage, enables governments to invest in AI along parallel tracks of development and deployment, with the understanding that much of that investment will ultimately fail to yield fruit. The deployment of GenAI.mil illustrates this dynamic. In mid-2025, the Chief Digital and Artificial Intelligence Office (CDAO) executed separate contracts worth up to $200 million each with Google, OpenAI, xAI, and Anthropic, investing simultaneously in competing platforms rather than selecting a single provider. The size and diversity of this portfolio insulate the military from any single failure while demonstrating a willingness to absorb financial losses that no business would countenance.

That no government appears eager to entrust analogous AI platforms with distributing civil benefits underscores that, when the potential costs of AI are internalized, even the highest-performing AI systems are often viewed as untenable gambles.

But externalized costs are not limited to financial waste. The performance of Israeli AI tools in Gaza demonstrates that even “successful” AI products externalize their most consequential costs to civilians on the battlefield.

Due to the introduction of AI tools like Lavender and Habsora (the “Gospel”), the Israel Defense Forces (IDF) were able to expand the number of targets identified in Gaza from 50 per year to 100 per day. The actual performance of these systems is unknown and hotly debated. Even assuming that every single identified target was, indeed, a lawful target, the structure of international humanitarian law means that dramatically expanding the target set inexorably expands the scope of permissible civilian harm. This is because each newly identified lawful target, an assessment resting on a comparatively modest evidentiary threshold, creates a new strike opportunity in which incidental civilian damage may be deemed proportionate. Because proportionality is assessed strike by strike rather than cumulatively, AI-enabled target proliferation can significantly increase total civilian casualties without ever violating the governing legal standard.

Ukraine offers an additional empirical window into this dynamic. The “Test in Ukraine” program, launched by government-backed defense technology accelerator Brave1, explicitly converts an active battlefield into a product-development environment for foreign defense firms. Companies receive real-time performance data and iterative feedback from combat conditions—benefits that would ordinarily require decades of simulated testing. The costs of any failures, however, are borne by Ukrainian soldiers and civilians operating in the testing environment. The program’s appeal to foreign manufacturers lies precisely in the externalization of risk: As the head of Brave1 explained, “In Ukraine, everything happens much faster: there’s no need to wait months for testing permits, and feedback from technical and military experts comes almost instantly.” The speed is a direct function of the absence of the regulatory safeguards that would ordinarily accompany the deployment of untested weapons systems. In the civilian world, such a program would be unthinkable. In the military domain, it is a selling point.

Epistemic Opacity and the Problem of Oversight

Naturally, the dramatic expansion of targets enabled by Israel’s AI platforms, especially when coupled with the errors inevitable in armed conflict, has prompted commentators to question the validity of Lavender’s and Habsora’s outputs. But it is impossible for outside observers to determine the reality.

In the civilian realm, the shortcomings of AI models are often quickly revealed. When a chatbot fabricates legal citations, the judge notices. When an AI hiring tool discriminates, plaintiffs sue. “Normal” technologies are subject to external validation and retrospective correction. In contrast, the development, deployment, and use of military AI is obscured by layers of opacity imposed by operational realities and legal doctrines.

Operationally, gauging the accuracy and reliability of high-stakes military decisions is difficult, whether they’re made by humans or machines. Part of the problem is practical, as access to the ground truth is, at best, fundamentally incomplete in most operational settings. Intelligence assessments and battlefield reporting unfold under conditions of ambiguity, and in that fog, human cognition tends to favor coherence over uncertainty. Analysts and commanders are as vulnerable as anyone to confirmation bias, the inclination to see what one expects or wants to see. According to military researchers, approximately half of all civilian harm incidents between 2007 and 2012 were a product of misidentification, with confirmation bias representing a recurring structural challenge. Layered on top of that are institutional and political pressures—both implicit and explicit—that reward narratives of success. In short, there are powerful incentives to classify a decision, operation, or assessment as “correct,” not necessarily because it was, but because acknowledging ambiguity or error is disfavored.

Legal doctrines further insulate military actions, particularly those involving emerging technologies, from external assessment and validation. Classification regimes restrict access to the very data needed for rigorous, independent evaluation of AI systems. Even when failures occur in a manner that would typically invite exposure and liability, doctrines such as the state secrets privilege and limitations on governmental liability can shield these technologies from outside scrutiny. Moreover, legal standards for accountability in national security contexts tend to defer heavily to executive assessments, which are themselves shaped by institutional incentives to portray emerging technologies as both effective and compliant.

The AI Acceleration Strategy demonstrates some of this structural opacity. The strategy includes classified annexes, provided “by separate cover,” governing “special initiatives” that are exempt from public disclosure. Taken together with the strategy’s mandate for a “wartime approach to blockers,” the emerging institutional posture is one in which the speed of AI integration is explicitly prioritized over procedural safeguards that might enable greater visibility.

Collectively, these dynamics produce intentional structural obscurity: Decisions about which AI systems to procure, how they perform, and what failure modes they exhibit are shielded from both public and scholarly scrutiny.

Governing the Abnormality of Military AI

Understanding artificial intelligence as a “normal” technology may provide an apt heuristic for thinking about its integration into commercial and civic life. But in the military domain, AI resists normalization. The divergence is not merely one of pace. Military AI operates within institutional environments where the feedback mechanisms that ordinarily slow technological adoption—market accountability, liability exposure, regulatory scrutiny, and external validation—are structurally weakened, redirected, or inverted. These dynamics accelerate integration while simultaneously obscuring performance, externalizing risk, and compressing opportunities for meaningful oversight.

This abnormality has significant implications for governance. Many contemporary policy debates assume that frameworks emerging from civilian AI regulation—risk management regimes, transparency obligations, or voluntary ethical commitments—can be extended into the military context with minimal modification. Yet the structural features examined here suggest that such transposition may be insufficient. Governance tools designed for environments characterized by slow diffusion, external review, and internalized costs may fail when applied to systems developed under strategic competition, operational secrecy, and institutional pressure to privilege speed over deliberation.

At present, even modest international efforts to legally constrain military AI appear dead on arrival, while domestic regulatory frameworks show little appetite for constraining national military initiatives amid accelerating geopolitical competition. This sluggishness risks producing a widening gap between AI adoption and governance capacities.

History suggests that legal systems are capable of adaptation under conditions of technological disruption. The emergence of nuclear weapons prompted the development of nonproliferation regimes that reshaped arms control beyond traditional use-based restrictions. Likewise, earlier claims that international law was ill-suited for new forms of conflict—including operations against non-state actors—ultimately gave way to doctrinal evolution and reinterpretation. Military AI may demand a similar shift: not simply extending existing frameworks, but reimagining how accountability, precaution, and oversight operate when decision-making processes become partially opaque and temporally compressed.

If military AI is an abnormal technology—defined by accelerated incentives, externalized costs, and epistemic opacity—then governance cannot simply mirror civilian models. It must instead anticipate structural pressures that erode traditional safeguards. This may require earlier-stage legal interventions than we’ve seen so far, anchored in the proposition that law of armed conflict rules do not merely regulate outputs at the point of strike, but, at least in high-stakes operations, necessitate design and development choices to reasonably ensure that the deployment of AI-enabled systems will be lawful by default.

But upstream legal design constraints are only half the solution. Because abnormality is institutional as much as technical, they must be reinforced by strengthened ex ante review mechanisms and by governance structures capable of operating under conditions of secrecy without abandoning meaningful accountability. The challenge is not merely to regulate the outputs of AI-enabled systems, but to confront the institutional environments that shape how and why they are adopted.

The Pentagon has already deployed frontier AI tools to millions of personnel. Ukraine has turned its frontlines into a multinational proving ground for AI-enabled weaponry. Other states are poised to follow similar trajectories. Whether institutions can evolve quickly enough to address this abnormality remains uncertain. What is clear is that the future of military AI will not be determined solely by technical capability, but by whether legal and institutional frameworks can adapt to a domain where the tempo of machine-enabled warfare increasingly outpaces the mechanisms designed to constrain it.


Scott Sullivan is a professor of law at the U.S. Military Academy at West Point and the Army Cyber Institute. He serves as a drafter and the primary technology advisor for the Manual on International Law Applicable to Artificial Intelligence in Warfare. This article represents the opinion of the author alone, and not that of the U.S. Military Academy, the U.S. government, or any of its departments or agencies.