Artificial Intelligence and the Resort to Force
A recent post on the New York Times’s At War blog begins with this hypothetical scenario:
It’s a freezing, snowy day on the border between Estonia and Russia. Soldiers from the two nations are on routine border patrol, each side accompanied by an autonomous weapon system, a tracked robot armed with a machine gun and an optical system that can identify threats, like people or vehicles. As the patrols converge on uneven ground, an Estonian soldier trips and accidentally discharges his assault rifle. The Russian robot records the gunshots and instantaneously determines the appropriate response to what it interprets as an attack. In less than a second, both the Estonian and Russian robots, commanded by algorithms, turn their weapons on the human targets and fire. When the shooting stops, a dozen dead or injured soldiers lie scattered around their companion machines, leaving both nations to sift through the wreckage — or blame the other side for the attack.
Although the Times post is largely about lethal autonomous weapons systems and where their development and use stand in ongoing armed conflicts, this introductory paragraph actually tees up a different question, one that involves a state's initial resort to force. Will we soon be in a place where autonomous systems lead states into armed conflict in the first place? How will AI change the way states make decisions about when to resort to force under international law? Will the use of AI improve or worsen those decisions? What should states take into account when determining how to use AI to conduct their jus ad bellum analyses?
Noam Lubell, Daragh Murray, and I have just published an article that begins to consider these questions. Much has been written about the development of AI and machine learning in other areas of the law, including criminal justice, self-driving cars, and administrative decision-making, but this is, we think, the first project to consider the role of AI in the resort to force.
Here is the article's abstract:
Big data technology and machine learning techniques play a growing role across all areas of modern society. Machine learning provides the ability to predict likely future outcomes, to calculate risks between competing choices, to make sense of vast amounts of data at speed, and to draw insights from data that would be otherwise invisible to human analysts.
Despite the significant attention given to machine learning generally in academic writing and public discourse, however, there has been little analysis of how it may affect war-making decisions, and even less analysis from an international law perspective. The advantages that flow from machine learning algorithms mean that it is inevitable that governments will begin to employ them to help officials decide whether, when, and how to resort to force internationally. In some cases, these algorithms may lead to more accurate and defensible uses of force than we see today; in other cases, states may intentionally abuse these algorithms to engage in acts of aggression, or unintentionally misuse algorithms in ways that lead them to make inferior decisions relating to force.
This essay’s goal is to draw attention to current and near future developments that may have profound implications for international law, and to present a blueprint for the necessary analysis. More specifically, this article seeks to identify the most likely ways in which states will begin to employ machine learning algorithms to guide their decisions about when and how to use force, to identify legal challenges raised by use of force-related algorithms, and to recommend prophylactic measures for states as they begin to employ these tools.