
John C. Dehn of the United States Military Academy at West Point writes in with the following comments on the exchange between Human Rights Watch, Matt and Ken, Tom Malinowski, and me:
Lawfare’s discussion of the Human Rights Watch report, "Losing Humanity: The Case Against Killer Robots," has been interesting, to say the least. It is difficult to add much to what Matt Waxman, Ken Anderson, Ben Wittes and Tom Malinowski—not to mention the report’s authors—have already written. I want to emphasize the importance of clarifying a single aspect of the report: what the authors intended to include in the term “fully autonomous weapon,” or perhaps the impermissible targets of such weapons. As has been mentioned, the report recommends that all states: “Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.” (emphasis mine) Unfortunately, the report does not clearly define or explain the “fully autonomous weapons” to be prohibited, and the discussion thus far emphasizes this lack of clarity.

As Matt and Ken pointed out, and Ben reiterated, there is a wide spectrum of relative weapon autonomy, as well as of potential targets. On one end might be a weapon that calculates and automatically adjusts the elevation of a gun barrel to account for the distance to the target and the ballistics of the round being fired—a system long in use on U.S. tanks. Here, the operator selects the target and the weapon automatically helps him aim at it. On the other end would be those weapons that independently select or identify a target, a person or thing, as the proper object of an attack and then engage it. The report might be discussing only those weapons on the most autonomous end of the spectrum, at one point referring to “fully autonomous weapons that could select and engage targets without human intervention” and at another to “a robot [that] could identify a target and launch an attack on its own power.” Somewhat confusingly, though, the report includes three types of “unmanned weapons” in its definition of “robot” or “robotic weapon”: human-in-the-loop, human-on-the-loop, and human-out-of-the-loop. (p. 2) The report thus potentially generates confusion about the precise level of autonomy its authors intended to target (pun intended), though human-(totally-)out-of-the-loop weapons are the obvious candidate.

Even assuming the report clearly intends “fully autonomous weapons” to include only weapons that independently identify/select and then engage targets, the discussion here (particularly between Ben and Tom) demonstrates that this definition of the term is not without its problems. These problems include: (1) what types of targets should be cause for concern (humans, machines, buildings, infrastructure such as roads and bridges, or munitions such as rockets and artillery or mortar rounds); and (2) what is meant by target “selection” or “identification.”

Turning first to the types of targets that should be subject to the proposed ban on fully autonomous weapons, I agree with Ben that the report is less than precise. At many if not most points, its language seems to disfavor any weapon that independently identifies a specific person or thing as targetable. In my view, Ben appropriately questions Tom’s attempt to limit the focus of the report to weapons that kill humans. At any rate, it is certainly the case that a weapon’s ability to accurately identify/select a target without human assistance depends upon the nature of the target itself.
Targetable persons in armed conflict include most (but not all) members of an opposing conventional armed force, as well as members of an unconventional enemy organized armed group who have a continuous combat function. Targetable persons also include any civilians who take a direct part in hostilities, but only for such time as they do so. For the reasons articulated in the report and more, it would be difficult for a weapon to independently sort persons into targetable and non-targetable categories on any battlefield, conventional or unconventional, but more so the latter. But this does not mean that a weapon could never do so.

Targetable things are called military objectives and include objects which “by their nature, location, purpose or use make an effective contribution to the enemy’s military action and whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage.” Obviously, things that are military objectives by their intrinsic nature or purpose are more readily identified by observable/detectable characteristics than those that become military objectives by their location or use, which require contextual analysis. Munitions (particularly incoming munitions) and tanks are more easily identified as proper targets than, say, a white pickup truck, a building or a bridge. To provide an example, technology already permits the detection and precise identification of certain known fixed- and rotary-wing military aircraft in myriad indirect ways. Fully autonomous detection and attack of any such enemy aircraft in flight would rarely raise significant distinction or proportionality concerns.

For these reasons, I believe that the report should have more carefully differentiated among potential targets before recommending a categorical ban of (seemingly all) fully autonomous weapons. By focusing primarily on the most difficult case, identifying targetable persons in an unconventional war against an unconventional force, the report’s authors generalize a proposition that requires a much more particularized analysis, as Ben has noted. If the report intends to ban only fully autonomous weapons that would target humans, as Tom suggested, it should much more clearly say so.

Similar precision is required regarding what is meant by fully autonomous target identification/selection. The report’s authors appear to presume a fully autonomous weapon roaming a battlefield in which everything is ambiguous and targets must be selected or identified based upon mostly objective but highly contextual criteria. Ben countered with hypothetical examples, two of which emphasize the report’s problematic synonymous use of “selection” and “identification.” He suggested that a weapon might eventually be able to independently identify members of a (more or less conventional) armed force by detecting their uniform or insignia. He also hypothesized an autonomous, very discrete weapon that might be used to target a specific individual. Implicitly (and explicitly to me in an email exchange), he suggested that the latter weapon might use face recognition to identify its target and determine when to attack based upon “split-second calculations about the target’s distance from the civilians in relation to the weapon’s blast radius.” These two hypothetical examples involve target “identification” based upon reasonably definable and detectable criteria, similar to the enemy aircraft example.
The weapon would be programmed to detect specific features of an individual or group and to attack the person or persons possessing them. In these cases, though, the specific individual or group to be targeted is “selected” by the human who defines their detectable features. A human determines the characteristics that make a person or group member targetable in the context of a specific armed conflict (members of the enemy force, or a specific individual identified as having a continuous combat function). Such targeting would therefore not require the weapon to interpret important matters of context. Depending upon the believed accuracy of both the definition of the requisite features and a weapon’s ability to detect them, I seriously doubt that the military would object to such a weapon.

The main concern of the report seems to be the inability of fully autonomous weapons to detect and interpret context in the “selection” of human targets. In other words, when a target must be selected based upon its actions rather than (or in addition to) its detectable features, the report posits that human judgment is preferable and morally necessary. This is certainly a point of view that I suspect would resonate with military leaders. Those of us who have spent many years training soldiers on what constitutes “hostile intent” or a “hostile act” justifying the proportionate use of responsive force are familiar with the endless “what ifs” that accompany any hypothetical example chosen. Ultimately, we tell soldiers to use their best “judgment” in the face of potentially infinite variables. This seems to me a particularly human endeavor. While artificial intelligence can deal with an extremely large set of variables with amazing speed and accuracy, it may never be possible to program a weapon to detect and analyze the limitless minutiae of human behavior that may be relevant to an objective analysis of whether a use of force is justified or excusable as a moral and legal matter.

Ultimately, it seems, one’s view of the morality and legality of “fully autonomous weapons” depends very much upon what function(s) one believes those weapons will perform. Without precision as to those functions, however, it is hard to have a meaningful discussion. In any case, I fully agree with Ben that existing international humanitarian law and domestic policy adequately deal with potentially indiscriminate weapons, rendering the report's indiscriminate recommendation unnecessary.

Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.
